-I downloaded the GStreamer sample tutorials from this link:

http://cgit.freedesktop.org/~slomo/gst-sdk-tutorials/

git://people.freedesktop.org/~slomo/gst-sdk-tutorials

  • I modified the following code in Tutorial 3:

    -(void) app_function
    {
        GstBus *bus;
        GSource *bus_source;
        GError *error = NULL;

        GST_DEBUG ("Creating pipeline");

        pipeline = gst_pipeline_new ("e-pipeline");

        /* Create our own GLib Main Context and make it the default one */
        context = g_main_context_new ();
        g_main_context_push_thread_default (context);

        /* Build pipeline */
        // pipeline = gst_parse_launch("videotestsrc ! warptv ! videoconvert ! autovideosink", &error);

        source = gst_element_factory_make ("udpsrc", "source");
        g_object_set (G_OBJECT (source), "port", 8001, NULL);

        /* Caps describing the incoming RTP/H264 stream */
        GstCaps *caps;
        caps = gst_caps_new_simple ("application/x-rtp",
                                    "encoding-name", G_TYPE_STRING, "H264",
                                    "payload", G_TYPE_INT, 96,
                                    "clock-rate", G_TYPE_INT, 90000,
                                    NULL);
        g_object_set (source, "caps", caps, NULL);
        gst_caps_unref (caps); /* the element keeps its own reference */

        rtp264depay = gst_element_factory_make ("rtph264depay", "rtph264depay");
        h264parse   = gst_element_factory_make ("h264parse", "h264parse");
        vtdec       = gst_element_factory_make ("vtdec", "vtdec");
        glimagesink = gst_element_factory_make ("glimagesink", "glimagesink");

        if (!source || !rtp264depay || !h264parse || !vtdec || !glimagesink) {
            GST_ERROR ("Not all pipeline elements could be created");
            return;
        }

        gst_bin_add_many (GST_BIN (pipeline), source, rtp264depay, h264parse,
                          vtdec, glimagesink, NULL);

        /* Link the elements in order; this step was missing from the snippet as
           originally posted, and without it the pipeline cannot carry data */
        if (!gst_element_link_many (source, rtp264depay, h264parse, vtdec,
                                    glimagesink, NULL)) {
            GST_ERROR ("Pipeline elements could not be linked");
            return;
        }

        /* Only relevant when the gst_parse_launch() line above is used */
        if (error) {
            gchar *message = g_strdup_printf ("Unable to build pipeline: %s", error->message);
            g_clear_error (&error);
            [self setUIMessage:message];
            g_free (message);
            return;
        }

        /* Set the pipeline to READY, so it can already accept a window handle */
        gst_element_set_state (pipeline, GST_STATE_READY);

        video_sink = gst_bin_get_by_interface (GST_BIN (pipeline), GST_TYPE_VIDEO_OVERLAY);
        if (!video_sink) {
            GST_ERROR ("Could not retrieve video sink");
            return;
        }
        gst_video_overlay_set_window_handle (GST_VIDEO_OVERLAY (video_sink), (guintptr) (id) ui_video_view);

        /* Instruct the bus to emit signals for each received message, and connect to the interesting signals */
        bus = gst_element_get_bus (pipeline);
        bus_source = gst_bus_create_watch (bus);
        g_source_set_callback (bus_source, (GSourceFunc) gst_bus_async_signal_func, NULL, NULL);
        g_source_attach (bus_source, context);
        g_source_unref (bus_source);
        g_signal_connect (G_OBJECT (bus), "message::error", (GCallback)error_cb, (__bridge void *)self);
        g_signal_connect (G_OBJECT (bus), "message::state-changed", (GCallback)state_changed_cb, (__bridge void *)self);
        gst_object_unref (bus);

        /* Create a GLib Main Loop and set it to run */
        GST_DEBUG ("Entering main loop...");
        main_loop = g_main_loop_new (context, FALSE);
        [self check_initialization_complete];
        g_main_loop_run (main_loop);
        GST_DEBUG ("Exited main loop");
        g_main_loop_unref (main_loop);
        main_loop = NULL;

        /* Free resources */
        g_main_context_pop_thread_default (context);
        g_main_context_unref (context);
        gst_element_set_state (pipeline, GST_STATE_NULL);
        gst_object_unref (pipeline);

        return;
    }

-When I run the application on the iPad, it starts playing.

  • When I send the app to the background and then return to the foreground, the GStreamer stream no longer updates in the UI, but in Xcode's network-usage gauge I can still see packets being received... :(
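I would expect something along these lines to pause and resume the pipeline around backgrounding (just a sketch on my side, assuming the pipeline variable from app_function above is reachable; the hooks are the standard UIApplicationDelegate callbacks):

    /* Sketch: pause the pipeline when backgrounding, resume on foreground.
       Call these from applicationDidEnterBackground: and
       applicationWillEnterForeground: in the app delegate. */
    void backend_enter_background (void)
    {
        if (pipeline)
            gst_element_set_state (pipeline, GST_STATE_PAUSED);
    }

    void backend_enter_foreground (void)
    {
        if (pipeline)
            gst_element_set_state (pipeline, GST_STATE_PLAYING);
    }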

Thanks in advance, iOS geeks....


There are 2 answers

rsacchettini (best answer)

Update: getting UDP to work.

After further investigation I got UDP H264 streaming to work on Linux (x86 PC), and the principle should be the same on iOS (specifically, avdec_h264, used on the PC, has to be replaced by vtdec).

Key differences between the TCP and UDP pipelines:

Server side:

  • IP: The first thing that confused me between the UDP and TCP server sides: on the UDP server, the IP address specified on the udpsink element is the client-side IP, i.e. gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$CLIENTIP port=5000

On the TCP server side, by contrast, the IP is the server's own address (the host parameter on tcpserversink), i.e. gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$SERVERIP port=5000

  • Video stream payload/format: In order for the client to be able to detect the format and size of the frames, the TCP server side uses gdppay, a payloader element, in its pipeline. The client side uses the opposite element, the gdpdepay de-payloader, to read the received frames, i.e.

gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$SERVERIP port=5000

The UDP server side does not use the gdppay element; instead it is up to the client to set caps on its udpsrc element (see the client-side differences below).

Client side

  • IP: The UDP client does NOT need any IP specified, while the TCP client needs the server IP (the host parameter on tcpclientsrc), i.e. gst-launch-1.0 -v tcpclientsrc host=$SERVERIP port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false
  • Video stream payload/format: as mentioned in the previous paragraph, the TCP server side uses the gdppay payloader while the client side uses a de-payloader to recognize the format and size of the frames.

The UDP client instead has to specify the format explicitly, using caps on its udpsrc element, i.e. CAPS='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96'

gst-launch-1.0 -v udpsrc port=5000 caps=$CAPS ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false
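If you build the client pipeline programmatically (as in the question above), the same caps string can be set on the udpsrc element in code. A minimal sketch, assuming source was created with gst_element_factory_make("udpsrc", ...):

    /* Set the RTP caps on udpsrc in code; the caps string mirrors $CAPS above */
    GstCaps *caps = gst_caps_from_string (
        "application/x-rtp, media=(string)video, clock-rate=(int)90000, "
        "encoding-name=(string)H264, payload=(int)96");
    g_object_set (source, "caps", caps, NULL);
    gst_caps_unref (caps); /* the element keeps its own reference */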

How to find the caps to specify: it is a bit hacky, but it works. Run your UDP server with the verbose option -v, i.e. gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$CLIENTIP port=5000

You'll get the following log:

    Setting pipeline to PAUSED ...
    Pipeline is PREROLLING ...
    /GstPipeline:pipeline0/GstH264Parse:h264parse0.GstPad:src: caps = video/x-h264, width=(int)1280, height=(int)720, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, codec_data=(buffer)01640028ffe1000e27640028ac2b402802dd00f1226a01000428ee1f2c
    /GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0.GstPad:src: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)"J2QAKKwrQCgC3QDxImo\=\,KO4fLA\=\=", payload=(int)96, ssrc=(uint)3473549335, timestamp-offset=(uint)257034921, seqnum-offset=(uint)12956
    /GstPipeline:pipeline0/GstUDPSink:udpsink0.GstPad:sink: caps = application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, sprop-parameter-sets=(string)"J2QAKKwrQCgC3QDxImo\=\,KO4fLA\=\=", payload=(int)96, ssrc=(uint)3473549335, timestamp-offset=(uint)257034921, seqnum-offset=(uint)12956
    /GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0.GstPad:sink: caps = video/x-h264, width=(int)1280, height=(int)720, parsed=(boolean)true, stream-format=(string)avc, alignment=(string)au, codec_data=(buffer)01640028ffe1000e27640028ac2b402802dd00f1226a01000428ee1f2c
    /GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0: timestamp = 257034921
    /GstPipeline:pipeline0/GstRtpH264Pay:rtph264pay0: seqnum = 12956
    Pipeline is PREROLLED ...
    Setting pipeline to PLAYING ...

Now copy the caps starting with caps = application/x-rtp. This is the one specifying the RTP stream format and, as far as I know, the one that is really mandatory for the UDP client to recognize the RTP stream content and start playing.

To wrap it up and avoid confusion, complete command-line examples are below, using raspivid on a Raspberry Pi, if you want to try it (on Linux).

UDP

  • Server: raspivid -t 0 -w 1280 -h 720 -fps 25 -b 2500000 -o - | gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! udpsink host=$CLIENTIP port=5000
  • Client: CAPS='application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96' gst-launch-1.0 -v udpsrc port=5000 caps=$CAPS ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false

TCP

  • Server: raspivid -t 0 -w 1280 -h 720 -fps 25 -b 2500000 -o - | gst-launch-0.10 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=$SERVERIP port=5000

  • Client: gst-launch-1.0 -v tcpclientsrc host=$SERVERIP port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false enable-last-buffer=false

Note: raspivid could easily be replaced by a simple H264 file using cat, i.e. cat myfile.h264 | gst-launch...

rsacchettini

I recently tried to get live streaming working from a Raspberry Pi to iOS 8 with hardware H264 decoding, using the Apple VideoToolbox API through the "vtdec" GStreamer plugin.

I looked at many tutorials, notably the one from braincorp (https://github.com/braincorp/gstreamer_ios_tutorial)

and Sebastian Dröge: http://cgit.freedesktop.org/~slomo/gst-sdk-tutorials/

I got the latter to work, with Tutorial 3 modified:

  • Server pipeline on the Raspberry Pi, using the Pi camera and raspivid + GStreamer:
    raspivid -t 0 -w 1280 -h 720 -fps 25 -b 2500000 -p 0,0,640,480 -o - | gst-launch-0.10 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96 ! gdppay ! tcpserversink host=ServerIPRaspberryPi port=any_port_on_Rpi

  • Client-side pipeline on the iOS 8 device:
    tcpclientsrc host=ServerIPRaspberryPi port=any_port_on_Rpi ! gdpdepay ! rtph264depay ! h264parse ! vtdec ! glimagesink

or the same with autovideosink instead of glimagesink.

This solution works, and several clients can be used simultaneously. I tried getting udpsink to work instead of tcpserversink, but no luck so far; it never worked.

===IMPORTANT===
Also, the factory approach using gst_element_factory_make() + gst_bin_add_many(GST_BIN(pipeline), ...) never worked. Instead I used the pipeline = gst_parse_launch(...) method.

So in our case, on the IOS client side:
pipeline = gst_parse_launch("tcpclientsrc host=172.19.20.82 port=5000 ! gdpdepay ! rtph264depay ! h264parse ! vtdec ! autovideosink", &error);
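For completeness, here is roughly how that call slots into the tutorial's app_function with error handling (a sketch; the IP and port are just the example values above):

    /* Sketch: build the whole client pipeline from a launch string.
       gst_parse_launch() reports element-creation and linking problems
       via the GError out-parameter. */
    GError *error = NULL;
    pipeline = gst_parse_launch (
        "tcpclientsrc host=172.19.20.82 port=5000 ! gdpdepay ! "
        "rtph264depay ! h264parse ! vtdec ! autovideosink", &error);
    if (error) {
        g_printerr ("Unable to build pipeline: %s\n", error->message);
        g_clear_error (&error);
        return;
    }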

Possible reason: there is a page documenting the differences, and how to port code from GStreamer 0.10 to 1.0: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/manual/html/chapter-porting-1.0.html

We noted that, while using the "factory method", various pipeline elements were missing depending on whether we were using GStreamer 1.0 or 0.10, e.g. rtph264depay, or avdec_h264 (used on other platforms, i.e. the Linux client side, to decode H264 instead of the iOS-specific vtdec).

We could hardly get all the elements together using the factory method, but we managed with the gst_parse_launch() function without any problems, on both iOS and Linux.
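If you want to check which elements your build actually provides before constructing the pipeline, one option (my suggestion, not something from the original post) is gst_element_factory_find(), which returns NULL when a plugin is absent:

    /* Sketch: verify (after gst_init()) that every element the pipeline
       needs is available; useful for spotting plugins missing from an
       iOS GStreamer build. */
    const gchar *needed[] = { "tcpclientsrc", "gdpdepay", "rtph264depay",
                              "h264parse", "vtdec", "autovideosink", NULL };
    for (int i = 0; needed[i] != NULL; i++) {
        GstElementFactory *f = gst_element_factory_find (needed[i]);
        if (!f)
            g_printerr ("Missing element: %s\n", needed[i]);
        else
            gst_object_unref (f);
    }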

In conclusion: since we haven't got the UDP sink to work, try the TCP way with the tcpclientsrc element first, get it working, and only then find your way to UDP; please let us know if you get to the end.

Best regards, I hope this helps many of you.

Romain S