On-the-fly Encoding

When recording directly into an audio/video container file, GStreamer is used to encode the audio and video payloads of events and to write the resulting streams into an output file.


This approach only works when the GStreamer audio and video encoders can process all data at the rate at which it arrives.

Recording into a log file as an intermediate step and then encoding the data offline can mitigate this problem.


The rsb-gstreamer project includes the scripts record_ogv.sh and record_mp4.sh, which encode audio and video data into a video container file. Both scripts are invoked as follows:

$ record_{ogv,mp4}.sh OUTPUTFILE VIDEOSCOPE ( AUDIOSCOPE | _ )

_ can be used to indicate that a particular component should not be recorded.
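As an illustration, a wrapper script could interpret the _ placeholder along these lines. This is a hypothetical sketch of the argument handling, not the implementation shipped in record_ogv.sh or record_mp4.sh:

```shell
# Hypothetical sketch of argument handling similar to what the record_*.sh
# scripts perform; "_" disables recording of the corresponding stream.
parse_scopes() {
    video_scope=$1
    audio_scope=$2
    if [ "$audio_scope" = "_" ]; then
        audio_scope=""          # do not record audio
    fi
    echo "video=${video_scope} audio=${audio_scope}"
}

# Record video from /video/camera1 only, with audio disabled:
parse_scopes /video/camera1 _
```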


This example demonstrates encoding audio and video events into a video container file with synchronized audio and video streams.

  1. Start the recording pipeline

    $ RSB_TRANSPORT_SPREAD_ENABLED=1 record_ogv.sh video.ogv /video/camera1 /audio/mic1


    This part of the example assumes a configuration that uses the Spread transport; see Common Environment Variables.

  2. Start the data source (or data sources)


GStreamer Pipeline

The following diagram contains a simplified illustration of a minimal GStreamer pipeline that can be used to generate video files from RSB events containing audio and video data. The vorbisenc, theoraenc and oggmux elements can be replaced with different encoders and container multiplexers as required:

audio source--->/     \--->rsbaudiosrc--->vorbisenc---+
                \     /                               v
                / RSB \                             oggmux--->filesink
                \     /                               ^
video source--->/     \--->rsbvideosrc--->theoraenc---+

The audio and video sources in the diagram above can be arbitrary components that publish suitable audio and video events via RSB.
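The diagram above corresponds to a gst-launch-1.0 style pipeline description. The following sketch assembles such a description string; the "scope" property on the rsbaudiosrc/rsbvideosrc elements and the concrete scopes are assumptions for illustration, not taken from the rsb-gstreamer element documentation:

```shell
# Hypothetical sketch: the pipeline from the diagram as a gst-launch-1.0
# description string. The "scope" property name is an assumption.
VIDEO_BRANCH="rsbvideosrc scope=/video/camera1 ! theoraenc ! mux."
AUDIO_BRANCH="rsbaudiosrc scope=/audio/mic1 ! vorbisenc ! mux."
MUX_BRANCH="oggmux name=mux ! filesink location=video.ogv"
PIPELINE="$VIDEO_BRANCH $AUDIO_BRANCH $MUX_BRANCH"

# Running it would require GStreamer plus the rsb-gstreamer plugins:
#   gst-launch-1.0 $PIPELINE
echo "$PIPELINE"
```

Swapping theoraenc/vorbisenc/oggmux for other encoder and muxer elements changes the container format without altering the pipeline structure.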