
Displaying the two MJPEG video streams in the Tkinter application works fine. However, saving the streams to the file system seems more problematic. This is the pipeline I am currently using:

Code: Select all
gst-launch-1.0 -v v4l2src device=/dev/video0 ! image/jpeg,width=1280,height=960,framerate=30/1,format=MJPG ! tee name=t ! queue leaky=1 ! jpegdec ! xvimagesink sync=false t. ! queue ! matroskamux ! queue ! filesink location=testvideo.mkv sync=false

With the current approach, frames are dropped in the recordings, and the resulting file size with MJPEG encoding is far too large even with smaller video resolutions and a reduced framerate. I have tried saving the recordings to different SD cards (UHS-1 and UHS-3) and USB 3 pen drives, but frames are dropped with all of these.

Is there a way to encode (or transcode) these kinds of video streams to a more compressed format (e.g. H.264) and save them to a manageable-sized file while displaying the video? A small additional delay (<= 2 seconds) in the displayed video stream would be acceptable. I have understood that with the RPi camera module it is possible to use GPU hardware encoding, but would that also be possible somehow with these USB cameras? Here is some additional information about the camera formats:

Code: Select all
$ v4l2-ctl --list-formats-ext

Reply:
I just completed a similar task, but I did not use Python, because I am an old C programmer from way back and do not like the syntax of Python; that is my problem, though. You can pull the images from the camera as YUV420 or UYVY422 and encode them to H.264, but depending on the resolution you can still have large I-frame packets in the 175K+ range. I used a circular list to hold pointers to the H.264 packets to be written to disk, and did the writing in another thread. The pointers pointed to a large array of buffers (256 of them), each 384K in size.
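The circular list of packet pointers feeding a disk-writer thread can be sketched roughly as below. This is not the poster's actual code: it uses a fixed-size ring of pointers rather than a literal linked list, and everything except the 256-slot / 384K figures (names, packet sizes, the stand-in capture loop) is an assumption for illustration.

```c
#include <assert.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUF 256            /* number of packet slots, as in the post   */
#define BUFSZ (384 * 1024)  /* 384K per buffer, room for large I-frames */

/* Single-producer/single-consumer ring of encoded-packet pointers.
 * The capture/encode loop pushes; a writer thread pops and writes to
 * disk, so slow storage never stalls the capture loop directly.      */
typedef struct {
    unsigned char  *data[NBUF];
    size_t          len[NBUF];
    int             head, tail;  /* head = next write slot, tail = next read */
    int             done;        /* producer has finished                    */
    pthread_mutex_t mu;
    pthread_cond_t  cv;
} ring;

static void push(ring *r, unsigned char *p, size_t n) {
    pthread_mutex_lock(&r->mu);
    while ((r->head + 1) % NBUF == r->tail)  /* ring full: wait for writer */
        pthread_cond_wait(&r->cv, &r->mu);
    r->data[r->head] = p;
    r->len[r->head]  = n;
    r->head = (r->head + 1) % NBUF;
    pthread_cond_signal(&r->cv);
    pthread_mutex_unlock(&r->mu);
}

static int pop(ring *r, unsigned char **p, size_t *n) {
    pthread_mutex_lock(&r->mu);
    while (r->head == r->tail && !r->done)   /* ring empty: wait for data */
        pthread_cond_wait(&r->cv, &r->mu);
    if (r->head == r->tail) {                /* drained and producer done */
        pthread_mutex_unlock(&r->mu);
        return 0;
    }
    *p = r->data[r->tail];
    *n = r->len[r->tail];
    r->tail = (r->tail + 1) % NBUF;
    pthread_cond_signal(&r->cv);
    pthread_mutex_unlock(&r->mu);
    return 1;
}

static size_t total_written;  /* bytes consumed by the writer thread */

static void *writer(void *arg) {
    ring *r = arg;
    unsigned char *p;
    size_t n;
    while (pop(r, &p, &n)) {  /* real code would fwrite(p, 1, n, fp) here */
        total_written += n;
        free(p);
    }
    return NULL;
}

int main(void) {
    ring r;
    memset(&r, 0, sizeof r);
    pthread_mutex_init(&r.mu, NULL);
    pthread_cond_init(&r.cv, NULL);

    pthread_t tid;
    pthread_create(&tid, NULL, writer, &r);

    /* Stand-in for the capture/encode loop: 1000 fake H.264 packets. */
    size_t expected = 0;
    for (int i = 0; i < 1000; i++) {
        size_t n = (size_t)(i % 100) + 1;
        unsigned char *p = malloc(n);
        memset(p, 0xAB, n);
        push(&r, p, n);
        expected += n;
    }

    pthread_mutex_lock(&r.mu);
    r.done = 1;                 /* wake the writer so it can drain and exit */
    pthread_cond_signal(&r.cv);
    pthread_mutex_unlock(&r.mu);
    pthread_join(tid, NULL);

    assert(total_written == expected);
    printf("wrote 1000 packets, %zu bytes\n", total_written);
    return 0;
}
```

The point of the design is backpressure in one direction only: if the disk stalls, the ring fills and the capture thread blocks briefly, instead of the write blocking frame capture on every packet.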
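On the transcoding question, one possible direction is to keep the display branch of the tee as-is and re-encode to H.264 only on the recording branch. This is a sketch, not a tested pipeline: it assumes a GStreamer H.264 encoder element is available on your image (`v4l2h264enc` exposes the Pi's hardware encoder on recent Raspberry Pi OS builds; `x264enc` is a CPU fallback), and the exact caps your camera accepts may differ.

```shell
# Hypothetical variant of the pipeline above: one tee branch displays,
# the other decodes the MJPEG, H.264-encodes, and muxes to Matroska.
# If v4l2h264enc is unavailable, try x264enc tune=zerolatency instead.
gst-launch-1.0 -v v4l2src device=/dev/video0 \
  ! image/jpeg,width=1280,height=960,framerate=30/1 \
  ! tee name=t \
  t. ! queue leaky=1 ! jpegdec ! xvimagesink sync=false \
  t. ! queue ! jpegdec ! videoconvert \
     ! v4l2h264enc ! h264parse ! matroskamux ! filesink location=testvideo.mkv sync=false
```

Because H.264 output is far smaller than raw MJPEG frames, the filesink branch has much less data to flush per second, which should also help with the dropped frames on slow SD cards.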
