I am trying to add the SimCam 1S AI camera to the VisionCapsules runtime environment.
What I have done so far includes:
1. Add the metadata containing the deep learning results to the SEI section of the IDR frame, then pack it into RTP packets.
2. Use GStreamer pipelines to verify the correctness of the metadata and the video data.
RTSP source --> H.264 file pipeline to verify the metadata:
gst-launch-1.0 -e rtspsrc location=rtsp://SIMCAM:183HX9@192.168.1.107/live ! rtph264depay ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=file.h264
RTSP source --> decode --> MP4 file pipeline to verify the video data:
gst-launch-1.0 -e rtspsrc location=rtsp://SIMCAM:183HX9@192.168.1.107/live ! rtph264depay ! 'video/x-h264, stream-format=(string)byte-stream' ! decodebin ! avenc_mpeg4 ! mp4mux ! filesink location=file.mp4
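For reference, step 1 above (carrying metadata in the SEI section of the IDR frame) can be sketched roughly as follows. This is only an illustration of how a `user_data_unregistered` SEI NAL unit (type 6, payload type 5) is laid out per the H.264 spec; the UUID, the JSON payload format, and the function name are my own assumptions, not SimCam's actual encoding:

```python
import json

# Hypothetical 16-byte UUID identifying the metadata (an assumption,
# not SimCam's real identifier).
METADATA_UUID = bytes.fromhex("53494d43414d2d5345492d4d4554412d")

def build_sei_nal(metadata: dict) -> bytes:
    """Build an Annex B user_data_unregistered SEI NAL unit carrying
    JSON metadata, to be placed before the IDR slice NAL."""
    payload = METADATA_UUID + json.dumps(metadata).encode("utf-8")
    body = bytearray()
    body.append(5)            # payload_type = 5 (user_data_unregistered)
    size = len(payload)       # payload_size, coded 255-at-a-time
    while size >= 255:
        body.append(255)
        size -= 255
    body.append(size)
    body += payload
    body.append(0x80)         # rbsp_trailing_bits

    # Emulation prevention: insert 0x03 after any 0x00 0x00 pair
    # that would otherwise be followed by a byte <= 0x03.
    rbsp = bytearray()
    zeros = 0
    for b in body:
        if zeros >= 2 and b <= 3:
            rbsp.append(3)
            zeros = 0
        rbsp.append(b)
        zeros = zeros + 1 if b == 0 else 0

    # 4-byte start code + NAL header (nal_ref_idc=0, nal_unit_type=6 -> SEI)
    return b"\x00\x00\x00\x01" + bytes([0x06]) + bytes(rbsp)

nal = build_sei_nal({"objects": [{"label": "person", "conf": 0.93}]})
```

With a payload like this, the metadata survives `rtph264depay` unchanged, which is what the first pipeline above checks by dumping the raw byte stream to `file.h264`.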
Does BrainFrame support adding a new appsink, such as a sub_sink?
I plan to write a GStreamer plugin to extract the SEI data from the H.264 video stream.
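The parsing side of that plugin could look roughly like the sketch below. It scans an Annex B byte stream for SEI NAL units (type 6), strips emulation-prevention bytes, and yields the payload of each `user_data_unregistered` (payload type 5) message. This is a simplified standalone sketch; an actual GStreamer plugin would do this per-buffer in its chain function, and the function name is my own:

```python
import re

def extract_sei_payloads(h264: bytes):
    """Yield the raw payload of each user_data_unregistered (type 5)
    SEI message found in an Annex B H.264 byte stream."""
    # Split the stream on 3- or 4-byte start codes.
    for nal in re.split(b"\x00\x00\x00\x01|\x00\x00\x01", h264):
        if not nal or (nal[0] & 0x1F) != 6:   # nal_unit_type 6 = SEI
            continue
        # Undo emulation prevention (00 00 03 -> 00 00).
        rbsp = nal[1:].replace(b"\x00\x00\x03", b"\x00\x00")
        i = 0
        # Walk SEI messages until the rbsp_trailing_bits byte (simplified).
        while i < len(rbsp) and rbsp[i] != 0x80:
            ptype = 0
            while rbsp[i] == 255:             # payload_type, 255-at-a-time
                ptype += 255; i += 1
            ptype += rbsp[i]; i += 1
            psize = 0
            while rbsp[i] == 255:             # payload_size, 255-at-a-time
                psize += 255; i += 1
            psize += rbsp[i]; i += 1
            if ptype == 5:                    # user_data_unregistered
                yield rbsp[i:i + psize]
            i += psize
```

A plugin built around this could push the extracted metadata out on a second pad (or an appsink), while passing the video buffers through untouched.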