How to add an AI camera to BrainFrame

I am trying to add the SimCam 1S AI camera to the VisionCapsules runtime environment.

What I have done so far includes:

  1. Added the metadata containing the deep-learning results to the SEI portion of each IDR frame, then packed it into RTP packets (a parsing sketch follows the pipelines below).

  2. Used GStreamer pipelines to verify the correctness of the metadata and the video data:

RTSP source --> H.264 file pipeline, to verify the metadata:
gst-launch-1.0 -e rtspsrc location=rtsp://SIMCAM:183HX9@192.168.1.107/live ! rtph264depay ! 'video/x-h264, stream-format=(string)byte-stream' ! filesink location=file.h264

RTSP source --> decode --> MP4 file pipeline, to verify the video data:
gst-launch-1.0 -e rtspsrc location=rtsp://SIMCAM:183HX9@192.168.1.107/live ! rtph264depay ! 'video/x-h264, stream-format=(string)byte-stream' ! decodebin ! avenc_mpeg4 ! mp4mux ! filesink location=file.mp4
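
To double-check the dump, the SEI payloads can be pulled back out of file.h264 with a small script. The following is a minimal Python sketch; it assumes Annex-B framing and that the metadata travels as a user_data_unregistered SEI message (payload type 5), and for brevity it does not strip emulation-prevention bytes (00 00 03):

SEI_NAL_TYPE = 6
SEI_PAYLOAD_TYPE = 5  # user_data_unregistered; adjust if the camera differs

def iter_nals(data: bytes):
    """Yield NAL units from an Annex-B byte stream (00 00 01 start codes)."""
    start = data.find(b"\x00\x00\x01")
    while start != -1:
        start += 3
        end = data.find(b"\x00\x00\x01", start)
        yield data[start:len(data) if end == -1 else end]
        start = end

def iter_sei_payloads(nal: bytes):
    """Yield (payload_type, payload) pairs from a single SEI NAL unit."""
    pos = 1  # skip the NAL unit header byte
    while pos < len(nal) and nal[pos] != 0x80:  # 0x80 = rbsp trailing bits
        ptype = 0
        while nal[pos] == 0xFF:  # payload type is 0xFF-extended
            ptype += 255
            pos += 1
        ptype += nal[pos]
        pos += 1
        size = 0
        while nal[pos] == 0xFF:  # payload size is 0xFF-extended
            size += 255
            pos += 1
        size += nal[pos]
        pos += 1
        yield ptype, nal[pos:pos + size]
        pos += size

with open("file.h264", "rb") as f:
    stream = f.read()

for nal in iter_nals(stream):
    if nal and nal[0] & 0x1F == SEI_NAL_TYPE:
        for ptype, payload in iter_sei_payloads(nal):
            if ptype == SEI_PAYLOAD_TYPE:
                # First 16 bytes are the UUID, the rest is the user data
                print(payload[:16].hex(), payload[16:])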

Does BrainFrame support adding a new appsink, such as a sub_sink?
I plan to write a GStreamer plugin to extract the SEI data from the H.264 video stream.
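
Before committing to a full plugin, the same extraction can be prototyped from Python with a pad probe on rtph264depay's src pad. A rough sketch, reusing the iter_nals() and iter_sei_payloads() helpers from the script above:

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(
    "rtspsrc location=rtsp://SIMCAM:183HX9@192.168.1.107/live "
    "! rtph264depay name=depay "
    "! video/x-h264,stream-format=byte-stream ! fakesink"
)

def on_buffer(pad, info):
    buf = info.get_buffer()
    ok, mapinfo = buf.map(Gst.MapFlags.READ)
    if ok:
        data = bytes(mapinfo.data)
        buf.unmap(mapinfo)
        for nal in iter_nals(data):  # helpers from the script above
            if nal and nal[0] & 0x1F == 6:  # SEI NAL unit
                for ptype, payload in iter_sei_payloads(nal):
                    print("SEI payload type", ptype, "size", len(payload))
    return Gst.PadProbeReturn.OK  # pass the buffer through untouched

depay = pipeline.get_by_name("depay")
depay.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, on_buffer)

pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()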

Thank you for your questions. Currently BrainFrame doesn’t support extracting AI metadata, but we’re interested in adding this feature.

Our plan is to extract the SEI data in BrainFrame by default and pass it to a capsule's "process_frame" method along with the NumPy frame.
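
To make the intent concrete, here is a hypothetical sketch of what a capsule might then see. The sei_metadata parameter does not exist in the current OpenVisionCapsules API; it only illustrates the proposed feature:

import numpy as np

class MyBackend:
    # The first four parameters roughly follow the spec's process_frame
    # signature; sei_metadata is the hypothetical addition discussed above.
    def process_frame(self, frame: np.ndarray, detection_node, options, state,
                      sei_metadata: bytes = None):
        if sei_metadata is not None:
            # Use the camera's on-board results instead of (or alongside)
            # running inference on the decoded frame.
            pass
        return detection_node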

Perhaps your GStreamer plugin for extracting SEI metadata can be shipped with BrainFrame, once we see that it works well with the rest of the system.

I’ll discuss this with the rest of the team.

Hello Alex, I am a novice with GStreamer, so it took me a week to learn the framework. So far I have written a simple H264SeiFilter plugin to extract our metadata from the SEI. In this plugin, I implemented a TCP service that sends the metadata to a Python TCP client.
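
For reference, a minimal sketch of such a client. The host, port, and newline-delimited framing are illustrative assumptions; the real wire format is whatever the plugin implements:

import socket

HOST, PORT = "127.0.0.1", 5000  # assumed address of the plugin's TCP service

with socket.create_connection((HOST, PORT)) as sock:
    buf = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break  # the service closed the connection
        buf += chunk
        # Assumed framing: one metadata record per newline-terminated line
        while b"\n" in buf:
            line, buf = buf.split(b"\n", 1)
            print("metadata:", line.decode("utf-8", errors="replace"))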

  1. What should I do next to pass it to the capsules' "process_frame" method?
  2. How does BrainFrame get data from GStreamer's appsink?
  3. Could you give me some advice?

Hi George, thank you for your work thus far. Right now it's our turn to add metadata extraction on the BrainFrame side. Would you be willing to share your H264SeiFilter GStreamer plugin and an example of how to run it with your SEI-metadata example video?

From there, we can add the plugin to BrainFrame deployments and add the feature to the OpenVisionCapsules spec.

Of course, I am happy to share it, but this plugin is just a simple demo; currently it only extracts the metadata from our camera. I am installing BrainFrame now, and I plan to share a more comprehensive plugin after completing the overall verification.

Hi Alex, thanks for your help. The BrainFrame Server and Client are now installed. How can I register my GStreamer plugin with the BrainFrame Client?
I tried copying my GStreamer plugin to the path "./v0_23_2_client/lib/gstreamer-1.0" and then ran the BrainFrame Client with this custom pipeline:
rtspsrc location={url} ! rtph264depay name="buffer_src" ! video/x-h264, stream-format=byte-stream ! h264seifilter ! decodebin ! videoconvert ! appsink name="main_sink"
[screenshots: the custom pipeline config and the resulting error]

By the way, the pipeline without the custom plugin works:

rtspsrc location={url} ! rtph264depay name="buffer_src" ! video/x-h264, stream-format=byte-stream ! decodebin ! videoconvert ! appsink name="main_sink"

Hi George!

1. Creating a Custom BrainFrame Server with Your Plugin

The plugin needs to be added to the BrainFrame Server container as well. To do that, we can edit the server container by building a custom image on your local machine (for testing purposes).

Create a file called Dockerfile, with this directory structure:

.
├── Dockerfile
└── h264seifilter.so

Then, inside the Dockerfile:

FROM dilililabs/brainframe:0.23.2
WORKDIR /usr/local/lib/gstreamer-1.0/
COPY h264seifilter.so .

# End the file in the /brainframe directory so that the
# brainframe_server executable can be found when the container starts
WORKDIR /brainframe

Then, to build the image, simply run:

docker build --tag custom_brainframe:0.23.2 .

2. Running the Custom BrainFrame Server

Now, we need to add a docker-compose.override.yml so that you can run the BrainFrame Server with your own custom image instead of the default dilililabs/brainframe:0.23.2 image.

To do that, go to the directory containing the docker-compose.yml that you downloaded from our downloads section, and arrange it like this:

.
├── docker-compose.override.yml
├── docker-compose.yml
└── license_file

Inside of docker-compose.override.yml put the following:

version: '2.4'
services:
  api:
    image: custom_brainframe:0.23.2
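
No extra flags are needed here: when both files sit in the same directory, docker-compose automatically merges docker-compose.override.yml on top of docker-compose.yml, so only the api service's image is replaced.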

Now, to start the BrainFrame Server, run the following in that directory:

docker-compose down && docker-compose up

Helpful Commands

If you need to debug inside of the container, you can always enter it with the following command:

docker run -it --entrypoint bash dilililabs/brainframe:0.23.2
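
If the GStreamer command-line tools are available in the image, running gst-inspect-1.0 h264seifilter inside the container is a quick way to confirm that the plugin was registered.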

Hi Alex, great! It works now.

I have some questions:

  1. Why does the custom plugin need to be added on both sides (Server and Client)?
  2. Which side does the custom pipeline run on, the Server or the Client?
  3. It seems that the default pipeline doesn't support H.264 NALUs with SEI, so I have to filter the SEI data out of the raw video stream.

Hi George!

Custom pipelines are used by both the server and the client, so if you use the h264seifilter plugin in your custom pipeline, it has to be available in both places.

In what way are you finding it unsupported? It’s true that BrainFrame doesn’t currently extract SEI data automatically. Are you finding that SEI data interferes with stream decoding?

I can open the RTSP stream with SEI data in VLC player without any error; however, the BrainFrame client just displays "connecting".
When I filter out the SEI data, it works.
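
For anyone following along, the filtering amounts to dropping the SEI NAL units (type 6) from the Annex-B stream before the decoder sees them. A minimal Python sketch of the idea; the real plugin does the equivalent inside the GStreamer pipeline:

def strip_sei(data: bytes) -> bytes:
    """Drop SEI NAL units (type 6) from an Annex-B H.264 byte stream.

    Minimal sketch: keeps each remaining NAL's original start code and
    ignores emulation-prevention bytes and other edge cases.
    """
    out = bytearray()
    pos = data.find(b"\x00\x00\x01")
    while pos != -1:
        nxt = data.find(b"\x00\x00\x01", pos + 3)
        end = len(data) if nxt == -1 else nxt
        nal = data[pos + 3:end]
        if not nal or nal[0] & 0x1F != 6:  # keep everything except SEI
            out += data[pos:end]
        pos = nxt
    return bytes(out)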

I have sent you my source code via WeChat.
I hope it is helpful for your plan to integrate SEI data extraction by default.
If there is anything I can help with, please let me know.
