I see that you’re using a custom pipeline to reduce latency. The pipeline looks fine, so perhaps the RTSP stream is outputting a format that BrainFrame doesn’t expect. Does this stream connect properly without a custom pipeline?
To diagnose the custom pipeline problem, I think we’ll need more logs. Please create a new file at /usr/local/share/brainframe/.env (assuming you used the BrainFrame CLI to install) with the following contents:
GST_DEBUG=3
Then, try creating the stream with your custom pipeline again and send me the new logs. Hopefully this will provide us with enough information to diagnose the problem.
Actually, it looks like there’s a typo in your custom pipeline. The caps filter vidleo/x-raw,format=(string)BGR should instead be video/x-raw,format=(string)BGR.
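Since your full pipeline isn’t quoted in this thread, here’s a rough sketch of what a corrected RTSP pipeline of that shape might look like; every element besides the video/x-raw,format=(string)BGR caps filter is an assumption and may differ from yours:

rtspsrc location=rtsp://<your-camera-address> ! decodebin ! videoconvert ! video/x-raw,format=(string)BGR ! appsink

If you want to sanity-check a pipeline like this outside of BrainFrame, you can run it with gst-launch-1.0, swapping the appsink for videoconvert ! autovideosink so the video is displayed.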
So I guess there may be some buffering on the BrainFrame server side, but I can’t find any docs covering the details.
I’d really appreciate it if you could tell us more …
BrainFrame (Server) doesn’t have any buffers, but the BrainFrame Client does. The Client has to buffer frames until it receives inference results from the server, so that it can pair each frame with its bounding boxes. It does this even if there are no capsules loaded on the server.
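To illustrate the idea, here’s a simplified sketch of that pairing logic in Python. It’s illustrative only, not the actual Client code; the FramePairingBuffer name and its methods are made up for this example:

from collections import deque

class FramePairingBuffer:
    """Hold decoded frames until the server's inference results catch up."""

    def __init__(self):
        self._frames = deque()  # (timestamp, frame), in arrival order

    def add_frame(self, timestamp, frame):
        # Every decoded frame is held here instead of being shown immediately.
        self._frames.append((timestamp, frame))

    def add_result(self, timestamp, detections):
        # A result describes the frame it was computed on, so drop any
        # frames older than the result, then emit the matching pair.
        while self._frames and self._frames[0][0] < timestamp:
            self._frames.popleft()
        if self._frames and self._frames[0][0] == timestamp:
            _, frame = self._frames.popleft()
            return frame, detections  # render these together
        return None  # no buffered frame matched this result

This is why the displayed video lags the live stream by roughly the server’s round-trip inference time.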
Hi Alex,
Our app just shows the video from an IP camera, and it has about 2 seconds of latency.
Thanks for sharing the buffer details. It’s really helpful for us.
Did you know that the BrainFrame Client is actually open source? The source code can be found here. All of the logic for displaying video and bounding boxes lives in that repository, and it’s also easy to package and build a snap for it to distribute a modified version.
Feel free to ask any questions if this is something you are interested in.