Low-latency pipeline problem

Hello:

With the configuration below, I cannot connect to my stream. Could you please help me with it?

  • stream address
rtsp://***:***@192.168.2.252:554/h264
  • pipeline string:
rtspsrc location="{url}" latency=0 ! rtph264depay name="buffer_src" ! decodebin ! videoconvert ! vidleo/x-raw,format=(string)BGR ! appsink name="main_sink"
  • server logs:
http_proxy_1      | 192.168.3.15 - - [21/Apr/2021:02:12:03 +0000] "GET /api/alerts?stream_id=6&limit=100&offset=0 HTTP/1.1" 200 2 "-" "python-urllib3/1.26.2"
http_proxy_1      | 192.168.3.15 - - [21/Apr/2021:02:12:03 +0000] "GET /api/zone_alarms?stream_id=6 HTTP/1.1" 200 2 "-" "python-urllib3/1.26.2"
http_proxy_1      | 192.168.3.15 - - [21/Apr/2021:02:12:03 +0000] "GET /api/zones?stream_id=6 HTTP/1.1" 200 66 "-" "python-urllib3/1.26.2"
core_1            | ERROR:bf_streaming.url(rtsp://***:***@192.168.2.252:554/h264):Pipeline state is null. Restarting...
core_1            | WARNING:bf_streaming.url(rtsp://***:***@192.168.2.252:554/h264):Error message on pipeline for rtsp://***:***@192.168.2.252:554/h264: ../libs/gst/base/gstbasesrc.c(3127): gst_base_src_loop (): /GstPipeline:pipeline8/GstRTSPSrc:rtspsrc13/GstUDPSrc:udpsrc65:
core_1            | streaming stopped, reason not-linked (-1)
http_proxy_1      | 192.168.3.15 - - [21/Apr/2021:02:12:04 +0000] "GET /api/streams HTTP/1.1" 200 386 "-" "python-urllib3/1.26.2"
http_proxy_1      | 192.168.3.15 - - [21/Apr/2021:02:12:04 +0000] "GET /api/alerts?stream_id=6&limit=100&offset=0 HTTP/1.1" 200 2 "-" "python-urllib3/1.26.2"
http_proxy_1      | 192.168.3.15 - - [21/Apr/2021:02:12:04 +0000] "GET /api/zone_alarms?stream_id=6 HTTP/1.1" 200 2 "-" "python-urllib3/1.26.2"

Hi delpanz, welcome to the forum!

I see that you’re using a custom pipeline to reduce latency. The pipeline looks fine, so perhaps the RTSP stream is outputting a format that BrainFrame doesn’t expect. Does this stream connect properly without a custom pipeline?

I think in order to diagnose the custom pipeline problem, we’ll need more logs. Please create a new file at /usr/local/share/brainframe/.env (assuming you used the BrainFrame CLI to install) with the following contents:

GST_DEBUG=3

Then, try creating the stream with your custom pipeline again and send me the new logs. Hopefully this will provide us with enough information to diagnose the problem.

Actually, it looks like there’s a typo in your custom pipeline. The caps filter vidleo/x-raw,format=(string)BGR should instead be video/x-raw,format=(string)BGR. Here’s the full pipeline with the fix:

rtspsrc location="{url}" latency=0 ! rtph264depay name="buffer_src" ! decodebin ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"
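For completeness, here's a sketch of how that pipeline template gets filled in: as I understand it, BrainFrame substitutes the stream's actual URL for the `{url}` placeholder at runtime. The URL below is just a placeholder with example credentials:

```python
# Pipeline template with the caps-filter typo fixed (video/x-raw, not
# vidleo/x-raw).
PIPELINE = (
    'rtspsrc location="{url}" latency=0 ! rtph264depay name="buffer_src" '
    '! decodebin ! videoconvert ! video/x-raw,format=(string)BGR '
    '! appsink name="main_sink"'
)

# Example camera URL (placeholder credentials); BrainFrame fills in the
# real stream URL when it creates the pipeline.
print(PIPELINE.format(url="rtsp://user:pass@192.168.2.252:554/h264"))
```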

Thanks a lot, Tyler:

The connection issue is fixed by changing rtph264depay to rtph265depay.
But unfortunately it still doesn't work as hoped; there's still about 5 seconds of latency…

Could you send the logs with the GST_DEBUG=3 change that Tyler recommended? Perhaps TCP is being used instead of UDP. Logs would be very helpful! :smiley:

Hello Alex:

Here are my logs with GST_DEBUG=3, captured with the command "brainframe compose logs -f".
Please help check them… thanks a lot
bf_logs.txt (2.2 MB)

Hmm, I wasn’t able to see anything in those logs.

Could you try adding the protocol to rtspsrc?

rtspsrc location="{url}" latency=0 protocol=udp

I think that should force UDP. I’m curious if this will improve latency for you.
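One thing worth double-checking: in the GStreamer releases I'm familiar with, the rtspsrc property is spelled `protocols` (plural). Here's a sketch of the full pipeline string with UDP forced, written as a hypothetical Python helper (the helper name and example URL are mine, not part of BrainFrame):

```python
# Hypothetical helper: build the low-latency pipeline with UDP transport
# forced. Note the "protocols" spelling, which is what stock GStreamer's
# rtspsrc element expects.
def udp_pipeline(url: str) -> str:
    return (
        f'rtspsrc location="{url}" latency=0 protocols=udp '
        '! rtph265depay name="buffer_src" ! decodebin ! videoconvert '
        '! video/x-raw,format=(string)BGR ! appsink name="main_sink"'
    )

print(udp_pipeline("rtsp://user:pass@192.168.2.252:554/h264"))
```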

It still doesn't seem to work…
I will reinstall BrainFrame and test again.

Hi Alex:
The latency became lower after I reduced the video source size from 4K to 720p,
but there is still about 1 second of latency.

Here's my pipeline:

rtspsrc location="{url}" latency=0 protocol=udp ! rtph265depay name="buffer_src" ! decodebin ! videoconvert ! video/x-raw,format=(string)BGR ! appsink name="main_sink"
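(As an aside, the downscale could presumably also be done inside the pipeline rather than on the camera. A sketch, where the videoscale placement and the caps are assumptions on my part:)

```python
# Hypothetical variant: have the pipeline itself scale decoded frames
# down to 720p via videoscale, instead of lowering the camera's output
# resolution.
def downscaled_pipeline(url: str, width: int = 1280, height: int = 720) -> str:
    return (
        f'rtspsrc location="{url}" latency=0 protocols=udp '
        '! rtph265depay name="buffer_src" ! decodebin ! videoconvert ! videoscale '
        f'! video/x-raw,format=(string)BGR,width={width},height={height} '
        '! appsink name="main_sink"'
    )

print(downscaled_pipeline("rtsp://user:pass@192.168.2.252:554/h264"))
```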

So I guess there may be some buffering on the BrainFrame server side, but I cannot find any docs on these details.
I would really appreciate it if you could tell us more…

Thank you.

Can I ask what your application is?

BrainFrame (Server) doesn’t have any buffers, but the BrainFrame Client does. The Client has to buffer until it receives inference results from the server, so that it can pair up frames to their bounding boxes. The client does this even if there are no capsules loaded on the server.
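As a rough illustration of why that pairing buffer adds latency (this is a toy sketch, not the Client's actual implementation):

```python
from collections import OrderedDict


class FramePairingBuffer:
    """Toy sketch: hold decoded frames until the matching inference
    result (keyed here by timestamp) arrives, then emit the pair.
    Frames sit in this buffer -- adding display latency -- for as long
    as the server takes to return results."""

    def __init__(self):
        self._frames = OrderedDict()  # timestamp -> frame

    def add_frame(self, timestamp, frame):
        self._frames[timestamp] = frame

    def add_result(self, timestamp, detections):
        # Pop the frame that matches this result; returns None if the
        # result arrived for a frame we never buffered.
        frame = self._frames.pop(timestamp, None)
        if frame is None:
            return None
        return frame, detections


buf = FramePairingBuffer()
buf.add_frame(100, "frame-100")
print(buf.add_result(100, ["person"]))  # the frame paired with its detections
```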

Hi Alex,
Our app just shows the video from an IP camera, and it has a latency of about 2 seconds.
Thanks for sharing the details about the buffer. It's really helpful for us.

Thank you.

Got it, thank you for the info.

Did you know that the BrainFrame Client is actually open source? The source code can be found here. All of the logic for displaying video and bounding boxes is in that repository. It's also easy to package it as a snap to distribute a modified version.

Feel free to ask any questions if this is something you are interested in.