BrainFrame Forum

Question about Customizing the Client

I have several questions about BrainFrame. Could you please answer them?

  1. Logo customization. Is it possible to customize our logo on BrainFrame?
  2. With regard to integration with other systems, can the alarm information be pushed to other software platforms?
  3. Regarding picture and video storage before and after an alarm: is it possible to store video/pictures from before and after the alarm point?
  4. Can the display information in the video, such as the text (for example, “person”, “car”) and the bounding framework, be modified, or is the current format fixed? Or are they decided by capsules?
  5. For “detector_people_and_vehicles_fast.cap”, how can I change “person” to “people”, or change “car” to “汽车”?

Thank you for posting this on the forum. Here are the answers from our PM.

  1. Logo customization. Is it possible to customize our logo on brainframe?

This is not currently a supported feature, but we have done custom logos for enterprise clients.

  2. With regard to integration with other systems, can the alarm information be pushed to other software platforms?

Yes, absolutely! Our REST API is documented at https://aotu.ai/docs/api/ and includes all of the endpoints for alerts and alarms. I also recommend trying out our Python library (an interface to the REST API), which can be downloaded here: https://aotu.ai/docs/downloads/#python-api. The Python library has methods such as api.get_all_alerts(), api.get_alert_frame(alert_id), and other helpful functions (everything these functions do is also possible through the REST API directly).
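As a rough illustration, here is a sketch of how you might post-process the alerts once you have pulled them down with api.get_all_alerts(). Note this is an assumption for illustration only: the alert fields used below (a dict-like record with "id" and "end_time", where end_time is None while the alert is still ongoing) are not taken from the official docs, so check the API reference for the real schema.

```python
# Sketch: filter alerts pulled from the BrainFrame Python API down to the
# ones that are still active. The field names here are assumptions for
# illustration; consult https://aotu.ai/docs/api/ for the actual schema.

def active_alerts(alerts):
    """Return only the alerts that have not ended yet."""
    return [a for a in alerts if a.get("end_time") is None]

# Example with hand-made sample data standing in for api.get_all_alerts():
sample = [
    {"id": 1, "end_time": None},        # ongoing alert
    {"id": 2, "end_time": 1690000000},  # alert that already finished
]
print([a["id"] for a in active_alerts(sample)])  # [1]
```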

  3. Regarding picture and video storage before and after an alarm: is it possible to store video/pictures from before and after the alarm point?

Currently there is no video storage. What we do support is called “Alert Frames”: a picture taken at the start of the alert. Perhaps we could add support for a picture taken at the end of an alert as well?

  4. Can the display information in the video, such as the text (for example, “person”, “car”) and the bounding framework, be modified, or is the current format fixed? Or are they decided by capsules?

Well, the names “person”, “car”, or “train” are determined by the capsule. Capsules are an open-source format, so if you want to make your own capsule you can do so (source code can be found at github.com/opencv/open_vision_capsules). As for what gets rendered on screen, the client has a “Render Configuration” in the toolbar where you can decide what gets overlaid. Capsules also have configurable elements: under “Global Plugin Configuration”, select “Detector Person and Vehicles” and you can choose to filter out people, vehicles, or animals.

  5. For “detector_people_and_vehicles_fast.cap”, how can I change “person” to “people”, or change “car” to “汽车”?

Unfortunately, that is not currently possible without changing the capsule's source code.

Hello Thiel!

Two other questions:

  1. Can the bounding framework be modified after recognizing a “person” or “car”? For example, from the current rectangle to a circle?

  2. We use a camera on the LAN for testing. There seems to be a delay of about 5 s between an action and its appearance in the BrainFrame display. How can this delay be reduced?

Thank you for the questions!

  1. Can the bounding framework be modified after recognizing a “person” or “car”? For example, from the current rectangle to a circle?

The bounding boxes are actually not boxes, but exact renders of the polygon produced by the capsule. Capsules can output shapes with any number of sides: as you can see in the DetectionNode definition in the open_vision_capsules repository, the DetectionNode object has a self.coords attribute containing a list of (x, y) points. If you made the capsule output a polygon that approximates a circle, the BrainFrame client would render a circle.
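To make that concrete, here is a minimal sketch of generating such a circle-approximating polygon as a list of (x, y) points, the same coordinate format DetectionNode's self.coords uses (the function name and parameters here are my own, for illustration):

```python
import math

def circle_coords(cx, cy, radius, num_points=32):
    """Approximate a circle as an N-sided polygon of (x, y) points,
    matching the list-of-(x, y) format of DetectionNode.coords."""
    return [
        (cx + radius * math.cos(2 * math.pi * i / num_points),
         cy + radius * math.sin(2 * math.pi * i / num_points))
        for i in range(num_points)
    ]

# A 32-sided polygon centered at (100, 100) with radius 50 already looks
# like a circle when rendered:
coords = circle_coords(100, 100, 50)
print(len(coords))  # 32
```

With enough points (32 is usually plenty at video resolutions), the rendered polygon is visually indistinguishable from a true circle.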

  2. We use a camera on the LAN for testing. There seems to be a delay of about 5 s between an action and its appearance in the BrainFrame display. How can this delay be reduced?

Yes, this is possible through custom pipelines! By default, BrainFrame “buffers” frames to ensure a more stable streaming experience. To prevent that, try the pipeline below:

rtspsrc location={url} latency=0 ! rtph264depay name="buffer_src" ! decodebin ! videoconvert ! appsink name="main_sink"

The “latency=0” is the only change: by default, the latency in BrainFrame's pipeline is 3000 ms. Please test this out; we will add it to our docs in the next release! Cheers.