Add Features to Existing Capsule

Hi,
I’d like to add some new features to an existing capsule. Is there a way to do that, or do I need to create a brand new capsule?

What kinds of features do you want to add?

@alex.thiel @tyler.compton
Before someone can help you, have you read the information here? You may find what you need:
https://openvisioncapsules.readthedocs.io/en/latest/

I have tried several person detection capsules. I’d like to find the coordinates of each detected bounding box so I can calculate the distance between each pair of people. Is there a way to do that? Thank you.

Hi there, thank you for posting on our forum!

There are two main entry points for a developer who is using BrainFrame to make use of inference information:

  1. The capsules system, where, as you know, it is possible to modify code that runs on the BrainFrame server.
  2. The REST API, which allows you to interface with BrainFrame and extract inference data in real time.

I’d like to find the coordinates for each detected bounding box to calculate distance between each person.

It sounds as though you want to do post-processing of inference data, rather than create a new deep learning model or extract information from the frame itself. I recommend using the REST API to extract data in real time and do your post-processing with it. Here’s a snippet where I stream inference results from BrainFrame and look at the coordinates of the returned bounding boxes.

To run this code, first install the brainframe-api Python wrapper:

pip3 install brainframe-api

Now you can run the following code to print out detections live:

from brainframe.api import BrainFrameAPI, bf_codecs

api = BrainFrameAPI("http://localhost")

assert len(api.get_stream_configurations()), \
    "There should be at least one stream already configured!"

# Stream inference results as they are produced
for zone_status_packet in api.get_zone_status_stream():
    # Organize detection results as a dictionary of {stream_id: [Detection]}
    # ("Screen" is the default zone that covers the entire frame)
    detections_per_stream = {
        stream_id: zone_status.within
        for stream_id, zone_statuses in zone_status_packet.items()
        for zone_name, zone_status in zone_statuses.items()
        if zone_name == "Screen"
    }

    # Iterate over each stream_id, detections combination
    for stream_id, detections in detections_per_stream.items():
        # Look at detection coordinates here
        for detection in detections:
            detection: bf_codecs.Detection
            print(f"Class={detection.class_name}, coords={detection.coords}")

This code connects to the BrainFrame server through the brainframe-api Python wrapper and subscribes to the inference output stream. I recommend adding your coordinate-measuring code on top of the REST API in this manner.
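As a follow-up on the original question, here is a minimal sketch of how you might compute pairwise distances between people once you have the detection coordinates. It assumes each detection’s `coords` is a list of (x, y) pixel points outlining the bounding box, as printed in the snippet above; the `center` and `pairwise_distances` helpers are my own names, not part of brainframe-api. Note that the result is a distance in pixels; converting pixels to a real-world distance would require camera calibration.

```python
from itertools import combinations
from math import hypot

def center(coords):
    # Approximate the box center by averaging the polygon's points
    xs = [point[0] for point in coords]
    ys = [point[1] for point in coords]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def pairwise_distances(all_coords):
    # all_coords: one coordinate list per detected person,
    # e.g. [detection.coords for detection in detections]
    distances = []
    for (i, a), (j, b) in combinations(enumerate(all_coords), 2):
        (ax, ay), (bx, by) = center(a), center(b)
        distances.append((i, j, hypot(ax - bx, ay - by)))
    return distances

# Example: two 10x10 boxes whose centers are 100 pixels apart
boxes = [
    [(0, 0), (10, 0), (10, 10), (0, 10)],
    [(100, 0), (110, 0), (110, 10), (100, 10)],
]
print(pairwise_distances(boxes))  # → [(0, 1, 100.0)]
```

You could call `pairwise_distances([d.coords for d in detections])` inside the stream loop above to get the distance between every pair of detected people in each frame.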