Weekly Developer Q&A: April 22nd

Thank you to all of the developers who showed up and asked questions.

Below is a rough outline of the questions and answers from the Zoom chat. The questions have been auto-translated from Chinese to English using Google Translate, so the translations may not be perfect.

Questions and Answers


Q: The local client is configured and shows an alarm message, but there is no data in the db file in the volumes folder

A: The data does get saved in the ‘brainframe’ database, under volumes/mysql. If you see an alert in the client, then it also got saved in the database.


Q: Are the libraries and modules you provide rigorously tested to a product level of system integration?

A: Yes. Our server has over 250 automated integration tests that run constantly, checking the health of the product. We ensure that all tests pass before finalizing a release. Furthermore, each capsule we release has automated tests (which can be found here: https://github.com/opencv/open_vision_capsules/tree/master/tests)


Q: Does the server support containerized cluster deployment?

A: We allow custom containerized clustered deployments for select enterprise customers at the moment.


Q: Does the platform currently only support .pb format models?

A: Currently we support the following frameworks: https://aotu.ai/docs/capsule_development/runtime_environment/#importing
This includes TensorFlow .pb files, some TFLite files, Caffe files (using the cv2.dnn module), and OpenVINO models (.xml + .bin).


Q: How is a .pb model packaged into a .cap file?

A: The capsules are packaged using the package_capsule function found here: https://github.com/opencv/open_vision_capsules/blob/master/vcap/vcap/loading/packaging.py

You can also easily package a capsule without writing code, by following these instructions: How to package up a capsule and run it with BrainFrame?
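For context on what the packaging step does: an unpackaged capsule is just a directory (capsule.py, meta.conf, model files), and packaging bundles it into a single .cap file. Below is a minimal stdlib sketch of that idea only; the real package_capsule in vcap also handles details such as encrypting the archive, so use it for capsules you actually deploy.

```python
import zipfile
from pathlib import Path


def package_capsule_sketch(capsule_dir: str, output_file: str) -> None:
    """Rough illustration of capsule packaging: bundle every file of an
    unpackaged capsule directory (capsule.py, meta.conf, model files)
    into a single archive. The real package_capsule in vcap does more
    than this (e.g. encryption), so prefer it for real capsules."""
    root = Path(capsule_dir)
    with zipfile.ZipFile(output_file, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(root.rglob("*")):
            if path.is_file():
                # Store paths relative to the capsule root
                zf.write(path, path.relative_to(root))
```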


Q: Does the platform have testing tools for model performance?

A: We have a benchmarker, but we haven’t made it public yet. If you need it, we can send it to you. Perhaps we will add it to the next release.


Q: Does the platform have any requirements for model performance? If the model performance is not high, does the video stream freeze?

A: The video stream will not freeze, but low model performance will slow down the rest of the capsules on that particular stream.


Q: Are there any restrictions on the support of these apps? Hardware platform restrictions or other restrictions? What are the restrictions and conditions for enterprise applications?

A: BrainFrame doesn’t limit you, but it is important to test BrainFrame for the use case you want and choose your hardware accordingly. We are working on a dedicated dev kit that we benchmark our models on, so that we can guarantee certain levels of performance for different capsules.
As for our licensing, developers can get a free license file by making an account at aotu.ai, which supports 2 video streams and 7 days’ worth of recorded inference data.


Q: Does the platform have requirements for front-end collection hardware equipment? What is the minimum performance required for background analysis hardware support?

A: The cameras must provide RTSP or HTTP livestreams in order to be ingested, ideally H.264 encoded for best performance. 720p or higher is best for our existing capsules at https://aotu.ai/docs/downloads/#capsules. As for the minimum hardware for the backend, you can find more information here: https://aotu.ai/docs/user_guide/server_setup/#recommended-hardware-and-software
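Since a stream must be an RTSP or HTTP livestream to be ingested, a quick sanity check of a camera URL before adding it can save some debugging time. A minimal sketch follows; it only checks the URL's shape, not whether the stream is reachable or H.264 encoded, and including https alongside http is our assumption.

```python
from urllib.parse import urlparse

# rtsp and http come from the docs; https is assumed to work as well
SUPPORTED_SCHEMES = {"rtsp", "http", "https"}


def is_ingestible_stream(url: str) -> bool:
    """Check that a camera URL uses a scheme BrainFrame can ingest.
    This validates only the URL's shape, not whether the stream is
    actually reachable or what codec it uses."""
    parsed = urlparse(url)
    return parsed.scheme in SUPPORTED_SCHEMES and bool(parsed.netloc)
```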


Q: Can the self-trained model be directly converted into a capsule supported by the platform?

A: Yes, you can definitely wrap your own models as capsules! You can find out more about capsule development here: https://aotu.ai/docs/capsule_development/introduction/


Q: Does the platform support front-end edge computing devices?

A: Yes, we are currently doing this for select enterprise customers. We will be releasing more features on this front in the future.


Q: How does the platform support custom application scenario models?

A: First of all, you can customize the capsules. This allows you to extract whatever custom information you need for your use case.

Second, it is possible to integrate BrainFrame using our REST API https://aotu.ai/docs/api/ and our Python Library that wraps around our REST api: https://aotu.ai/docs/downloads/#python-api

Third, you can extract the inference information from BrainFrame, use the client for lines, zones, and alarms, and integrate it all into your custom application scenario.

Fourth, you can use our customizable dashboard to display data for your custom scenario: https://aotu.ai/docs/dashboard/getting_started/
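To give the REST integration path a concrete shape, here is a minimal stdlib sketch that builds endpoint URLs and fetches the configured streams. The /api prefix and the streams resource name follow the linked REST documentation, but verify them against your server version before relying on this.

```python
import json
import urllib.request

API_PREFIX = "/api"  # REST endpoints are served under /api


def endpoint(server_url: str, resource: str) -> str:
    """Build a BrainFrame REST endpoint URL, e.g. .../api/streams."""
    return f"{server_url.rstrip('/')}{API_PREFIX}/{resource}"


def fetch_streams(server_url: str):
    """Fetch the list of configured streams from a BrainFrame server.
    Resource name taken from the public REST docs; check your server
    version if this 404s."""
    with urllib.request.urlopen(endpoint(server_url, "streams")) as resp:
        return json.loads(resp.read())
```

For example, `fetch_streams("http://localhost")` would return the stream configurations as parsed JSON, which your application can then correlate with zone and alarm data.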


Q: Is there any special difference between face recognition and the other existing capsules?

A: Yes! Face recognition and other “recognition” type capsules output “encodings”. These are vector representations of the object that BrainFrame can use to compare to existing templates. It is possible to upload template faces using the client, and our Identities API: https://aotu.ai/docs/api/#tag/Identity-Control

More information on “Encoding”: https://aotu.ai/docs/capsule_development/inputs_and_outputs/#encoding
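To make the encoding comparison concrete: an encoding is just a numeric vector, and recognition boils down to finding the stored template whose encoding is closest to the query's. The sketch below uses Euclidean distance and a purely illustrative threshold; BrainFrame's actual metric and threshold may differ.

```python
import math


def euclidean_distance(a, b):
    """Distance between two encoding vectors; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def best_match(query, templates, max_distance=0.6):
    """Return the identity whose template encoding is closest to the
    query encoding, or None if nothing falls within max_distance.
    The 0.6 threshold is illustrative, not BrainFrame's actual value."""
    name, encoding = min(templates.items(),
                         key=lambda item: euclidean_distance(query, item[1]))
    if euclidean_distance(query, encoding) <= max_distance:
        return name
    return None
```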


Q: Is there a limit to the number of faces (for face recognition)?

A: No, BrainFrame does not impose a limit. However, the overall precision/recall will decrease as the size of the face dataset increases.
