It shows that Docker is running, but my client cannot connect to the server; the error is 502 Bad Gateway. The web page shown on the Linux machine is 302 Found. Could anyone please help me?
Hi Meggie, welcome to the forum!
This can happen when the BrainFrame core service fails to start. Would you mind running this command for me and attaching the results? It must be run in the same directory as your docker-compose.yml file:
docker-compose logs core > core_logs.txt
Thank you so much and this is my log. core-logs.txt (306.4 KB)
It looks like BrainFrame is running properly. It may be an issue with our HTTP proxy. Would you mind trying to restart it with the following command?
docker-compose restart http_proxy
I restarted the proxy, but it still doesn’t work.
I found the problem, thank you very much!
I’m glad to hear it! What did the problem end up being?
I found that if I shut down the server while the client is running, the database gets corrupted. So I reloaded the database and the server now works well.
Could I ask one more question? Does the API support .h5 files and the Keras framework?
Good question! Right now our supported libraries are listed here. We ship with TensorFlow 1.15, which includes Keras within it.
You should be able to import Keras and use it as normal (including loading .h5 files with h5py, or the Keras wrappers):
import tensorflow.keras as keras
By the way, you can get an interactive Python interpreter to experiment with the shipped libraries by running the following line:
docker run -it --entrypoint python3 aotuai/brainframe_core:0.26.0
I’d love to help you more through your capsule development journey. Do you have any source code for the model you could share, so I can assist you in converting it to a capsule?
Question: is this model open source? I can help you much more if you share the model/source with me; I’ll help you get started with building the capsule.
It’s great that you’re digging into the tf_image_classification.py source code; that will help you understand how developers should implement a backend.
Some more information on tf_image_classification.py: it was created to support image classification models trained with the TensorFlow Slim repository, and we have used that repository to fine-tune many different architectures pretrained on ImageNet. It won’t work on just any model; it only works with models trained with that repository, because they follow certain naming conventions for the inputs, outputs, etc.
If you want to use the Keras model directly, I think it would be simpler to make a new Backend that loads the model and runs inference using Keras exclusively.
Hi Meggie, you got that error because the BaseBackend requires that a Backend.close() method exist.
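To illustrate that requirement, here is a minimal, hypothetical sketch of a backend class with the mandatory close() method. The class and method bodies are placeholders for illustration only; the real vcap base class imposes more structure than this:

```python
# Hypothetical sketch: the base backend contract requires close() so
# BrainFrame can release resources (e.g. a loaded Keras model/session)
# when the capsule is unloaded. Names here are illustrative, not the
# real vcap API.
class ViolenceClassifierBackend:
    def __init__(self, model=None):
        # In a real capsule, this would load the Keras .h5 model.
        self.model = model
        self.closed = False

    def process_frame(self, frame, detection_node):
        # Placeholder: a real backend would crop, resize, run the model,
        # and attach the resulting attribute to the detection node.
        return detection_node

    def close(self):
        # Required by the base class: release the model and any sessions.
        self.model = None
        self.closed = True
```

Without a close() method defined, instantiating the backend through the base class fails with the error you saw.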
I’ve taken the liberty of building a capsule for you, for your reference. I haven’t tested it; if you send me the model, I can fix any issues with it. Note that you will need a person detector for this capsule! I recommend detector_person_and_vehicles_fast.
Please take a look at the capsule. If you send me the model.h5, I can help validate, and even help you build the batch_predict method for extra speed!
I think I forgot to resize the image before input… Also, it’s important to know how this model was trained: was it trained using RGB or BGR channel order? BrainFrame provides frames in BGR order, so we may need to flip the channels to preserve the accuracy.
classifier_violence_v1.zip (2.0 KB)
Thanks, I’ll take a look at the model.
I don’t know whether I can use frames with the input type NodeDescription.Size.NONE and create a new attribute on person detections from another capsule.
Technically, you can accept NONE as input and output a detection with an attribute classification of “violent” or “non-violent”. It’s a bit unusual, but it’s what I did with detector_fire_fast on our downloads page. For your case, I would think it makes more sense to accept “person” as input and crop and classify the person detection. By the way: when you accept “person” as input, the “frame” parameter is still the whole frame. It’s up to the capsule developer whether to crop the frame to the person or not.
That, or if you want to label all people in the frame as “violent” or “non-violent”, you can accept NodeDescription.Size.ALL “person” detections as input, classify the whole frame, and then label all of the people accordingly.
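The second approach above (classify the whole frame, then tag every person) can be sketched roughly as follows. The dict-based detection shape and the attribute key are assumptions for illustration; the real DetectionNode class stores attributes differently:

```python
# Hypothetical sketch: after classifying the whole frame once, attach
# the same attribute to every person detection that was passed in.
def label_all_people(detections, label):
    for det in detections:
        # Detections are modeled as plain dicts here for illustration.
        det["attributes"]["violence"] = label
    return detections

# Example: three person detections, all labeled from one frame-level result
people = [{"class_name": "person", "attributes": {}} for _ in range(3)]
labeled = label_all_people(people, "violent")
```

The trade-off is speed versus precision: one inference per frame instead of one per person, at the cost of every person sharing the same label.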
The pictures are RGB and I want to use fine-tuned vgg16 to classify.
Good to know. You will definitely want to flip the channels before feeding frames to the model!
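The channel flip itself is a one-liner, assuming the frame is a NumPy array with the channel axis last (BGR order, as BrainFrame provides):

```python
import numpy as np

def bgr_to_rgb(frame: np.ndarray) -> np.ndarray:
    """Reverse the channel axis: BGR -> RGB (and vice versa)."""
    return frame[..., ::-1]

# Example: a 1x1 "image" with B=10, G=20, R=30
bgr = np.array([[[10, 20, 30]]], dtype=np.uint8)
rgb = bgr_to_rgb(bgr)
# rgb[0, 0] is now [30, 20, 10]
```

The same function converts in both directions, since it simply reverses the last axis.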
Are there more tutorials?
Yes! We have some upcoming tutorials on our staging.aotu.ai website. I will PM you with the details.
Hi @Meggie I took your model, and modified my code to work on it. Here’s the resultant capsule: https://drive.google.com/file/d/1ARSeKzn2qL8yeTHCUNjQKKK_GlMP3hw2/view?usp=sharing
This should be able to load onto CPU or GPU, and the inference now works. It’s up to you to test the accuracy!
You’ll notice that in the process_frame method I take the frame and the detection_node, and crop the frame to the person. You might want to modify that logic.
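For reference, the cropping step might look like the sketch below. The (x1, y1, x2, y2) pixel-coordinate bbox format is an assumption for illustration; the real DetectionNode stores its coordinates differently:

```python
import numpy as np

def crop_to_detection(frame: np.ndarray, bbox) -> np.ndarray:
    """Crop a frame to a detection's bounding box.

    bbox is assumed to be (x1, y1, x2, y2) in pixel coordinates;
    values are clamped to the frame so a partially out-of-frame box
    never raises an error.
    """
    h, w = frame.shape[:2]
    x1, y1, x2, y2 = bbox
    x1, x2 = max(0, int(x1)), min(w, int(x2))
    y1, y2 = max(0, int(y1)), min(h, int(y2))
    return frame[y1:y2, x1:x2]

# Example: crop a 100x100 frame to a 20x30 person box
frame = np.zeros((100, 100, 3), dtype=np.uint8)
person = crop_to_detection(frame, (10, 20, 30, 50))
```

If you want the model to see context around the person, you could pad the box before clamping instead of cropping tightly.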
Please test it and tell me if you find any bugs!
I’m glad this worked! If you’re interested (and if the capsule works properly), we are always excited to push capsules to open source via our open source capsule_zoo.
If you are interested in sharing this capsule with the world, feel free to make a pull request to the repo. As long as the capsule works and you have the licensing for it, we’d love to have it!