using IR LED and mono cameras for skeleton detection #27
You have to modify the pipeline to replace the ColorCamera node with a MonoCamera node and slightly modify the ImageManip nodes to convert the mono frame to a color frame (the NeuralNetwork nodes fed by the ImageManip nodes take a color frame as input). You would need to add something like the conversion sketched below. But it is not guaranteed that the models work well on mono frames; I know, for instance, that hand palm detection does not work well on mono frames.
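For reference, a minimal sketch of what that replacement could look like with depthai 2.x (the input size and the pd_nn node name are placeholders, not the repo's actual values):

import depthai as dai

pipeline = dai.Pipeline()

# MonoCamera in place of the ColorCamera (left or right socket)
mono = pipeline.create(dai.node.MonoCamera)
mono.setBoardSocket(dai.CameraBoardSocket.RIGHT)
mono.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)

# ImageManip resizes and converts the grayscale frame to the planar BGR format the NN expects
mono_manip = pipeline.create(dai.node.ImageManip)
mono_manip.initialConfig.setResize(224, 224)   # placeholder: use the pose detector's real input size
mono_manip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)
mono.out.link(mono_manip.inputImage)

# pd_nn stands for the pose detection NeuralNetwork node already defined in the pipeline
# mono_manip.out.link(pd_nn.input)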
Thanks for the hint. I guess I can pick either the left or the right camera for the MonoCamera.
Would I need to comment out the right camera pipeline, since I am already going to use it as my main camera? I can't just simply change it. As you already mentioned, I need to convert one of the MonoCamera images into an RGB frame to hand to blazepose. I get an error when I try.
I wonder if your "DepthAI Pipeline Graph" tool could help me set this up more easily?
"DepthAI Pipeline Graph" will just help you to visualize the nodes and the connections between the nodes.
I think the error you get ( |
@geaxgx Hello, I also used the grayscale camera to run the blazepose algorithm: I updated the pipeline, changed the color_cam node to mono_cam, and added a gray2bgr conversion. But the output detection box and pose landmarks are not on my body; they sit up and to my left, and the landmarks are very jittery. I changed the internal frame resolution to 720p, but the behaviour did not change. Could it be that my changes are not complete? I hope for an early reply, thank you.
@shamus333 I can't say without having a look at your code.
@geaxgx OK, I have modified part of the code and uploaded it to you as an attachment. The command to run it is python3 demo.py -e --lm_m lite, and the depthai version is 2.17.0.0. The result is shown in the picture. (The source files are too large, so I only uploaded the modified file; you can overwrite the original source code with it.)
It seems to be a problem with the padding. Can you also give me the result printed for this line (…)? I am busy and can't spend time on your problem right now, but I had a quick glance at your code.
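For context, a rough illustration (made-up variable names, not the repo's actual code) of why wrong padding values shift the skeleton: the detector works on a square image, so the frame is padded first, and the landmark coordinates have to be un-padded with the same values afterwards.

# Hypothetical example: frame size the pads are computed from
frame_w, frame_h = 1280, 720
pad_h = max(0, (frame_w - frame_h) // 2)   # vertical pad to make the frame square
pad_w = max(0, (frame_h - frame_w) // 2)   # horizontal pad (zero here)

# Landmarks come back in padded-image coordinates; if pad_h/pad_w were computed
# for a different frame size (e.g. the color camera), subtracting them here puts
# the points up and to the left of the body.
def unpad(x_padded, y_padded):
    return x_padded - pad_w, y_padded - pad_h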
Thanks @shamus333 for posting your code. It is very helpful to see how to use mono_manip.
I did remove that part. FYI, I am working on the OAK-D Pro PoE, not the USB version. I commented out everything related to it.
I too see the offset, even though I am doing it on (…). I also see this: (…)
I have tested the MonoCamera + tracking outside in the dark and it works OK-ish. It works best when using the heavy model.
@stephanschulz I think you should align the stereo depth frame with the mono camera instead of the color camera:
@geaxgx thanks for the tip. The options seem to be AUTO, RGB, LEFT, RIGHT; since I use the right mono camera, I went with RIGHT. I did notice that detection and tracking are much better on the RGB image, probably because the model was trained on RGB data. But I need to use it outside in the dark.
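A minimal sketch of that alignment, assuming the right mono camera is the one feeding blazepose:

import depthai as dai

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)

# Align the depth map to the same mono camera that feeds the pose model
stereo.setDepthAlign(dai.CameraBoardSocket.RIGHT)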
That's what I would have tried too.
If you use the depthai demo, you can get a feeling for the influence of these values:
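If those values are the IR illumination controls (this thread is about using the OAK-D Pro's IR LED), they can also be set from a script. A sketch, with example milliamp values only:

import depthai as dai

pipeline = dai.Pipeline()
# ... build the rest of the pipeline here ...

with dai.Device(pipeline) as device:
    # OAK-D Pro only: drive the IR flood LED and dot projector (values in mA, examples only)
    device.setIrFloodLightBrightness(750)
    device.setIrLaserDotProjectorBrightness(400)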
I am now trying to increase the MonoCamera resolution, which I think is done by setting the sensor resolution on the camera node.
Maybe the error does not come from the left or right camera but rather from the ImageManip node.
@stephanschulz When you use an ImageManip node, you must specify the maximum output size, like it is done in depthai_blazepose/BlazeposeDepthaiEdge.py, line 280 (commit a3ce15a).
So in your case, depending on the resolution width x height you use, you should have: mono_manip.setMaxOutputFrameSize(width*height*3)
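Putting that together with the resolution change, a sketch (depthai 2.x; 1280x800 assumed for the OAK-D Pro mono sensors) of sizing the ImageManip output to match the mono resolution:

import depthai as dai

pipeline = dai.Pipeline()

mono = pipeline.create(dai.node.MonoCamera)
mono.setBoardSocket(dai.CameraBoardSocket.RIGHT)
mono.setResolution(dai.MonoCameraProperties.SensorResolution.THE_800_P)
width, height = 1280, 800

mono_manip = pipeline.create(dai.node.ImageManip)
mono_manip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)
mono_manip.setMaxOutputFrameSize(width * height * 3)   # 3 bytes per pixel after gray -> BGR
mono.out.link(mono_manip.inputImage)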
That works.
I am able to display the right mono camera and can see the IR LED is illuminating the scene.
But blazepose seems to run on the RGB camera.
I would like to use the camera in a dark environment and would like to learn how to let blazepose use the mono camera that can see in the dark thanks to the IR LED.