AswinkarthikeyenAK changed the title from "Real-time simulation in RVIZ using data obtained from camera and IMU" to "Real-time simulation in RVIZ using data obtained from camera" on Oct 22, 2020.
Hi all,
This is a question.
I am using a URDF model to visualize real-time movement in RViz. I have attached IMU sensors to the joints (mainly the upper body: two sensors on each arm and one on the pelvis, five in total), and each sensor streams fused Euler angle data.
I have a joint state publisher that subscribes to this data and sends it to the respective joints, which rotates them. However, I now plan to use a camera to capture human motion and visualize that motion on the URDF model.
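For context, the IMU pipeline described above can be sketched roughly as follows. This is a minimal, ROS-free illustration of the mapping step only; the IMU names and URDF joint names here are hypothetical placeholders, and in the real publisher the resulting dictionary would fill the `name` and `position` fields of a `sensor_msgs/JointState` message.

```python
import math

# Hypothetical labels for the five IMU placements described above
# (two per arm plus one on the pelvis); real URDF joint names will differ.
IMU_TO_JOINT = {
    "imu_left_upper_arm":  "left_shoulder_pitch",
    "imu_left_forearm":    "left_elbow",
    "imu_right_upper_arm": "right_shoulder_pitch",
    "imu_right_forearm":   "right_elbow",
    "imu_pelvis":          "pelvis_yaw",
}

def euler_to_joint_positions(imu_readings):
    """Map one fused Euler angle per IMU (in degrees) to joint positions
    in radians, keyed by joint name -- the shape a JointState expects."""
    return {IMU_TO_JOINT[imu]: math.radians(deg)
            for imu, deg in imu_readings.items()}

positions = euler_to_joint_positions({"imu_left_forearm": 90.0})
```

A real implementation would publish this mapping at a fixed rate and handle all three Euler axes per sensor, but the core translation step is just this renaming and unit conversion.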
To that end, I am capturing human motion with a stereo camera and using OpenPose to extract a person's 3D keypoints, which I can already visualize in RViz. Is there any way to use this 3D point data to replicate the same motion on the URDF model?
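One common approach, sketched here as an assumption rather than a confirmed solution, is to convert triplets of 3D keypoints into joint angles and feed those into the same joint state publisher used for the IMUs. For a hinge joint like the elbow, the angle between the upper-arm and forearm segments can be computed directly:

```python
import math

def joint_angle(a, b, c):
    """Interior angle (radians) at keypoint b -- e.g. the elbow --
    formed by the segments b->a (upper arm) and b->c (forearm)."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against rounding errors outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# Straight arm: shoulder, elbow, and wrist collinear.
straight = joint_angle((0, 0, 0), (0.3, 0, 0), (0.6, 0, 0))
```

Multi-axis joints such as the shoulder need more than one scalar angle, so a full solution typically runs an inverse-kinematics step over the keypoints instead; the angle-per-triplet trick above is only a starting point for single-degree-of-freedom joints.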
Please let me know if you need more information.
Thanks