Visualize arm reach during drive mode #14
Comments
I'm using three.js on the operator side to render the circle on the ground. I also updated sensors.js to send the camera position/rotation from the robot browser to the operator. The circle seems to be lined up when the camera is pointing straight down, but as the camera moves it becomes less aligned.
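For reference, a minimal sketch of how an operator-side overlay like this could be set up in three.js; all identifiers, the field of view, and the reach value here are illustrative assumptions, not the repo's actual code:

```js
// Sketch: a transparent reach circle on the ground plane, re-rendered
// whenever the robot reports a new camera pose (assumed names/values).
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(69, 4 / 3, 0.1, 100); // fov/aspect assumed
const renderer = new THREE.WebGLRenderer({ alpha: true }); // transparent over the video

const ARM_REACH_M = 0.52; // assumed max reach; use the real robot's value
const circle = new THREE.Mesh(
  new THREE.CircleGeometry(ARM_REACH_M, 64),
  new THREE.MeshBasicMaterial({ color: 0x00ff00, transparent: true, opacity: 0.3 })
);
circle.rotation.x = -Math.PI / 2; // lay the disc flat on the ground plane
scene.add(circle);

// renderer.domElement must be positioned over the video element in the DOM.
// Called when sensors.js delivers a new camera pose from the robot:
function onCameraPose(position, quaternion) {
  camera.position.copy(position);
  camera.quaternion.copy(quaternion);
  renderer.render(scene, camera);
}
```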
I wonder if there is a simplification of the problem because the object being rendered is a circle. Assuming the camera's rotation axis is also the center of the robot base that the arm rotates around: if the camera is pointing straight down, it should see a perfect circle (though it might not fit into the frame); as it looks higher and higher (increased tilt), the circle will get skewed on the y axis (vertical) and less of it will be visible (eventually none of it). The pan movement should have no effect (again, if the assumption is correct). So it feels like we just need a few parameters:

- the arm reach-radius to pixel ratio (i.e. how big the circle you draw is in the straight-down case),
- the tilt-angle change to skew ratio (i.e. how much to skew the circle into an ellipse when the head tilts up by a certain amount),
- the tilt-angle change to pixel movement ratio (i.e. how much to move the circle center downwards in the image when the head tilts up by a certain amount).

Does that make sense? (A concrete sketch of this projection follows below.) Might be worth trying if you're stuck with the other path. But I completely agree that it would be great to get three.js working at some point for other geometric-overlay-based interfaces.
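Here is a minimal sketch of that projection done directly with a pinhole model rather than fitted ratios; the function name, the focal-length/principal-point parameters, and the axis conventions are all assumptions for illustration:

```js
// Project the arm-reach circle on the ground into image coordinates,
// given camera height h (m), downward tilt theta (rad), focal length
// f (px), and principal point (cx, cy). Assumes the camera sits directly
// above the base rotation center, which is also the circle center.
function projectReachCircle(R, h, theta, f, cx, cy, numPoints = 64) {
  const pts = [];
  for (let i = 0; i < numPoints; i++) {
    const t = (2 * Math.PI * i) / numPoints;
    // Ground point on the reach circle (world: x right, y up, z forward).
    const x = R * Math.cos(t);
    const z = R * Math.sin(t);
    // Camera-frame coordinates after pitching down by theta.
    const xc = x;
    const yc = -h * Math.cos(theta) + z * Math.sin(theta);
    const zc =  h * Math.sin(theta) + z * Math.cos(theta);
    if (zc <= 0) continue; // behind the camera, not visible
    pts.push({ u: cx + f * (xc / zc), v: cy - f * (yc / zc) });
  }
  return pts;
}
```

With theta = 90° (straight down) this traces a perfect circle; as theta shrinks the trace flattens into an ellipse and slides down the image, matching the three ratios described above.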
Yeah, I think that does make sense. I'm working on some more advanced camera math that takes into account the position provided by ROS, as well as the physical rotation point and position of the camera sensor (I'll post more details on the math if it ends up working). If that doesn't work, I'll definitely try the other method.
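A hedged sketch of what that camera math could look like: compose the base pose reported by ROS with the pan/tilt joint angles and a fixed physical offset from the tilt joint to the camera sensor. The offset values, joint axes, and names are assumptions:

```js
import * as THREE from 'three';

// Fixed offset from the pan/tilt rotation point to the camera sensor,
// as measured on the physical robot (illustrative numbers, in meters).
const SENSOR_OFFSET = new THREE.Matrix4().makeTranslation(0.0, 0.05, 0.03);

// world <- base <- pan joint <- tilt joint <- sensor
function cameraWorldMatrix(basePose, panRad, tiltRad) {
  const pan = new THREE.Matrix4().makeRotationY(panRad);
  const tilt = new THREE.Matrix4().makeRotationX(tiltRad);
  return basePose.clone().multiply(pan).multiply(tilt).multiply(SENSOR_OFFSET);
}
```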
I fixed a mathematical error in how the Euler rotations were being processed in the mapping from the ROS reference frame to the three.js reference frame, which made a big improvement in how accurate the virtual camera angles are. There is still some small error, but I think I know the root cause and am working on a fix. [image: the camera frame I'm currently using]

Which interface views should the circle be visible in? Currently, it is visible in all of them. I also saw that the master branch now has the split-screen/two-mode interface, so maybe it should just always be visible in one of those two.

Everything is in #11-decrease-manipulation-burden; let me know how I can help when we are ready to merge into master. The biggest difference from master will probably be in operator_ui_regions.js, where three.js is set up (the render canvas needs to be created in the correct location in the DOM to line up with the camera feed). The code to update the camera position once everything is set up should work without too much modification.
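For the frame mapping mentioned above, here is a minimal sketch of one common ROS-to-three.js coordinate conversion (assuming ROS REP-103 axes: x forward, y left, z up; and three.js defaults: x right, y up, -z forward). This is illustrative, not necessarily the exact fix that was applied:

```js
import * as THREE from 'three';

// three.x = -ros.y (right = -left), three.y = ros.z (up), three.z = -ros.x (back).
function rosPositionToThree(p) {
  return new THREE.Vector3(-p.y, p.z, -p.x);
}

function rosQuaternionToThree(q) {
  // The basis change is a pure rotation, so the quaternion's vector part
  // is permuted the same way as a position and the scalar part is unchanged.
  return new THREE.Quaternion(-q.y, q.z, -q.x, q.w);
}
```

Converting the whole quaternion this way, rather than converting Euler angles axis by axis, avoids the order-of-rotations pitfalls that can creep into an Euler-based mapping.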
Oh nice! Yes, I think having the visualization only on the navigation view makes sense, since it will be useful for navigating close enough to a target before rotating. For merging, how about you create a pull request and see if it will merge automatically? If not, perhaps merge master onto your branch first to handle conflicts, then create the pull request? Just to keep master functioning.
@kaviMD I tested this feature on the real robot and it's really cool! It works quite well in the front part of the robot, but there seems to be an issue when looking towards the side, described a bit more below. I tested it with a bottle on the floor. I tried moving the robot to a point where I'd be able to grasp the bottle from as far away as possible. This is how it looked in the default view for navigation mode: the bottle is just within the reach surface. When I moved the camera around, the object seemed to nicely stay at that edge of reachability. However, when I rotated the robot base in place and looked towards the arm, the visualization seemed to go off. The object appeared to be quite a bit outside of reach, even though the distance to the robot center axis had not changed. And when I tried to reach towards the object, the arm was able to reach it.
Haha, actually this reminded me that the robot arm's raise axis is not at the robot's base rotation center; it is shifted backwards in the direction of extension. Perhaps that's the issue here? We still need to visualize what can be reached with the arm while we are in the navigation mode, even though it cannot be reached without moving the base (i.e. assuming the robot would rotate in place to reach in that direction, but would not be able to translate in that direction any more).
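If that offset is the cause, one hedged way to account for it: with the mast offset along the extension axis, rotating the base in place sweeps the gripper through an annulus around the rotation center rather than a disc. A small sketch, with the sign convention and parameter names as assumptions:

```js
// Effective reach radii about the base rotation center, assuming the arm
// mast is offset by `mastOffset` meters from the rotation center along the
// extension axis (sign convention and numbers are illustrative).
function reachAnnulus(mastOffset, minExtension, maxExtension) {
  const rAt = (e) => Math.abs(e + mastOffset);
  // Distance is monotonic in e unless the gripper can pass over the
  // rotation center, so check the endpoints and the zero crossing.
  const candidates = [rAt(minExtension), rAt(maxExtension)];
  let rMin = Math.min(...candidates);
  if (minExtension + mastOffset < 0 && maxExtension + mastOffset > 0) rMin = 0;
  return { rMin, rMax: Math.max(...candidates) };
}
```

Drawing the overlay at rMax (and optionally dimming the region inside rMin) would keep the visualization honest when looking towards the arm side.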
Add a visualization to give the user a sense of the robot arm's reach while driving. The initial idea is to overlay a transparent circle on the floor plane, on top of the drive mode video stream. Since we know the head camera viewpoint parameters relative to the ground plane (height of the camera and tilt angle), we can compute how the circle should be skewed for the visualization. We should also be able to figure out the circle diameter based on the robot arm's max extension (we probably want this to be where the fingers would end up). Here's a rough sketch.
This issue is motivated by the problem description in Issue #11.