3D Control Interfaces #16
Thanks, I was able to try it out! Hmm, I think I understand the problem. We might need a deeper refactoring to address it. In the new setup the end-effector and the target are now SE3 instead of SE2, i.e. they have 6 degrees of freedom (x, y, z, theta_x, theta_y, theta_z). Each view has a separate control that can change a subset of these: V1: x, y, theta_z; V2: x, z, theta_y; V3: y, z, theta_x. So it is not really three different EEs, and it's probably best not to try to patch it by treating it that way. Does that make sense? Starting some TODOs here:
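To make the view-to-DOF mapping concrete, here is a minimal sketch (VIEW_DOF and applyViewDelta are illustrative names, not from the repo):

```js
// Illustrative sketch (not repo code): one SE(3) end-effector target with
// six degrees of freedom, where each view's control edits only a subset.
const VIEW_DOF = {
  v1: ['x', 'y', 'thetaZ'], // view 1: translate x/y, rotate about z
  v2: ['x', 'z', 'thetaY'], // view 2: translate x/z, rotate about y
  v3: ['y', 'z', 'thetaX'], // view 3: translate y/z, rotate about x
};

// Apply a per-view input delta to the single shared target pose.
function applyViewDelta(target, view, delta) {
  for (const dof of VIEW_DOF[view]) {
    if (dof in delta) target[dof] += delta[dof];
  }
  return target;
}
```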
@kaviMD nice work getting to those TODOs! The mock-up looks great. I agree the view-change control could be more intuitive/natural, but this is totally fine as a placeholder.
It occurred to me that with the target EE visualization being a 'ghost' gripper, visualizing flex thresholds is going to get complicated. One idea is to change the task slightly, to something that's actually more realistic: instead of telling the user where to put the gripper, we place a 3D object to be grasped (and make it somewhat obvious roughly how it would be grasped), and the user's job is to place the gripper around that object so as to grasp it. If the object is small, that corresponds to a larger thresh_xy; if the object is round, that corresponds to a larger thresh_theta, etc. (a sketch of this mapping follows below). We wouldn't worry about systematically varying those values, just having some variations. We would also need a simple way to check whether the grasp would have been successful (since we can't really simulate it, but there are some heuristics we can use). What do you think?
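As a small hypothetical sketch of that shape-to-threshold mapping (GRIPPER_OPENING, crossSection, and isRound are made-up names and values, not from the repo):

```js
// Hypothetical sketch: derive effective grasp tolerances from object shape.
const GRIPPER_OPENING = 0.08; // meters between fully open fingers (assumed)

function effectiveThresholds(object) {
  // A smaller cross-section leaves more slack between the fingers,
  // so the effective position tolerance (thresh_xy) grows.
  const threshXY = (GRIPPER_OPENING - object.crossSection) / 2;
  // A round object can be grasped from any angle around its long axis,
  // so the effective orientation tolerance (thresh_theta) is large.
  const threshTheta = object.isRound ? Math.PI : 15 * Math.PI / 180;
  return { threshXY, threshTheta };
}
```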
TODO:
Pasting the narrative skeleton I came up with below. Some observations:
======
I just tried the different SE3 interfaces--great progress @kaviMD on implementing all the options! They all work quite well and the interface is very intuitive. Two things to address:

Re: Ghost EE--I think the easiest would be to not have a ghost of the gripper but instead have a ghost of the "red indicator". In turn, it would be great if we could make the indicator slightly different: (1) make it more elongated so its directionality is more apparent; something similar to the EE in SE2 (a ball at the center and a stick in the direction away from the wrist, or perhaps one or two additional shorter sticks in directions orthogonal to that) would be great; (2) place the "center" of the indicator at the midpoint between the fingers (instead of at the wrist)--this will require a slight adjustment to the IK calculations.

Re: Flex targets--let's keep it simple for now. The target can be a rectangular prism or a cylindrical object with one longer edge (the long edge can be longer than the opening between the fingers, while the other edges/2*radius are smaller, such that it is clear the object needs to be grasped along the long edge). An end-effector pose "satisfies" a target if closing the gripper would result in grasping the object; we can estimate this by checking that the line between the two fingers intersects the object and that the fingers don't collide with it (see the sketch below). Flex can be added by making the short edge/radius of the object smaller (i.e. more possible successful grasps).

There is still some flickering of IK solutions in tricky parts of the kinematic space; one thing we can do to avoid it is to have the initial pose of the robot be closer to the comfortable parts of the space and have the targets also be within that space. Typically with PR2 we raise the torso all the way up and have it manipulate objects on a table (with the torso all the way down the table ends up quite close to the robot's base, so if we have a way of raising the torso that would be great). Such tabletop manipulation tasks often involve the robot's elbow being to the side or even higher (rather than below the arm as it is now).

Minor: perhaps we can remove the 1-2-3 buttons at the top now? The click-to-activate works very well.
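A minimal sketch of that grasp check, assuming the test is done in the object's local frame with a box-shaped object (all names here are illustrative, not from the repo):

```js
// Illustrative grasp heuristic, in the object's local frame with a box of
// half-extents h: the fingertip-to-fingertip segment must cross the box,
// and neither fingertip may be inside it.
function insideBox(p, h) {
  return Math.abs(p.x) <= h.x && Math.abs(p.y) <= h.y && Math.abs(p.z) <= h.z;
}

// Slab test: does the segment a->b cross the box [-h, h] on every axis?
function segmentIntersectsBox(a, b, h) {
  let tMin = 0, tMax = 1; // segment parameter range still inside all slabs
  for (const axis of ['x', 'y', 'z']) {
    const d = b[axis] - a[axis];
    if (Math.abs(d) < 1e-9) {
      if (Math.abs(a[axis]) > h[axis]) return false; // parallel and outside
    } else {
      let t1 = (-h[axis] - a[axis]) / d;
      let t2 = (h[axis] - a[axis]) / d;
      if (t1 > t2) [t1, t2] = [t2, t1];
      tMin = Math.max(tMin, t1);
      tMax = Math.min(tMax, t2);
      if (tMin > tMax) return false; // slab ranges don't overlap
    }
  }
  return true;
}

function graspWouldSucceed(fingerA, fingerB, h) {
  return segmentIntersectsBox(fingerA, fingerB, h) &&
         !insideBox(fingerA, h) && !insideBox(fingerB, h);
}
```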
I have removed the 1-2-3 buttons at the top and adjusted PR2's torso. I am still adjusting the camera positions so that each view is as clear as possible.
Cool! Hmm, I think having the axis of rotation relative to the camera view is more intuitive. I assume that's what we're doing for the other interfaces? That's why we have three orthogonal views: each view provides rotation around one axis, and the combination gives full control of rotation in 3D. I'm not sure how relative-to-local-EE would work, and I couldn't access the video (seems to be set to private?).
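For reference, a sketch of pulling the rotation axis from a view's camera, assuming a three.js setup (not confirmed to be the repo's renderer):

```js
// Sketch assuming three.js: rotate the target about the axis the active
// view's camera is looking along, so each view spins one world direction.
import * as THREE from 'three';

function rotateAboutViewAxis(targetObject, camera, angle) {
  const axis = new THREE.Vector3();
  camera.getWorldDirection(axis); // unit vector along the camera's view axis
  const q = new THREE.Quaternion().setFromAxisAngle(axis, angle);
  targetObject.quaternion.premultiply(q); // apply as a world-frame rotation
}
```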
Sorry about the video, that should be fixed now (https://www.youtube.com/watch?v=5dtq6Mvkzcc). Right now, the rotation is actually relative to the EE across all of the interfaces. I can work on changing the rotation axis to be based on the camera.
I have implemented rotation based on the world axes. For testing, I added a toggle at the top of the page to easily switch between the world and local axes of rotation (the toggle does not affect translation). @mayacakmak I think that you're right that rotating around the world axes is easier to control than rotating around the local axes. Edit:
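To make the world-vs-local distinction concrete, a minimal sketch assuming three.js-style quaternions (the function name is illustrative): rotating about a world axis pre-multiplies the pose quaternion, while rotating about the EE's own axis post-multiplies it.

```js
// Illustrative sketch: the world/local toggle boils down to the order of
// quaternion multiplication (poseQuat is the EE pose, axisQuat the rotation).
function applyRotation(poseQuat, axisQuat, useWorldAxis) {
  return useWorldAxis
    ? axisQuat.clone().multiply(poseQuat)  // world frame: q' = r * q
    : poseQuat.clone().multiply(axisQuat); // local frame: q' = q * r
}
```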
Kavi's 3D interfaces demo is live here: Here are some ideas for improving the interface:
Regarding sampling of tasks: we discussed having two objects, one standing and one lying on a table, possibly in two different sizes. Here's a sketch (note that T3-T5 are the same object in the same location, just rotated 45 and 90 degrees, shown from different views). If we have two different sizes we would make a thinner bottle and a thinner rectangular prism ('remote'), but perhaps we forget about size for now and pick something medium-sized.
In addition, Tapo brought up last time (and I agree) that the IK solver might need some work--solutions seem to flicker a lot, and it fails to find solutions where I'd expect it to have one. I've been trying for a while to make the robot's gripper face down (i.e. as if it were picking objects up from the top, off of a table) and just couldn't do it. @kaviMD you know the last wrist joint (which rotates the gripper like a screwdriver) is a 360-degree joint that can keep rotating (it doesn't have a joint limit)--perhaps that's not represented in the IK solver? I'm trying to think of other possible reasons for non-ideal/missing IK solutions. @tapomayukh @csemecu any feedback or thoughts to add before we can start collecting data from this?
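One way a continuous joint could be mishandled, as a hedged sketch (wrapAngle and clampOrWrap are illustrative names, not the solver's API): if the solver clamps every joint to hard limits, a limit-free joint loses solutions, whereas wrapping the angle keeps them.

```js
// Illustrative sketch: a continuous joint (like the wrist roll) should have
// its angle wrapped to (-pi, pi] rather than clamped to hard limits.
function wrapAngle(a) {
  return Math.atan2(Math.sin(a), Math.cos(a)); // maps any angle into (-pi, pi]
}

function clampOrWrap(joint, angle) {
  if (joint.continuous) return wrapAngle(angle); // no limits: keep spinning
  return Math.min(joint.max, Math.max(joint.min, angle)); // hard limits
}
```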
Looks great! This is not as easy as I thought and it needs some getting used to :-) I am so used to rotating 3D views with the mouse that adapting to this is tricky and requires me to continuously map between multiple views. Note, the IK is finicky; I really wish we had some joint control, or at least some elbow control. Minor comments: could we improve the visualization by giving at least the arm and the gripper different colors? It sometimes gets hard to see things, especially the gripper and the target object. Can we also make the robot a different color from the background? Finally, can we have a cheat sheet mapping mouse actions to interface behaviors for each interface (on the side, perhaps)?
I think that I have completed most of the visual changes (I am still working on the updates to the IK and sampling). One note:
Nice work @kaviMD!!! I noticed some of the changes earlier today when I was taking some placeholder screenshots. Strange issue about the fingers, but opening them evenly as much as you can (i.e. w/o making it lopsided) should be fine for now.
I added code to turn the whole arm dark grey if a bad IK solve is detected (i.e. if the EE gets too far away from the target in orientation or position). When this happens, the IK is also "reset":

The tolerance for detecting a bad solve and initiating a reset can be set individually for bad rotation or bad position of the EE. I adjusted it to what seemed decent to me, but if it is over- or under-sensitive, that is really easy to change: https://github.com/mayacakmak/se2/blob/master/se3/scripts/control.js#L106

I also updated the objective function of the optimization to take the previous state of the robot into account more efficiently, which should speed things up as well as make solutions more accurate. There were joint limits on all of the joints before--thanks for pointing that out @mayacakmak! I removed the limits from the two joints that shouldn't have had them (the wrist and the elbow). The arm still jumps around a bit when trying to rotate around in a circle multiple times, but it is much better.

With the new system for detecting (and trying to correct) bad IK solves, I found it a bit easier to get the gripper to face down, but it is still not as easy as would probably be ideal. I tried experimenting with starting locations (such as with the elbow starting up), but most performed worse than the starting state that is being used right now: https://github.com/mayacakmak/se2/blob/master/se3/scripts/3d/3d.js#L6

I can try to update the objective function to also take into account the location of the elbow, but I worry a bit about how complicated that would become / whether there is enough time to implement it correctly. I think it would probably take me a few days to get it working properly.

I'm not sure that I quite understand what would be necessary for the cheat sheet. Is that something that would be covered by the video?
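For reference, a hedged reconstruction of what such a bad-solve check could look like (the real code lives in se3/scripts/control.js; the names and tolerance values below are placeholders, not the repo's):

```js
// Hedged reconstruction, not repo code: flag a bad solve when the achieved
// EE pose drifts too far from the target in position or orientation.
const POS_TOLERANCE = 0.05;               // meters (assumed value)
const ROT_TOLERANCE = 10 * Math.PI / 180; // radians (assumed value)

function checkIKSolve(eePose, targetPose, robot) {
  const posErr = Math.hypot(
    eePose.position.x - targetPose.position.x,
    eePose.position.y - targetPose.position.y,
    eePose.position.z - targetPose.position.z);
  const rotErr = quaternionAngle(eePose.quaternion, targetPose.quaternion);
  const bad = posErr > POS_TOLERANCE || rotErr > ROT_TOLERANCE;
  robot.setArmColor(bad ? 'darkgrey' : robot.defaultArmColor); // grey out arm
  if (bad) robot.resetIKState(); // re-seed the solver (hypothetical method)
  return !bad;
}

// Angle of the relative rotation between two unit quaternions {x, y, z, w}.
function quaternionAngle(q1, q2) {
  const dot = Math.abs(q1.x * q2.x + q1.y * q2.y + q1.z * q2.z + q1.w * q2.w);
  return 2 * Math.acos(Math.min(1, dot));
}
```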
@kaviMD Great work on using the previous state and removing the soft limits. For the cheat sheet, I just meant the mapping between mouse actions and interface behaviors. If you have a video before every interface test that explains it, then great, but in our previous Stretch study with people with disabilities we found that they still needed a cheat sheet because they tend to forget how to move the robot. Just something to think about. At this point, I agree it's better not to do heavy development but rather polish things so that everything is robust during the study. We don't have time if things break :-)
I updated test.html to work with SE3 and finished the updates to the sampling algorithm. There is a little bit of randomization in the size and position of the targets, but not so much that it changes the difficulty too much. Everything is here: mayacakmak.github.io/se2/se3/test.html With the updated gripper pose and limited sampling positions, 5 targets are really quick to get done (at least for me), so it might be worth adding more targets at the end.
I tried this last week when you first sent out an update, and tried it again today. It looks really good and it feels a bit cleaner now. I agree that this time around it feels faster to complete, though that might also be a learning effect.
I agree, this is great @kaviMD! The visual interface changes have really helped and the IK seems more stable. One question still: it seems the robot declares 'no IK' only when the position (x, y, z) cannot be reached; it is okay with the desired rotation (thetaX, thetaY, thetaZ) not being reached. Is that correct? I think that is somewhat unintuitive--it can feel like the orientation knobs are not working, even though the robot continues to try to reach the specified orientation as the position changes. Let me know what you think about this!
Also, I still think we could be operating in a "better" part of the robot's kinematic space; one thought is the following: Also sending along a video of the PR2 to give a better sense of the comfortable kinematic space:
The code should declare 'no IK' when the target orientation cannot be achieved; it looks like the threshold was set much too high. I updated the threshold, which, combined with the update to the IK starting position, makes rotation feel a bit better.

The video about the PR2 was super useful for picking a better starting position (and really cool to watch). Based on it I updated the starting position to this: The IK still takes over a bit and modifies the position of the elbow: But I found that the new starting position is a bit better at getting to more difficult angles: https://youtu.be/GEMvvdNmCEM

One thing that I find difficult now is to get the arm into a sideways position like this: One solution might be to make the rectangular targets thinner and more "remote"-shaped, as @mayacakmak suggested, so that they can be grasped from the top. I can also try to update some part of the IK to encourage the elbow not to stick up, which might make grasping objects from the side easier.
Excellent! Indeed, it makes sense that the side grasp would be harder in that part of the space, and I would suggest exactly what you already suggested--make the object graspable from the top. The top and front grasps come up a lot with tabletop objects, whereas this type of side grasp would be more relevant when someone hands an object over to the robot (which is okay for us to exclude as a task). You could just rotate the object 90 degrees around the blue axis; I would also suggest making the smallest dimension of the object slightly bigger (double?) so it makes sense as something standing on a table.
The starting joint angles (before any IK runs) are actually: The problem is that as soon as the IK starts, it will move the arm to be closer to the target and change the location of the elbow. By setting the location and orientation of the target to be exactly the same as the EE's, I can put the IK in a position where it doesn't need to run (until the user interacts with the interface).

(You might need to clear your browser cache for it to update. That may actually have been why the interface wasn't updated with the new IK state earlier. I have found that when a site is updated, Chrome will re-cache the .html file(s) but still use your older .js files, so it is possible for some changes to not apply immediately if you visit the site frequently.)
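For what it's worth, one common mitigation for the stale-.js issue (an assumption about setup, not necessarily what this repo does) is to load scripts under a versioned URL so the browser treats each update as a new file:

```js
// Hypothetical cache-busting sketch: BUILD_VERSION is a placeholder you
// would bump on each deploy; the query string makes the browser fetch a
// fresh copy of the script instead of reusing a cached one.
const BUILD_VERSION = '2020-10-25';
const script = document.createElement('script');
script.src = 'scripts/control.js?v=' + BUILD_VERSION;
document.head.appendChild(script);
```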
Ah yes, I've noticed the issue with caching. I agree it's still a lot smoother. I think this is a reasonable point to stop changes for the study; I'll make the screen-capture videos etc. with this version and try to finish the videos tomorrow so we can start collecting data!
@kaviMD A few issues and potential bugs I noticed while trying to make videos:
Let me know if we can address these. Thanks!!
A couple more:
Most of those should be fixed! Some notes:
@kaviMD There seems to be a bug with the IK visualization in the target interface: it appears there is no IK solution even though the arm seems to move correctly: A similar thing might be happening in targetdrag, but that one still found some IK solutions once other degrees of freedom started moving--still suspicious in comparison to the non-target interfaces, which seem to have a good range of IK options around the start pose that I'm trying to get to here: Let me know what you think. Also, if it is easy, I think this might be useful:
One more detail about the bug with the 'target' interface: changing the orientation slightly seems to fix the visualization issue... so basically, every time only the position is changed (through a click in free space), the IK/no-IK viz doesn't seem to update.
Different possible bug:
The problem with the gray color on the target interfaces is due to how the arm color is updated in the code. At the moment, the color of the arm is only updated when the user interacts with the interface (not when a new frame is drawn), so it is possible that when the first frame is set (after only one round of IK), the arm is in a "bad" state and grey; over the next few frames the solver converges on a more accurate position, but the color of the arm is never updated. This should be fixed now. (Updating only on interactions was originally done for the sake of speed, and it works well on interfaces with continuous input, but not so well on the target ones.)

I think the problem finding IK solutions has to do with the fact that on the target and targetdrag interfaces, the target makes a really large jump in a single frame. One of the things the optimization takes into account in the objective function is the difference in position between this frame and the last. This really helps create smooth movement on interfaces with continuous input, but it can cause problems on the target and ring-click interfaces, which have big changes in the EE pose in a single frame. Combined with the fact that, like the arm color code, the bad-solve correction code also used to run only when the user interacted with the interface, this meant that those interfaces had a harder time finding good solves.

The button to reset the arm IK pose should also be added and working with logging (an event is added to the list of events for that cycle, and the variable

The problem with the ghost IK target jumping around is a bit weird. Basically, the ghost IK target did not have the same starting position as the visible IK target (the ghost was roughly 5 units higher). This meant that when the ghost appeared it was in the wrong position (the top-left view only changes two of the target's axes, x and z, but the y axis was the one with the problem), so it appeared much higher than it should have and threw everything off. That should be fixed now.

Sorry for the delay in getting these fixes out; let me know if you find any other bugs!
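To illustrate the fix for the first two issues, a sketch of moving those checks into the per-frame render loop (solver, robot, target, renderer, scene, camera, and checkBadSolve are all placeholders, not the repo's names):

```js
// Illustrative sketch only: run the IK step, the bad-solve/arm-color check,
// and rendering on every animation frame instead of only on user input, so
// target-style interfaces that jump the goal in one frame keep getting
// re-checked as the solver converges over subsequent frames.
function renderLoop() {
  requestAnimationFrame(renderLoop); // schedule the next frame
  solver.step();                     // one IK iteration per frame
  checkBadSolve(robot, target);      // hypothetical per-frame check
  renderer.render(scene, camera);    // draw the current state
}
renderLoop();
```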
Awesome job hunting down the bugs @kaviMD! And thanks for explaining what they were ;) I'll remake a couple of clips; almost done with the videos.
@kaviMD The 'Reset Arm Pose' button was not working for me, here's what I did--in
@mayacakmak That makes sense, I originally misread what the 'Reset Arm Pose' button was supposed to do. I thought that it was just supposed to recalculate the IK without moving the IK target back to its original position. The one change that I made to the code was replacing
Aha, exactly the type of thing I was worried about with calling an 'initialize' function :) Thanks for fixing that!
@kaviMD Just to check: are the "view changes" getting logged? I.e., could we recreate how long each participant spent in each view? This thread is getting too long :-)
@mayacakmak Yes, the view changes are getting logged. As of right now, they are logged to the action list under
I see progress on the 3d-interface branch but haven't been able to test it. @kaviMD what is the status? What remains to be done? How could I help?