Continuous actions not smooth #30
Comments
This seemed related to the camera wrist following, since the continuous motion would be normal while not following and become jerky when following. I tried to resolve this by moving the camera wrist following update loop to the backend rather than constantly sending new lookat actions. This made the continuous actions go back to normal. However, the camera motion is still a bit jerky; we might want to address that later.
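A backend follow loop like the one described above might recompute camera pan/tilt from the latest wrist position at a fixed rate, instead of the front end streaming lookat actions. A minimal sketch of the geometry only; the function name and frame conventions are assumptions, not the project's API:

```python
import math

def lookat_angles(camera_pos, target_pos):
    """Compute pan/tilt angles (radians) so a camera at camera_pos faces
    target_pos. Assumes x-forward, y-left, z-up; adapt to the real frames.
    A backend loop could call this at a fixed rate (e.g. 10 Hz) with the
    current wrist position and command the head joints directly."""
    dx = target_pos[0] - camera_pos[0]
    dy = target_pos[1] - camera_pos[1]
    dz = target_pos[2] - camera_pos[2]
    pan = math.atan2(dy, dx)                   # rotation about z
    tilt = math.atan2(dz, math.hypot(dx, dy))  # elevation above the xy-plane
    return pan, tilt
```

Running this server-side keeps the follow behavior off the command channel entirely, which is consistent with the observation that continuous actions recovered once the lookat traffic stopped.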
@kaviMD FYI
(Reopening this issue rather than opening a new one, as they might be related/similar.) The continuous actions (click and hold to continually send actions) do not seem to work well for the wrist and the base; Henry mentioned this in the past and I am also experiencing it in tests. Further, the motions seem to be worse now that the visual torque feedback (Issue #20) is working (i.e., sending a bunch of sensor data from the backend to the front end), sometimes causing hiccups. This is not easily replicable, so I am not sure what to do for now, but noting it down.
I think the issue is that currently all the sensor data is being sent over WebRTC along with the video data. I'm not super familiar with data channels in WebRTC, but I've used web sockets in other projects to stream lots of high-bandwidth sensor and control data and have not run into any issues. We already have a web socket connection set up between the operator and robot; currently it is just used to facilitate the WebRTC connection between the two. Using WebRTC only for audio/video data, and web sockets for the high-bandwidth sensor data and robot commands, might help with the bandwidth issues. This probably wouldn't be too difficult to implement given that both protocols are already set up. I can do a bit more research into WebRTC vs. web sockets.
Cool, this is also something we can ask Charlie; I imagine they've experimented with this.
Hi everyone! Maya recommended I stop by and respond to issues here. I recommend sticking with WebRTC for sensor feedback and commands to the robot. Instead, you can reduce the rate of torque updates from the robot's browser. For example, you could change the following callback to only send torques on every Nth call.
I haven't experienced problems on my end, but I don't have continuous actions. Since I haven't experienced issues, I haven't bothered to reduce the torque sensor rate, which is higher than it needs to be. As I had noted in an old email, "there is a chance that the torque feedback update rate will be too high. Every time the robot's browser receives a new JointState message from ROS, it sends new torques to the operator's browser over the WebRTC data channel. Given the robot's underlying update rate, it should be low-bandwidth relative to video. If it becomes a problem, throttling it so it only sends every Nth torque update should work." I hope this helps. I'd be happy to make changes upstream, if you'd like. Best wishes,
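The every-Nth-call throttling suggested above can be sketched generically. Here `send` stands in for whatever pushes torques onto the WebRTC data channel; the class and parameter names are illustrative, not the project's code:

```python
class NthCallThrottle:
    """Forward only every Nth update to reduce the torque feedback rate.
    Wrap the existing JointState callback's send path with this."""

    def __init__(self, n, send):
        self.n = n          # forward 1 out of every n updates
        self.count = 0
        self.send = send    # e.g. pushes torques over the data channel

    def __call__(self, msg):
        self.count += 1
        if self.count % self.n == 0:
            self.send(msg)
```

For example, wrapping a callback with `NthCallThrottle(5, send_torques)` cuts the update rate to one fifth without touching the WebRTC setup.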
Thank you! Limiting the rate that data is being sent overall sounds like the best option. For continuous actions, I think @mayacakmak had previously also discussed whether the control loop for continuous actions should run on the front end or the backend.
I think we should be able to detect a disconnect. Limiting the transform data should also help with issue #14: the rosbridge transform handler has a nice feature that only sends data after a large enough change has taken place.
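The change-threshold idea can be sketched for a scalar signal (rosbridge compares translation and rotation deltas; the names here are illustrative assumptions):

```python
def make_change_filter(threshold, send):
    """Return an update function that forwards a value only when it
    differs from the last *sent* value by more than `threshold`.
    Mirrors the rosbridge/tf republisher approach of suppressing
    near-duplicate transform updates."""
    last = [None]  # last value actually sent

    def update(value):
        if last[0] is None or abs(value - last[0]) > threshold:
            last[0] = value
            send(value)

    return update
```

Comparing against the last value sent (rather than the last value seen) prevents a slow drift from being suppressed forever.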
One thought I had with respect to smoothness is that the frequency and detailed parameters for repeated commands matter. For example, I had to do a fair amount of tuning to achieve smooth motions with the initial version of the game controller teleoperation code. https://github.com/hello-robot/stretch_body/blob/master/tools/bin/stretch_xbox_controller_teleop.py

Each motor controller uses a trapezoidal speed motion generator. https://github.com/EmanuelFeru/MotionGenerator To achieve smooth motion, you likely want to send the next command before the previous command begins to reduce speed. This depends on the distance, speed, and acceleration parameters used by the previous command. For example, if the previous command is given a short distance target, it will quickly start to slow down due to being close to its target location, at which it should achieve zero speed. A longer distance target will result in a longer time before it starts slowing down, but will also go longer if no new command is received and risks overshooting the operator's true target.

To achieve smoother mobile base motions with repeated commands, one approach might be to increase the target angles and target distances for individual commands and then send a 0 target command when the mouse button is released, which should stop the robot's current motion.

Longer term, using the mobile base's velocity mode should result in smooth motions and provide the option of commanding curved trajectories in a manner comparable to telepresence robots. One drawback is the risk that a velocity command being executed by a motor control board will result in the robot continuing to drive when connectivity is lost or the onboard computer (Stretch's Intel NUC) crashes. Having some sort of timeout or heartbeat for velocity commands to the motor control boards might be a useful approach.
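The heartbeat/timeout safeguard suggested at the end could look like the following sketch. The stop callback and clock source are stand-ins, not Stretch APIs:

```python
import time

class VelocityWatchdog:
    """Stop the base if no fresh velocity command arrives within
    `timeout` seconds, guarding against lost connectivity or a
    crashed onboard computer while a velocity command is active."""

    def __init__(self, timeout, stop, clock=time.monotonic):
        self.timeout = timeout
        self.stop = stop      # e.g. commands zero velocity to the base
        self.clock = clock    # injectable clock for testing
        self.last = clock()

    def feed(self):
        """Call whenever a velocity command is received (the heartbeat)."""
        self.last = self.clock()

    def check(self):
        """Call periodically, e.g. from a timer; stops the base on staleness."""
        if self.clock() - self.last > self.timeout:
            self.stop()
```

In practice `check()` would run on a timer at a rate faster than the timeout, so a dropped connection halts the robot within roughly one timeout period.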
I see that nice APIs for velocity control have landed, so it may be worth looking at this again.
We need to sort this out to complete capabilities we're interested in:
What's common to all of these is the need to express durative actions: things that take enough time that we must account for duration, e.g. by allowing a start without knowing the end time ahead of time (perhaps because the action won't end until the user clicks again or releases the click), and by allowing cancellation. The current implementation expresses durative actions by repeatedly sending actions that are individually small enough (tiny incremental motions of the joints) that simply ceasing to send them is a good approximation of canceling. As this thread and Charlie's notes on how these repeated goals are handled by the motor controllers document, this is tricky to implement in a way that results in smooth motion. I suggest a few next steps:
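One way to make durative actions explicit, as an alternative to streams of tiny incremental goals, is a start/cancel interface. A sketch with hypothetical hooks; in the real system `on_start` might switch a joint to velocity mode and `on_cancel` might command zero velocity:

```python
class DurativeAction:
    """An action that starts without a known end time and can be
    canceled explicitly (e.g. when the user releases the mouse),
    rather than being approximated by a stream of small goals."""

    def __init__(self, on_start, on_cancel):
        self.on_start = on_start    # hypothetical: begin the motion
        self.on_cancel = on_cancel  # hypothetical: stop the motion
        self.active = False

    def start(self, params):
        """Begin the motion; ignored if already running."""
        if not self.active:
            self.active = True
            self.on_start(params)

    def cancel(self):
        """End the motion; ignored if not running."""
        if self.active:
            self.active = False
            self.on_cancel()
```

Pairing this with the velocity-mode APIs and a watchdog timeout would give cancellation a concrete meaning (command zero velocity) instead of relying on silence from the front end.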
Something broke the ability to click and hold for continuous actions (Issue #2); it could be related to camera wrist following.