Bep/cutlery grasp #1339

Draft · wants to merge 111 commits into master from BEP/cutlery_grasp

Commits (111)
7204cc7
add template for aruco visual servoing
PetervDooren Jul 21, 2023
3aff2eb
add examples of motion
PetervDooren Jul 21, 2023
ed142f0
add example and fix stuff
PetervDooren Aug 8, 2023
9a81529
update docstring
PetervDooren Aug 8, 2023
821a34c
Merge branch 'master' into feature/aruco_visual_servoing
PetervDooren Jan 6, 2024
db81e29
rename files
PetervDooren Jan 6, 2024
00b4075
rename comments and classes
PetervDooren Jan 6, 2024
5165229
fix bugs
PetervDooren Jan 6, 2024
c4ed7df
add force sensor example
PetervDooren Jan 6, 2024
0436f11
add yolo segmentor and fix force sensor edge up
PetervDooren Jan 6, 2024
251e98b
set cutlery ids
PetervDooren Jan 6, 2024
937f1e4
remove prints
PetervDooren Jan 6, 2024
c3f9df3
change image topic to hand cam
PetervDooren Jan 6, 2024
5e38e44
update cutlery grasp colors
PetervDooren Feb 20, 2024
9b3acac
add example joint control
PetervDooren Mar 6, 2024
c21cf22
add entity input
PetervDooren Mar 22, 2024
b26f2ff
get transform from palm to gripper
PetervDooren Mar 22, 2024
9a3bc84
Added arm joint trajectories, made a start
LucetteAlewijns Mar 26, 2024
c328c00
Merge branch 'BEP/cutlery_grasp' of https://github.com/tue-robotics/t…
LucetteAlewijns Mar 26, 2024
6bc0414
added closing of gripper and fixed some running errors
LucetteAlewijns Apr 4, 2024
dbedd26
small error fix
LucetteAlewijns Apr 4, 2024
89aa597
Split up pre-grasp movement and changed grasp joint coordinates
LucetteAlewijns Apr 9, 2024
c3520fe
trying to fix error of not going into while loop
LucetteAlewijns Apr 9, 2024
bffbc88
implemented option to close the gripper with more force
LucetteAlewijns Apr 16, 2024
e83f2eb
Made a start on creating a tf2 listener for the gripper coordinates
LucetteAlewijns Apr 17, 2024
98ce727
improvement on the listener
LucetteAlewijns Apr 17, 2024
dcc7f80
fixing errors in the listener
LucetteAlewijns Apr 21, 2024
03922e2
trying another error fix
LucetteAlewijns Apr 21, 2024
5770fca
error fix
LucetteAlewijns Apr 21, 2024
a56788b
added time initialization
LucetteAlewijns Apr 21, 2024
3547850
error fix of not initialized time
LucetteAlewijns Apr 21, 2024
e3554f5
error fix missing argument
LucetteAlewijns Apr 21, 2024
a525d7e
error fix
LucetteAlewijns Apr 21, 2024
64aa0c6
trying another error fix
LucetteAlewijns Apr 21, 2024
8efe5a1
small changes
LucetteAlewijns Apr 22, 2024
e9aeeab
get robot model
LucetteAlewijns Apr 23, 2024
8529aef
gripper coordinates listener should work now
LucetteAlewijns Apr 28, 2024
d519ea7
center point included as output from detection
LucetteAlewijns Apr 29, 2024
13727c2
gripper coordinate listener gives output for each time step
LucetteAlewijns Apr 30, 2024
a767206
fix gripper coordinates
LucetteAlewijns May 5, 2024
39df5f4
Working and tested detection with opencv, this was rewritten into the…
LucetteAlewijns May 6, 2024
87e35fb
cutlery_detector was finalized
LucetteAlewijns May 12, 2024
e961e8e
Obtaining the direction of the cutlery was implemented (upwards or dow…
LucetteAlewijns May 17, 2024
f21b49a
detection will only process the image if it is cutlery, top grasp cod…
LucetteAlewijns May 24, 2024
5175d29
sleep was changed to a while-loop
LucetteAlewijns May 27, 2024
bc365bb
start was made on rewriting frames. direction calculation was corrected
LucetteAlewijns May 27, 2024
131c0f3
camera frame rewritten into hand palm link frame
LucetteAlewijns May 28, 2024
ab9a9eb
segmentation is also published when something different than cutlery …
LucetteAlewijns Jun 4, 2024
87cf536
base movement corrected and object's orientation/slope rewritten into …
LucetteAlewijns Jun 6, 2024
861aa79
orientation gripper was corrected and wrist roll joint position now a…
LucetteAlewijns Jun 6, 2024
0caa028
grasp detection included, which finishes the first grasp
LucetteAlewijns Jun 6, 2024
d04f852
wrist rotation after grasp was implemented to be in line with the edge …
LucetteAlewijns Jun 7, 2024
96cb2e9
backwards movement until table edge was implemented
LucetteAlewijns Jun 10, 2024
fa69162
small changes + part of last grasp
LucetteAlewijns Jun 12, 2024
0c02f1f
small fixes
LucetteAlewijns Jun 13, 2024
3b39f97
small fixes that were determined by testing and implementation of the…
LucetteAlewijns Jun 16, 2024
5cbbb51
small correction
LucetteAlewijns Jun 17, 2024
e0c93e1
code for testing
LucetteAlewijns Jun 17, 2024
0b620c0
small fix
LucetteAlewijns Jun 17, 2024
31108bc
other test option
LucetteAlewijns Jun 17, 2024
949033d
testing
LucetteAlewijns Jun 17, 2024
46081a9
testing
LucetteAlewijns Jun 17, 2024
df1edda
testing
LucetteAlewijns Jun 17, 2024
7736fa0
move downwards until force detection finalized
LucetteAlewijns Jun 17, 2024
e4bd53d
testing
LucetteAlewijns Jun 17, 2024
3f1a4e9
testing
LucetteAlewijns Jun 17, 2024
f117003
small fix
LucetteAlewijns Jun 17, 2024
90dc45f
testing
LucetteAlewijns Jun 17, 2024
4c48c93
small fix: wait until gripper is closed for grasp detection
LucetteAlewijns Jun 17, 2024
88c7254
typing mistake
LucetteAlewijns Jun 17, 2024
fb14388
small fix: signs weren't correct
LucetteAlewijns Jun 17, 2024
e401930
testing
LucetteAlewijns Jun 17, 2024
7ef6cae
testing
LucetteAlewijns Jun 17, 2024
758c814
testing
LucetteAlewijns Jun 17, 2024
d5bcdf7
fix wrist roll joint value
LucetteAlewijns Jun 17, 2024
3b9ae57
sleep implemented
LucetteAlewijns Jun 17, 2024
8344839
fixes to trajectories
LucetteAlewijns Jun 17, 2024
e6ef2a9
Correction on side grasp and on direction calculation
LucetteAlewijns Jun 20, 2024
5d2fdce
Detection now separates cutlery data from other objects, such that al…
LucetteAlewijns Jun 23, 2024
fa9d97a
Redone focal length calculations
LucetteAlewijns Jun 23, 2024
1c7b507
testing
LucetteAlewijns Jun 24, 2024
7c17897
testing
LucetteAlewijns Jun 24, 2024
1434b4a
testing
LucetteAlewijns Jun 24, 2024
129250e
testing
LucetteAlewijns Jun 24, 2024
a276e9f
testing
LucetteAlewijns Jun 24, 2024
fbdda29
small fix on direction
LucetteAlewijns Jun 24, 2024
9624fc1
testing
LucetteAlewijns Jun 25, 2024
5b908f2
testing
LucetteAlewijns Jun 25, 2024
366f701
testing
LucetteAlewijns Jun 25, 2024
2511be4
testing
LucetteAlewijns Jun 25, 2024
1a3bd4c
small fix
LucetteAlewijns Jun 25, 2024
480db2f
testing
LucetteAlewijns Jun 25, 2024
e356b5b
testing
LucetteAlewijns Jun 25, 2024
f926dfd
testing
LucetteAlewijns Jun 25, 2024
92943af
testing
LucetteAlewijns Jun 25, 2024
5d0e984
testing
LucetteAlewijns Jun 25, 2024
6b0dd53
testing
LucetteAlewijns Jun 26, 2024
eb46d62
clearing up the focal length calculation
LucetteAlewijns Jun 26, 2024
c584145
testing
LucetteAlewijns Jun 26, 2024
f900aab
testing
LucetteAlewijns Jun 26, 2024
5f8e01a
testing
LucetteAlewijns Jun 26, 2024
c3b8418
testing
LucetteAlewijns Jun 26, 2024
618e4de
test
LucetteAlewijns Jun 26, 2024
77fb59c
change on rpy second grasp
LucetteAlewijns Jun 26, 2024
3d27cbc
correction angle
LucetteAlewijns Jun 26, 2024
16a436e
testing
LucetteAlewijns Jun 26, 2024
e84db9e
testing
LucetteAlewijns Jun 26, 2024
12a207d
testing
LucetteAlewijns Jun 26, 2024
166d1b4
small correction
LucetteAlewijns Jun 26, 2024
7fee73c
testing
LucetteAlewijns Jun 26, 2024
8f6a0e2
Code finalization
LucetteAlewijns Jun 30, 2024
46 changes: 46 additions & 0 deletions robot_smach_states/examples/manipulation/example_top_grasp.py
@@ -0,0 +1,46 @@
# System
import argparse

# ROS
import PyKDL as kdl
import rospy

# TU/e Robotics
from ed.entity import Entity
from ed.shape import RightPrism
from pykdl_ros import FrameStamped
from robot_skills.get_robot import get_robot

# Robot Smach States
import robot_smach_states.util.designators as ds
from robot_smach_states.manipulation.top_grasp import TopGrab


if __name__ == "__main__":

parser = argparse.ArgumentParser(description="Put an imaginary object in the world model and grasp it using the "
"'topgrasp' smach state")
parser.add_argument("x", type=float, help="x-coordinate (in map) of the imaginary object")
parser.add_argument("y", type=float, help="y-coordinate (in map) of the imaginary object")
parser.add_argument("z", type=float, help="z-coordinate (in map) of the imaginary object")
parser.add_argument("--robot", default="hero", help="Robot name (amigo, hero, sergio)")
args = parser.parse_args()

rospy.init_node("test_top_grasping")

robot = get_robot(args.robot)

entity_id = "test_item"
pose = FrameStamped(frame=kdl.Frame(kdl.Rotation.RPY(0.0, 0.0, 0.0), kdl.Vector(args.x, args.y, args.z)),
stamp=rospy.Time.now(),
frame_id="map")
robot.ed.update_entity(uuid=entity_id, frame_stamped=pose)
shape = RightPrism([kdl.Vector(0, 0, 0), kdl.Vector(0, 0.05, 0), kdl.Vector(0.05, 0.05, 0), kdl.Vector(0.05, 0, 0)], -0.1, 0.1)
item = Entity(entity_id, "test_type", pose.header.frame_id, pose.frame, shape, None, None, rospy.Time.now())

item = ds.Designator(item)

arm = ds.UnoccupiedArmDesignator(robot).lockable()

grab_state = TopGrab(robot, item, arm)
grab_state.execute()
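
A hedged usage note: assuming a sourced workspace and a running robot (or its simulator), the example above can be invoked like this; the coordinates are illustrative, not taken from this PR:

# illustrative invocation; x, y, z are map coordinates of the imaginary object
python example_top_grasp.py 2.0 1.5 0.8 --robot hero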
@@ -0,0 +1,245 @@
import rospy
import cv2
import math
import numpy as np
from ultralytics import YOLO
from cv_bridge import CvBridge
from sensor_msgs.msg import Image



class YoloSegmentor:
def __init__(self) -> None:
model_path = "~/MEGA/developers/Donal/yolov8x-seg.pt"
Review comment (Member):
Please use ~/data/pytorch_models as folder.
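
As a side note, Python file APIs do not expand "~" automatically, so whichever folder is used, expanding the path explicitly is safer. A minimal sketch, assuming the suggested folder and the same model filename:

# hypothetical path following the review suggestion; the filename is an assumption
import os
model_path = os.path.expanduser("~/data/pytorch_models/yolov8x-seg.pt")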

device = "cuda"
self.model = YOLO(model_path).to(device)
self.class_ids = [42, 43, 44] # See the COCO dataset for class id to label info (fork = 42, knife = 43, spoon = 44)
self.active = False

        self.intercept = 0
        self.length = 0

        self.publisher = rospy.Publisher('/hero/segmented_image', Image, queue_size=10)
        self.subscriber = rospy.Subscriber('/hero/hand_camera/image_raw', Image, self.callback)

        # Initialize the attributes that store the latest detection values
        self.x_center = None
        self.y_center = None
        self.slope = None
        self.upwards = None
        self.class_id = None
        self.filtered_masks = None
        self.result = None  # latest YOLO result, set in callback()
        self.width = None  # image width in pixels, set in extract_table_segment()
        self.time = rospy.Time.now()


def start(self):
self.active = True

def stop(self):
self.active = False

    @staticmethod
    def detect(model, frame):
        results = model(frame)
        result = results[0]  # single-image inference, so take the first (only) result

        segmentation_contours_idx = [np.array(seg, dtype=np.int32) for seg in result.masks.xy]
        class_ids = np.array(result.boxes.cls.cpu(), dtype="int")

        return class_ids, segmentation_contours_idx, result

    def extract_table_segment(self, image, class_ids, segmentation_contours_idx):
        table_segment = np.zeros_like(image, dtype=np.uint8)
        height, width, channels = image.shape  # obtain data on image size
        self.width = width  # kept for the near-vertical check in calculate_slope_intercept()

        for class_id, seg in zip(class_ids, segmentation_contours_idx):
            self.class_id = class_id
            if class_id in self.class_ids:
                cv2.fillPoly(table_segment, [seg], color=(255, 0, 255))  # fork, knife or spoon: pink mask
            else:
                cv2.fillPoly(table_segment, [seg], color=(255, 0, 0))  # any other COCO object: blue mask

        return table_segment

    def coordinates_center_point(self):
        # Separate the boxes and masks of cutlery from those of other objects
        cutlery_boxes = []
        filtered_masks = []
        for i, box in enumerate(self.result.boxes):
            if int(box.cls) in self.class_ids:
                cutlery_boxes.append(box)
                filtered_masks.append(self.result.masks.xy[i])
        self.filtered_masks = filtered_masks
        x_center = None
        y_center = None

        # Check if cutlery_boxes is empty
        if not cutlery_boxes:
            coordinates_box = None  # no bounding boxes found
        else:
            # Access the xywh attribute of the first box in cutlery_boxes
            coordinates_box_tensor = cutlery_boxes[0].xywh
            if coordinates_box_tensor.numel() == 0:  # make sure the tensor is not empty before accessing its elements
                coordinates_box = None
            else:
                coordinates_box = coordinates_box_tensor[0].tolist()
                # center coordinates of the bounding box = center coordinates of the object
                x_center, y_center = coordinates_box[0], coordinates_box[1]
                print(f"x_center, y_center = {x_center, y_center}")

        # Store the values
        self.x_center = x_center
        self.y_center = y_center

        return x_center, y_center

def visualize_center_point(self, x_center, y_center, table_segment):
cv2.circle(table_segment, (int(x_center), int(y_center)), 5, (0, 255, 255), -1) # Draw yellow dot at the center point
return table_segment

def calculate_slope_intercept(self):

inner_array = self.filtered_masks[0] # result.masks.xy is an array in an array, so has to be unpacked first
x = inner_array[:,0] # first column = x coordinates of all points of the segmentation mask
y = inner_array[:,1] # second column = y coordinates of all points of the segmentation mask

mean_x = np.mean(x)
mean_y = np.mean(y)
min_x = int(min(x))
max_x = int(max(x))
min_y = int(min(y))
max_y = int(max(y))

length = math.sqrt((max_x - min_x)**2+(max_y - min_y)**2)
self.length = length

        # Nearly vertical case: if the object spans less than 1/5 of the image width,
        # fit x as a function of y and invert the slope, so the least-squares fit stays well-conditioned
        if max_x - min_x < (1 / 5 * self.width):
            numerator = np.sum((y - mean_y) ** 2)
            denominator = np.sum((x - mean_x) * (y - mean_y))
            slope = numerator / denominator

#for all other orientations:
else:
numerator = np.sum((x - mean_x) * (y - mean_y))
denominator = np.sum((x - mean_x) ** 2)
slope = numerator / denominator
intercept = mean_y - slope * mean_x

print(f"Calculated slope: {slope}")
print(f"Calculated intercept: {intercept}")

# Store the slope value
self.slope = slope

return slope, intercept, min_x, max_x

def predict(self, slope, intercept, min_x, max_x):
y_minx = slope * min_x + intercept #y-coordinate corresponding to min_x
y_maxx = slope * max_x + intercept #y-coordinate corresponding to max_x
rospy.loginfo("predict")
return y_minx, y_maxx

def visualize_orientation(self, min_x, max_x, y_minx, y_maxx, table_segment):
cv2.line(table_segment, (min_x, int(y_minx)), (max_x, int(y_maxx)), (0, 255, 0), 2) # Draw the line created by the least squares method in green
return table_segment

    def object_direction(self, x_center, y_center, slope):
        if slope == 0:
            slope = 1e-6  # guard against division by zero for perfectly horizontal objects
        perpendicular_slope = -1 / slope
        perpendicular_intercept = y_center - perpendicular_slope * x_center

inner_array = self.filtered_masks[0]
x = inner_array[:,0]
y = inner_array[:,1]
coordinates_upper = 0
coordinates_lower = 0
if 0 <= perpendicular_slope < math.inf:
for i in range(len(x)):
if y[i]<= perpendicular_slope*x[i] + perpendicular_intercept:
coordinates_upper += 1
else:
coordinates_lower += 1
if -math.inf < perpendicular_slope < 0:
for i in range(len(x)):
if y[i]>= perpendicular_slope*x[i] + perpendicular_intercept:
coordinates_upper += 1
else:
coordinates_lower += 1
print("Size outline upper half of the mask:", coordinates_upper)
print("Size outline lower half of the mask:", coordinates_lower)

if coordinates_upper >= coordinates_lower:
upwards = True
elif coordinates_upper < coordinates_lower:
upwards = False
self.time = rospy.Time.now()
self.upwards = upwards

return upwards

def callback(self, data):
#rospy.loginfo("Received image data")

if not self.active:
#rospy.loginfo("Callback inactive")
return
bridge = CvBridge()
cv_image = bridge.imgmsg_to_cv2(data, desired_encoding='passthrough')
cv_image = cv2.cvtColor(cv_image, cv2.COLOR_RGB2BGR)
rospy.loginfo("Image converted")

        class_ids, segmentation_contours_idx, result = self.detect(self.model, cv_image)
        self.result = result  # store for coordinates_center_point() and the mask filtering
        rospy.loginfo("Objects detected")

table_segment = self.extract_table_segment(cv_image, class_ids, segmentation_contours_idx)

#makes sure that even when no cutlery is detected, the segmentation of objects is visualized
table_message = bridge.cv2_to_imgmsg(table_segment, encoding="passthrough")
self.publisher.publish(table_message)

if any(class_id in self.class_ids for class_id in class_ids):
rospy.loginfo("Cutlery detected")

x_center, y_center = self.coordinates_center_point()


# Visualize center point
table_segment_with_center = self.visualize_center_point(x_center, y_center, table_segment)
rospy.loginfo("Center point visualized")

# Calculate slope and intercept
slope, intercept, min_x, max_x = self.calculate_slope_intercept()
rospy.loginfo("Slope and intercept calculated")

y_minx, y_maxx = self.predict(slope, intercept, min_x, max_x)

# Visualize orientation
table_segment_with_orientation = self.visualize_orientation(min_x, max_x, y_minx, y_maxx, table_segment_with_center)
rospy.loginfo("Orientation visualized")

# Publish the table segment as a binary mask
table_message = bridge.cv2_to_imgmsg(table_segment_with_orientation, encoding="passthrough")
self.publisher.publish(table_message)
rospy.loginfo("Segmented image with orientation published")

upwards = self.object_direction(x_center, y_center, slope)
rospy.loginfo("Direction determined")

else:
rospy.loginfo("No cutlery detected")

    # Accessor used by the top_grasp code to read the latest detection values
    def data_object(self):
        return self.x_center, self.y_center, self.length, self.slope, self.upwards, self.time


if __name__ == '__main__':
rospy.init_node("cutlery_detector")
ts = YoloSegmentor()
ts.start()

rospy.spin()
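
For context, a minimal sketch of how a consumer such as the top-grasp state could poll the detector through data_object(); the polling rate and staleness threshold are assumptions, not part of this PR:

# hypothetical consumer sketch
rospy.init_node("cutlery_detector_consumer")
segmentor = YoloSegmentor()
segmentor.start()
rate = rospy.Rate(10)
while not rospy.is_shutdown():
    x_center, y_center, length, slope, upwards, stamp = segmentor.data_object()
    # act only on a sufficiently recent detection (the 1 s threshold is an assumption)
    if x_center is not None and (rospy.Time.now() - stamp).to_sec() < 1.0:
        rospy.loginfo("cutlery at (%.0f, %.0f) px, slope %.2f, upwards=%s",
                      x_center, y_center, slope, upwards)
    rate.sleep()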
@@ -0,0 +1,39 @@
import rospy
import tf2_ros
import geometry_msgs.msg

if __name__ == '__main__':
rospy.init_node('tf2_gripper_listener')

tfBuffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(tfBuffer)
PetervDooren marked this conversation as resolved.

gripper_coordinates = rospy.Publisher('/gripper_coordinates', geometry_msgs.msg.PoseStamped, queue_size=1)

rate = rospy.Rate(10.0)

while not rospy.is_shutdown():
        try:
            # look up the transform from the robot's base frame to the gripper (hand palm) frame
            gripper_bl = tfBuffer.lookup_transform("base_link", "hand_palm_link", rospy.Time())

# PoseStamped coordinates
# obtains coordinates and orientation of the gripper transformed into the coordinate frame of the robot's base
msg = geometry_msgs.msg.PoseStamped()
msg.header = gripper_bl.header
msg.pose.position.x = gripper_bl.transform.translation.x
msg.pose.position.y = gripper_bl.transform.translation.y
msg.pose.position.z = gripper_bl.transform.translation.z
msg.pose.orientation = gripper_bl.transform.rotation

# publishing coordinates
            gripper_coordinates.publish(msg)

        except (tf2_ros.LookupException, tf2_ros.ConnectivityException, tf2_ros.ExtrapolationException):
            rate.sleep()
            continue

rate.sleep()
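
A quick way to verify the node, assuming a robot or simulator is publishing the base_link -> hand_palm_link transform:

# prints the published gripper pose at ~10 Hz
rostopic echo /gripper_coordinates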