diff --git a/documents/Jigsaw_task/Readme.md b/documents/Jigsaw_task/Readme.md new file mode 100644 index 0000000..6bb6db3 --- /dev/null +++ b/documents/Jigsaw_task/Readme.md @@ -0,0 +1,74 @@ +# Configuration +The robot work cell is shown in Figure 1. + +
+Figure 1. The robot work cell
+ +- Arm: a suction cup is mounted on the tool flange; the tip of the suction cup is 0.15 m above the table and 0.3 m from the center of the workspace, pointing vertically downward. +- The camera is mounted on the base, 1 m above the table. Its accurate position is obtained by calibration. +- The rectangular workspace is in front of the robot; it is 300 mm wide and 400 mm long. +- The jigsaw pieces are placed in the workspace. + +In this example, the robot is a Franka Emika Panda, the camera is an Intel RealSense D435i, and the end-effector is a suction cup. +The configurations of the three tasks that follow are similar; they differ only in where and how the jigsaw pieces are placed. + +# Procedure +With the same jigsaw puzzle, we propose a standard workflow for manipulation task implementation. + +The functional metrics are general, while the full-task metrics are designed according to the actual task. + +Figure 2. The workflow
+ +We designed 3 tasks: a pick-and-place task, a 4-piece tiling task, and a 5-piece assembly task; the details are shown below: + +Figure 3. The task sets
+ +The tasks are also implemented on different hardware platforms. + +Figure 4. The hardware setups
+ +# Result +In each experiment, we record the results of each function and of the full task. Each task is repeated 10 times, and the metrics of each function and of the full task are shown below. + + +Table 1. An example of results
+ +- **IoU**: Intersection over Union, the overlap ratio between the predicted bounding box and the ground-truth bounding box. To calculate this metric, we print a shape template for each jigsaw piece and place the piece on its corresponding template. The templates give us the ground truth, from which we calculate the IoU. + + _Recall_: TP/(TP+FN) = true positives / (all real positives) + _Precision_: TP/(TP+FP) = true positives / (all predicted positives) + where TP is true positive, TN is true negative, FP is false positive, and FN is false negative. + + +- **AP**: true positives / (all predicted positives). For this task, we detect all the objects in the **ws space** and judge which detections are correct. For example, if we predict 4 objects and 3 are correct, the precision equals 3/4. + +- **success rate**: this metric evaluates the physical performance of picking; it equals successful picks / total picks. + +- **time**: the time consumed by each stage and by the full task. This metric represents the cost of the task. +- **area rate**: standard area / real area + + +Figure 5. Area rate
+ + + + +# Demo Videos +The video is here [video](https://github.com/bionicdl-sustech/DeepClawBenchmark/tree/master/documents/Jigsaw_task/Video_Jigsaw.mp4) + + + + + +# Contact +If you have any questions or find any bugs, please let me know: 11930807@mail.sustech.edu diff --git a/documents/Jigsaw_task/Video_Jigsaw.mp4 b/documents/Jigsaw_task/Video_Jigsaw.mp4 new file mode 100644 index 0000000..2e4055d Binary files /dev/null and b/documents/Jigsaw_task/Video_Jigsaw.mp4 differ diff --git a/documents/Jigsaw_task/fig-Panda.png b/documents/Jigsaw_task/fig-Panda.png new file mode 100644 index 0000000..1475d6c Binary files /dev/null and b/documents/Jigsaw_task/fig-Panda.png differ diff --git a/documents/Jigsaw_task/fig-TaskWorkflow.png b/documents/Jigsaw_task/fig-TaskWorkflow.png new file mode 100644 index 0000000..24b48be Binary files /dev/null and b/documents/Jigsaw_task/fig-TaskWorkflow.png differ diff --git a/documents/Jigsaw_task/fig-overview.png b/documents/Jigsaw_task/fig-overview.png new file mode 100644 index 0000000..71bd295 Binary files /dev/null and b/documents/Jigsaw_task/fig-overview.png differ diff --git a/documents/Jigsaw_task/fig-task&jigsaw.png b/documents/Jigsaw_task/fig-task&jigsaw.png new file mode 100644 index 0000000..f7eb52a Binary files /dev/null and b/documents/Jigsaw_task/fig-task&jigsaw.png differ diff --git a/documents/Jigsaw_task/fig-workflow.png b/documents/Jigsaw_task/fig-workflow.png new file mode 100644 index 0000000..2cbb8ac Binary files /dev/null and b/documents/Jigsaw_task/fig-workflow.png differ diff --git a/documents/Jigsaw_task/task_description.md b/documents/Jigsaw_task/task_description.md deleted file mode 100644 index 9570369..0000000 --- a/documents/Jigsaw_task/task_description.md +++ /dev/null @@ -1,68 +0,0 @@ -# Configuration -The robot work cell is showed in figure.1(整体安装示意图,安装反向,安装距离高度等)(一张俯视图,一张正视图,说明各个部分相互之间的位置关系) -- The initial pose of the arm is **(1,1,1,1,1,1)**, angles of each joints. 
With this pose, the arm will not occlude the camera. -- The end-effector is mounted on the with a **z offset** in the tool coordinate. -- The camera is mounted on the base and is **(xx,yy,zz)** in the robot base coordinate. The accurate position is got by calibration. -- The rectangle workspace is front of the robot, and the center is **(0,y,z)**, the width is 300mm, the lenth is 400mm. The left is place space and the right is pick space. -- the objects are placed in the workspace, and the models(stl and png) are showed in **XXX** folder. - -In this example, the robot is UR5, the camera is realsense D435 and the end-effector is a suction cup. -The configration of three tasks followed are similar, and the different is where and how to place the jigsaw pieces. - -# Procedure -With the same jigsaw puzzle, 3 tasks are implemented. -## pick and place task -(增加初始状态示意图,一张初始,一张放置)4 pieces is placed on the **XXX space**, (将pick区域分成四块,四片分别放置在四个区域,这样使得整个任务运行的轨迹距离基本一致)(以放置区域中心为基点,4 block模板放在正中) -## 4-piece tiling task -task descrption (以放置区域中心为基点,完成拼图时,拼图中心与基点重合) -## 5-piece assembly task -task descrption (以放置区域中心为基点,拼图基板与中心重合) - -# Result -In each experiment, we record the results of the functions and task. The metrics of each function and full task are showed below. - - -Figure 3. Metrics
- -- **IoU**: Intersection over Union, an overlap ratio between the predicted bounding box and ground truth bounding box. To calculate this metric, we print jigsaw shape templates of each piece and place the jigsaw piece on the corresponding jigsaw shape template. We get the ground truth using templates,and calculate the IoU. - - -Figure 1. IoU
- - _Recall_: TP/(TP+FN) = True position /(All real positive) - _Precision_: TP/(TP+FP) = True position /(All predicted positive) - where TP is Ture positive, TN is True negative, FP is False positive,FN is False negative - - - -- **precision**: True position /(All predicted positive). For this task, we predict all the object in the **ws space**, and judge which is right. For example, we predict 5 objects, and 4 is correct, so precision equals 4/5(示意图) - -- **success rate**: this metric evaluates the physical performance of the picking, equals success picking/total picking. - -- **time**: the time consumption of each period and the full task. This metric represent the cost of the task. -- **area rate**: standard area/real area - - -Figure 4. area rate
- - -For each task, we repeat 12 times and record the results. And finilally calculate the result. - -trial|IoU|seg time|precisiom|recog time|success rate|pick plan time|area rate|time(s) -:-----:|---|--------|--|----------|------------|--------------|------------|------- -1|0.8|12.3|4/5|4.0|2/3|8.0|0.9|40.3 -2|---|--------|--|----------|------------|--------------|------------|------- -...|---|--------|--|----------|------------|--------------|------------|------- -12|---|--------|--|----------|------------|--------------|------------|------- -reault|IoU = sum(IoUi)/10|sum(timei)/10|sum(APi)/10|sum(timei)/10|sum(success ratei)/10|sum(timei)/10|sum(area ratei)/10|sum(timei)/10 - -# Demo Videos -Demo videos of a real robot in action can be found here. -(此处放置三个任务的视频) - -# Contact -If you have any questions or find any bugs, please let me know: 11930807@mail.sustech.edu