Commit

add v1.0

as-wanfang committed May 8, 2020
1 parent b07a023 commit ee550ba
Showing 85 changed files with 4,164 additions and 1 deletion.
80 changes: 79 additions & 1 deletion README.md
# DeepClawBenchmark

paper | poster | video

Establishing a reproducible and shareable benchmark for dexterous manipulation has been a significant challenge due to the diversity of robot systems, the complexity of manipulation tasks, and the wide selection of metrics. To reduce the entry barrier, we propose **DeepClaw** - a standardized dexterous manipulation protocol, which comprises four common operations to streamline the manipulation process: *localization*, *identification*, *multiple points motion planning*, and *execution*.
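The sketch below illustrates how a task chains these four operations; the function bodies are placeholder stubs, not the actual DeepClaw API.

```python
# Illustrative stubs for the four DeepClaw operations; the real modules
# are task-specific and configured separately.

def localization(frame):
    """Detect regions of interest in the camera frame."""
    return [{"bbox": (100, 120, 60, 60)}]

def identification(regions):
    """Assign a label to each detected region."""
    return ["jigsaw_piece" for _ in regions]

def multiple_points_motion_planning(regions, labels):
    """Plan a sequence of end-effector waypoints for pick and place."""
    return [(0.03, -0.54, 0.10), (0.03, -0.54, 0.40)]

def execution(waypoints):
    """Drive the arm through the planned waypoints."""
    for waypoint in waypoints:
        print("move to %s" % (waypoint,))

def run_task(frame):
    regions = localization(frame)
    labels = identification(regions)
    waypoints = multiple_points_motion_planning(regions, labels)
    execution(waypoints)
```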

A robot can learn skills that are applicable to similar tasks, which together form a *task family* [1]. We have implemented several manipulation tasks in three task families, representing assembly tasks, reasoning tasks, and bin-picking tasks respectively. You can find them <a href="#tasks">here</a>.

In addition, we propose metrics that measure manipulation along many dimensions, such as xxx.

![](https://github.com/bionicdl-sustech/DeepClawBenchmark/blob/master/Documents/Figs/deepclaw-framework.png)

## Quick Start

### Prerequisites

The DeepClaw framework has only been tested with *Python 2.7* on *Ubuntu 16.04 LTS*. We recommend using a virtual environment (such as virtualenv) to manage DeepClaw.

Install virtualenv.

```shell
$ sudo pip install -U virtualenv
```

Create a new virtual environment.

```shell
$ virtualenv --system-site-packages -p python2.7 ./venv
```

Activate or deactivate the virtual environment.

```shell
$ source ./venv/bin/activate # activate the virtual environment
$ deactivate # deactivate the virtual environment
```

### Installation

Clone or download DeepClaw from GitHub.

```shell
$ git clone https://github.com/bionicdl-sustech/DeepClawBenchmark.git
$ cd ./DeepClawBenchmark
```

Run the DeepClaw installation helper script:

```shell
$ sudo sh install.sh realsense ur
```

The two positional arguments select which hardware to install support for (see the examples below).

The first argument specifies the camera:

- **realsense**: RealSense D435 support.

The second argument specifies the robot arm:

- **ur**: Universal Robots arm series support (UR5 and UR10e).
- **franka**: Franka arm support (coming later).
- **aubo**: AUBO arm support (coming later).
- **denso**: DENSO Cobotta arm support (coming later).
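For example, using the argument names listed above (combinations other than `realsense ur` are invoked the same way once supported):

```shell
$ sudo sh install.sh realsense ur      # RealSense D435 + Universal Robots arms
$ sudo sh install.sh realsense franka  # RealSense D435 + Franka arm
```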

There are test cases for verifying your installation and calibration:

[Test cases](https://github.com/bionicdl-sustech/DeepClawBenchmark/blob/master/Documents/TestCases.md)

## <a name="tasks">Tasks</a>
We have implemented some task families with DeepClaw:
- Task Family 1: [Jigsaw](https://github.com/bionicdl-sustech/DeepClawBenchmark/blob/master/Documents/Jigsaw_task/task_description.md)
- Task Family 2: Tic-Tac-Toe Game
- Task Family 3: Toy-Claw

Find the task description template [here](https://github.com/bionicdl-sustech/DeepClawBenchmark/blob/master/Documents/Task-Description-Template.md).

## References
[1] O. Kroemer, S. Niekum, and G. Konidaris, “A review of robot learning for manipulation: Challenges, representations, and algorithms,” arXiv preprint arXiv:1907.03146, 2019.
8 changes: 8 additions & 0 deletions config/README.md
# Configurations

Configuration files provide the arguments required by specific algorithm modules or robot controllers. For example, to calibrate a robot system with an RGB-D camera, arguments such as the starting point and the stride length along each axis must be set.

We provide configuration templates for the calibration module and the robot arm controllers here. You can create your own configurations for your own modules or controllers; a sketch of loading one of these files follows the list below.

- Calibration Configuration
- UR5 Configuration
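As an illustration, a configuration such as `config/arms/ur5.yaml` could be loaded with PyYAML (a sketch; DeepClaw's own loading code may differ):

```python
import yaml

# Load the UR5 arm configuration.
with open("config/arms/ur5.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["SOCKET_CONFIGURATION"]["robot_ip"])  # "192.168.1.27"
print(cfg["HOME_POSE"])  # [[x, y, z], [rx, ry, rz]]
```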
Empty file added config/__init__.py
23 changes: 23 additions & 0 deletions config/arms/franka.yaml
# initial params
HOME_JOINTS:
  -
    - ''
    - ''
    - ''
  -
    - ''
    - ''
    - ''

HOME_POSE:
  -
    - 0.352
    - 0.325
    - 0.25
  -
    - 3.14
    - 0
    - 0

CALIBRATION_DIR: "/data/calibration_data/franka.npz"

27 changes: 27 additions & 0 deletions config/arms/ur10e.yaml
# initial params
SOCKET_CONFIGURATION:
  robot_ip: "192.168.31.10"
  port_number: 30003

HOME_JOINTS:
  -
    - -72.94
    - -80.84
    - -130.1
  -
    - -51.87
    - 111.72
    - -161.44

HOME_POSE:
  -
    - 0.03
    - -0.54
    - 0.4
  -
    - 3.14
    - -0.4
    - 0

PICK_Z: 0.
PLACE_Z: 0.08

28 changes: 28 additions & 0 deletions config/arms/ur5.yaml
# initial params
SOCKET_CONFIGURATION:
  robot_ip: "192.168.1.27"
  port_number: 30003

HOME_JOINTS:
  -
    - -72.94
    - -80.84
    - -130.1
  -
    - -51.87
    - 111.72
    - -161.44

HOME_POSE:
  -
    - 0.03
    - -0.54
    - 0.4
  -
    - 3.14
    - -0.4
    - 0

PICK_Z: 0.
PLACE_Z: 0.08
CALIBRATION_DIR: "/data/calibration_data/ur5.npz"
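For reference, `port_number` 30003 is the UR controller's real-time client interface, which accepts URScript strings over a plain TCP socket. A minimal sketch (not the DeepClaw driver itself; `movej` expects radians, while `HOME_JOINTS` above is in degrees):

```python
import math
import socket

ROBOT_IP = "192.168.1.27"  # robot_ip from ur5.yaml
PORT = 30003               # URScript real-time interface

# Flatten HOME_JOINTS and convert degrees to radians for movej.
home_deg = [-72.94, -80.84, -130.1, -51.87, 111.72, -161.44]
home_rad = [math.radians(a) for a in home_deg]

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((ROBOT_IP, PORT))
s.send("movej(%s, a=1.0, v=0.5)\n" % home_rad)  # Python 2.7-style send
s.close()
```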

14 changes: 14 additions & 0 deletions config/modules/calibration.yaml
initial_position:
  -
    - 0.4
    - -0.5
    - 0.35
  -
    - 3.14
    - 0
    - 0
# the number of sampled points is cube_size*cube_size*cube_size
cube_size: 4
x_stride: 0.05
y_stride: 0.075
z_stride: 0.01
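To make the sampling pattern concrete, the following sketch enumerates the cube_size*cube_size*cube_size sampled positions implied by these parameters (our reading of the config, not the repository's calibration code):

```python
# Enumerate the cube_size**3 sampled positions from calibration.yaml.
initial = (0.4, -0.5, 0.35)    # initial_position (x, y, z)
strides = (0.05, 0.075, 0.01)  # x_stride, y_stride, z_stride
cube_size = 4

points = [(initial[0] + i * strides[0],
           initial[1] + j * strides[1],
           initial[2] + k * strides[2])
          for i in range(cube_size)
          for j in range(cube_size)
          for k in range(cube_size)]
print(len(points))  # 64 sampled points
```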
8 changes: 8 additions & 0 deletions config/sensors/realsense.yaml
# initial params
DEVICE_CONFIGURATION:
  serial_id: ""

FRAME_ARGS:
  width: 1280
  height: 720
  fps: 30
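These frame arguments map directly onto a pyrealsense2 stream configuration (a sketch assuming the pyrealsense2 package; DeepClaw's sensor driver may wrap this differently):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# width, height, and fps from realsense.yaml
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
pipeline.start(config)
frames = pipeline.wait_for_frames()  # one set of color + depth frames
pipeline.stop()
```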
Empty file added data/__init__.py
Binary file added documents/Figs/deepclaw-framework.png
Binary file added documents/Jigsaw_task/AP.png
Binary file added documents/Jigsaw_task/IoU.png
Binary file added documents/Jigsaw_task/IoU_calculate.png
Binary file added documents/Jigsaw_task/all in one.pdf
Binary file added documents/Jigsaw_task/area rate.png
Binary file added documents/Jigsaw_task/metrics.png
68 changes: 68 additions & 0 deletions documents/Jigsaw_task/task_description.md
# Configuration
The robot work cell is shown in Figure 1 (an overall installation diagram: mounting orientation, distances, and heights; one top view and one front view showing the relative positions of the components).
- The initial pose of the arm is **(1,1,1,1,1,1)**, the angle of each joint. In this pose, the arm does not occlude the camera.
- The end-effector is mounted on the arm with a **z offset** in the tool coordinate frame.
- The camera is mounted on the base at **(xx,yy,zz)** in the robot base coordinate frame. The accurate position is obtained by calibration.
- The rectangular workspace is in front of the robot; its center is at **(0,y,z)**, its width is 300 mm, and its length is 400 mm. The left half is the place space and the right half is the pick space.
- The objects are placed in the workspace, and their models (STL and PNG) are provided in the **XXX** folder.

In this example, the robot is a UR5, the camera is a RealSense D435, and the end-effector is a suction cup.
The configurations of the three tasks below are similar; the difference lies in where and how the jigsaw pieces are placed.

# Procedure
With the same jigsaw puzzle, three tasks are implemented.
## Pick-and-place task
(Add diagrams of the initial state: one of the initial layout, one of the placed result.) 4 pieces are placed in the **XXX space**. (The pick area is divided into four blocks with one piece placed in each, so that the trajectory length is roughly the same across runs. Using the center of the place area as the reference point, the 4-block template is placed exactly at the center.)
## 4-piece tiling task
Task description. (Using the center of the place area as the reference point, the center of the completed puzzle coincides with the reference point.)
## 5-piece assembly task
Task description. (Using the center of the place area as the reference point, the puzzle base board coincides with the center.)

# Result
In each experiment, we record the results of each function and of the full task. The metrics are shown below.

<p align="center"><img src="./metrics.png" width="40%"/></p>
<p align="center">Figure 3. Metrics</p>

- **IoU**: Intersection over Union, the overlap ratio between the predicted bounding box and the ground-truth bounding box. To compute this metric, we print a shape template for each jigsaw piece and place the piece on its corresponding template; the templates provide the ground truth from which the IoU is calculated.

<p align="center"><img src="./IoU_calculate.png" width="60%" height="60%"/></p>
<p align="center">Figure 1. IoU</p>

  _Recall_: TP / (TP + FN) = true positives / (all real positives)
  _Precision_: TP / (TP + FP) = true positives / (all predicted positives)
  where TP is true positive, TN is true negative, FP is false positive, and FN is false negative.
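A minimal sketch of these computations on axis-aligned boxes (illustrative helpers, not part of the benchmark code):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return float(inter) / (area_a + area_b - inter)

def recall(tp, fn):
    return float(tp) / (tp + fn)

def precision(tp, fp):
    return float(tp) / (tp + fp)
```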

<!--- **AP**: The AP summarises the shape of the precision/recall curve, and is defined as the mean precision at a set of eleven equally spaced recall levels [0, 0.1, ..., 1], where r is the recall:
<p align="center"><img src="./AP.png" width="40%"/></p>
<p align="center">Figure 2. AP</p>
-->

- **precision**: true positives / (all predicted positives). For this task, we detect all the objects in the **ws space** and judge which predictions are correct. For example, if we predict 5 objects and 4 are correct, the precision is 4/5. (Add an illustrative figure.)

- **success rate**: this metric evaluates the physical performance of picking; it equals the number of successful picks divided by the total number of picks.

- **time**: the time consumed by each stage and by the full task. This metric represents the cost of the task.
- **area rate**: standard area / real area.

<p align="center"><img src="./area rate.png" width="50%"/></p>
<p align="center">Figure 4. Area rate</p>


For each task, we repeat the experiment 12 times and record the results of every trial, then compute the final result from the 12 trials.

trial|IoU|seg time|precision|recog time|success rate|pick plan time|area rate|time(s)
:-----:|---|--------|--|----------|------------|--------------|------------|-------
1|0.8|12.3|4/5|4.0|2/3|8.0|0.9|40.3
2|---|--------|--|----------|------------|--------------|------------|-------
...|---|--------|--|----------|------------|--------------|------------|-------
12|---|--------|--|----------|------------|--------------|------------|-------
result|IoU = sum(IoU<sub>i</sub>)/12|sum(time<sub>i</sub>)/12|sum(precision<sub>i</sub>)/12|sum(time<sub>i</sub>)/12|sum(success rate<sub>i</sub>)/12|sum(time<sub>i</sub>)/12|sum(area rate<sub>i</sub>)/12|sum(time<sub>i</sub>)/12
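The result row can be computed mechanically from the per-trial records, for example (hypothetical trial data):

```python
# Average each metric over the 12 recorded trials.
trials = [
    {"IoU": 0.8, "precision": 4.0 / 5, "success_rate": 2.0 / 3, "time": 40.3},
    # ... one dict per trial, 12 in total
]
result = {k: sum(t[k] for t in trials) / len(trials) for k in trials[0]}
```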

# Demo Videos
Demo videos of a real robot in action can be found here.
(Place the videos of the three tasks here.)

# Contact
If you have any questions or find any bugs, please let me know: [email protected]
65 changes: 65 additions & 0 deletions documents/Task-Description-Template.md
# XXX Task Family

## Tasks Description

Description of this task family, such as the names of the tasks, how they are organized, and so on.

## Configuration

- Manipulation Environment Description
  - Workspace size
  - Functions of regions
  - Constraints
  - ...
- Objects Information
  - List of objects and their descriptions
  - Initial poses of objects
  - ...
- Robots/Hardware Setup
  - Robot arm
  - Robot gripper
  - Computing platform
  - ...

## Procedure

### Task 1

Crucial parameters: P1, P2, ...

Pseudo-code:

```pseudocode
task_display(p1, p2, ...):
    input_0 = [p1, p2, ...]
    main_loop:
        results = subtask_display(input_0)
        metrics_record(results)
    return

subtask_display(input_0):
    results_0 = localization(input_0)
    results_1 = identification(results_0)
    results_2 = multiple_points_motion_planning(results_1)
    results_3 = execution(results_2)
    return results_3
```



### Task 2

### ...

## Metrics

### Experiment 1

Table of results.

### Experiment 2

Table of results.

### ...

22 changes: 22 additions & 0 deletions documents/TestCases.md
# Test Cases

## Test the installation:

After installation, you can run the following command to test your setup.

```shell
$ python main.py {ur5|ur10e} test
```

## Calibration

We provide a naive hand-eye calibration method for an RGB-D camera (RealSense D435) that utilizes 3-D depth information.

Before running this test case, you need to **rewrite the parameters** in /Config/calibrating.yaml, such as "start point", "x_stride", and so on.

Then you can test the calibration:

```shell
$ python main.py {ur5|ur10e} calibration_test
```

10 changes: 10 additions & 0 deletions driver/README.md
# Driver

This part provides controllers for widely used hardware, such as camera sensors, collaborative robot arms, and adaptive robot grippers. You can create your own controller by inheriting from the corresponding parent class, as sketched below.
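As a sketch of this extension pattern (the base-class name and method set here are hypothetical, not the actual DeepClaw interface):

```python
class ArmController(object):
    """Hypothetical parent class for robot arm drivers."""
    def connect(self):
        raise NotImplementedError

    def move_to(self, pose):
        raise NotImplementedError


class MyArmController(ArmController):
    """A custom controller overrides each hardware-facing method."""
    def connect(self):
        print("connecting to my arm")

    def move_to(self, pose):
        print("moving to %s" % (pose,))
```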

## Sensor

## Arm

## Gripper

Empty file added driver/__init__.py
Binary file added driver/__pycache__/__init__.cpython-37.pyc