Commit
Merge branch 'main' of https://github.com/RLE-Foundation/rllte into main
yuanmingqi committed Feb 29, 2024
2 parents b2f7241 + af65281 commit 25f5daa
Showing 112 changed files with 4,269 additions and 3,196 deletions.
4 changes: 4 additions & 0 deletions .gitmodules
@@ -0,0 +1,4 @@
[submodule "deployment/ncnn/ncnn"]
path = deployment/ncnn/ncnn
url = https://github.com/Tencent/ncnn.git
branch = master
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -51,7 +51,7 @@ def function_with_types_in_docstring(param1: type1, param2: type2):

## Pull Request (PR)
Before proposing a PR, please open an issue where the feature can be discussed. This prevents duplicated PRs and eases the code review process. Each PR needs to be reviewed and accepted by at least one of the maintainers (@[yuanmingqi](https://github.com/yuanmingqi), @[roger-creus](https://github.com/roger-creus)). A PR must pass the Continuous Integration tests to be merged with the master branch.

See the [Pull Request Template](https://github.com/RLE-Foundation/rllte/blob/main/.github/PULL_REQUEST_TEMPLATE.md).

10 changes: 5 additions & 5 deletions README.md
@@ -53,10 +53,10 @@ Why **RLLTE**?
- 🚀 Optimized workflow for full hardware acceleration;
- ⚙️ Support custom environments and modules;
- 🖥️ Support multiple computing devices like GPU and NPU;
- 💾 Large number of reusable benchmarks ([RLLTE Hub](https://hub.rllte.dev));
- 👨‍✈️ Large language model-empowered copilot ([RLLTE Copilot](https://github.com/RLE-Foundation/rllte-copilot)).

> ⚠️ Since the construction of RLLTE Hub requires massive computing power, we have to upload the training datasets and model weights gradually. A progress report can be found in [Issue#30](https://github.com/RLE-Foundation/rllte/issues/30).
See the project structure below:
<div align=center>
@@ -76,15 +76,15 @@ conda create -n rllte python=3.8

- with pip `recommended`

Open a terminal and install **rllte** with `pip`:
``` shell
pip install rllte-core # basic installation
pip install rllte-core[envs] # for pre-defined environments
```

- with git

Open a terminal and clone the repository from [GitHub](https://github.com/RLE-Foundation/rllte) with `git`:
``` sh
git clone https://github.com/RLE-Foundation/rllte.git
```
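
However you install it, a first training run takes only a few lines of Python. The sketch below is hedged: the helper names (`make_atari_env`, `PPO`, `.train`) and argument names are assumed from rllte's documented quickstart, not from this diff.

```python
# Hedged quickstart sketch; API names assumed from rllte's docs.
from rllte.env import make_atari_env
from rllte.agent import PPO

if __name__ == "__main__":
    device = "cuda:0"
    env = make_atari_env(device=device)   # vectorized Atari environments
    agent = PPO(env=env, device=device, tag="ppo_atari")
    agent.train(num_train_steps=5000)     # short smoke-test run
```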
118 changes: 104 additions & 14 deletions deployment/README.md
@@ -19,18 +19,28 @@
>+ `cd path_to_rllte/deployment/c++`
>+ `mkdir build && cd build`
>+ `cmake .. && make`
>+ `./DeployerTest ../../model/test_model.onnx`
This demo deploys the test ONNX model with TensorRT and prints the inference result for a [1\*9\*84\*84] float16 input.
![Alt text](docs/c++_quick_start_run.png)

### python
>+ `git clone https://github.com/RLE-Foundation/rllte`
>+ `cd path_to_rllte/deployment/python`
>+ `python3 pth2onnx.py ../model/test_model.pth`
This Python script converts the .pth model to an ONNX model saved in the current directory (a conversion sketch follows this list).
![Alt text](docs/python_pth_2_onnx.png)
>+ `./trtexec --onnx=test_model.onnx --saveEngine=test_model.trt --skipInference`
Use the trtexec tool to convert the ONNX model into a TensorRT engine.
![Alt text](docs/onnx_2_trt_py.png)
>+ `python3 infer.py test_model.trt`
This runs inference with the TensorRT engine and prints the result for a [1\*9\*84\*84] float16 input (an inference sketch follows this list).
![Alt text](docs/py_infer.png)
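
As referenced above, here is an illustrative sketch of the .pth-to-ONNX conversion step. The actual logic lives in deployment/python/pth2onnx.py; the loading code, blob names, and opset below are assumptions for illustration.

```python
# Illustrative sketch of a .pth -> .onnx conversion (see pth2onnx.py for
# the real script); model loading and opset choice here are assumptions.
import sys
import torch

model = torch.load(sys.argv[1], map_location="cpu")  # e.g. ../model/test_model.pth
model.eval()

dummy = torch.randn(1, 9, 84, 84)  # matches the expected [1*9*84*84] input
torch.onnx.export(
    model,
    dummy,
    "test_model.onnx",
    input_names=["input"],    # blob names match those used by NCNNDeployeTest.cpp
    output_names=["output"],
    opset_version=11,
)
```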
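And a rough outline of the inference step performed by infer.py, using the TensorRT Python API (8.x style) with pycuda for device buffers; the binding setup and the output shape are assumptions that vary across TensorRT versions.

```python
# Rough outline of TensorRT engine inference (TensorRT 8.x style API);
# the actual logic is in infer.py. Output shape is an assumption.
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
with open("test_model.trt", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

inp = np.full((1, 9, 84, 84), 3.0, dtype=np.float16)  # dummy float16 input
out = np.empty((1, 50), dtype=np.float16)             # assumed output shape
d_in, d_out = cuda.mem_alloc(inp.nbytes), cuda.mem_alloc(out.nbytes)

cuda.memcpy_htod(d_in, inp)
context.execute_v2([int(d_in), int(d_out)])           # synchronous inference
cuda.memcpy_dtoh(out, d_out)
print(out)
```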


## use in your c++ project
### basic API instruction
>+ `#include "RLLTEDeployer.h"`
Include the header file in your .cpp file.
>+ `Options options;`
`options.deviceIndex = 0;`
`options.doesSupportDynamicBatchSize = false;`
@@ -50,13 +60,23 @@
Use the infer member function to run inference. The input is a tensor of the appropriate data type, and the output is a pointer to a buffer of the appropriate size and type; the inference result is written to the output.
>+ For the complete code, please refer to DeployerTest.cpp.
## c++ project with cmake
### build your c++ project with cmake
>+ `find_package(CUDA REQUIRED)`
Find the header and dynamic libraries of CUDA.
>+ `include_directories(${CUDA_INCLUDE_DIRS} ${Path_of_RLLTEDeployer_h})`
Set the include paths required.
>+ `add_library(RLLTEDeployer SHARED ${Path_of_RLLTEDeployer.cpp} ${Path_of_common/logger.cpp})`
Build the RLLTEDeployer as a dynamic library.
>+ `target_link_libraries(RLLTEDeployer nvinfer nvonnxparser ${CUDA_LIBRARIES})`
Link the dependencies of libRLLTEDeployer.so.
>+ `add_executable(YourProjectExecutable ${Path_of_YourProjectExecutable.cpp})`
Build the executable file of your project.
>+ `target_link_libraries(YourProjectExecutable RLLTEDeployer)`
Link the RLLTEDeployer to your project.

## c++ deployment with Docker
Deploying a model with Docker is easier than using the host PC: the NVIDIA driver is the only dependency to install; everything else comes prepared in the image.
### install NVIDIA Docker
>+ Make sure the NVIDIA driver is installed.
>+ `sudo apt-get install ca-certificates gnupg lsb-release`
@@ -73,15 +93,21 @@
>+ `sudo groupadd docker`
>+ `sudo gpasswd -a $USER docker`
>+ Log out and log back in to activate the group membership.
>+ `sudo service docker restart`
>+ `docker run --gpus all nvidia/cuda:12.0.0-cudnn8-devel-ubuntu20.04 nvidia-smi`
If the GPU information is displayed, everything is working.
![Alt text](docs/gpus_docker.png)

### usage
>+ `docker pull jakeshihaoluo/rllte_deployment_env:0.0.1`
![Alt text](docs/pull.png)
>+ `docker run -it -v ${path_to_the_repo}:/rllte --gpus all jakeshihaoluo/rllte_deployment_env:0.0.1`
![Alt text](docs/docker_container.png)
>+ `cd /rllte/deployment/c++`
>+ `mkdir build && cd build`
>+ `cmake .. && make`
>+ `./DeployerTest ../../model/test_model.onnx`
![Alt text](docs/run_docker.png)

## deployment with Ascend

@@ -98,7 +124,7 @@
### c++ development
>+ Include the header file `#include "acl/acl.h"`.
>+ The main workflow is shown below. The main functions are implemented in *ascend/src/main.cpp*.
![Alt text](docs/ascendmain.png)

### build and run
@@ -109,4 +135,68 @@
>+ `chmod +x sample_build.sh`
>+ `./sample_build.sh`
>+ `chmod +x sample_run.sh`
>+ `./sample_run.sh`
## deployment with NCNN

### what is NCNN
>+ ncnn is a high-performance neural network inference framework optimized for mobile platforms. It was designed from the start with mobile deployment in mind and has no third-party dependencies. It is cross-platform and runs faster than all known open-source frameworks on mobile-phone CPUs. With ncnn, developers can easily deploy deep learning models to mobile platforms, create intelligent apps, and bring artificial intelligence to your fingertips. ncnn is currently used in many Tencent applications, such as QQ, Qzone, WeChat, and Pitu.
Ref: https://github.com/Tencent/ncnn
![Alt text](docs/ncnn.png)

### deployment on PC with NCNN
>+ Install the requirements of NCNN:
`sudo apt install build-essential git cmake libprotobuf-dev protobuf-compiler libvulkan-dev vulkan-utils libopencv-dev`
>+ `cd deployment/ncnn`
Download the ncnn repo into deployment/ncnn/ncnn:
>+ `git submodule init && git submodule update`
>+ `cd ncnn`
Compile the ncnn library. Note: if you don't want to use Vulkan, or have problems with Vulkan on your PC, just set -DNCNN_VULKAN=OFF.
>+ `git submodule update --init && mkdir build && cd build && cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=../toolchains/host.gcc.toolchain.cmake -DNCNN_VULKAN=ON -DNCNN_BUILD_EXAMPLES=ON -DNCNN_BUILD_TOOLS=ON .. && make -j$(nproc) && make install`
>+ Your ONNX model may contain redundant operators, such as Shape, Gather, and Unsqueeze, that are not supported in ncnn. Use the handy tool developed by daquexian to eliminate them (a Python sketch of this step follows this list).
Ref: https://github.com/daquexian/onnx-simplifier
>+ `python3 -m pip install onnxsim`
>+ `cd ../../ && python3 -m onnxsim ../model/test_model.onnx test_model-sim.onnx`
![Alt text](docs/ncnn_sim.png)
>+ Convert the model to ncnn format using tools/onnx2ncnn:
`./ncnn/build/install/bin/onnx2ncnn test_model-sim.onnx test_model-sim.param test_model-sim.bin`
>+ Now you should have test_model-sim.bin, test_model-sim.onnx, and test_model-sim.param in the ncnn directory.
![Alt text](docs/ncnn_1.png)
>+ Before compiling the executable, change the ncnn library directory in CMakeLists.txt to your own path, for example:
![Alt text](docs/ncnn_2.png)
>+ `mkdir build && cd build && cmake .. && make`
>+ `./NCNNDeployTest ../test_model-sim.param ../test_model-sim.bin`
>+ After running, it will output a 1*50 tensor (a Python-bindings sketch follows this list).
![Alt text](docs/ncnn_3.png)
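
As referenced in the simplification step above, the same onnx-simplifier pass can be driven from Python; a minimal sketch using the packages as published on PyPI:

```python
# Programmatic equivalent of the `python3 -m onnxsim` command, using the
# onnx-simplifier Python API (requires `pip install onnx onnxsim`).
import onnx
from onnxsim import simplify

model = onnx.load("../model/test_model.onnx")
model_simp, ok = simplify(model)  # folds Shape/Gather/Unsqueeze chains away
assert ok, "simplified model failed the consistency check"
onnx.save(model_simp, "test_model-sim.onnx")
```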
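For reference, here is an untested sketch of the same inference through ncnn's optional Python bindings (`pip install ncnn`); the binding API is assumed to mirror the C++ code in NCNNDeployeTest.cpp and may differ between ncnn releases.

```python
# Hedged sketch using ncnn's Python bindings; assumed to mirror the C++
# test program. Verify the exact Mat/extract API for your ncnn version.
import ncnn
import numpy as np

net = ncnn.Net()
net.load_param("test_model-sim.param")
net.load_model("test_model-sim.bin")

ex = net.create_extractor()
data = np.full((9, 84, 84), 3.0, dtype=np.float32)  # same 9*84*84 dummy input
ex.input("input", ncnn.Mat(data))
ret, out = ex.extract("output")  # returns (status, Mat)
print(np.array(out))             # expect a 1*50 result
```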

### deployment on Raspberry Pi with NCNN
>+ First, SSH into your Raspberry Pi or work on it directly.
>+ Clone this repo onto the Raspberry Pi.
>+ `git clone https://github.com/RLE-Foundation/rllte.git && cd rllte/deployment/ncnn`
>+ install requirements
>+ `wget https://cmake.org/files/v3.22/cmake-3.22.0.tar.gz`
>+ `tar -xvzf cmake-3.22.0.tar.gz`
>+ `cd cmake-3.22.0 && sudo ./bootstrap && sudo make -j$(nproc) && sudo make install`
>+ `sudo apt update && sudo apt install build-essential git libprotobuf-dev protobuf-compiler libvulkan-dev libopencv-dev libxcb-randr0-dev libxrandr-dev libxcb-xinerama0-dev libxinerama-dev libxcursor-dev libxcb-cursor-dev libxkbcommon-dev xutils-dev xutils-dev libpthread-stubs0-dev libpciaccess-dev libffi-dev x11proto-xext-dev libxcb1-dev libxcb-*dev libssl-dev libgnutls28-dev x11proto-dri2-dev x11proto-dri3-dev libx11-dev libxcb-glx0-dev libx11-xcb-dev libxext-dev libxdamage-dev libxfixes-dev libva-dev x11proto-randr-dev x11proto-present-dev libclc-dev libelf-dev mesa-utils libvulkan-dev libvulkan1 libassimp-dev libdrm-dev libxshmfence-dev libxxf86vm-dev libunwind-dev libwayland-dev wayland-protocols libwayland-egl-backend-dev valgrind libzstd-dev vulkan-tools bison flex ninja-build python3-mako`
Download the ncnn repo into deployment/ncnn/ncnn:
>+ `git submodule init && git submodule update`
>+ `cd ncnn`
Compile the ncnn library. Note: if you don't want to use Vulkan, or have problems with Vulkan on your Raspberry Pi, just set -DNCNN_VULKAN=OFF.
>+ `git submodule update --init && mkdir build && cd build && cmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_TOOLCHAIN_FILE=../toolchains/pi3.toolchain.cmake -DNCNN_VULKAN=ON -DNCNN_BUILD_EXAMPLES=ON -DNCNN_BUILD_TOOLS=ON .. && make -j$(nproc) && make install`
>+ Your ONNX model may contain redundant operators, such as Shape, Gather, and Unsqueeze, that are not supported in ncnn. Use the handy tool developed by daquexian to eliminate them.
Ref: https://github.com/daquexian/onnx-simplifier
`python3 -m pip install onnxsim`
`cd ../../ && python3 -m onnxsim ../model/test_model.onnx test_model-sim.onnx`
![Alt text](docs/ncnn_sim.png)
>+ Convert the model to ncnn format using tools/onnx2ncnn:
`./ncnn/build/install/bin/onnx2ncnn test_model-sim.onnx test_model-sim.param test_model-sim.bin`
>+ Now you should have test_model-sim.bin, test_model-sim.onnx, and test_model-sim.param in the ncnn directory.
![Alt text](docs/ncnn_1.png)
>+ Before compiling the executable, change the ncnn library directory in CMakeLists.txt to your own path, for example:
![Alt text](docs/ncnn_2.png)
`mkdir build && cd build && cmake .. && make`
`./NCNNDeployTest ../test_model-sim.param ../test_model-sim.bin`
>+ After running, it will output a 1*50 tensor.
![Alt text](docs/ncnn_3.png)


Binary file added deployment/docs/ascendmain.png
Binary file added deployment/docs/ascendworkflow.png
Binary file added deployment/docs/c++_quick_start_run.png
Binary file added deployment/docs/docker_container.png
Binary file added deployment/docs/gpus_docker.png
Binary file added deployment/docs/jetpackos.png
Binary file added deployment/docs/ncnn.png
Binary file added deployment/docs/ncnn_1.png
Binary file added deployment/docs/ncnn_2.png
Binary file added deployment/docs/ncnn_3.png
Binary file added deployment/docs/ncnn_sim.png
Binary file added deployment/docs/onnx_2_trt_py.png
Binary file added deployment/docs/pull.png
Binary file added deployment/docs/py_infer.png
Binary file added deployment/docs/python_pth_2_onnx.png
Binary file added deployment/docs/run_docker.png
Binary file added deployment/docs/sdk_ver.png
Binary file added deployment/docs/ssh_con.png
21 changes: 21 additions & 0 deletions deployment/ncnn/CMakeLists.txt
@@ -0,0 +1,21 @@
cmake_minimum_required(VERSION 3.2)

project(NCNNDeploy_test)

if(CMAKE_BUILD_TYPE STREQUAL "")
set(CMAKE_BUILD_TYPE "Release")
endif()

set(CMAKE_CXX_STANDARD 14)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++14 -O3")

include_directories("~/Documents/Hsuanwu/deployment/ncnn/ncnn/build/install/include")
set(ncnn_DIR "~/Documents/Hsuanwu/deployment/ncnn/ncnn/build/install/lib/cmake/ncnn"
CACHE PATH "Directory that contains ncnnConfig.cmake")
find_package(ncnn REQUIRED)

add_executable(NCNNDeployTest NCNNDeployeTest.cpp)
target_link_libraries(NCNNDeployTest ncnn)

message("Build type: " ${CMAKE_BUILD_TYPE})

46 changes: 46 additions & 0 deletions deployment/ncnn/NCNNDeployeTest.cpp
@@ -0,0 +1,46 @@
#include "ncnn/net.h"

int main(int argc, char** argv)
{
ncnn::Net net; // the network object
net.load_param(argv[1]); // load the .param file (network structure)
net.load_model(argv[2]); // load the .bin file (weights)


ncnn::Mat in; // ncnn stores input/output data in the Mat structure
in.create(84, 84, 9); // create a 9*84*84 tensor, as the test onnx model requires
in.fill(3.0f); // fill the input with 3.0


ncnn::Extractor ex = net.create_extractor(); // create an extractor from the net
ex.set_light_mode(true);
ex.set_num_threads(4);
ex.input("input", in); // bind the input blob; inference runs when extract() is called
ncnn::Mat out;
ex.extract("output", out); // run inference and store the result in out


for (int q=0; q<out.c; q++) // print the inference result channel by channel
{
const float* ptr = out.channel(q);
for (int z=0; z<out.d; z++)
{
for (int y=0; y<out.h; y++)
{
for (int x=0; x<out.w; x++)
{
printf("%f ", ptr[x]);
}
ptr += out.w;
printf("\n");
}
printf("\n");
}
printf("------------------------\n");
}


ex.clear(); // release the extractor
net.clear(); // release the network
return 0;
}
1 change: 1 addition & 0 deletions deployment/ncnn/ncnn
Submodule ncnn added at 22c990
5 changes: 3 additions & 2 deletions docs/api_docs/agent/daac.md
@@ -11,7 +11,7 @@ DAAC(
```python
DAAC(
hidden_dim: int = 256, clip_range: float = 0.2, clip_range_vf: float = 0.2,
policy_epochs: int = 1, value_freq: int = 1, value_epochs: int = 9, vf_coef: float = 0.5,
ent_coef: float = 0.01, adv_coef: float = 0.25, max_grad_norm: float = 0.5,
discount: float = 0.999, init_fn: str = 'xavier_uniform'
)
```

@@ -44,6 +44,7 @@ Based on: https://github.com/rraileanu/idaac
* **ent_coef** (float) : Weighting coefficient of entropy bonus.
* **adv_coef** (float) : Weighting coefficient of advantage loss.
* **max_grad_norm** (float) : Maximum norm of gradients.
* **discount** (float) : Discount factor.
* **init_fn** (str) : Parameters initialization method.


@@ -57,7 +58,7 @@


### .update
[source](https://github.com/RLE-Foundation/rllte/blob/main/rllte/agent/daac.py/#L173)
```python
.update()
```
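
The newly documented `discount` argument is passed straight through the constructor. A hedged usage sketch (`make_procgen_env` and the exact keyword names are assumptions based on rllte's environment helpers):

```python
# Hedged sketch: constructing DAAC with the new `discount` argument.
# `make_procgen_env` is assumed from rllte.env for illustration.
from rllte.agent import DAAC
from rllte.env import make_procgen_env

env = make_procgen_env(env_id="bigfish", device="cuda")
agent = DAAC(env=env, device="cuda", discount=0.999)
agent.train(num_train_steps=10000)
```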
5 changes: 3 additions & 2 deletions docs/api_docs/agent/drac.md
@@ -10,7 +10,7 @@ DrAC(
```python
DrAC(
feature_dim: int = 512, batch_size: int = 256, lr: float = 0.00025, eps: float = 1e-05,
hidden_dim: int = 512, clip_range: float = 0.1, clip_range_vf: float = 0.1,
n_epochs: int = 4, vf_coef: float = 0.5, ent_coef: float = 0.01, aug_coef: float = 0.1,
max_grad_norm: float = 0.5, discount: float = 0.999, init_fn: str = 'orthogonal'
)
```

@@ -41,6 +41,7 @@ Based on: https://github.com/rraileanu/auto-drac
* **ent_coef** (float) : Weighting coefficient of entropy bonus.
* **aug_coef** (float) : Weighting coefficient of augmentation loss.
* **max_grad_norm** (float) : Maximum norm of gradients.
* **discount** (float) : Discount factor.
* **init_fn** (str) : Parameters initialization method.


@@ -54,7 +55,7 @@


### .update
[source](https://github.com/RLE-Foundation/rllte/blob/main/rllte/agent/drac.py/#L166)
```python
.update()
```
5 changes: 3 additions & 2 deletions docs/api_docs/agent/drdaac.md
@@ -11,7 +11,7 @@ DrDAAC(
```python
DrDAAC(
hidden_dim: int = 256, clip_range: float = 0.2, clip_range_vf: float = 0.2,
policy_epochs: int = 1, value_freq: int = 1, value_epochs: int = 9, vf_coef: float = 0.5,
ent_coef: float = 0.01, aug_coef: float = 0.1, adv_coef: float = 0.25,
max_grad_norm: float = 0.5, discount: float = 0.999, init_fn: str = 'xavier_uniform'
)
```

@@ -45,6 +45,7 @@ Based on: https://github.com/rraileanu/idaac
* **aug_coef** (float) : Weighting coefficient of augmentation loss.
* **adv_coef** (float) : Weighting coefficient of advantage loss.
* **max_grad_norm** (float) : Maximum norm of gradients.
* **discount** (float) : Discount factor.
* **init_fn** (str) : Parameters initialization method.


@@ -58,7 +59,7 @@


### .update
[source](https://github.com/RLE-Foundation/rllte/blob/main/rllte/agent/drdaac.py/#L179)
```python
.update()
```
2 changes: 1 addition & 1 deletion docs/api_docs/agent/drqv2.md
@@ -86,7 +86,7 @@ Update the critic network.
None.

### .update_actor
[source](https://github.com/RLE-Foundation/rllte/blob/main/rllte/agent/drqv2.py/#L236)
```python
.update_actor(
obs: th.Tensor
)
```