
Update docs with new setup tool commands
Signed-off-by: Leonid Kondrashov <[email protected]>
leokondrashov committed Nov 1, 2023
1 parent 125e702 commit f17f81b
Showing 4 changed files with 36 additions and 24 deletions.
1 change: 1 addition & 0 deletions configs/.wordlist.txt
@@ -309,6 +309,7 @@ multinode
multithreaded
Mutlu
MUX
mv
Nagarajan
namespace
namespaces
22 changes: 13 additions & 9 deletions docs/developers_guide.md
@@ -10,9 +10,12 @@ or in gVisor MicroVMs instead of Firecracker MicroVMs, use the following command
```bash
git clone https://github.com/vhive-serverless/vhive
cd vhive
./scripts/cloudlab/setup_node.sh [stock-only|gvisor|firecracker]
./scripts/install_go.sh; source /etc/profile # or install Go manually
pushd scripts && go build -o setup_tool && popd && mv scripts/setup_tool .

./setup_tool setup_node [stock-only|gvisor|firecracker]
sudo containerd
./scripts/cluster/create_one_node_cluster.sh [stock-only|gvisor|firecracker]
./setup_tool create_one_node_cluster [stock-only|gvisor|firecracker]
# wait for the containers to boot up using
watch kubectl get pods -A
# once all the containers are ready/complete, you may start Knative functions
@@ -49,7 +52,7 @@ and check out the vHive repository manually.
# Enter the container
docker exec -it <container name> bash
# Inside the container, create a single-node cluster
./scripts/cluster/create_one_node_cluster.sh [stock-only]
./setup_tool create_one_node_cluster [stock-only]
```
> **Notes:**
>
@@ -112,13 +115,13 @@ Assuming you rented a node using the vHive CloudLab profile:
1. Setup the node for the desired sandbox:

```bash
./scripts/cloudlab/setup_node.sh <firecracker|gvisor>
./setup_tool setup_node [firecracker|gvisor]
```

2. Setup the CRI test environment for the desired sandbox:

```bash
./scripts/github_runner/setup_cri_test_env.sh <firecracker|gvisor>
./scripts/github_runner/setup_cri_test_env.sh [firecracker|gvisor]
```

3. Run CRI tests:
@@ -130,7 +133,7 @@ source /etc/profile && go clean -testcache && go test ./cri -v -race -cover
4. Cleanup:

```bash
./scripts/github_runner/clean_cri_runner.sh <firecracker|gvisor>
./scripts/github_runner/clean_cri_runner.sh [firecracker|gvisor]
```

## High-level features
@@ -204,6 +207,7 @@ Knative function call requests can now be traced & visualized using [zipkin](htt
Zipkin is a distributed tracing system featuring easy collection and lookup of tracing data.
Here are some useful commands (there are plenty of Zipkin tutorials online):

* Setup Zipkin with `./setup_tool setup_zipkin`
* Once the zipkin container is running, start the dashboard using `istioctl dashboard zipkin`.
* To access requests remotely, run `ssh -L 9411:127.0.0.1:9411 <Host_IP>` for port forwarding.
* Go to your browser and enter [localhost:9411](http://localhost:9411) for the dashboard.
@@ -236,7 +240,7 @@ Knative functions can use a GPU, although only `stock-only` mode is supported.
Follow the guide to [setup stock knative](#testing-stock-knative-setup-or-images).

``` bash
./scripts/cloudlab/setup_node.sh stock-only
./setup_tool setup_node stock-only
```

### Install NVIDIA Driver and NVIDIA Container Toolkit
@@ -247,15 +251,15 @@ You can use the provided script if containerd was installed using our scripts.
The script has been tested on Ubuntu 20.04 with GPUs including the NVIDIA A100, V100, and P100.

``` bash
./scripts/gpu/setup_nvidia_gpu.sh
./setup_tool setup_nvidia_gpu
```


### Start Containerd and Knative

``` bash
sudo screen -dmS containerd containerd; sleep 5;
./scripts/cluster/create_one_node_cluster.sh stock-only
./setup_tool create_one_node_cluster stock-only
```

### Install NVIDIA Device Plugin
13 changes: 10 additions & 3 deletions docs/logging.md
@@ -30,9 +30,16 @@ We present how to set up a multi-node cluster, however, the same modifications c
git clone --depth=1 https://github.com/vhive-serverless/vhive.git && cd vhive && mkdir -p /tmp/vhive-logs
```

2. Build `setup_tool`

```bash
./scripts/install_go.sh; source /etc/profile
pushd scripts && go build -o setup_tool && popd && mv scripts/setup_tool .
```
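`go build -o setup_tool` runs inside `scripts/`, so the binary is produced there; the trailing `mv scripts/setup_tool .` relocates it to the repository root so the later `./setup_tool …` commands resolve. A minimal sketch of the same build-and-move pattern, using a placeholder file and illustrative directory names instead of a real Go build:

```shell
# Sketch of the pushd/build/popd/mv pattern above; `touch` stands in for
# `go build -o setup_tool`, and the directory names are illustrative.
mkdir -p demo/scripts && cd demo
pushd scripts            # enter the build directory, remembering the current one
touch setup_tool         # stand-in for: go build -o setup_tool
popd                     # return to the repository root
mv scripts/setup_tool .  # move the binary so ./setup_tool works from the root
```

`pushd`/`popd` are bash builtins, so run the snippet under bash; afterwards the file exists at the root and is gone from `scripts/`.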

3. Run the node setup script:
```bash
./scripts/cloudlab/setup_node.sh > >(tee -a /tmp/vhive-logs/setup_node.stdout) 2> >(tee -a /tmp/vhive-logs/setup_node.stderr >&2)
./setup_tool setup_node
```
> **BEWARE:**
>
@@ -42,7 +49,7 @@ We present how to set up a multi-node cluster, however, the same modifications c
**On each worker node**, execute the following instructions **as a non-root user with sudo rights** using **bash**:
1. Run the script that sets up the kubelet:
```bash
./scripts/cluster/setup_worker_kubelet.sh > >(tee -a /tmp/vhive-logs/setup_worker_kubelet.stdout) 2> >(tee -a /tmp/vhive-logs/setup_worker_kubelet.stderr >&2)
./setup_tool setup_worker_kubelet
```

2. Open a new `tmux session` in detached mode and start `containerd` in the detached session:
@@ -92,7 +99,7 @@ We present how to set up a multi-node cluster, however, the same modifications c

2. Run the script that creates the multinode cluster:
```bash
./scripts/cluster/create_multinode_cluster.sh > >(tee -a /tmp/vhive-logs/create_multinode_cluster.stdout) 2> >(tee -a /tmp/vhive-logs/create_multinode_cluster.stderr >&2)
./setup_tool create_multinode_cluster firecracker
```
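The replaced invocations piped each script through `tee` so output reached both the terminal and `/tmp/vhive-logs`; the `containerd` step in this guide still uses that pattern. A minimal sketch of the redirection, assuming bash (process substitution is bash-specific) and an illustrative `demo` log name:

```shell
# Sketch: duplicate stdout and stderr to per-stream log files while still
# printing both to the terminal (bash process substitution; log name illustrative).
mkdir -p /tmp/vhive-logs
{ echo "cluster ready"; echo "demo warning" >&2; } \
  > >(tee -a /tmp/vhive-logs/demo.stdout) \
  2> >(tee -a /tmp/vhive-logs/demo.stderr >&2)
```

The `-a` flag appends, so repeated runs accumulate in the same log files rather than overwriting them.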

> **BEWARE:**
24 changes: 12 additions & 12 deletions docs/quickstart_guide.md
@@ -74,7 +74,7 @@ Another option is to run `./scripts/install_go.sh; source /etc/profile`, this wi
3. Get the setup scripts:
```bash
# Build from source
pushd scripts && go build -o setup_tool && popd
pushd scripts && go build -o setup_tool && popd && mv scripts/setup_tool .
```

**Note:** All setup logs will be generated and saved to your current working directory.
@@ -90,7 +90,7 @@ Another option is to run `./scripts/install_go.sh; source /etc/profile`, this wi
> flags as follows:
>
> ```bash
> ./scripts/setup_tool setup_node stock-only use-stargz
> ./setup_tool setup_node stock-only use-stargz
> ```
> **IMPORTANT**
> Currently `stargz` is only supported in native kubelet contexts without firecracker.
@@ -101,7 +101,7 @@ Another option is to run `./scripts/install_go.sh; source /etc/profile`, this wi

For the standard setup, run the following script:
```bash
./scripts/setup_tool setup_node firecracker
./setup_tool setup_node firecracker
```
> **BEWARE:**
>
@@ -115,11 +115,11 @@ Another option is to run `./scripts/install_go.sh; source /etc/profile`, this wi
> **IMPORTANT:**
> If step `1.4 - Run the node setup script` was executed with the `stock-only` flag, execute the following instead:
> ```bash
> ./scripts/setup_tool setup_worker_kubelet stock-only
> ./setup_tool setup_worker_kubelet stock-only
> ```

For the standard kubelet setup, run the following script:
```bash
./scripts/setup_tool setup_worker_kubelet firecracker
./setup_tool setup_worker_kubelet firecracker
```
2. Start `containerd` in a background terminal named `containerd`:
```bash
sudo screen -dmS containerd bash -c "containerd > >(tee -a /tmp/vhive-logs/containerd.stdout) 2> >(tee -a /tmp/vhive-logs/containerd.stderr >&2)"
@@ -182,7 +182,7 @@ Another option is to run `./scripts/install_go.sh; source /etc/profile`, this wi
```
2. Run the script that creates the multinode cluster (without `stargz`):
```bash
./scripts/setup_tool create_multinode_cluster firecracker
./setup_tool create_multinode_cluster firecracker
```
> **BEWARE:**
>
@@ -206,7 +206,7 @@ Another option is to run `./scripts/install_go.sh; source /etc/profile`, this wi
> script instead:
>
> ```bash
> ./scripts/setup_tool create_multinode_cluster stock-only
> ./setup_tool create_multinode_cluster stock-only
> ```
### 4. Configure Worker Nodes
@@ -246,13 +246,13 @@ In essence, you will execute the same commands for master and worker setups but
Execute the following below **as a non-root user with sudo rights** using **bash**:
1. Run the node setup script:
```bash
./scripts/setup_tool setup_node firecracker
./setup_tool setup_node firecracker
```
> **Note:**
> To enable runs with `stargz` images, setup kubelet by adding the `stock-only` and `use-stargz`
> flags as follows:
> ```bash
> ./scripts/setup_tool setup_node stock-only use-stargz
> ./setup_tool setup_node stock-only use-stargz
> ```
> **IMPORTANT**
> Currently `stargz` is only supported in native kubelet contexts without firecracker.
@@ -281,13 +281,13 @@ Execute the following below **as a non-root user with sudo rights** using **bash
```
6. Run the single node cluster setup script:
```bash
./scripts/setup_tool create_one_node_cluster firecracker
./setup_tool create_one_node_cluster firecracker
```
> **IMPORTANT:**
> If you setup the node using the `stock-only` flag, execute the following
> script instead:
> ```bash
> ./scripts/setup_tool create_one_node_cluster stock-only
> ./setup_tool create_one_node_cluster stock-only
> ```
### 2. Clean Up
@@ -302,7 +302,7 @@ This script stops the existing cluster if any, cleans up and then starts a fresh
```bash
# optional flags: enable debug logs; enable snapshot/REAP-snapshot cold starts
export GITHUB_VHIVE_ARGS="[-dbg] [-snapshots]"
./scripts/setup_tool start_onenode_vhive_cluster firecracker
./setup_tool start_onenode_vhive_cluster firecracker
```
> **Note:**
