Linkerd example #8650

Draft · wants to merge 3 commits into base: main

99 changes: 99 additions & 0 deletions examples/interdomain/linkerd/README.md
# NSM + linkerd interdomain example over kind clusters

The setup consists of two kind clusters, both running NSM, with linkerd additionally deployed on Cluster-2.

In this example, we deploy an http-server (**Workload-2**) on Cluster-2 and show how it can be reached from Cluster-1.

The client (**Workload-1**) is an `alpine` pod on Cluster-1, from which we will use `curl`.

## Requirements

- [Load balancer](../loadbalancer)
- [Interdomain DNS](../dns)
- Interdomain SPIRE
  - [SPIRE on first cluster](../../spire/cluster1)
  - [SPIRE on second cluster](../../spire/cluster2)
- [SPIFFE Federation](../spiffe_federation)
- [Interdomain NSM](../nsm)


## Run

Install linkerd on the second cluster:
```bash
export KUBECONFIG=$KUBECONFIG2
linkerd check --pre
linkerd install --crds | kubectl apply -f -
linkerd install | kubectl apply -f -
linkerd check
```
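
Optionally, confirm the control plane is up; the pods in the default `linkerd` namespace should all be `Running`:
```bash
kubectl get pods -n linkerd
```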

Create the `nsm-linkerd` NetworkService on the second cluster:
```bash
kubectl create ns ns-nsm-linkerd
kubectl apply -f ./networkservice.yaml
```
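
The NetworkService is created in the `nsm-system` namespace (see `networkservice.yaml` below); you can check that it registered:
```bash
kubectl get networkservices -n nsm-system
```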

Start the `alpine` pod (the networkservicemesh client) on the first cluster:

```bash
kubectl --kubeconfig=$KUBECONFIG1 apply -f ./greeting/client.yaml
```

Start the auto-scaling networkservicemesh endpoint supplier on the second cluster:
```bash
kubectl --kubeconfig=$KUBECONFIG2 apply -k ./nse-auto-scale
```
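
At this point only the supplier pod should be running in `ns-nsm-linkerd`; proxy endpoints are spawned on demand when a client connects:
```bash
kubectl --kubeconfig=$KUBECONFIG2 get pods -n ns-nsm-linkerd
```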

Install the http-server on the second cluster and inject the linkerd proxy:
```bash
export KUBECONFIG=$KUBECONFIG2
kubectl apply -f ./greeting/server.yaml
kubectl get deploy greeting -o yaml | linkerd inject - | kubectl apply -f -
```
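
Optionally, wait for the injected pod to become ready; it should report `2/2` containers once the `linkerd-proxy` sidecar is up:
```bash
kubectl wait --timeout=2m --for=condition=ready pod -l app=greeting
kubectl get pods -l app=greeting
```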


Wait for the `alpine` client to be ready:
```bash
kubectl --kubeconfig=$KUBECONFIG1 wait --timeout=2m --for=condition=ready pod -l app=alpine
```

Install the tools we need inside the `alpine` client (`curl`, `iproute2`, `iptables`):
```bash
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- apk add curl
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- apk add iproute2
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- apk add iptables
```

Mark TCP traffic destined for `199.0.0.0/8`, rewrite that destination range to `10.0.0.0/8` with NETMAP, and source-NAT the marked packets to the NSM interface address `172.16.1.3`:
```bash
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- iptables -t mangle -A OUTPUT -p tcp -d 199.0.0.0/8 -j MARK --set-mark 8
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- iptables -t nat -A OUTPUT -m mark --mark 8 -j NETMAP --to 10.0.0.0/8
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- iptables -t nat -A POSTROUTING -m mark --mark 8 -j SNAT --to 172.16.1.3
```
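
If something misbehaves later, the installed NAT rules can be inspected from inside the pod:
```bash
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- iptables -t nat -L -n -v
```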

Route the marked traffic through the NSM interface using a dedicated routing table:
```bash
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- sh -c 'echo "201 nsm_table" >> /etc/iproute2/rt_tables'
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- ip ru add fwmark 8 lookup nsm_table pref 3333
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- ip ro add default via 172.16.1.3 table nsm_table
```
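
The policy routing can be checked the same way:
```bash
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- ip rule show
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- ip route show table nsm_table
```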

Verify connectivity:
```bash
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- curl -s greeting.default:9080 | grep -o "hello world from linkerd"
```
**Expected output** is "hello world from linkerd"
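
If the filter matches nothing, re-run the request without `grep` to see the raw response and any connection errors:
```bash
kubectl --kubeconfig=$KUBECONFIG1 exec deploy/alpine -c alpine -- curl -sv greeting.default:9080
```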

Congratulations!
You have made an interdomain connection between two clusters via NSM + linkerd!

## Cleanup

```bash
export KUBECONFIG=$KUBECONFIG2
kubectl delete deployment greeting
kubectl delete ns ns-nsm-linkerd
linkerd uninstall | kubectl delete -f -
```
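
The NetworkService itself lives in `nsm-system` and is not removed together with the `ns-nsm-linkerd` namespace, so delete it explicitly:
```bash
kubectl delete -f ./networkservice.yaml
```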

```bash
kubectl --kubeconfig=$KUBECONFIG1 delete deployment alpine
```
35 changes: 35 additions & 0 deletions examples/interdomain/linkerd/greeting/client.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: alpine
  labels:
    app: alpine
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpine
  template:
    metadata:
      labels:
        app: alpine
      annotations:
        networkservicemesh.io: kernel://nsm-linkerd@my.cluster2/nsm-1?app=greeting
    spec:
      containers:
        - name: alpine
          image: alpine:3.15.0
          imagePullPolicy: IfNotPresent
          stdin: true
          tty: true
          securityContext:
            privileged: true
        - name: server
          image: hashicorp/http-echo:alpine
          args:
            - -text="hello world from greeting.cluster1"
            - -listen=:9081
          ports:
            - containerPort: 9081
              name: http
51 changes: 51 additions & 0 deletions examples/interdomain/linkerd/greeting/server.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: greeting
  labels:
    app: greeting
    service: greeting
spec:
  ports:
    - port: 9080
      name: http
  selector:
    app: greeting
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: greeting-sa
  labels:
    account: greeting
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: greeting
  labels:
    app: greeting
  annotations:
    linkerd.io/inject: enabled
spec:
  replicas: 1
  selector:
    matchLabels:
      app: greeting
  template:
    metadata:
      labels:
        app: greeting
    spec:
      serviceAccountName: greeting-sa
      containers:
        - name: server
          image: hashicorp/http-echo:alpine
          args:
            - -text="hello world from linkerd"
            - -listen=:9080
          ports:
            - containerPort: 9080
              name: http
---
18 changes: 18 additions & 0 deletions examples/interdomain/linkerd/networkservice.yaml
---
apiVersion: networkservicemesh.io/v1
kind: NetworkService
metadata:
  name: nsm-linkerd
  namespace: nsm-system
spec:
  payload: IP
  matches:
    - source_selector:
      fallthrough: true
      routes:
        - destination_selector:
            podName: "{{ .podName }}"
    - source_selector:
      routes:
        - destination_selector:
            any: "true"
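
The two matches implement scale-from-zero: a request first tries to route to an existing proxy endpoint labeled with the client's own `podName`; if none exists, it falls through to the supplier (`any: "true"`), which spawns one from the pod template below.
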
2 changes: 2 additions & 0 deletions examples/interdomain/linkerd/nse-auto-scale/iptables-map.yaml
---
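# Redirect TCP arriving on the NSM interface to the linkerd outbound proxy listener (port 4140)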
- -t nat -I PREROUTING 1 -p tcp -i {{ .NsmInterfaceName }} -j DNAT --to-destination 127.0.0.1:4140
21 changes: 21 additions & 0 deletions examples/interdomain/linkerd/nse-auto-scale/kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: ns-nsm-linkerd
bases:
  - https://github.com/networkservicemesh/deployments-k8s/apps/nse-supplier-k8s?ref=4b428a8d16019d09c338938b362d001c1eed1a7b

patchesStrategicMerge:
  - patch-supplier.yaml

configMapGenerator:
  - name: supplier-pod-template-configmap
    files:
      - pod-template.yaml
  - name: iptables-map
    files:
      - iptables-map.yaml

generatorOptions:
  disableNameSuffixHash: true
29 changes: 29 additions & 0 deletions examples/interdomain/linkerd/nse-auto-scale/patch-supplier.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nse-supplier-k8s
spec:
  template:
    spec:
      containers:
        - name: nse-supplier
          env:
            - name: NSM_SERVICE_NAME
              value: nsm-linkerd
            - name: NSM_LABELS
              value: any:true
            - name: NSM_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: NSM_POD_DESCRIPTION_FILE
              value: /run/supplier/pod-template.yaml
          volumeMounts:
            - name: pod-file
              mountPath: /run/supplier
              readOnly: true
      volumes:
        - name: pod-file
          configMap:
            name: supplier-pod-template-configmap
74 changes: 74 additions & 0 deletions examples/interdomain/linkerd/nse-auto-scale/pod-template.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: proxy-{{ index .Labels "podName" }}
  labels:
    "spiffe.io/spiffe-id": "true"
  annotations:
    linkerd.io/inject: enabled
    config.linkerd.io/enable-debug-sidecar: "false"
spec:
  restartPolicy: Never
  containers:
    - name: nse
      image: nikitaxored/cmd-nse-l7-proxy:clean
      imagePullPolicy: IfNotPresent
      securityContext:
        privileged: true
      env:
        - name: SPIFFE_ENDPOINT_SOCKET
          value: unix:///run/spire/sockets/agent.sock
        - name: NSM_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAME
          value: {{ index .Labels "podName" }}
        - name: NSM_CONNECT_TO
          value: unix:///var/lib/networkservicemesh/nsm.io.sock
        - name: NSM_CIDR_PREFIX
          value: 172.16.1.2/31
        - name: NSM_SERVICE_NAMES
          value: nsm-linkerd
        - name: NSM_LABELS
          value: app:{{ index .Labels "app" }}
        - name: NSM_IDLE_TIMEOUT
          value: 240s
        - name: NSM_LOG_LEVEL
          value: TRACE
        - name: NSM_REWRITEIP
          value: "false"
        - name: NSM_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NSM_RULES_CONFIG_PATH
          value: iptables-map/iptables-map.yaml
        - name: NSM_REGISTRY_CLIENT_POLICIES
          value: ""
      volumeMounts:
        - name: spire-agent-socket
          mountPath: /run/spire/sockets
          readOnly: true
        - name: nsm-socket
          mountPath: /var/lib/networkservicemesh
          readOnly: true
        - name: iptables-config-map
          mountPath: /iptables-map
      resources:
        limits:
          memory: 40Mi
          cpu: 150m
  volumes:
    - name: spire-agent-socket
      hostPath:
        path: /run/spire/sockets
        type: Directory
    - name: nsm-socket
      hostPath:
        path: /var/lib/networkservicemesh
        type: DirectoryOrCreate
    - name: iptables-config-map
      configMap:
        name: iptables-map