Commit

feat(#89): add support for pulling the Locust image from private registries (#98)
jachinte authored Apr 22, 2023
1 parent dabc29a commit 9d7bfa4
Showing 12 changed files with 194 additions and 36 deletions.
57 changes: 45 additions & 12 deletions docs/advanced_topics.md
@@ -4,25 +4,25 @@ title: Advanced topics

# Advanced topics

Basic configuration is not always enough to satisfy performance-test needs, for example when working with Kafka and MSK. Below is a collection of topics of an advanced nature. This list will keep growing as the tool matures.

## Kafka & AWS MSK configuration

Generally speaking, the usage of Kafka in a _locustfile_ is identical to how it would be used anywhere else within the cloud context. Thus, no special setup is needed for the purposes of performance testing with the _Operator_.
That being said, if an organization is using Kafka in production, chances are that authenticated Kafka is being used. One of the main providers of such a managed service is _AWS_ in the form of _MSK_. To that end, the _Operator_ has _out-of-the-box_ support for MSK.

To enable performance testing with _MSK_, a central/global Kafka user can be created by the "cloud admin" or "the team" responsible for the _Operator_ deployment within the organization. The _Operator_ can then be easily configured to inject the configuration of that user as environment variables in all generated resources. Those variables can be used by the test to establish authentication with the Kafka broker.

| Variable Name | Description |
| :------------------------------- | :------------------------------------------------------------------------------- |
| `KAFKA_BOOTSTRAP_SERVERS` | Kafka bootstrap servers |
| `KAFKA_SECURITY_ENABLED`         | Whether security is enabled for the Kafka cluster                                 |
| `KAFKA_SECURITY_PROTOCOL_CONFIG` | Security protocol. Options: `PLAINTEXT`, `SASL_PLAINTEXT`, `SASL_SSL`, `SSL` |
| `KAFKA_SASL_MECHANISM` | Authentication mechanism. Options: `PLAINTEXT`, `SCRAM-SHA-256`, `SCRAM-SHA-512` |
| `KAFKA_USERNAME` | The username used to authenticate Kafka clients with the Kafka server |
| `KAFKA_PASSWORD` | The password used to authenticate Kafka clients with the Kafka server |
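
For illustration, a generated pod with _MSK_ support enabled might carry an environment block like the following sketch. The variable names match the table above; all values are placeholders:

```yaml
# Sketch only; the actual values are injected from the Operator's central configuration
env:
  - name: KAFKA_BOOTSTRAP_SERVERS
    value: "b-1.mycluster.kafka.eu-west-1.amazonaws.com:9096"
  - name: KAFKA_SECURITY_ENABLED
    value: "true"
  - name: KAFKA_SECURITY_PROTOCOL_CONFIG
    value: "SASL_SSL"
  - name: KAFKA_SASL_MECHANISM
    value: "SCRAM-SHA-512"
  - name: KAFKA_USERNAME
    value: "locust-test-user"
  - name: KAFKA_PASSWORD
    value: "<injected-secret-value>"
```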

---

## Dedicated Kubernetes Nodes

@@ -34,7 +34,7 @@ This allows generated resources to have specific _Affinity_ options.

!!! Note

    The _Custom Resource Definition Spec_ is designed with modularity and expandability in mind. This means that although a specific set of _Kubernetes Affinity_ options is supported today, extending this support based on need is a streamlined and easy process. If additional support is needed, don't hesitate to open a [feature request](https://github.com/AbdelrhmanHamouda/locust-k8s-operator/issues).

#### Affinity Options

@@ -131,11 +131,44 @@ closely [Kubernetes native definition](https://kubernetes.io/docs/concepts/sched
...
```

## Usage of a private image registry

Images from a private image registry can be used through various methods, as described in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/containers/images/#using-a-private-registry). One of those methods relies on setting `imagePullSecrets` for pods. The operator supports this method: simply set the `imagePullSecrets` option in the deployed custom resource. For example:

```yaml title="locusttest-pull-secret-cr.yaml"
apiVersion: locust.io/v1
...
spec:
  image: ghcr.io/mycompany/locust:latest #(1)!
  imagePullSecrets: #(2)!
    - gcr-secret
...
```

1. Specify which Locust image to use for both master and worker containers.
2. [Optional] Specify an existing pull secret to use for master and worker pods.
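
Note that the operator does not create the referenced pull secret; it must already exist in the target namespace. Below is a minimal sketch of such a secret, assuming `dockerconfigjson`-type credentials for a private registry (the name must match the `imagePullSecrets` entry above). The same secret can also be created imperatively with `kubectl create secret docker-registry gcr-secret --docker-server=<registry> --docker-username=<user> --docker-password=<password>`.

```yaml title="gcr-secret.yaml"
apiVersion: v1
kind: Secret
metadata:
  name: gcr-secret
type: kubernetes.io/dockerconfigjson
data:
  # Base64-encoded Docker config JSON holding the registry credentials
  .dockerconfigjson: <base64-encoded Docker config JSON>
```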

### Image pull policy

Kubernetes uses the image tag and pull policy to control when the kubelet attempts to download (pull) a container image. The pull policy can be defined through the `imagePullPolicy` option, as explained in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy). When using the operator, the `imagePullPolicy` option can be configured directly in the custom resource. For example:

```yaml title="locusttest-pull-policy-cr.yaml"
apiVersion: locust.io/v1
...
spec:
  image: ghcr.io/mycompany/locust:latest #(1)!
  imagePullPolicy: Always #(2)!
...
```

1. Specify which Locust image to use for both master and worker containers.
2. [Optional] Specify the pull policy to use for the containers within the master and worker pods. Supported options are `Always`, `IfNotPresent`, and `Never`.

## Automatic Cleanup for Finished Master and Worker Jobs

Once load tests finish, master and worker jobs remain available in Kubernetes.
You can set a time-to-live (TTL) value in the operator's Helm chart, so that
Kubernetes jobs become eligible for cascading removal once the TTL expires. This means
that Master and Worker jobs and their dependent objects (e.g., pods) will be deleted.

Note that setting up a TTL will not delete `LocustTest` or `ConfigMap` resources.
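
As a sketch, the TTL could be set through the chart's values. The exact key is defined in the chart's `values.yaml`, so the path below is illustrative only:

```yaml title="values.yaml (sketch)"
config:
  loadGenerationJobs:
    # Finished master and worker jobs (and their pods) become eligible
    # for cascading deletion 5 minutes after completion
    ttlSecondsAfterFinished: 300
```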
@@ -159,7 +192,7 @@ Read more about the `ttlSecondsAfterFinished` parameter in Kubernetes's [officia

### Kubernetes Support for `ttlSecondsAfterFinished`

Support for the `ttlSecondsAfterFinished` parameter was added in Kubernetes v1.12.
In case you're deploying the Locust operator to a Kubernetes cluster that does not
support `ttlSecondsAfterFinished`, you may leave the Helm key empty or use an empty
string. In that case, job definitions will not include the parameter.
6 changes: 5 additions & 1 deletion docs/getting_started.md
@@ -67,7 +67,11 @@ spec:
7. The number of _worker_ nodes to spawn in the cluster.
8. [Optional] Name of the _configMap_ to mount into the pod.

#### Other options

##### Labels and annotations

You can add labels and annotations to generated Pods. For example:
```yaml title="locusttest-cr.yaml"
apiVersion: locust.io/v1
...
```
39 changes: 18 additions & 21 deletions docs/helm_deploy.md
@@ -7,31 +7,28 @@ description: Instructions on how to deploy Locust Kubernetes Operator with HELM

In order to deploy using Helm, follow the steps below:

1. Add the _Operator's_ HELM repo

    - `helm repo add locust-k8s-operator https://abdelrhmanhamouda.github.io/locust-k8s-operator/`

    !!! note

        If the repo has been added before, run `helm repo update` in order to pull the latest available release!

2. Install the _Operator_

    - `#!bash helm install locust-operator locust-k8s-operator/locust-k8s-operator` - The _Operator_ will be ready in around 40-60 seconds.
    - This will cause the below resources to be deployed in the currently active _k8s_ context & namespace:
        - [crd-locusttest.yaml]
            - This _CRD_ is the first part of the _Operator_ pattern. It is needed in order to enable _Kubernetes_ to understand the _LocustTest_ custom resource and allow its deployment.
        - [serviceaccount-and-roles.yaml]
            - ServiceAccount and Role bindings that enable the _Controller_ to have the needed privileges inside the cluster to watch and manage the related resources.
        - [deployment.yaml]
            - The _Controller_ responsible for managing and reacting to the cluster resources.

[//]: # "Resources urls"

[crd-locusttest.yaml]: https://github.com/AbdelrhmanHamouda/locust-k8s-operator/blob/master/kube/crd/locust-test-crd.yaml
[serviceaccount-and-roles.yaml]: https://github.com/AbdelrhmanHamouda/locust-k8s-operator/blob/master/charts/locust-k8s-operator/templates/serviceaccount-and-roles.yaml
[deployment.yaml]: https://github.com/AbdelrhmanHamouda/locust-k8s-operator/blob/master/charts/locust-k8s-operator/templates/deployment.yaml
12 changes: 12 additions & 0 deletions kube/crd/locust-test-crd.yaml
@@ -109,6 +109,18 @@ spec:
image: # Child field 'image'
  description: Locust image
  type: string
imagePullPolicy:
  description: Image pull policy
  type: string
  enum:
    - "Always"
    - "IfNotPresent"
    - "Never"
imagePullSecrets:
  description: Secrets for pulling images from private registries
  type: array
  items:
    type: string
configMap: # Child field 'configMap'
  description: Configuration map name containing the test
  type: string
4 changes: 4 additions & 0 deletions kube/sample-cr/locust-test-cr.yaml
@@ -4,6 +4,10 @@ metadata:
  name: demo.test
spec:
  image: locustio/locust:latest
  # [Optional-Section] Image pull policy and secrets
  imagePullPolicy: Always
  imagePullSecrets:
    - "my-private-registry-secret"

  # [Optional-Section] Labels
  labels:
@@ -26,6 +26,8 @@ public class LoadGenerationNode {
    private List<String> command;
    private OperationalMode operationalMode;
    private String image;
    private String imagePullPolicy;
    private List<String> imagePullSecrets;
    private Integer replicas;
    private List<Integer> ports;
    private String configMap;
@@ -65,6 +65,8 @@ public LoadGenerationNode generateLoadGenNodeObject(LocustTest resource, Operati
            constructNodeCommand(resource, mode),
            mode,
            getNodeImage(resource),
            getNodeImagePullPolicy(resource),
            getNodeImagePullSecrets(resource),
            getReplicaCount(resource, mode),
            getNodePorts(resource, mode),
            getConfigMap(resource));
@@ -93,6 +95,14 @@ private String getNodeImage(LocustTest resource) {

    }

    private String getNodeImagePullPolicy(LocustTest resource) {
        return resource.getSpec().getImagePullPolicy();
    }

    private List<String> getNodeImagePullSecrets(LocustTest resource) {
        return resource.getSpec().getImagePullSecrets();
    }

    public LocustTestAffinity getNodeAffinity(LocustTest resource) {

        return config.isAffinityCrInjectionEnabled() ? resource.getSpec().getAffinity() : null;
@@ -12,6 +12,8 @@
import io.fabric8.kubernetes.api.model.ContainerPortBuilder;
import io.fabric8.kubernetes.api.model.EnvVar;
import io.fabric8.kubernetes.api.model.EnvVarBuilder;
import io.fabric8.kubernetes.api.model.LocalObjectReference;
import io.fabric8.kubernetes.api.model.LocalObjectReferenceBuilder;
import io.fabric8.kubernetes.api.model.NodeAffinity;
import io.fabric8.kubernetes.api.model.NodeAffinityBuilder;
import io.fabric8.kubernetes.api.model.NodeSelector;
@@ -219,6 +221,8 @@ private ObjectMeta prepareTemplateMetadata(LoadGenerationNode nodeConfig, String
    private PodSpec prepareTemplateSpec(LoadGenerationNode nodeConfig) {

        PodSpec templateSpec = new PodSpecBuilder()
            // Image pull secrets
            .withImagePullSecrets(prepareImagePullSecrets(nodeConfig))

            // Containers
            .withContainers(prepareContainerList(nodeConfig))
@@ -234,6 +238,23 @@ private PodSpec prepareTemplateSpec(LoadGenerationNode nodeConfig) {

    }

    private List<LocalObjectReference> prepareImagePullSecrets(LoadGenerationNode nodeConfig) {
        final List<LocalObjectReference> references = new ArrayList<>();

        // Map each configured secret name to a LocalObjectReference for the pod spec
        if (nodeConfig.getImagePullSecrets() != null) {
            references.addAll(
                nodeConfig.getImagePullSecrets()
                    .stream()
                    .map(secretName -> new LocalObjectReferenceBuilder().withName(secretName).build())
                    .toList()
            );
        }

        log.debug("Prepared image pull secrets: {}", references);

        return references;
    }

    private List<Volume> prepareVolumesList(LoadGenerationNode nodeConfig) {

        List<Volume> volumeList = new ArrayList<>();
@@ -343,7 +364,7 @@ private List<Container> prepareContainerList(LoadGenerationNode nodeConfig) {

        // Inject metrics container only if `master`
        if (nodeConfig.getOperationalMode().equals(MASTER)) {
            constantsList.add(prepareMetricsExporterContainer(nodeConfig.getImagePullPolicy()));
        }

        return constantsList;
@@ -355,9 +376,10 @@ private List<Container> prepareContainerList(LoadGenerationNode nodeConfig) {
     * <p>
     * Reference: <a href="https://github.com/ContainerSolutions/locust_exporter">locust exporter docs</a>
     *
     * @param pullPolicy The image pull policy
     * @return Container
     */
    private Container prepareMetricsExporterContainer(final String pullPolicy) {

        HashMap<String, String> envMap = new HashMap<>();

@@ -371,6 +393,7 @@ private Container prepareMetricsExporterContainer() {

            // Image
            .withImage(EXPORTER_IMAGE)
            .withImagePullPolicy(pullPolicy)

            // Ports
            .withPorts(new ContainerPortBuilder().withContainerPort(LOCUST_EXPORTER_PORT).build())
@@ -404,6 +427,7 @@ private Container prepareLoadGenContainer(LoadGenerationNode nodeConfig) {

            // Image
            .withImage(nodeConfig.getImage())
            .withImagePullPolicy(nodeConfig.getImagePullPolicy())

            // Ports
            .withPorts(prepareContainerPorts(nodeConfig.getPorts()))
@@ -36,5 +36,7 @@ public class LocustTestSpec implements KubernetesResource {
    private Integer workerReplicas;
    private String configMap;
    private String image;
    private String imagePullPolicy;
    private List<String> imagePullSecrets;

}
@@ -16,6 +16,7 @@
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
@@ -37,6 +38,8 @@ public class TestFixtures {
    public static final String KIND = "LocustTest";
    public static final String DEFAULT_SEED_COMMAND = "--locustfile src/demo.py";
    public static final String DEFAULT_TEST_IMAGE = "xlocust:latest";
    public static final String DEFAULT_IMAGE_PULL_POLICY = "IfNotPresent";
    public static final List<String> DEFAULT_IMAGE_PULL_SECRETS = Collections.emptyList();
    public static final String DEFAULT_TEST_CONFIGMAP = "demo-test-configmap";
    public static final String DEFAULT_NAMESPACE = "default";
    public static final int REPLICAS = 50;
@@ -120,6 +123,8 @@ public static LocustTest prepareLocustTest(String resourceName, Integer replicas
        spec.setWorkerCommandSeed(DEFAULT_SEED_COMMAND);
        spec.setConfigMap(DEFAULT_TEST_CONFIGMAP);
        spec.setImage(DEFAULT_TEST_IMAGE);
        spec.setImagePullPolicy(DEFAULT_IMAGE_PULL_POLICY);
        spec.setImagePullSecrets(DEFAULT_IMAGE_PULL_SECRETS);
        spec.setWorkerReplicas(replicas);

        var labels = new HashMap<String, Map<String, String>>();
@@ -8,6 +8,8 @@
import com.locust.operator.customresource.internaldto.LocustTestNodeAffinity;
import com.locust.operator.customresource.internaldto.LocustTestToleration;
import io.fabric8.kubernetes.api.model.KubernetesResourceList;
import io.fabric8.kubernetes.api.model.LocalObjectReference;
import io.fabric8.kubernetes.api.model.PodList;
import io.fabric8.kubernetes.api.model.batch.v1.JobList;
import lombok.NoArgsConstructor;
import lombok.SneakyThrows;
@@ -154,6 +156,17 @@ public static LoadGenerationNode prepareNodeConfigWithTolerations(String nodeNam

    }

    public static LoadGenerationNode prepareNodeConfigWithPullPolicyAndSecrets(
        String nodeName, OperationalMode mode, String pullPolicy, List<String> pullSecrets) {

        val nodeConfig = prepareNodeConfig(nodeName, mode);
        nodeConfig.setImagePullPolicy(pullPolicy);
        nodeConfig.setImagePullSecrets(pullSecrets);

        return nodeConfig;

    }

    public static <T extends KubernetesResourceList<?>> void assertK8sResourceCreation(String nodeName, T resourceList) {

        assertSoftly(softly -> {
@@ -163,6 +176,29 @@ public static <T extends KubernetesResourceList<?>> void assertK8sResourceCreati

    }

    public static void assertImagePullData(LoadGenerationNode nodeConfig, PodList podList) {

        podList.getItems().forEach(pod -> {
            final List<String> references = pod.getSpec()
                .getImagePullSecrets()
                .stream()
                .map(LocalObjectReference::getName)
                .toList();

            assertSoftly(softly -> {
                softly.assertThat(references).isEqualTo(nodeConfig.getImagePullSecrets());
            });

            pod.getSpec()
                .getContainers()
                .forEach(container -> {
                    assertSoftly(softly -> {
                        softly.assertThat(container.getImagePullPolicy()).isEqualTo(nodeConfig.getImagePullPolicy());
                    });
                });
        });
    }

    public static void assertK8sTtlSecondsAfterFinished(JobList jobList, Integer ttlSecondsAfterFinished) {
        jobList.getItems().forEach(job -> {
            val actualTtlSecondsAfterFinished = job.getSpec().getTtlSecondsAfterFinished();