This is the Kubernetes Python Operator for Apache Ranger.
Note: This operator requires Juju 3.1 or later.
Ranger requires PostgreSQL to store its state. Therefore, its deployment requires a relation with the Postgres charm:
# this will be blocked until the relation with Postgres is created
juju deploy ranger-k8s
juju deploy postgresql-k8s --channel 14/stable --trust
juju relate ranger-k8s:db postgresql-k8s:database
Refer to CONTRIBUTING.md for details on bootstrapping a juju controller for microk8s.
The Charmed Ranger Operator uses Ranger usersync to synchronize users, groups, and memberships from a compatible LDAP server (e.g. OpenLDAP or Active Directory) to the Ranger admin. Usersync can be configured when the Ranger charm is deployed. While the Ranger admin application can be scaled, only one usersync unit should be deployed.
juju deploy ranger-k8s --config charm-function=usersync ranger-usersync-k8s
# optional LDAP relation
juju deploy comsys-openldap-k8s --channel=edge
juju relate ranger-usersync-k8s comsys-openldap-k8s
This charm connects to the Ranger admin and OpenLDAP via relations, but can also be configured directly.
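When no LDAP relation is available, the usersync application can be configured directly. A minimal sketch follows; the option names and values below are assumptions for illustration, not confirmed charm config keys, so check the real ones first:

```shell
# List the actual configuration options supported by the charm:
juju config ranger-usersync-k8s

# Illustrative only: these keys and values are hypothetical examples
# of pointing usersync at an LDAP server directly.
juju config ranger-usersync-k8s \
  sync-ldap-url="ldap://ldap.example.com:389" \
  sync-ldap-bind-dn="cn=admin,dc=example,dc=com" \
  sync-ldap-bind-password="example-password"
```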
Related applications must have the Ranger plugin configured. The Ranger plugin schedules a regular download of Ranger policies (every 3 minutes) and caches them within the related application. On an access request, the requesting user's groups are compared against the Ranger group policies to determine access. The related application should therefore use the same source for groups.
Before relating an application to the Ranger charm, set the application's ranger-service-name
configuration parameter. This will be the name of the Ranger service created for the application.
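For example, setting the service name on a Trino application before creating the relation (the value `trino-service` is an illustrative placeholder, not a required name):

```shell
# "trino-service" is an example value; it becomes the name of the
# Ranger service created for this application.
juju config trino-k8s ranger-service-name=trino-service
```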
In the Trino K8s charm, this group configuration happens automatically on relation with the Ranger charm.
# relate trino and ranger charms:
juju relate trino-k8s:policy ranger-k8s:policy
# confirm applications are related and wait until active:
juju status --relations
# provide the ranger configuration file:
juju config ranger-k8s --file=user-group-configuration.yaml
Charmed OpenSearch should be integrated with the Ranger admin charm to enable auditing functionality for data access.
Charmed OpenSearch is a machine charm, unlike Charmed Ranger, which is a K8s charm. As such, we need to bootstrap an LXD controller and create a cross-controller relation. This can be achieved as follows:
# Bootstrap a LXD controller
juju bootstrap lxd lxd-controller
# Add a Model for OpenSearch
juju add-model opensearch
# Configure system settings of the host (required by OpenSearch)
cat <<EOF > cloudinit-userdata.yaml
cloudinit-userdata: |
  postruncmd:
    - [ 'echo', 'vm.max_map_count=262144', '>>', '/etc/sysctl.conf' ]
    - [ 'echo', 'vm.swappiness=0', '>>', '/etc/sysctl.conf' ]
    - [ 'echo', 'net.ipv4.tcp_retries2=5', '>>', '/etc/sysctl.conf' ]
    - [ 'echo', 'fs.file-max=1048576', '>>', '/etc/sysctl.conf' ]
    - [ 'sysctl', '-p' ]
EOF
sudo tee -a /etc/sysctl.conf > /dev/null <<EOT
vm.max_map_count=262144
vm.swappiness=0
net.ipv4.tcp_retries2=5
fs.file-max=1048576
EOT
sudo sysctl -p
juju model-config --file=./cloudinit-userdata.yaml
# Deploy OpenSearch
juju deploy ch:opensearch --channel=2/edge
# Deploy self-signed-certificates operator for enabling TLS
juju deploy self-signed-certificates --channel=latest/stable
# Enable TLS via relation
juju integrate self-signed-certificates opensearch
# Scale OpenSearch to 3 units
juju add-unit opensearch -n 2
# Offer the `opensearch-client` endpoint for consumption
juju offer opensearch:opensearch-client
# Switch back to your K8s controller and consume offer
juju switch ranger-controller
juju consume lxd-controller:admin/opensearch.opensearch
# Finally, relate the applications
juju relate ranger-k8s opensearch
More details on the setup process can be found in the Charmed OpenSearch documentation.
The Ranger operator exposes its ports using the Nginx Ingress Integrator operator. You must first ensure an Nginx Ingress Controller is deployed. To enable TLS connections, you must have a TLS certificate stored as a Kubernetes secret (the default name is "ranger-tls"). A self-signed certificate for development purposes can be created as follows:
# Generate private key
openssl genrsa -out server.key 2048
# Generate a certificate signing request
openssl req -new -key server.key -out server.csr -subj "/CN=ranger-k8s"
# Create self-signed certificate
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt -extfile <(printf "subjectAltName=DNS:ranger-k8s")
# Create a k8s secret
kubectl create secret tls ranger-tls --cert=server.crt --key=server.key
The Nginx Ingress Integrator can then be deployed and connected to the Ranger operator using the Juju command line as follows:
# Deploy ingress controller.
microk8s enable ingress:default-ssl-certificate=ranger-k8s/ranger-tls
juju deploy nginx-ingress-integrator --channel edge --revision 71
juju relate ranger-k8s nginx-ingress-integrator
Once deployed, the hostname defaults to the name of the application (ranger-k8s) and can be changed via the external-hostname configuration option on the Ranger operator.
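For example, to serve Ranger under a custom hostname (the value `ranger.example.com` is a placeholder; use your real DNS name):

```shell
# "ranger.example.com" is an example hostname, not a required value.
juju config ranger-k8s external-hostname=ranger.example.com
```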
Apache Ranger is a stateless application; all of its metadata is stored in the related PostgreSQL database. Backup and restore are therefore achieved by backing up and restoring this data. A requirement for this is an AWS S3 bucket for use with the S3 integrator charm.
# Deploy the s3-integrator charm
juju deploy s3-integrator
# Provide S3 credentials
juju run s3-integrator/leader sync-s3-credentials access-key=<your_key> secret-key=<your_secret_key>
# Configure the s3-integrator
juju config s3-integrator \
endpoint="https://s3.eu-west-2.amazonaws.com" \
bucket="ranger-backup-bucket-1" \
path="/ranger-backup" \
region="eu-west-2"
# Relate postgres
juju relate s3-integrator postgresql-k8s
More details and configuration values can be found in the documentation for the PostgreSQL K8s charm.
# Create a backup
juju run postgresql-k8s/leader create-backup --wait 5m
# List backups
juju run postgresql-k8s/leader list-backups
More details can be found in the PostgreSQL K8s charm's backup documentation.
# Check available backups
juju run postgresql-k8s/leader list-backups
# Restore backup by ID
juju run postgresql-k8s/leader restore backup-id=YYYY-MM-DDTHH:MM:SSZ --wait 5m
More details can be found in the PostgreSQL K8s charm's restore documentation.
The Apache Ranger charm can be related to the Canonical Observability Stack in order to collect logs and telemetry. To deploy cos-lite and expose its endpoints as offers, follow these steps:
# Deploy the cos-lite bundle:
juju add-model cos
juju deploy cos-lite --trust
# Expose the cos integration endpoints:
juju offer prometheus:metrics-endpoint
juju offer loki:logging
juju offer grafana:grafana-dashboard
# Relate ranger to the cos-lite apps:
juju relate ranger-k8s admin/cos.grafana
juju relate ranger-k8s admin/cos.loki
juju relate ranger-k8s admin/cos.prometheus
# Access grafana with username "admin" and password:
juju run grafana/0 -m cos get-admin-password --wait 1m
# Grafana is listening on port 3000 of the app ip address.
# Dashboard can be accessed under "Ranger Admin Metrics".
This charm is still in active development. Please see the Juju SDK docs for guidelines on enhancing this charm following best practices, and CONTRIBUTING.md for developer guidance.