diff --git a/assets/jfrog/artifactory-ha-107.90.14.tgz b/assets/jfrog/artifactory-ha-107.90.14.tgz new file mode 100644 index 0000000000..2ce24d3326 Binary files /dev/null and b/assets/jfrog/artifactory-ha-107.90.14.tgz differ diff --git a/assets/jfrog/artifactory-jcr-107.90.14.tgz b/assets/jfrog/artifactory-jcr-107.90.14.tgz new file mode 100644 index 0000000000..354c969a77 Binary files /dev/null and b/assets/jfrog/artifactory-jcr-107.90.14.tgz differ diff --git a/assets/kasten/k10-7.0.1101.tgz b/assets/kasten/k10-7.0.1101.tgz new file mode 100644 index 0000000000..951edb231b Binary files /dev/null and b/assets/kasten/k10-7.0.1101.tgz differ diff --git a/assets/new-relic/nri-bundle-5.0.94.tgz b/assets/new-relic/nri-bundle-5.0.94.tgz new file mode 100644 index 0000000000..0fc7a9e537 Binary files /dev/null and b/assets/new-relic/nri-bundle-5.0.94.tgz differ diff --git a/assets/stackstate/stackstate-k8s-agent-1.0.98.tgz b/assets/stackstate/stackstate-k8s-agent-1.0.98.tgz new file mode 100644 index 0000000000..fe8fa33a6c Binary files /dev/null and b/assets/stackstate/stackstate-k8s-agent-1.0.98.tgz differ diff --git a/charts/jfrog/artifactory-ha/107.90.14/.helmignore b/charts/jfrog/artifactory-ha/107.90.14/.helmignore new file mode 100644 index 0000000000..b6e97f07fb --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/.helmignore @@ -0,0 +1,24 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +OWNERS + +tests/ \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/CHANGELOG.md b/charts/jfrog/artifactory-ha/107.90.14/CHANGELOG.md new file mode 100644 index 0000000000..5edb3eebb0 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/CHANGELOG.md @@ -0,0 +1,1466 @@ +# JFrog Artifactory-ha Chart Changelog +All changes to this chart will be documented in this file. + +## [107.90.14] - July 18, 2024 +* Fixed adding a colon in the image registry which breaks deployment [GH-1892](https://github.com/jfrog/charts/pull/1892) +* Added new `nginx.hosts` to use Nginx server_name directive instead of `ingress.hosts` +* Added a deprecation notice of ingress.hosts when `nginx.enabled` is true +* Added new evidence service +* Corrected database connection values based on sizing +* **IMPORTANT** +* Separate access from artifactory tomcat to run on its own dedicated tomcat + * With this change, access will be running in its own dedicated container + * This will give the ability to control resources and java options specific to access Can be done by passing the following: `access.javaOpts.other` `access.resources` `access.extraEnvironmentVariables` +* Updated the example link for downloading the DB driver +* Added Binary Provider recommendations + +## [107.89.0] - May 30, 2024 +* Fix the indentation of the commented-out sections in the values.yaml file + +## [107.88.0] - May 29, 2024 +* **IMPORTANT** +* Refactored `nginx.artifactoryConf` and `nginx.mainConf` configuration (moved to files/nginx-artifactory-conf.yaml and files/nginx-main-conf.yaml instead of keys in values.yaml) + +## [107.87.0] - May 29, 2024 +* Renamed `.Values.artifactory.openMetrics` to `.Values.artifactory.metrics` +* Align all liveness and readiness probes (Removed hard-coded values) + +##
[107.85.0] - May 29, 2024 +* Changed `migration.enabled` to false by default. For 6.x to 7.x migration, this flag needs to be set to `true` + +## [107.84.0] - May 29, 2024 +* Added image section for `initContainers` instead of `initContainerImage` +* Renamed `router.image.imagePullPolicy` to `router.image.pullPolicy` +* Removed loggers.image section +* Added support for `global.versions.initContainers` to override `initContainers.image.tag` +* Fixed an issue with extraSystemYaml merge +* **IMPORTANT** +* Renamed `artifactory.setSecurityContext` to `artifactory.podSecurityContext` +* Renamed `artifactory.uid` to `artifactory.podSecurityContext.runAsUser` +* Renamed `artifactory.gid` to `artifactory.podSecurityContext.runAsGroup` and `artifactory.podSecurityContext.fsGroup` +* Renamed `artifactory.fsGroupChangePolicy` to `artifactory.podSecurityContext.fsGroupChangePolicy` +* Renamed `artifactory.seLinuxOptions` to `artifactory.podSecurityContext.seLinuxOptions` +* Added flag `allowNonPostgresql`, defaults to false +* Update postgresql tag version to `15.6.0-debian-12-r5` +* Added a check if `initContainerImage` exists +* Fixed a wrong imagePullPolicy configuration +* Fixed an issue to generate unified secret to support artifactory fullname [GH-1882](https://github.com/jfrog/charts/issues/1882) +* Fixed an issue with template render on loggers [GH-1883](https://github.com/jfrog/charts/issues/1883) +* Override metadata and observability image tag with `global.versions.artifactory` value +* Fixed resource constraints for "setup" initContainer of nginx deployment [GH-962](https://github.com/jfrog/charts/issues/962) +* Added `.Values.artifactory.unifiedSecretsPrependReleaseName` for unified secret to prepend release name +* Fixed maxCacheSize and cacheProviderDir mix up under azure-blob-storage-v2-direct template in binarystore.xml + +## [107.83.0] - Mar 12, 2024 +* Added image section for `metadata` and `observability` + +## [107.82.0] - Mar 04, 2024 +* Added
`disableRouterBypass` flag as an experimental feature, to disable the artifactoryPath /artifactory/ and route all traffic through the Router. +* Removed Replicator Service + +## [107.81.0] - Feb 20, 2024 +* **IMPORTANT** +* Refactored systemYaml configuration (moved to files/system.yaml instead of key in values.yaml) +* Added ability to provide `extraSystemYaml` configuration in values.yaml which will merge with the existing system yaml when `systemYamlOverride` is not given [GH-1848](https://github.com/jfrog/charts/pull/1848) +* Added option to modify the new cache configs, maxFileSizeLimit and skipDuringUpload +* Added IPV4/IPV6 Dualstack flag support for Artifactory and nginx service +* Added `singleStackIPv6Cluster` flag, which manages the Nginx configuration to enable listening on IPv6 and proxying +* Fixed broken link for creating additional kubernetes resources. Refer [here](https://github.com/jfrog/log-analytics-prometheus/blob/master/helm/artifactory-ha-values.yaml) +* Refactored installerInfo configuration (moved to files/installer-info.json instead of key in values.yaml) + +## [107.80.0] - Feb 20, 2024 +* Updated README.md to create a namespace using `--create-namespace` as part of helm install + +## [107.79.0] - Feb 20, 2024 +* **IMPORTANT** +* Added `unifiedSecretInstallation` flag which enables single unified secret holding all internal (chart) secrets to `true` by default +* Added support for azure-blob-storage-v2-direct config +* Added option to set Nginx to write access_log to container STDOUT +* **Important change:** +* Update postgresql tag version to `15.2.0-debian-11-r23` +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**!
+* If this is an upgrade and you are using the default bundled PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x/12.x/13.x's postgresql.image.tag, previous postgresql.persistence.size and databaseUpgradeReady=true + +## [107.77.0] - April 22, 2024 +* Removed integration service +* Added recommended postgresql sizing configurations under sizing directory +* Updated artifactory-federation (probes, port, embedded mode) +* **IMPORTANT** +* setSecurityContext has been renamed to podSecurityContext. +* Moved podSecurityContext to values.yaml +* Fixed broken nginx port [GH-1860](https://github.com/jfrog/charts/issues/1860) +* Added nginx.customCommand to use custom commands for the nginx container + +## [107.76.0] - Dec 13, 2023 +* Added connectionTimeout and socketTimeout parameters under AWSS3 binarystore section +* Reduced nginx startupProbe initialDelaySeconds + +## [107.74.0] - Nov 30, 2023 +* Added recommended sizing configurations under sizing directory, please refer [here](README.md#apply-sizing-configurations-to-the-chart) +* **IMPORTANT** +* Added min kubeVersion ">= 1.19.0-0" in chart.yaml + +## [107.70.0] - Nov 30, 2023 +* Fixed - StatefulSet pod annotations changed from range to toYaml [GH-1828](https://github.com/jfrog/charts/issues/1828) +* Fixed - Invalid format for awsS3V3 `multiPartLimit,multipartElementSize` in binarystore.xml +* Fixed - Artifactory primary service condition +* Fixed - SecurityContext with runAsGroup in artifactory-ha [GH-1838](https://github.com/jfrog/charts/issues/1838) +* Added support for custom labels in the Nginx pods [GH-1836](https://github.com/jfrog/charts/pull/1836) +* Added podSecurityContext and containerSecurityContext for nginx +* Added support for nginx on openshift, set `podSecurityContext` and `containerSecurityContext` to false +* Renamed nginx internalPort 80,443 to 8080,8443 to support openshift + +## [107.69.0] - Sep 18, 2023 +* Adjust rtfs context +* Fixed - Metadata service does not
respect customVolumeMounts for DB CAs [GH-1815](https://github.com/jfrog/charts/issues/1815) + +## [107.68.8] - Sep 18, 2023 +* Reverted - Enabled `unifiedSecretInstallation` by default [GH-1819](https://github.com/jfrog/charts/issues/1819) +* Removed unused `artifactory.javaOpts` from values.yaml +* Removed openshift condition check from NOTES.txt +* Fixed an issue with artifactory node replicaCount [GH-1808](https://github.com/jfrog/charts/issues/1808) + +## [107.68.7] - Aug 28, 2023 +* Enabled `unifiedSecretInstallation` by default +* Removed unused `artifactory.javaOpts` from values.yaml + +## [107.67.0] - Aug 28, 2023 +* Add 'extraJavaOpts' and 'port' values to federation service + +## [107.66.0] - Aug 28, 2023 +* Added federation service container in artifactory +* Add rtfs service to ingress in artifactory + +## [107.64.0] - Aug 28, 2023 +* Added support to configure event.webhooks within generated system.yaml +* Fixed an issue to generate ssl certificate to support artifactory-ha fullname +* Added 'multiPartLimit' and 'multipartElementSize' parameters to awsS3V3 binary providers. +* Increased default Artifactory Tomcat acceptCount config to 400 +* Fixed Illegal Strict-Transport-Security header in nginx config + +## [107.63.0] - Aug 28, 2023 +* Added support for Openshift by adding the securityContext at container level. +* **IMPORTANT** +* Disable securityContext at container and pod level to deploy postgres on openshift. +* Fixed support for fsGroup in non-openshift environment and runAsGroup in openshift environment.
+* Fixed - Helm Template Error when using artifactory.loggers [GH-1791](https://github.com/jfrog/charts/issues/1791) +* Removed the nginx disable condition for openshift +* Fixed jfconnect disabling as micro-service on splitcontainers [GH-1806](https://github.com/jfrog/charts/issues/1806) + +## [107.62.0] - Jun 5, 2023 +* Added support for 'port' and 'useHttp' parameters for s3-storage-v3 binary provider [GH-1767](https://github.com/jfrog/charts/issues/1767) + +## [107.61.0] - May 31, 2023 +* Added new binary provider `google-storage-v2-direct` + +## [107.60.0] - May 31, 2023 +* Enabled `splitServicesToContainers` to true by default +* Updated the recommended values for small, medium and large installations to support the 'splitServicesToContainers' + +## [107.59.0] - May 31, 2023 +* Fixed reference of `terminationGracePeriodSeconds` +* **Breaking change** +* Updated the defaults of replicaCount (Values.artifactory.primary.replicaCount and Values.artifactory.node.replicaCount) to support Cloud-Native High Availability. Refer [Cloud-Native High Availability](https://jfrog.com/help/r/jfrog-installation-setup-documentation/cloud-native-high-availability) +* Updated the values of the recommended resources - values-small, values-medium and values-large according to the Cloud-Native HA support. +* **IMPORTANT** +* In the absence of custom parameters for primary.replicaCount and node.replicaCount on your deployment, it is recommended to specify the current values explicitly to prevent any undesired changes to the deployment structure. +* Please be advised that the configuration for resource allocation (requests, limits, javaOpts, affinity rules, etc) will now be applied solely under Values.artifactory.primary when using the new defaults. +* **Upgrade** +* Upgrade from primary-members to primary-only is recommended, and can be done by deploying the chart with the new values. +* During the upgrade, member pods should be deleted and new primary pods should be created.
This might trigger the creation of new PVCs. +* Added Support for Cold Artifact Storage as part of the systemYaml configuration (disabled by default) +* Added new binary provider `s3-storage-v3-archive` +* Fixed jfconnect disabling as micro-service on non-splitcontainers +* Fixed an issue whereby Artifactory failed to start when using persistence storage type `nfs` due to missing binarystore.xml + + +## [107.58.0] - Mar 23, 2023 +* Updated postgresql multi-arch tag version to `13.10.0-debian-11-r14` +* Removed obsolete `remove-lost-found` initContainer +* Added env JF_SHARED_NODE_HAENABLED under frontend when running in the container split mode + +## [107.57.0] - Mar 02, 2023 +* Updated initContainerImage and logger image to `ubi9/ubi-minimal:9.1.0.1793` + +## [107.55.0] - Feb 21, 2023 +* Updated initContainerImage and logger image to `ubi9/ubi-minimal:9.1.0.1760` +* Added a custom preStop to Artifactory router for allowing graceful termination to complete +* Fixed an invalid reference of node selector on artifactory-ha chart + +## [107.53.0] - Jan 20, 2023 +* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.7.1049` + +## [107.50.0] - Jan 20, 2023 +* Updated postgresql tag version to `13.9.0-debian-11-r11` +* Fixed make lint issue on artifactory-ha chart [GH-1714](https://github.com/jfrog/charts/issues/1714) +* Fixed an issue for capabilities check of ingress +* Updated jfrogUrl text path in migrate.sh file +* Added a note that from 107.46.x chart versions, `copyOnEveryStartup` is not needed for binarystore.xml, it is always copied via initContainers.
For more info, refer [GH-1723](https://github.com/jfrog/charts/issues/1723) + +## [107.49.0] - Jan 16, 2023 +* Changed logic in wait-for-primary container to use /dev/tcp instead of curl +* Added support for setting `seLinuxOptions` in `securityContext` [GH-1700](https://github.com/jfrog/charts/pull/1700) +* Added option to enable/disable proxy_request_buffering and proxy_buffering_off [GH-1686](https://github.com/jfrog/charts/pull/1686) +* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.7.1049` + +## [107.48.0] - Oct 27, 2022 +* Updated router version to `7.51.0` + +## [107.47.0] - Sep 29, 2022 +* Updated initContainerImage to `ubi8/ubi-minimal:8.6-941` +* Added support for annotations for artifactory statefulset and nginx deployment [GH-1665](https://github.com/jfrog/charts/pull/1665) +* Updated router version to `7.49.0` + +## [107.46.0] - Sep 14, 2022 +* **IMPORTANT** +* Added support for lifecycle hooks for all containers, changed `artifactory.postStartCommand` to `.Values.artifactory.lifecycle.postStart.exec.command` +* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.6-902` +* Update nginx configuration to allow websocket requests when using pipelines +* Fixed an issue to allow artifactory to make direct API calls to store instead of via jfconnect service when `splitServicesToContainers=true` +* Refactor binarystore.xml configuration (moved to `files/binarystore.xml` instead of key in values.yaml) +* Added new binary providers `s3-storage-v3-direct`, `azure-blob-storage-direct`, `google-storage-v2` +* Deprecated (removed) `aws-s3` binary provider [JetS3t library](https://www.jfrog.com/confluence/display/JFROG/Configuring+the+Filestore#ConfiguringtheFilestore-BinaryProvider) +* Deprecated (removed) `google-storage` binary provider and force persistence storage type `google-storage` to work with `google-storage-v2` only +* Copy binarystore.xml in init Container to fix existing persistence on file system in clear text +*
Removed obsolete `.Values.artifactory.binarystore.enabled` key +* Removed `newProbes.enabled`, default to new probes +* Added nginx.customCommand using inotifyd to reload nginx's config upon ssl secret or configmap changes [GH-1640](https://github.com/jfrog/charts/pull/1640) + +## [107.43.0] - Aug 25, 2022 +* Added flag `artifactory.replicator.ingress.enabled` to enable/disable ingress for replicator +* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.6-854` +* Updated router version to `7.45.0` +* Added flag `artifactory.schedulerName` to set the value of the schedulerName field for the pods [GH-1606](https://github.com/jfrog/charts/issues/1606) +* Enabled TLS based on access or router in values.yaml + +## [107.42.0] - Aug 25, 2022 +* Enabled database creds secret to use from unified secret +* Updated router version to `7.42.0` +* Added support to truncate (> 63 chars) for unifiedCustomSecretVolumeName + +## [107.41.0] - June 27, 2022 +* Added support for nginx.terminationGracePeriodSeconds [GH-1645](https://github.com/jfrog/charts/issues/1645) +* Fix nginx lifecycle values [GH-1646](https://github.com/jfrog/charts/pull/1646) +* Use an alternate command for `find` to copy custom certificates +* Added support for circle of trust using `circleOfTrustCertificatesSecret` secret name [GH-1623](https://github.com/jfrog/charts/pull/1623) + +## [107.40.0] - Jun 16, 2022 +* Deprecated k8s PodDisruptionBudget api policy/v1beta1 [GH-1618](https://github.com/jfrog/charts/issues/1618) +* Disabled node PodDisruptionBudget, statefulset and artifactory-primary service from artifactory-ha chart when member nodes are 0 +* From artifactory 7.38.x, joinKey can be retrieved from Admin > User Management > Settings in UI +* Fixed template name for artifactory-ha database creds [GH-1602](https://github.com/jfrog/charts/pull/1602) +* Allow templating for pod annotations [GH-1634](https://github.com/jfrog/charts/pull/1634) +* Added flags to control enable/disable infra
services in splitServicesToContainers + +## [107.39.0] - May 16, 2022 +* Fix default `artifactory.async.corePoolSize` [GH-1612](https://github.com/jfrog/charts/issues/1612) +* Added support of nginx annotations +* Reduce startupProbe `initialDelaySeconds` +* Align all liveness and readiness probes failureThreshold to `5` +* Added new flag `unifiedSecretInstallation` to enable single unified secret holding all the artifactory-ha secrets +* Updated router version to `7.38.0` + +## [107.38.0] - May 04, 2022 +* Added support for `global.nodeSelector` to artifactory and nginx pods +* Updated router version to `7.36.1` +* Added support for custom global probes timeout +* Updated frontend container command +* Added topologySpreadConstraints to artifactory and nginx, and add lifecycle hooks to nginx [GH-1596](https://github.com/jfrog/charts/pull/1596) +* Added support of extraEnvironmentVariables for all infra services containers +* Enabled the consumption (jfconnect) flag by default +* Fix jfconnect disabling on non-splitcontainers + +## [107.37.0] - Mar 08, 2022 +* Added support for customPorts in nginx deployment +* Bugfix - Wrong proxy_pass configurations for /artifactory/ in the default artifactory.conf +* Added signedUrlExpirySeconds option to artifactory.persistence.type aws-S3-V3 +* Updated router version to `7.35.0` +* Added useInstanceCredentials,enableSignedUrlRedirect option to google-storage-v2 +* Changed dependency charts repo to `charts.jfrog.io` + +## [107.36.0] - Mar 03, 2022 +* Remove pdn tracker which starts replicator service +* Added silent option for curl probes +* Added readiness health check for the artifactory container for k8s version < 1.20 +* Fix property file migration issue to system.yaml 6.x to 7.x + +## [107.35.0] - Feb 08, 2022 +* Updated router version to `7.32.1` + +## [107.33.0] - Jan 11, 2022 +* Make default value of anti-affinity soft +* Readme fixes +* Added support for setting `fsGroupChangePolicy` +* Added nginx
customInitContainers, customVolumes, customSidecarContainers [GH-1565](https://github.com/jfrog/charts/pull/1565) +* Updated router version to `7.30.0` + +## [107.32.0] - Dec 23, 2021 +* Updated logger image to `jfrog/ubi-minimal:8.5-204` +* Added default `8091` as `artifactory.tomcat.maintenanceConnector.port` for probes check +* Refactored probes to replace httpGet probes with basic exec + curl +* Refactored `database-creds` secret to create only when database values are passed +* Added new endpoints for probes `/artifactory/api/v1/system/liveness` and `/artifactory/api/v1/system/readiness` +* Enabled `newProbes:true` by default to use these endpoints +* Fix filebeat sidecar spool file permissions +* Updated filebeat sidecar container to `7.16.2` + +## [107.31.0] - Dec 17, 2021 +* Remove integration service feature flag to make it a mandatory service +* Update postgresql tag version to `13.4.0-debian-10-r39` +* Refactored `router.requiredServiceTypes` to support platform chart + +## [107.30.0] - Nov 30, 2021 +* Fixed incorrect permission for filebeat.yaml +* Updated healthcheck (liveness/readiness) api for integration service +* Disable readiness health check for the artifactory container when running in the container split mode +* Ability to start replicator on enabling pdn tracker + +## [107.29.0] - Nov 30, 2021 +* Added integration service container in artifactory +* Add support for Ingress Class Name in Ingress Spec [GH-1516](https://github.com/jfrog/charts/pull/1516) +* Fixed chart values to use curl instead of wget [GH-1529](https://github.com/jfrog/charts/issues/1529) +* Updated nginx config to allow websockets when pipelines is enabled +* Moved router.topology.local.requiredservicetypes from system.yaml to router as environment variable +* Added jfconnect in system.yaml +* Updated artifactory container's health probes to use artifactory api on rt-split +* Updated initContainerImage to `jfrog/ubi-minimal:8.5-204` +* Updated router version to `7.28.2` +* Set
Jfconnect enabled to `false` in the artifactory container when running in the container split mode + +## [107.28.0] - Nov 11, 2021 +* Added default cpu and memory values in initContainers +* Updated router version to `7.26.0` +* Bug fix - jmx port not exposed in artifactory service +* Updated `rbac.create` and `serviceAccount.create` to false by default for least privileges +* Fixed incorrect data type for `Values.router.serviceRegistry.insecure` in default values.yaml [GH-1514](https://github.com/jfrog/charts/pull/1514/files) +* **IMPORTANT** +* Changed init-container images from `alpine` to `ubi8/ubi-minimal` +* Added support for AWS License Manager using `.Values.aws.licenseConfigSecretName` + +## [107.27.0] - Oct 6, 2021 +* **Breaking change** +* Aligned probe structure (moved probes variables under config block) +* Added support for new probes (set to false by default) +* Bugfix - Invalid format for `multiPartLimit,multipartElementSize,maxCacheSize` in binarystore.xml [GH-1466](https://github.com/jfrog/charts/issues/1466) +* Added missioncontrol container in artifactory +* Dropped NET_RAW capability for the containers +* Added resources to migration-artifactory init container +* Added resources to all rt split containers +* Updated router version to `7.25.1` +* Added support for Ingress networking.k8s.io/v1/Ingress for k8s >=1.22 [GH-1487](https://github.com/jfrog/charts/pull/1487) +* Added min kubeVersion ">= 1.14.0-0" in chart.yaml +* Update alpine tag version to `3.14.2` +* Update busybox tag version to `1.33.1` +* Update postgresql tag version to `13.4.0-debian-10-r39` + +## [107.26.0] - Aug 20, 2021 +* Added Observability container (only when `splitServicesToContainers` is enabled) +* Added min kubeVersion ">= 1.12.0-0" in chart.yaml + +## [107.25.0] - Aug 13, 2021 +* Updated readme of chart to point to wiki.
Refer [Installing Artifactory](https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory) +* Added startupProbe and livenessProbe for RT-split containers +* Updated router version to 7.24.1 +* Added security hardening fixes +* Enabled startup probes for k8s >= 1.20.x +* Changed network policy to allow all ingress and egress traffic +* Added Observability changes +* Added support for global.versions.router (only when `splitServicesToContainers` is enabled) + +## [107.24.0] - July 27, 2021 +* Support global and product specific tags at the same time +* Added support for artifactory containers split + +## [107.23.0] - July 8, 2021 +* Bug fix - logger sidecar picks up wrong file in helm +* Allow filebeat metrics configuration in values.yaml + +## [107.22.0] - July 6, 2021 +* Update alpine tag version to `3.14.0` +* Added `nodePort` support to artifactory-service and nginx-service templates +* Removed redundant `terminationGracePeriodSeconds` in statefulset +* Increased `startupProbe.failureThreshold` time + +## [107.21.3] - July 2, 2021 +* Added ability to change sendreasonphrase value in server.xml via system yaml + +## [107.19.3] - May 20, 2021 +* Fix broken support for startupProbe for k8s < 1.18.x +* Removed an extraneous resources block from the prepare-custom-persistent-volume container in the primary statefulset +* Added support for `nameOverride` and `fullnameOverride` in values.yaml + +## [107.18.6] - May 4, 2021 +* Removed `JF_SHARED_NODE_PRIMARY` env to support Cloud Native HA +* Bumping chart version to align with app version +* Add `securityContext` option on nginx container + +## [5.0.0] - April 22, 2021 +* **Breaking change:** +* Increased default postgresql persistence size to `200Gi` +* Update postgresql tag version to `13.2.0-debian-10-r55` +* Update postgresql chart version to `10.3.18` in chart.yaml - [10.x Upgrade Notes](https://github.com/bitnami/charts/tree/master/bitnami/postgresql#to-1000) +* If this is a new deployment or you
already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x/12.x's postgresql.image.tag, previous postgresql.persistence.size and databaseUpgradeReady=true +* **IMPORTANT** +* This chart is only helm v3 compatible +* Fix support for Cloud Native HA +* Fixed filebeat-configmap naming +* Explicitly set ServiceAccount `automountServiceAccountToken` to 'true' +* Update alpine tag version to `3.13.5` + +## [4.13.2] - April 15, 2021 +* Updated Artifactory version to 7.17.9 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.9) + +## [4.13.1] - April 6, 2021 +* Updated Artifactory version to 7.17.6 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.6) +* Update alpine tag version to `3.13.4` + +## [4.13.0] - April 5, 2021 +* **IMPORTANT** +* Added `charts.jfrog.io` as default JFrog Helm repository +* Updated Artifactory version to 7.17.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.5) + +## [4.12.2] - Mar 31, 2021 +* Updated Artifactory version to 7.17.4 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.4) + +## [4.12.1] - Mar 30, 2021 +* Updated Artifactory version to 7.17.3 +* Add `timeoutSeconds` to all exec probes - Please refer [here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) + +## [4.12.0] - Mar 24, 2021 +* Updated Artifactory version to 7.17.2 +* Optimized startupProbe time + +## [4.11.0] - Mar 18, 2021 +* Add support to startupProbe + +## [4.10.0] - Mar 15, 2021 +* Updated Artifactory version to 
7.16.3 + +## [4.9.5] - Mar 09, 2021 +* Added HSTS header to nginx conf + +## [4.9.4] - Mar 9, 2021 +* Removed bintray URL references in the chart + +## [4.9.3] - Mar 04, 2021 +* Updated Artifactory version to 7.15.4 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.15.4) + +## [4.9.2] - Mar 04, 2021 +* Fixed creation of nginx-certificate-secret when Nginx is disabled + +## [4.9.1] - Feb 19, 2021 +* Update busybox tag version to `1.32.1` + +## [4.9.0] - Feb 18, 2021 +* Updated Artifactory version to 7.15.3 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.15.3) +* Add option to specify update strategy for Artifactory statefulset + +## [4.8.1] - Feb 11, 2021 +* Exposed "multiPartLimit" and "multipartElementSize" for the Azure Blob Storage Binary Provider + +## [4.8.0] - Feb 08, 2021 +* Updated Artifactory version to 7.12.8 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.12.8) +* Support for custom certificates using secrets +* **Important:** Switched docker images download from `docker.bintray.io` to `releases-docker.jfrog.io` +* Update alpine tag version to `3.13.1` + +## [4.7.9] - Feb 3, 2021 +* Fix copyOnEveryStartup for HA cluster license + +## [4.7.8] - Jan 25, 2021 +* Add support for hostAliases + +## [4.7.7] - Jan 11, 2021 +* Fix failures when using creds file for configuring google storage + +## [4.7.6] - Jan 11, 2021 +* Updated Artifactory version to 7.12.6 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.12.6) + +## [4.7.5] - Jan 07, 2021 +* Added support for optional tracker dedicated ingress `.Values.artifactory.replicator.trackerIngress.enabled` (defaults to false) + +## [4.7.4] - Jan 04, 2021 +* Fixed gid support for statefulset + +## [4.7.3]
- Dec 31, 2020 +* Added gid support for statefulset +* Add setSecurityContext flag to allow securityContext block to be removed from artifactory statefulset + +## [4.7.2] - Dec 29, 2020 +* **Important:** Removed `.Values.metrics` and `.Values.fluentd` (Fluentd and Prometheus integrations) +* Add support for creating additional kubernetes resources - [refer here](https://github.com/jfrog/log-analytics-prometheus/blob/master/artifactory-ha-values.yaml) +* Updated Artifactory version to 7.12.5 + +## [4.7.1] - Dec 21, 2020 +* Updated Artifactory version to 7.12.3 + +## [4.7.0] - Dec 18, 2020 +* Updated Artifactory version to 7.12.2 +* Added `.Values.artifactory.openMetrics.enabled` + +## [4.6.1] - Dec 11, 2020 +* Added configurable `.Values.global.versions.artifactory` in values.yaml + +## [4.6.0] - Dec 10, 2020 +* Update postgresql tag version to `12.5.0-debian-10-r25` +* Fixed `artifactory.persistence.googleStorage.endpoint` from `storage.googleapis.com` to `commondatastorage.googleapis.com` +* Updated chart maintainers email + +## [4.5.5] - Dec 4, 2020 +* **Important:** Renamed `.Values.systemYaml` to `.Values.systemYamlOverride` + +## [4.5.4] - Dec 1, 2020 +* Improve error message returned when attempting helm upgrade command + +## [4.5.3] - Nov 30, 2020 +* Updated Artifactory version to 7.11.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.11) + +## [4.5.2] - Nov 23, 2020 +* Updated Artifactory version to 7.11.2 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.11) +* Updated port namings on services and pods to allow for istio protocol discovery +* Change semverCompare checks to support hosted Kubernetes +* Add flag to disable creation of ServiceMonitor when enabling prometheus metrics +* Prevent the PostHook command from being executed if the user did not specify a command in the values file +* Fix issue with
tls file generation when nginx.https.enabled is false + +## [4.5.1] - Nov 19, 2020 +* Updated Artifactory version to 7.11.2 +* Bugfix - access.config.import.xml overriding Access Federation configurations + +## [4.5.0] - Nov 17, 2020 +* Updated Artifactory version to 7.11.1 +* Update alpine tag version to `3.12.1` + +## [4.4.6] - Nov 10, 2020 +* Pass system.yaml via external secret for advanced use cases +* Added support for custom ingress +* Bugfix - statefulset not picking up changes to database secrets + +## [4.4.5] - Nov 9, 2020 +* Updated Artifactory version to 7.10.6 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.6) + +## [4.4.4] - Nov 2, 2020 +* Add enablePathStyleAccess property for aws-s3-v3 binary provider template + +## [4.4.3] - Nov 2, 2020 +* Updated Artifactory version to 7.10.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.5) + +## [4.4.2] - Oct 22, 2020 +* Fixed chown bug where the Linux capability could not chown all files, causing log line warnings +* Fix Frontend timeout linting issue + +## [4.4.1] - Oct 20, 2020 +* Add flag to disable prepare-custom-persistent-volume init container + +## [4.4.0] - Oct 19, 2020 +* Updated Artifactory version to 7.10.2 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.2) + +## [4.3.4] - Oct 19, 2020 +* Add support to specify priorityClassName for nginx deployment + +## [4.3.3] - Oct 15, 2020 +* Fixed issue with node PodDisruptionBudget which was also getting applied to the primary +* Fix mandatory masterKey check issue when upgrading from 6.x to 7.x + +## [4.3.2] - Oct 14, 2020 +* Add support to allow more than 1 Primary in Artifactory-ha STS + +## [4.3.1] - Oct 9, 2020 +* Add global support for customInitContainersBegin + +## [4.3.0] - Oct 07, 2020 +* Updated Artifactory version to
7.9.1 +* **Breaking change:** Fix `storageClass` to correct `storageClassName` in values.yaml + +## [4.2.0] - Oct 5, 2020 +* Expose Prometheus metrics via a ServiceMonitor +* Parse log files for metric data with Fluentd + +## [4.1.0] - Sep 30, 2020 +* Updated Artifactory version to 7.9.0 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.9) + +## [4.0.12] - Sep 25, 2020 +* Update to use linux capability CAP_CHOWN instead of root base init container to avoid any use of root containers to pass Redhat security requirements + +## [4.0.11] - Sep 28, 2020 +* Setting chart coordinates in mitigation yaml + +## [4.0.10] - Sep 25, 2020 +* Update filebeat version to `7.9.2` + +## [4.0.9] - Sep 24, 2020 +* Fixed issue where container startup still waited for the DB when setting `waitForDatabase: false` + +## [4.0.8] - Sep 22, 2020 +* Updated readme + +## [4.0.7] - Sep 22, 2020 +* Fix lint issue in mitigation yaml + +## [4.0.6] - Sep 22, 2020 +* Fix broken mitigation yaml + +## [4.0.5] - Sep 21, 2020 +* Added mitigation yaml for Artifactory - [More info](https://github.com/jfrog/chartcenter/blob/master/docs/securitymitigationspec.md) + +## [4.0.4] - Sep 17, 2020 +* Added configurable session (UI) timeout in frontend microservice + +## [4.0.3] - Sep 17, 2020 +* Fix small typo in README and added proper required text to be shown while postgres upgrades + +## [4.0.2] - Sep 14, 2020 +* Updated Artifactory version to 7.7.8 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.7.8) + +## [4.0.1] - Sep 8, 2020 +* Added support for artifactory pro license (single node) installation.
+ +## [4.0.0] - Sep 2, 2020 +* **Breaking change:** Changed `imagePullSecrets` value from string to list +* **Breaking change:** Added `image.registry` and changed `image.version` to `image.tag` for docker images +* Added support for global values +* Updated maintainers in chart.yaml +* Update postgresql tag version to `12.3.0-debian-10-r71` +* Update postgresql sub chart version to `9.3.4` - [9.x Upgrade Notes](https://github.com/bitnami/charts/tree/master/bitnami/postgresql#900) +* **IMPORTANT** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass the previous 9.x/10.x `postgresql.image.tag` and `databaseUpgradeReady=true`. + +## [3.1.0] - Aug 13, 2020 +* Updated Artifactory version to 7.7.3 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.7) + +## [3.0.15] - Aug 10, 2020 +* Added enableSignedUrlRedirect for persistent storage type aws-s3-v3. + +## [3.0.14] - Jul 31, 2020 +* Update the README section on Nginx SSL termination to reflect the actual YAML structure. + +## [3.0.13] - Jul 30, 2020 +* Added condition to disable the migration scripts. + +## [3.0.12] - Jul 29, 2020 +* Document Artifactory node affinity. + +## [3.0.11] - Jul 28, 2020 +* Added maxConnections for persistent storage type aws-s3-v3. + +## [3.0.10] - Jul 28, 2020 +* Bugfix / support for userPluginSecrets with Artifactory 7 + +## [3.0.9] - Jul 27, 2020 +* Add tpl to external database secrets. +* Modified `scheme` to `artifactory-ha.scheme` + +## [3.0.8] - Jul 23, 2020 +* Added condition to disable the migration init container. + +## [3.0.7] - Jul 21, 2020 +* Updated Artifactory-ha Chart to add node and primary labels to pods and service objects.
+ +## [3.0.6] - Jul 20, 2020 +* Support custom CA and certificates + +## [3.0.5] - Jul 13, 2020 +* Updated Artifactory version to 7.6.3 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.6.3 +* Fixed Mysql database jar path in `preStartCommand` in README + +## [3.0.4] - Jul 8, 2020 +* Move some postgresql values to where they should be according to the subchart + +## [3.0.3] - Jul 8, 2020 +* Set Artifactory access client connections to the same value as the access threads. + +## [3.0.2] - Jul 6, 2020 +* Updated Artifactory version to 7.6.2 +* **IMPORTANT** +* Added ChartCenter Helm repository in README + +## [3.0.1] - Jul 01, 2020 +* Add dedicated ingress object for Replicator service when enabled + +## [3.0.0] - Jun 30, 2020 +* Update postgresql tag version to `10.13.0-debian-10-r38` +* Update alpine tag version to `3.12` +* Update busybox tag version to `1.31.1` +* **IMPORTANT** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! 
+* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass `postgresql.image.tag=9.6.18-debian-10-r7` and `databaseUpgradeReady=true` + +## [2.6.0] - Jun 29, 2020 +* Updated Artifactory version to 7.6.1 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.6.1 +* Add tpl for external database secrets + +## [2.5.8] - Jun 25, 2020 +* Stop loading the Nginx stream module because it is now a core module + +## [2.5.7] - Jun 18, 2020 +* Fixes bootstrap configMap issue on member node + +## [2.5.6] - Jun 11, 2020 +* Support list of custom secrets + +## [2.5.5] - Jun 11, 2020 +* Fixed incorrect information in NOTES.txt + +## [2.5.4] - Jun 12, 2020 +* Updated Artifactory version to 7.5.7 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.5.7 + +## [2.5.3] - Jun 8, 2020 +* Statically setting primary service type to ClusterIP. +* Prevents primary service from being exposed publicly when using LoadBalancer type on cloud providers.
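The 3.0.0 upgrade note above says to pass `postgresql.image.tag=9.6.18-debian-10-r7` and `databaseUpgradeReady=true`. Those flags can also be kept in a values override file; a minimal sketch under that assumption (the file name is arbitrary, not part of the chart):

```yaml
# values-postgres-upgrade.yaml (hypothetical file name)
# Flags from the 3.0.0 upgrade note; apply with something like:
#   helm upgrade <release> jfrog/artifactory-ha -f values-postgres-upgrade.yaml
databaseUpgradeReady: true
postgresql:
  image:
    tag: 9.6.18-debian-10-r7
```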
+ +## [2.5.2] - Jun 8, 2020 +* Readme update - configuring Artifactory with oracledb + +## [2.5.1] - Jun 5, 2020 +* Fixes broken PDB issue upgrading from 6.x to 7.x + +## [2.5.0] - Jun 1, 2020 +* Updated Artifactory version to 7.5.5 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.5 +* Fixes bootstrap configMap permission issue +* Update postgresql tag version to `9.6.18-debian-10-r7` + +## [2.4.10] - May 27, 2020 +* Added Tomcat maxThreads & acceptCount + +## [2.4.9] - May 25, 2020 +* Fixed postgresql README `image` Parameters + +## [2.4.8] - May 24, 2020 +* Fixed typo in README regarding migration timeout + +## [2.4.7] - May 19, 2020 +* Added metadata maxOpenConnections + +## [2.4.6] - May 07, 2020 +* Fix `installerInfo` string format + +## [2.4.5] - Apr 27, 2020 +* Updated Artifactory version to 7.4.3 + +## [2.4.4] - Apr 27, 2020 +* Change customInitContainers order to run before the "migration-ha-artifactory" initContainer + +## [2.4.3] - Apr 24, 2020 +* Fix `artifactory.persistence.awsS3V3.useInstanceCredentials` incorrect conditional logic +* Bump postgresql tag version to `9.6.17-debian-10-r72` in values.yaml + +## [2.4.2] - Apr 16, 2020 +* Custom volume mounts in migration init container. 
+ +## [2.4.1] - Apr 16, 2020 +* Fix broken support for gcpServiceAccount for googleStorage + +## [2.4.0] - Apr 14, 2020 +* Updated Artifactory version to 7.4.1 + +## [2.3.1] - April 13, 2020 +* Update README with helm v3 commands + +## [2.3.0] - April 10, 2020 +* Use dependency charts from `https://charts.bitnami.com/bitnami` +* Bump postgresql chart version to `8.7.3` in requirements.yaml +* Bump postgresql tag version to `9.6.17-debian-10-r21` in values.yaml + +## [2.2.11] - Apr 8, 2020 +* Added recommended ingress annotation to avoid 413 errors + +## [2.2.10] - Apr 8, 2020 +* Moved migration scripts under `files` directory +* Support preStartCommand in migration Init container as `artifactory.migration.preStartCommand` + +## [2.2.9] - Apr 01, 2020 +* Support masterKey and joinKey as secrets + +## [2.2.8] - Apr 01, 2020 +* Ensure that the join key is also copied when provided by an external secret +* Migration container in primary and node statefulset now respects custom versions and the specified node/primary resources + +## [2.2.7] - Apr 01, 2020 +* Added cache-layer in chain definition of Google Cloud Storage template +* Fix README to use `-hex 32` instead of `-hex 16` + +## [2.2.6] - Mar 31, 2020 +* Change the way the artifactory `command:` is set so it will properly pass a SIGTERM to java + +## [2.2.5] - Mar 31, 2020 +* Removed duplicate `artifactory-license` volume from primary node + +## [2.2.4] - Mar 31, 2020 +* Restore `artifactory-license` volume for the primary node + +## [2.2.3] - Mar 29, 2020 +* Add Nginx log options: stderr as logfile and log level + +## [2.2.2] - Mar 30, 2020 +* Apply initContainers.resources to `copy-system-yaml`, `prepare-custom-persistent-volume`, and `migration-artifactory-ha` containers +* Use the same defaulting mechanism used for the artifactory version used elsewhere in the chart +* Removed duplicate `artifactory-license` volume that prevented using an external secret + +## [2.2.1] - Mar 29, 2020 +* Fix loggers sidecars
configurations to support new file system layout and new log names + +## [2.2.0] - Mar 29, 2020 +* Fix broken admin user bootstrap configuration +* **Breaking change:** renamed `artifactory.accessAdmin` to `artifactory.admin` + +## [2.1.3] - Mar 24, 2020 +* Use `postgresqlExtendedConf` for setting custom PostgreSQL configuration (instead of `postgresqlConfiguration`) + +## [2.1.2] - Mar 21, 2020 +* Support for SSL offload in the Nginx service (LoadBalancer) layer. Introduced `nginx.service.ssloffload` field with boolean type. + +## [2.1.1] - Mar 23, 2020 +* Moved installer info to values.yaml so it is fully customizable + +## [2.1.0] - Mar 23, 2020 +* Updated Artifactory version to 7.3.2 + +## [2.0.36] - Mar 20, 2020 +* Add support for GCP credentials.json authentication + +## [2.0.35] - Mar 20, 2020 +* Add support for masterKey trim during 6.x to 7.x migration if 6.x masterKey is 32 hex (64 characters) + +## [2.0.34] - Mar 19, 2020 +* Add support for NFS directories `haBackupDir` and `haDataDir` + +## [2.0.33] - Mar 18, 2020 +* Increased Nginx proxy_buffers size + +## [2.0.32] - Mar 17, 2020 +* Changed all single quotes to double quotes in values files +* The useInstanceCredentials variable was declared in the S3 settings but not used in the chart; it is now being used.
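Several entries here touch the Artifactory masterKey; for example, 2.0.35 above trims a 6.x masterKey that is 32-byte hex (64 characters). A quick sketch of generating a key of that shape with `openssl` (the variable name is illustrative):

```shell
# Generate a 32-byte master key, encoded as 64 hex characters.
MASTER_KEY=$(openssl rand -hex 32)
echo "${#MASTER_KEY}"   # prints 64
```

This matches the `-hex 32` form the README was corrected to use in 2.2.7.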
+ +## [2.0.31] - Mar 17, 2020 +* Fix rendering of Service Account annotations + +## [2.0.30] - Mar 16, 2020 +* Add Unsupported message from 6.18 to 7.2.x (migration) + +## [2.0.29] - Mar 11, 2020 +* Upgrade Docs update + +## [2.0.28] - Mar 11, 2020 +* Unified charts public release + +## [2.0.27] - Mar 8, 2020 +* Add an optional wait for primary node to be ready with a proper test for http status + +## [2.0.23] - Mar 6, 2020 +* Fix path to `/artifactory_bootstrap` +* Add support for controlling the name of the ingress and allow setting more than one cname + +## [2.0.22] - Mar 4, 2020 +* Add support for disabling `consoleLog` in `system.yaml` file + +## [2.0.21] - Feb 28, 2020 +* Add support to process `valueFrom` for extraEnvironmentVariables + +## [2.0.20] - Feb 26, 2020 +* Store join key to secret + +## [2.0.19] - Feb 26, 2020 +* Updated Artifactory version to 7.2.1 + +## [2.0.12] - Feb 07, 2020 +* Remove protection flag `databaseUpgradeReady` which was added to check internal postgres upgrade + +## [2.0.0] - Feb 07, 2020 +* Updated Artifactory version to 7.0.0 + +## [1.4.10] - Feb 13, 2020 +* Add support for SSH authentication to Artifactory + +## [1.4.9] - Feb 10, 2020 +* Fix custom DB password indentation + +## [1.4.8] - Feb 9, 2020 +* Add support for `tpl` in the `postStartCommand` + +## [1.4.7] - Feb 4, 2020 +* Support customisable Nginx kind + +## [1.4.6] - Feb 2, 2020 +* Add a comment stating that it is recommended to use an external PostgreSQL with a static password for production installations + +## [1.4.5] - Feb 2, 2020 +* Add support for primary or member node specific preStartCommand + +## [1.4.4] - Jan 30, 2020 +* Add the option to configure resources for the logger containers + +## [1.4.3] - Jan 26, 2020 +* Improve `database.user` and `database.password` logic in order to support more use cases and make the configuration less repetitive + +## [1.4.2] - Jan 22, 2020 +* Refined pod disruption budgets to separate nginx and Artifactory pods + +## [1.4.1] -
Jan 19, 2020 +* Fix replicator port config in nginx replicator configmap + +## [1.4.0] - Jan 19, 2020 +* Updated Artifactory version to 6.17.0 + +## [1.3.8] - Jan 16, 2020 +* Added example for external nginx-ingress + +## [1.3.7] - Jan 07, 2020 +* Add support for customizable `mountOptions` of NFS PVs + +## [1.3.6] - Dec 30, 2019 +* Fix for nginx probes failing when launched with http disabled + +## [1.3.5] - Dec 24, 2019 +* Better support for custom `artifactory.internalPort` + +## [1.3.4] - Dec 23, 2019 +* Mark empty map values with `{}` + +## [1.3.3] - Dec 16, 2019 +* Another fix for toggling nginx service ports + +## [1.3.2] - Dec 12, 2019 +* Fix for toggling nginx service ports + +## [1.3.1] - Dec 10, 2019 +* Add support for toggling nginx service ports + +## [1.3.0] - Dec 1, 2019 +* Updated Artifactory version to 6.16.0 + +## [1.2.4] - Nov 28, 2019 +* Add support for using existing PriorityClass + +## [1.2.3] - Nov 27, 2019 +* Add support for PriorityClass + +## [1.2.2] - Nov 20, 2019 +* Update Artifactory logo + +## [1.2.1] - Nov 18, 2019 +* Add the option to provide service account annotations (in order to support features like https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html) + +## [1.2.0] - Nov 18, 2019 +* Updated Artifactory version to 6.15.0 + +## [1.1.12] - Nov 17, 2019 +* Fix `README.md` format (broken table) + +## [1.1.11] - Nov 17, 2019 +* Update comment on Artifactory master key + +## [1.1.10] - Nov 17, 2019 +* Fix creation of double slash in nginx artifactory configuration + +## [1.1.9] - Nov 14, 2019 +* Set explicit `postgresql.postgresqlPassword=""` to avoid helm v3 error + +## [1.1.8] - Nov 12, 2019 +* Updated Artifactory version to 6.14.1 + +## [1.1.7] - Nov 11, 2019 +* Additional documentation for masterKey + +## [1.1.6] - Nov 10, 2019 +* Update PostgreSQL chart version to 7.0.1 +* Use formal PostgreSQL configuration format + +## [1.1.5] - Nov 8, 2019 +* Add support for `artifactory.service.loadBalancerSourceRanges`
for whitelisting when setting `artifactory.service.type=LoadBalancer` + +## [1.1.4] - Nov 6, 2019 +* Add support for any type of environment variable by using `extraEnvironmentVariables` as-is + +## [1.1.3] - Nov 6, 2019 +* Add nodeselector support for Postgresql + +## [1.1.2] - Nov 5, 2019 +* Add support for the aws-s3-v3 filestore, which adds support for pod IAM roles + +## [1.1.1] - Nov 4, 2019 +* When using `copyOnEveryStartup`, make sure that the target base directories are created before copying the files + +## [1.1.0] - Nov 3, 2019 +* Updated Artifactory version to 6.14.0 + +## [1.0.1] - Nov 3, 2019 +* Make sure the artifactory pod exits when one of the pre-start stages fail + +## [1.0.0] - Oct 27, 2019 +**IMPORTANT - BREAKING CHANGES!**
+**DOWNTIME MIGHT BE REQUIRED FOR AN UPGRADE!** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you must use the upgrade instructions in [UPGRADE_NOTES.md](UPGRADE_NOTES.md)! +* PostgreSQL sub chart was upgraded to version `6.5.x`. This version is **not backward compatible** with the old version (`0.9.5`)! +* Note the following **PostgreSQL** Helm chart changes + * The chart configuration has changed! See [values.yaml](values.yaml) for the new keys used + * **PostgreSQL** is deployed as a StatefulSet + * See [PostgreSQL helm chart](https://hub.helm.sh/charts/stable/postgresql) for all available configurations + +## [0.17.3] - Oct 24, 2019 +* Change the preStartCommand to support templating + +## [0.17.2] - Oct 21, 2019 +* Add support for setting `artifactory.primary.labels` +* Add support for setting `artifactory.node.labels` +* Add support for setting `nginx.labels` + +## [0.17.1] - Oct 10, 2019 +* Updated Artifactory version to 6.13.1 + +## [0.17.0] - Oct 7, 2019 +* Updated Artifactory version to 6.13.0 + +## [0.16.7] - Sep 24, 2019 +* Option to skip wait-for-db init container with '--set waitForDatabase=false' + +## [0.16.6] - Sep 24, 2019 +* Add support for setting `nginx.service.labels` + +## [0.16.5] - Sep 23, 2019 +* Add support for setting `artifactory.customInitContainersBegin` + +## [0.16.4] - Sep 20, 2019 +* Add support for setting `initContainers.resources` + +## [0.16.3] - Sep 11, 2019 +* Updated Artifactory version to 6.12.2 + +## [0.16.2] - Sep 9, 2019 +* Updated Artifactory version to 6.12.1 + +## [0.16.1] - Aug 22, 2019 +* Fix the nginx server_name directive used with ingress.hosts + +## [0.16.0] - Aug 21, 2019 +* Updated Artifactory version to 6.12.0 + +## [0.15.15] - Aug 18, 2019 +* Fix existingSharedClaim permissions issue and example + +## [0.15.14] - Aug 14, 2019 +*
Updated Artifactory version to 6.11.6 + +## [0.15.13] - Aug 11, 2019 +* Fix Ingress routing and add an example + +## [0.15.12] - Aug 6, 2019 +* Do not mount `access/etc/bootstrap.creds` unless user specifies a custom password or secret (Access already generates a random password if not provided one) +* If custom `bootstrap.creds` is provided (using keys or custom secret), prepare it with an init container so the temp file does not persist + +## [0.15.11] - Aug 5, 2019 +* Improve binarystore config + 1. Convert to a secret + 2. Move config to values.yaml + 3. Support an external secret + +## [0.15.10] - Aug 5, 2019 +* Don't create the nginx configmaps when nginx.enabled is false + +## [0.15.9] - Aug 1, 2019 +* Fix masterkey/masterKeySecretName not specified warning render logic in NOTES.txt + +## [0.15.8] - Jul 28, 2019 +* Simplify nginx setup and shorten initial wait for probes + +## [0.15.7] - Jul 25, 2019 +* Updated README about how to apply Artifactory licenses + +## [0.15.6] - Jul 22, 2019 +* Change Ingress API to be compatible with recent kubernetes versions + +## [0.15.5] - Jul 22, 2019 +* Updated Artifactory version to 6.11.3 + +## [0.15.4] - Jul 11, 2019 +* Add `artifactory.customVolumeMounts` support to member node statefulset template + +## [0.15.3] - Jul 11, 2019 +* Add ingress.hosts to the Nginx server_name directive when ingress is enabled to help with Docker repository sub domain configuration + +## [0.15.2] - Jul 3, 2019 +* Add the option for changing nginx config using values.yaml and remove outdated reverse proxy documentation + +## [0.15.1] - Jul 1, 2019 +* Updated Artifactory version to 6.11.1 + +## [0.15.0] - Jun 27, 2019 +* Updated Artifactory version to 6.11.0 and Restart Primary node when bootstrap.creds file has been modified in artifactory-ha + +## [0.14.4] - Jun 24, 2019 +* Add the option to provide an IP for the access-admin endpoints + +## [0.14.3] - Jun 24, 2019 +* Update chart maintainers + +## [0.14.2] - Jun 24, 2019 +* Change Nginx 
to point to the artifactory externalPort + +## [0.14.1] - Jun 23, 2019 +* Add values files for small, medium and large installations + +## [0.14.0] - Jun 20, 2019 +* Use ConfigMaps for nginx configuration and remove nginx postStart command + +## [0.13.10] - Jun 19, 2019 +* Updated Artifactory version to 6.10.4 + +## [0.13.9] - Jun 18, 2019 +* Add the option to provide additional ingress rules + +## [0.13.8] - Jun 14, 2019 +* Updated readme with improved external database setup example + +## [0.13.7] - Jun 6, 2019 +* Updated Artifactory version to 6.10.3 +* Updated installer-info template + +## [0.13.6] - Jun 6, 2019 +* Updated Google Cloud Storage API URL and https settings + +## [0.13.5] - Jun 5, 2019 +* Delete the db.properties file on Artifactory startup + +## [0.13.4] - Jun 3, 2019 +* Updated Artifactory version to 6.10.2 + +## [0.13.3] - May 21, 2019 +* Updated Artifactory version to 6.10.1 + +## [0.13.2] - May 19, 2019 +* Fix missing logger image tag + +## [0.13.1] - May 15, 2019 +* Support `artifactory.persistence.cacheProviderDir` for on-premise cluster + +## [0.13.0] - May 7, 2019 +* Updated Artifactory version to 6.10.0 + +## [0.12.23] - May 5, 2019 +* Add support for setting `artifactory.async.corePoolSize` + +## [0.12.22] - May 2, 2019 +* Remove unused property `artifactory.releasebundle.feature.enabled` + +## [0.12.21] - Apr 30, 2019 +* Add support for JMX monitoring + +## [0.12.20] - Apr 29, 2019 +* Added support for headless services + +## [0.12.19] - Apr 28, 2019 +* Added support for `cacheProviderDir` + +## [0.12.18] - Apr 18, 2019 +* Changing API StatefulSet version to `v1` and permission fix for custom `artifactory.conf` for Nginx + +## [0.12.17] - Apr 16, 2019 +* Updated documentation for Reverse Proxy Configuration + +## [0.12.16] - Apr 12, 2019 +* Added support for `customVolumeMounts` + +## [0.12.15] - Apr 12, 2019 +* Added support for `bucketExists` flag for googleStorage + +## [0.12.14] - Apr 11, 2019 +* Replace `curl` examples with `wget`
due to the new base image + +## [0.12.13] - Apr 07, 2019 +* Add support for providing the Artifactory license as a parameter + +## [0.12.12] - Apr 10, 2019 +* Updated Artifactory version to 6.9.1 + +## [0.12.11] - Apr 04, 2019 +* Add support for templated extraEnvironmentVariables + +## [0.12.10] - Apr 07, 2019 +* Change network policy API group + +## [0.12.9] - Apr 04, 2019 +* Apply the existing PVC for members (in addition to primary) + +## [0.12.8] - Apr 03, 2019 +* Bugfix for userPluginSecrets + +## [0.12.7] - Apr 4, 2019 +* Add information about upgrading Artifactory with auto-generated postgres password + +## [0.12.6] - Apr 03, 2019 +* Added installer info + +## [0.12.5] - Apr 03, 2019 +* Allow secret names for user plugins to contain template language + +## [0.12.4] - Apr 02, 2019 +* Fix issue #253 (use existing PVC for data and backup storage) + +## [0.12.3] - Apr 02, 2019 +* Allow NetworkPolicy configurations (defaults to allow all) + +## [0.12.2] - Apr 01, 2019 +* Add support for user plugin secret + +## [0.12.1] - Mar 26, 2019 +* Add the option to copy a list of files to ARTIFACTORY_HOME on startup + +## [0.12.0] - Mar 26, 2019 +* Updated Artifactory version to 6.9.0 + +## [0.11.18] - Mar 25, 2019 +* Add CI tests for persistence, ingress support and nginx + +## [0.11.17] - Mar 22, 2019 +* Add the option to change the default access-admin password + +## [0.11.16] - Mar 22, 2019 +* Added support for `.Probe.path` to customise the paths used for health probes + +## [0.11.15] - Mar 21, 2019 +* Added support for `artifactory.customSidecarContainers` to create custom sidecar containers +* Added support for `artifactory.customVolumes` to create custom volumes + +## [0.11.14] - Mar 21, 2019 +* Make ingress path configurable + +## [0.11.13] - Mar 19, 2019 +* Move the copy of bootstrap config from postStart to preStart for Primary + +## [0.11.12] - Mar 19, 2019 +* Fix existingClaim example + +## [0.11.11] - Mar 18, 2019 +* Disable the option to use nginx
PVC with more than one replica + +## [0.11.10] - Mar 15, 2019 +* Wait for nginx configuration file before using it + +## [0.11.9] - Mar 15, 2019 +* Revert securityContext changes since they were causing issues + +## [0.11.8] - Mar 15, 2019 +* Fix issue #247 (init container failing to run) + +## [0.11.7] - Mar 14, 2019 +* Updated Artifactory version to 6.8.7 + +## [0.11.6] - Mar 13, 2019 +* Move securityContext to container level + +## [0.11.5] - Mar 11, 2019 +* Add the option to use existing volume claims for Artifactory storage + +## [0.11.4] - Mar 11, 2019 +* Updated Artifactory version to 6.8.6 + +## [0.11.3] - Mar 5, 2019 +* Updated Artifactory version to 6.8.4 + +## [0.11.2] - Mar 4, 2019 +* Add support for catalina logs sidecars + +## [0.11.1] - Feb 27, 2019 +* Updated Artifactory version to 6.8.3 + +## [0.11.0] - Feb 25, 2019 +* Add nginx support for tail sidecars + +## [0.10.3] - Feb 21, 2019 +* Add s3AwsVersion option to awsS3 configuration for use with IAM roles + +## [0.10.2] - Feb 19, 2019 +* Updated Artifactory version to 6.8.2 + +## [0.10.1] - Feb 17, 2019 +* Updated Artifactory version to 6.8.1 +* Add example of `SERVER_XML_EXTRA_CONNECTOR` usage + +## [0.10.0] - Feb 15, 2019 +* Updated Artifactory version to 6.8.0 + +## [0.9.7] - Feb 13, 2019 +* Updated Artifactory version to 6.7.3 + +## [0.9.6] - Feb 7, 2019 +* Add support for tail sidecars to view logs from k8s api + +## [0.9.5] - Feb 6, 2019 +* Fix support for customizing statefulset `terminationGracePeriodSeconds` + +## [0.9.4] - Feb 5, 2019 +* Add support for customizing statefulset `terminationGracePeriodSeconds` + +## [0.9.3] - Feb 5, 2019 +* Remove the inactive server remove plugin + +## [0.9.2] - Feb 3, 2019 +* Updated Artifactory version to 6.7.2 + +## [0.9.1] - Jan 27, 2019 +* Fix support for Azure Blob Storage Binary provider + +## [0.9.0] - Jan 23, 2019 +* Updated Artifactory version to 6.7.0 + +## [0.8.10] - Jan 22, 2019 +* Added support for `artifactory.customInitContainers` to create 
custom init containers + +## [0.8.9] - Jan 18, 2019 +* Added support for `ingress.labels` values + +## [0.8.8] - Jan 16, 2019 +* Mount replicator.yaml (config) directly to /replicator_extra_conf + +## [0.8.7] - Jan 15, 2019 +* Add support for Azure Blob Storage Binary provider + +## [0.8.6] - Jan 13, 2019 +* Fix documentation about nginx group id + +## [0.8.5] - Jan 13, 2019 +* Updated Artifactory version to 6.6.5 + +## [0.8.4] - Jan 8, 2019 +* Make artifactory.replicator.publicUrl required when the replicator is enabled + +## [0.8.3] - Jan 1, 2019 +* Updated Artifactory version to 6.6.3 +* Add support for `artifactory.extraEnvironmentVariables` to pass more environment variables to Artifactory + +## [0.8.2] - Dec 28, 2018 +* Fix location `replicator.yaml` is copied to + +## [0.8.1] - Dec 27, 2018 +* Updated Artifactory version to 6.6.1 + +## [0.8.0] - Dec 20, 2018 +* Updated Artifactory version to 6.6.0 + +## [0.7.17] - Dec 17, 2018 +* Updated Artifactory version to 6.5.13 + +## [0.7.16] - Dec 12, 2018 +* Fix documentation about Artifactory license setup using secret + +## [0.7.15] - Dec 9, 2018 +* AWS S3 add `roleName` for using IAM role + +## [0.7.14] - Dec 6, 2018 +* AWS S3 `identity` and `credential` are now added only if they have a value, to allow using an IAM role + +## [0.7.13] - Dec 5, 2018 +* Remove Distribution certificates creation. + +## [0.7.12] - Dec 2, 2018 +* Remove Java option "-Dartifactory.locking.provider.type=db". This is already the default setting.
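The `artifactory.extraEnvironmentVariables` key added in 0.8.3 above takes a plain list of name/value pairs. A hedged values.yaml sketch (the variable name and value are illustrative, not chart defaults):

```yaml
artifactory:
  extraEnvironmentVariables:
    - name: EXTRA_JAVA_OPTIONS   # illustrative; any environment variable works here
      value: "-Xms512m -Xmx2g"
```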
+ +## [0.7.11] - Nov 30, 2018 +* Updated Artifactory version to 6.5.9 + +## [0.7.10] - Nov 29, 2018 +* Fixed the volumeMount for the replicator.yaml + +## [0.7.9] - Nov 29, 2018 +* Optionally include the primary node in the PodDisruptionBudget + +## [0.7.8] - Nov 29, 2018 +* Updated postgresql version to 9.6.11 + +## [0.7.7] - Nov 27, 2018 +* Updated Artifactory version to 6.5.8 + +## [0.7.6] - Nov 18, 2018 +* Added support for configMap to use custom Reverse Proxy Configuration with Nginx + +## [0.7.5] - Nov 14, 2018 +* Updated Artifactory version to 6.5.3 + +## [0.7.4] - Nov 13, 2018 +* Allow pod anti-affinity settings to include primary node + +## [0.7.3] - Nov 12, 2018 +* Support artifactory.preStartCommand for running a command before the entrypoint starts + +## [0.7.2] - Nov 7, 2018 +* Support database.url parameter (DB_URL) + +## [0.7.1] - Oct 29, 2018 +* Change probes port to 8040 (so they will not be blocked when all tomcat threads on 8081 are exhausted) + +## [0.7.0] - Oct 28, 2018 +* Update postgresql chart to version 0.9.5 to be able to use `postgresConfig` options + +## [0.6.9] - Oct 23, 2018 +* Fix providing external secret for database credentials + +## [0.6.8] - Oct 22, 2018 +* Allow user to configure externalTrafficPolicy for LoadBalancer + +## [0.6.7] - Oct 22, 2018 +* Updated ingress annotation support (with examples) to support docker registry v2 + +## [0.6.6] - Oct 21, 2018 +* Updated Artifactory version to 6.5.2 + +## [0.6.5] - Oct 19, 2018 +* Allow providing pre-existing secret containing master key +* Allow arbitrary annotations on primary and member node pods +* Enforce size limits when using local storage with `emptyDir` +* Allow `soft` or `hard` specification of member node anti-affinity +* Allow providing pre-existing secrets containing external database credentials +* Fix `s3` binary store provider to properly use the `cache-fs` provider +* Allow arbitrary properties when using the `s3` binary store provider + +## [0.6.4] - Oct 18, 2018 +* Updated
Artifactory version to 6.5.1 + +## [0.6.3] - Oct 17, 2018 +* Add Apache 2.0 license + +## [0.6.2] - Oct 14, 2018 +* Make S3 endpoint configurable (was hardcoded with `s3.amazonaws.com`) + +## [0.6.1] - Oct 11, 2018 +* Allows ingress default `backend` to be enabled or disabled (defaults to enabled) + +## [0.6.0] - Oct 11, 2018 +* Updated Artifactory version to 6.5.0 + +## [0.5.3] - Oct 9, 2018 +* Quote ingress hosts to support wildcard names + +## [0.5.2] - Oct 2, 2018 +* Add `helm repo add jfrog https://charts.jfrog.io` to README + +## [0.5.1] - Oct 2, 2018 +* Set Artifactory to 6.4.1 + +## [0.5.0] - Sep 27, 2018 +* Set Artifactory to 6.4.0 + +## [0.4.7] - Sep 26, 2018 +* Add ci/test-values.yaml + +## [0.4.6] - Sep 25, 2018 +* Add PodDisruptionBudget for member nodes, defaulting to minAvailable of 1 + +## [0.4.4] - Sep 2, 2018 +* Updated Artifactory version to 6.3.2 + +## [0.4.0] - Aug 22, 2018 +* Added support to run as non root +* Updated Artifactory version to 6.2.0 + +## [0.3.0] - Aug 22, 2018 +* Enabled RBAC Support +* Added support for PostStartCommand (To download Database JDBC connector) +* Increased postgresql max_connections +* Added support for `nginx.conf` ConfigMap +* Updated Artifactory version to 6.1.0 diff --git a/charts/jfrog/artifactory-ha/107.90.14/Chart.lock b/charts/jfrog/artifactory-ha/107.90.14/Chart.lock new file mode 100644 index 0000000000..eb94099719 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: postgresql + repository: https://charts.jfrog.io/ + version: 10.3.18 +digest: sha256:404ce007353baaf92a6c5f24b249d5b336c232e5fd2c29f8a0e4d0095a09fd53 +generated: "2022-03-08T08:54:51.805126+05:30" diff --git a/charts/jfrog/artifactory-ha/107.90.14/Chart.yaml b/charts/jfrog/artifactory-ha/107.90.14/Chart.yaml new file mode 100644 index 0000000000..fae27e2b97 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/Chart.yaml @@ -0,0 +1,30 @@ +annotations: + artifactoryServiceVersion: 
7.90.20 + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: JFrog Artifactory HA + catalog.cattle.io/kube-version: '>= 1.19.0-0' + catalog.cattle.io/release-name: artifactory-ha +apiVersion: v2 +appVersion: 7.90.14 +dependencies: +- condition: postgresql.enabled + name: postgresql + repository: https://charts.jfrog.io/ + version: 10.3.18 +description: Universal Repository Manager supporting all major packaging formats, + build tools and CI servers. +home: https://www.jfrog.com/artifactory/ +icon: file://assets/icons/artifactory-ha.png +keywords: +- artifactory +- jfrog +- devops +kubeVersion: '>= 1.19.0-0' +maintainers: +- email: installers@jfrog.com + name: Chart Maintainers at JFrog +name: artifactory-ha +sources: +- https://github.com/jfrog/charts +type: application +version: 107.90.14 diff --git a/charts/jfrog/artifactory-ha/107.90.14/LICENSE b/charts/jfrog/artifactory-ha/107.90.14/LICENSE new file mode 100644 index 0000000000..8dada3edaf --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. 
+ + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/charts/jfrog/artifactory-ha/107.90.14/README.md b/charts/jfrog/artifactory-ha/107.90.14/README.md new file mode 100644 index 0000000000..49155926e0 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/README.md @@ -0,0 +1,69 @@ +# JFrog Artifactory High Availability Helm Chart + +**IMPORTANT!** Our Helm Chart docs have moved to our main documentation site. Below you will find the basic instructions for installing, uninstalling, and deleting Artifactory. For all other information, refer to [Installing Artifactory - Helm HA Installation](https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory#InstallingArtifactory-HelmHAInstallation). + +**Note:** From Artifactory 7.17.4 and above, the Helm HA installation can be installed so that each node you install can run all tasks in the cluster. + +Below you will find the basic instructions for installing, uninstalling, and deleting Artifactory. For all other information, refer to the documentation site. + +## Prerequisites Details + +* Kubernetes 1.19+ +* Artifactory HA license + +## Chart Details +This chart will do the following: + +* Deploy Artifactory highly available cluster. 1 primary node and 2 member nodes. 
+* Deploy a PostgreSQL database. **NOTE:** For production-grade installations it is recommended to use an external PostgreSQL +* Deploy an Nginx server + +## Installing the Chart + +### Add JFrog Helm repository + +Before installing JFrog Helm charts, you need to add the [JFrog Helm repository](https://charts.jfrog.io) to your Helm client: + +```bash +helm repo add jfrog https://charts.jfrog.io +``` + +Next, create a unique Master Key (Artifactory requires a unique master key) and pass it to the template during installation. Then, update the repository: + +```bash +helm repo update +``` + +### Install Chart +To install the chart with the release name `artifactory-ha`: +```bash +helm upgrade --install artifactory-ha jfrog/artifactory-ha --namespace artifactory-ha --create-namespace +``` + +### Apply Sizing configurations to the Chart +To apply the chart with recommended sizing configurations, for example for small configurations: +```bash +helm upgrade --install artifactory-ha jfrog/artifactory-ha -f sizing/artifactory-small-extra-config.yaml -f sizing/artifactory-small.yaml --namespace artifactory-ha --create-namespace +``` + +## Uninstalling Artifactory + +Uninstall is supported only on Helm v3 and above. + +Uninstall Artifactory using the following command: + +```bash +helm uninstall artifactory-ha && sleep 90 && kubectl delete pvc -l app=artifactory-ha +``` + +## Deleting Artifactory + +**IMPORTANT:** Deleting Artifactory will also delete your data volumes and you will lose all of your data. You must back up all of this information before deletion. You do not need to uninstall Artifactory before deleting it. + +To delete Artifactory, use the following command.
+ +```bash +helm delete artifactory-ha --namespace artifactory-ha +``` + diff --git a/charts/jfrog/artifactory-ha/107.90.14/app-readme.md b/charts/jfrog/artifactory-ha/107.90.14/app-readme.md new file mode 100644 index 0000000000..a5aa5fd478 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/app-readme.md @@ -0,0 +1,16 @@ +# JFrog Artifactory High Availability Helm Chart + +Universal Repository Manager supporting all major packaging formats, build tools and CI servers. + +## Chart Details +This chart will do the following: + +* Deploy an Artifactory highly available cluster with 1 primary node and 2 member nodes. +* Deploy a PostgreSQL database +* Deploy an Nginx server (optional) + +## Useful links +Blog: [Herd Trust Into Your Rancher Labs Multi-Cloud Strategy with Artifactory](https://jfrog.com/blog/herd-trust-into-your-rancher-labs-multi-cloud-strategy-with-artifactory/) + +## Activate Your Artifactory Instance +Don't have a license? Please send an email to [rancher-jfrog-licenses@jfrog.com](mailto:rancher-jfrog-licenses@jfrog.com) to request one. diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/.helmignore b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/.helmignore new file mode 100644 index 0000000000..f0c1319444 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line.
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/Chart.lock b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/Chart.lock new file mode 100644 index 0000000000..3687f52df5 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common + repository: https://charts.bitnami.com/bitnami + version: 1.4.2 +digest: sha256:dce0349883107e3ff103f4f17d3af4ad1ea3c7993551b1c28865867d3e53d37c +generated: "2021-03-30T09:13:28.360322819Z" diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/Chart.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/Chart.yaml new file mode 100644 index 0000000000..4b197b2071 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/Chart.yaml @@ -0,0 +1,29 @@ +annotations: + category: Database +apiVersion: v2 +appVersion: 11.11.0 +dependencies: +- name: common + repository: https://charts.bitnami.com/bitnami + version: 1.x.x +description: Chart for PostgreSQL, an object-relational database management system + (ORDBMS) with an emphasis on extensibility and on standards-compliance. 
+home: https://github.com/bitnami/charts/tree/master/bitnami/postgresql +icon: https://bitnami.com/assets/stacks/postgresql/img/postgresql-stack-220x234.png +keywords: +- postgresql +- postgres +- database +- sql +- replication +- cluster +maintainers: +- email: containers@bitnami.com + name: Bitnami +- email: cedric@desaintmartin.fr + name: desaintmartin +name: postgresql +sources: +- https://github.com/bitnami/bitnami-docker-postgresql +- https://www.postgresql.org/ +version: 10.3.18 diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/README.md b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/README.md new file mode 100644 index 0000000000..63d3605bb8 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/README.md @@ -0,0 +1,770 @@ +# PostgreSQL + +[PostgreSQL](https://www.postgresql.org/) is an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance. + +For HA, please see [this repo](https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha) + +## TL;DR + +```console +$ helm repo add bitnami https://charts.bitnami.com/bitnami +$ helm install my-release bitnami/postgresql +``` + +## Introduction + +This chart bootstraps a [PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager. + +Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/). 
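As a sketch of customizing this chart (the release name and credential values below are illustrative, not defaults), parameters from the table in this README can be overridden at install time with `--set` flags or collected in a values file:

```shell
# Override user, password, and database name at install time
# (postgresqlUsername, postgresqlPassword, and postgresqlDatabase are
#  documented in the Parameters table below; the values are examples only)
helm install my-release bitnami/postgresql \
  --set postgresqlUsername=myuser \
  --set postgresqlPassword=mypassword \
  --set postgresqlDatabase=mydatabase

# Equivalent, using a values file instead of --set flags
helm install my-release bitnami/postgresql -f my-values.yaml
```

For more than a couple of overrides, a values file is usually easier to review and version-control than a long chain of `--set` flags.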
+ +## Prerequisites + +- Kubernetes 1.12+ +- Helm 3.1.0 +- PV provisioner support in the underlying infrastructure + +## Installing the Chart +To install the chart with the release name `my-release`: + +```console +$ helm install my-release bitnami/postgresql +``` + +The command deploys PostgreSQL on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation. + +> **Tip**: List all releases using `helm list` + +## Uninstalling the Chart + +To uninstall/delete the `my-release` deployment: + +```console +$ helm delete my-release +``` + +The command removes all the Kubernetes components except the PVCs associated with the chart and deletes the release. + +To delete the PVCs associated with `my-release`: + +```console +$ kubectl delete pvc -l release=my-release +``` + +> **Note**: Deleting the PVCs will delete the PostgreSQL data as well. Please be cautious before doing it. + +## Parameters + +The following table lists the configurable parameters of the PostgreSQL chart and their default values.
+ +| Parameter | Description | Default | +|-----------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------| +| `global.imageRegistry` | Global Docker Image registry | `nil` | +| `global.postgresql.postgresqlDatabase` | PostgreSQL database (overrides `postgresqlDatabase`) | `nil` | +| `global.postgresql.postgresqlUsername` | PostgreSQL username (overrides `postgresqlUsername`) | `nil` | +| `global.postgresql.existingSecret` | Name of existing secret to use for PostgreSQL passwords (overrides `existingSecret`) | `nil` | +| `global.postgresql.postgresqlPassword` | PostgreSQL admin password (overrides `postgresqlPassword`) | `nil` | +| `global.postgresql.servicePort` | PostgreSQL port (overrides `service.port`) | `nil` | +| `global.postgresql.replicationPassword` | Replication user password (overrides `replication.password`) | `nil` | +| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | +| `global.storageClass` | Global storage class for dynamic provisioning | `nil` | +| `image.registry` | PostgreSQL Image registry | `docker.io` | +| `image.repository` | PostgreSQL Image name | `bitnami/postgresql` | +| `image.tag` | PostgreSQL Image tag | `{TAG_NAME}` | +| `image.pullPolicy` | PostgreSQL Image pull policy | `IfNotPresent` | +| `image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) | +| `image.debug` | Specify if debug values should 
be set | `false` | +| `nameOverride` | String to partially override common.names.fullname template with a string (will prepend the release name) | `nil` | +| `fullnameOverride` | String to fully override common.names.fullname template with a string | `nil` | +| `volumePermissions.enabled` | Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work) | `false` | +| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` | +| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/bitnami-shell` | +| `volumePermissions.image.tag` | Init container volume-permissions image tag | `"10"` | +| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` | +| `volumePermissions.securityContext.*` | Other container security context to be included as-is in the container spec | `{}` | +| `volumePermissions.securityContext.runAsUser` | User ID for the init container (when facing issues in OpenShift or uid unknown, try value "auto") | `0` | +| `usePasswordFile` | Have the secrets mounted as a file instead of env vars | `false` | +| `ldap.enabled` | Enable LDAP support | `false` | +| `ldap.existingSecret` | Name of existing secret to use for LDAP passwords | `nil` | +| `ldap.url` | LDAP URL beginning in the form `ldap[s]://host[:port]/basedn[?[attribute][?[scope][?[filter]]]]` | `nil` | +| `ldap.server` | IP address or name of the LDAP server. | `nil` | +| `ldap.port` | Port number on the LDAP server to connect to | `nil` | +| `ldap.scheme` | Set to `ldaps` to use LDAPS. 
| `nil` | +| `ldap.tls` | Set to `1` to use TLS encryption | `nil` | +| `ldap.prefix` | String to prepend to the user name when forming the DN to bind | `nil` | +| `ldap.suffix` | String to append to the user name when forming the DN to bind | `nil` | +| `ldap.search_attr` | Attribute to match against the user name in the search | `nil` | +| `ldap.search_filter` | The search filter to use when doing search+bind authentication | `nil` | +| `ldap.baseDN` | Root DN to begin the search for the user in | `nil` | +| `ldap.bindDN` | DN of user to bind to LDAP | `nil` | +| `ldap.bind_password` | Password for the user to bind to LDAP | `nil` | +| `replication.enabled` | Enable replication | `false` | +| `replication.user` | Replication user | `repl_user` | +| `replication.password` | Replication user password | `repl_password` | +| `replication.readReplicas` | Number of read replicas replicas | `1` | +| `replication.synchronousCommit` | Set synchronous commit mode. Allowed values: `on`, `remote_apply`, `remote_write`, `local` and `off` | `off` | +| `replication.numSynchronousReplicas` | Number of replicas that will have synchronous replication. Note: Cannot be greater than `replication.readReplicas`. | `0` | +| `replication.applicationName` | Cluster application name. Useful for advanced replication settings | `my_application` | +| `existingSecret` | Name of existing secret to use for PostgreSQL passwords. The secret has to contain the keys `postgresql-password` which is the password for `postgresqlUsername` when it is different of `postgres`, `postgresql-postgres-password` which will override `postgresqlPassword`, `postgresql-replication-password` which will override `replication.password` and `postgresql-ldap-password` which will be used to authenticate on LDAP. The value is evaluated as a template. | `nil` | +| `postgresqlPostgresPassword` | PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`, in which case`postgres` is the admin username). 
| _random 10 character alphanumeric string_ | +| `postgresqlUsername` | PostgreSQL user (creates a non-admin user when `postgresqlUsername` is not `postgres`) | `postgres` | +| `postgresqlPassword` | PostgreSQL user password | _random 10 character alphanumeric string_ | +| `postgresqlDatabase` | PostgreSQL database | `nil` | +| `postgresqlDataDir` | PostgreSQL data dir folder | `/bitnami/postgresql` (same value as persistence.mountPath) | +| `extraEnv` | Any extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `[]` | +| `extraEnvVarsCM` | Name of a Config Map containing extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `nil` | +| `postgresqlInitdbArgs` | PostgreSQL initdb extra arguments | `nil` | +| `postgresqlInitdbWalDir` | PostgreSQL location for transaction log | `nil` | +| `postgresqlConfiguration` | Runtime Config Parameters | `nil` | +| `postgresqlExtendedConf` | Extended Runtime Config Parameters (appended to main or default configuration) | `nil` | +| `pgHbaConfiguration` | Content of pg_hba.conf | `nil (do not create pg_hba.conf)` | +| `postgresqlSharedPreloadLibraries` | Shared preload libraries (comma-separated list) | `pgaudit` | +| `postgresqlMaxConnections` | Maximum total connections | `nil` | +| `postgresqlPostgresConnectionLimit` | Maximum total connections for the postgres user | `nil` | +| `postgresqlDbUserConnectionLimit` | Maximum total connections for the non-admin user | `nil` | +| `postgresqlTcpKeepalivesInterval` | TCP keepalives interval | `nil` | +| `postgresqlTcpKeepalivesIdle` | TCP keepalives idle | `nil` | +| `postgresqlTcpKeepalivesCount` | TCP keepalives count | `nil` | +| `postgresqlStatementTimeout` | Statement timeout | `nil` | +| `postgresqlPghbaRemoveFilters` | Comma-separated list of patterns to remove from the pg_hba.conf file | `nil` | +| `customStartupProbe` | Override default startup probe | `nil` | +| 
`customLivenessProbe` | Override default liveness probe | `nil` | +| `customReadinessProbe` | Override default readiness probe | `nil` | +| `audit.logHostname` | Add client hostnames to the log file | `false` | +| `audit.logConnections` | Add client log-in operations to the log file | `false` | +| `audit.logDisconnections` | Add client log-outs operations to the log file | `false` | +| `audit.pgAuditLog` | Add operations to log using the pgAudit extension | `nil` | +| `audit.clientMinMessages` | Message log level to share with the user | `nil` | +| `audit.logLinePrefix` | Template string for the log line prefix | `nil` | +| `audit.logTimezone` | Timezone for the log timestamps | `nil` | +| `configurationConfigMap` | ConfigMap with the PostgreSQL configuration files (Note: Overrides `postgresqlConfiguration` and `pgHbaConfiguration`). The value is evaluated as a template. | `nil` | +| `extendedConfConfigMap` | ConfigMap with the extended PostgreSQL configuration files. The value is evaluated as a template. | `nil` | +| `initdbScripts` | Dictionary of initdb scripts | `nil` | +| `initdbUser` | PostgreSQL user to execute the .sql and sql.gz scripts | `nil` | +| `initdbPassword` | Password for the user specified in `initdbUser` | `nil` | +| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`). The value is evaluated as a template. | `nil` | +| `initdbScriptsSecret` | Secret with initdb scripts that contain sensitive information (Note: can be used with `initdbScriptsConfigMap` or `initdbScripts`). The value is evaluated as a template. 
| `nil` | +| `service.type` | Kubernetes Service type | `ClusterIP` | +| `service.port` | PostgreSQL port | `5432` | +| `service.nodePort` | Kubernetes Service nodePort | `nil` | +| `service.annotations` | Annotations for PostgreSQL service | `{}` (evaluated as a template) | +| `service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` | +| `service.loadBalancerSourceRanges` | Address that are allowed when svc is LoadBalancer | `[]` (evaluated as a template) | +| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` | +| `shmVolume.enabled` | Enable emptyDir volume for /dev/shm for primary and read replica(s) Pod(s) | `true` | +| `shmVolume.chmod.enabled` | Run at init chmod 777 of the /dev/shm (ignored if `volumePermissions.enabled` is `false`) | `true` | +| `persistence.enabled` | Enable persistence using PVC | `true` | +| `persistence.existingClaim` | Provide an existing `PersistentVolumeClaim`, the value is evaluated as a template. | `nil` | +| `persistence.mountPath` | Path to mount the volume at | `/bitnami/postgresql` | +| `persistence.subPath` | Subdirectory of the volume to mount at | `""` | +| `persistence.storageClass` | PVC Storage Class for PostgreSQL volume | `nil` | +| `persistence.accessModes` | PVC Access Mode for PostgreSQL volume | `[ReadWriteOnce]` | +| `persistence.size` | PVC Storage Request for PostgreSQL volume | `8Gi` | +| `persistence.annotations` | Annotations for the PVC | `{}` | +| `persistence.selector` | Selector to match an existing Persistent Volume (this value is evaluated as a template) | `{}` | +| `commonAnnotations` | Annotations to be added to all deployed resources (rendered as a template) | `{}` | +| `primary.podAffinityPreset` | PostgreSQL primary pod affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `primary.podAntiAffinityPreset` | PostgreSQL primary pod anti-affinity preset. Ignored if `primary.affinity` is set. 
Allowed values: `soft` or `hard` | `soft` | +| `primary.nodeAffinityPreset.type` | PostgreSQL primary node affinity preset type. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `primary.nodeAffinityPreset.key` | PostgreSQL primary node label key to match Ignored if `primary.affinity` is set. | `""` | +| `primary.nodeAffinityPreset.values` | PostgreSQL primary node label values to match. Ignored if `primary.affinity` is set. | `[]` | +| `primary.affinity` | Affinity for PostgreSQL primary pods assignment | `{}` (evaluated as a template) | +| `primary.nodeSelector` | Node labels for PostgreSQL primary pods assignment | `{}` (evaluated as a template) | +| `primary.tolerations` | Tolerations for PostgreSQL primary pods assignment | `[]` (evaluated as a template) | +| `primary.anotations` | Map of annotations to add to the statefulset (postgresql primary) | `{}` | +| `primary.labels` | Map of labels to add to the statefulset (postgresql primary) | `{}` | +| `primary.podAnnotations` | Map of annotations to add to the pods (postgresql primary) | `{}` | +| `primary.podLabels` | Map of labels to add to the pods (postgresql primary) | `{}` | +| `primary.priorityClassName` | Priority Class to use for each pod (postgresql primary) | `nil` | +| `primary.extraInitContainers` | Additional init containers to add to the pods (postgresql primary) | `[]` | +| `primary.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql primary) | `[]` | +| `primary.extraVolumes` | Additional volumes to add to the pods (postgresql primary) | `[]` | +| `primary.sidecars` | Add additional containers to the pod | `[]` | +| `primary.service.type` | Allows using a different service type for primary | `nil` | +| `primary.service.nodePort` | Allows using a different nodePort for primary | `nil` | +| `primary.service.clusterIP` | Allows using a different clusterIP for primary | `nil` | +| `primaryAsStandBy.enabled` | Whether to enable current 
cluster's primary as standby server of another cluster or not. | `false` | +| `primaryAsStandBy.primaryHost` | The Host of replication primary in the other cluster. | `nil` | +| `primaryAsStandBy.primaryPort` | The Port of replication primary in the other cluster. | `nil` | +| `readReplicas.podAffinityPreset` | PostgreSQL read only pod affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `readReplicas.podAntiAffinityPreset` | PostgreSQL read only pod anti-affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `soft` | +| `readReplicas.nodeAffinityPreset.type` | PostgreSQL read only node affinity preset type. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `readReplicas.nodeAffinityPreset.key` | PostgreSQL read only node label key to match. Ignored if `primary.affinity` is set. | `""` | +| `readReplicas.nodeAffinityPreset.values` | PostgreSQL read only node label values to match. Ignored if `primary.affinity` is set. | `[]` | +| `readReplicas.affinity` | Affinity for PostgreSQL read only pods assignment | `{}` (evaluated as a template) | +| `readReplicas.nodeSelector` | Node labels for PostgreSQL read only pods assignment | `{}` (evaluated as a template) | +| `readReplicas.annotations` | Map of annotations to add to the statefulsets (postgresql readReplicas) | `{}` | +| `readReplicas.resources` | CPU/Memory resource requests/limits override for readReplicas. Will fall back to the top-level `resources` if not defined.
| `{}` | +| `readReplicas.labels` | Map of labels to add to the statefulsets (postgresql readReplicas) | `{}` | +| `readReplicas.podAnnotations` | Map of annotations to add to the pods (postgresql readReplicas) | `{}` | +| `readReplicas.podLabels` | Map of labels to add to the pods (postgresql readReplicas) | `{}` | +| `readReplicas.priorityClassName` | Priority Class to use for each pod (postgresql readReplicas) | `nil` | +| `readReplicas.extraInitContainers` | Additional init containers to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.extraVolumes` | Additional volumes to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.sidecars` | Add additional containers to the pod | `[]` | +| `readReplicas.service.type` | Allows using a different service type for readReplicas | `nil` | +| `readReplicas.service.nodePort` | Allows using a different nodePort for readReplicas | `nil` | +| `readReplicas.service.clusterIP` | Allows using a different clusterIP for readReplicas | `nil` | +| `readReplicas.persistence.enabled` | Whether to enable readReplicas persistence | `true` | +| `terminationGracePeriodSeconds` | Seconds the pod needs to terminate gracefully | `nil` | +| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` | +| `securityContext.*` | Other pod security context to be included as-is in the pod spec | `{}` | +| `securityContext.enabled` | Enable security context | `true` | +| `securityContext.fsGroup` | Group ID for the pod | `1001` | +| `containerSecurityContext.*` | Other container security context to be included as-is in the container spec | `{}` | +| `containerSecurityContext.enabled` | Enable container security context | `true` | +| `containerSecurityContext.runAsUser` | User ID for the container | `1001` | +| `serviceAccount.enabled` | Enable service account (Note:
Service Account will only be automatically created if `serviceAccount.name` is not set) | `false` | +| `serviceAccount.name` | Name of existing service account | `nil` | +| `networkPolicy.enabled` | Enable NetworkPolicy | `false` | +| `networkPolicy.allowExternal` | Don't require client label for connections | `true` | +| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which ingress traffic could be allowed | `{}` | +| `startupProbe.enabled` | Enable startupProbe | `false` | +| `startupProbe.initialDelaySeconds` | Delay before startup probe is initiated | 30 | +| `startupProbe.periodSeconds` | How often to perform the probe | 15 | +| `startupProbe.timeoutSeconds` | When the probe times out | 5 | +| `startupProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 10 | +| `startupProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | 1 | +| `livenessProbe.enabled` | Enable livenessProbe | `true` | +| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 | +| `livenessProbe.periodSeconds` | How often to perform the probe | 10 | +| `livenessProbe.timeoutSeconds` | When the probe times out | 5 | +| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded.
| 6 | +| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `readinessProbe.enabled` | Enable readinessProbe | `true` | +| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 5 | +| `readinessProbe.periodSeconds` | How often to perform the probe | 10 | +| `readinessProbe.timeoutSeconds` | When the probe times out | 5 | +| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | +| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `tls.enabled` | Enable TLS traffic support | `false` | +| `tls.preferServerCiphers` | Whether to use the server's TLS cipher preferences rather than the client's | `true` | +| `tls.certificatesSecret` | Name of an existing secret that contains the certificates | `nil` | +| `tls.certFilename` | Certificate filename | `""` | +| `tls.certKeyFilename` | Certificate key filename | `""` | +| `tls.certCAFilename` | CA Certificate filename. If provided, PostgreSQL will authenticate TLS/SSL clients by requesting a certificate from them.
| `nil` | +| `tls.crlFilename` | File containing a Certificate Revocation List | `nil` | +| `metrics.enabled` | Start a prometheus exporter | `false` | +| `metrics.service.type` | Kubernetes Service type | `ClusterIP` | +| `metrics.service.clusterIP` | Static clusterIP or None for headless services | `nil` | +| `metrics.service.annotations` | Additional annotations for metrics exporter pod | `{ prometheus.io/scrape: "true", prometheus.io/port: "9187"}` | +| `metrics.service.loadBalancerIP` | loadBalancerIP if the metrics service type is `LoadBalancer` | `nil` | +| `metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` | +| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` | +| `metrics.serviceMonitor.namespace` | Optional namespace in which to create ServiceMonitor | `nil` | +| `metrics.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` | +| `metrics.serviceMonitor.scrapeTimeout` | Scrape timeout. If not set, the Prometheus default scrape timeout is used | `nil` | +| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus operator | `false` | +| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` | +| `metrics.prometheusRule.namespace` | namespace where prometheusRules resource should be created | the same namespace as postgresql | +| `metrics.prometheusRule.rules` | [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) to be created, check values for an example.
| `[]` | +| `metrics.image.registry` | PostgreSQL Exporter Image registry | `docker.io` | +| `metrics.image.repository` | PostgreSQL Exporter Image name | `bitnami/postgres-exporter` | +| `metrics.image.tag` | PostgreSQL Exporter Image tag | `{TAG_NAME}` | +| `metrics.image.pullPolicy` | PostgreSQL Exporter Image pull policy | `IfNotPresent` | +| `metrics.image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) | +| `metrics.customMetrics` | Additional custom metrics | `nil` | +| `metrics.extraEnvVars` | Extra environment variables to add to exporter | `{}` (evaluated as a template) | +| `metrics.securityContext.*` | Other container security context to be included as-is in the container spec | `{}` | +| `metrics.securityContext.enabled` | Enable security context for metrics | `false` | +| `metrics.securityContext.runAsUser` | User ID for the container for metrics | `1001` | +| `metrics.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 | +| `metrics.livenessProbe.periodSeconds` | How often to perform the probe | 10 | +| `metrics.livenessProbe.timeoutSeconds` | When the probe times out | 5 | +| `metrics.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | +| `metrics.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `metrics.readinessProbe.enabled` | Enable readinessProbe for metrics | `true` | +| `metrics.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 5 | +| `metrics.readinessProbe.periodSeconds` | How often to perform the probe | 10 | +| `metrics.readinessProbe.timeoutSeconds` | When the probe times out | 5 | +| `metrics.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded.
| 6 | +| `metrics.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `updateStrategy` | Update strategy policy | `{type: "RollingUpdate"}` | +| `psp.create` | Create Pod Security Policy | `false` | +| `rbac.create` | Create Role and RoleBinding (required for PSP to work) | `false` | +| `extraDeploy` | Array of extra objects to deploy with the release (evaluated as a template). | `nil` | + +Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example, + +```console +$ helm install my-release \ + --set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \ + bitnami/postgresql +``` + +The above command sets the PostgreSQL `postgres` account password to `secretpassword`. Additionally it creates a database named `my-database`. + +> NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available. + +Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example, + +```console +$ helm install my-release -f values.yaml bitnami/postgresql +``` + +> **Tip**: You can use the default [values.yaml](values.yaml) + +## Configuration and installation details + +### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) + +It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. 
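As an illustrative sketch of pinning an immutable tag in a values file (the tag shown here is an example only; pick a current fully qualified tag from the Docker Hub list):

```yaml
image:
  registry: docker.io
  repository: bitnami/postgresql
  # A fully qualified, immutable tag; avoid rolling tags such as "11" or "latest"
  tag: 11.11.0-debian-10-r71
```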
+ +Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist. + +### Customizing primary and read replica services in a replicated configuration + +At the top level, there is a service object which defines the services for both primary and readReplicas. For deeper customization, there are service objects for both the primary and read types individually. This allows you to override the values in the top level service object so that the primary and read can be of different service types and with different clusterIPs / nodePorts. Also, if you want the primary and read to be of type nodePort, you will need to set the nodePorts to different values to prevent a collision. The values that are deeper in the primary.service or readReplicas.service objects will take precedence over the top level service object. + +### Change PostgreSQL version + +To modify the PostgreSQL version used in this chart you can specify a [valid image tag](https://hub.docker.com/r/bitnami/postgresql/tags/) using the `image.tag` parameter. For example, `image.tag=X.Y.Z`. This approach is also applicable to other images like exporters. + +### postgresql.conf / pg_hba.conf files as configMap + +This Helm chart also supports customizing the whole configuration file. + +Add your custom file to "files/postgresql.conf" in your working directory. This file will be mounted as a configMap to the containers and it will be used for configuring the PostgreSQL server. + +Alternatively, you can add additional PostgreSQL configuration parameters using the `postgresqlExtendedConf` parameter as a dict, using camelCase, e.g. {"sharedBuffers": "500MB"}. Alternatively, to replace the entire default configuration use `postgresqlConfiguration`. + +In addition to these options, you can also set an external ConfigMap with all the configuration files. This is done by setting the `configurationConfigMap` parameter.
Note that this will override the two previous options. + +### Allow settings to be loaded from files other than the default `postgresql.conf` + +If you don't want to provide the whole PostgreSQL configuration file and only specify certain parameters, you can add your extended `.conf` files to "files/conf.d/" in your working directory. +Those files will be mounted as configMap to the containers adding/overwriting the default configuration using the `include_dir` directive that allows settings to be loaded from files other than the default `postgresql.conf`. + +Alternatively, you can also set an external ConfigMap with all the extra configuration files. This is done by setting the `extendedConfConfigMap` parameter. Note that this will override the previous option. + +### Initialize a fresh instance + +The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, they must be located inside the chart folder `files/docker-entrypoint-initdb.d` so they can be consumed as a ConfigMap. + +Alternatively, you can specify custom scripts using the `initdbScripts` parameter as dict. + +In addition to these options, you can also set an external ConfigMap with all the initialization scripts. This is done by setting the `initdbScriptsConfigMap` parameter. Note that this will override the two previous options. If your initialization scripts contain sensitive information such as credentials or passwords, you can use the `initdbScriptsSecret` parameter. + +The allowed extensions are `.sh`, `.sql` and `.sql.gz`. + +### Securing traffic using TLS + +TLS support can be enabled in the chart by specifying the `tls.` parameters while creating a release. The following parameters should be configured to properly enable the TLS support in the chart: + +- `tls.enabled`: Enable TLS support. 
Defaults to `false` +- `tls.certificatesSecret`: Name of an existing secret that contains the certificates. No defaults. +- `tls.certFilename`: Certificate filename. No defaults. +- `tls.certKeyFilename`: Certificate key filename. No defaults. + +For example: + +* First, create the secret with the certificates files: + + ```console + kubectl create secret generic certificates-tls-secret --from-file=./cert.crt --from-file=./cert.key --from-file=./ca.crt + ``` + +* Then, use the following parameters: + + ```console + volumePermissions.enabled=true + tls.enabled=true + tls.certificatesSecret="certificates-tls-secret" + tls.certFilename="cert.crt" + tls.certKeyFilename="cert.key" + ``` + + > Note TLS and VolumePermissions: PostgreSQL requires certain permissions on sensitive files (such as certificate keys) to start up. Due to an ongoing [issue](https://github.com/kubernetes/kubernetes/issues/57923) regarding kubernetes permissions and the use of `containerSecurityContext.runAsUser`, you must enable `volumePermissions` to ensure everything works as expected. + +### Sidecars + +If you need additional containers to run within the same pod as PostgreSQL (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec. + +```yaml +# For the PostgreSQL primary +primary: + sidecars: + - name: your-image-name + image: your-image + imagePullPolicy: Always + ports: + - name: portname + containerPort: 1234 +# For the PostgreSQL replicas +readReplicas: + sidecars: + - name: your-image-name + image: your-image + imagePullPolicy: Always + ports: + - name: portname + containerPort: 1234 +``` + +### Metrics + +The chart can optionally start a metrics exporter for [prometheus](https://prometheus.io).
The metrics endpoint (port 9187) is not exposed and it is expected that the metrics are collected from inside the k8s cluster using something similar to what is described in the [example Prometheus scrape configuration](https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml). + +The exporter allows creating custom metrics from additional SQL queries. See the Chart's `values.yaml` for an example and consult the [exporters documentation](https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file) for more details. + +### Use of global variables + +In more complex scenarios, we may have the following tree of dependencies: + +``` + +--------------+ + | | + +------------+ Chart 1 +-----------+ + | | | | + | --------+------+ | + | | | + | | | + | | | + | | | + v v v ++-------+------+ +--------+------+ +--------+------+ +| | | | | | +| PostgreSQL | | Sub-chart 1 | | Sub-chart 2 | +| | | | | | ++--------------+ +---------------+ +---------------+ +``` + +The three charts above depend on the parent chart Chart 1. However, subcharts 1 and 2 may need to connect to PostgreSQL as well. In order to do so, subcharts 1 and 2 need to know the PostgreSQL credentials, so one option could be to deploy Chart 1 with the following parameters: + +``` +postgresql.postgresqlPassword=testtest +subchart1.postgresql.postgresqlPassword=testtest +subchart2.postgresql.postgresqlPassword=testtest +postgresql.postgresqlDatabase=db1 +subchart1.postgresql.postgresqlDatabase=db1 +subchart2.postgresql.postgresqlDatabase=db1 +``` + +If the number of dependent sub-charts increases, installing the chart with parameters can become increasingly difficult. An alternative would be to set the credentials using global variables as follows: + +``` +global.postgresql.postgresqlPassword=testtest +global.postgresql.postgresqlDatabase=db1 +``` + +This way, the credentials will be available in all of the subcharts.
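The same global values can be kept in a values file instead of `--set` flags; a minimal sketch (the key names match the parameters shown above):

```yaml
global:
  postgresql:
    postgresqlPassword: testtest
    postgresqlDatabase: db1
```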
+ +## Persistence + +The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image stores the PostgreSQL data and configurations at the `/bitnami/postgresql` path of the container. + +Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. +See the [Parameters](#parameters) section to configure the PVC or to disable persistence. + +If the volume already contains data, syncing to the standby nodes will fail for all commits; see the [code](https://github.com/bitnami/bitnami-docker-postgresql/blob/8725fe1d7d30ebe8d9a16e9175d05f7ad9260c93/9.6/debian-9/rootfs/libpostgresql.sh#L518-L556) for details. If you need to use that data, convert it to SQL and import it after `helm install` has finished. + +## NetworkPolicy + +To enable network policy for PostgreSQL, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `true`. + +For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for _all_ pods in the namespace: + +```console +$ kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}" +``` + +With NetworkPolicy enabled, traffic will be limited to just port 5432. + +For a more precise policy, set `networkPolicy.allowExternal=false`. This will only allow pods with the generated client label to connect to PostgreSQL. +This label will be displayed in the output of a successful install. + +## Differences between Bitnami PostgreSQL image and [Docker Official](https://hub.docker.com/_/postgres) image + +- The Docker Official PostgreSQL image does not support replication. If you pass any replication environment variable, it will be ignored.
The only environment variables supported by the Docker Official image are POSTGRES_USER, POSTGRES_DB, POSTGRES_PASSWORD, POSTGRES_INITDB_ARGS, POSTGRES_INITDB_WALDIR and PGDATA. All the remaining environment variables are specific to the Bitnami PostgreSQL image. +- The Bitnami PostgreSQL image is non-root by default. This requires that you run the pod with `securityContext` and update the permissions of the volume with an `initContainer`. A key benefit of this configuration is that the pod follows security best practices and is prepared to run on Kubernetes distributions with hard security constraints like OpenShift. +- For OpenShift, one may either define the runAsUser and fsGroup accordingly, or try this more dynamic option: volumePermissions.securityContext.runAsUser="auto",securityContext.enabled=false,containerSecurityContext.enabled=false,shmVolume.chmod.enabled=false + +### Deploy chart using Docker Official PostgreSQL Image + +From chart version 4.0.0, it is possible to use this chart with the Docker Official PostgreSQL image. +Besides specifying the new Docker repository and tag, it is important to modify the PostgreSQL data directory and volume mount point. Basically, the PostgreSQL data dir cannot be the mount point directly; it has to be a subdirectory. + +``` +image.repository=postgres +image.tag=10.6 +postgresqlDataDir=/data/pgdata +persistence.mountPath=/data/ +``` + +### Setting Pod's affinity + +This chart allows you to set your custom affinity using the `XXX.affinity` parameter(s). Find more information about Pod's affinity in the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). + +As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the [bitnami/common](https://github.com/bitnami/charts/tree/master/bitnami/common#affinities) chart.
To do so, set the `XXX.podAffinityPreset`, `XXX.podAntiAffinityPreset`, or `XXX.nodeAffinityPreset` parameters. + +## Troubleshooting + +Find more information about how to deal with common errors related to Bitnami’s Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues). + +## Upgrading + +It's necessary to specify the existing passwords while performing an upgrade to ensure the secrets are not updated with invalid randomly generated passwords. Remember to specify the existing values of the `postgresqlPassword` and `replication.password` parameters when upgrading the chart: + +```bash +$ helm upgrade my-release bitnami/postgresql \ + --set postgresqlPassword=[POSTGRESQL_PASSWORD] \ + --set replication.password=[REPLICATION_PASSWORD] +``` + +> Note: you need to substitute the placeholders _[POSTGRESQL_PASSWORD]_, and _[REPLICATION_PASSWORD]_ with the values obtained from instructions in the installation notes. + +### To 10.0.0 + +[On November 13, 2020, Helm v2 support was formally finished](https://github.com/helm/charts#status-of-the-project). This major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL. + +**What changes were introduced in this major version?** + +- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3); this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field.
+ +- Move dependency information from the *requirements.yaml* to the *Chart.yaml* +- After running `helm dependency update`, a *Chart.lock* file is generated containing the same structure used in the previous *requirements.lock* +- The different fields present in the *Chart.yaml* file have been ordered alphabetically in a homogeneous way for all Bitnami Helm Charts. + +**Considerations when upgrading to this version** + +- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore +- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3 + +**Useful links** + +- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/ +- https://helm.sh/docs/topics/v2_v3_migration/ +- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/ + +#### Breaking changes + +- The term `master` has been replaced with `primary` and `slave` with `readReplicas` throughout the chart. Role names have changed from `master` and `slave` to `primary` and `read`. + +The upgrade to `10.0.0` should be done reusing the PVCs used to hold the PostgreSQL data on your previous release. To do so, follow the instructions below (the following example assumes that the release name is `postgresql`): + +> NOTE: Please, create a backup of your database before running any of those actions.
+ +Obtain the credentials and the names of the PVCs used to hold the PostgreSQL data on your current release: + +```console +$ export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode) +$ export POSTGRESQL_PVC=$(kubectl get pvc -l app.kubernetes.io/instance=postgresql,role=master -o jsonpath="{.items[0].metadata.name}") +``` + +Delete the PostgreSQL statefulset. Notice the option `--cascade=false`: + +```console +$ kubectl delete statefulsets.apps postgresql-postgresql --cascade=false +``` + +Now the upgrade works: + +```console +$ helm upgrade postgresql bitnami/postgresql --set postgresqlPassword=$POSTGRESQL_PASSWORD --set persistence.existingClaim=$POSTGRESQL_PVC +``` + +You will have to delete the existing PostgreSQL pod; the new statefulset will create a new one: + +```console +$ kubectl delete pod postgresql-postgresql-0 +``` + +Finally, you should see the lines below in PostgreSQL container logs: + +```console +$ kubectl logs $(kubectl get pods -l app.kubernetes.io/instance=postgresql,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}") +... +postgresql 08:05:12.59 INFO ==> Deploying PostgreSQL with persisted data... +... +``` + +### To 9.0.0 + +In this version the chart was adapted to follow the Helm label best practices, see [PR 3021](https://github.com/bitnami/charts/pull/3021). That means backward compatibility is not guaranteed when upgrading the chart to this major version. + +As a workaround, you can delete the existing statefulset (with the `--cascade=false` flag, pods are not deleted) before upgrading the chart.
For example, this can be a valid workflow: + +- Deploy an old version (8.X.X) + +```console +$ helm install postgresql bitnami/postgresql --version 8.10.14 +``` + +- Old version is up and running + +```console +$ helm ls +NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION +postgresql default 1 2020-08-04 13:39:54.783480286 +0000 UTC deployed postgresql-8.10.14 11.8.0 + +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +postgresql-postgresql-0 1/1 Running 0 76s +``` + +- The upgrade to the latest one (9.X.X) is going to fail + +```console +$ helm upgrade postgresql bitnami/postgresql +Error: UPGRADE FAILED: cannot patch "postgresql-postgresql" with kind StatefulSet: StatefulSet.apps "postgresql-postgresql" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden +``` + +- Delete the statefulset + +```console +$ kubectl delete statefulsets.apps --cascade=false postgresql-postgresql +statefulset.apps "postgresql-postgresql" deleted +``` + +- Now the upgrade works + +```console +$ helm upgrade postgresql bitnami/postgresql +$ helm ls +NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION +postgresql default 3 2020-08-04 13:42:08.020385884 +0000 UTC deployed postgresql-9.1.2 11.8.0 +``` + +- We can kill the existing pod and the new statefulset is going to create a new one: + +```console +$ kubectl delete pod postgresql-postgresql-0 +pod "postgresql-postgresql-0" deleted + +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +postgresql-postgresql-0 1/1 Running 0 19s +``` + +Please, note that without the `--cascade=false` both objects (statefulset and pod) are going to be removed and both objects will be deployed again with the `helm upgrade` command + +### To 8.0.0 + +Prefixes the port names with their protocols to comply with Istio conventions. + +If you depend on the port names in your setup, make sure to update them to reflect this change. 
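If your applications reference the PostgreSQL Service port by name, you can inspect the rendered Service after upgrading to obtain the new protocol-prefixed name (the release name and output below are illustrative):

```console
$ kubectl get svc postgresql -o jsonpath='{.spec.ports[*].name}'
tcp-postgresql
```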
+ +### To 7.1.0 + +Adds support for LDAP configuration. + +### To 7.0.0 + +Helm performs a lookup for the object based on its group (apps), version (v1), and kind (Deployment), also known as its GroupVersionKind, or GVK. Changing the GVK is considered a compatibility breaker from Kubernetes' point of view, so you cannot "upgrade" those objects to the new GVK in-place. Earlier versions of Helm 3 did not perform the lookup correctly, which has since been fixed to match the spec. + +In https://github.com/helm/charts/pull/17281 the `apiVersion` of the statefulset resources was updated to `apps/v1` in line with the API deprecations, resulting in compatibility breakage. + +This major version bump signifies this change. + +### To 6.5.7 + +In this version, the chart will use PostgreSQL with the Postgis extension included. The version used with PostgreSQL versions 10, 11 and 12 is Postgis 2.5. It has been compiled with the following dependencies: + +- protobuf +- protobuf-c +- json-c +- geos +- proj + +### To 5.0.0 + +In this version, the **chart is using PostgreSQL 11 instead of PostgreSQL 10**. You can find the main differences and notable changes in the following links: [https://www.postgresql.org/about/news/1894/](https://www.postgresql.org/about/news/1894/) and [https://www.postgresql.org/about/featurematrix/](https://www.postgresql.org/about/featurematrix/). + +For major releases of PostgreSQL, the internal data storage format is subject to change, thus complicating upgrades; you may see errors like the following in the logs: + +```console +Welcome to the Bitnami postgresql container +Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql +Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues +Send us your feedback at containers@bitnami.com + +INFO ==> ** Starting PostgreSQL setup ** +INFO ==> Validating settings in POSTGRESQL_* env vars..
+INFO ==> Initializing PostgreSQL database...
+INFO ==> postgresql.conf file not detected. Generating it...
+INFO ==> pg_hba.conf file not detected. Generating it...
+INFO ==> Deploying PostgreSQL with persisted data...
+INFO ==> Configuring replication parameters
+INFO ==> Loading custom scripts...
+INFO ==> Enabling remote connections
+INFO ==> Stopping PostgreSQL...
+INFO ==> ** PostgreSQL setup finished! **
+
+INFO ==> ** Starting PostgreSQL **
+ [1] FATAL: database files are incompatible with server
+ [1] DETAIL: The data directory was initialized by PostgreSQL version 10, which is not compatible with this version 11.3.
+```
+
+In this case, you should migrate the data from the old chart to the new one following an approach similar to that described in [this section](https://www.postgresql.org/docs/current/upgrading.html#UPGRADING-VIA-PGDUMPALL) of the official documentation. Basically, create a database dump in the old chart, then move and restore it in the new one.
+
+### To 4.0.0
+
+Starting from version `10.7.0-r68`, this chart uses the Bitnami PostgreSQL container by default. This version moves the initialization logic from node.js to bash. This new version of the chart requires setting the `POSTGRES_PASSWORD` in the slaves as well, in order to properly configure the `pg_hba.conf` file. Users of previous versions of the chart are advised to upgrade immediately.
+
+IMPORTANT: If you do not want to upgrade the chart version, make sure you use the `10.7.0-r68` version of the container. Otherwise, you will get this error:
+
+```
+The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development
+```
+
+### To 3.0.0
+
+This release makes it possible to specify different nodeSelector, affinity and tolerations for master and slave pods.
+It also fixes an issue where the `postgresql.master.fullname` helper template did not obey `fullnameOverride`.
+
+#### Breaking changes
+
+- `affinity` has been renamed to `master.affinity` and `slave.affinity`.
+- `tolerations` has been renamed to `master.tolerations` and `slave.tolerations`.
+- `nodeSelector` has been renamed to `master.nodeSelector` and `slave.nodeSelector`.
+
+### To 2.0.0
+
+In order to upgrade from the `0.X.X` branch to `1.X.X`, you should follow the steps below:
+
+- Obtain the service name (`SERVICE_NAME`) and password (`OLD_PASSWORD`) of the existing postgresql chart. You can find the instructions to obtain the password in the NOTES.txt; the service name can be obtained by running
+
+```console
+$ kubectl get svc
+```
+
+- Install (not upgrade) the new version
+
+```console
+$ helm repo update
+$ helm install my-release bitnami/postgresql
+```
+
+- Connect to the new pod (you can obtain the name by running `kubectl get pods`):
+
+```console
+$ kubectl exec -it NAME bash
+```
+
+- Once logged in, create a dump file from the previous database using `pg_dump`; for that, connect to the previous postgresql chart's service:
+
+```console
+$ pg_dump -h SERVICE_NAME -U postgres DATABASE_NAME > /tmp/backup.sql
+```
+
+After running the above command, you will be prompted for a password; this is the password of the previous chart (`OLD_PASSWORD`).
+This operation could take some time depending on the database size.
+
+- Once you have the backup file, you can restore it with a command like the one below:
+
+```console
+$ psql -U postgres DATABASE_NAME < /tmp/backup.sql
+```
+
+In this case, you are accessing the local postgresql, so the password should be the new one (you can find it in NOTES.txt).
+
+If you want to restore the database and the database schema does not exist, it is necessary to first follow the steps described below.
+ +```console +$ psql -U postgres +postgres=# drop database DATABASE_NAME; +postgres=# create database DATABASE_NAME; +postgres=# create user USER_NAME; +postgres=# alter role USER_NAME with password 'BITNAMI_USER_PASSWORD'; +postgres=# grant all privileges on database DATABASE_NAME to USER_NAME; +postgres=# alter database DATABASE_NAME owner to USER_NAME; +``` diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/.helmignore b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/.helmignore new file mode 100644 index 0000000000..50af031725 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/.helmignore @@ -0,0 +1,22 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/Chart.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/Chart.yaml new file mode 100644 index 0000000000..bcc3808d08 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/Chart.yaml @@ -0,0 +1,23 @@ +annotations: + category: Infrastructure +apiVersion: v2 +appVersion: 1.4.2 +description: A Library Helm Chart for grouping common logic between bitnami charts. + This chart is not deployable by itself. 
+home: https://github.com/bitnami/charts/tree/master/bitnami/common
+icon: https://bitnami.com/downloads/logos/bitnami-mark.png
+keywords:
+- common
+- helper
+- template
+- function
+- bitnami
+maintainers:
+- email: containers@bitnami.com
+  name: Bitnami
+name: common
+sources:
+- https://github.com/bitnami/charts
+- http://www.bitnami.com/
+type: library
+version: 1.4.2
diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/README.md b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/README.md
new file mode 100644
index 0000000000..7287cbb5fc
--- /dev/null
+++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/README.md
@@ -0,0 +1,322 @@
+# Bitnami Common Library Chart
+
+A [Helm Library Chart](https://helm.sh/docs/topics/library_charts/#helm) for grouping common logic between bitnami charts.
+
+## TL;DR
+
+```yaml
+dependencies:
+  - name: common
+    version: 0.x.x
+    repository: https://charts.bitnami.com/bitnami
+```
+
+```bash
+$ helm dependency update
+```
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: {{ include "common.names.fullname" . }}
+data:
+  myvalue: "Hello World"
+```
+
+## Introduction
+
+This chart provides common template helpers that can be used to develop new charts using the [Helm](https://helm.sh) package manager.
+
+Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of [Bitnami Kubernetes Production Runtime](https://kubeprod.io/) (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications.
+
+## Prerequisites
+
+- Kubernetes 1.12+
+- Helm 3.1.0
+
+## Parameters
+
+The following tables list the helpers available in the library, scoped by section.
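+For instance, once the dependency is in place, a helper can be invoked from any template of the consuming chart via `include`. The snippet below is a minimal sketch (the label key and values are illustrative assumptions; the helper itself is defined in `templates/_affinities.tpl` of this library):
+
+```yaml
+# templates/deployment.yaml (excerpt of a consuming chart)
+spec:
+  template:
+    spec:
+      affinity:
+        nodeAffinity:
+          {{- include "common.affinities.nodes.soft" (dict "key" "kubernetes.io/os" "values" (list "linux")) | nindent 10 }}
+```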
+
+### Affinities
+
+| Helper identifier              | Description                                          | Expected Input                                 |
+|--------------------------------|------------------------------------------------------|------------------------------------------------|
+| `common.affinities.nodes.soft` | Return a soft nodeAffinity definition                | `dict "key" "FOO" "values" (list "BAR" "BAZ")` |
+| `common.affinities.nodes.hard` | Return a hard nodeAffinity definition                | `dict "key" "FOO" "values" (list "BAR" "BAZ")` |
+| `common.affinities.pods.soft`  | Return a soft podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $`           |
+| `common.affinities.pods.hard`  | Return a hard podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $`           |
+
+### Capabilities
+
+| Helper identifier                            | Description                                                                                    | Expected Input    |
+|----------------------------------------------|------------------------------------------------------------------------------------------------|-------------------|
+| `common.capabilities.kubeVersion`            | Return the target Kubernetes version (using client default if .Values.kubeVersion is not set). | `.` Chart context |
+| `common.capabilities.deployment.apiVersion`  | Return the appropriate apiVersion for deployment.                                              | `.` Chart context |
+| `common.capabilities.statefulset.apiVersion` | Return the appropriate apiVersion for statefulset.                                             | `.` Chart context |
+| `common.capabilities.ingress.apiVersion`     | Return the appropriate apiVersion for ingress.                                                 | `.` Chart context |
+| `common.capabilities.rbac.apiVersion`        | Return the appropriate apiVersion for RBAC resources.                                          | `.` Chart context |
+| `common.capabilities.crd.apiVersion`         | Return the appropriate apiVersion for CRDs.
| `.` Chart context | +| `common.capabilities.supportsHelmVersion` | Returns true if the used Helm version is 3.3+ | `.` Chart context | + +### Errors + +| Helper identifier | Description | Expected Input | +|-----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------| +| `common.errors.upgrade.passwords.empty` | It will ensure required passwords are given when we are upgrading a chart. If `validationErrors` is not empty it will throw an error and will stop the upgrade action. | `dict "validationErrors" (list $validationError00 $validationError01) "context" $` | + +### Images + +| Helper identifier | Description | Expected Input | +|-----------------------------|------------------------------------------------------|---------------------------------------------------------------------------------------------------------| +| `common.images.image` | Return the proper and full image name | `dict "imageRoot" .Values.path.to.the.image "global" $`, see [ImageRoot](#imageroot) for the structure. 
| `common.images.pullSecrets` | Return the proper Docker Image Registry Secret Names | `dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global` |
+
+### Ingress
+
+| Helper identifier        | Description                                                          | Expected Input |
+|--------------------------|----------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `common.ingress.backend` | Generate a proper Ingress backend entry depending on the API version | `dict "serviceName" "foo" "servicePort" "bar"`, see the [Ingress deprecation notice](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) for the syntax differences |
+
+### Labels
+
+| Helper identifier           | Description                                                  | Expected Input    |
+|-----------------------------|--------------------------------------------------------------|-------------------|
+| `common.labels.standard`    | Return Kubernetes standard labels                            | `.` Chart context |
+| `common.labels.matchLabels` | Return the labels to use on `spec.selector.matchLabels`      | `.` Chart context |
+
+### Names
+
+| Helper identifier       | Description                                                | Expected Input    |
+|-------------------------|------------------------------------------------------------|-------------------|
+| `common.names.name`     | Expand the name of the chart or use `.Values.nameOverride` | `.` Chart context |
+| `common.names.fullname` | Create a default fully qualified app name.
| `.` Chart context |
+| `common.names.chart`    | Chart name plus version                                    | `.` Chart context |
+
+### Secrets
+
+| Helper identifier         | Description                                                   | Expected Input |
+|---------------------------|---------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `common.secrets.name`     | Generate the name of the secret.                              | `dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $` see [ExistingSecret](#existingsecret) for the structure.                                                                    |
+| `common.secrets.key`      | Generate secret key.                                          | `dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName"` see [ExistingSecret](#existingsecret) for the structure.                                                                                               |
+| `common.passwords.manage` | Generate secret password or retrieve one if already created.  | `dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $`, the length, strong and chartName fields are optional. |
+| `common.secrets.exists`   | Returns whether a previously generated secret already exists. | `dict "secret" "secret-name" "context" $`                                                                                                                                                                                         |
+
+### Storage
+
+| Helper identifier      | Description                     | Expected Input |
+|------------------------|---------------------------------|---------------------------------------------------------------------------------------------------------------------|
+| `common.storage.class` | Return the proper Storage Class | `dict "persistence" .Values.path.to.the.persistence "global" $`, see [Persistence](#persistence) for the structure.
| + +### TplValues + +| Helper identifier | Description | Expected Input | +|---------------------------|----------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------| +| `common.tplvalues.render` | Renders a value that contains template | `dict "value" .Values.path.to.the.Value "context" $`, value is the value should rendered as template, context frequently is the chart context `$` or `.` | + +### Utils + +| Helper identifier | Description | Expected Input | +|--------------------------------|------------------------------------------------------------------------------------------|------------------------------------------------------------------------| +| `common.utils.fieldToEnvVar` | Build environment variable name given a field. | `dict "field" "my-password"` | +| `common.utils.secret.getvalue` | Print instructions to get a secret value. | `dict "secret" "secret-name" "field" "secret-value-field" "context" $` | +| `common.utils.getValueFromKey` | Gets a value from `.Values` object given its key path | `dict "key" "path.to.key" "context" $` | +| `common.utils.getKeyFromList` | Returns first `.Values` key with a defined value or first of the list if all non-defined | `dict "keys" (list "path.to.key1" "path.to.key2") "context" $` | + +### Validations + +| Helper identifier | Description | Expected Input | +|--------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| +| `common.validations.values.single.empty` | Validate a value must not be empty. 
| `dict "valueKey" "path.to.value" "secret" "secret.name" "field" "my-password" "subchart" "subchart" "context" $` secret, field and subchart are optional. In case they are given, the helper will generate how-to-get instructions. See [ValidateValue](#validatevalue) |
+| `common.validations.values.multiple.empty` | Validate that multiple values must not be empty. It returns a shared error for all the values. | `dict "required" (list $validateValueConf00 $validateValueConf01) "context" $`. See [ValidateValue](#validatevalue) |
+| `common.validations.values.mariadb.passwords` | This helper will ensure the required passwords for MariaDB are not empty. It returns a shared error for all the values. | `dict "secret" "mariadb-secret" "subchart" "true" "context" $` the subchart field is optional and could be true or false; it depends on where you will use the mariadb chart and the helper. |
+| `common.validations.values.postgresql.passwords` | This helper will ensure the required passwords for PostgreSQL are not empty. It returns a shared error for all the values. | `dict "secret" "postgresql-secret" "subchart" "true" "context" $` the subchart field is optional and could be true or false; it depends on where you will use the postgresql chart and the helper. |
+| `common.validations.values.redis.passwords` | This helper will ensure the required passwords for Redis™ are not empty. It returns a shared error for all the values. | `dict "secret" "redis-secret" "subchart" "true" "context" $` the subchart field is optional and could be true or false; it depends on where you will use the redis chart and the helper. |
+| `common.validations.values.cassandra.passwords` | This helper will ensure the required passwords for Cassandra are not empty. It returns a shared error for all the values. | `dict "secret" "cassandra-secret" "subchart" "true" "context" $` the subchart field is optional and could be true or false; it depends on where you will use the cassandra chart and the helper.
|
+| `common.validations.values.mongodb.passwords` | This helper will ensure the required passwords for MongoDB® are not empty. It returns a shared error for all the values. | `dict "secret" "mongodb-secret" "subchart" "true" "context" $` the subchart field is optional and could be true or false; it depends on where you will use the mongodb chart and the helper. |
+
+### Warnings
+
+| Helper identifier            | Description                        | Expected Input                                             |
+|------------------------------|------------------------------------|------------------------------------------------------------|
+| `common.warnings.rollingTag` | Warning about using a rolling tag. | `ImageRoot` see [ImageRoot](#imageroot) for the structure. |
+
+## Special input schemas
+
+### ImageRoot
+
+```yaml
+registry:
+  type: string
+  description: Docker registry where the image is located
+  example: docker.io
+
+repository:
+  type: string
+  description: Repository and image name
+  example: bitnami/nginx
+
+tag:
+  type: string
+  description: image tag
+  example: 1.16.1-debian-10-r63
+
+pullPolicy:
+  type: string
+  description: Specify an imagePullPolicy. Defaults to 'Always' if the image tag is 'latest', else set to 'IfNotPresent'
+
+pullSecrets:
+  type: array
+  items:
+    type: string
+  description: Optionally specify an array of imagePullSecrets.
+
+debug:
+  type: boolean
+  description: Set to true if you would like to see extra information in logs
+  example: false
+
+## An instance would be:
+# registry: docker.io
+# repository: bitnami/nginx
+# tag: 1.16.1-debian-10-r63
+# pullPolicy: IfNotPresent
+# debug: false
+```
+
+### Persistence
+
+```yaml
+enabled:
+  type: boolean
+  description: Whether to enable persistence.
+  example: true
+
+storageClass:
+  type: string
+  description: Ghost data Persistent Volume Storage Class. If set to "-", storageClassName: "" which disables dynamic provisioning.
+  example: "-"
+
+accessMode:
+  type: string
+  description: Access mode for the Persistent Volume Storage.
+  example: ReadWriteOnce
+
+size:
+  type: string
+  description: Size of the Persistent Volume Storage.
+  example: 8Gi
+
+path:
+  type: string
+  description: Path to be persisted.
+  example: /bitnami
+
+## An instance would be:
+# enabled: true
+# storageClass: "-"
+# accessMode: ReadWriteOnce
+# size: 8Gi
+# path: /bitnami
+```
+
+### ExistingSecret
+
+```yaml
+name:
+  type: string
+  description: Name of the existing secret.
+  example: mySecret
+keyMapping:
+  description: Mapping between the expected key name and the name of the key in the existing secret.
+  type: object
+
+## An instance would be:
+# name: mySecret
+# keyMapping:
+#   password: myPasswordKey
+```
+
+#### Example of use
+
+When we store sensitive data for a deployment in a secret, sometimes we want to give users the possibility of using their own existing secrets.
+
+```yaml
+# templates/secret.yaml
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: {{ include "common.names.fullname" . }}
+  labels:
+    app: {{ include "common.names.fullname" . }}
+type: Opaque
+data:
+  password: {{ .Values.password | b64enc | quote }}
+
+# templates/dpl.yaml
+---
+...
+      env:
+        - name: PASSWORD
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "common.secrets.name" (dict "existingSecret" .Values.existingSecret "context" $) }}
+              key: {{ include "common.secrets.key" (dict "existingSecret" .Values.existingSecret "key" "password") }}
+...
+
+# values.yaml
+---
+name: mySecret
+keyMapping:
+  password: myPasswordKey
+```
+
+### ValidateValue
+
+#### NOTES.txt
+
+```console
+{{- $validateValueConf00 := (dict "valueKey" "path.to.value00" "secret" "secretName" "field" "password-00") -}}
+{{- $validateValueConf01 := (dict "valueKey" "path.to.value01" "secret" "secretName" "field" "password-01") -}}
+
+{{ include "common.validations.values.multiple.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }}
+```
+
+If we force those values to be empty, we will see some alerts:
+
+```console
+$ helm install test mychart --set path.to.value00="",path.to.value01=""
+ 'path.to.value00' must not be empty, please add '--set path.to.value00=$PASSWORD_00' to the command. To get the current value:
+
+        export PASSWORD_00=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-00}" | base64 --decode)
+
+ 'path.to.value01' must not be empty, please add '--set path.to.value01=$PASSWORD_01' to the command. To get the current value:
+
+        export PASSWORD_01=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-01}" | base64 --decode)
+```
+
+## Upgrading
+
+### To 1.0.0
+
+[On November 13, 2020, Helm v2 support formally ended](https://github.com/helm/charts#status-of-the-project). This major version is the result of the changes required to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL.
+
+**What changes were introduced in this major version?**
+
+- Previous versions of this Helm Chart used `apiVersion: v1` (installable by both Helm 2 and 3); this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field.
+- Use `type: library`. 
[Here](https://v3.helm.sh/docs/faq/#library-chart-support) you can find more information.
+- The different fields present in the *Chart.yaml* file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts.
+
+**Considerations when upgrading to this version**
+
+- If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues
+- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore
+- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3
+
+**Useful links**
+
+- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/
+- https://helm.sh/docs/topics/v2_v3_migration/
+- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/
diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_affinities.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_affinities.tpl
new file mode 100644
index 0000000000..493a6dc7e4
--- /dev/null
+++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_affinities.tpl
@@ -0,0 +1,94 @@
+{{/* vim: set filetype=mustache: */}}
+
+{{/*
+Return a soft nodeAffinity definition
+{{ include "common.affinities.nodes.soft" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}}
+*/}}
+{{- define "common.affinities.nodes.soft" -}}
+preferredDuringSchedulingIgnoredDuringExecution:
+  - preference:
+      matchExpressions:
+        - key: {{ .key }}
+          operator: In
+          values:
+            {{- range .values }}
+            - {{ .
}} + {{- end }} + weight: 1 +{{- end -}} + +{{/* +Return a hard nodeAffinity definition +{{ include "common.affinities.nodes.hard" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.nodes.hard" -}} +requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: {{ .key }} + operator: In + values: + {{- range .values }} + - {{ . }} + {{- end }} +{{- end -}} + +{{/* +Return a nodeAffinity definition +{{ include "common.affinities.nodes" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.nodes" -}} + {{- if eq .type "soft" }} + {{- include "common.affinities.nodes.soft" . -}} + {{- else if eq .type "hard" }} + {{- include "common.affinities.nodes.hard" . -}} + {{- end -}} +{{- end -}} + +{{/* +Return a soft podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods.soft" (dict "component" "FOO" "context" $) -}} +*/}} +{{- define "common.affinities.pods.soft" -}} +{{- $component := default "" .component -}} +preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 10 }} + {{- if not (empty $component) }} + {{ printf "app.kubernetes.io/component: %s" $component }} + {{- end }} + namespaces: + - {{ .context.Release.Namespace | quote }} + topologyKey: kubernetes.io/hostname + weight: 1 +{{- end -}} + +{{/* +Return a hard podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods.hard" (dict "component" "FOO" "context" $) -}} +*/}} +{{- define "common.affinities.pods.hard" -}} +{{- $component := default "" .component -}} +requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 8 }} + {{- if not (empty $component) }} + {{ printf "app.kubernetes.io/component: %s" $component }} + {{- end }} + namespaces: + - {{ 
.context.Release.Namespace | quote }} + topologyKey: kubernetes.io/hostname +{{- end -}} + +{{/* +Return a podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.pods" -}} + {{- if eq .type "soft" }} + {{- include "common.affinities.pods.soft" . -}} + {{- else if eq .type "hard" }} + {{- include "common.affinities.pods.hard" . -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_capabilities.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_capabilities.tpl new file mode 100644 index 0000000000..4dde56a38d --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_capabilities.tpl @@ -0,0 +1,95 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Return the target Kubernetes version +*/}} +{{- define "common.capabilities.kubeVersion" -}} +{{- if .Values.global }} + {{- if .Values.global.kubeVersion }} + {{- .Values.global.kubeVersion -}} + {{- else }} + {{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}} + {{- end -}} +{{- else }} +{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for deployment. +*/}} +{{- define "common.capabilities.deployment.apiVersion" -}} +{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "extensions/v1beta1" -}} +{{- else -}} +{{- print "apps/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for statefulset. +*/}} +{{- define "common.capabilities.statefulset.apiVersion" -}} +{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "apps/v1beta1" -}} +{{- else -}} +{{- print "apps/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for ingress. 
+*/}} +{{- define "common.capabilities.ingress.apiVersion" -}} +{{- if .Values.ingress -}} +{{- if .Values.ingress.apiVersion -}} +{{- .Values.ingress.apiVersion -}} +{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "extensions/v1beta1" -}} +{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "networking.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "networking.k8s.io/v1" -}} +{{- end }} +{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "extensions/v1beta1" -}} +{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "networking.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "networking.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for RBAC resources. +*/}} +{{- define "common.capabilities.rbac.apiVersion" -}} +{{- if semverCompare "<1.17-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "rbac.authorization.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "rbac.authorization.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for CRDs. +*/}} +{{- define "common.capabilities.crd.apiVersion" -}} +{{- if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "apiextensions.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "apiextensions.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Returns true if the used Helm version is 3.3+. +A way to check the used Helm version was not introduced until version 3.3.0 with .Capabilities.HelmVersion, which contains an additional "{}}" structure. +This check is introduced as a regexMatch instead of {{ if .Capabilities.HelmVersion }} because checking for the key HelmVersion in <3.3 results in a "interface not found" error. 
+**To be removed when the catalog's minimum Helm version is 3.3**
+*/}}
+{{- define "common.capabilities.supportsHelmVersion" -}}
+{{- if regexMatch "{(v[0-9])*[^}]*}}$" (.Capabilities | toString ) }}
+  {{- true -}}
+{{- end -}}
+{{- end -}}
diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_errors.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_errors.tpl
new file mode 100644
index 0000000000..a79cc2e322
--- /dev/null
+++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_errors.tpl
@@ -0,0 +1,23 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Throw an error when upgrading using empty password values that must not be empty.
+
+Usage:
+{{- $validationError00 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password00" "secret" "secretName" "field" "password-00") -}}
+{{- $validationError01 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password01" "secret" "secretName" "field" "password-01") -}}
+{{ include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $validationError00 $validationError01) "context" $) }}
+
+Required password params:
+  - validationErrors - String - Required. List of validation strings to be returned; if it is empty it won't throw an error.
+  - context - Context - Required. Parent context.
+*/}}
+{{- define "common.errors.upgrade.passwords.empty" -}}
+  {{- $validationErrors := join "" .validationErrors -}}
+  {{- if and $validationErrors .context.Release.IsUpgrade -}}
+    {{- $errorString := "\nPASSWORDS ERROR: You must provide your current passwords when upgrading the release." -}}
+    {{- $errorString = print $errorString "\n Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims."
-}} + {{- $errorString = print $errorString "\n Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases" -}} + {{- $errorString = print $errorString "\n%s" -}} + {{- printf $errorString $validationErrors | fail -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_images.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_images.tpl new file mode 100644 index 0000000000..60f04fd6e2 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_images.tpl @@ -0,0 +1,47 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Return the proper image name +{{ include "common.images.image" ( dict "imageRoot" .Values.path.to.the.image "global" $) }} +*/}} +{{- define "common.images.image" -}} +{{- $registryName := .imageRoot.registry -}} +{{- $repositoryName := .imageRoot.repository -}} +{{- $tag := .imageRoot.tag | toString -}} +{{- if .global }} + {{- if .global.imageRegistry }} + {{- $registryName = .global.imageRegistry -}} + {{- end -}} +{{- end -}} +{{- if $registryName }} +{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} +{{- else -}} +{{- printf "%s:%s" $repositoryName $tag -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper Docker Image Registry Secret Names +{{ include "common.images.pullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global) }} +*/}} +{{- define "common.images.pullSecrets" -}} + {{- $pullSecrets := list }} + + {{- if .global }} + {{- range .global.imagePullSecrets -}} + {{- $pullSecrets = append $pullSecrets . -}} + {{- end -}} + {{- end -}} + + {{- range .images -}} + {{- range .pullSecrets -}} + {{- $pullSecrets = append $pullSecrets . 
-}} + {{- end -}} + {{- end -}} + + {{- if (not (empty $pullSecrets)) }} +imagePullSecrets: + {{- range $pullSecrets }} + - name: {{ . }} + {{- end }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_ingress.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_ingress.tpl new file mode 100644 index 0000000000..622ef50e3c --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_ingress.tpl @@ -0,0 +1,42 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Generate backend entry that is compatible with all Kubernetes API versions. + +Usage: +{{ include "common.ingress.backend" (dict "serviceName" "backendName" "servicePort" "backendPort" "context" $) }} + +Params: + - serviceName - String. Name of an existing service backend + - servicePort - String/Int. Port name (or number) of the service. It will be translated to different yaml depending if it is a string or an integer. + - context - Dict - Required. The context for the template evaluation. +*/}} +{{- define "common.ingress.backend" -}} +{{- $apiVersion := (include "common.capabilities.ingress.apiVersion" .context) -}} +{{- if or (eq $apiVersion "extensions/v1beta1") (eq $apiVersion "networking.k8s.io/v1beta1") -}} +serviceName: {{ .serviceName }} +servicePort: {{ .servicePort }} +{{- else -}} +service: + name: {{ .serviceName }} + port: + {{- if typeIs "string" .servicePort }} + name: {{ .servicePort }} + {{- else if typeIs "int" .servicePort }} + number: {{ .servicePort }} + {{- end }} +{{- end -}} +{{- end -}} + +{{/* +Print "true" if the API pathType field is supported +Usage: +{{ include "common.ingress.supportsPathType" . 
}} +*/}} +{{- define "common.ingress.supportsPathType" -}} +{{- if (semverCompare "<1.18-0" (include "common.capabilities.kubeVersion" .)) -}} +{{- print "false" -}} +{{- else -}} +{{- print "true" -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_labels.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_labels.tpl new file mode 100644 index 0000000000..252066c7e2 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_labels.tpl @@ -0,0 +1,18 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Kubernetes standard labels +*/}} +{{- define "common.labels.standard" -}} +app.kubernetes.io/name: {{ include "common.names.name" . }} +helm.sh/chart: {{ include "common.names.chart" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end -}} + +{{/* +Labels to use on deploy.spec.selector.matchLabels and svc.spec.selector +*/}} +{{- define "common.labels.matchLabels" -}} +app.kubernetes.io/name: {{ include "common.names.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_names.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_names.tpl new file mode 100644 index 0000000000..adf2a74f48 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_names.tpl @@ -0,0 +1,32 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "common.names.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. 
+*/}} +{{- define "common.names.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "common.names.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_secrets.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_secrets.tpl new file mode 100644 index 0000000000..60b84a7019 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_secrets.tpl @@ -0,0 +1,129 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Generate secret name. + +Usage: +{{ include "common.secrets.name" (dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $) }} + +Params: + - existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user + to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility. + +info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret + - defaultNameSuffix - String - Optional. It is used only if we have several secrets in the same deployment. + - context - Dict - Required. The context for the template evaluation. 
+*/}} +{{- define "common.secrets.name" -}} +{{- $name := (include "common.names.fullname" .context) -}} + +{{- if .defaultNameSuffix -}} +{{- $name = printf "%s-%s" $name .defaultNameSuffix | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{- with .existingSecret -}} +{{- if not (typeIs "string" .) -}} +{{- with .name -}} +{{- $name = . -}} +{{- end -}} +{{- else -}} +{{- $name = . -}} +{{- end -}} +{{- end -}} + +{{- printf "%s" $name -}} +{{- end -}} + +{{/* +Generate secret key. + +Usage: +{{ include "common.secrets.key" (dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName") }} + +Params: + - existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user + to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility. + +info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret + - key - String - Required. Name of the key in the secret. +*/}} +{{- define "common.secrets.key" -}} +{{- $key := .key -}} + +{{- if .existingSecret -}} + {{- if not (typeIs "string" .existingSecret) -}} + {{- if .existingSecret.keyMapping -}} + {{- $key = index .existingSecret.keyMapping $.key -}} + {{- end -}} + {{- end }} +{{- end -}} + +{{- printf "%s" $key -}} +{{- end -}} + +{{/* +Generate secret password or retrieve one if already created. + +Usage: +{{ include "common.secrets.passwords.manage" (dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $) }} + +Params: + - secret - String - Required - Name of the 'Secret' resource where the password is stored. + - key - String - Required - Name of the key in the secret. + - providedValues - List - Required - The path to the validating value in the values.yaml, e.g: "mysql.password". Will pick first parameter with a defined value. 
+ - length - int - Optional - Length of the generated random password. + - strong - Boolean - Optional - Whether to add symbols to the generated random password. + - chartName - String - Optional - Name of the chart used when said chart is deployed as a subchart. + - context - Context - Required - Parent context. +*/}} +{{- define "common.secrets.passwords.manage" -}} + +{{- $password := "" }} +{{- $subchart := "" }} +{{- $chartName := default "" .chartName }} +{{- $passwordLength := default 10 .length }} +{{- $providedPasswordKey := include "common.utils.getKeyFromList" (dict "keys" .providedValues "context" $.context) }} +{{- $providedPasswordValue := include "common.utils.getValueFromKey" (dict "key" $providedPasswordKey "context" $.context) }} +{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }} +{{- if $secret }} + {{- if index $secret.data .key }} + {{- $password = index $secret.data .key }} + {{- end -}} +{{- else if $providedPasswordValue }} + {{- $password = $providedPasswordValue | toString | b64enc | quote }} +{{- else }} + + {{- if .context.Values.enabled }} + {{- $subchart = $chartName }} + {{- end -}} + + {{- $requiredPassword := dict "valueKey" $providedPasswordKey "secret" .secret "field" .key "subchart" $subchart "context" $.context -}} + {{- $requiredPasswordError := include "common.validations.values.single.empty" $requiredPassword -}} + {{- $passwordValidationErrors := list $requiredPasswordError -}} + {{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" $passwordValidationErrors "context" $.context) -}} + + {{- if .strong }} + {{- $subStr := list (lower (randAlpha 1)) (randNumeric 1) (upper (randAlpha 1)) | join "_" }} + {{- $password = randAscii $passwordLength }} + {{- $password = regexReplaceAllLiteral "\\W" $password "@" | substr 5 $passwordLength }} + {{- $password = printf "%s%s" $subStr $password | toString | shuffle | b64enc | quote }} + {{- else }} + {{- $password = randAlphaNum 
$passwordLength | b64enc | quote }} + {{- end }} +{{- end -}} +{{- printf "%s" $password -}} +{{- end -}} + +{{/* +Returns whether a previous generated secret already exists + +Usage: +{{ include "common.secrets.exists" (dict "secret" "secret-name" "context" $) }} + +Params: + - secret - String - Required - Name of the 'Secret' resource where the password is stored. + - context - Context - Required - Parent context. +*/}} +{{- define "common.secrets.exists" -}} +{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }} +{{- if $secret }} + {{- true -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_storage.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_storage.tpl new file mode 100644 index 0000000000..60e2a844f6 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_storage.tpl @@ -0,0 +1,23 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Return the proper Storage Class +{{ include "common.storage.class" ( dict "persistence" .Values.path.to.the.persistence "global" $) }} +*/}} +{{- define "common.storage.class" -}} + +{{- $storageClass := .persistence.storageClass -}} +{{- if .global -}} + {{- if .global.storageClass -}} + {{- $storageClass = .global.storageClass -}} + {{- end -}} +{{- end -}} + +{{- if $storageClass -}} + {{- if (eq "-" $storageClass) -}} + {{- printf "storageClassName: \"\"" -}} + {{- else }} + {{- printf "storageClassName: %s" $storageClass -}} + {{- end -}} +{{- end -}} + +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_tplvalues.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_tplvalues.tpl new file mode 100644 index 0000000000..2db166851b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_tplvalues.tpl @@ -0,0 +1,13 @@ 
+{{/* vim: set filetype=mustache: */}} +{{/* +Renders a value that contains a template. +Usage: +{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $) }} +*/}} +{{- define "common.tplvalues.render" -}} + {{- if typeIs "string" .value }} + {{- tpl .value .context }} + {{- else }} + {{- tpl (.value | toYaml) .context }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_utils.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_utils.tpl new file mode 100644 index 0000000000..ea083a249f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_utils.tpl @@ -0,0 +1,62 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Print instructions to get a secret value. +Usage: +{{ include "common.utils.secret.getvalue" (dict "secret" "secret-name" "field" "secret-value-field" "context" $) }} +*/}} +{{- define "common.utils.secret.getvalue" -}} +{{- $varname := include "common.utils.fieldToEnvVar" . -}} +export {{ $varname }}=$(kubectl get secret --namespace {{ .context.Release.Namespace | quote }} {{ .secret }} -o jsonpath="{.data.{{ .field }}}" | base64 --decode) +{{- end -}} + +{{/* +Build env var name given a field +Usage: +{{ include "common.utils.fieldToEnvVar" (dict "field" "my-password") }} +*/}} +{{- define "common.utils.fieldToEnvVar" -}} + {{- $fieldNameSplit := splitList "-" .field -}} + {{- $upperCaseFieldNameSplit := list -}} + + {{- range $fieldNameSplit -}} + {{- $upperCaseFieldNameSplit = append $upperCaseFieldNameSplit ( upper . ) -}} + {{- end -}} + + {{ join "_" $upperCaseFieldNameSplit }} +{{- end -}} + +{{/* +Gets a value from .Values given its key path +Usage: +{{ include "common.utils.getValueFromKey" (dict "key" "path.to.key" "context" $) }} +*/}} +{{- define "common.utils.getValueFromKey" -}} +{{- $splitKey := splitList "."
.key -}} +{{- $value := "" -}} +{{- $latestObj := $.context.Values -}} +{{- range $splitKey -}} + {{- if not $latestObj -}} + {{- printf "please review the entire path of '%s' exists in values" $.key | fail -}} + {{- end -}} + {{- $value = ( index $latestObj . ) -}} + {{- $latestObj = $value -}} +{{- end -}} +{{- printf "%v" (default "" $value) -}} +{{- end -}} + +{{/* +Returns first .Values key with a defined value or first of the list if all non-defined +Usage: +{{ include "common.utils.getKeyFromList" (dict "keys" (list "path.to.key1" "path.to.key2") "context" $) }} +*/}} +{{- define "common.utils.getKeyFromList" -}} +{{- $key := first .keys -}} +{{- $reverseKeys := reverse .keys }} +{{- range $reverseKeys }} + {{- $value := include "common.utils.getValueFromKey" (dict "key" . "context" $.context ) }} + {{- if $value -}} + {{- $key = . }} + {{- end -}} +{{- end -}} +{{- printf "%s" $key -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_warnings.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_warnings.tpl new file mode 100644 index 0000000000..ae10fa41ee --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/_warnings.tpl @@ -0,0 +1,14 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Warning about using rolling tag. +Usage: +{{ include "common.warnings.rollingTag" .Values.path.to.the.imageRoot }} +*/}} +{{- define "common.warnings.rollingTag" -}} + +{{- if and (contains "bitnami/" .repository) (not (.tag | toString | regexFind "-r\\d+$|sha256:")) }} +WARNING: Rolling tag detected ({{ .repository }}:{{ .tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment. 
++info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/ +{{- end }} + +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_cassandra.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_cassandra.tpl new file mode 100644 index 0000000000..8679ddffb1 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_cassandra.tpl @@ -0,0 +1,72 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate Cassandra required passwords are not empty. + +Usage: +{{ include "common.validations.values.cassandra.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where Cassandra values are stored, e.g: "cassandra-passwords-secret" + - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.cassandra.passwords" -}} + {{- $existingSecret := include "common.cassandra.values.existingSecret" . -}} + {{- $enabled := include "common.cassandra.values.enabled" . -}} + {{- $dbUserPrefix := include "common.cassandra.values.key.dbUser" . -}} + {{- $valueKeyPassword := printf "%s.password" $dbUserPrefix -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "cassandra-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.cassandra.values.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. 
Whether Cassandra is used as subchart or not. Default: false +*/}} +{{- define "common.cassandra.values.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.cassandra.dbUser.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.dbUser.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled cassandra. + +Usage: +{{ include "common.cassandra.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.cassandra.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.cassandra.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key dbUser + +Usage: +{{ include "common.cassandra.values.key.dbUser" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false +*/}} +{{- define "common.cassandra.values.key.dbUser" -}} + {{- if .subchart -}} + cassandra.dbUser + {{- else -}} + dbUser + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_mariadb.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_mariadb.tpl new file mode 100644 index 0000000000..bb5ed7253d --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_mariadb.tpl @@ -0,0 +1,103 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate MariaDB required passwords are not empty. + +Usage: +{{ include "common.validations.values.mariadb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where MariaDB values are stored, e.g: "mysql-passwords-secret" + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. 
Default: false +*/}} +{{- define "common.validations.values.mariadb.passwords" -}} + {{- $existingSecret := include "common.mariadb.values.auth.existingSecret" . -}} + {{- $enabled := include "common.mariadb.values.enabled" . -}} + {{- $architecture := include "common.mariadb.values.architecture" . -}} + {{- $authPrefix := include "common.mariadb.values.key.auth" . -}} + {{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}} + {{- $valueKeyUsername := printf "%s.username" $authPrefix -}} + {{- $valueKeyPassword := printf "%s.password" $authPrefix -}} + {{- $valueKeyReplicationPassword := printf "%s.replicationPassword" $authPrefix -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mariadb-root-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}} + + {{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }} + {{- if not (empty $valueUsername) -}} + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mariadb-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + {{- end -}} + + {{- if (eq $architecture "replication") -}} + {{- $requiredReplicationPassword := dict "valueKey" $valueKeyReplicationPassword "secret" .secret "field" "mariadb-replication-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredReplicationPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.mariadb.values.auth.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. 
Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.mariadb.values.auth.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.mariadb.auth.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.auth.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled mariadb. + +Usage: +{{ include "common.mariadb.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.mariadb.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.mariadb.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for architecture + +Usage: +{{ include "common.mariadb.values.architecture" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.mariadb.values.architecture" -}} + {{- if .subchart -}} + {{- .context.Values.mariadb.architecture -}} + {{- else -}} + {{- .context.Values.architecture -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key auth + +Usage: +{{ include "common.mariadb.values.key.auth" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. 
Default: false +*/}} +{{- define "common.mariadb.values.key.auth" -}} + {{- if .subchart -}} + mariadb.auth + {{- else -}} + auth + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_mongodb.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_mongodb.tpl new file mode 100644 index 0000000000..7d5ecbccb4 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_mongodb.tpl @@ -0,0 +1,108 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate MongoDB(R) required passwords are not empty. + +Usage: +{{ include "common.validations.values.mongodb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where MongoDB(R) values are stored, e.g: "mongodb-passwords-secret" + - subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.mongodb.passwords" -}} + {{- $existingSecret := include "common.mongodb.values.auth.existingSecret" . -}} + {{- $enabled := include "common.mongodb.values.enabled" . -}} + {{- $authPrefix := include "common.mongodb.values.key.auth" . -}} + {{- $architecture := include "common.mongodb.values.architecture" . 
-}} + {{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}} + {{- $valueKeyUsername := printf "%s.username" $authPrefix -}} + {{- $valueKeyDatabase := printf "%s.database" $authPrefix -}} + {{- $valueKeyPassword := printf "%s.password" $authPrefix -}} + {{- $valueKeyReplicaSetKey := printf "%s.replicaSetKey" $authPrefix -}} + {{- $valueKeyAuthEnabled := printf "%s.enabled" $authPrefix -}} + + {{- $authEnabled := include "common.utils.getValueFromKey" (dict "key" $valueKeyAuthEnabled "context" .context) -}} + + {{- if and (not $existingSecret) (eq $enabled "true") (eq $authEnabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mongodb-root-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}} + + {{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }} + {{- $valueDatabase := include "common.utils.getValueFromKey" (dict "key" $valueKeyDatabase "context" .context) }} + {{- if and $valueUsername $valueDatabase -}} + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mongodb-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + {{- end -}} + + {{- if (eq $architecture "replicaset") -}} + {{- $requiredReplicaSetKey := dict "valueKey" $valueKeyReplicaSetKey "secret" .secret "field" "mongodb-replica-set-key" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredReplicaSetKey -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.mongodb.values.auth.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MongoDb is used as subchart or not. 
Default: false +*/}} +{{- define "common.mongodb.values.auth.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.mongodb.auth.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.auth.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled mongodb. + +Usage: +{{ include "common.mongodb.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.mongodb.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.mongodb.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key auth + +Usage: +{{ include "common.mongodb.values.key.auth" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false +*/}} +{{- define "common.mongodb.values.key.auth" -}} + {{- if .subchart -}} + mongodb.auth + {{- else -}} + auth + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for architecture + +Usage: +{{ include "common.mongodb.values.architecture" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not.
Default: false +*/}} +{{- define "common.mongodb.values.architecture" -}} + {{- if .subchart -}} + {{- .context.Values.mongodb.architecture -}} + {{- else -}} + {{- .context.Values.architecture -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_postgresql.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_postgresql.tpl new file mode 100644 index 0000000000..992bcd3908 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_postgresql.tpl @@ -0,0 +1,131 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate PostgreSQL required passwords are not empty. + +Usage: +{{ include "common.validations.values.postgresql.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where postgresql values are stored, e.g: "postgresql-passwords-secret" + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.postgresql.passwords" -}} + {{- $existingSecret := include "common.postgresql.values.existingSecret" . -}} + {{- $enabled := include "common.postgresql.values.enabled" . -}} + {{- $valueKeyPostgresqlPassword := include "common.postgresql.values.key.postgressPassword" . -}} + {{- $valueKeyPostgresqlReplicationEnabled := include "common.postgresql.values.key.replicationPassword" . -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredPostgresqlPassword := dict "valueKey" $valueKeyPostgresqlPassword "secret" .secret "field" "postgresql-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlPassword -}} + + {{- $enabledReplication := include "common.postgresql.values.enabled.replication" . 
-}} + {{- if (eq $enabledReplication "true") -}} + {{- $requiredPostgresqlReplicationPassword := dict "valueKey" $valueKeyPostgresqlReplicationEnabled "secret" .secret "field" "postgresql-replication-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlReplicationPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to decide whether evaluate global values. + +Usage: +{{ include "common.postgresql.values.use.global" (dict "key" "key-of-global" "context" $) }} +Params: + - key - String - Required. Field to be evaluated within global, e.g: "existingSecret" +*/}} +{{- define "common.postgresql.values.use.global" -}} + {{- if .context.Values.global -}} + {{- if .context.Values.global.postgresql -}} + {{- index .context.Values.global.postgresql .key | quote -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.postgresql.values.existingSecret" (dict "context" $) }} +*/}} +{{- define "common.postgresql.values.existingSecret" -}} + {{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "existingSecret" "context" .context) -}} + + {{- if .subchart -}} + {{- default (.context.Values.postgresql.existingSecret | quote) $globalValue -}} + {{- else -}} + {{- default (.context.Values.existingSecret | quote) $globalValue -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled postgresql. 
+ +Usage: +{{ include "common.postgresql.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.postgresql.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.postgresql.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key postgressPassword. + +Usage: +{{ include "common.postgresql.values.key.postgressPassword" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.postgresql.values.key.postgressPassword" -}} + {{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "postgresqlUsername" "context" .context) -}} + + {{- if not $globalValue -}} + {{- if .subchart -}} + postgresql.postgresqlPassword + {{- else -}} + postgresqlPassword + {{- end -}} + {{- else -}} + global.postgresql.postgresqlPassword + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled.replication. + +Usage: +{{ include "common.postgresql.values.enabled.replication" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.postgresql.values.enabled.replication" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.postgresql.replication.enabled -}} + {{- else -}} + {{- printf "%v" .context.Values.replication.enabled -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key replication.password. + +Usage: +{{ include "common.postgresql.values.key.replicationPassword" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. 
Default: false +*/}} +{{- define "common.postgresql.values.key.replicationPassword" -}} + {{- if .subchart -}} + postgresql.replication.password + {{- else -}} + replication.password + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_redis.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_redis.tpl new file mode 100644 index 0000000000..3e2a47c039 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_redis.tpl @@ -0,0 +1,72 @@ + +{{/* vim: set filetype=mustache: */}} +{{/* +Validate Redis(TM) required passwords are not empty. + +Usage: +{{ include "common.validations.values.redis.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where redis values are stored, e.g: "redis-passwords-secret" + - subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.redis.passwords" -}} + {{- $existingSecret := include "common.redis.values.existingSecret" . -}} + {{- $enabled := include "common.redis.values.enabled" . -}} + {{- $valueKeyPrefix := include "common.redis.values.keys.prefix" . 
-}} + {{- $valueKeyRedisPassword := printf "%s%s" $valueKeyPrefix "password" -}} + {{- $valueKeyRedisUsePassword := printf "%s%s" $valueKeyPrefix "usePassword" -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $usePassword := include "common.utils.getValueFromKey" (dict "key" $valueKeyRedisUsePassword "context" .context) -}} + {{- if eq $usePassword "true" -}} + {{- $requiredRedisPassword := dict "valueKey" $valueKeyRedisPassword "secret" .secret "field" "redis-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRedisPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + {{- end -}} +{{- end -}} + +{{/* +Redis Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.redis.values.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether Redis(TM) is used as subchart or not. Default: false +*/}} +{{- define "common.redis.values.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.redis.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled redis. + +Usage: +{{ include "common.redis.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.redis.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.redis.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right prefix path for the values + +Usage: +{{ include "common.redis.values.key.prefix" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether redis is used as subchart or not. 
Default: false +*/}} +{{- define "common.redis.values.keys.prefix" -}} + {{- if .subchart -}}redis.{{- else -}}{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_validations.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_validations.tpl new file mode 100644 index 0000000000..9a814cf40d --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/templates/validations/_validations.tpl @@ -0,0 +1,46 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate values must not be empty. + +Usage: +{{- $validateValueConf00 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-00") -}} +{{- $validateValueConf01 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-01") -}} +{{ include "common.validations.values.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }} + +Validate value params: + - valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password" + - secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret" + - field - String - Optional. Name of the field in the secret data, e.g: "mysql-password" +*/}} +{{- define "common.validations.values.multiple.empty" -}} + {{- range .required -}} + {{- include "common.validations.values.single.empty" (dict "valueKey" .valueKey "secret" .secret "field" .field "context" $.context) -}} + {{- end -}} +{{- end -}} + +{{/* +Validate a value must not be empty. + +Usage: +{{ include "common.validations.value.empty" (dict "valueKey" "mariadb.password" "secret" "secretName" "field" "my-password" "subchart" "subchart" "context" $) }} + +Validate value params: + - valueKey - String - Required. 
The path to the validating value in the values.yaml, e.g: "mysql.password" + - secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret" + - field - String - Optional. Name of the field in the secret data, e.g: "mysql-password" + - subchart - String - Optional - Name of the subchart that the validated password is part of. +*/}} +{{- define "common.validations.values.single.empty" -}} + {{- $value := include "common.utils.getValueFromKey" (dict "key" .valueKey "context" .context) }} + {{- $subchart := ternary "" (printf "%s." .subchart) (empty .subchart) }} + + {{- if not $value -}} + {{- $varname := "my-value" -}} + {{- $getCurrentValue := "" -}} + {{- if and .secret .field -}} + {{- $varname = include "common.utils.fieldToEnvVar" . -}} + {{- $getCurrentValue = printf " To get the current value:\n\n %s\n" (include "common.utils.secret.getvalue" .) -}} + {{- end -}} + {{- printf "\n '%s' must not be empty, please add '--set %s%s=$%s' to the command.%s" .valueKey $subchart .valueKey $varname $getCurrentValue -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/values.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/values.yaml new file mode 100644 index 0000000000..9ecdc93f58 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/charts/common/values.yaml @@ -0,0 +1,3 @@ +## bitnami/common +## It is required by CI/CD tools and processes. 
+exampleValue: common-chart diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/ci/commonAnnotations.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/ci/commonAnnotations.yaml new file mode 100644 index 0000000000..97e18a4cc0 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/ci/commonAnnotations.yaml @@ -0,0 +1,3 @@ +commonAnnotations: + helm.sh/hook: "\"pre-install, pre-upgrade\"" + helm.sh/hook-weight: "-1" diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/ci/default-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/ci/default-values.yaml new file mode 100644 index 0000000000..fc2ba605ad --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/ci/default-values.yaml @@ -0,0 +1 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/ci/shmvolume-disabled-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/ci/shmvolume-disabled-values.yaml new file mode 100644 index 0000000000..347d3b40a8 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/ci/shmvolume-disabled-values.yaml @@ -0,0 +1,2 @@ +shmVolume: + enabled: false diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/files/README.md b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/files/README.md new file mode 100644 index 0000000000..1813a2feaa --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/files/README.md @@ -0,0 +1 @@ +Copy your postgresql.conf and/or pg_hba.conf files here to use them as a config map.
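The `common.validations.values.*.empty` helpers vendored above all follow one pattern: walk a list of required value keys, look each one up by its dotted path, and emit a `--set <key>=...` hint for every empty value. A minimal sketch of that pattern in plain Python (all names here are illustrative, not part of the chart):

```python
# Illustrative re-implementation of the common.validations.values.*.empty
# pattern from the vendored Bitnami "common" chart (not chart code itself).
def get_value_from_key(values, dotted_key):
    """Resolve a dotted path like 'postgresql.replication.password' in a values dict."""
    node = values
    for part in dotted_key.split("."):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node

def validate_required(values, required_keys):
    """Collect one '--set' hint per required key whose value is empty/missing."""
    errors = []
    for key in required_keys:
        if not get_value_from_key(values, key):
            errors.append(
                "'%s' must not be empty, please add '--set %s=$MY_VALUE' to the command." % (key, key)
            )
    return errors

errors = validate_required(
    {"postgresql": {"postgresqlPassword": "", "replication": {"password": "s3cret"}}},
    ["postgresql.postgresqlPassword", "postgresql.replication.password"],
)
print(errors)  # one hint: only postgresqlPassword is empty
```

In the chart, the collected messages are joined and passed to `fail`, which aborts `helm install` with the full list of missing values at once.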
diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/files/conf.d/README.md b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/files/conf.d/README.md new file mode 100644 index 0000000000..184c1875d5 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/files/conf.d/README.md @@ -0,0 +1,4 @@ +If you don't want to provide the whole configuration file and only specify certain parameters, you can copy your extended `.conf` files here. +These files will be injected as config maps and will add to/overwrite the default configuration using the `include_dir` directive, which allows settings to be loaded from files other than the default `postgresql.conf`. + +More info in the [bitnami-docker-postgresql README](https://github.com/bitnami/bitnami-docker-postgresql#configuration-file). diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/files/docker-entrypoint-initdb.d/README.md b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/files/docker-entrypoint-initdb.d/README.md new file mode 100644 index 0000000000..cba38091e0 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/files/docker-entrypoint-initdb.d/README.md @@ -0,0 +1,3 @@ +You can copy your custom `.sh`, `.sql` or `.sql.gz` files here so they are executed during the first boot of the image. + +More info in the [bitnami-docker-postgresql](https://github.com/bitnami/bitnami-docker-postgresql#initializing-a-new-instance) repository. \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/NOTES.txt b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/NOTES.txt new file mode 100644 index 0000000000..4e98958c13 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/NOTES.txt @@ -0,0 +1,59 @@ +** Please be patient while the chart is being deployed ** + +PostgreSQL can be accessed via port {{ template "postgresql.port" .
}} on the following DNS name from within your cluster: + + {{ template "common.names.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local - Read/Write connection +{{- if .Values.replication.enabled }} + {{ template "common.names.fullname" . }}-read.{{ .Release.Namespace }}.svc.cluster.local - Read only connection +{{- end }} + +{{- if not (eq (include "postgresql.username" .) "postgres") }} + +To get the password for "postgres" run: + + export POSTGRES_ADMIN_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-postgres-password}" | base64 --decode) +{{- end }} + +To get the password for "{{ template "postgresql.username" . }}" run: + + export POSTGRES_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-password}" | base64 --decode) + +To connect to your database run the following command: + + kubectl run {{ template "common.names.fullname" . }}-client --rm --tty -i --restart='Never' --namespace {{ .Release.Namespace }} --image {{ template "postgresql.image" . }} --env="PGPASSWORD=$POSTGRES_PASSWORD" {{- if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }} + --labels="{{ template "common.names.fullname" . }}-client=true" {{- end }} --command -- psql --host {{ template "common.names.fullname" . }} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . }} + +{{ if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }} +Note: Since NetworkPolicy is enabled, only pods with label {{ template "common.names.fullname" . }}-client=true" will be able to connect to this PostgreSQL cluster. 
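The NOTES.txt commands above read the password out of the release Secret with a `jsonpath` query and pipe it through `base64 --decode`, since Kubernetes stores Secret data base64-encoded. The decode half of that pipeline in isolation, sketched in Python with a stand-in encoded value rather than a live cluster:

```python
import base64

# Stand-in for the output of:
#   kubectl get secret <release> -o jsonpath="{.data.postgresql-password}"
# (a hypothetical value, not a real secret)
encoded = "czNjcjN0cGFzcw=="

postgres_password = base64.b64decode(encoded).decode("utf-8")
print(postgres_password)  # -> s3cr3tpass
```

The same decoding is what `| base64 --decode` performs in the shell commands emitted by the NOTES template.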
+{{- end }} + +To connect to your database from outside the cluster execute the following commands: + +{{- if contains "NodePort" .Values.service.type }} + + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "common.names.fullname" . }}) + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $NODE_IP --port $NODE_PORT -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} + +{{- else if contains "LoadBalancer" .Values.service.type }} + + NOTE: It may take a few minutes for the LoadBalancer IP to be available. + Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "common.names.fullname" . }}' + + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "common.names.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $SERVICE_IP --port {{ template "postgresql.port" . }} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} + +{{- else if contains "ClusterIP" .Values.service.type }} + + kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "common.names.fullname" . }} {{ template "postgresql.port" . }}:{{ template "postgresql.port" . }} & + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host 127.0.0.1 -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . 
}} + +{{- end }} + +{{- include "postgresql.validateValues" . -}} + +{{- include "common.warnings.rollingTag" .Values.image -}} + +{{- $passwordValidationErrors := include "common.validations.values.postgresql.passwords" (dict "secret" (include "common.names.fullname" .) "context" $) -}} + +{{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $passwordValidationErrors) "context" $) -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/_helpers.tpl b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/_helpers.tpl new file mode 100644 index 0000000000..1f98efe789 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/_helpers.tpl @@ -0,0 +1,337 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Expand the name of the chart. +*/}} +{{- define "postgresql.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 
+*/}} +{{- define "postgresql.primary.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- $fullname := default (printf "%s-%s" .Release.Name $name) .Values.fullnameOverride -}} +{{- if .Values.replication.enabled -}} +{{- printf "%s-%s" $fullname "primary" | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s" $fullname | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper PostgreSQL image name +*/}} +{{- define "postgresql.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper PostgreSQL metrics image name +*/}} +{{- define "postgresql.metrics.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.metrics.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper image name (for the init container volume-permissions image) +*/}} +{{- define "postgresql.volumePermissions.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.volumePermissions.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper Docker Image Registry Secret Names +*/}} +{{- define "postgresql.imagePullSecrets" -}} +{{ include "common.images.pullSecrets" (dict "images" (list .Values.image .Values.metrics.image .Values.volumePermissions.image) "global" .Values.global) }} +{{- end -}} + +{{/* +Return PostgreSQL postgres user password +*/}} +{{- define "postgresql.postgres.password" -}} +{{- if .Values.global.postgresql.postgresqlPostgresPassword }} + {{- .Values.global.postgresql.postgresqlPostgresPassword -}} +{{- else if .Values.postgresqlPostgresPassword -}} + {{- .Values.postgresqlPostgresPassword -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL password +*/}} +{{- define "postgresql.password" -}} +{{- if .Values.global.postgresql.postgresqlPassword }} + {{- .Values.global.postgresql.postgresqlPassword -}} +{{- else if 
.Values.postgresqlPassword -}} + {{- .Values.postgresqlPassword -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL replication password +*/}} +{{- define "postgresql.replication.password" -}} +{{- if .Values.global.postgresql.replicationPassword }} + {{- .Values.global.postgresql.replicationPassword -}} +{{- else if .Values.replication.password -}} + {{- .Values.replication.password -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL username +*/}} +{{- define "postgresql.username" -}} +{{- if .Values.global.postgresql.postgresqlUsername }} + {{- .Values.global.postgresql.postgresqlUsername -}} +{{- else -}} + {{- .Values.postgresqlUsername -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL replication username +*/}} +{{- define "postgresql.replication.username" -}} +{{- if .Values.global.postgresql.replicationUser }} + {{- .Values.global.postgresql.replicationUser -}} +{{- else -}} + {{- .Values.replication.user -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL port +*/}} +{{- define "postgresql.port" -}} +{{- if .Values.global.postgresql.servicePort }} + {{- .Values.global.postgresql.servicePort -}} +{{- else -}} + {{- .Values.service.port -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL created database +*/}} +{{- define "postgresql.database" -}} +{{- if .Values.global.postgresql.postgresqlDatabase }} + {{- .Values.global.postgresql.postgresqlDatabase -}} +{{- else if .Values.postgresqlDatabase -}} + {{- .Values.postgresqlDatabase -}} +{{- end -}} +{{- end -}} + +{{/* +Get the password secret. +*/}} +{{- define "postgresql.secretName" -}} +{{- if .Values.global.postgresql.existingSecret }} + {{- printf "%s" (tpl .Values.global.postgresql.existingSecret $) -}} +{{- else if .Values.existingSecret -}} + {{- printf "%s" (tpl .Values.existingSecret $) -}} +{{- else -}} + {{- printf "%s" (include "common.names.fullname" .) 
-}} +{{- end -}} +{{- end -}} + +{{/* +Return true if we should use an existingSecret. +*/}} +{{- define "postgresql.useExistingSecret" -}} +{{- if or .Values.global.postgresql.existingSecret .Values.existingSecret -}} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Return true if a secret object should be created +*/}} +{{- define "postgresql.createSecret" -}} +{{- if not (include "postgresql.useExistingSecret" .) -}} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Get the configuration ConfigMap name. +*/}} +{{- define "postgresql.configurationCM" -}} +{{- if .Values.configurationConfigMap -}} +{{- printf "%s" (tpl .Values.configurationConfigMap $) -}} +{{- else -}} +{{- printf "%s-configuration" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Get the extended configuration ConfigMap name. +*/}} +{{- define "postgresql.extendedConfigurationCM" -}} +{{- if .Values.extendedConfConfigMap -}} +{{- printf "%s" (tpl .Values.extendedConfConfigMap $) -}} +{{- else -}} +{{- printf "%s-extended-configuration" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Return true if a configmap should be mounted with PostgreSQL configuration +*/}} +{{- define "postgresql.mountConfigurationCM" -}} +{{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Get the initialization scripts ConfigMap name. +*/}} +{{- define "postgresql.initdbScriptsCM" -}} +{{- if .Values.initdbScriptsConfigMap -}} +{{- printf "%s" (tpl .Values.initdbScriptsConfigMap $) -}} +{{- else -}} +{{- printf "%s-init-scripts" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Get the initialization scripts Secret name. 
+*/}} +{{- define "postgresql.initdbScriptsSecret" -}} +{{- printf "%s" (tpl .Values.initdbScriptsSecret $) -}} +{{- end -}} + +{{/* +Get the metrics ConfigMap name. +*/}} +{{- define "postgresql.metricsCM" -}} +{{- printf "%s-metrics" (include "common.names.fullname" .) -}} +{{- end -}} + +{{/* +Get the readiness probe command +*/}} +{{- define "postgresql.readinessProbeCommand" -}} +- | +{{- if (include "postgresql.database" .) }} + exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} +{{- else }} + exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }} +{{- end }} +{{- if contains "bitnami/" .Values.image.repository }} + [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ] +{{- end -}} +{{- end -}} + +{{/* +Compile all warnings into a single message, and call fail. +*/}} +{{- define "postgresql.validateValues" -}} +{{- $messages := list -}} +{{- $messages := append $messages (include "postgresql.validateValues.ldapConfigurationMethod" .) -}} +{{- $messages := append $messages (include "postgresql.validateValues.psp" .) -}} +{{- $messages := append $messages (include "postgresql.validateValues.tls" .) 
-}} +{{- $messages := without $messages "" -}} +{{- $message := join "\n" $messages -}} + +{{- if $message -}} +{{- printf "\nVALUES VALIDATION:\n%s" $message | fail -}} +{{- end -}} +{{- end -}} + +{{/* +Validate values of Postgresql - If ldap.url is used then you don't need the other settings for ldap +*/}} +{{- define "postgresql.validateValues.ldapConfigurationMethod" -}} +{{- if and .Values.ldap.enabled (and (not (empty .Values.ldap.url)) (not (empty .Values.ldap.server))) }} +postgresql: ldap.url, ldap.server + You cannot set both `ldap.url` and `ldap.server` at the same time. + Please provide a unique way to configure LDAP. + More info at https://www.postgresql.org/docs/current/auth-ldap.html +{{- end -}} +{{- end -}} + +{{/* +Validate values of Postgresql - If PSP is enabled RBAC should be enabled too +*/}} +{{- define "postgresql.validateValues.psp" -}} +{{- if and .Values.psp.create (not .Values.rbac.create) }} +postgresql: psp.create, rbac.create + RBAC should be enabled if PSP is enabled in order for PSP to work. + More info at https://kubernetes.io/docs/concepts/policy/pod-security-policy/#authorizing-policies +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for podsecuritypolicy. +*/}} +{{- define "podsecuritypolicy.apiVersion" -}} +{{- if semverCompare "<1.10-0" .Capabilities.KubeVersion.GitVersion -}} +{{- print "extensions/v1beta1" -}} +{{- else -}} +{{- print "policy/v1beta1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for networkpolicy. 
+*/}}
+{{- define "postgresql.networkPolicy.apiVersion" -}}
+{{- if semverCompare ">=1.4-0, <1.7-0" .Capabilities.KubeVersion.GitVersion -}}
+"extensions/v1beta1"
+{{- else if semverCompare "^1.7-0" .Capabilities.KubeVersion.GitVersion -}}
+"networking.k8s.io/v1"
+{{- end -}}
+{{- end -}}
+
+{{/*
+Validate values of Postgresql TLS - When TLS is enabled, so must be VolumePermissions
+*/}}
+{{- define "postgresql.validateValues.tls" -}}
+{{- if and .Values.tls.enabled (not .Values.volumePermissions.enabled) }}
+postgresql: tls.enabled, volumePermissions.enabled
+ When TLS is enabled you must enable volumePermissions as well to ensure certificate files have
+ the right permissions.
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the path to the cert file.
+*/}}
+{{- define "postgresql.tlsCert" -}}
+{{- required "Certificate filename is required when TLS is enabled" .Values.tls.certFilename | printf "/opt/bitnami/postgresql/certs/%s" -}}
+{{- end -}}
+
+{{/*
+Return the path to the cert key file.
+*/}}
+{{- define "postgresql.tlsCertKey" -}}
+{{- required "Certificate Key filename is required when TLS is enabled" .Values.tls.certKeyFilename | printf "/opt/bitnami/postgresql/certs/%s" -}}
+{{- end -}}
+
+{{/*
+Return the path to the CA cert file.
+*/}}
+{{- define "postgresql.tlsCACert" -}}
+{{- printf "/opt/bitnami/postgresql/certs/%s" .Values.tls.certCAFilename -}}
+{{- end -}}
+
+{{/*
+Return the path to the CRL file.
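The `podsecuritypolicy.apiVersion` and `postgresql.networkPolicy.apiVersion` helpers above gate the emitted API group on the cluster version via `semverCompare`. A rough Python equivalent of that selection logic (function names are ours, not the chart's; version parsing is simplified to major.minor):

```python
# Illustrative sketch of the semverCompare-gated apiVersion helpers in
# the vendored chart (not chart code; parsing is deliberately simplified).
def kube_minor(version):
    """'v1.21.4-gke.300' -> (1, 21); mimics Capabilities.KubeVersion.GitVersion parsing."""
    core = version.lstrip("v").split("-")[0]
    major, minor = core.split(".")[:2]
    return (int(major), int(minor))

def psp_api_version(version):
    # Mirrors: semverCompare "<1.10-0" -> extensions/v1beta1, else policy/v1beta1
    if kube_minor(version) < (1, 10):
        return "extensions/v1beta1"
    return "policy/v1beta1"

def network_policy_api_version(version):
    # Mirrors: ">=1.4-0, <1.7-0" -> extensions/v1beta1; "^1.7-0" -> networking.k8s.io/v1
    # (the sketch treats anything >= 1.7 as the modern API group)
    if (1, 4) <= kube_minor(version) < (1, 7):
        return "extensions/v1beta1"
    return "networking.k8s.io/v1"

print(psp_api_version("v1.9.3"))              # -> extensions/v1beta1
print(network_policy_api_version("v1.21.4"))  # -> networking.k8s.io/v1
```

This pattern lets one chart render valid manifests across Kubernetes versions that renamed or moved the resource's API group.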
+*/}} +{{- define "postgresql.tlsCRL" -}} +{{- if .Values.tls.crlFilename -}} +{{- printf "/opt/bitnami/postgresql/certs/%s" .Values.tls.crlFilename -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/configmap.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/configmap.yaml new file mode 100644 index 0000000000..3a5ea18ae9 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/configmap.yaml @@ -0,0 +1,31 @@ +{{ if and (or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration) (not .Values.configurationConfigMap) }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-configuration + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: +{{- if (.Files.Glob "files/postgresql.conf") }} +{{ (.Files.Glob "files/postgresql.conf").AsConfig | indent 2 }} +{{- else if .Values.postgresqlConfiguration }} + postgresql.conf: | +{{- range $key, $value := default dict .Values.postgresqlConfiguration }} + {{- if kindIs "string" $value }} + {{ $key | snakecase }} = '{{ $value }}' + {{- else }} + {{ $key | snakecase }} = {{ $value }} + {{- end }} +{{- end }} +{{- end }} +{{- if (.Files.Glob "files/pg_hba.conf") }} +{{ (.Files.Glob "files/pg_hba.conf").AsConfig | indent 2 }} +{{- else if .Values.pgHbaConfiguration }} + pg_hba.conf: | +{{ .Values.pgHbaConfiguration | indent 4 }} +{{- end }} +{{ end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/extended-config-configmap.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/extended-config-configmap.yaml new file mode 100644 index 
0000000000..b0dad253b5 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/extended-config-configmap.yaml @@ -0,0 +1,26 @@ +{{- if and (or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf) (not .Values.extendedConfConfigMap)}} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-extended-configuration + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: +{{- with .Files.Glob "files/conf.d/*.conf" }} +{{ .AsConfig | indent 2 }} +{{- end }} +{{ with .Values.postgresqlExtendedConf }} + override.conf: | +{{- range $key, $value := . }} + {{- if kindIs "string" $value }} + {{ $key | snakecase }} = '{{ $value }}' + {{- else }} + {{ $key | snakecase }} = {{ $value }} + {{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/extra-list.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/extra-list.yaml new file mode 100644 index 0000000000..9ac65f9e16 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/extra-list.yaml @@ -0,0 +1,4 @@ +{{- range .Values.extraDeploy }} +--- +{{ include "common.tplvalues.render" (dict "value" . 
"context" $) }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/initialization-configmap.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/initialization-configmap.yaml new file mode 100644 index 0000000000..7796c67a93 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/initialization-configmap.yaml @@ -0,0 +1,25 @@ +{{- if and (or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScripts) (not .Values.initdbScriptsConfigMap) }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-init-scripts + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.sql.gz" }} +binaryData: +{{- range $path, $bytes := . }} + {{ base $path }}: {{ $.Files.Get $path | b64enc | quote }} +{{- end }} +{{- end }} +data: +{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql}" }} +{{ .AsConfig | indent 2 }} +{{- end }} +{{- with .Values.initdbScripts }} +{{ toYaml . | indent 2 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/metrics-configmap.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/metrics-configmap.yaml new file mode 100644 index 0000000000..fa539582bb --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/metrics-configmap.yaml @@ -0,0 +1,14 @@ +{{- if and .Values.metrics.enabled .Values.metrics.customMetrics }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "postgresql.metricsCM" . }} + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: + custom-metrics.yaml: {{ toYaml .Values.metrics.customMetrics | quote }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/metrics-svc.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/metrics-svc.yaml new file mode 100644 index 0000000000..af8b67e2ff --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/metrics-svc.yaml @@ -0,0 +1,26 @@ +{{- if .Values.metrics.enabled }} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-metrics + labels: + {{- include "common.labels.standard" . | nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- toYaml .Values.metrics.service.annotations | nindent 4 }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ .Values.metrics.service.type }} + {{- if and (eq .Values.metrics.service.type "LoadBalancer") .Values.metrics.service.loadBalancerIP }} + loadBalancerIP: {{ .Values.metrics.service.loadBalancerIP }} + {{- end }} + ports: + - name: http-metrics + port: 9187 + targetPort: http-metrics + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }} + role: primary +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/networkpolicy.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/networkpolicy.yaml new file mode 100644 index 0000000000..4f2740ea0c --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/networkpolicy.yaml @@ -0,0 +1,39 @@ +{{- if .Values.networkPolicy.enabled }} +kind: NetworkPolicy +apiVersion: {{ template "postgresql.networkPolicy.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + podSelector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + ingress: + # Allow inbound connections + - ports: + - port: {{ template "postgresql.port" . }} + {{- if not .Values.networkPolicy.allowExternal }} + from: + - podSelector: + matchLabels: + {{ template "common.names.fullname" . }}-client: "true" + {{- if .Values.networkPolicy.explicitNamespacesSelector }} + namespaceSelector: +{{ toYaml .Values.networkPolicy.explicitNamespacesSelector | indent 12 }} + {{- end }} + - podSelector: + matchLabels: + {{- include "common.labels.matchLabels" . 
| nindent 14 }} + role: read + {{- end }} + {{- if .Values.metrics.enabled }} + # Allow prometheus scrapes + - ports: + - port: 9187 + {{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/podsecuritypolicy.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/podsecuritypolicy.yaml new file mode 100644 index 0000000000..0c49694fad --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/podsecuritypolicy.yaml @@ -0,0 +1,38 @@ +{{- if .Values.psp.create }} +apiVersion: {{ include "podsecuritypolicy.apiVersion" . }} +kind: PodSecurityPolicy +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + privileged: false + volumes: + - 'configMap' + - 'secret' + - 'persistentVolumeClaim' + - 'emptyDir' + - 'projected' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + rule: 'RunAsAny' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/prometheusrule.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/prometheusrule.yaml new file mode 100644 index 0000000000..d0f408c78f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/prometheusrule.yaml @@ -0,0 +1,23 @@ +{{- if and .Values.metrics.enabled .Values.metrics.prometheusRule.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: PrometheusRule +metadata: + name: {{ template "common.names.fullname" . 
}} +{{- with .Values.metrics.prometheusRule.namespace }} + namespace: {{ . }} +{{- end }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- with .Values.metrics.prometheusRule.additionalLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} +spec: +{{- with .Values.metrics.prometheusRule.rules }} + groups: + - name: {{ template "postgresql.name" $ }} + rules: {{ tpl (toYaml .) $ | nindent 8 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/role.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/role.yaml new file mode 100644 index 0000000000..017a5716b1 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/role.yaml @@ -0,0 +1,20 @@ +{{- if .Values.rbac.create }} +kind: Role +apiVersion: {{ include "common.capabilities.rbac.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +rules: + {{- if .Values.psp.create }} + - apiGroups: ["extensions"] + resources: ["podsecuritypolicies"] + verbs: ["use"] + resourceNames: + - {{ template "common.names.fullname" . 
}} + {{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/rolebinding.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/rolebinding.yaml new file mode 100644 index 0000000000..189775a15a --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/rolebinding.yaml @@ -0,0 +1,20 @@ +{{- if .Values.rbac.create }} +kind: RoleBinding +apiVersion: {{ include "common.capabilities.rbac.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +roleRef: + kind: Role + name: {{ template "common.names.fullname" . }} + apiGroup: rbac.authorization.k8s.io +subjects: + - kind: ServiceAccount + name: {{ default (include "common.names.fullname" . ) .Values.serviceAccount.name }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/secrets.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/secrets.yaml new file mode 100644 index 0000000000..d492cd593b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/secrets.yaml @@ -0,0 +1,24 @@ +{{- if (include "postgresql.createSecret" .) }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +type: Opaque +data: + {{- if not (eq (include "postgresql.username" .) 
"postgres") }} + postgresql-postgres-password: {{ include "postgresql.postgres.password" . | b64enc | quote }} + {{- end }} + postgresql-password: {{ include "postgresql.password" . | b64enc | quote }} + {{- if .Values.replication.enabled }} + postgresql-replication-password: {{ include "postgresql.replication.password" . | b64enc | quote }} + {{- end }} + {{- if (and .Values.ldap.enabled .Values.ldap.bind_password)}} + postgresql-ldap-password: {{ .Values.ldap.bind_password | b64enc | quote }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/serviceaccount.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/serviceaccount.yaml new file mode 100644 index 0000000000..03f0f50e7d --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/serviceaccount.yaml @@ -0,0 +1,12 @@ +{{- if and (.Values.serviceAccount.enabled) (not .Values.serviceAccount.name) }} +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + {{- include "common.labels.standard" . | nindent 4 }} + name: {{ template "common.names.fullname" . }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/servicemonitor.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/servicemonitor.yaml new file mode 100644 index 0000000000..587ce85b87 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/servicemonitor.yaml @@ -0,0 +1,33 @@ +{{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ include "common.names.fullname" . 
}} + {{- if .Values.metrics.serviceMonitor.namespace }} + namespace: {{ .Values.metrics.serviceMonitor.namespace }} + {{- end }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.metrics.serviceMonitor.additionalLabels }} + {{- toYaml .Values.metrics.serviceMonitor.additionalLabels | nindent 4 }} + {{- end }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + +spec: + endpoints: + - port: http-metrics + {{- if .Values.metrics.serviceMonitor.interval }} + interval: {{ .Values.metrics.serviceMonitor.interval }} + {{- end }} + {{- if .Values.metrics.serviceMonitor.scrapeTimeout }} + scrapeTimeout: {{ .Values.metrics.serviceMonitor.scrapeTimeout }} + {{- end }} + namespaceSelector: + matchNames: + - {{ .Release.Namespace }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/statefulset-readreplicas.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/statefulset-readreplicas.yaml new file mode 100644 index 0000000000..b038299bf6 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/statefulset-readreplicas.yaml @@ -0,0 +1,411 @@ +{{- if .Values.replication.enabled }} +{{- $readReplicasResources := coalesce .Values.readReplicas.resources .Values.resources -}} +apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }} +kind: StatefulSet +metadata: + name: "{{ template "common.names.fullname" . }}-read" + labels: {{- include "common.labels.standard" . | nindent 4 }} + app.kubernetes.io/component: read +{{- with .Values.readReplicas.labels }} +{{ toYaml . 
| indent 4 }} +{{- end }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- with .Values.readReplicas.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + serviceName: {{ template "common.names.fullname" . }}-headless + replicas: {{ .Values.replication.readReplicas }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + role: read + template: + metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 8 }} + app.kubernetes.io/component: read + role: read +{{- with .Values.readReplicas.podLabels }} +{{ toYaml . | indent 8 }} +{{- end }} +{{- with .Values.readReplicas.podAnnotations }} + annotations: +{{ toYaml . | indent 8 }} +{{- end }} + spec: + {{- if .Values.schedulerName }} + schedulerName: "{{ .Values.schedulerName }}" + {{- end }} +{{- include "postgresql.imagePullSecrets" . 
| indent 6 }} + {{- if .Values.readReplicas.affinity }} + affinity: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.affinity "context" $) | nindent 8 }} + {{- else }} + affinity: + podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.readReplicas.podAffinityPreset "component" "read" "context" $) | nindent 10 }} + podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.readReplicas.podAntiAffinityPreset "component" "read" "context" $) | nindent 10 }} + nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.readReplicas.nodeAffinityPreset.type "key" .Values.readReplicas.nodeAffinityPreset.key "values" .Values.readReplicas.nodeAffinityPreset.values) | nindent 10 }} + {{- end }} + {{- if .Values.readReplicas.nodeSelector }} + nodeSelector: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.nodeSelector "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.readReplicas.tolerations }} + tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.tolerations "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.terminationGracePeriodSeconds }} + terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} + {{- end }} + {{- if .Values.securityContext.enabled }} + securityContext: {{- omit .Values.securityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.serviceAccount.enabled }} + serviceAccountName: {{ default (include "common.names.fullname" . 
) .Values.serviceAccount.name}} + {{- end }} + {{- if or .Values.readReplicas.extraInitContainers (and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled))) }} + initContainers: + {{- if and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled) .Values.tls.enabled) }} + - name: init-chmod-data + image: {{ template "postgresql.volumePermissions.image" . }} + imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }} + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + command: + - /bin/sh + - -cx + - | + {{- if .Values.persistence.enabled }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown `id -u`:`id -G | cut -d " " -f2` {{ .Values.persistence.mountPath }} + {{- else }} + chown {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} {{ .Values.persistence.mountPath }} + {{- end }} + mkdir -p {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + chmod 700 {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 {{- if not (include "postgresql.mountConfigurationCM" .) 
}} -not -name "conf" {{- end }} -not -name ".snapshot" -not -name "lost+found" | \ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + xargs chown -R `id -u`:`id -G | cut -d " " -f2` + {{- else }} + xargs chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} + {{- end }} + {{- end }} + {{- if and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled }} + chmod -R 777 /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + cp /tmp/certs/* /opt/bitnami/postgresql/certs/ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown -R `id -u`:`id -G | cut -d " " -f2` /opt/bitnami/postgresql/certs/ + {{- else }} + chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} /opt/bitnami/postgresql/certs/ + {{- end }} + chmod 600 {{ template "postgresql.tlsCertKey" . }} + {{- end }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + securityContext: {{- omit .Values.volumePermissions.securityContext "runAsUser" | toYaml | nindent 12 }} + {{- else }} + securityContext: {{- .Values.volumePermissions.securityContext | toYaml | nindent 12 }} + {{- end }} + volumeMounts: + {{ if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + mountPath: /tmp/certs + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + {{- end }} + {{- end }} + {{- if .Values.readReplicas.extraInitContainers }} + {{- include "common.tplvalues.render" ( dict "value" .Values.readReplicas.extraInitContainers "context" $ ) | nindent 8 }} + {{- end }} + {{- end }} + {{- if .Values.readReplicas.priorityClassName }} + priorityClassName: {{ 
.Values.readReplicas.priorityClassName }} + {{- end }} + containers: + - name: {{ template "common.names.fullname" . }} + image: {{ template "postgresql.image" . }} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + {{- if $readReplicasResources }} + resources: {{- toYaml $readReplicasResources | nindent 12 }} + {{- end }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + - name: BITNAMI_DEBUG + value: {{ ternary "true" "false" .Values.image.debug | quote }} + - name: POSTGRESQL_VOLUME_DIR + value: "{{ .Values.persistence.mountPath }}" + - name: POSTGRESQL_PORT_NUMBER + value: "{{ template "postgresql.port" . }}" + {{- if .Values.persistence.mountPath }} + - name: PGDATA + value: {{ .Values.postgresqlDataDir | quote }} + {{- end }} + - name: POSTGRES_REPLICATION_MODE + value: "slave" + - name: POSTGRES_REPLICATION_USER + value: {{ include "postgresql.replication.username" . | quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_REPLICATION_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password" + {{- else }} + - name: POSTGRES_REPLICATION_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-replication-password + {{- end }} + - name: POSTGRES_CLUSTER_APP_NAME + value: {{ .Values.replication.applicationName }} + - name: POSTGRES_MASTER_HOST + value: {{ template "common.names.fullname" . }} + - name: POSTGRES_MASTER_PORT_NUMBER + value: {{ include "postgresql.port" . | quote }} + {{- if not (eq (include "postgresql.username" .) "postgres") }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-postgres-password" + {{- else }} + - name: POSTGRES_POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . 
}} + key: postgresql-postgres-password + {{- end }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-password + {{- end }} + - name: POSTGRESQL_ENABLE_TLS + value: {{ ternary "yes" "no" .Values.tls.enabled | quote }} + {{- if .Values.tls.enabled }} + - name: POSTGRESQL_TLS_PREFER_SERVER_CIPHERS + value: {{ ternary "yes" "no" .Values.tls.preferServerCiphers | quote }} + - name: POSTGRESQL_TLS_CERT_FILE + value: {{ template "postgresql.tlsCert" . }} + - name: POSTGRESQL_TLS_KEY_FILE + value: {{ template "postgresql.tlsCertKey" . }} + {{- if .Values.tls.certCAFilename }} + - name: POSTGRESQL_TLS_CA_FILE + value: {{ template "postgresql.tlsCACert" . }} + {{- end }} + {{- if .Values.tls.crlFilename }} + - name: POSTGRESQL_TLS_CRL_FILE + value: {{ template "postgresql.tlsCRL" . 
}} + {{- end }} + {{- end }} + - name: POSTGRESQL_LOG_HOSTNAME + value: {{ .Values.audit.logHostname | quote }} + - name: POSTGRESQL_LOG_CONNECTIONS + value: {{ .Values.audit.logConnections | quote }} + - name: POSTGRESQL_LOG_DISCONNECTIONS + value: {{ .Values.audit.logDisconnections | quote }} + {{- if .Values.audit.logLinePrefix }} + - name: POSTGRESQL_LOG_LINE_PREFIX + value: {{ .Values.audit.logLinePrefix | quote }} + {{- end }} + {{- if .Values.audit.logTimezone }} + - name: POSTGRESQL_LOG_TIMEZONE + value: {{ .Values.audit.logTimezone | quote }} + {{- end }} + {{- if .Values.audit.pgAuditLog }} + - name: POSTGRESQL_PGAUDIT_LOG + value: {{ .Values.audit.pgAuditLog | quote }} + {{- end }} + - name: POSTGRESQL_PGAUDIT_LOG_CATALOG + value: {{ .Values.audit.pgAuditLogCatalog | quote }} + - name: POSTGRESQL_CLIENT_MIN_MESSAGES + value: {{ .Values.audit.clientMinMessages | quote }} + - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES + value: {{ .Values.postgresqlSharedPreloadLibraries | quote }} + {{- if .Values.postgresqlMaxConnections }} + - name: POSTGRESQL_MAX_CONNECTIONS + value: {{ .Values.postgresqlMaxConnections | quote }} + {{- end }} + {{- if .Values.postgresqlPostgresConnectionLimit }} + - name: POSTGRESQL_POSTGRES_CONNECTION_LIMIT + value: {{ .Values.postgresqlPostgresConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlDbUserConnectionLimit }} + - name: POSTGRESQL_USERNAME_CONNECTION_LIMIT + value: {{ .Values.postgresqlDbUserConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesInterval }} + - name: POSTGRESQL_TCP_KEEPALIVES_INTERVAL + value: {{ .Values.postgresqlTcpKeepalivesInterval | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesIdle }} + - name: POSTGRESQL_TCP_KEEPALIVES_IDLE + value: {{ .Values.postgresqlTcpKeepalivesIdle | quote }} + {{- end }} + {{- if .Values.postgresqlStatementTimeout }} + - name: POSTGRESQL_STATEMENT_TIMEOUT + value: {{ .Values.postgresqlStatementTimeout | quote }} + {{- end }} 
+ {{- if .Values.postgresqlTcpKeepalivesCount }} + - name: POSTGRESQL_TCP_KEEPALIVES_COUNT + value: {{ .Values.postgresqlTcpKeepalivesCount | quote }} + {{- end }} + {{- if .Values.postgresqlPghbaRemoveFilters }} + - name: POSTGRESQL_PGHBA_REMOVE_FILTERS + value: {{ .Values.postgresqlPghbaRemoveFilters | quote }} + {{- end }} + ports: + - name: tcp-postgresql + containerPort: {{ template "postgresql.port" . }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- end }} + initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.livenessProbe.failureThreshold }} + {{- else if .Values.customLivenessProbe }} + livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + exec: + command: + - /bin/sh + - -c + - -e + {{- include "postgresql.readinessProbeCommand" . 
| nindent 16 }} + initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.readinessProbe.failureThreshold }} + {{- else if .Values.customReadinessProbe }} + readinessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customReadinessProbe "context" $) | nindent 12 }} + {{- end }} + volumeMounts: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{ end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + mountPath: /bitnami/postgresql/conf/conf.d/ + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + - name: postgresql-config + mountPath: /bitnami/postgresql/conf + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.readReplicas.extraVolumeMounts }} + {{- toYaml .Values.readReplicas.extraVolumeMounts | nindent 12 }} + {{- end }} +{{- if .Values.readReplicas.sidecars }} +{{- include "common.tplvalues.render" ( dict "value" .Values.readReplicas.sidecars "context" $ ) | nindent 8 }} +{{- end }} + volumes: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + secret: + secretName: {{ template "postgresql.secretName" . 
}} + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap}} + - name: postgresql-config + configMap: + name: {{ template "postgresql.configurationCM" . }} + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + configMap: + name: {{ template "postgresql.extendedConfigurationCM" . }} + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + secret: + secretName: {{ required "A secret containing TLS certificates is required when TLS is enabled" .Values.tls.certificatesSecret }} + - name: postgresql-certificates + emptyDir: {} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + emptyDir: + medium: Memory + sizeLimit: 1Gi + {{- end }} + {{- if or (not .Values.persistence.enabled) (not .Values.readReplicas.persistence.enabled) }} + - name: data + emptyDir: {} + {{- end }} + {{- if .Values.readReplicas.extraVolumes }} + {{- toYaml .Values.readReplicas.extraVolumes | nindent 8 }} + {{- end }} + updateStrategy: + type: {{ .Values.updateStrategy.type }} + {{- if (eq "Recreate" .Values.updateStrategy.type) }} + rollingUpdate: null + {{- end }} +{{- if and .Values.persistence.enabled .Values.readReplicas.persistence.enabled }} + volumeClaimTemplates: + - metadata: + name: data + {{- with .Values.persistence.annotations }} + annotations: + {{- range $key, $value := . }} + {{ $key }}: {{ $value }} + {{- end }} + {{- end }} + spec: + accessModes: + {{- range .Values.persistence.accessModes }} + - {{ . 
| quote }} + {{- end }} + resources: + requests: + storage: {{ .Values.persistence.size | quote }} + {{ include "common.storage.class" (dict "persistence" .Values.persistence "global" .Values.global) }} + + {{- if .Values.persistence.selector }} + selector: {{- include "common.tplvalues.render" (dict "value" .Values.persistence.selector "context" $) | nindent 10 }} + {{- end -}} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/statefulset.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/statefulset.yaml new file mode 100644 index 0000000000..f8163fd99f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/statefulset.yaml @@ -0,0 +1,609 @@ +apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }} +kind: StatefulSet +metadata: + name: {{ template "postgresql.primary.fullname" . }} + labels: {{- include "common.labels.standard" . | nindent 4 }} + app.kubernetes.io/component: primary + {{- with .Values.primary.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- with .Values.primary.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + serviceName: {{ template "common.names.fullname" . }}-headless + replicas: 1 + updateStrategy: + type: {{ .Values.updateStrategy.type }} + {{- if (eq "Recreate" .Values.updateStrategy.type) }} + rollingUpdate: null + {{- end }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + role: primary + template: + metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 8 }} + role: primary + app.kubernetes.io/component: primary + {{- with .Values.primary.podLabels }} + {{- toYaml . 
| nindent 8 }} + {{- end }} + {{- with .Values.primary.podAnnotations }} + annotations: {{- toYaml . | nindent 8 }} + {{- end }} + spec: + {{- if .Values.schedulerName }} + schedulerName: "{{ .Values.schedulerName }}" + {{- end }} +{{- include "postgresql.imagePullSecrets" . | indent 6 }} + {{- if .Values.primary.affinity }} + affinity: {{- include "common.tplvalues.render" (dict "value" .Values.primary.affinity "context" $) | nindent 8 }} + {{- else }} + affinity: + podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.primary.podAffinityPreset "component" "primary" "context" $) | nindent 10 }} + podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.primary.podAntiAffinityPreset "component" "primary" "context" $) | nindent 10 }} + nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.primary.nodeAffinityPreset.type "key" .Values.primary.nodeAffinityPreset.key "values" .Values.primary.nodeAffinityPreset.values) | nindent 10 }} + {{- end }} + {{- if .Values.primary.nodeSelector }} + nodeSelector: {{- include "common.tplvalues.render" (dict "value" .Values.primary.nodeSelector "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.primary.tolerations }} + tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.primary.tolerations "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.terminationGracePeriodSeconds }} + terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} + {{- end }} + {{- if .Values.securityContext.enabled }} + securityContext: {{- omit .Values.securityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.serviceAccount.enabled }} + serviceAccountName: {{ default (include "common.names.fullname" . 
) .Values.serviceAccount.name }} + {{- end }} + {{- if or .Values.primary.extraInitContainers (and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled))) }} + initContainers: + {{- if and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled) .Values.tls.enabled) }} + - name: init-chmod-data + image: {{ template "postgresql.volumePermissions.image" . }} + imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }} + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + command: + - /bin/sh + - -cx + - | + {{- if .Values.persistence.enabled }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown `id -u`:`id -G | cut -d " " -f2` {{ .Values.persistence.mountPath }} + {{- else }} + chown {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} {{ .Values.persistence.mountPath }} + {{- end }} + mkdir -p {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + chmod 700 {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 {{- if not (include "postgresql.mountConfigurationCM" .) 
}} -not -name "conf" {{- end }} -not -name ".snapshot" -not -name "lost+found" | \ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + xargs chown -R `id -u`:`id -G | cut -d " " -f2` + {{- else }} + xargs chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} + {{- end }} + {{- end }} + {{- if and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled }} + chmod -R 777 /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + cp /tmp/certs/* /opt/bitnami/postgresql/certs/ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown -R `id -u`:`id -G | cut -d " " -f2` /opt/bitnami/postgresql/certs/ + {{- else }} + chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} /opt/bitnami/postgresql/certs/ + {{- end }} + chmod 600 {{ template "postgresql.tlsCertKey" . }} + {{- end }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + securityContext: {{- omit .Values.volumePermissions.securityContext "runAsUser" | toYaml | nindent 12 }} + {{- else }} + securityContext: {{- .Values.volumePermissions.securityContext | toYaml | nindent 12 }} + {{- end }} + volumeMounts: + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + mountPath: /tmp/certs + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + {{- end }} + {{- end }} + {{- if .Values.primary.extraInitContainers }} + {{- include "common.tplvalues.render" ( dict "value" .Values.primary.extraInitContainers "context" $ ) | nindent 8 }} + {{- end }} + {{- end }} + {{- if .Values.primary.priorityClassName }} + priorityClassName: {{ 
.Values.primary.priorityClassName }} + {{- end }} + containers: + - name: {{ template "common.names.fullname" . }} + image: {{ template "postgresql.image" . }} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + - name: BITNAMI_DEBUG + value: {{ ternary "true" "false" .Values.image.debug | quote }} + - name: POSTGRESQL_PORT_NUMBER + value: "{{ template "postgresql.port" . }}" + - name: POSTGRESQL_VOLUME_DIR + value: "{{ .Values.persistence.mountPath }}" + {{- if .Values.postgresqlInitdbArgs }} + - name: POSTGRES_INITDB_ARGS + value: {{ .Values.postgresqlInitdbArgs | quote }} + {{- end }} + {{- if .Values.postgresqlInitdbWalDir }} + - name: POSTGRES_INITDB_WALDIR + value: {{ .Values.postgresqlInitdbWalDir | quote }} + {{- end }} + {{- if .Values.initdbUser }} + - name: POSTGRESQL_INITSCRIPTS_USERNAME + value: {{ .Values.initdbUser }} + {{- end }} + {{- if .Values.initdbPassword }} + - name: POSTGRESQL_INITSCRIPTS_PASSWORD + value: {{ .Values.initdbPassword }} + {{- end }} + {{- if .Values.persistence.mountPath }} + - name: PGDATA + value: {{ .Values.postgresqlDataDir | quote }} + {{- end }} + {{- if .Values.primaryAsStandBy.enabled }} + - name: POSTGRES_MASTER_HOST + value: {{ .Values.primaryAsStandBy.primaryHost }} + - name: POSTGRES_MASTER_PORT_NUMBER + value: {{ .Values.primaryAsStandBy.primaryPort | quote }} + {{- end }} + {{- if or .Values.replication.enabled .Values.primaryAsStandBy.enabled }} + - name: POSTGRES_REPLICATION_MODE + {{- if .Values.primaryAsStandBy.enabled }} + value: "slave" + {{- else }} + value: "master" + {{- end }} + - name: POSTGRES_REPLICATION_USER + value: {{ include "postgresql.replication.username" . 
| quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_REPLICATION_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password" + {{- else }} + - name: POSTGRES_REPLICATION_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-replication-password + {{- end }} + {{- if not (eq .Values.replication.synchronousCommit "off")}} + - name: POSTGRES_SYNCHRONOUS_COMMIT_MODE + value: {{ .Values.replication.synchronousCommit | quote }} + - name: POSTGRES_NUM_SYNCHRONOUS_REPLICAS + value: {{ .Values.replication.numSynchronousReplicas | quote }} + {{- end }} + - name: POSTGRES_CLUSTER_APP_NAME + value: {{ .Values.replication.applicationName }} + {{- end }} + {{- if not (eq (include "postgresql.username" .) "postgres") }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-postgres-password" + {{- else }} + - name: POSTGRES_POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-postgres-password + {{- end }} + {{- end }} + - name: POSTGRES_USER + value: {{ include "postgresql.username" . | quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-password + {{- end }} + {{- if (include "postgresql.database" .) }} + - name: POSTGRES_DB + value: {{ (include "postgresql.database" .) 
| quote }} + {{- end }} + {{- if .Values.extraEnv }} + {{- include "common.tplvalues.render" (dict "value" .Values.extraEnv "context" $) | nindent 12 }} + {{- end }} + - name: POSTGRESQL_ENABLE_LDAP + value: {{ ternary "yes" "no" .Values.ldap.enabled | quote }} + {{- if .Values.ldap.enabled }} + - name: POSTGRESQL_LDAP_SERVER + value: {{ .Values.ldap.server }} + - name: POSTGRESQL_LDAP_PORT + value: {{ .Values.ldap.port | quote }} + - name: POSTGRESQL_LDAP_SCHEME + value: {{ .Values.ldap.scheme }} + {{- if .Values.ldap.tls }} + - name: POSTGRESQL_LDAP_TLS + value: "1" + {{- end }} + - name: POSTGRESQL_LDAP_PREFIX + value: {{ .Values.ldap.prefix | quote }} + - name: POSTGRESQL_LDAP_SUFFIX + value: {{ .Values.ldap.suffix | quote }} + - name: POSTGRESQL_LDAP_BASE_DN + value: {{ .Values.ldap.baseDN }} + - name: POSTGRESQL_LDAP_BIND_DN + value: {{ .Values.ldap.bindDN }} + {{- if (not (empty .Values.ldap.bind_password)) }} + - name: POSTGRESQL_LDAP_BIND_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-ldap-password + {{- end}} + - name: POSTGRESQL_LDAP_SEARCH_ATTR + value: {{ .Values.ldap.search_attr }} + - name: POSTGRESQL_LDAP_SEARCH_FILTER + value: {{ .Values.ldap.search_filter }} + - name: POSTGRESQL_LDAP_URL + value: {{ .Values.ldap.url }} + {{- end}} + - name: POSTGRESQL_ENABLE_TLS + value: {{ ternary "yes" "no" .Values.tls.enabled | quote }} + {{- if .Values.tls.enabled }} + - name: POSTGRESQL_TLS_PREFER_SERVER_CIPHERS + value: {{ ternary "yes" "no" .Values.tls.preferServerCiphers | quote }} + - name: POSTGRESQL_TLS_CERT_FILE + value: {{ template "postgresql.tlsCert" . }} + - name: POSTGRESQL_TLS_KEY_FILE + value: {{ template "postgresql.tlsCertKey" . }} + {{- if .Values.tls.certCAFilename }} + - name: POSTGRESQL_TLS_CA_FILE + value: {{ template "postgresql.tlsCACert" . }} + {{- end }} + {{- if .Values.tls.crlFilename }} + - name: POSTGRESQL_TLS_CRL_FILE + value: {{ template "postgresql.tlsCRL" . 
}} + {{- end }} + {{- end }} + - name: POSTGRESQL_LOG_HOSTNAME + value: {{ .Values.audit.logHostname | quote }} + - name: POSTGRESQL_LOG_CONNECTIONS + value: {{ .Values.audit.logConnections | quote }} + - name: POSTGRESQL_LOG_DISCONNECTIONS + value: {{ .Values.audit.logDisconnections | quote }} + {{- if .Values.audit.logLinePrefix }} + - name: POSTGRESQL_LOG_LINE_PREFIX + value: {{ .Values.audit.logLinePrefix | quote }} + {{- end }} + {{- if .Values.audit.logTimezone }} + - name: POSTGRESQL_LOG_TIMEZONE + value: {{ .Values.audit.logTimezone | quote }} + {{- end }} + {{- if .Values.audit.pgAuditLog }} + - name: POSTGRESQL_PGAUDIT_LOG + value: {{ .Values.audit.pgAuditLog | quote }} + {{- end }} + - name: POSTGRESQL_PGAUDIT_LOG_CATALOG + value: {{ .Values.audit.pgAuditLogCatalog | quote }} + - name: POSTGRESQL_CLIENT_MIN_MESSAGES + value: {{ .Values.audit.clientMinMessages | quote }} + - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES + value: {{ .Values.postgresqlSharedPreloadLibraries | quote }} + {{- if .Values.postgresqlMaxConnections }} + - name: POSTGRESQL_MAX_CONNECTIONS + value: {{ .Values.postgresqlMaxConnections | quote }} + {{- end }} + {{- if .Values.postgresqlPostgresConnectionLimit }} + - name: POSTGRESQL_POSTGRES_CONNECTION_LIMIT + value: {{ .Values.postgresqlPostgresConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlDbUserConnectionLimit }} + - name: POSTGRESQL_USERNAME_CONNECTION_LIMIT + value: {{ .Values.postgresqlDbUserConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesInterval }} + - name: POSTGRESQL_TCP_KEEPALIVES_INTERVAL + value: {{ .Values.postgresqlTcpKeepalivesInterval | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesIdle }} + - name: POSTGRESQL_TCP_KEEPALIVES_IDLE + value: {{ .Values.postgresqlTcpKeepalivesIdle | quote }} + {{- end }} + {{- if .Values.postgresqlStatementTimeout }} + - name: POSTGRESQL_STATEMENT_TIMEOUT + value: {{ .Values.postgresqlStatementTimeout | quote }} + {{- end }} 
+ {{- if .Values.postgresqlTcpKeepalivesCount }} + - name: POSTGRESQL_TCP_KEEPALIVES_COUNT + value: {{ .Values.postgresqlTcpKeepalivesCount | quote }} + {{- end }} + {{- if .Values.postgresqlPghbaRemoveFilters }} + - name: POSTGRESQL_PGHBA_REMOVE_FILTERS + value: {{ .Values.postgresqlPghbaRemoveFilters | quote }} + {{- end }} + {{- if .Values.extraEnvVarsCM }} + envFrom: + - configMapRef: + name: {{ tpl .Values.extraEnvVarsCM . }} + {{- end }} + ports: + - name: tcp-postgresql + containerPort: {{ template "postgresql.port" . }} + {{- if .Values.startupProbe.enabled }} + startupProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- end }} + initialDelaySeconds: {{ .Values.startupProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.startupProbe.periodSeconds }} + timeoutSeconds: {{ .Values.startupProbe.timeoutSeconds }} + successThreshold: {{ .Values.startupProbe.successThreshold }} + failureThreshold: {{ .Values.startupProbe.failureThreshold }} + {{- else if .Values.customStartupProbe }} + startupProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customStartupProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) 
}} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- end }} + initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.livenessProbe.failureThreshold }} + {{- else if .Values.customLivenessProbe }} + livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + exec: + command: + - /bin/sh + - -c + - -e + {{- include "postgresql.readinessProbeCommand" . 
| nindent 16 }} + initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.readinessProbe.failureThreshold }} + {{- else if .Values.customReadinessProbe }} + readinessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customReadinessProbe "context" $) | nindent 12 }} + {{- end }} + volumeMounts: + {{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }} + - name: custom-init-scripts + mountPath: /docker-entrypoint-initdb.d/ + {{- end }} + {{- if .Values.initdbScriptsSecret }} + - name: custom-init-scripts-secret + mountPath: /docker-entrypoint-initdb.d/secret + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + mountPath: /bitnami/postgresql/conf/conf.d/ + {{- end }} + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + - name: postgresql-config + mountPath: /bitnami/postgresql/conf + {{- end }} + {{- if .Values.primary.extraVolumeMounts }} + {{- toYaml .Values.primary.extraVolumeMounts | nindent 12 }} + {{- end 
}} +{{- if .Values.primary.sidecars }} +{{- include "common.tplvalues.render" ( dict "value" .Values.primary.sidecars "context" $ ) | nindent 8 }} +{{- end }} +{{- if .Values.metrics.enabled }} + - name: metrics + image: {{ template "postgresql.metrics.image" . }} + imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }} + {{- if .Values.metrics.securityContext.enabled }} + securityContext: {{- omit .Values.metrics.securityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + {{- $database := required "In order to enable metrics you need to specify a database (.Values.postgresqlDatabase or .Values.global.postgresql.postgresqlDatabase)" (include "postgresql.database" .) }} + {{- $sslmode := ternary "require" "disable" .Values.tls.enabled }} + {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} + - name: DATA_SOURCE_NAME + value: {{ printf "host=127.0.0.1 port=%d user=%s sslmode=%s sslcert=%s sslkey=%s" (int (include "postgresql.port" .)) (include "postgresql.username" .) $sslmode (include "postgresql.tlsCert" .) (include "postgresql.tlsCertKey" .) }} + {{- else }} + - name: DATA_SOURCE_URI + value: {{ printf "127.0.0.1:%d/%s?sslmode=%s" (int (include "postgresql.port" .)) $database $sslmode }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: DATA_SOURCE_PASS_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: DATA_SOURCE_PASS + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-password + {{- end }} + - name: DATA_SOURCE_USER + value: {{ template "postgresql.username" . 
}} + {{- if .Values.metrics.extraEnvVars }} + {{- include "common.tplvalues.render" (dict "value" .Values.metrics.extraEnvVars "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + httpGet: + path: / + port: http-metrics + initialDelaySeconds: {{ .Values.metrics.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.metrics.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.metrics.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.metrics.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.metrics.livenessProbe.failureThreshold }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + httpGet: + path: / + port: http-metrics + initialDelaySeconds: {{ .Values.metrics.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.metrics.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.metrics.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.metrics.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.metrics.readinessProbe.failureThreshold }} + {{- end }} + volumeMounts: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.metrics.customMetrics }} + - name: custom-metrics + mountPath: /conf + readOnly: true + args: ["--extend.query-path", "/conf/custom-metrics.yaml"] + {{- end }} + ports: + - name: http-metrics + containerPort: 9187 + {{- if .Values.metrics.resources }} + resources: {{- toYaml .Values.metrics.resources | nindent 12 }} + {{- end }} +{{- end }} + volumes: + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap}} + - name: postgresql-config + configMap: + 
name: {{ template "postgresql.configurationCM" . }} + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + configMap: + name: {{ template "postgresql.extendedConfigurationCM" . }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: postgresql-password + secret: + secretName: {{ template "postgresql.secretName" . }} + {{- end }} + {{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }} + - name: custom-init-scripts + configMap: + name: {{ template "postgresql.initdbScriptsCM" . }} + {{- end }} + {{- if .Values.initdbScriptsSecret }} + - name: custom-init-scripts-secret + secret: + secretName: {{ template "postgresql.initdbScriptsSecret" . }} + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + secret: + secretName: {{ required "A secret containing TLS certificates is required when TLS is enabled" .Values.tls.certificatesSecret }} + - name: postgresql-certificates + emptyDir: {} + {{- end }} + {{- if .Values.primary.extraVolumes }} + {{- toYaml .Values.primary.extraVolumes | nindent 8 }} + {{- end }} + {{- if and .Values.metrics.enabled .Values.metrics.customMetrics }} + - name: custom-metrics + configMap: + name: {{ template "postgresql.metricsCM" . }} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + emptyDir: + medium: Memory + sizeLimit: 1Gi + {{- end }} +{{- if and .Values.persistence.enabled .Values.persistence.existingClaim }} + - name: data + persistentVolumeClaim: +{{- with .Values.persistence.existingClaim }} + claimName: {{ tpl . 
$ }} +{{- end }} +{{- else if not .Values.persistence.enabled }} + - name: data + emptyDir: {} +{{- else if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }} + volumeClaimTemplates: + - metadata: + name: data + {{- with .Values.persistence.annotations }} + annotations: + {{- range $key, $value := . }} + {{ $key }}: {{ $value }} + {{- end }} + {{- end }} + spec: + accessModes: + {{- range .Values.persistence.accessModes }} + - {{ . | quote }} + {{- end }} + resources: + requests: + storage: {{ .Values.persistence.size | quote }} + {{ include "common.storage.class" (dict "persistence" .Values.persistence "global" .Values.global) }} + {{- if .Values.persistence.selector }} + selector: {{- include "common.tplvalues.render" (dict "value" .Values.persistence.selector "context" $) | nindent 10 }} + {{- end -}} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/svc-headless.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/svc-headless.yaml new file mode 100644 index 0000000000..6f5f3b9ee4 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/svc-headless.yaml @@ -0,0 +1,28 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-headless + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + # Use this annotation in addition to the actual publishNotReadyAddresses + # field below because the annotation will stop being respected soon but the + # field is broken in some versions of Kubernetes: + # https://github.com/kubernetes/kubernetes/issues/58662 + service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" + namespace: {{ .Release.Namespace }} +spec: + type: ClusterIP + clusterIP: None + # We want all pods in the StatefulSet to have their addresses published for + # the sake of the other Postgresql pods even before they're ready, since they + # have to be able to talk to each other in order to become ready. + publishNotReadyAddresses: true + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/svc-read.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/svc-read.yaml new file mode 100644 index 0000000000..56195ea1e6 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/svc-read.yaml @@ -0,0 +1,43 @@ +{{- if .Values.replication.enabled }} +{{- $serviceAnnotations := coalesce .Values.readReplicas.service.annotations .Values.service.annotations -}} +{{- $serviceType := coalesce .Values.readReplicas.service.type .Values.service.type -}} +{{- $serviceLoadBalancerIP := coalesce .Values.readReplicas.service.loadBalancerIP .Values.service.loadBalancerIP -}} +{{- $serviceLoadBalancerSourceRanges := coalesce .Values.readReplicas.service.loadBalancerSourceRanges .Values.service.loadBalancerSourceRanges -}} +{{- $serviceClusterIP := coalesce .Values.readReplicas.service.clusterIP .Values.service.clusterIP -}} +{{- $serviceNodePort := coalesce .Values.readReplicas.service.nodePort .Values.service.nodePort -}} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-read + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- if $serviceAnnotations }} + {{- include "common.tplvalues.render" (dict "value" $serviceAnnotations "context" $) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ $serviceType }} + {{- if and $serviceLoadBalancerIP (eq $serviceType "LoadBalancer") }} + loadBalancerIP: {{ $serviceLoadBalancerIP }} + {{- end }} + {{- if and (eq $serviceType "LoadBalancer") $serviceLoadBalancerSourceRanges }} + loadBalancerSourceRanges: {{- include "common.tplvalues.render" (dict "value" $serviceLoadBalancerSourceRanges "context" $) | nindent 4 }} + {{- end }} + {{- if and (eq $serviceType "ClusterIP") $serviceClusterIP }} + clusterIP: {{ $serviceClusterIP }} + {{- end }} + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + {{- if $serviceNodePort }} + nodePort: {{ $serviceNodePort }} + {{- end }} + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }} + role: read +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/svc.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/svc.yaml new file mode 100644 index 0000000000..a29431b6a4 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/templates/svc.yaml @@ -0,0 +1,41 @@ +{{- $serviceAnnotations := coalesce .Values.primary.service.annotations .Values.service.annotations -}} +{{- $serviceType := coalesce .Values.primary.service.type .Values.service.type -}} +{{- $serviceLoadBalancerIP := coalesce .Values.primary.service.loadBalancerIP .Values.service.loadBalancerIP -}} +{{- $serviceLoadBalancerSourceRanges := coalesce .Values.primary.service.loadBalancerSourceRanges .Values.service.loadBalancerSourceRanges -}} +{{- $serviceClusterIP := coalesce .Values.primary.service.clusterIP .Values.service.clusterIP -}} +{{- $serviceNodePort := coalesce .Values.primary.service.nodePort .Values.service.nodePort -}} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- if $serviceAnnotations }} + {{- include "common.tplvalues.render" (dict "value" $serviceAnnotations "context" $) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ $serviceType }} + {{- if and $serviceLoadBalancerIP (eq $serviceType "LoadBalancer") }} + loadBalancerIP: {{ $serviceLoadBalancerIP }} + {{- end }} + {{- if and (eq $serviceType "LoadBalancer") $serviceLoadBalancerSourceRanges }} + loadBalancerSourceRanges: {{- include "common.tplvalues.render" (dict "value" $serviceLoadBalancerSourceRanges "context" $) | nindent 4 }} + {{- end }} + {{- if and (eq $serviceType "ClusterIP") $serviceClusterIP }} + clusterIP: {{ $serviceClusterIP }} + {{- end }} + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + {{- if $serviceNodePort }} + nodePort: {{ $serviceNodePort }} + {{- end }} + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }}
+  role: primary
diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/values.schema.json b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/values.schema.json
new file mode 100644
index 0000000000..66a2a9dd06
--- /dev/null
+++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/values.schema.json
@@ -0,0 +1,103 @@
+{
+  "$schema": "http://json-schema.org/schema#",
+  "type": "object",
+  "properties": {
+    "postgresqlUsername": {
+      "type": "string",
+      "title": "Admin user",
+      "form": true
+    },
+    "postgresqlPassword": {
+      "type": "string",
+      "title": "Password",
+      "form": true
+    },
+    "persistence": {
+      "type": "object",
+      "properties": {
+        "size": {
+          "type": "string",
+          "title": "Persistent Volume Size",
+          "form": true,
+          "render": "slider",
+          "sliderMin": 1,
+          "sliderMax": 100,
+          "sliderUnit": "Gi"
+        }
+      }
+    },
+    "resources": {
+      "type": "object",
+      "title": "Required Resources",
+      "description": "Configure resource requests",
+      "form": true,
+      "properties": {
+        "requests": {
+          "type": "object",
+          "properties": {
+            "memory": {
+              "type": "string",
+              "form": true,
+              "render": "slider",
+              "title": "Memory Request",
+              "sliderMin": 10,
+              "sliderMax": 2048,
+              "sliderUnit": "Mi"
+            },
+            "cpu": {
+              "type": "string",
+              "form": true,
+              "render": "slider",
+              "title": "CPU Request",
+              "sliderMin": 10,
+              "sliderMax": 2000,
+              "sliderUnit": "m"
+            }
+          }
+        }
+      }
+    },
+    "replication": {
+      "type": "object",
+      "form": true,
+      "title": "Replication Details",
+      "properties": {
+        "enabled": {
+          "type": "boolean",
+          "title": "Enable Replication",
+          "form": true
+        },
+        "readReplicas": {
+          "type": "integer",
+          "title": "Read Replicas",
+          "form": true,
+          "hidden": {
+            "value": false,
+            "path": "replication/enabled"
+          }
+        }
+      }
+    },
+    "volumePermissions": {
+      "type": "object",
+      "properties": {
+        "enabled": {
+          "type": "boolean",
+          "form": true,
+          "title": "Enable Init Containers",
+          "description": "Change the owner of the persistent volume mountpoint to
RunAsUser:fsGroup"
+        }
+      }
+    },
+    "metrics": {
+      "type": "object",
+      "properties": {
+        "enabled": {
+          "type": "boolean",
+          "title": "Configure metrics exporter",
+          "form": true
+        }
+      }
+    }
+  }
+}
diff --git a/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/values.yaml b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/values.yaml
new file mode 100644
index 0000000000..82ce092344
--- /dev/null
+++ b/charts/jfrog/artifactory-ha/107.90.14/charts/postgresql/values.yaml
@@ -0,0 +1,824 @@
+## Global Docker image parameters
+## Please note that this will override the image parameters, including dependencies, configured to use the global value
+## Current available global Docker image parameters: imageRegistry and imagePullSecrets
+##
+global:
+  postgresql: {}
+#   imageRegistry: myRegistryName
+#   imagePullSecrets:
+#     - myRegistryKeySecretName
+#   storageClass: myStorageClass
+
+## Bitnami PostgreSQL image version
+## ref: https://hub.docker.com/r/bitnami/postgresql/tags/
+##
+image:
+  registry: docker.io
+  repository: bitnami/postgresql
+  tag: 11.11.0-debian-10-r71
+  ## Specify an imagePullPolicy
+  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
+  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
+  ##
+  pullPolicy: IfNotPresent
+  ## Optionally specify an array of imagePullSecrets.
+  ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + + ## Set to true if you would like to see extra information on logs + ## It turns on BASH and/or NAMI debugging in the image + ## + debug: false + +## String to partially override common.names.fullname template (will maintain the release name) +## +# nameOverride: + +## String to fully override common.names.fullname template +## +# fullnameOverride: + +## +## Init containers parameters: +## volumePermissions: Change the owner of the persistent volume mountpoint to RunAsUser:fsGroup +## +volumePermissions: + enabled: false + image: + registry: docker.io + repository: bitnami/bitnami-shell + tag: "10" + ## Specify an imagePullPolicy + ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' + ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images + ## + pullPolicy: Always + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + ## Init container Security Context + ## Note: the chown of the data folder is done to securityContext.runAsUser + ## and not the below volumePermissions.securityContext.runAsUser + ## When runAsUser is set to special value "auto", init container will try to chown the + ## data folder to autodetermined user&group, using commands: `id -u`:`id -G | cut -d" " -f2` + ## "auto" is especially useful for OpenShift which has scc with dynamic userids (and 0 is not allowed). + ## You may want to use this volumePermissions.securityContext.runAsUser="auto" in combination with + ## pod securityContext.enabled=false and shmVolume.chmod.enabled=false + ## + securityContext: + runAsUser: 0 + +## Use an alternate scheduler, e.g. "stork".
+## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ +## +# schedulerName: + +## Pod Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ +## +securityContext: + enabled: true + fsGroup: 1001 + +## Container Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ +## +containerSecurityContext: + enabled: true + runAsUser: 1001 + +## Pod Service Account +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ +## +serviceAccount: + enabled: false + ## Name of an already existing service account. Setting this value disables the automatic service account creation. + # name: + +## Pod Security Policy +## ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ +## +psp: + create: false + +## Creates role for ServiceAccount +## Required for PSP +## +rbac: + create: false + +replication: + enabled: false + user: repl_user + password: repl_password + readReplicas: 1 + ## Set synchronous commit mode: on, off, remote_apply, remote_write and local + ## ref: https://www.postgresql.org/docs/9.6/runtime-config-wal.html#GUC-WAL-LEVEL + synchronousCommit: 'off' + ## From the number of `readReplicas` defined above, set the number of those that will have synchronous replication + ## NOTE: It cannot be > readReplicas + numSynchronousReplicas: 0 + ## Replication Cluster application name. Useful for defining multiple replication policies + ## + applicationName: my_application + +## PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`) +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-user-on-first-run (see note!) 
+# postgresqlPostgresPassword: + +## PostgreSQL user (has superuser privileges if username is `postgres`) +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run +## +postgresqlUsername: postgres + +## PostgreSQL password +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run +## +# postgresqlPassword: + +## PostgreSQL password using existing secret +## existingSecret: secret +## + +## Mount PostgreSQL secret as a file instead of passing environment variable +# usePasswordFile: false + +## Create a database +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-on-first-run +## +# postgresqlDatabase: + +## PostgreSQL data dir +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +postgresqlDataDir: /bitnami/postgresql/data + +## An array to add extra environment variables +## For example: +## extraEnv: +## - name: FOO +## value: "bar" +## +# extraEnv: +extraEnv: [] + +## Name of a ConfigMap containing extra env vars +## +# extraEnvVarsCM: + +## Specify extra initdb args +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +# postgresqlInitdbArgs: + +## Specify a custom location for the PostgreSQL transaction log +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +# postgresqlInitdbWalDir: + +## PostgreSQL configuration +## Specify runtime configuration parameters as a dict, using camelCase, e.g. 
+## {"sharedBuffers": "500MB"} +## Alternatively, you can put your postgresql.conf under the files/ directory +## ref: https://www.postgresql.org/docs/current/static/runtime-config.html +## +# postgresqlConfiguration: + +## PostgreSQL extended configuration +## As above, but _appended_ to the main configuration +## Alternatively, you can put your *.conf under the files/conf.d/ directory +## https://github.com/bitnami/bitnami-docker-postgresql#allow-settings-to-be-loaded-from-files-other-than-the-default-postgresqlconf +## +# postgresqlExtendedConf: + +## Configure current cluster's primary server to be the standby server in other cluster. +## This will allow cross cluster replication and provide cross cluster high availability. +## You will need to configure pgHbaConfiguration if you want to enable this feature with local cluster replication enabled. +## +primaryAsStandBy: + enabled: false + # primaryHost: + # primaryPort: + +## PostgreSQL client authentication configuration +## Specify content for pg_hba.conf +## Default: do not create pg_hba.conf +## Alternatively, you can put your pg_hba.conf under the files/ directory +# pgHbaConfiguration: |- +# local all all trust +# host all all localhost trust +# host mydatabase mysuser 192.168.0.0/24 md5 + +## ConfigMap with PostgreSQL configuration +## NOTE: This will override postgresqlConfiguration and pgHbaConfiguration +# configurationConfigMap: + +## ConfigMap with PostgreSQL extended configuration +# extendedConfConfigMap: + +## initdb scripts +## Specify dictionary of scripts to be run at first boot +## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory +## +# initdbScripts: +# my_init_script.sh: | +# #!/bin/sh +# echo "Do something." 
+ +## ConfigMap with scripts to be run at first boot +## NOTE: This will override initdbScripts +# initdbScriptsConfigMap: + +## Secret with scripts to be run at first boot (in case it contains sensitive information) +## NOTE: This can work along initdbScripts or initdbScriptsConfigMap +# initdbScriptsSecret: + +## Specify the PostgreSQL username and password to execute the initdb scripts +# initdbUser: +# initdbPassword: + +## Audit settings +## https://github.com/bitnami/bitnami-docker-postgresql#auditing +## +audit: + ## Log client hostnames + ## + logHostname: false + ## Log connections to the server + ## + logConnections: false + ## Log disconnections + ## + logDisconnections: false + ## Operation to audit using pgAudit (default if not set) + ## + pgAuditLog: "" + ## Log catalog using pgAudit + ## + pgAuditLogCatalog: "off" + ## Log level for clients + ## + clientMinMessages: error + ## Template for log line prefix (default if not set) + ## + logLinePrefix: "" + ## Log timezone + ## + logTimezone: "" + +## Shared preload libraries +## +postgresqlSharedPreloadLibraries: "pgaudit" + +## Maximum total connections +## +postgresqlMaxConnections: + +## Maximum connections for the postgres user +## +postgresqlPostgresConnectionLimit: + +## Maximum connections for the created user +## +postgresqlDbUserConnectionLimit: + +## TCP keepalives interval +## +postgresqlTcpKeepalivesInterval: + +## TCP keepalives idle +## +postgresqlTcpKeepalivesIdle: + +## TCP keepalives count +## +postgresqlTcpKeepalivesCount: + +## Statement timeout +## +postgresqlStatementTimeout: + +## Remove pg_hba.conf lines with the following comma-separated patterns +## (cannot be used with custom pg_hba.conf) +## +postgresqlPghbaRemoveFilters: + +## Optional duration in seconds the pod needs to terminate gracefully. 
+## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods +## +# terminationGracePeriodSeconds: 30 + +## LDAP configuration +## +ldap: + enabled: false + url: '' + server: '' + port: '' + prefix: '' + suffix: '' + baseDN: '' + bindDN: '' + bind_password: + search_attr: '' + search_filter: '' + scheme: '' + tls: {} + +## PostgreSQL service configuration +## +service: + ## PostgreSQL service type + ## + type: ClusterIP + # clusterIP: None + port: 5432 + + ## Specify the nodePort value for the LoadBalancer and NodePort service types. + ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport + ## + # nodePort: + + ## Provide any additional annotations which may be required. Evaluated as a template. + ## + annotations: {} + ## Set the LoadBalancer service type to internal only. + ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer + ## + # loadBalancerIP: + ## Load Balancer sources. Evaluated as a template. + ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service + ## + # loadBalancerSourceRanges: + # - 10.10.10.0/24 + +## Start primary and read(s) pod(s) without limitations on shm memory. +## By default docker and containerd (and possibly other container runtimes) +## limit `/dev/shm` to `64M` (see e.g. the +## [docker issue](https://github.com/docker-library/postgres/issues/416) and the +## [containerd issue](https://github.com/containerd/containerd/issues/3654), +## which may not be enough if PostgreSQL uses parallel workers heavily. +## +shmVolume: + ## Set `shmVolume.enabled` to `true` to mount a new tmpfs volume to remove + ## this limitation. + ## + enabled: true + ## Set to `true` to `chmod 777 /dev/shm` on an initContainer.
+ ## This option is ignored if `volumePermissions.enabled` is `false` + ## + chmod: + enabled: true + +## PostgreSQL data Persistent Volume Storage Class +## If defined, storageClassName: +## If set to "-", storageClassName: "", which disables dynamic provisioning +## If undefined (the default) or set to null, no storageClassName spec is +## set, choosing the default provisioner. (gp2 on AWS, standard on +## GKE, AWS & OpenStack) +## +persistence: + enabled: true + ## A manually managed Persistent Volume and Claim + ## If defined, PVC must be created manually before volume will be bound + ## The value is evaluated as a template, so, for example, the name can depend on .Release or .Chart + ## + # existingClaim: + + ## The path the volume will be mounted at, useful when using different + ## PostgreSQL images. + ## + mountPath: /bitnami/postgresql + + ## The subdirectory of the volume to mount to, useful in dev environments + ## and one PV for multiple services. + ## + subPath: '' + + # storageClass: "-" + accessModes: + - ReadWriteOnce + size: 8Gi + annotations: {} + ## selector can be used to match an existing PersistentVolume + ## selector: + ## matchLabels: + ## app: my-app + selector: {} + +## updateStrategy for PostgreSQL StatefulSet and its reads StatefulSets +## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies +## +updateStrategy: + type: RollingUpdate + +## +## PostgreSQL Primary parameters +## +primary: + ## PostgreSQL Primary pod affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAffinityPreset: "" + + ## PostgreSQL Primary pod anti-affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAntiAffinityPreset: soft + + ## PostgreSQL Primary node affinity preset + ## ref: 
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity + ## Allowed values: soft, hard + ## + nodeAffinityPreset: + ## Node affinity type + ## Allowed values: soft, hard + type: "" + ## Node label key to match + ## E.g. + ## key: "kubernetes.io/e2e-az-name" + ## + key: "" + ## Node label values to match + ## E.g. + ## values: + ## - e2e-az1 + ## - e2e-az2 + ## + values: [] + + ## Affinity for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity + ## Note: primary.podAffinityPreset, primary.podAntiAffinityPreset, and primary.nodeAffinityPreset will be ignored when it's set + ## + affinity: {} + + ## Node labels for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/user-guide/node-selection/ + ## + nodeSelector: {} + + ## Tolerations for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + ## + tolerations: [] + + labels: {} + annotations: {} + podLabels: {} + podAnnotations: {} + priorityClassName: '' + ## Extra init containers + ## Example + ## + ## extraInitContainers: + ## - name: do-something + ## image: busybox + ## command: ['do', 'something'] + ## + extraInitContainers: [] + + ## Additional PostgreSQL primary Volume mounts + ## + extraVolumeMounts: [] + ## Additional PostgreSQL primary Volumes + ## + extraVolumes: [] + ## Add sidecars to the pod + ## + ## For example: + ## sidecars: + ## - name: your-image-name + ## image: your-image + ## imagePullPolicy: Always + ## ports: + ## - name: portname + ## containerPort: 1234 + ## + sidecars: [] + + ## Override the service configuration for primary + ## + service: {} + # type: + # nodePort: + # clusterIP: + +## +## PostgreSQL read only replica parameters +## +readReplicas: + ## PostgreSQL read only pod affinity preset + ## ref: 
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAffinityPreset: "" + + ## PostgreSQL read only pod anti-affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAntiAffinityPreset: soft + + ## PostgreSQL read only node affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity + ## Allowed values: soft, hard + ## + nodeAffinityPreset: + ## Node affinity type + ## Allowed values: soft, hard + type: "" + ## Node label key to match + ## E.g. + ## key: "kubernetes.io/e2e-az-name" + ## + key: "" + ## Node label values to match + ## E.g. + ## values: + ## - e2e-az1 + ## - e2e-az2 + ## + values: [] + + ## Affinity for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity + ## Note: readReplicas.podAffinityPreset, readReplicas.podAntiAffinityPreset, and readReplicas.nodeAffinityPreset will be ignored when it's set + ## + affinity: {} + + ## Node labels for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/user-guide/node-selection/ + ## + nodeSelector: {} + + ## Tolerations for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + ## + tolerations: [] + labels: {} + annotations: {} + podLabels: {} + podAnnotations: {} + priorityClassName: '' + + ## Extra init containers + ## Example + ## + ## extraInitContainers: + ## - name: do-something + ## image: busybox + ## command: ['do', 'something'] + ## + extraInitContainers: [] + + ## Additional PostgreSQL read replicas Volume mounts + ## + extraVolumeMounts: [] + + ## Additional PostgreSQL read replicas Volumes + ## + extraVolumes: [] + + ## Add sidecars to the pod + ## + ## For 
example: + ## sidecars: + ## - name: your-image-name + ## image: your-image + ## imagePullPolicy: Always + ## ports: + ## - name: portname + ## containerPort: 1234 + ## + sidecars: [] + + ## Override the service configuration for read + ## + service: {} + # type: + # nodePort: + # clusterIP: + + ## Whether to enable PostgreSQL read replicas data Persistent + ## + persistence: + enabled: true + + # Override the resource configuration for read replicas + resources: {} + # requests: + # memory: 256Mi + # cpu: 250m + +## Configure resource requests and limits +## ref: http://kubernetes.io/docs/user-guide/compute-resources/ +## +resources: + requests: + memory: 256Mi + cpu: 250m + +## Add annotations to all the deployed resources +## +commonAnnotations: {} + +networkPolicy: + ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now. + ## + enabled: false + + ## The Policy model to apply. When set to false, only pods with the correct + ## client label will have network access to the port PostgreSQL is listening + ## on. When true, PostgreSQL will accept connections from any source + ## (with the correct destination port). + ## + allowExternal: true + + ## if explicitNamespacesSelector is missing or set to {}, only client Pods that are in the networkPolicy's namespace + ## and that match other criteria, the ones that have the good label, can reach the DB. + ## But sometimes, we want the DB to be accessible to clients from other namespaces, in this case, we can use this + ## LabelSelector to select these namespaces, note that the networkPolicy's namespace should also be explicitly added. 
+ ## + ## Example: + ## explicitNamespacesSelector: + ## matchLabels: + ## role: frontend + ## matchExpressions: + ## - {key: role, operator: In, values: [frontend]} + ## + explicitNamespacesSelector: {} + +## Configure extra options for startup, liveness and readiness probes +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes +## +startupProbe: + enabled: false + initialDelaySeconds: 30 + periodSeconds: 15 + timeoutSeconds: 5 + failureThreshold: 10 + successThreshold: 1 + +livenessProbe: + enabled: true + initialDelaySeconds: 30 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +readinessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +## Custom Startup probe +## +customStartupProbe: {} + +## Custom Liveness probe +## +customLivenessProbe: {} + +## Custom Readiness probe +## +customReadinessProbe: {} + +## +## TLS configuration +## +tls: + # Enable TLS traffic + enabled: false + # + # Whether to use the server's TLS cipher preferences rather than the client's.
+ preferServerCiphers: true + # + # Name of the Secret that contains the certificates + certificatesSecret: '' + # + # Certificate filename + certFilename: '' + # + # Certificate Key filename + certKeyFilename: '' + # + # CA Certificate filename + # If provided, PostgreSQL will authenticate TLS/SSL clients by requesting them a certificate + # ref: https://www.postgresql.org/docs/9.6/auth-methods.html + certCAFilename: + # + # File containing a Certificate Revocation List + crlFilename: + +## Configure metrics exporter +## +metrics: + enabled: false + # resources: {} + service: + type: ClusterIP + annotations: + prometheus.io/scrape: 'true' + prometheus.io/port: '9187' + loadBalancerIP: + serviceMonitor: + enabled: false + additionalLabels: {} + # namespace: monitoring + # interval: 30s + # scrapeTimeout: 10s + ## Custom PrometheusRule to be defined + ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart + ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions + ## + prometheusRule: + enabled: false + additionalLabels: {} + namespace: '' + ## These are just examples rules, please adapt them to your needs. + ## Make sure to constraint the rules to the current postgresql service. + ## rules: + ## - alert: HugeReplicationLag + ## expr: pg_replication_lag{service="{{ template "common.names.fullname" . }}-metrics"} / 3600 > 1 + ## for: 1m + ## labels: + ## severity: critical + ## annotations: + ## description: replication for {{ template "common.names.fullname" . }} PostgreSQL is lagging by {{ "{{ $value }}" }} hour(s). + ## summary: PostgreSQL replication is lagging by {{ "{{ $value }}" }} hour(s). + ## + rules: [] + + image: + registry: docker.io + repository: bitnami/postgres-exporter + tag: 0.9.0-debian-10-r43 + pullPolicy: IfNotPresent + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. 
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + ## Define additional custom metrics + ## ref: https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file + # customMetrics: + # pg_database: + # query: "SELECT d.datname AS name, CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT') THEN pg_catalog.pg_database_size(d.datname) ELSE 0 END AS size_bytes FROM pg_catalog.pg_database d where datname not in ('template0', 'template1', 'postgres')" + # metrics: + # - name: + # usage: "LABEL" + # description: "Name of the database" + # - size_bytes: + # usage: "GAUGE" + # description: "Size of the database in bytes" + # + ## An array to add extra env vars to configure postgres-exporter + ## see: https://github.com/wrouesnel/postgres_exporter#environment-variables + ## For example: + # extraEnvVars: + # - name: PG_EXPORTER_DISABLE_DEFAULT_METRICS + # value: "true" + extraEnvVars: {} + + ## Pod Security Context + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + ## + securityContext: + enabled: false + runAsUser: 1001 + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes) + ## Configure extra options for liveness and readiness probes + ## + livenessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + + readinessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +## Array with extra yaml to deploy with the chart. 
Evaluated as a template +## +extraDeploy: [] diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/access-tls-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/access-tls-values.yaml new file mode 100644 index 0000000000..27e24d3468 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/access-tls-values.yaml @@ -0,0 +1,34 @@ +databaseUpgradeReady: true +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +access: + accessConfig: + security: + tls: true + resetAccessCAKeys: true diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/default-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/default-values.yaml new file mode 100644 index 0000000000..020f523352 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/default-values.yaml @@ -0,0 +1,32 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. 
+databaseUpgradeReady: true +## This is an exception here because HA needs masterKey to connect with other node members and it is commented in values to support 6.x to 7.x Migration +## Please refer https://github.com/jfrog/charts/blob/master/stable/artifactory-ha/README.md#special-upgrade-notes-1 +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/global-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/global-values.yaml new file mode 100644 index 0000000000..0987e17ca7 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/global-values.yaml @@ -0,0 +1,255 @@ +databaseUpgradeReady: true +artifactory: + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + customInitContainersBegin: | + - name: "custom-init-begin-local" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in local" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: volume + customInitContainers: | + - name: "custom-init-local" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in local" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: volume + # Add custom volumes + customVolumes: | + - name: custom-script-local + emptyDir: + sizeLimit: 100Mi + # Add custom volumesMounts + customVolumeMounts: | + - name: custom-script-local + mountPath: "/scriptslocal" + # Add custom sidecar containers + customSidecarContainers: | + - name: "sidecar-list-local" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in local' >> /scriptslocal/sidecarlocal.txt; cat /scriptslocal/sidecarlocal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scriptslocal" + name: custom-script-local + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +global: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + customInitContainersBegin: | + - name: "custom-init-begin-global" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in global" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: volume + customInitContainers: | + - name: "custom-init-global" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in global" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: volume + # Add custom volumes + customVolumes: | + - name: custom-script-global + emptyDir: + sizeLimit: 100Mi + # Add custom volumesMounts + customVolumeMounts: | + - name: custom-script-global + mountPath: "/scripts" + # Add custom sidecar containers + customSidecarContainers: | + - name: "sidecar-list-global" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in global' >> /scripts/sidecarglobal.txt; cat /scripts/sidecarglobal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scripts" + name: custom-script-global + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + +nginx: + customInitContainers: | + - name: "custom-init-begin-nginx" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in nginx" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: custom-script-local + customSidecarContainers: | + - name: "sidecar-list-nginx" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in local' >> /scriptslocal/sidecarlocal.txt; cat /scriptslocal/sidecarlocal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scriptslocal" + name: custom-script-local + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + # Add custom volumes + customVolumes: | + - name: custom-script-local + emptyDir: + sizeLimit: 100Mi + + artifactoryConf: | + {{- if .Values.nginx.https.enabled }} + ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; + ssl_certificate {{ .Values.nginx.persistence.mountPath }}/ssl/tls.crt; + ssl_certificate_key {{ .Values.nginx.persistence.mountPath }}/ssl/tls.key; + ssl_session_cache shared:SSL:1m; + ssl_prefer_server_ciphers on; + {{- end }} + ## server configuration + server { + listen 8088; + {{- if .Values.nginx.internalPortHttps }} + listen {{ .Values.nginx.internalPortHttps }} ssl; + {{- else -}} + {{- if .Values.nginx.https.enabled }} + listen {{ .Values.nginx.https.internalPort }} ssl; + {{- end }} + {{- end }} + {{- if .Values.nginx.internalPortHttp }} + listen {{ .Values.nginx.internalPortHttp }}; + {{- else -}} + {{- if .Values.nginx.http.enabled }} + listen {{ .Values.nginx.http.internalPort }}; + {{- end }} + {{- end }} + server_name ~(?<repo>.+)\.{{ include "artifactory-ha.fullname" . }} {{ include "artifactory-ha.fullname" . }} + {{- range .Values.ingress.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?<repo>.+)\.{{ .
}} + {{- end -}} + {{- end -}}; + if ($http_x_forwarded_proto = '') { + set $http_x_forwarded_proto $scheme; + } + ## Application specific logs + ## access_log /var/log/nginx/artifactory-access.log timing; + ## error_log /var/log/nginx/artifactory-error.log; + rewrite ^/artifactory/?$ / redirect; + if ( $repo != "" ) { + rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break; + } + chunked_transfer_encoding on; + client_max_body_size 0; + + location / { + proxy_read_timeout 900; + proxy_pass_header Server; + proxy_cookie_path ~*^/.* /; + proxy_pass {{ include "artifactory-ha.scheme" . }}://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalPort }}/; + {{- if .Values.nginx.service.ssloffload}} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host; + {{- else }} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port; + proxy_set_header X-Forwarded-Port $server_port; + {{- end }} + proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto; + proxy_set_header Host $http_host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + + location /artifactory/ { + if ( $request_uri ~ ^/artifactory/(.*)$ ) { + proxy_pass {{ include "artifactory-ha.scheme" . }}://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/$1; + } + proxy_pass {{ include "artifactory-ha.scheme" . }}://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/; + } + } + } + + ## A list of custom ports to expose on the NGINX pod. Follows the conventional Kubernetes yaml syntax for container ports. + customPorts: + - containerPort: 8088 + name: http2 + service: + ## A list of custom ports to expose through the Ingress controller service. Follows the conventional Kubernetes yaml syntax for service ports. 
+ customPorts: + - port: 8088 + targetPort: 8088 + protocol: TCP + name: http2 diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/large-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/large-values.yaml new file mode 100644 index 0000000000..153307aa26 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/large-values.yaml @@ -0,0 +1,85 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + database: + maxOpenConnections: 150 + tomcat: + connector: + maxThreads: 300 + primary: + replicaCount: 4 + resources: + requests: + memory: "6Gi" + cpu: "2" + limits: + memory: "10Gi" + cpu: "8" + javaOpts: + xms: "8g" + xmx: "10g" +access: + database: + maxOpenConnections: 150 + tomcat: + connector: + maxThreads: 100 +router: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 150 + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/loggers-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/loggers-values.yaml new file mode 100644 index 0000000000..03c94be953 --- 
/dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/loggers-values.yaml @@ -0,0 +1,43 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + + loggers: + - access-audit.log + - access-request.log + - access-security-audit.log + - access-service.log + - artifactory-access.log + - artifactory-event.log + - artifactory-import-export.log + - artifactory-request.log + - artifactory-service.log + - frontend-request.log + - frontend-service.log + - metadata-request.log + - metadata-service.log + - router-request.log + - router-service.log + - router-traefik.log + + catalinaLoggers: + - tomcat-catalina.log + - tomcat-localhost.log diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/medium-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/medium-values.yaml new file mode 100644 index 0000000000..115e7d4602 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/medium-values.yaml @@ -0,0 +1,85 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. 
+databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + database: + maxOpenConnections: 100 + tomcat: + connector: + maxThreads: 200 + primary: + replicaCount: 3 + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "8Gi" + cpu: "6" + javaOpts: + xms: "6g" + xmx: "8g" +access: + database: + maxOpenConnections: 100 + tomcat: + connector: + maxThreads: 50 +router: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 100 + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/migration-disabled-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/migration-disabled-values.yaml new file mode 100644 index 0000000000..44895a3731 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/migration-disabled-values.yaml @@ -0,0 +1,31 @@ +databaseUpgradeReady: true +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + migration: + enabled: false + 
persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/nginx-autoreload-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/nginx-autoreload-values.yaml new file mode 100644 index 0000000000..a6f4e8001f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/nginx-autoreload-values.yaml @@ -0,0 +1,53 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true +## This is an exception here because HA needs masterKey to connect with other node members and it is commented in values to support 6.x to 7.x Migration +## Please refer https://github.com/jfrog/charts/blob/master/stable/artifactory-ha/README.md#special-upgrade-notes-1 +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false + +nginx: + customVolumes: | + - name: scripts + configMap: + name: {{ template "artifactory-ha.fullname" . 
}}-nginx-scripts + defaultMode: 0550 + customVolumeMounts: | + - name: scripts + mountPath: /var/opt/jfrog/nginx/scripts/ + customCommand: + - /bin/sh + - -c + - | + # watch for configmap changes + /sbin/inotifyd /var/opt/jfrog/nginx/scripts/configreloader.sh {{ .Values.nginx.persistence.mountPath -}}/conf.d:n & + {{ if .Values.nginx.https.enabled -}} + # watch for tls secret changes + /sbin/inotifyd /var/opt/jfrog/nginx/scripts/configreloader.sh {{ .Values.nginx.persistence.mountPath -}}/ssl:n & + {{ end -}} + nginx -g 'daemon off;' diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/rtsplit-access-tls-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/rtsplit-access-tls-values.yaml new file mode 100644 index 0000000000..6f3b13cb14 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/rtsplit-access-tls-values.yaml @@ -0,0 +1,106 @@ +databaseUpgradeReady: true +artifactory: + replicaCount: 3 + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + +access: + accessConfig: + security: + tls: true + resetAccessCAKeys: true + +postgresql: + postgresqlPassword: password + postgresqlExtendedConf: + maxConnections: 102 + persistence: + enabled: false + +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true + +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false +jfconnect: + enabled: true + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +mc: + enabled: true +splitServicesToContainers: true + +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + 
resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/rtsplit-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/rtsplit-values.yaml new file mode 100644 index 0000000000..87832a5051 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/rtsplit-values.yaml @@ -0,0 +1,155 @@ +databaseUpgradeReady: true +artifactory: + replicaCount: 3 + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + + # Add lifecycle hooks for artifactory container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the artifactory postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the artifactory postStart handler >> /tmp/message"] + +postgresql: + postgresqlPassword: password + postgresqlExtendedConf: + maxConnections: 102 + persistence: + enabled: false + +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true + +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false +jfconnect: + enabled: true + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for jfconect container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", 
"echo Hello from the jfconnect postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the jfconnect postStart handler >> /tmp/message"] + +mc: + enabled: true +splitServicesToContainers: true + +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for router container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the router postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the router postStart handler >> /tmp/message"] +frontend: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for frontend container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the frontend postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the frontend postStart handler >> /tmp/message"] +metadata: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the metadata postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the metadata postStart handler >> /tmp/message"] +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the event postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the event postStart handler >> /tmp/message"] +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the observability postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", 
"-c", "echo Hello from the observability postStart handler >> /tmp/message"] diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/small-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/small-values.yaml new file mode 100644 index 0000000000..b4557289ef --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/small-values.yaml @@ -0,0 +1,87 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + database: + maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 200 + primary: + replicaCount: 1 + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "6g" + node: + replicaCount: 2 +access: + database: + maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 50 +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 80 + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-ha/107.90.14/ci/test-values.yaml b/charts/jfrog/artifactory-ha/107.90.14/ci/test-values.yaml new file mode 100644 index 
0000000000..8bbbb5b3e5 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/ci/test-values.yaml @@ -0,0 +1,85 @@ +databaseUpgradeReady: true +artifactory: + metrics: + enabled: true + podSecurityContext: + fsGroupChangePolicy: "OnRootMismatch" + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + unifiedSecretInstallation: false + persistence: + enabled: false + primary: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + node: + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + statefulset: + annotations: + artifactory: test + +postgresql: + postgresqlPassword: "password" + postgresqlExtendedConf: + maxConnections: "102" + persistence: + enabled: false +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false + +jfconnect: + enabled: false + +## filebeat sidecar +filebeat: + enabled: true + filebeatYml: | + logging.level: info + path.data: {{ .Values.artifactory.persistence.mountPath }}/log/filebeat + name: artifactory-filebeat + queue.spool: + file: + permissions: 0760 + filebeat.inputs: + - type: log + enabled: true + close_eof: ${CLOSE:false} + paths: + - {{ .Values.artifactory.persistence.mountPath }}/log/*.log + fields: + service: "jfrt" + log_type: "artifactory" + output.file: + path: "/tmp/filebeat" + filename: filebeat + readinessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + curl --fail 127.0.0.1:5066 diff --git a/charts/jfrog/artifactory-ha/107.90.14/files/binarystore.xml b/charts/jfrog/artifactory-ha/107.90.14/files/binarystore.xml new file mode 100644 index 0000000000..0e7bc5af0f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/files/binarystore.xml @@ -0,0 +1,439 @@ +{{- if and (eq .Values.artifactory.persistence.type "nfs") 
(.Values.artifactory.haDataDir.enabled) }} + + + + + + + +{{- end }} +{{- if and (eq .Values.artifactory.persistence.type "nfs") (not .Values.artifactory.haDataDir.enabled) }} + + {{- if (.Values.artifactory.persistence.maxCacheSize) }} + + + + + + {{- else }} + + + + {{- end }} + + {{- if .Values.artifactory.persistence.maxCacheSize }} + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + + + {{ .Values.artifactory.persistence.nfs.dataDir }}/filestore + + + +{{- end }} + +{{- if eq .Values.artifactory.persistence.type "file-system" }} + +{{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + + + + + + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) -}} + + {{- end }} + + + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + // Specify the read and write strategy and redundancy for the sharding binary provider + + roundRobin + percentageFreeSpace + 2 + + + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) -}} + //For each sub-provider (mount), specify the filestore location + + filestore{{ $sharedClaimNumber }} + + {{- end }} + +{{- else }} + + + + + crossNetworkStrategy + crossNetworkStrategy 
+ {{ .Values.artifactory.persistence.redundancy }} + 2 + 2 + + + + + + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + + + shard-fs-1 + local + + + + + 30 + tester-remote1 + 10000 + remote + + + +{{- end }} +{{- end }} +{{- if or (eq .Values.artifactory.persistence.type "google-storage") (eq .Values.artifactory.persistence.type "google-storage-v2") (eq .Values.artifactory.persistence.type "google-storage-v2-direct") }} + + + {{- if or (eq .Values.artifactory.persistence.type "google-storage") (eq .Values.artifactory.persistence.type "google-storage-v2") }} + + + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + 2 + + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "google-storage-v2-direct" }} + + + + + + {{- end }} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{- if or (eq .Values.artifactory.persistence.type "google-storage") (eq .Values.artifactory.persistence.type "google-storage-v2") }} + + local + + + + + 30 + 10000 + remote + + {{- end }} + + + + {{- if .Values.artifactory.persistence.googleStorage.useInstanceCredentials }} + true + {{- else }} + false + {{- end }} + {{ .Values.artifactory.persistence.googleStorage.enableSignedUrlRedirect }} + google-cloud-storage + {{ 
.Values.artifactory.persistence.googleStorage.endpoint }} + {{ .Values.artifactory.persistence.googleStorage.httpsOnly }} + {{ .Values.artifactory.persistence.googleStorage.bucketName }} + {{ .Values.artifactory.persistence.googleStorage.path }} + {{ .Values.artifactory.persistence.googleStorage.bucketExists }} + + +{{- end }} +{{- if or (eq .Values.artifactory.persistence.type "aws-s3-v3") (eq .Values.artifactory.persistence.type "s3-storage-v3-direct") (eq .Values.artifactory.persistence.type "s3-storage-v3-archive") }} + + + {{- if eq .Values.artifactory.persistence.type "aws-s3-v3" }} + + + + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "s3-storage-v3-direct" }} + + + + + + {{- else if eq .Values.artifactory.persistence.type "s3-storage-v3-archive" }} + + + + + + + {{- end }} + + {{- if eq .Values.artifactory.persistence.type "aws-s3-v3" }} + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + + + + + remote + + + + local + + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + + {{- if eq .Values.artifactory.persistence.type "s3-storage-v3-direct" }} + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + + {{- with .Values.artifactory.persistence.awsS3V3 }} + + {{ .testConnection }} + {{- if .identity 
}} + {{ .identity }} + {{- end }} + {{- if .credential }} + {{ .credential }} + {{- end }} + {{ .region }} + {{ .bucketName }} + {{ .path }} + {{ .endpoint }} + {{- with .port }} + {{ . }} + {{- end }} + {{- with .useHttp }} + {{ . }} + {{- end }} + {{- with .maxConnections }} + {{ . }} + {{- end }} + {{- with .connectionTimeout }} + {{ . }} + {{- end }} + {{- with .socketTimeout }} + {{ . }} + {{- end }} + {{- with .kmsServerSideEncryptionKeyId }} + {{ . }} + {{- end }} + {{- with .kmsKeyRegion }} + {{ . }} + {{- end }} + {{- with .kmsCryptoMode }} + {{ . }} + {{- end }} + {{- if .useInstanceCredentials }} + true + {{- else }} + false + {{- end }} + {{ .usePresigning }} + {{ .signatureExpirySeconds }} + {{ .signedUrlExpirySeconds }} + {{- with .cloudFrontDomainName }} + {{ . }} + {{- end }} + {{- with .cloudFrontKeyPairId }} + {{ . }} + {{- end }} + {{- with .cloudFrontPrivateKey }} + {{ . }} + {{- end }} + {{- with .enableSignedUrlRedirect }} + {{ . }} + {{- end }} + {{- with .enablePathStyleAccess }} + {{ . }} + {{- end }} + {{- with .multiPartLimit }} + {{ . | int64 }} + {{- end }} + {{- with .multipartElementSize }} + {{ . 
| int64 }} + {{- end }} + + {{- end }} + +{{- end }} + +{{- if or (eq .Values.artifactory.persistence.type "azure-blob") (eq .Values.artifactory.persistence.type "azure-blob-storage-direct") }} + + + {{- if eq .Values.artifactory.persistence.type "azure-blob" }} + + + + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "azure-blob-storage-direct" }} + + + + + + {{- end }} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{- if eq .Values.artifactory.persistence.type "azure-blob" }} + + + crossNetworkStrategy + crossNetworkStrategy + 2 + 1 + + + + + remote + + + + local + + {{- end }} + + + + {{ .Values.artifactory.persistence.azureBlob.accountName }} + {{ .Values.artifactory.persistence.azureBlob.accountKey }} + {{ .Values.artifactory.persistence.azureBlob.endpoint }} + {{ .Values.artifactory.persistence.azureBlob.containerName }} + {{ .Values.artifactory.persistence.azureBlob.multiPartLimit | int64 }} + {{ .Values.artifactory.persistence.azureBlob.multipartElementSize | int64 }} + {{ .Values.artifactory.persistence.azureBlob.testConnection }} + + +{{- end }} +{{- if eq .Values.artifactory.persistence.type "azure-blob-storage-v2-direct" -}} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{ .Values.artifactory.persistence.azureBlob.accountName }} + {{ 
.Values.artifactory.persistence.azureBlob.accountKey }} + {{ .Values.artifactory.persistence.azureBlob.endpoint }} + {{ .Values.artifactory.persistence.azureBlob.containerName }} + {{ .Values.artifactory.persistence.azureBlob.multiPartLimit | int64 }} + {{ .Values.artifactory.persistence.azureBlob.multipartElementSize | int64 }} + {{ .Values.artifactory.persistence.azureBlob.testConnection }} + + +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/files/installer-info.json b/charts/jfrog/artifactory-ha/107.90.14/files/installer-info.json new file mode 100644 index 0000000000..cf6b020fb3 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/files/installer-info.json @@ -0,0 +1,32 @@ +{ + "productId": "Helm_artifactory-ha/{{ .Chart.Version }}", + "features": [ + { + "featureId": "Platform/{{ printf "%s-%s" "kubernetes" .Capabilities.KubeVersion.Version }}" + }, + { + "featureId": "Database/{{ .Values.database.type }}" + }, + { + "featureId": "PostgreSQL_Enabled/{{ .Values.postgresql.enabled }}" + }, + { + "featureId": "Nginx_Enabled/{{ .Values.nginx.enabled }}" + }, + { + "featureId": "ArtifactoryPersistence_Type/{{ .Values.artifactory.persistence.type }}" + }, + { + "featureId": "SplitServicesToContainers_Enabled/{{ .Values.splitServicesToContainers }}" + }, + { + "featureId": "UnifiedSecretInstallation_Enabled/{{ .Values.artifactory.unifiedSecretInstallation }}" + }, + { + "featureId": "Filebeat_Enabled/{{ .Values.filebeat.enabled }}" + }, + { + "featureId": "ReplicaCount/{{ add .Values.artifactory.primary.replicaCount .Values.artifactory.node.replicaCount }}" + } + ] +} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/files/migrate.sh b/charts/jfrog/artifactory-ha/107.90.14/files/migrate.sh new file mode 100644 index 0000000000..ba44160f47 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/files/migrate.sh @@ -0,0 +1,4311 @@ +#!/bin/bash + +# Flags +FLAG_Y="y" +FLAG_N="n" 
+FLAGS_Y_N="$FLAG_Y $FLAG_N" +FLAG_NOT_APPLICABLE="_NA_" + +CURRENT_VERSION=$1 + +WRAPPER_SCRIPT_TYPE_RPMDEB="RPMDEB" +WRAPPER_SCRIPT_TYPE_DOCKER_COMPOSE="DOCKERCOMPOSE" + +SENSITIVE_KEY_VALUE="__sensitive_key_hidden___" + +# Shared system keys +SYS_KEY_SHARED_JFROGURL="shared.jfrogUrl" +SYS_KEY_SHARED_SECURITY_JOINKEY="shared.security.joinKey" +SYS_KEY_SHARED_SECURITY_MASTERKEY="shared.security.masterKey" + +SYS_KEY_SHARED_NODE_ID="shared.node.id" +SYS_KEY_SHARED_JAVAHOME="shared.javaHome" + +SYS_KEY_SHARED_DATABASE_TYPE="shared.database.type" +SYS_KEY_SHARED_DATABASE_TYPE_VALUE_POSTGRES="postgresql" +SYS_KEY_SHARED_DATABASE_DRIVER="shared.database.driver" +SYS_KEY_SHARED_DATABASE_URL="shared.database.url" +SYS_KEY_SHARED_DATABASE_USERNAME="shared.database.username" +SYS_KEY_SHARED_DATABASE_PASSWORD="shared.database.password" + +SYS_KEY_SHARED_ELASTICSEARCH_URL="shared.elasticsearch.url" +SYS_KEY_SHARED_ELASTICSEARCH_USERNAME="shared.elasticsearch.username" +SYS_KEY_SHARED_ELASTICSEARCH_PASSWORD="shared.elasticsearch.password" +SYS_KEY_SHARED_ELASTICSEARCH_CLUSTERSETUP="shared.elasticsearch.clusterSetup" +SYS_KEY_SHARED_ELASTICSEARCH_UNICASTFILE="shared.elasticsearch.unicastFile" +SYS_KEY_SHARED_ELASTICSEARCH_CLUSTERSETUP_VALUE="YES" + +# Define this in product specific script. Should contain the path to unitcast file +# File used by insight server to write cluster active nodes info. 
This will be read by elasticsearch +#SYS_KEY_SHARED_ELASTICSEARCH_UNICASTFILE_VALUE="" + +SYS_KEY_RABBITMQ_ACTIVE_NODE_NAME="shared.rabbitMq.active.node.name" +SYS_KEY_RABBITMQ_ACTIVE_NODE_IP="shared.rabbitMq.active.node.ip" + +# Filenames +FILE_NAME_SYSTEM_YAML="system.yaml" +FILE_NAME_JOIN_KEY="join.key" +FILE_NAME_MASTER_KEY="master.key" +FILE_NAME_INSTALLER_YAML="installer.yaml" + +# Global constants used in business logic +NODE_TYPE_STANDALONE="standalone" +NODE_TYPE_CLUSTER_NODE="node" +NODE_TYPE_DATABASE="database" + +# External(isable) databases +DATABASE_POSTGRES="POSTGRES" +DATABASE_ELASTICSEARCH="ELASTICSEARCH" +DATABASE_RABBITMQ="RABBITMQ" + +POSTGRES_LABEL="PostgreSQL" +ELASTICSEARCH_LABEL="Elasticsearch" +RABBITMQ_LABEL="Rabbitmq" + +ARTIFACTORY_LABEL="Artifactory" +JFMC_LABEL="Mission Control" +DISTRIBUTION_LABEL="Distribution" +XRAY_LABEL="Xray" + +POSTGRES_CONTAINER="postgres" +ELASTICSEARCH_CONTAINER="elasticsearch" +RABBITMQ_CONTAINER="rabbitmq" +REDIS_CONTAINER="redis" + +#Adding a small timeout before a read ensures it is positioned correctly in the screen +read_timeout=0.5 + +# Options related to data directory location +PROMPT_DATA_DIR_LOCATION="Installation Directory" +KEY_DATA_DIR_LOCATION="installer.data_dir" + +SYS_KEY_SHARED_NODE_HAENABLED="shared.node.haEnabled" +PROMPT_ADD_TO_CLUSTER="Are you adding an additional node to an existing product cluster?" +KEY_ADD_TO_CLUSTER="installer.ha" +VALID_VALUES_ADD_TO_CLUSTER="$FLAGS_Y_N" + +MESSAGE_POSTGRES_INSTALL="The installer can install a $POSTGRES_LABEL database, or you can connect to an existing compatible $POSTGRES_LABEL database\n(compatible databases: https://www.jfrog.com/confluence/display/JFROG/System+Requirements#SystemRequirements-RequirementsMatrix)" +PROMPT_POSTGRES_INSTALL="Do you want to install $POSTGRES_LABEL?" 
+KEY_POSTGRES_INSTALL="installer.install_postgresql" +VALID_VALUES_POSTGRES_INSTALL="$FLAGS_Y_N" + +# Postgres connection details +RPM_DEB_POSTGRES_HOME_DEFAULT="/var/opt/jfrog/postgres" +RPM_DEB_MESSAGE_STANDALONE_POSTGRES_DATA="$POSTGRES_LABEL home will have data and its configuration" +RPM_DEB_PROMPT_STANDALONE_POSTGRES_DATA="Type desired $POSTGRES_LABEL home location" +RPM_DEB_KEY_STANDALONE_POSTGRES_DATA="installer.postgresql.home" + +MESSAGE_DATABASE_URL="Provide the database connection details" +PROMPT_DATABASE_URL(){ + local databaseURlExample= + case "$PRODUCT_NAME" in + $ARTIFACTORY_LABEL) + databaseURlExample="jdbc:postgresql://:/artifactory" + ;; + $JFMC_LABEL) + databaseURlExample="postgresql://:/mission_control?sslmode=disable" + ;; + $DISTRIBUTION_LABEL) + databaseURlExample="jdbc:postgresql://:/distribution?sslmode=disable" + ;; + $XRAY_LABEL) + databaseURlExample="postgres://:/xraydb?sslmode=disable" + ;; + esac + if [ -z "$databaseURlExample" ]; then + echo -n "$POSTGRES_LABEL URL" # For consistency with username and password + return + fi + echo -n "$POSTGRES_LABEL url. Example: [$databaseURlExample]" +} +REGEX_DATABASE_URL(){ + local databaseURlExample= + case "$PRODUCT_NAME" in + $ARTIFACTORY_LABEL) + databaseURlExample="jdbc:postgresql://.*/artifactory.*" + ;; + $JFMC_LABEL) + databaseURlExample="postgresql://.*/mission_control.*" + ;; + $DISTRIBUTION_LABEL) + databaseURlExample="jdbc:postgresql://.*/distribution.*" + ;; + $XRAY_LABEL) + databaseURlExample="postgres://.*/xraydb.*" + ;; + esac + echo -n "^$databaseURlExample\$" +} +ERROR_MESSAGE_DATABASE_URL="Invalid $POSTGRES_LABEL URL" +KEY_DATABASE_URL="$SYS_KEY_SHARED_DATABASE_URL" +#NOTE: It is important to display the label. Since the message may be hidden if URL is known +PROMPT_DATABASE_USERNAME="$POSTGRES_LABEL username" +KEY_DATABASE_USERNAME="$SYS_KEY_SHARED_DATABASE_USERNAME" +#NOTE: It is important to display the label. 
Since the message may be hidden if URL is known +PROMPT_DATABASE_PASSWORD="$POSTGRES_LABEL password" +KEY_DATABASE_PASSWORD="$SYS_KEY_SHARED_DATABASE_PASSWORD" +IS_SENSITIVE_DATABASE_PASSWORD="$FLAG_Y" + +MESSAGE_STANDALONE_ELASTICSEARCH_INSTALL="The installer can install a $ELASTICSEARCH_LABEL database or you can connect to an existing compatible $ELASTICSEARCH_LABEL database" +PROMPT_STANDALONE_ELASTICSEARCH_INSTALL="Do you want to install $ELASTICSEARCH_LABEL?" +KEY_STANDALONE_ELASTICSEARCH_INSTALL="installer.install_elasticsearch" +VALID_VALUES_STANDALONE_ELASTICSEARCH_INSTALL="$FLAGS_Y_N" + +# Elasticsearch connection details +MESSAGE_ELASTICSEARCH_DETAILS="Provide the $ELASTICSEARCH_LABEL connection details" +PROMPT_ELASTICSEARCH_URL="$ELASTICSEARCH_LABEL URL" +KEY_ELASTICSEARCH_URL="$SYS_KEY_SHARED_ELASTICSEARCH_URL" + +PROMPT_ELASTICSEARCH_USERNAME="$ELASTICSEARCH_LABEL username" +KEY_ELASTICSEARCH_USERNAME="$SYS_KEY_SHARED_ELASTICSEARCH_USERNAME" + +PROMPT_ELASTICSEARCH_PASSWORD="$ELASTICSEARCH_LABEL password" +KEY_ELASTICSEARCH_PASSWORD="$SYS_KEY_SHARED_ELASTICSEARCH_PASSWORD" +IS_SENSITIVE_ELASTICSEARCH_PASSWORD="$FLAG_Y" + +# Cluster related questions +MESSAGE_CLUSTER_MASTER_KEY="Provide the cluster's master key. It can be found in the data directory of the first node under /etc/security/master.key" +PROMPT_CLUSTER_MASTER_KEY="Master Key" +KEY_CLUSTER_MASTER_KEY="$SYS_KEY_SHARED_SECURITY_MASTERKEY" +IS_SENSITIVE_CLUSTER_MASTER_KEY="$FLAG_Y" + +MESSAGE_JOIN_KEY="The Join key is the secret key used to establish trust between services in the JFrog Platform.\n(You can copy the Join Key from Admin > User Management > Settings)" +PROMPT_JOIN_KEY="Join Key" +KEY_JOIN_KEY="$SYS_KEY_SHARED_SECURITY_JOINKEY" +IS_SENSITIVE_JOIN_KEY="$FLAG_Y" +REGEX_JOIN_KEY="^[a-zA-Z0-9]{16,}\$" +ERROR_MESSAGE_JOIN_KEY="Invalid Join Key" + +# Rabbitmq related cluster information +MESSAGE_RABBITMQ_ACTIVE_NODE_NAME="Provide an active ${RABBITMQ_LABEL} node name. 
Run the command [ hostname -s ] on any of the existing nodes in the product cluster to get this" +PROMPT_RABBITMQ_ACTIVE_NODE_NAME="${RABBITMQ_LABEL} active node name" +KEY_RABBITMQ_ACTIVE_NODE_NAME="$SYS_KEY_RABBITMQ_ACTIVE_NODE_NAME" + +# Rabbitmq related cluster information (necessary only for docker-compose) +PROMPT_RABBITMQ_ACTIVE_NODE_IP="${RABBITMQ_LABEL} active node ip" +KEY_RABBITMQ_ACTIVE_NODE_IP="$SYS_KEY_RABBITMQ_ACTIVE_NODE_IP" + +MESSAGE_JFROGURL(){ + echo -e "The JFrog URL allows ${PRODUCT_NAME} to connect to a JFrog Platform Instance.\n(You can copy the JFrog URL from Administration > User Management > Settings > Connection details)" +} +PROMPT_JFROGURL="JFrog URL" +KEY_JFROGURL="$SYS_KEY_SHARED_JFROGURL" +REGEX_JFROGURL="^https?://.*:{0,}[0-9]{0,4}\$" +ERROR_MESSAGE_JFROGURL="Invalid JFrog URL" + + +# Set this to FLAG_Y on upgrade +IS_UPGRADE="${FLAG_N}" + +# This belongs in JFMC but is the ONLY one that needs it so keeping it here for now. Can be made into a method and overridden if necessary +MESSAGE_MULTIPLE_PG_SCHEME="Please setup $POSTGRES_LABEL with schema as described in https://www.jfrog.com/confluence/display/JFROG/Installing+Mission+Control" + +_getMethodOutputOrVariableValue() { + unset EFFECTIVE_MESSAGE + local keyToSearch=$1 + local effectiveMessage= + local result="0" + # logSilly "Searching for method: [$keyToSearch]" + LC_ALL=C type "$keyToSearch" > /dev/null 2>&1 || result="$?" + if [[ "$result" == "0" ]]; then + # logSilly "Found method for [$keyToSearch]" + EFFECTIVE_MESSAGE="$($keyToSearch)" + return + fi + eval EFFECTIVE_MESSAGE=\${$keyToSearch} + if [ ! 
-z "$EFFECTIVE_MESSAGE" ]; then + return + fi + # logSilly "Didn't find method or variable for [$keyToSearch]" +} + + +# REF https://misc.flogisoft.com/bash/tip_colors_and_formatting +cClear="\e[0m" +cBlue="\e[38;5;69m" +cRedDull="\e[1;31m" +cYellow="\e[1;33m" +cRedBright="\e[38;5;197m" +cBold="\e[1m" + + +_loggerGetModeRaw() { + local MODE="$1" + case $MODE in + INFO) + printf "" + ;; + DEBUG) + printf "%s" "[${MODE}] " + ;; + WARN) + printf "${cRedDull}%s%s%s${cClear}" "[" "${MODE}" "] " + ;; + ERROR) + printf "${cRedBright}%s%s%s${cClear}" "[" "${MODE}" "] " + ;; + esac +} + + +_loggerGetMode() { + local MODE="$1" + case $MODE in + INFO) + printf "${cBlue}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + DEBUG) + printf "%-7s" "[${MODE}]" + ;; + WARN) + printf "${cRedDull}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + ERROR) + printf "${cRedBright}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + esac +} + +# Capitalises the first letter of the message +_loggerGetMessage() { + local originalMessage="$*" + local firstChar=$(echo "${originalMessage:0:1}" | awk '{ print toupper($0) }') + local restOfMessage="${originalMessage:1}" + echo "$firstChar$restOfMessage" +} + +# The spec also says content should be left-trimmed but this is not necessary in our case. We don't reach the limit.
+_loggerGetStackTrace() { + printf "%s%-30s%s" "[" "$1:$2" "]" +} + +_loggerGetThread() { + printf "%s" "[main]" +} + +_loggerGetServiceType() { + printf "%s%-5s%s" "[" "shell" "]" +} + +#Trace ID is not applicable to scripts +_loggerGetTraceID() { + printf "%s" "[]" +} + +logRaw() { + echo "" + printf "$1" + echo "" +} + +logBold(){ + echo "" + printf "${cBold}$1${cClear}" + echo "" +} + +# The date binary works differently based on whether it is GNU/BSD +is_date_supported=0 +date --version > /dev/null 2>&1 || is_date_supported=1 +IS_GNU=$(echo $is_date_supported) + +_loggerGetTimestamp() { + if [ "${IS_GNU}" == "0" ]; then + echo -n $(date -u +%FT%T.%3NZ) + else + echo -n $(date -u +%FT%T.000Z) + fi +} + +# https://www.shellscript.sh/tips/spinner/ +_spin() +{ + spinner="/|\\-/|\\-" + while : + do + for i in `seq 0 7` + do + echo -n "${spinner:$i:1}" + echo -en "\010" + sleep 1 + done + done +} + +showSpinner() { + # Start the Spinner: + _spin & + # Make a note of its Process ID (PID): + SPIN_PID=$! + # Kill the spinner on any signal, including our own exit. 
+ trap "kill -9 $SPIN_PID" `seq 0 15` &> /dev/null || return 0 +} + +stopSpinner() { + local occurrences=$(ps -ef | grep -wc "${SPIN_PID}") + let "occurrences+=0" + # validate that it is present (2 since this search itself will show up in the results) + if [ $occurrences -gt 1 ]; then + kill -9 $SPIN_PID &>/dev/null || return 0 + wait $SPIN_PID &>/dev/null + fi +} + +_getEffectiveMessage(){ + local MESSAGE="$1" + local MODE=${2-"INFO"} + + if [ -z "$CONTEXT" ]; then + CONTEXT=$(caller) + fi + + _EFFECTIVE_MESSAGE= + if [ -z "$LOG_BEHAVIOR_ADD_META" ]; then + _EFFECTIVE_MESSAGE="$(_loggerGetModeRaw $MODE)$(_loggerGetMessage $MESSAGE)" + else + local SERVICE_TYPE="script" + local TRACE_ID="" + local THREAD="main" + + local CONTEXT_LINE=$(echo "$CONTEXT" | awk '{print $1}') + local CONTEXT_FILE=$(echo "$CONTEXT" | awk -F"/" '{print $NF}') + + _EFFECTIVE_MESSAGE="$(_loggerGetTimestamp) $(_loggerGetServiceType) $(_loggerGetMode $MODE) $(_loggerGetTraceID) $(_loggerGetStackTrace $CONTEXT_FILE $CONTEXT_LINE) $(_loggerGetThread) - $(_loggerGetMessage $MESSAGE)" + fi + CONTEXT= +} + +# Important - don't call any log method from this method. Will become an infinite loop. Use echo to debug +_logToFile() { + local MODE=${1-"INFO"} + local targetFile="$LOG_BEHAVIOR_ADD_REDIRECTION" + # IF the file isn't passed, abort + if [ -z "$targetFile" ]; then + return + fi + # IF this is not being run in verbose mode and mode is debug or lower, abort + if [ "${VERBOSE_MODE}" != "$FLAG_Y" ] && [ "${VERBOSE_MODE}" != "true" ] && [ "${VERBOSE_MODE}" != "debug" ]; then + if [ "$MODE" == "DEBUG" ] || [ "$MODE" == "SILLY" ]; then + return + fi + fi + + # Abort if the file doesn't exist yet (it is created elsewhere, e.g. by _createConsoleLog) + if [ !
-f "${targetFile}" ]; then + return + # touch $targetFile > /dev/null 2>&1 || true + fi + # # Make it readable + # chmod 640 $targetFile > /dev/null 2>&1 || true + + # Log contents + printf "%s\n" "$_EFFECTIVE_MESSAGE" >> "$targetFile" || true +} + +logger() { + if [ "$LOG_BEHAVIOR_ADD_NEW_LINE" == "$FLAG_Y" ]; then + echo "" + fi + _getEffectiveMessage "$@" + local MODE=${2-"INFO"} + printf "%s\n" "$_EFFECTIVE_MESSAGE" + _logToFile "$MODE" +} + +logDebug(){ + VERBOSE_MODE=${VERBOSE_MODE-"false"} + CONTEXT=$(caller) + if [ "${VERBOSE_MODE}" == "$FLAG_Y" ] || [ "${VERBOSE_MODE}" == "true" ] || [ "${VERBOSE_MODE}" == "debug" ];then + logger "$1" "DEBUG" + else + logger "$1" "DEBUG" >&6 + fi + CONTEXT= +} + +logSilly(){ + VERBOSE_MODE=${VERBOSE_MODE-"false"} + CONTEXT=$(caller) + if [ "${VERBOSE_MODE}" == "silly" ];then + logger "$1" "DEBUG" + else + logger "$1" "DEBUG" >&6 + fi + CONTEXT= +} + +logError() { + CONTEXT=$(caller) + logger "$1" "ERROR" + CONTEXT= +} + +errorExit () { + CONTEXT=$(caller) + logger "$1" "ERROR" + CONTEXT= + exit 1 +} + +warn () { + CONTEXT=$(caller) + logger "$1" "WARN" + CONTEXT= +} + +note () { + CONTEXT=$(caller) + logger "$1" "NOTE" + CONTEXT= +} + +bannerStart() { + title=$1 + echo + echo -e "\033[1m${title}\033[0m" + echo +} + +bannerSection() { + title=$1 + echo + echo -e "******************************** ${title} ********************************" + echo +} + +bannerSubSection() { + title=$1 + echo + echo -e "************** ${title} *******************" + echo +} + +bannerMessge() { + title=$1 + echo + echo -e "********************************" + echo -e "${title}" + echo -e "********************************" + echo +} + +setRed () { + local input="$1" + echo -e \\033[31m${input}\\033[0m +} +setGreen () { + local input="$1" + echo -e \\033[32m${input}\\033[0m +} +setYellow () { + local input="$1" + echo -e \\033[33m${input}\\033[0m +} + +logger_addLinebreak () { + echo -e "---\n" +} + +bannerImportant() { + title=$1 + local 
bold="\033[1m" + local noColour="\033[0m" + echo + echo -e "${bold}######################################## IMPORTANT ########################################${noColour}" + echo -e "${bold}${title}${noColour}" + echo -e "${bold}###########################################################################################${noColour}" + echo +} + +bannerEnd() { + #TODO pass a title and calculate length dynamically so that start and end look alike + echo + echo "*****************************************************************************" + echo +} + +banner() { + title=$1 + content=$2 + bannerStart "${title}" + echo -e "$content" +} + +# The logic below helps us redirect content we'd normally hide to the log file. + # + # We have several commands which clutter the console with output and so use + # `cmd > /dev/null` - this redirects the command's output to null. + # + # However, the information we just hid may be useful for support. Using the code pattern + # `cmd >&6` (instead of `cmd > /dev/null` ), the command's output is hidden from the console + # but redirected to the installation log file + # + +# By default, file descriptor 6 points to /dev/null +exec 6>>/dev/null +redirectLogsToFile() { + echo "" + # local file=$1 + + # [ ! -z "${file}" ] || return 0 + + # local logDir=$(dirname "$file") + + # if [ !
-f "${file}" ]; then + # [ -d "${logDir}" ] || mkdir -p ${logDir} || \ + # ( echo "WARNING : Could not create parent directory (${logDir}) to redirect console log : ${file}" ; return 0 ) + # fi + + # #6 now points to the log file + # exec 6>>${file} + # #reference https://unix.stackexchange.com/questions/145651/using-exec-and-tee-to-redirect-logs-to-stdout-and-a-log-file-in-the-same-time + # exec 2>&1 > >(tee -a "${file}") +} + +# Check if a given key contains any sensitive string as part of it +# Based on the result, the caller can decide whether the value can be displayed or not +# Sample usage : isKeySensitive "${key}" && displayValue="******" || displayValue=${value} +isKeySensitive(){ + local key=$1 + local sensitiveKeys="password|secret|key|token" + + if [ -z "${key}" ]; then + return 1 + else + local lowercaseKey=$(echo "${key}" | tr '[:upper:]' '[:lower:]' 2>/dev/null) + [[ "${lowercaseKey}" =~ ${sensitiveKeys} ]] && return 0 || return 1 + fi +} + +getPrintableValueOfKey(){ + local displayValue= + local key="$1" + if [ -z "$key" ]; then + # This is actually an incorrect usage of this method but any logging will cause unexpected content in the caller + echo -n "" + return + fi + + local value="$2" + isKeySensitive "${key}" && displayValue="$SENSITIVE_KEY_VALUE" || displayValue="${value}" + echo -n $displayValue +} + +_createConsoleLog(){ + if [ -z "${JF_PRODUCT_HOME}" ]; then + return + fi + local targetFile="${JF_PRODUCT_HOME}/var/log/console.log" + mkdir -p "${JF_PRODUCT_HOME}/var/log" || true + if [ ! -f ${targetFile} ]; then + touch $targetFile > /dev/null 2>&1 || true + fi + chmod 640 $targetFile > /dev/null 2>&1 || true +} + +# Output from the application's logs is piped to this method. It checks a configuration variable to determine if content should be logged to +# the common console.log file +redirectServiceLogsToFile() { + + local result="0" + # check if the function getSystemValue exists + LC_ALL=C type getSystemValue > /dev/null 2>&1 || result="$?"
+ if [[ "$result" != "0" ]]; then + warn "Couldn't find the systemYamlHelper. Skipping log redirection" + return 0 + fi + + getSystemValue "shared.consoleLog" "NOT_SET" + if [[ "${YAML_VALUE}" == "false" ]]; then + logger "Redirection is set to false. Skipping log redirection" + return 0; + fi + + if [ -z "${JF_PRODUCT_HOME}" ] || [ "${JF_PRODUCT_HOME}" == "" ]; then + warn "JF_PRODUCT_HOME is unavailable. Skipping log redirection" + return 0 + fi + + local targetFile="${JF_PRODUCT_HOME}/var/log/console.log" + + _createConsoleLog + + while read -r line; do + printf '%s\n' "${line}" >> $targetFile || return 0 # Don't want to log anything - might clutter the screen + done +} + +## Display environment variables starting with JF_ along with their values +## Value of sensitive keys will be displayed as "******" +## +## Sample Display : +## +## ======================== +## JF Environment variables +## ======================== +## +## JF_SHARED_NODE_ID : localhost +## JF_SHARED_JOINKEY : ****** +## +## +displayEnv() { + local JFEnv=$(printenv | grep ^JF_ 2>/dev/null) + local key= + local value= + + if [ -z "${JFEnv}" ]; then + return + fi + + cat << ENV_START_MESSAGE + +======================== +JF Environment variables +======================== +ENV_START_MESSAGE + + for entry in ${JFEnv}; do + key=$(echo "${entry}" | awk -F'=' '{print $1}') + value=$(echo "${entry}" | awk -F'=' '{print $2}') + + isKeySensitive "${key}" && value="******" || value=${value} + + printf "\n%-35s%s" "${key}" " : ${value}" + done + echo; +} + +_addLogRotateConfiguration() { + logDebug "Method ${FUNCNAME[0]}" + # mandatory inputs + local confFile="$1" + local logFile="$2" + + # Method available in _ioOperations.sh + LC_ALL=C type io_setYQPath > /dev/null 2>&1 || return 1 + + io_setYQPath + + # Method available in _systemYamlHelper.sh + LC_ALL=C type getSystemValue > /dev/null 2>&1 || return 1 + + local frequency="daily" + local archiveFolder="archived" + + local compressLogFiles= + getSystemValue
"shared.logging.rotation.compress" "true" + if [[ "${YAML_VALUE}" == "true" ]]; then + compressLogFiles="compress" + fi + + getSystemValue "shared.logging.rotation.maxFiles" "10" + local noOfBackupFiles="${YAML_VALUE}" + + getSystemValue "shared.logging.rotation.maxSizeMb" "25" + local sizeOfFile="${YAML_VALUE}M" + + logDebug "Adding logrotate configuration for [$logFile] to [$confFile]" + + # Add configuration to file + local confContent=$(cat << LOGROTATECONF +$logFile { + $frequency + missingok + rotate $noOfBackupFiles + $compressLogFiles + notifempty + olddir $archiveFolder + dateext + extension .log + dateformat -%Y-%m-%d + size ${sizeOfFile} +} +LOGROTATECONF +) + echo "${confContent}" > ${confFile} || return 1 +} + +_operationIsBySameUser() { + local targetUser="$1" + local currentUserID=$(id -u) + local currentUserName=$(id -un) + + if [ $currentUserID == $targetUser ] || [ $currentUserName == $targetUser ]; then + echo -n "yes" + else + echo -n "no" + fi +} + +_addCronJobForLogrotate() { + logDebug "Method ${FUNCNAME[0]}" + + # Abort if crontab is not available + [ "$(io_commandExists 'crontab')" != "yes" ] && warn "cron is not available" && return 1 + + # mandatory inputs + local productHome="$1" + local confFile="$2" + local cronJobOwner="$3" + + # We want to use our binary if possible. It may be more recent than the one in the OS + local logrotateBinary="$productHome/app/third-party/logrotate/logrotate" + + if [ ! -f "$logrotateBinary" ]; then + logrotateBinary="logrotate" + [ "$(io_commandExists 'logrotate')" != "yes" ] && warn "logrotate is not available" && return 1 + fi + local cmd="$logrotateBinary ${confFile} --state $productHome/var/etc/logrotate/logrotate-state" #--verbose + + id -u $cronJobOwner > /dev/null 2>&1 || { warn "User $cronJobOwner does not exist.
Aborting logrotate configuration" && return 1; } + + # Remove the existing line + removeLogRotation "$productHome" "$cronJobOwner" || true + + # Run logrotate daily at 23:55 hours + local cronInterval="55 23 * * * $cmd" + + local standaloneMode=$(_operationIsBySameUser "$cronJobOwner") + + # If this is standalone mode, we cannot use -u - the user running this process may not have the necessary privileges + if [ "$standaloneMode" == "no" ]; then + (crontab -l -u $cronJobOwner 2>/dev/null; echo "$cronInterval") | crontab -u $cronJobOwner - + else + (crontab -l 2>/dev/null; echo "$cronInterval") | crontab - + fi +} + +## Configure logrotate for a product +## Failure conditions: +## If logrotation could not be setup for some reason +## Parameters: +## $1: The product name +## $2: The product home +## Depends on global: none +## Updates global: none +## Returns: NA + +configureLogRotation() { + logDebug "Method ${FUNCNAME[0]}" + + # mandatory inputs + local productName="$1" + if [ -z $productName ]; then + warn "Incorrect usage. A product name is necessary for configuring log rotation" && return 1 + fi + + local productHome="$2" + if [ -z $productHome ]; then + warn "Incorrect usage. A product home folder is necessary for configuring log rotation" && return 1 + fi + + local logFile="${productHome}/var/log/console.log" + if [[ $(uname) == "Darwin" ]]; then + logger "Log rotation for [$logFile] has not been configured. Please setup manually" + return 0 + fi + + local userID="$3" + if [ -z $userID ]; then + warn "Incorrect usage. A userID is necessary for configuring log rotation" && return 1 + fi + + local groupID=${4:-$userID} + local logConfigOwner=${5:-$userID} + + logDebug "Configuring log rotation as user [$userID], group [$groupID], effective cron User [$logConfigOwner]" + + local errorMessage="Could not configure logrotate. 
Please configure log rotation of the file: [$logFile] manually" + + local confFile="${productHome}/var/etc/logrotate/logrotate.conf" + + # TODO move to recursive method + createDir "${productHome}" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/log" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/log/archived" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + + # TODO move to recursive method + createDir "${productHome}/var/etc" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/etc/logrotate" "$logConfigOwner" || { warn "${errorMessage}" && return 1; } + + # conf file should be owned by the user running the script + createFile "${confFile}" "${logConfigOwner}" || { warn "Could not create configuration file [$confFile]" && return 1; } + + _addLogRotateConfiguration "${confFile}" "${logFile}" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + _addCronJobForLogrotate "${productHome}" "${confFile}" "${logConfigOwner}" || { warn "${errorMessage}" && return 1; } +} + +_pauseExecution() { + if [ "${VERBOSE_MODE}" == "debug" ]; then + + local breakPoint="$1" + if [ ! -z "$breakPoint" ]; then + printf "${cBlue}Breakpoint${cClear} [$breakPoint] " + echo "" + fi + printf "${cBlue}Press enter once you are ready to continue${cClear}" + read -s choice + echo "" + fi +} + +# removeLogRotation "$productHome" "$cronJobOwner" || true +removeLogRotation() { + logDebug "Method ${FUNCNAME[0]}" + if [[ $(uname) == "Darwin" ]]; then + logDebug "Not implemented for Darwin."
+ return 0 + fi + local productHome="$1" + local cronJobOwner="$2" + local standaloneMode=$(_operationIsBySameUser "$cronJobOwner") + + local confFile="${productHome}/var/etc/logrotate/logrotate.conf" + + if [ "$standaloneMode" == "no" ]; then + crontab -l -u $cronJobOwner 2>/dev/null | grep -v "$confFile" | crontab -u $cronJobOwner - + else + crontab -l 2>/dev/null | grep -v "$confFile" | crontab - + fi +} + +# NOTE: This method does not check the configuration to see if redirection is necessary. +# This is intentional. If we don't redirect, tomcat logs might get redirected to a folder/file +# that does not exist, causing the service itself to not start +setupTomcatRedirection() { + logDebug "Method ${FUNCNAME[0]}" + local consoleLog="${JF_PRODUCT_HOME}/var/log/console.log" + _createConsoleLog + export CATALINA_OUT="${consoleLog}" +} + +setupScriptLogsRedirection() { + logDebug "Method ${FUNCNAME[0]}" + if [ -z "${JF_PRODUCT_HOME}" ]; then + logDebug "No JF_PRODUCT_HOME. Returning" + return + fi + # Create the console.log file if it is not already present + # _createConsoleLog || true + # # Ensure any logs (logger/logError/warn) also get redirected to the console.log + # # Using installer.log as a temporary fix. Please change this to console.log once INST-291 is fixed + export LOG_BEHAVIOR_ADD_REDIRECTION="${JF_PRODUCT_HOME}/var/log/console.log" + export LOG_BEHAVIOR_ADD_META="$FLAG_Y" +} + +# Returns Y if this method is run inside a container +isRunningInsideAContainer() { + local check1=$(grep -sq 'docker\|kubepods' /proc/1/cgroup; echo $?) + local check2=$(grep -sq 'containers' /proc/self/mountinfo; echo $?)
+ if [[ $check1 == 0 || $check2 == 0 || -f "/.dockerenv" ]]; then + echo -n "$FLAG_Y" + else + echo -n "$FLAG_N" + fi +} + +POSTGRES_USER=999 +NGINX_USER=104 +NGINX_GROUP=107 +ES_USER=1000 +REDIS_USER=999 +MONGO_USER=999 +RABBITMQ_USER=999 +LOG_FILE_PERMISSION=640 +PID_FILE_PERMISSION=644 + +# Copy file +copyFile(){ + local source=$1 + local target=$2 + local mode=${3:-overwrite} + local enableVerbose=${4:-"${FLAG_N}"} + local verboseFlag="" + + if [ ! -z "${enableVerbose}" ] && [ "${enableVerbose}" == "${FLAG_Y}" ]; then + verboseFlag="-v" + fi + + if [[ ! ( $source && $target ) ]]; then + warn "Source and target are mandatory to copy file" + return 1 + fi + + if [[ -f "${target}" ]]; then + [[ "$mode" = "overwrite" ]] && ( cp ${verboseFlag} -f "$source" "$target" || errorExit "Unable to copy file, command : cp -f ${source} ${target}") || true + else + cp ${verboseFlag} -f "$source" "$target" || errorExit "Unable to copy file, command : cp -f ${source} ${target}" + fi +} + +# Copy files recursively from given source directory to destination directory +# This method will copy but will NOT overwrite +# Destination will be created if it is not available +copyFilesNoOverwrite(){ + local src=$1 + local dest=$2 + local enableVerboseCopy="${3:-${FLAG_Y}}" + + if [[ -z "${src}" || -z "${dest}" ]]; then + return + fi + + if [ -d "${src}" ] && [ "$(ls -A ${src})" ]; then + local relativeFilePath="" + local targetFilePath="" + + for file in $(find ${src} -type f 2>/dev/null) ; do + # Derive relative path and attach it to destination + # Example : + # src=/extra_config + # dest=/var/opt/jfrog/artifactory/etc + # file=/extra_config/config.xml + # relativeFilePath=config.xml + # targetFilePath=/var/opt/jfrog/artifactory/etc/config.xml + relativeFilePath=${file/${src}/} + targetFilePath=${dest}${relativeFilePath} + + createDir "$(dirname "$targetFilePath")" + copyFile "${file}" "${targetFilePath}" "no_overwrite" "${enableVerboseCopy}" + done + fi +} + +# TODO : WINDOWS ?
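+# Usage sketch (illustrative only; paths mirror the example in the comments above):
+#   copyFilesNoOverwrite "/extra_config" "/var/opt/jfrog/artifactory/etc"
+# Files already present under the destination are skipped, since copyFile only
+# overwrites an existing target when its mode argument is "overwrite".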
+# Check the max open files and open processes set on the system +checkULimits () { + local minMaxOpenFiles=${1:-32000} + local minMaxOpenProcesses=${2:-1024} + local setValue=${3:-true} + local warningMsgForFiles=${4} + local warningMsgForProcesses=${5} + + logger "Checking open files and processes limits" + + local currentMaxOpenFiles=$(ulimit -n) + logger "Current max open files is $currentMaxOpenFiles" + if [ ${currentMaxOpenFiles} != "unlimited" ] && [ "$currentMaxOpenFiles" -lt "$minMaxOpenFiles" ]; then + if [ "${setValue}" ]; then + ulimit -n "${minMaxOpenFiles}" >/dev/null 2>&1 || warn "Max number of open files $currentMaxOpenFiles is low!" + [ -z "${warningMsgForFiles}" ] || warn "${warningMsgForFiles}" + else + errorExit "Max number of open files $currentMaxOpenFiles is too low. Cannot run the application!" + fi + fi + + local currentMaxOpenProcesses=$(ulimit -u) + logger "Current max open processes is $currentMaxOpenProcesses" + if [ "$currentMaxOpenProcesses" != "unlimited" ] && [ "$currentMaxOpenProcesses" -lt "$minMaxOpenProcesses" ]; then + if [ "${setValue}" ]; then + ulimit -u "${minMaxOpenProcesses}" >/dev/null 2>&1 || warn "Max number of open processes $currentMaxOpenProcesses is low!" + [ -z "${warningMsgForProcesses}" ] || warn "${warningMsgForProcesses}" + else + errorExit "Max number of open processes $currentMaxOpenProcesses is too low. Cannot run the application!" + fi + fi +} + +createDirs() { + local appDataDir=$1 + local serviceName=$2 + local folders="backup bootstrap data etc logs work" + + [ -z "${appDataDir}" ] && errorExit "An application directory is mandatory to create its data structure" || true + [ -z "${serviceName}" ] && errorExit "A service name is mandatory to create service data structure" || true + + for folder in ${folders} + do + folder=${appDataDir}/${folder}/${serviceName} + if [ !
-d "${folder}" ]; then + logger "Creating folder : ${folder}" + mkdir -p "${folder}" || errorExit "Failed to create ${folder}" + fi + done +} + + +testReadWritePermissions () { + local dir_to_check=$1 + local error=false + + [ -d ${dir_to_check} ] || errorExit "'${dir_to_check}' is not a directory" + + local test_file=${dir_to_check}/test-permissions + + # Write file + if echo test > ${test_file} 1> /dev/null 2>&1; then + # Write succeeded. Testing read... + if cat ${test_file} > /dev/null; then + rm -f ${test_file} + else + error=true + fi + else + error=true + fi + + if [ ${error} == true ]; then + return 1 + else + return 0 + fi +} + +# Test directory has read/write permissions for current user +testDirectoryPermissions () { + local dir_to_check=$1 + local error=false + + [ -d ${dir_to_check} ] || errorExit "'${dir_to_check}' is not a directory" + + local u_id=$(id -u) + local id_str="id ${u_id}" + + logger "Testing directory ${dir_to_check} has read/write permissions for user ${id_str}" + + if ! 
testReadWritePermissions ${dir_to_check}; then + error=true + fi + + if [ "${error}" == true ]; then + local stat_data=$(stat -Lc "Directory: %n, permissions: %a, owner: %U, group: %G" ${dir_to_check}) + logger "###########################################################" + logger "${dir_to_check} DOES NOT have proper permissions for user ${id_str}" + logger "${stat_data}" + logger "Mounted directory must have read/write permissions for user ${id_str}" + logger "###########################################################" + errorExit "Directory ${dir_to_check} has bad permissions for user ${id_str}" + fi + logger "Permissions for ${dir_to_check} are good" +} + +# Utility method to create a directory path recursively with an optional chown +# Failure conditions: +## Exits if unable to create a directory +# Parameters: +## $1: Root directory from where the path can be created +## $2: List of recursive child directories separated by space +## $3: user who should own the directory. Optional +## $4: group who should own the directory. Optional +# Depends on global: none +# Updates global: none +# Returns: NA +# +# Usage: +# createRecursiveDir "/opt/jfrog/product/var" "bootstrap tomcat lib" "user_name" "group_name" +createRecursiveDir(){ + local rootDir=$1 + local pathDirs=$2 + local user=$3 + local group=${4:-${user}} + local fullPath= + + [ ! -z "${rootDir}" ] || return 0 + + createDir "${rootDir}" "${user}" "${group}" + + [ ! -z "${pathDirs}" ] || return 0 + + fullPath=${rootDir} + + for dir in ${pathDirs}; do + fullPath=${fullPath}/${dir} + createDir "${fullPath}" "${user}" "${group}" + done +} + +# Utility method to create a directory +# Failure conditions: +## Exits if unable to create a directory +# Parameters: +## $1: directory to create +## $2: user who should own the directory. Optional +## $3: group who should own the directory.
Optional +# Depends on global: none +# Updates global: none +# Returns: NA + +createDir(){ + local dirName="$1" + local printMessage=no + logSilly "Method ${FUNCNAME[0]} invoked with [$dirName]" + [ -z "${dirName}" ] && return + + logDebug "Attempting to create ${dirName}" + mkdir -p "${dirName}" || errorExit "Unable to create directory: [${dirName}]" + local userID="$2" + local groupID=${3:-$userID} + + # If UID/GID is passed, chown the folder + if [ ! -z "$userID" ] && [ ! -z "$groupID" ]; then + # Earlier, this line would have returned 1 if it failed. Now it just warns. + # This is intentional. Earlier, this line would NOT be reached if the folder already existed. + # Since it will always come to this line and the script may be running as a non-root user, this method will just warn if + # setting permissions fails (so as to not affect any existing flows) + io_setOwnershipNonRecursive "$dirName" "$userID" "$groupID" || warn "Could not set owner of [$dirName] to [$userID:$groupID]" + fi + # logging message to print created dir with user and group + local logMessage=${4:-$printMessage} + if [[ "${logMessage}" == "yes" ]]; then + logger "Successfully created directory [${dirName}]. 
Owner: [${userID}:${groupID}]" + fi +} + +removeSoftLinkAndCreateDir () { + local dirName="$1" + local userID="$2" + local groupID="$3" + local logMessage="$4" + removeSoftLink "${dirName}" + createDir "${dirName}" "${userID}" "${groupID}" "${logMessage}" +} + +# Utility method to remove a soft link +removeSoftLink () { + local dirName="$1" + if [[ -L "${dirName}" ]]; then + local targetLink=$(readlink -f "${dirName}") + logger "Removing the symlink [${dirName}] pointing to [${targetLink}]" + rm -f "${dirName}" + fi +} + +# Check whether a directory exists at the given path +checkDirExists () { + local directoryPath="$1" + + [[ -d "${directoryPath}" ]] && echo -n "true" || echo -n "false" +} + + +# Utility method to create a file +# Failure conditions: +# Parameters: +## $1: file to create +# Depends on global: none +# Updates global: none +# Returns: NA + +createFile(){ + local fileName="$1" + logSilly "Method ${FUNCNAME[0]} [$fileName]" + [ -f "${fileName}" ] && return 0 + touch "${fileName}" || return 1 + + local userID="$2" + local groupID=${3:-$userID} + + # If UID/GID is passed, chown the file + if [ ! -z "$userID" ] && [ !
-z "$groupID" ]; then + io_setOwnership "$fileName" "$userID" "$groupID" || return 1 + fi +} + +# Check whether a file exists at the given path +# IMPORTANT- DON'T ADD LOGGING to this method +checkFileExists () { + local filePath="$1" + + [[ -f "${filePath}" ]] && echo -n "true" || echo -n "false" +} + +# Check whether a directory contains anything (files or sub directories) +# IMPORTANT- DON'T ADD LOGGING to this method +checkDirContents () { + local directoryPath="$1" + if [[ "$(ls -1 "${directoryPath}" | wc -l)" -gt 0 ]]; then + echo -n "true" + else + echo -n "false" + fi +} + +# Check whether contents exist in the directory +# IMPORTANT- DON'T ADD LOGGING to this method +checkContentExists () { + local source="$1" + + if [[ "$(checkDirContents "${source}")" != "true" ]]; then + echo -n "false" + else + echo -n "true" + fi +} + +# Resolve the variable +# IMPORTANT- DON'T ADD LOGGING to this method +evalVariable () { + local output="$1" + local input="$2" + + eval "${output}"=\${"${input}"} + eval echo \${"${output}"} +} + +# Usage: if [ "$(io_commandExists 'curl')" == "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_commandExists() { + local commandToExecute="$1" + hash "${commandToExecute}" 2>/dev/null + local rt=$?
+ if [ "$rt" == 0 ]; then echo -n "yes"; else echo -n "no"; fi +} + +# Usage: if [ "$(io_curlExists)" != "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_curlExists() { + io_commandExists "curl" +} + + +io_hasMatch() { + logSilly "Method ${FUNCNAME[0]}" + local result=0 + logDebug "Executing [echo \"$1\" | grep \"$2\" >/dev/null 2>&1]" + echo "$1" | grep "$2" >/dev/null 2>&1 || result=1 + return $result +} + +# Utility method to check if the string passed (usually a connection url) corresponds to this machine itself +# Failure conditions: None +# Parameters: +## $1: string to check against +# Depends on global: none +# Updates global: IS_LOCALHOST with value "yes/no" +# Returns: NA + +io_getIsLocalhost() { + logSilly "Method ${FUNCNAME[0]}" + IS_LOCALHOST="$FLAG_N" + local inputString="$1" + logDebug "Parsing [$inputString] to check if we are dealing with this machine itself" + + io_hasMatch "$inputString" "localhost" && { + logDebug "Found localhost. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for localhost" + + local hostIP=$(io_getPublicHostIP) + io_hasMatch "$inputString" "$hostIP" && { + logDebug "Found $hostIP. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostIP" + + local hostID=$(io_getPublicHostID) + io_hasMatch "$inputString" "$hostID" && { + logDebug "Found $hostID. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostID" + + local hostName=$(io_getPublicHostName) + io_hasMatch "$inputString" "$hostName" && { + logDebug "Found $hostName. 
Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostName" + +} + +# Usage: if [ "$(io_tarExists)" != "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_tarExists() { + io_commandExists "tar" +} + +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostIP() { + local OS_TYPE=$(uname) + local publicHostIP= + if [ "${OS_TYPE}" == "Darwin" ]; then + ipStatus=$(ifconfig en0 | grep "status" | awk '{print$2}') + if [ "${ipStatus}" == "active" ]; then + publicHostIP=$(ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}') + else + errorExit "Host IP could not be resolved!" + fi + elif [ "${OS_TYPE}" == "Linux" ]; then + publicHostIP=$(hostname -i 2>/dev/null || echo "127.0.0.1") + fi + publicHostIP=$(echo "${publicHostIP}" | awk '{print $1}') + echo -n "${publicHostIP}" +} + +# Will return the short host name (up to the first dot) +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostName() { + echo -n "$(hostname -s)" +} + +# Will return the full host name (use this as much as possible) +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostID() { + echo -n "$(hostname)" +} + +# Utility method to backup a file +# Failure conditions: NA +# Parameters: filePath +# Depends on global: none, +# Updates global: none +# Returns: NA +io_backupFile() { + logSilly "Method ${FUNCNAME[0]}" + fileName="$1" + if [ ! 
-f "${fileName}" ]; then
+        logDebug "No file: [${fileName}] to backup"
+        return
+    fi
+    dateTime=$(date +"%Y-%m-%d-%H-%M-%S")
+    targetFileName="${fileName}.backup.${dateTime}"
+    yes | \cp -f "$fileName" "${targetFileName}"
+    logger "File [${fileName}] backed up as [${targetFileName}]"
+}
+
+# Reference https://stackoverflow.com/questions/4023830/how-to-compare-two-strings-in-dot-separated-version-format-in-bash/4025065#4025065
+is_number() {
+    case "$BASH_VERSION" in
+        3.1.*)
+            PATTERN='\^\[0-9\]+\$'
+            ;;
+        *)
+            PATTERN='^[0-9]+$'
+            ;;
+    esac
+
+    [[ "$1" =~ $PATTERN ]]
+}
+
+io_compareVersions() {
+    if [[ $# != 2 ]]
+    then
+        echo "Usage: io_compareVersions current minimum"
+        return
+    fi
+
+    A="${1%%.*}"
+    B="${2%%.*}"
+
+    if [[ "$A" != "$1" && "$B" != "$2" && "$A" == "$B" ]]
+    then
+        io_compareVersions "${1#*.}" "${2#*.}"
+    else
+        if is_number "$A" && is_number "$B"
+        then
+            if [[ "$A" -eq "$B" ]]; then
+                echo "0"
+            elif [[ "$A" -gt "$B" ]]; then
+                echo "1"
+            elif [[ "$A" -lt "$B" ]]; then
+                echo "-1"
+            fi
+        fi
+    fi
+}
+
+# Reference https://stackoverflow.com/questions/369758/how-to-trim-whitespace-from-a-bash-variable
+# Strip all leading and trailing spaces
+# IMPORTANT- DON'T ADD LOGGING to this method
+io_trim() {
+    local var="$1"
+    # remove leading whitespace characters
+    var="${var#"${var%%[![:space:]]*}"}"
+    # remove trailing whitespace characters
+    var="${var%"${var##*[![:space:]]}"}"
+    echo -n "$var"
+}
+
+# Temporary function - will be removed ASAP
+# search for string and replace text in file
+replaceText_migration_hook () {
+    local regexString="$1"
+    local replaceText="$2"
+    local file="$3"
+
+    if [[ "$(checkFileExists "${file}")" != "true" ]]; then
+        return
+    fi
+    if [[ $(uname) == "Darwin" ]]; then
+        sed -i '' -e "s/${regexString}/${replaceText}/" "${file}" || warn "Failed to replace the text in ${file}"
+    else
+        sed -i -e "s/${regexString}/${replaceText}/" "${file}" || warn "Failed to replace the text in ${file}"
+    fi
+}
+
+# search for string and replace 
text in file +replaceText () { + local regexString="$1" + local replaceText="$2" + local file="$3" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e "s#${regexString}#${replaceText}#" "${file}" || warn "Failed to replace the text in ${file}" + else + sed -i -e "s#${regexString}#${replaceText}#" "${file}" || warn "Failed to replace the text in ${file}" + logDebug "Replaced [$regexString] with [$replaceText] in [$file]" + fi +} + +# search for string and prepend text in file +prependText () { + local regexString="$1" + local text="$2" + local file="$3" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e '/'"${regexString}"'/i\'$'\n\\'"${text}"''$'\n' "${file}" || warn "Failed to prepend the text in ${file}" + else + sed -i -e '/'"${regexString}"'/i\'$'\n\\'"${text}"''$'\n' "${file}" || warn "Failed to prepend the text in ${file}" + fi +} + +# add text to beginning of the file +addText () { + local text="$1" + local file="$2" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e '1s/^/'"${text}"'\'$'\n/' "${file}" || warn "Failed to add the text in ${file}" + else + sed -i -e '1s/^/'"${text}"'\'$'\n/' "${file}" || warn "Failed to add the text in ${file}" + fi +} + +io_replaceString () { + local value="$1" + local firstString="$2" + local secondString="$3" + local separator=${4:-"/"} + local updateValue= + if [[ $(uname) == "Darwin" ]]; then + updateValue=$(echo "${value}" | sed "s${separator}${firstString}${separator}${secondString}${separator}") + else + updateValue=$(echo "${value}" | sed "s${separator}${firstString}${separator}${secondString}${separator}") + fi + echo -n "${updateValue}" +} + +_findYQ() { + # logSilly "Method ${FUNCNAME[0]}" (Intentionally not logging. 
Does not add value) + local parentDir="$1" + if [ -z "$parentDir" ]; then + return + fi + logDebug "Executing command [find "${parentDir}" -name third-party -type d]" + local yq=$(find "${parentDir}" -name third-party -type d) + if [ -d "${yq}/yq" ]; then + export YQ_PATH="${yq}/yq" + fi +} + + +io_setYQPath() { + # logSilly "Method ${FUNCNAME[0]}" (Intentionally not logging. Does not add value) + if [ "$(io_commandExists 'yq')" == "yes" ]; then + return + fi + + if [ ! -z "${JF_PRODUCT_HOME}" ] && [ -d "${JF_PRODUCT_HOME}" ]; then + _findYQ "${JF_PRODUCT_HOME}" + fi + + if [ -z "${YQ_PATH}" ] && [ ! -z "${COMPOSE_HOME}" ] && [ -d "${COMPOSE_HOME}" ]; then + _findYQ "${COMPOSE_HOME}" + fi + # TODO We can remove this block after all the code is restructured. + if [ -z "${YQ_PATH}" ] && [ ! -z "${SCRIPT_HOME}" ] && [ -d "${SCRIPT_HOME}" ]; then + _findYQ "${SCRIPT_HOME}" + fi + +} + +io_getLinuxDistribution() { + LINUX_DISTRIBUTION= + + # Make sure running on Linux + [ $(uname -s) != "Linux" ] && return + + # Find out what Linux distribution we are on + + cat /etc/*-release | grep -i Red >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat || true + + # OS 6.x + cat /etc/issue.net | grep Red >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat || true + + # OS 7.x + cat /etc/*-release | grep -i centos >/dev/null 2>&1 && LINUX_DISTRIBUTION=CentOS && LINUX_DISTRIBUTION_VER="7" || true + + # OS 8.x + grep -q -i "release 8" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="8" || true + + # OS 7.x + grep -q -i "release 7" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="7" || true + + # OS 6.x + grep -q -i "release 6" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="6" || true + + cat /etc/*-release | grep -i Red | grep -i 'VERSION=7' >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat && LINUX_DISTRIBUTION_VER="7" || true + + cat /etc/*-release | grep -i debian >/dev/null 2>&1 && LINUX_DISTRIBUTION=Debian || true + + cat /etc/*-release | grep -i ubuntu 
>/dev/null 2>&1 && LINUX_DISTRIBUTION=Ubuntu || true +} + +## Utility method to check ownership of folders/files +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If file is not owned by the user & group +## Parameters: + ## user + ## group + ## folder to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac +io_checkOwner () { + logSilly "Method ${FUNCNAME[0]}" + local osType=$(uname) + + if [ "${osType}" != "Linux" ]; then + logDebug "Unsupported OS. Skipping check" + return 0 + fi + + local file_to_check=$1 + local user_id_to_check=$2 + + + if [ -z "$user_id_to_check" ] || [ -z "$file_to_check" ]; then + errorExit "Invalid invocation of method. Missing mandatory inputs" + fi + + local group_id_to_check=${3:-$user_id_to_check} + local check_user_name=${4:-"no"} + + logDebug "Checking permissions on [$file_to_check] for user [$user_id_to_check] & group [$group_id_to_check]" + + local stat= + + if [ "${check_user_name}" == "yes" ]; then + stat=( $(stat -Lc "%U %G" ${file_to_check}) ) + else + stat=( $(stat -Lc "%u %g" ${file_to_check}) ) + fi + + local user_id=${stat[0]} + local group_id=${stat[1]} + + if [[ "${user_id}" != "${user_id_to_check}" ]] || [[ "${group_id}" != "${group_id_to_check}" ]] ; then + logDebug "Ownership mismatch. 
[${file_to_check}] is not owned by [${user_id_to_check}:${group_id_to_check}]" + return 1 + else + return 0 + fi +} + +## Utility method to change ownership of a file/folder - NON recursive +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If chown operation fails - returns 1 +## Parameters: + ## user + ## group + ## file to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac + +io_setOwnershipNonRecursive() { + + local osType=$(uname) + if [ "${osType}" != "Linux" ]; then + return + fi + + local targetFile=$1 + local user=$2 + + if [ -z "$user" ] || [ -z "$targetFile" ]; then + errorExit "Invalid invocation of method. Missing mandatory inputs" + fi + + local group=${3:-$user} + logDebug "Method ${FUNCNAME[0]}. Executing [chown ${user}:${group} ${targetFile}]" + chown ${user}:${group} ${targetFile} || return 1 +} + +## Utility method to change ownership of a file. +## IMPORTANT +## If being called on a folder, should ONLY be called for fresh folders or may cause performance issues +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If chown operation fails - returns 1 +## Parameters: + ## user + ## group + ## file to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac + +io_setOwnership() { + + local osType=$(uname) + if [ "${osType}" != "Linux" ]; then + return + fi + + local targetFile=$1 + local user=$2 + + if [ -z "$user" ] || [ -z "$targetFile" ]; then + errorExit "Invalid invocation of method. Missing mandatory inputs" + fi + + local group=${3:-$user} + logDebug "Method ${FUNCNAME[0]}. 
Executing [chown -R ${user}:${group} ${targetFile}]" + chown -R ${user}:${group} ${targetFile} || return 1 +} + +## Utility method to create third party folder structure necessary for Postgres +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## POSTGRESQL_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createPostgresDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${POSTGRESQL_DATA_ROOT}" ] && return 0 + + logDebug "Property [${POSTGRESQL_DATA_ROOT}] exists. Proceeding" + + createDir "${POSTGRESQL_DATA_ROOT}/data" + io_setOwnership "${POSTGRESQL_DATA_ROOT}" "${POSTGRES_USER}" "${POSTGRES_USER}" || errorExit "Setting ownership of [${POSTGRESQL_DATA_ROOT}] to [${POSTGRES_USER}:${POSTGRES_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Nginx +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## NGINX_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createNginxDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${NGINX_DATA_ROOT}" ] && return 0 + + logDebug "Property [${NGINX_DATA_ROOT}] exists. Proceeding" + + createDir "${NGINX_DATA_ROOT}" + io_setOwnership "${NGINX_DATA_ROOT}" "${NGINX_USER}" "${NGINX_GROUP}" || errorExit "Setting ownership of [${NGINX_DATA_ROOT}] to [${NGINX_USER}:${NGINX_GROUP}] failed" +} + +## Utility method to create third party folder structure necessary for ElasticSearch +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## ELASTIC_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createElasticSearchDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${ELASTIC_DATA_ROOT}" ] && return 0 + + logDebug "Property [${ELASTIC_DATA_ROOT}] exists. 
Proceeding" + + createDir "${ELASTIC_DATA_ROOT}/data" + io_setOwnership "${ELASTIC_DATA_ROOT}" "${ES_USER}" "${ES_USER}" || errorExit "Setting ownership of [${ELASTIC_DATA_ROOT}] to [${ES_USER}:${ES_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Redis +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## REDIS_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createRedisDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${REDIS_DATA_ROOT}" ] && return 0 + + logDebug "Property [${REDIS_DATA_ROOT}] exists. Proceeding" + + createDir "${REDIS_DATA_ROOT}" + io_setOwnership "${REDIS_DATA_ROOT}" "${REDIS_USER}" "${REDIS_USER}" || errorExit "Setting ownership of [${REDIS_DATA_ROOT}] to [${REDIS_USER}:${REDIS_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Mongo +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## MONGODB_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createMongoDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${MONGODB_DATA_ROOT}" ] && return 0 + + logDebug "Property [${MONGODB_DATA_ROOT}] exists. 
Proceeding" + + createDir "${MONGODB_DATA_ROOT}/logs" + createDir "${MONGODB_DATA_ROOT}/configdb" + createDir "${MONGODB_DATA_ROOT}/db" + io_setOwnership "${MONGODB_DATA_ROOT}" "${MONGO_USER}" "${MONGO_USER}" || errorExit "Setting ownership of [${MONGODB_DATA_ROOT}] to [${MONGO_USER}:${MONGO_USER}] failed" +} + +## Utility method to create third party folder structure necessary for RabbitMQ +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## RABBITMQ_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createRabbitMQDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${RABBITMQ_DATA_ROOT}" ] && return 0 + + logDebug "Property [${RABBITMQ_DATA_ROOT}] exists. Proceeding" + + createDir "${RABBITMQ_DATA_ROOT}" + io_setOwnership "${RABBITMQ_DATA_ROOT}" "${RABBITMQ_USER}" "${RABBITMQ_USER}" || errorExit "Setting ownership of [${RABBITMQ_DATA_ROOT}] to [${RABBITMQ_USER}:${RABBITMQ_USER}] failed" +} + +# Add or replace a property in provided properties file +addOrReplaceProperty() { + local propertyName=$1 + local propertyValue=$2 + local propertiesPath=$3 + local delimiter=${4:-"="} + + # Return if any of the inputs are empty + [[ -z "$propertyName" || "$propertyName" == "" ]] && return + [[ -z "$propertyValue" || "$propertyValue" == "" ]] && return + [[ -z "$propertiesPath" || "$propertiesPath" == "" ]] && return + + grep "^${propertyName}\s*${delimiter}.*$" ${propertiesPath} > /dev/null 2>&1 + [ $? 
-ne 0 ] && echo -e "\n${propertyName}${delimiter}${propertyValue}" >> ${propertiesPath} + sed -i -e "s|^${propertyName}\s*${delimiter}.*$|${propertyName}${delimiter}${propertyValue}|g;" ${propertiesPath} +} + +# Set property only if its not set +io_setPropertyNoOverride(){ + local propertyName=$1 + local propertyValue=$2 + local propertiesPath=$3 + + # Return if any of the inputs are empty + [[ -z "$propertyName" || "$propertyName" == "" ]] && return + [[ -z "$propertyValue" || "$propertyValue" == "" ]] && return + [[ -z "$propertiesPath" || "$propertiesPath" == "" ]] && return + + grep "^${propertyName}:" ${propertiesPath} > /dev/null 2>&1 + if [ $? -ne 0 ]; then + echo -e "${propertyName}: ${propertyValue}" >> ${propertiesPath} || warn "Setting property ${propertyName}: ${propertyValue} in [ ${propertiesPath} ] failed" + else + logger "Skipping update of property : ${propertyName}" >&6 + fi +} + +# Add a line to a file if it doesn't already exist +addLine() { + local line_to_add=$1 + local target_file=$2 + logger "Trying to add line $1 to $2" >&6 2>&1 + cat "$target_file" | grep -F "$line_to_add" -wq >&6 2>&1 + if [ $? != 0 ]; then + logger "Line does not exist and will be added" >&6 2>&1 + echo $line_to_add >> $target_file || errorExit "Could not update $target_file" + fi +} + +# Utility method to check if a value (first parameter) exists in an array (2nd parameter) +# 1st parameter "value to find" +# 2nd parameter "The array to search in. 
Please pass a string with each value separated by space" +# Example: containsElement "y" "y Y n N" +containsElement () { + local searchElement=$1 + local searchArray=($2) + local found=1 + for elementInIndex in "${searchArray[@]}";do + if [[ $elementInIndex == $searchElement ]]; then + found=0 + fi + done + return $found +} + +# Utility method to get user's choice +# 1st parameter "what to ask the user" +# 2nd parameter "what choices to accept, separated by spaces" +# 3rd parameter "what is the default choice (to use if the user simply presses Enter)" +# Example 'getUserChoice "Are you feeling lucky? Punk!" "y n Y N" "y"' +getUserChoice(){ + configureLogOutput + read_timeout=${read_timeout:-0.5} + local choice="na" + local text_to_display=$1 + local choices=$2 + local default_choice=$3 + users_choice= + + until containsElement "$choice" "$choices"; do + echo "";echo ""; + sleep $read_timeout #This ensures correct placement of the question. + read -p "$text_to_display :" choice + : ${choice:=$default_choice} + done + users_choice=$choice + echo -e "\n$text_to_display: $users_choice" >&6 + sleep $read_timeout #This ensures correct logging +} + +setFilePermission () { + local permission=$1 + local file=$2 + chmod "${permission}" "${file}" || warn "Setting permission ${permission} to file [ ${file} ] failed" +} + + +#setting required paths +setAppDir (){ + SCRIPT_DIR=$(dirname $0) + SCRIPT_HOME="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + APP_DIR="`cd "${SCRIPT_HOME}";pwd`" +} + +ZIP_TYPE="zip" +COMPOSE_TYPE="compose" +HELM_TYPE="helm" +RPM_TYPE="rpm" +DEB_TYPE="debian" + +sourceScript () { + local file="$1" + + [ ! -z "${file}" ] || errorExit "target file is not passed to source a file" + + if [ ! 
-f "${file}" ]; then + errorExit "${file} file is not found" + else + source "${file}" || errorExit "Unable to source ${file}, please check if the user ${USER} has permissions to perform this action" + fi +} +# Source required helpers +initHelpers () { + local systemYamlHelper="${APP_DIR}/systemYamlHelper.sh" + local thirdPartyDir=$(find ${APP_DIR}/.. -name third-party -type d) + export YQ_PATH="${thirdPartyDir}/yq" + LIBXML2_PATH="${thirdPartyDir}/libxml2/bin/xmllint" + export LD_LIBRARY_PATH="${thirdPartyDir}/libxml2/lib" + sourceScript "${systemYamlHelper}" +} +# Check migration info yaml file available in the path +checkMigrationInfoYaml () { + + if [[ -f "${APP_DIR}/migrationHelmInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationHelmInfo.yaml" + INSTALLER="${HELM_TYPE}" + elif [[ -f "${APP_DIR}/migrationZipInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationZipInfo.yaml" + INSTALLER="${ZIP_TYPE}" + elif [[ -f "${APP_DIR}/migrationRpmInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationRpmInfo.yaml" + INSTALLER="${RPM_TYPE}" + elif [[ -f "${APP_DIR}/migrationDebInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationDebInfo.yaml" + INSTALLER="${DEB_TYPE}" + elif [[ -f "${APP_DIR}/migrationComposeInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationComposeInfo.yaml" + INSTALLER="${COMPOSE_TYPE}" + else + errorExit "File migration Info yaml does not exist in [${APP_DIR}]" + fi +} + +retrieveYamlValue () { + local yamlPath="$1" + local value="$2" + local output="$3" + local message="$4" + + [[ -z "${yamlPath}" ]] && errorExit "yamlPath is mandatory to get value from ${MIGRATION_SYSTEM_YAML_INFO}" + + getYamlValue "${yamlPath}" "${MIGRATION_SYSTEM_YAML_INFO}" "false" + value="${YAML_VALUE}" + if [[ -z "${value}" ]]; then + if [[ "${output}" == "Warning" ]]; then + warn "Empty value for ${yamlPath} in [${MIGRATION_SYSTEM_YAML_INFO}]" + elif [[ "${output}" == "Skip" ]]; then + return 
+ else + errorExit "${message}" + fi + fi +} + +checkEnv () { + + if [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + # check Environment JF_PRODUCT_HOME is set before migration + NEW_DATA_DIR="$(evalVariable "NEW_DATA_DIR" "JF_PRODUCT_HOME")" + if [[ -z "${NEW_DATA_DIR}" ]]; then + errorExit "Environment variable JF_PRODUCT_HOME is not set, this is required to perform Migration" + fi + # appending var directory to $JF_PRODUCT_HOME + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + elif [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + getCustomDataDir_hook + NEW_DATA_DIR="${OLD_DATA_DIR}" + if [[ -z "${NEW_DATA_DIR}" ]] && [[ -z "${OLD_DATA_DIR}" ]]; then + errorExit "Could not find ${PROMPT_DATA_DIR_LOCATION} to perform Migration" + fi + else + # check Environment JF_ROOT_DATA_DIR is set before migration + OLD_DATA_DIR="$(evalVariable "OLD_DATA_DIR" "JF_ROOT_DATA_DIR")" + # check Environment JF_ROOT_DATA_DIR is set before migration + NEW_DATA_DIR="$(evalVariable "NEW_DATA_DIR" "JF_ROOT_DATA_DIR")" + if [[ -z "${NEW_DATA_DIR}" ]] && [[ -z "${OLD_DATA_DIR}" ]]; then + errorExit "Could not find ${PROMPT_DATA_DIR_LOCATION} to perform Migration" + fi + # appending var directory to $JF_PRODUCT_HOME + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + fi + +} + +getDataDir () { + + if [[ "${INSTALLER}" == "${ZIP_TYPE}" || "${INSTALLER}" == "${COMPOSE_TYPE}"|| "${INSTALLER}" == "${HELM_TYPE}" ]]; then + checkEnv + else + getCustomDataDir_hook + NEW_DATA_DIR="`cd "${APP_DIR}"/../../;pwd`" + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + fi +} + +# Retrieve Product name from MIGRATION_SYSTEM_YAML_INFO +getProduct () { + retrieveYamlValue "migration.product" "${YAML_VALUE}" "Fail" "Empty value under ${yamlPath} in [${MIGRATION_SYSTEM_YAML_INFO}]" + PRODUCT="${YAML_VALUE}" + PRODUCT=$(echo "${PRODUCT}" | tr '[:upper:]' '[:lower:]' 2>/dev/null) + if [[ "${PRODUCT}" != "artifactory" && "${PRODUCT}" != "distribution" && "${PRODUCT}" != "xray" ]]; then + errorExit "migration.product in [${MIGRATION_SYSTEM_YAML_INFO}] is 
not correct, please set based on product as ARTIFACTORY, DISTRIBUTION or XRAY"
+    fi
+    if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then
+        JF_USER="${PRODUCT}"
+    fi
+}
+# Compare product version with minProductVersion and maxProductVersion
+migrateCheckVersion () {
+    local productVersion="$1"
+    local minProductVersion="$2"
+    local maxProductVersion="$3"
+    local productVersion618="6.18.0"
+    local unSupportedProductVersions7=("7.2.0" "7.2.1")
+
+    if [[ "$(io_compareVersions "${productVersion}" "${maxProductVersion}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${maxProductVersion}")" -eq 1 ]]; then
+        logger "Migration not necessary. ${PRODUCT} is already ${productVersion}"
+        exit 11
+    elif [[ "$(io_compareVersions "${productVersion}" "${minProductVersion}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${minProductVersion}")" -eq 1 ]]; then
+        if [[ ("$(io_compareVersions "${productVersion}" "${productVersion618}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${productVersion618}")" -eq 1) && " ${unSupportedProductVersions7[@]} " =~ " ${CURRENT_VERSION} " ]]; then
+            touch /tmp/error;
+            errorExit "Current ${PRODUCT} version (${productVersion}) does not support migration to ${CURRENT_VERSION}"
+        else
+            bannerStart "Detected ${PRODUCT} ${productVersion}, initiating migration"
+        fi
+    else
+        logger "Current ${PRODUCT} ${productVersion} version is not supported for migration"
+        exit 1
+    fi
+}
+
+getProductVersion () {
+    local minProductVersion="$1"
+    local maxProductVersion="$2"
+    local newfilePath="$3"
+    local oldfilePath="$4"
+    local propertyInDocker="$5"
+    local property="$6"
+    local productVersion=
+    local status=
+
+    if [[ "$INSTALLER" == "${COMPOSE_TYPE}" ]]; then
+        if [[ -f "${oldfilePath}" ]]; then
+            if [[ "${PRODUCT}" == "artifactory" ]]; then
+                productVersion="$(readKey "${property}" "${oldfilePath}")"
+            else
+                productVersion="$(cat "${oldfilePath}")"
+            fi
+            status="success"
+        elif [[ -f "${newfilePath}" ]]; then
+            productVersion="$(readKey 
"${propertyInDocker}" "${newfilePath}")" + status="fail" + else + logger "File [${oldfilePath}] or [${newfilePath}] not found to get current version." + exit 0 + fi + elif [[ "$INSTALLER" == "${HELM_TYPE}" ]]; then + if [[ -f "${oldfilePath}" ]]; then + if [[ "${PRODUCT}" == "artifactory" ]]; then + productVersion="$(readKey "${property}" "${oldfilePath}")" + else + productVersion="$(cat "${oldfilePath}")" + fi + status="success" + else + productVersion="${CURRENT_VERSION}" + [[ -z "${productVersion}" || "${productVersion}" == "" ]] && logger "${PRODUCT} CURRENT_VERSION is not set" && exit 0 + fi + else + if [[ -f "${newfilePath}" ]]; then + productVersion="$(readKey "${property}" "${newfilePath}")" + status="fail" + elif [[ -f "${oldfilePath}" ]]; then + productVersion="$(readKey "${property}" "${oldfilePath}")" + status="success" + else + if [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + logger "File [${newfilePath}] not found to get current version." + else + logger "File [${oldfilePath}] or [${newfilePath}] not found to get current version." + fi + exit 0 + fi + fi + if [[ -z "${productVersion}" || "${productVersion}" == "" ]]; then + [[ "${status}" == "success" ]] && logger "No version found in file [${oldfilePath}]." + [[ "${status}" == "fail" ]] && logger "No version found in file [${newfilePath}]." + exit 0 + fi + + migrateCheckVersion "${productVersion}" "${minProductVersion}" "${maxProductVersion}" +} + +readKey () { + local property="$1" + local file="$2" + local version= + + while IFS='=' read -r key value || [ -n "${key}" ]; + do + [[ ! "${key}" =~ \#.* && ! -z "${key}" && ! 
-z "${value}" ]] + key="$(io_trim "${key}")" + if [[ "${key}" == "${property}" ]]; then + version="${value}" && check=true && break + else + check=false + fi + done < "${file}" + if [[ "${check}" == "false" ]]; then + return + fi + echo "${version}" +} + +# create Log directory +createLogDir () { + if [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" + fi +} + +# Creating migration log file +creationMigrateLog () { + local LOG_FILE_NAME="migration.log" + createLogDir + local MIGRATION_LOG_FILE="${NEW_DATA_DIR}/log/${LOG_FILE_NAME}" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + MIGRATION_LOG_FILE="${SCRIPT_HOME}/${LOG_FILE_NAME}" + fi + touch "${MIGRATION_LOG_FILE}" + setFilePermission "${LOG_FILE_PERMISSION}" "${MIGRATION_LOG_FILE}" + exec &> >(tee -a "${MIGRATION_LOG_FILE}") +} +# Set path where system.yaml should create +setSystemYamlPath () { + SYSTEM_YAML_PATH="${NEW_DATA_DIR}/etc/system.yaml" + if [[ "${INSTALLER}" != "${HELM_TYPE}" ]]; then + logger "system.yaml will be created in path [${SYSTEM_YAML_PATH}]" + fi +} +# Create directory +createDirectory () { + local directory="$1" + local output="$2" + local check=false + local message="Could not create directory ${directory}, please check if the user ${USER} has permissions to perform this action" + removeSoftLink "${directory}" + mkdir -p "${directory}" && check=true || check=false + if [[ "${check}" == "false" ]]; then + if [[ "${output}" == "Warning" ]]; then + warn "${message}" + else + errorExit "${message}" + fi + fi + setOwnershipBasedOnInstaller "${directory}" +} + +setOwnershipBasedOnInstaller () { + local directory="$1" + if [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + chown -R ${USER_TO_CHECK}:${GROUP_TO_CHECK} "${directory}" || warn "Setting 
ownership on $directory failed" + elif [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + io_setOwnership "${directory}" "${JF_USER}" "${JF_USER}" + fi +} + +getUserAndGroup () { + local file="$1" + read uid gid <<<$(stat -c '%U %G' ${file}) + USER_TO_CHECK="${uid}" + GROUP_TO_CHECK="${gid}" +} + +# set ownership +getUserAndGroupFromFile () { + case $PRODUCT in + artifactory) + getUserAndGroup "/etc/opt/jfrog/artifactory/artifactory.properties" + ;; + distribution) + getUserAndGroup "${OLD_DATA_DIR}/etc/versions.properties" + ;; + xray) + getUserAndGroup "${OLD_DATA_DIR}/security/master.key" + ;; + esac +} + +# creating required directories +createRequiredDirs () { + bannerSubSection "CREATING REQUIRED DIRECTORIES" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc/security" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log/archived" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/work" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/backup" "${JF_USER}" "${JF_USER}" "yes" + io_setOwnership "${NEW_DATA_DIR}" "${JF_USER}" "${JF_USER}" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data/postgres" "${POSTGRES_USER}" "${POSTGRES_USER}" "yes" + fi + elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc/security" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log/archived" 
"${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes"
+        removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/work" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes"
+        removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/backup" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes"
+    fi
+}
+
+# Check if an entry is in key=value map format
+checkMapEntry () {
+    local entry="$1"
+
+    [[ "${entry}" != *"="* ]] && echo -n "false" || echo -n "true"
+}
+# Check if a value is empty and warn
+warnIfEmpty () {
+    local filePath="$1"
+    local yamlPath="$2"
+    local check=
+
+    if [[ -z "${filePath}" ]]; then
+        warn "Empty value in yamlpath [${yamlPath}] in [${MIGRATION_SYSTEM_YAML_INFO}]"
+        check=false
+    else
+        check=true
+    fi
+    echo "${check}"
+}
+
+logCopyStatus () {
+    local status="$1"
+    local logMessage="$2"
+    local warnMessage="$3"
+
+    [[ "${status}" == "success" ]] && logger "${logMessage}"
+    [[ "${status}" == "fail" ]] && warn "${warnMessage}"
+}
+# copy contents from source to destination
+copyCmd () {
+    local source="$1"
+    local target="$2"
+    local mode="$3"
+    local status=
+
+    case $mode in
+        unique)
+            cp -up "${source}"/* "${target}"/ && status="success" || status="fail"
+            logCopyStatus "${status}" "Successfully copied directory contents from [${source}] to [${target}]" "Failed to copy directory contents from [${source}] to [${target}]"
+        ;;
+        specific)
+            cp -pf "${source}" "${target}"/ && status="success" || status="fail"
+            logCopyStatus "${status}" "Successfully copied file [${source}] to [${target}]" "Failed to copy file [${source}] to [${target}]"
+        ;;
+        patternFiles)
+            cp -pf "${source}"* "${target}"/ && status="success" || status="fail"
+            logCopyStatus "${status}" "Successfully copied files matching [${source}*] to [${target}]" "Failed to copy files matching [${source}*] to [${target}]"
+        ;;
+        full)
+            cp -prf "${source}"/* "${target}"/ && status="success" || status="fail"
+            logCopyStatus "${status}" "Successfully copied directory contents from [${source}] to [${target}]" "Failed to copy directory contents from [${source}] to 
[${target}]" + ;; + esac +} +# Check contents exist in source before copying +copyOnContentExist () { + local source="$1" + local target="$2" + local mode="$3" + + if [[ "$(checkContentExists "${source}")" == "true" ]]; then + copyCmd "${source}" "${target}" "${mode}" + else + logger "No contents to copy from [${source}]" + fi +} + +# move source to destination +moveCmd () { + local source="$1" + local target="$2" + local status= + + mv -f "${source}" "${target}" && status="success" || status="fail" + [[ "${status}" == "success" ]] && logger "Successfully moved directory [${source}] to [${target}]" + [[ "${status}" == "fail" ]] && warn "Failed to move directory [${source}] to [${target}]" +} + +# symlink target to source +symlinkCmd () { + local source="$1" + local target="$2" + local symlinkSubDir="$3" + local check=false + + if [[ "${symlinkSubDir}" == "subDir" ]]; then + ln -sf "${source}"/* "${target}" && check=true || check=false + else + ln -sf "${source}" "${target}" && check=true || check=false + fi + + [[ "${check}" == "true" ]] && logger "Successfully symlinked directory [${target}] to old [${source}]" + [[ "${check}" == "false" ]] && warn "Symlink operation failed" +} +# Check contents exist in source before symlinking +symlinkOnExist () { + local source="$1" + local target="$2" + local symlinkSubDir="$3" + + if [[ "$(checkContentExists "${source}")" == "true" ]]; then + if [[ "${symlinkSubDir}" == "subDir" ]]; then + symlinkCmd "${source}" "${target}" "subDir" + else + symlinkCmd "${source}" "${target}" + fi + else + logger "No contents to symlink from [${source}]" + fi +} + +prependDir () { + local absolutePath="$1" + local fullPath="$2" + local sourcePath= + + if [[ "${absolutePath}" = \/* ]]; then + sourcePath="${absolutePath}" + else + sourcePath="${fullPath}" + fi + echo "${sourcePath}" +} + +getFirstEntry (){ + local entry="$1" + + [[ -z "${entry}" ]] && return + echo "${entry}" | awk -F"=" '{print $1}' +} + +getSecondEntry () { + local entry="$1" 
+ + [[ -z "${entry}" ]] && return + echo "${entry}" | awk -F"=" '{print $2}' +} +# To get absolutePath +pathResolver () { + local directoryPath="$1" + local dataDir= + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + retrieveYamlValue "migration.oldDataDir" "oldDataDir" "Warning" + dataDir="${YAML_VALUE}" + cd "${dataDir}" + else + cd "${OLD_DATA_DIR}" + fi + absoluteDir="`cd "${directoryPath}";pwd`" + echo "${absoluteDir}" +} + +checkPathResolver () { + local value="$1" + + if [[ "${value}" == \/* ]]; then + value="${value}" + else + value="$(pathResolver "${value}")" + fi + echo "${value}" +} + +propertyMigrate () { + local entry="$1" + local filePath="$2" + local fileName="$3" + local check=false + + local yamlPath="$(getFirstEntry "${entry}")" + local property="$(getSecondEntry "${entry}")" + if [[ -z "${property}" ]]; then + warn "Property is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + if [[ -z "${yamlPath}" ]]; then + warn "yamlPath is empty for [${property}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + local keyValues=$(cat "${NEW_DATA_DIR}/${filePath}/${fileName}" | grep "^[^#]" | grep "[*=*]") + for i in ${keyValues}; do + key=$(echo "${i}" | awk -F"=" '{print $1}') + value=$(echo "${i}" | cut -f 2- -d '=') + [ -z "${key}" ] && continue + [ -z "${value}" ] && continue + if [[ "${key}" == "${property}" ]]; then + if [[ "${PRODUCT}" == "artifactory" ]]; then + value="$(migrateResolveDerbyPath "${key}" "${value}")" + value="$(migrateResolveHaDirPath "${key}" "${value}")" + if [[ "${INSTALLER}" != "${DOCKER_TYPE}" ]]; then + value="$(updatePostgresUrlString_Hook "${yamlPath}" "${value}")" + fi + fi + if [[ "${key}" == "context.url" ]]; then + local ip=$(echo "${value}" | awk -F/ '{print $3}' | sed 's/:.*//') + setSystemValue "shared.node.ip" "${ip}" "${SYSTEM_YAML_PATH}" + logger "Setting [shared.node.ip] with [${ip}] in system.yaml" + fi + setSystemValue "${yamlPath}" 
"${value}" "${SYSTEM_YAML_PATH}" && logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" && check=true && break || check=false + fi + done + [[ "${check}" == "false" ]] && logger "Property [${property}] not found in file [${fileName}]" +} + +setHaEnabled_hook () { + echo "" +} + +migratePropertiesFiles () { + local fileList= + local filePath= + local fileName= + local map= + + retrieveYamlValue "migration.propertyFiles.files" "fileList" "Skip" + fileList="${YAML_VALUE}" + if [[ -z "${fileList}" ]]; then + return + fi + bannerSection "PROCESSING MIGRATION OF PROPERTY FILES" + for file in ${fileList}; + do + bannerSubSection "Processing Migration of $file" + retrieveYamlValue "migration.propertyFiles.$file.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + retrieveYamlValue "migration.propertyFiles.$file.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + [[ -z "${filePath}" && -z "${fileName}" ]] && continue + if [[ "$(checkFileExists "${NEW_DATA_DIR}/${filePath}/${fileName}")" == "true" ]]; then + logger "File [${fileName}] found in path [${NEW_DATA_DIR}/${filePath}]" + # setting haEnabled with true only if ha-node.properties is present + setHaEnabled_hook "${filePath}" + retrieveYamlValue "migration.propertyFiles.$file.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && continue + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + propertyMigrate "${entry}" "${filePath}" "${fileName}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e yamlPath=property" + fi + done + else + logger "File [${fileName}] was not found in path [${NEW_DATA_DIR}/${filePath}] to migrate" + fi + done +} + +createTargetDir () { + local mountDir="$1" + local target="$2" + + logger "Target directory not found [${mountDir}/${target}], creating it" + createDirectoryRecursive "${mountDir}" "${target}" "Warning" +} + 
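+# The migration info yaml maps consumed above (migration.*.map) all use the same
+# "first=second" entry convention (target=source, yamlPath=property, etc.). The
+# following is a standalone illustration of how those entries are validated and
+# split; it is a sketch for reference, not part of the migration flow itself:

```shell
#!/bin/bash
# Standalone sketch (assumed helpers mirror the script's own checkMapEntry/
# getFirstEntry/getSecondEntry) showing how a "target=source" map entry is
# validated and split on "=".

checkMapEntry () {
    local entry="$1"
    # an entry is valid only if it contains at least one "=" separator
    [[ "${entry}" != *"="* ]] && echo -n "false" || echo -n "true"
}

getFirstEntry () {
    local entry="$1"
    [[ -z "${entry}" ]] && return
    echo "${entry}" | awk -F"=" '{print $1}'
}

getSecondEntry () {
    local entry="$1"
    [[ -z "${entry}" ]] && return
    # note: awk prints only the second field, so "a=b=c" yields "b" and drops "c"
    echo "${entry}" | awk -F"=" '{print $2}'
}

entry="shared.database.url=db.url"
echo "valid:  $(checkMapEntry "${entry}")"     # prints: valid:  true
echo "target: $(getFirstEntry "${entry}")"     # prints: target: shared.database.url
echo "source: $(getSecondEntry "${entry}")"    # prints: source: db.url
echo "bad:    $(checkMapEntry "no-separator")" # prints: bad:    false
```

+# Because getSecondEntry keeps only the second "="-delimited field, values that
+# themselves contain "=" are truncated; this is why propertyMigrate reads
+# property values with `cut -f 2- -d '='` instead.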
+createDirectoryRecursive () {
+    local mountDir="$1"
+    local target="$2"
+    local output="$3"
+    local check=false
+    local message="Could not create directory ${mountDir}/${target}, please check if the user ${USER} has permissions to perform this action"
+    removeSoftLink "${mountDir}/${target}"
+    local directory=$(echo "${target}" | tr '/' ' ' )
+    local targetDir="${mountDir}"
+    for dir in ${directory};
+    do
+        targetDir="${targetDir}/${dir}"
+        mkdir -p "${targetDir}" && check=true || check=false
+        setOwnershipBasedOnInstaller "${targetDir}"
+    done
+    if [[ "${check}" == "false" ]]; then
+        if [[ "${output}" == "Warning" ]]; then
+            warn "${message}"
+        else
+            errorExit "${message}"
+        fi
+    fi
+}
+
+copyOperation () {
+    local source="$1"
+    local target="$2"
+    local mode="$3"
+    local check=false
+    local targetDataDir=
+    local targetLink=
+    local date=
+
+    # prepend OLD_DATA_DIR only if source is relative path
+    source="$(prependDir "${source}" "${OLD_DATA_DIR}/${source}")"
+    if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then
+        targetDataDir="${NEW_DATA_DIR}"
+    else
+        targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`"
+    fi
+    copyLogMessage "${mode}"
+    # remove source if it is a symlink
+    if [[ -L "${source}" ]]; then
+        targetLink=$(readlink -f "${source}")
+        logger "Removing the symlink [${source}] pointing to [${targetLink}]"
+        rm -f "${source}"
+        source=${targetLink}
+    fi
+    if [[ "$(checkDirExists "${source}")" != "true" ]]; then
+        logger "Source [${source}] directory not found in path"
+        return
+    fi
+    if [[ "$(checkDirContents "${source}")" != "true" ]]; then
+        logger "No contents to copy from [${source}]"
+        return
+    fi
+    if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then
+        createTargetDir "${targetDataDir}" "${target}"
+    fi
+    copyOnContentExist "${source}" "${targetDataDir}/${target}" "${mode}"
+}
+
+copySpecificFiles () {
+    local source="$1"
+    local target="$2"
+    local mode="$3"
+
+    # prepend OLD_DATA_DIR only if source is relative path
+    source="$(prependDir
"${source}" "${OLD_DATA_DIR}/${source}")" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + copyLogMessage "${mode}" + if [[ "$(checkFileExists "${source}")" != "true" ]]; then + logger "Source file [${source}] does not exist in path" + return + fi + if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then + createTargetDir "${targetDataDir}" "${target}" + fi + copyCmd "${source}" "${targetDataDir}/${target}" "${mode}" +} + +copyPatternMatchingFiles () { + local source="$1" + local target="$2" + local mode="$3" + local sourcePath="${4}" + + # prepend OLD_DATA_DIR only if source is relative path + sourcePath="$(prependDir "${sourcePath}" "${OLD_DATA_DIR}/${sourcePath}")" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + copyLogMessage "${mode}" + if [[ "$(checkDirExists "${sourcePath}")" != "true" ]]; then + logger "Source [${sourcePath}] directory not found in path" + return + fi + if ls "${sourcePath}/${source}"* 1> /dev/null 2>&1; then + if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then + createTargetDir "${targetDataDir}" "${target}" + fi + copyCmd "${sourcePath}/${source}" "${targetDataDir}/${target}" "${mode}" + else + logger "Source file [${sourcePath}/${source}*] does not exist in path" + fi +} + +copyLogMessage () { + local mode="$1" + case $mode in + specific) + logger "Copy file [${source}] to target [${targetDataDir}/${target}]" + ;; + patternFiles) + logger "Copy files matching [${sourcePath}/${source}*] to target [${targetDataDir}/${target}]" + ;; + full) + logger "Copy directory contents from source [${source}] to target [${targetDataDir}/${target}]" + ;; + unique) + logger "Copy directory contents from source [${source}] to target [${targetDataDir}/${target}]" + ;; + esac +} + +copyBannerMessages () { + local mode="$1" 
+ local textMode="$2" + case $mode in + specific) + bannerSection "COPY ${textMode} FILES" + ;; + patternFiles) + bannerSection "COPY MATCHING ${textMode}" + ;; + full) + bannerSection "COPY ${textMode} DIRECTORIES CONTENTS" + ;; + unique) + bannerSection "COPY ${textMode} DIRECTORIES CONTENTS" + ;; + esac +} + +invokeCopyFunctions () { + local mode="$1" + local source="$2" + local target="$3" + + case $mode in + specific) + copySpecificFiles "${source}" "${target}" "${mode}" + ;; + patternFiles) + retrieveYamlValue "migration.${copyFormat}.sourcePath" "map" "Warning" + local sourcePath="${YAML_VALUE}" + copyPatternMatchingFiles "${source}" "${target}" "${mode}" "${sourcePath}" + ;; + full) + copyOperation "${source}" "${target}" "${mode}" + ;; + unique) + copyOperation "${source}" "${target}" "${mode}" + ;; + esac +} +# Copies contents from source directory and target directory +copyDataDirectories () { + local copyFormat="$1" + local mode="$2" + local map= + local source= + local target= + local textMode= + local targetDataDir= + local copyFormatValue= + + retrieveYamlValue "migration.${copyFormat}" "${copyFormat}" "Skip" + copyFormatValue="${YAML_VALUE}" + if [[ -z "${copyFormatValue}" ]]; then + return + fi + textMode=$(echo "${mode}" | tr '[:lower:]' '[:upper:]' 2>/dev/null) + copyBannerMessages "${mode}" "${textMode}" + retrieveYamlValue "migration.${copyFormat}.map" "map" "Warning" + map="${YAML_VALUE}" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + source="$(getSecondEntry "${entry}")" + target="$(getFirstEntry "${entry}")" + [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + invokeCopyFunctions 
"${mode}" "${source}" "${target}"
+        else
+            warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e. target=source"
+        fi
+        echo "";
+    done
+}
+
+invokeMoveFunctions () {
+    local source="$1"
+    local target="$2"
+    local sourceDataDir=
+    local targetParentDir=
+    # prepend OLD_DATA_DIR only if source is relative path
+    sourceDataDir=$(prependDir "${source}" "${OLD_DATA_DIR}/${source}")
+    targetParentDir=$(dirname "${target}")
+    logger "Moving directory source [${sourceDataDir}] to target [${NEW_DATA_DIR}/${target}]"
+    if [[ "$(checkDirExists "${sourceDataDir}")" != "true" ]]; then
+        logger "Directory [${sourceDataDir}] not found in path to move"
+        return
+    fi
+    if [[ "$(checkDirExists "${NEW_DATA_DIR}/${targetParentDir}")" != "true" ]]; then
+        createTargetDir "${NEW_DATA_DIR}" "${targetParentDir}"
+        moveCmd "${sourceDataDir}" "${NEW_DATA_DIR}/${target}"
+    else
+        moveCmd "${sourceDataDir}" "${NEW_DATA_DIR}/tempDir"
+        moveCmd "${NEW_DATA_DIR}/tempDir" "${NEW_DATA_DIR}/${target}"
+    fi
+}
+
+# Move source directories to their target directories
+moveDirectories () {
+    local moveDirectories=
+    local map=
+    local source=
+    local target=
+
+    retrieveYamlValue "migration.moveDirectories" "moveDirectories" "Skip"
+    moveDirectories="${YAML_VALUE}"
+    if [[ -z "${moveDirectories}" ]]; then
+        return
+    fi
+    bannerSection "MOVE DIRECTORIES"
+    retrieveYamlValue "migration.moveDirectories.map" "map" "Warning"
+    map="${YAML_VALUE}"
+    for entry in $map;
+    do
+        if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then
+            source="$(getSecondEntry "${entry}")"
+            target="$(getFirstEntry "${entry}")"
+            [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue
+            [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue
+            invokeMoveFunctions "${source}" "${target}"
+        else
+            warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in
correct format, correct format i.e target=source" + fi + echo ""; + done +} + +# Trim masterKey if its generated using hex 32 +trimMasterKey () { + local masterKeyDir=/opt/jfrog/artifactory/var/etc/security + local oldMasterKey=$(<${masterKeyDir}/master.key) + local oldMasterKey_Length=$(echo ${#oldMasterKey}) + local newMasterKey= + if [[ ${oldMasterKey_Length} -gt 32 ]]; then + bannerSection "TRIM MASTERKEY" + newMasterKey=$(echo ${oldMasterKey:0:32}) + cp ${masterKeyDir}/master.key ${masterKeyDir}/backup_master.key + logger "Original masterKey is backed up : ${masterKeyDir}/backup_master.key" + rm -rf ${masterKeyDir}/master.key + echo ${newMasterKey} > ${masterKeyDir}/master.key + logger "masterKey is trimmed : ${masterKeyDir}/master.key" + fi +} + +copyDirectories () { + + copyDataDirectories "copyFiles" "full" + copyDataDirectories "copyUniqueFiles" "unique" + copyDataDirectories "copySpecificFiles" "specific" + copyDataDirectories "copyPatternMatchingFiles" "patternFiles" +} + +symlinkDir () { + local source="$1" + local target="$2" + local targetDir= + local basename= + local targetParentDir= + + targetDir="$(dirname "${target}")" + if [[ "${targetDir}" == "${source}" ]]; then + # symlink the sub directories + createDirectory "${NEW_DATA_DIR}/${target}" "Warning" + if [[ "$(checkDirExists "${NEW_DATA_DIR}/${target}")" == "true" ]]; then + symlinkOnExist "${OLD_DATA_DIR}/${source}" "${NEW_DATA_DIR}/${target}" "subDir" + basename="$(basename "${target}")" + cd "${NEW_DATA_DIR}/${target}" && rm -f "${basename}" + fi + else + targetParentDir="$(dirname "${NEW_DATA_DIR}/${target}")" + createDirectory "${targetParentDir}" "Warning" + if [[ "$(checkDirExists "${targetParentDir}")" == "true" ]]; then + symlinkOnExist "${OLD_DATA_DIR}/${source}" "${NEW_DATA_DIR}/${target}" + fi + fi +} + +symlinkOperation () { + local source="$1" + local target="$2" + local check=false + local targetLink= + local date= + + # Check if source is a link and do symlink + if [[ -L 
"${OLD_DATA_DIR}/${source}" ]]; then + targetLink=$(readlink -f "${OLD_DATA_DIR}/${source}") + symlinkOnExist "${targetLink}" "${NEW_DATA_DIR}/${target}" + else + # check if source is directory and do symlink + if [[ "$(checkDirExists "${OLD_DATA_DIR}/${source}")" != "true" ]]; then + logger "Source [${source}] directory not found in path to symlink" + return + fi + if [[ "$(checkDirContents "${OLD_DATA_DIR}/${source}")" != "true" ]]; then + logger "No contents found in [${OLD_DATA_DIR}/${source}] to symlink" + return + fi + if [[ "$(checkDirExists "${NEW_DATA_DIR}/${target}")" != "true" ]]; then + logger "Target directory [${NEW_DATA_DIR}/${target}] does not exist to create symlink, creating it" + symlinkDir "${source}" "${target}" + else + rm -rf "${NEW_DATA_DIR}/${target}" && check=true || check=false + [[ "${check}" == "false" ]] && warn "Failed to remove contents in [${NEW_DATA_DIR}/${target}/]" + symlinkDir "${source}" "${target}" + fi + fi +} +# Creates a symlink path - Source directory to which the symbolic link should point. 
+symlinkDirectories () { + local linkFiles= + local map= + local source= + local target= + + retrieveYamlValue "migration.linkFiles" "linkFiles" "Skip" + linkFiles="${YAML_VALUE}" + if [[ -z "${linkFiles}" ]]; then + return + fi + bannerSection "SYMLINK DIRECTORIES" + retrieveYamlValue "migration.linkFiles.map" "map" "Warning" + map="${YAML_VALUE}" + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + source="$(getSecondEntry "${entry}")" + target="$(getFirstEntry "${entry}")" + logger "Symlink directory [${NEW_DATA_DIR}/${target}] to old [${OLD_DATA_DIR}/${source}]" + [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + symlinkOperation "${source}" "${target}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e target=source" + fi + echo ""; + done +} + +updateConnectionString () { + local yamlPath="$1" + local value="$2" + local mongoPath="shared.mongo.url" + local rabbitmqPath="shared.rabbitMq.url" + local postgresPath="shared.database.url" + local redisPath="shared.redis.connectionString" + local mongoConnectionString="mongo.connectionString" + local sourceKey= + local hostIp=$(io_getPublicHostIP) + local hostKey= + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + # Replace @postgres:,@mongodb:,@rabbitmq:,@redis: to @{hostIp}: (Compose Installer) + hostKey="@${hostIp}:" + case $yamlPath in + ${postgresPath}) + sourceKey="@postgres:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${mongoPath}) + sourceKey="@mongodb:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${rabbitmqPath}) + sourceKey="@rabbitmq:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${redisPath}) + 
sourceKey="@redis:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${mongoConnectionString}) + sourceKey="@mongodb:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + esac + fi + echo -n "${value}" +} + +yamlMigrate () { + local entry="$1" + local sourceFile="$2" + local value= + local yamlPath= + local key= + yamlPath="$(getFirstEntry "${entry}")" + key="$(getSecondEntry "${entry}")" + if [[ -z "${key}" ]]; then + warn "key is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + if [[ -z "${yamlPath}" ]]; then + warn "yamlPath is empty for [${key}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + getYamlValue "${key}" "${sourceFile}" "false" + value="${YAML_VALUE}" + if [[ ! -z "${value}" ]]; then + value=$(updateConnectionString "${yamlPath}" "${value}") + fi + if [[ -z "${value}" ]]; then + logger "No value for [${key}] in [${sourceFile}]" + else + setSystemValue "${yamlPath}" "${value}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the key [${key}] in system.yaml" + fi +} + +migrateYamlFile () { + local files= + local filePath= + local fileName= + local sourceFile= + local map= + retrieveYamlValue "migration.yaml.files" "files" "Skip" + files="${YAML_VALUE}" + if [[ -z "${files}" ]]; then + return + fi + bannerSection "MIGRATION OF YAML FILES" + for file in $files; + do + bannerSubSection "Processing Migration of $file" + retrieveYamlValue "migration.yaml.$file.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + retrieveYamlValue "migration.yaml.$file.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + [[ -z "${filePath}" && -z "${fileName}" ]] && continue + sourceFile="${NEW_DATA_DIR}/${filePath}/${fileName}" + if [[ "$(checkFileExists "${sourceFile}")" == "true" ]]; then + logger "File [${fileName}] found in path [${NEW_DATA_DIR}/${filePath}]" + retrieveYamlValue "migration.yaml.$file.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z 
"${map}" ]] && continue
+            for entry in $map;
+            do
+                if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then
+                    yamlMigrate "${entry}" "${sourceFile}"
+                else
+                    warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e. yamlPath=key"
+                fi
+            done
+        else
+            logger "File [${fileName}] was not found in path [${NEW_DATA_DIR}/${filePath}] to migrate"
+        fi
+    done
+}
+# updates the key and value in system.yaml
+updateYamlKeyValue () {
+    local entry="$1"
+    local value=
+    local yamlPath=
+
+    yamlPath="$(getFirstEntry "${entry}")"
+    value="$(getSecondEntry "${entry}")"
+    if [[ -z "${value}" ]]; then
+        warn "value is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]"
+        return
+    fi
+    if [[ -z "${yamlPath}" ]]; then
+        warn "yamlPath is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]"
+        return
+    fi
+    setSystemValue "${yamlPath}" "${value}" "${SYSTEM_YAML_PATH}"
+    logger "Setting [${yamlPath}] with value [${value}] in system.yaml"
+}
+
+updateSystemYamlFile () {
+    local updateSystemYaml=
+    local map=
+
+    retrieveYamlValue "migration.updateSystemYaml" "updateSystemYaml" "Skip"
+    updateSystemYaml="${YAML_VALUE}"
+    if [[ -z "${updateSystemYaml}" ]]; then
+        return
+    fi
+    bannerSection "UPDATE SYSTEM YAML FILE WITH KEY AND VALUES"
+    retrieveYamlValue "migration.updateSystemYaml.map" "map" "Warning"
+    map="${YAML_VALUE}"
+    if [[ -z "${map}" ]]; then
+        return
+    fi
+    for entry in $map;
+    do
+        if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then
+            updateYamlKeyValue "${entry}"
+        else
+            warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e. yamlPath=value"
+        fi
+    done
+}
+
+backupFiles_hook () {
+    logSilly "Method ${FUNCNAME[0]}"
+}
+
+backupDirectory () {
+    local backupDir="$1"
+    local dir="$2"
+    local targetDir="$3"
+    local effectiveUser=
+    local effectiveGroup=
+
+    if [[ "${dir}" = \/* ]]; then
+        dir=$(echo "${dir/\//}")
+    fi
+
+    if [[ "${INSTALLER}" == "${COMPOSE_TYPE}"
|| "${INSTALLER}" == "${HELM_TYPE}" ]]; then
+        effectiveUser="${JF_USER}"
+        effectiveGroup="${JF_USER}"
+    elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then
+        effectiveUser="${USER_TO_CHECK}"
+        effectiveGroup="${GROUP_TO_CHECK}"
+    fi
+
+    removeSoftLinkAndCreateDir "${backupDir}" "${effectiveUser}" "${effectiveGroup}" "yes"
+    local backupDirectory="${backupDir}/${PRODUCT}"
+    removeSoftLinkAndCreateDir "${backupDirectory}" "${effectiveUser}" "${effectiveGroup}" "yes"
+    removeSoftLinkAndCreateDir "${backupDirectory}/${dir}" "${effectiveUser}" "${effectiveGroup}" "yes"
+    local outputCheckDirExists="$(checkDirExists "${backupDirectory}/${dir}")"
+    if [[ "${outputCheckDirExists}" == "true" ]]; then
+        copyOnContentExist "${targetDir}" "${backupDirectory}/${dir}" "full"
+    fi
+}
+
+removeOldDirectory () {
+    local backupDir="$1"
+    local entry="$2"
+    local check=false
+
+    # prepend OLD_DATA_DIR only if entry is relative path
+    local targetDir="$(prependDir "${entry}" "${OLD_DATA_DIR}/${entry}")"
+    local outputCheckDirExists="$(checkDirExists "${targetDir}")"
+    if [[ "${outputCheckDirExists}" != "true" ]]; then
+        logger "No [${targetDir}] directory found to delete"
+        echo "";
+        return
+    fi
+    backupDirectory "${backupDir}" "${entry}" "${targetDir}"
+    rm -rf "${targetDir}" && check=true || check=false
+    [[ "${check}" == "true" ]] && logger "Successfully removed directory [${targetDir}]"
+    [[ "${check}" == "false" ]] && warn "Failed to remove directory [${targetDir}]"
+    echo "";
+}
+
+cleanUpOldDataDirectories () {
+    local cleanUpOldDataDir=
+    local map=
+    local entry=
+
+    retrieveYamlValue "migration.cleanUpOldDataDir" "cleanUpOldDataDir" "Skip"
+    cleanUpOldDataDir="${YAML_VALUE}"
+    if [[ -z "${cleanUpOldDataDir}" ]]; then
+        return
+    fi
+    bannerSection "CLEAN UP OLD DATA DIRECTORIES"
+    retrieveYamlValue "migration.cleanUpOldDataDir.map" "map" "Warning"
+    map="${YAML_VALUE}"
+    [[ -z "${map}" ]] && return
+    date="$(date +%Y%m%d%H%M)"
backupDir="${NEW_DATA_DIR}/backup/backup-${date}"
+    bannerImportant "****** Old data configurations are backed up in [${backupDir}] directory ******"
+    backupFiles_hook "${backupDir}/${PRODUCT}"
+    for entry in $map;
+    do
+        removeOldDirectory "${backupDir}" "${entry}"
+    done
+}
+
+backupFiles () {
+    local backupDir="$1"
+    local dir="$2"
+    local targetDir="$3"
+    local fileName="$4"
+    local effectiveUser=
+    local effectiveGroup=
+
+    if [[ "${dir}" = \/* ]]; then
+        dir=$(echo "${dir/\//}")
+    fi
+
+    if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then
+        effectiveUser="${JF_USER}"
+        effectiveGroup="${JF_USER}"
+    elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then
+        effectiveUser="${USER_TO_CHECK}"
+        effectiveGroup="${GROUP_TO_CHECK}"
+    fi
+
+    removeSoftLinkAndCreateDir "${backupDir}" "${effectiveUser}" "${effectiveGroup}" "yes"
+    local backupDirectory="${backupDir}/${PRODUCT}"
+    removeSoftLinkAndCreateDir "${backupDirectory}" "${effectiveUser}" "${effectiveGroup}" "yes"
+    removeSoftLinkAndCreateDir "${backupDirectory}/${dir}" "${effectiveUser}" "${effectiveGroup}" "yes"
+    local outputCheckDirExists="$(checkDirExists "${backupDirectory}/${dir}")"
+    if [[ "${outputCheckDirExists}" == "true" ]]; then
+        copyCmd "${targetDir}/${fileName}" "${backupDirectory}/${dir}" "specific"
+    fi
+}
+
+removeOldFiles () {
+    local backupDir="$1"
+    local directoryName="$2"
+    local fileName="$3"
+    local check=false
+
+    # prepend OLD_DATA_DIR only if entry is relative path
+    local targetDir="$(prependDir "${directoryName}" "${OLD_DATA_DIR}/${directoryName}")"
+    local outputCheckFileExists="$(checkFileExists "${targetDir}/${fileName}")"
+    if [[ "${outputCheckFileExists}" != "true" ]]; then
+        logger "No [${targetDir}/${fileName}] file found to delete"
+        return
+    fi
+    backupFiles "${backupDir}" "${directoryName}" "${targetDir}" "${fileName}"
+    rm -f "${targetDir}/${fileName}" && check=true || check=false
+    [[ "${check}" == "true" ]]
&& logger "Successfully removed file [${targetDir}/${fileName}]"
+    [[ "${check}" == "false" ]] && warn "Failed to remove file [${targetDir}/${fileName}]"
+    echo "";
+}
+
+cleanUpOldFiles () {
+    local cleanUpOldFiles=
+    local map=
+    local entry=
+
+    retrieveYamlValue "migration.cleanUpOldFiles" "cleanUpOldFiles" "Skip"
+    cleanUpOldFiles="${YAML_VALUE}"
+    if [[ -z "${cleanUpOldFiles}" ]]; then
+        return
+    fi
+    bannerSection "CLEAN UP OLD FILES"
+    retrieveYamlValue "migration.cleanUpOldFiles.map" "map" "Warning"
+    map="${YAML_VALUE}"
+    [[ -z "${map}" ]] && return
+    date="$(date +%Y%m%d%H%M)"
+    backupDir="${NEW_DATA_DIR}/backup/backup-${date}"
+    bannerImportant "****** Old files are backed up in [${backupDir}] directory ******"
+    for entry in $map;
+    do
+        local outputCheckMapEntry="$(checkMapEntry "${entry}")"
+        if [[ "${outputCheckMapEntry}" != "true" ]]; then
+            warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e. directoryName=fileName"
+            continue
+        fi
+        local fileName="$(getSecondEntry "${entry}")"
+        local directoryName="$(getFirstEntry "${entry}")"
+        [[ -z "${fileName}" ]] && warn "File name value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue
+        [[ -z "${directoryName}" ]] && warn "Directory name value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue
+        removeOldFiles "${backupDir}" "${directoryName}" "${fileName}"
+        echo "";
+    done
+}
+
+startMigration () {
+    bannerSection "STARTING MIGRATION"
+}
+
+endMigration () {
+    bannerSection "MIGRATION COMPLETED SUCCESSFULLY"
+}
+
+initialize () {
+    setAppDir
+    _pauseExecution "setAppDir"
+    initHelpers
+    _pauseExecution "initHelpers"
+    checkMigrationInfoYaml
+    _pauseExecution "checkMigrationInfoYaml"
+    getProduct
+    _pauseExecution "getProduct"
+    getDataDir
+    _pauseExecution "getDataDir"
+}
+
+main () {
+    case $PRODUCT in
+        artifactory)
+            migrateArtifactory
+        ;;
+        distribution)
+            migrateDistribution
+        ;;
+        xray)
+            migrationXray
+        ;;
esac + exit 0 +} + +# Ensures meta data is logged +LOG_BEHAVIOR_ADD_META="$FLAG_Y" + + +migrateResolveDerbyPath () { + local key="$1" + local value="$2" + + if [[ "${key}" == "url" && "${value}" == *"db.home"* ]]; then + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + derbyPath="/opt/jfrog/artifactory/var/data/artifactory/derby" + value=$(echo "${value}" | sed "s|{db.home}|$derbyPath|") + else + derbyPath="${NEW_DATA_DIR}/data/artifactory/derby" + value=$(echo "${value}" | sed "s|{db.home}|$derbyPath|") + fi + fi + echo "${value}" +} + +migrateResolveHaDirPath () { + local key="$1" + local value="$2" + + if [[ "${INSTALLER}" == "${RPM_TYPE}" || "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" || "${INSTALLER}" == "${DEB_TYPE}" ]]; then + if [[ "${key}" == "artifactory.ha.data.dir" || "${key}" == "artifactory.ha.backup.dir" ]]; then + value=$(checkPathResolver "${value}") + fi + fi + echo "${value}" +} +updatePostgresUrlString_Hook () { + local yamlPath="$1" + local value="$2" + local hostIp=$(io_getPublicHostIP) + local sourceKey="//postgresql:" + if [[ "${yamlPath}" == "shared.database.url" ]]; then + value=$(io_replaceString "${value}" "${sourceKey}" "//${hostIp}:" "#") + fi + echo "${value}" +} +# Check Artifactory product version +checkArtifactoryVersion () { + local minProductVersion="6.0.0" + local maxProductVersion="7.0.0" + local propertyInDocker="ARTIFACTORY_VERSION" + local property="artifactory.version" + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + local newfilePath="${APP_DIR}/../.env" + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + elif [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + elif [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + local newfilePath="${NEW_DATA_DIR}/etc/artifactory/artifactory.properties" + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + else + local 
newfilePath="${NEW_DATA_DIR}/etc/artifactory/artifactory.properties" + local oldfilePath="/etc/opt/jfrog/artifactory/artifactory.properties" + fi + + getProductVersion "${minProductVersion}" "${maxProductVersion}" "${newfilePath}" "${oldfilePath}" "${propertyInDocker}" "${property}" +} + +getCustomDataDir_hook () { + retrieveYamlValue "migration.oldDataDir" "oldDataDir" "Fail" + OLD_DATA_DIR="${YAML_VALUE}" +} + +# Get protocol value of connector +getXmlConnectorProtocol () { + local i="$1" + local filePath="$2" + local fileName="$3" + local protocolValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@protocol' ${filePath}/${fileName} 2>/dev/null |awk -F"=" '{print $2}' | tr -d '"') + echo -e "${protocolValue}" +} + +# Get all attributes of connector +getXmlConnectorAttributes () { + local i="$1" + local filePath="$2" + local fileName="$3" + local connectorAttributes=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@*' ${filePath}/${fileName} 2>/dev/null) + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + echo "${connectorAttributes}" +} + +# Get port value of connector +getXmlConnectorPort () { + local i="$1" + local filePath="$2" + local fileName="$3" + local portValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@port' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${portValue}" +} + +# Get maxThreads value of connector +getXmlConnectorMaxThreads () { + local i="$1" + local filePath="$2" + local fileName="$3" + local maxThreadValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@maxThreads' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${maxThreadValue}" +} +# Get sendReasonPhrase value of connector +getXmlConnectorSendReasonPhrase () { + local i="$1" + local filePath="$2" + local fileName="$3" + local sendReasonPhraseValue=$($LIBXML2_PATH --xpath 
'//Server/Service/Connector['$i']/@sendReasonPhrase' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${sendReasonPhraseValue}" +} +# Get relaxedPathChars value of connector +getXmlConnectorRelaxedPathChars () { + local i="$1" + local filePath="$2" + local fileName="$3" + local relaxedPathCharsValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@relaxedPathChars' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + # strip leading and trailing spaces + relaxedPathCharsValue=$(io_trim "${relaxedPathCharsValue}") + echo -e "${relaxedPathCharsValue}" +} +# Get relaxedQueryChars value of connector +getXmlConnectorRelaxedQueryChars () { + local i="$1" + local filePath="$2" + local fileName="$3" + local relaxedQueryCharsValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@relaxedQueryChars' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + # strip leading and trailing spaces + relaxedQueryCharsValue=$(io_trim "${relaxedQueryCharsValue}") + echo -e "${relaxedQueryCharsValue}" +} + +# Updating system.yaml with Connector port +setConnectorPort () { + local yamlPath="$1" + local valuePort="$2" + local portYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${valuePort}" ]]; then + warn "port value is empty, could not migrate to system.yaml" + return + fi + ## Getting port yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" portYamlPath "Warning" + portYamlPath="${YAML_VALUE}" + if [[ -z "${portYamlPath}" ]]; then + return + fi + setSystemValue "${portYamlPath}" "${valuePort}" "${SYSTEM_YAML_PATH}" + logger "Setting [${portYamlPath}] with value [${valuePort}] in system.yaml" +} + +# Updating system.yaml with Connector maxThreads +setConnectorMaxThread () { + local yamlPath="$1" + local threadValue="$2" + local maxThreadYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${threadValue}" ]]; then + return + fi + ## 
Getting max Threads yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" maxThreadYamlPath "Warning" + maxThreadYamlPath="${YAML_VALUE}" + if [[ -z "${maxThreadYamlPath}" ]]; then + return + fi + setSystemValue "${maxThreadYamlPath}" "${threadValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${maxThreadYamlPath}] with value [${threadValue}] in system.yaml" +} + +# Updating system.yaml with Connector sendReasonPhrase +setConnectorSendReasonPhrase () { + local yamlPath="$1" + local sendReasonPhraseValue="$2" + local sendReasonPhraseYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${sendReasonPhraseValue}" ]]; then + return + fi + ## Getting sendReasonPhrase yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" sendReasonPhraseYamlPath "Warning" + sendReasonPhraseYamlPath="${YAML_VALUE}" + if [[ -z "${sendReasonPhraseYamlPath}" ]]; then + return + fi + setSystemValue "${sendReasonPhraseYamlPath}" "${sendReasonPhraseValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${sendReasonPhraseYamlPath}] with value [${sendReasonPhraseValue}] in system.yaml" +} + +# Updating system.yaml with Connector relaxedPathChars +setConnectorRelaxedPathChars () { + local yamlPath="$1" + local relaxedPathCharsValue="$2" + local relaxedPathCharsYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${relaxedPathCharsValue}" ]]; then + return + fi + ## Getting relaxedPathChars yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" relaxedPathCharsYamlPath "Warning" + relaxedPathCharsYamlPath="${YAML_VALUE}" + if [[ -z "${relaxedPathCharsYamlPath}" ]]; then + return + fi + setSystemValue "${relaxedPathCharsYamlPath}" "${relaxedPathCharsValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${relaxedPathCharsYamlPath}] with value [${relaxedPathCharsValue}] in system.yaml" +} + +# Updating system.yaml with Connector relaxedQueryChars +setConnectorRelaxedQueryChars () { + local yamlPath="$1" + local relaxedQueryCharsValue="$2" 
+ local relaxedQueryCharsYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${relaxedQueryCharsValue}" ]]; then + return + fi + ## Getting relaxedQueryChars yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" relaxedQueryCharsYamlPath "Warning" + relaxedQueryCharsYamlPath="${YAML_VALUE}" + if [[ -z "${relaxedQueryCharsYamlPath}" ]]; then + return + fi + setSystemValue "${relaxedQueryCharsYamlPath}" "${relaxedQueryCharsValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${relaxedQueryCharsYamlPath}] with value [${relaxedQueryCharsValue}] in system.yaml" +} + +# Updating system.yaml with Connectors configurations +setConnectorExtraConfig () { + local yamlPath="$1" + local connectorAttributes="$2" + local extraConfigPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${connectorAttributes}" ]]; then + return + fi + ## Getting extraConfig yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" extraConfigPath "Warning" + extraConfigPath="${YAML_VALUE}" + if [[ -z "${extraConfigPath}" ]]; then + return + fi + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + setSystemValue "${extraConfigPath}" "${connectorAttributes}" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConfigPath}] with connector attributes in system.yaml" +} + +# Updating system.yaml with extra Connectors +setExtraConnector () { + local yamlPath="$1" + local extraConnector="$2" + local extraConnectorYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${extraConnector}" ]]; then + return + fi + ## Getting extraConnector yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" extraConnectorYamlPath "Warning" + extraConnectorYamlPath="${YAML_VALUE}" + if [[ -z "${extraConnectorYamlPath}" ]]; then + return + fi + getYamlValue "${extraConnectorYamlPath}" "${SYSTEM_YAML_PATH}" "false" + local connectorExtra="${YAML_VALUE}" + if [[ -z "${connectorExtra}" ]]; then + setSystemValue
"${extraConnectorYamlPath}" "${extraConnector}" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConnectorYamlPath}] with extra connectors in system.yaml" + else + setSystemValue "${extraConnectorYamlPath}" "\"${connectorExtra} ${extraConnector}\"" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConnectorYamlPath}] with extra connectors in system.yaml" + fi +} + +# Migrate extra connectors to system.yaml +migrateExtraConnectors () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local excludeDefaultPort="$4" + local i="$5" + local extraConfig= + local extraConnector= + if [[ "${excludeDefaultPort}" == "yes" ]]; then + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + [[ "${portValue}" != "${DEFAULT_ACCESS_PORT}" && "${portValue}" != "${DEFAULT_RT_PORT}" ]] || continue + extraConnector=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']' ${filePath}/${fileName} 2>/dev/null) + setExtraConnector "${EXTRA_CONFIG_YAMLPATH}" "${extraConnector}" + done + else + extraConnector=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']' ${filePath}/${fileName} 2>/dev/null) + setExtraConnector "${EXTRA_CONFIG_YAMLPATH}" "${extraConnector}" + fi +} + +# Migrate connector configurations +migrateConnectorConfig () { + local i="$1" + local protocolType="$2" + local portValue="$3" + local connectorPortYamlPath="$4" + local connectorMaxThreadYamlPath="$5" + local connectorAttributesYamlPath="$6" + local filePath="$7" + local fileName="$8" + local connectorSendReasonPhraseYamlPath="$9" + local connectorRelaxedPathCharsYamlPath="${10}" + local connectorRelaxedQueryCharsYamlPath="${11}" + + # migrate port + setConnectorPort "${connectorPortYamlPath}" "${portValue}" + + # migrate maxThreads + local maxThreadValue=$(getXmlConnectorMaxThreads "$i" "${filePath}" "${fileName}") + setConnectorMaxThread "${connectorMaxThreadYamlPath}" "${maxThreadValue}" + + # migrate 
sendReasonPhrase + local sendReasonPhraseValue=$(getXmlConnectorSendReasonPhrase "$i" "${filePath}" "${fileName}") + setConnectorSendReasonPhrase "${connectorSendReasonPhraseYamlPath}" "${sendReasonPhraseValue}" + + # migrate relaxedPathChars + local relaxedPathCharsValue=$(getXmlConnectorRelaxedPathChars "$i" "${filePath}" "${fileName}") + setConnectorRelaxedPathChars "${connectorRelaxedPathCharsYamlPath}" "\"${relaxedPathCharsValue}\"" + # migrate relaxedQueryChars + local relaxedQueryCharsValue=$(getXmlConnectorRelaxedQueryChars "$i" "${filePath}" "${fileName}") + setConnectorRelaxedQueryChars "${connectorRelaxedQueryCharsYamlPath}" "\"${relaxedQueryCharsValue}\"" + + # migrate all attributes to extra config except port , maxThread , sendReasonPhrase ,relaxedPathChars and relaxedQueryChars + local connectorAttributes=$(getXmlConnectorAttributes "$i" "${filePath}" "${fileName}") + connectorAttributes=$(echo "${connectorAttributes}" | sed 's/port="'${portValue}'"//g' | sed 's/maxThreads="'${maxThreadValue}'"//g' | sed 's/sendReasonPhrase="'${sendReasonPhraseValue}'"//g' | sed 's/relaxedPathChars="\'${relaxedPathCharsValue}'\"//g' | sed 's/relaxedQueryChars="\'${relaxedQueryCharsValue}'\"//g') + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + setConnectorExtraConfig "${connectorAttributesYamlPath}" "${connectorAttributes}" +} + +# Check for default port 8040 and 8081 in connectors and migrate +migrateConnectorPort () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local defaultPort="$4" + local connectorPortYamlPath="$5" + local connectorMaxThreadYamlPath="$6" + local connectorAttributesYamlPath="$7" + local connectorSendReasonPhraseYamlPath="$8" + local connectorRelaxedPathCharsYamlPath="$9" + local connectorRelaxedQueryCharsYamlPath="${10}" + local portYamlPath= + local maxThreadYamlPath= + local status= + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local 
portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" == *AJP* ]] && continue + [[ "${portValue}" != "${defaultPort}" ]] && continue + if [[ "${portValue}" == "${DEFAULT_RT_PORT}" ]]; then + RT_DEFAULTPORT_STATUS=success + else + AC_DEFAULTPORT_STATUS=success + fi + migrateConnectorConfig "${i}" "${protocolType}" "${portValue}" "${connectorPortYamlPath}" "${connectorMaxThreadYamlPath}" "${connectorAttributesYamlPath}" "${filePath}" "${fileName}" "${connectorSendReasonPhraseYamlPath}" "${connectorRelaxedPathCharsYamlPath}" "${connectorRelaxedQueryCharsYamlPath}" + done +} + +# migrate to extra, connector having default port and protocol is AJP +migrateDefaultPortIfAjp () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local defaultPort="$4" + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" != *AJP* ]] && continue + [[ "${portValue}" != "${defaultPort}" ]] && continue + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + done + +} + +# Comparing max threads in connectors +compareMaxThreads () { + local firstConnectorMaxThread="$1" + local firstConnectorNode="$2" + local secondConnectorMaxThread="$3" + local secondConnectorNode="$4" + local filePath="$5" + local fileName="$6" + + # choose higher maxThreads connector as Artifactory. 
+ if [[ "${firstConnectorMaxThread}" -ge "${secondConnectorMaxThread}" ]]; then + # maxThreads is higher in firstConnector (or equal in both connectors), + # taking firstConnector as Artifactory and secondConnector as Access + local rtPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + else + # maxThreads is higher in secondConnector, + # taking secondConnector as Artifactory and firstConnector as Access + local rtPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi +} + +# Check max threads exist to compare +maxThreadsExistToCompare () { + local
filePath="$1" + local fileName="$2" + local connectorCount="$3" + local firstConnectorMaxThread= + local secondConnectorMaxThread= + local firstConnectorNode= + local secondConnectorNode= + local status=success + local firstnode=fail + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + if [[ ${protocolType} == *AJP* ]]; then + # Migrate Connectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + continue + fi + # store maxthreads value of each connector + if [[ ${firstnode} == "fail" ]]; then + firstConnectorMaxThread=$(getXmlConnectorMaxThreads "${i}" "${filePath}" "${fileName}") + firstConnectorNode="${i}" + firstnode=success + else + secondConnectorMaxThread=$(getXmlConnectorMaxThreads "${i}" "${filePath}" "${fileName}") + secondConnectorNode="${i}" + fi + done + [[ -z "${firstConnectorMaxThread}" ]] && status=fail + [[ -z "${secondConnectorMaxThread}" ]] && status=fail + # maxThreads is set, now compare MaxThreads + if [[ "${status}" == "success" ]]; then + compareMaxThreads "${firstConnectorMaxThread}" "${firstConnectorNode}" "${secondConnectorMaxThread}" "${secondConnectorNode}" "${filePath}" "${fileName}" + else + # Assume first connector is RT, maxThreads is not set in both connectors + local rtPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" 
"${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi +} + +migrateExtraBasedOnNonAjpCount () { + local nonAjpCount="$1" + local filePath="$2" + local fileName="$3" + local connectorCount="$4" + local i="$5" + + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + if [[ "${protocolType}" == *AJP* ]]; then + if [[ "${nonAjpCount}" -eq 1 ]]; then + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + continue + else + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + continue + fi + fi +} + +# find RT and AC Connector +findRtAndAcConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local initialAjpCount=0 + local nonAjpCount=0 + + # get the count of non AJP + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" != *AJP* ]] || continue + nonAjpCount=$((initialAjpCount+1)) + initialAjpCount="${nonAjpCount}" + done + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as access and artifactory connectors + # Mark port as 8040 for access + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" 
"${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + done + elif [[ "${nonAjpCount}" -eq 2 ]]; then + # compare maxThreads in both connectors + maxThreadsExistToCompare "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${nonAjpCount}" -gt 2 ]]; then + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # setting with default port in system.yaml + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +# get the count of non AJP +getCountOfNonAjp () { + local port="$1" + local connectorCount="$2" + local filePath=$3 + local fileName=$4 + local initialNonAjpCount=0 + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${portValue}" != "${port}" ]] || continue + [[ "${protocolType}" != *AJP* ]] || continue + local nonAjpCount=$((initialNonAjpCount+1)) + initialNonAjpCount="${nonAjpCount}" + done + echo -e "${nonAjpCount}" +} + +# Find for access connector +findAcConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + + # get the count of non AJP + local nonAjpCount=$(getCountOfNonAjp "${DEFAULT_RT_PORT}" "${connectorCount}" "${filePath}" "${fileName}") + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as access connector and mark port as that of connector + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" != 
"${DEFAULT_RT_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi + done + elif [[ "${nonAjpCount}" -gt 1 ]]; then + # Take RT properties into access with 8040 + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" == "${DEFAULT_RT_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + fi + done + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # Add RT connector details as access connector and mark port as 8040 + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +# Find for artifactory connector +findRtConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + + # get the count of non AJP + local nonAjpCount=$(getCountOfNonAjp "${DEFAULT_ACCESS_PORT}" "${connectorCount}" "${filePath}" "${fileName}") + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as RT connector + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ 
"${portValue}" != "${DEFAULT_ACCESS_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + fi + done + elif [[ "${nonAjpCount}" -gt 1 ]]; then + # Take access properties into artifactory with 8081 + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" == "${DEFAULT_ACCESS_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + fi + done + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # Add access connector details as RT connector and mark as ${DEFAULT_RT_PORT} + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +checkForTlsConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + local sslProtocolValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@sslProtocol' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + if [[ 
"${sslProtocolValue}" == "TLS" ]]; then + bannerImportant "NOTE: Ignoring TLS connector during migration, modify the system.yaml to enable TLS. Original server.xml is saved in path [${filePath}/${fileName}]" + TLS_CONNECTOR_EXISTS=${FLAG_Y} + continue + fi + done +} + +# set custom tomcat server Listeners to system.yaml +setListenerConnector () { + local filePath="$1" + local fileName="$2" + local listenerCount="$3" + for ((i = 1 ; i <= "${listenerCount}" ; i++)) + do + local listenerConnector=$($LIBXML2_PATH --xpath '//Server/Listener['$i']' ${filePath}/${fileName} 2>/dev/null) + local listenerClassName=$($LIBXML2_PATH --xpath '//Server/Listener['$i']/@className' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + if [[ "${listenerClassName}" == *Apr* ]]; then + setExtraConnector "${EXTRA_LISTENER_CONFIG_YAMLPATH}" "${listenerConnector}" + fi + done +} +# add custom tomcat server Listeners +addTomcatServerListeners () { + local filePath="$1" + local fileName="$2" + local listenerCount="$3" + if [[ "${listenerCount}" == "0" ]]; then + logger "No listener connectors found in [${filePath}/${fileName}], skipping migration of listener connectors" + else + setListenerConnector "${filePath}" "${fileName}" "${listenerCount}" + setSystemValue "${RT_TOMCAT_HTTPSCONNECTOR_ENABLED}" "true" "${SYSTEM_YAML_PATH}" + logger "Setting [${RT_TOMCAT_HTTPSCONNECTOR_ENABLED}] with value [true] in system.yaml" + fi +} + +# server.xml migration operations +xmlMigrateOperation () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local listenerCount="$4" + RT_DEFAULTPORT_STATUS=fail + AC_DEFAULTPORT_STATUS=fail + TLS_CONNECTOR_EXISTS=${FLAG_N} + + # Check for a TLS connector; if found, skip migrating it + checkForTlsConnector "${filePath}" "${fileName}" "${connectorCount}" + if [[ "${TLS_CONNECTOR_EXISTS}" == "${FLAG_Y}" ]]; then + return + fi + addTomcatServerListeners "${filePath}" "${fileName}" "${listenerCount}" + # Migrate RT
default port from connectors + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + # Migrate to extra if RT default ports are AJP + migrateDefaultPortIfAjp "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" + # Migrate AC default port from connectors + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${AC_SENDREASONPHRASE_YAMLPATH}" + # Migrate to extra if access default ports are AJP + migrateDefaultPortIfAjp "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" + + if [[ "${AC_DEFAULTPORT_STATUS}" == "success" && "${RT_DEFAULTPORT_STATUS}" == "success" ]]; then + # RT and AC default port found + logger "Artifactory 8081 and Access 8040 default port are found" + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "success" && "${RT_DEFAULTPORT_STATUS}" == "fail" ]]; then + # Only AC default port found,find RT connector + logger "Found Access default 8040 port" + findRtConnector "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "fail" && "${RT_DEFAULTPORT_STATUS}" == "success" ]]; then + # Only RT default port found,find AC connector + logger "Found Artifactory default 8081 port" + findAcConnector "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "fail" && "${RT_DEFAULTPORT_STATUS}" == "fail" ]]; then + # RT and AC default port not found, find connector + logger "Artifactory 8081 and Access 8040 default port are not found" + findRtAndAcConnector "${filePath}" "${fileName}" "${connectorCount}" + fi +} + +# get count of connectors +getXmlConnectorCount 
() { + local filePath="$1" + local fileName="$2" + local count=$($LIBXML2_PATH --xpath 'count(/Server/Service/Connector)' ${filePath}/${fileName}) + echo -e "${count}" +} + +# get count of listener connectors +getTomcatServerListenersCount () { + local filePath="$1" + local fileName="$2" + local count=$($LIBXML2_PATH --xpath 'count(/Server/Listener)' ${filePath}/${fileName}) + echo -e "${count}" +} + +# Migrate server.xml configuration to system.yaml +migrateXmlFile () { + local xmlFiles= + local fileName= + local filePath= + local sourceFilePath= + DEFAULT_ACCESS_PORT="8040" + DEFAULT_RT_PORT="8081" + AC_PORT_YAMLPATH="migration.xmlFiles.serverXml.access.port" + AC_MAXTHREADS_YAMLPATH="migration.xmlFiles.serverXml.access.maxThreads" + AC_SENDREASONPHRASE_YAMLPATH="migration.xmlFiles.serverXml.access.sendReasonPhrase" + AC_EXTRACONFIG_YAMLPATH="migration.xmlFiles.serverXml.access.extraConfig" + RT_PORT_YAMLPATH="migration.xmlFiles.serverXml.artifactory.port" + RT_MAXTHREADS_YAMLPATH="migration.xmlFiles.serverXml.artifactory.maxThreads" + RT_SENDREASONPHRASE_YAMLPATH='migration.xmlFiles.serverXml.artifactory.sendReasonPhrase' + RT_RELAXEDPATHCHARS_YAMLPATH='migration.xmlFiles.serverXml.artifactory.relaxedPathChars' + RT_RELAXEDQUERYCHARS_YAMLPATH='migration.xmlFiles.serverXml.artifactory.relaxedQueryChars' + RT_EXTRACONFIG_YAMLPATH="migration.xmlFiles.serverXml.artifactory.extraConfig" + ROUTER_PORT_YAMLPATH="migration.xmlFiles.serverXml.router.port" + EXTRA_CONFIG_YAMLPATH="migration.xmlFiles.serverXml.extra.config" + EXTRA_LISTENER_CONFIG_YAMLPATH="migration.xmlFiles.serverXml.extra.listener" + RT_TOMCAT_HTTPSCONNECTOR_ENABLED="artifactory.tomcat.httpsConnector.enabled" + + retrieveYamlValue "migration.xmlFiles" "xmlFiles" "Skip" + xmlFiles="${YAML_VALUE}" + if [[ -z "${xmlFiles}" ]]; then + return + fi + bannerSection "PROCESSING MIGRATION OF XML FILES" + retrieveYamlValue "migration.xmlFiles.serverXml.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + 
if [[ -z "${fileName}" ]]; then + return + fi + bannerSubSection "Processing Migration of $fileName" + retrieveYamlValue "migration.xmlFiles.serverXml.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + if [[ -z "${filePath}" ]]; then + return + fi + # prepend NEW_DATA_DIR only if filePath is a relative path + sourceFilePath=$(prependDir "${filePath}" "${NEW_DATA_DIR}/${filePath}") + if [[ "$(checkFileExists "${sourceFilePath}/${fileName}")" == "true" ]]; then + logger "File [${fileName}] is found in path [${sourceFilePath}]" + local connectorCount=$(getXmlConnectorCount "${sourceFilePath}" "${fileName}") + if [[ "${connectorCount}" == "0" ]]; then + logger "No connectors found in [${filePath}/${fileName}], skipping migration of xml configuration" + return + fi + local listenerCount=$(getTomcatServerListenersCount "${sourceFilePath}" "${fileName}") + xmlMigrateOperation "${sourceFilePath}" "${fileName}" "${connectorCount}" "${listenerCount}" + else + logger "File [${fileName}] is not found in path [${sourceFilePath}] to migrate" + fi +} + +compareArtifactoryUser () { + local property="$1" + local oldPropertyValue="$2" + local newPropertyValue="$3" + local yamlPath="$4" + local sourceFile="$5" + + if [[ "${oldPropertyValue}" != "${newPropertyValue}" ]]; then + setSystemValue "${yamlPath}" "${oldPropertyValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" + else + logger "No change in property [${property}] value in [${sourceFile}] to migrate" + fi +} + +migrateReplicator () { + local property="$1" + local oldPropertyValue="$2" + local yamlPath="$3" + + setSystemValue "${yamlPath}" "${oldPropertyValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" +} + +compareJavaOptions () { + local property="$1" + local oldPropertyValue="$2" + local newPropertyValue="$3" + local yamlPath="$4" + local sourceFile="$5" + local oldJavaOption= +
local newJavaOption= + local extraJavaOption= + local check=false + local success=true + local status=true + + oldJavaOption=$(echo "${oldPropertyValue}" | awk 'BEGIN{FS=OFS="\""}{for(i=2;i.+)\.{{ include "artifactory-ha.fullname" . }} {{ include "artifactory-ha.fullname" . }} +{{ tpl (include "artifactory.nginx.hosts" .) . }}; + +if ($http_x_forwarded_proto = '') { + set $http_x_forwarded_proto $scheme; +} +set $host_port {{ .Values.nginx.https.externalPort }}; +if ( $scheme = "http" ) { + set $host_port {{ .Values.nginx.http.externalPort }}; +} +## Application specific logs +## access_log /var/log/nginx/artifactory-access.log timing; +## error_log /var/log/nginx/artifactory-error.log; +rewrite ^/artifactory/?$ / redirect; +if ( $repo != "" ) { + rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break; +} +chunked_transfer_encoding on; +client_max_body_size 0; + +location / { + proxy_read_timeout 900; + proxy_pass_header Server; + proxy_cookie_path ~*^/.* /; + proxy_pass {{ include "artifactory-ha.scheme" . }}://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalPort }}/; + {{- if .Values.nginx.service.ssloffload}} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host; + {{- else }} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$host_port; + proxy_set_header X-Forwarded-Port $server_port; + {{- end }} + proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto; + proxy_set_header Host $http_host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + {{- if .Values.nginx.disableProxyBuffering}} + proxy_http_version 1.1; + proxy_request_buffering off; + proxy_buffering off; + {{- end }} + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + location /artifactory/ { + if ( $request_uri ~ ^/artifactory/(.*)$ ) { + proxy_pass http://{{ include "artifactory-ha.fullname" . 
}}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/$1; + } + proxy_pass http://{{ include "artifactory-ha.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/; + } + location /pipelines/ { + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_set_header Host $http_host; + {{- if .Values.router.tlsEnabled }} + proxy_pass https://{{ include "artifactory-ha.fullname" . }}:{{ .Values.router.internalPort }}; + {{- else }} + proxy_pass http://{{ include "artifactory-ha.fullname" . }}:{{ .Values.router.internalPort }}; + {{- end }} + } +} +} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/files/nginx-main-conf.yaml b/charts/jfrog/artifactory-ha/107.90.14/files/nginx-main-conf.yaml new file mode 100644 index 0000000000..78cecea6a1 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/files/nginx-main-conf.yaml @@ -0,0 +1,83 @@ +# Main Nginx configuration file +worker_processes 4; + +{{- if .Values.nginx.logs.stderr }} +error_log stderr {{ .Values.nginx.logs.level }}; +{{- else -}} +error_log {{ .Values.nginx.persistence.mountPath }}/logs/error.log {{ .Values.nginx.logs.level }}; +{{- end }} +pid /var/run/nginx.pid; + +{{- if .Values.artifactory.ssh.enabled }} +## SSH Server Configuration +stream { + server { + {{- if .Values.nginx.singleStackIPv6Cluster }} + listen [::]:{{ .Values.nginx.ssh.internalPort }}; + {{- else -}} + listen {{ .Values.nginx.ssh.internalPort }}; + {{- end }} + proxy_pass {{ include "artifactory-ha.fullname" . 
}}:{{ .Values.artifactory.ssh.externalPort }}; + } +} +{{- end }} + +events { + worker_connections 1024; +} + +http { + include /etc/nginx/mime.types; + default_type application/octet-stream; + + variables_hash_max_size 1024; + variables_hash_bucket_size 64; + server_names_hash_max_size 4096; + server_names_hash_bucket_size 128; + types_hash_max_size 2048; + types_hash_bucket_size 64; + proxy_read_timeout 2400s; + client_header_timeout 2400s; + client_body_timeout 2400s; + proxy_connect_timeout 75s; + proxy_send_timeout 2400s; + proxy_buffer_size 128k; + proxy_buffers 40 128k; + proxy_busy_buffers_size 128k; + proxy_temp_file_write_size 250m; + proxy_http_version 1.1; + client_body_buffer_size 128k; + + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + + log_format timing 'ip = $remote_addr ' + 'user = \"$remote_user\" ' + 'local_time = \"$time_local\" ' + 'host = $host ' + 'request = \"$request\" ' + 'status = $status ' + 'bytes = $body_bytes_sent ' + 'upstream = \"$upstream_addr\" ' + 'upstream_time = $upstream_response_time ' + 'request_time = $request_time ' + 'referer = \"$http_referer\" ' + 'UA = \"$http_user_agent\"'; + + {{- if .Values.nginx.logs.stdout }} + access_log /dev/stdout timing; + {{- else -}} + access_log {{ .Values.nginx.persistence.mountPath }}/logs/access.log timing; + {{- end }} + + sendfile on; + #tcp_nopush on; + + keepalive_timeout 65; + + #gzip on; + + include /etc/nginx/conf.d/*.conf; + +} diff --git a/charts/jfrog/artifactory-ha/107.90.14/files/system.yaml b/charts/jfrog/artifactory-ha/107.90.14/files/system.yaml new file mode 100644 index 0000000000..3a1d93269d --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/files/system.yaml @@ -0,0 +1,163 @@ +router: + serviceRegistry: + insecure: {{ .Values.router.serviceRegistry.insecure }} +shared: +{{- if .Values.artifactory.coldStorage.enabled }} + jfrogColdStorage: + 
coldInstanceEnabled: true +{{- end }} +{{ tpl (include "artifactory.metrics" .) . }} + logging: + consoleLog: + enabled: {{ .Values.artifactory.consoleLog }} + extraJavaOpts: > + -Dartifactory.graceful.shutdown.max.request.duration.millis={{ mul .Values.artifactory.terminationGracePeriodSeconds 1000 }} + -Dartifactory.access.client.max.connections={{ .Values.access.tomcat.connector.maxThreads }} + {{- with .Values.artifactory.primary.javaOpts }} + {{- if .corePoolSize }} + -Dartifactory.async.corePoolSize={{ .corePoolSize }} + {{- end }} + {{- if .xms }} + -Xms{{ .xms }} + {{- end }} + {{- if .xmx }} + -Xmx{{ .xmx }} + {{- end }} + {{- if .jmx.enabled }} + -Dcom.sun.management.jmxremote + -Dcom.sun.management.jmxremote.port={{ .jmx.port }} + -Dcom.sun.management.jmxremote.rmi.port={{ .jmx.port }} + -Dcom.sun.management.jmxremote.ssl={{ .jmx.ssl }} + {{- if .jmx.host }} + -Djava.rmi.server.hostname={{ tpl .jmx.host $ }} + {{- else }} + -Djava.rmi.server.hostname={{ template "artifactory-ha.fullname" $ }} + {{- end }} + {{- if .jmx.authenticate }} + -Dcom.sun.management.jmxremote.authenticate=true + -Dcom.sun.management.jmxremote.access.file={{ .jmx.accessFile }} + -Dcom.sun.management.jmxremote.password.file={{ .jmx.passwordFile }} + {{- else }} + -Dcom.sun.management.jmxremote.authenticate=false + {{- end }} + {{- end }} + {{- if .other }} + {{ .other }} + {{- end }} + {{- end }} + database: + allowNonPostgresql: {{ .Values.database.allowNonPostgresql }} + {{- if .Values.postgresql.enabled }} + type: postgresql + url: "jdbc:postgresql://{{ .Release.Name }}-postgresql:{{ .Values.postgresql.service.port }}/{{ .Values.postgresql.postgresqlDatabase }}" + host: "" + driver: org.postgresql.Driver + username: "{{ .Values.postgresql.postgresqlUsername }}" + {{ else }} + type: "{{ .Values.database.type }}" + driver: "{{ .Values.database.driver }}" + {{- end }} +artifactory: +{{- if or .Values.artifactory.haDataDir.enabled .Values.artifactory.haBackupDir.enabled }} + node: + 
{{- if .Values.artifactory.haDataDir.path }} + haDataDir: {{ .Values.artifactory.haDataDir.path }} + {{- end }} + {{- if .Values.artifactory.haBackupDir.path }} + haBackupDir: {{ .Values.artifactory.haBackupDir.path }} + {{- end }} +{{- end }} + database: + maxOpenConnections: {{ .Values.artifactory.database.maxOpenConnections }} + tomcat: + maintenanceConnector: + port: {{ .Values.artifactory.tomcat.maintenanceConnector.port }} + connector: + maxThreads: {{ .Values.artifactory.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.artifactory.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ .Values.artifactory.tomcat.connector.extraConfig }} +frontend: + session: + timeMinutes: {{ .Values.frontend.session.timeoutMinutes | quote }} +access: + runOnArtifactoryTomcat: {{ .Values.access.runOnArtifactoryTomcat | default false }} + database: + maxOpenConnections: {{ .Values.access.database.maxOpenConnections }} + {{- if not (.Values.access.runOnArtifactoryTomcat | default false) }} + extraJavaOpts: > + {{- if .Values.splitServicesToContainers }} + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=70 + {{- end }} + {{- with .Values.access.javaOpts }} + {{- if .other }} + {{ .other }} + {{- end }} + {{- end }} + {{- end }} + tomcat: + connector: + maxThreads: {{ .Values.access.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.access.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ .Values.access.tomcat.connector.extraConfig }} + {{- if .Values.access.database.enabled }} + type: "{{ .Values.access.database.type }}" + url: "{{ .Values.access.database.url }}" + driver: "{{ .Values.access.database.driver }}" + username: "{{ .Values.access.database.user }}" + password: "{{ .Values.access.database.password }}" + {{- end }} +{{- if .Values.mc.enabled }} +mc: + enabled: true + database: + maxOpenConnections: {{ .Values.mc.database.maxOpenConnections }} + idgenerator: + maxOpenConnections: {{ .Values.mc.idgenerator.maxOpenConnections }} + tomcat: + 
connector: + maxThreads: {{ .Values.mc.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.mc.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ .Values.mc.tomcat.connector.extraConfig }} +{{- end }} +metadata: + database: + maxOpenConnections: {{ .Values.metadata.database.maxOpenConnections }} +{{- if and .Values.jfconnect.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository)) }} +jfconnect: + enabled: true +{{- else }} +jfconnect: + enabled: false +jfconnect_service: + enabled: false +{{- end }} + +{{- if and .Values.federation.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository)) }} +federation: + enabled: true + embedded: {{ .Values.federation.embedded }} + extraJavaOpts: {{ .Values.federation.extraJavaOpts }} + port: {{ .Values.federation.internalPort }} +rtfs: + database: + driver: org.postgresql.Driver + type: postgresql + username: {{ .Values.federation.database.username }} + password: {{ .Values.federation.database.password }} + url: "jdbc:postgresql://{{ .Values.federation.database.host }}:{{ .Values.federation.database.port }}/{{ .Values.federation.database.name }}" +{{- else }} +federation: + enabled: false +{{- end }} +{{- if .Values.event.webhooks }} +event: + webhooks: {{ toYaml .Values.event.webhooks | nindent 6 }} +{{- end }} +{{- if .Values.evidence.enabled }} +evidence: + enabled: true +{{- else }} +evidence: + enabled: false +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/logo/artifactory-logo.png b/charts/jfrog/artifactory-ha/107.90.14/logo/artifactory-logo.png new file mode 100644 index 0000000000..fe6c23c5a7 Binary files /dev/null and b/charts/jfrog/artifactory-ha/107.90.14/logo/artifactory-logo.png differ diff --git a/charts/jfrog/artifactory-ha/107.90.14/questions.yml b/charts/jfrog/artifactory-ha/107.90.14/questions.yml new file mode 100644 index 0000000000..14e9024e6d --- /dev/null +++ 
b/charts/jfrog/artifactory-ha/107.90.14/questions.yml @@ -0,0 +1,424 @@ +questions: +# Advanced Settings +- variable: artifactory.masterKey + default: "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF" + description: "Artifactory master key. For security reasons, we strongly recommend you generate your own master key using this command: 'openssl rand -hex 32'" + type: string + label: Artifactory master key + group: "Security Settings" + +# Container Images +- variable: defaultImage + default: true + description: "Use default Docker image" + label: Use Default Image + type: boolean + show_subquestion_if: false + group: "Container Images" + subquestions: + - variable: initContainerImage + default: "docker.bintray.io/alpine:3.12" + description: "Init image name" + type: string + label: Init image name + - variable: artifactory.image.repository + default: "docker.bintray.io/jfrog/artifactory-pro" + description: "Artifactory image name" + type: string + label: Artifactory Image Name + - variable: artifactory.image.version + default: "7.6.3" + description: "Artifactory image tag" + type: string + label: Artifactory Image Tag + - variable: nginx.image.repository + default: "docker.bintray.io/jfrog/nginx-artifactory-pro" + description: "Nginx image name" + type: string + label: Nginx Image Name + - variable: nginx.image.version + default: "7.6.3" + description: "Nginx image tag" + type: string + label: Nginx Image Tag + - variable: imagePullSecrets + description: "Image Pull Secret" + type: string + label: Image Pull Secret + +# Services and LoadBalancing Settings +- variable: artifactory.node.replicaCount + default: "2" + description: "Number of Secondary Nodes" + type: string + label: Number of Secondary Nodes + show_subquestion_if: true + group: "Services and Load Balancing" +- variable: ingress.enabled + default: false + description: "Expose app using Layer 7 Load Balancer - ingress" + type: boolean + label: Expose app using Layer 7 Load Balancer + 
show_subquestion_if: true + group: "Services and Load Balancing" + required: true + subquestions: + - variable: ingress.hosts[0] + default: "xip.io" + description: "Hostname of your Artifactory installation" + type: hostname + required: true + label: Hostname + +# Nginx Settings +- variable: nginx.enabled + default: true + description: "Enable nginx server" + type: boolean + label: Enable Nginx Server + group: "Services and Load Balancing" + required: true + show_if: "ingress.enabled=false" +- variable: nginx.service.type + default: "LoadBalancer" + description: "Nginx service type" + type: enum + required: true + label: Nginx Service Type + show_if: "nginx.enabled=true&&ingress.enabled=false" + group: "Services and Load Balancing" + options: + - "ClusterIP" + - "NodePort" + - "LoadBalancer" +- variable: nginx.service.loadBalancerIP + default: "" + description: "Provide Static IP to configure with Nginx" + type: string + label: Config Nginx LoadBalancer IP + show_if: "nginx.enabled=true&&nginx.service.type=LoadBalancer&&ingress.enabled=false" + group: "Services and Load Balancing" +- variable: nginx.tlsSecretName + default: "" + description: "Provide SSL Secret name to configure with Nginx" + type: string + label: Config Nginx SSL Secret + show_if: "nginx.enabled=true&&ingress.enabled=false" + group: "Services and Load Balancing" +- variable: nginx.customArtifactoryConfigMap + default: "" + description: "Provide configMap name to configure Nginx with custom `artifactory.conf`" + type: string + label: ConfigMap for Nginx Artifactory Config + show_if: "nginx.enabled=true&&ingress.enabled=false" + group: "Services and Load Balancing" + +# Artifactory Storage Settings +- variable: artifactory.persistence.size + default: "50Gi" + description: "Artifactory persistent volume size" + type: string + label: Artifactory Persistent Volume Size + required: true + group: "Artifactory Storage" +- variable: artifactory.persistence.type + default: "file-system" + description: 
"Artifactory persistent storage type" + type: enum + label: Artifactory Persistent Storage Type + required: true + options: + - "file-system" + - "nfs" + - "google-storage" + - "aws-s3" + group: "Artifactory Storage" + +# Storage Type Settings +- variable: artifactory.persistence.nfs.ip + default: "" + type: string + group: "Artifactory Storage" + label: NFS Server IP + description: "NFS server IP" + show_if: "artifactory.persistence.type=nfs" +- variable: artifactory.persistence.nfs.haDataMount + default: "/data" + type: string + label: NFS Data Directory + description: "NFS data directory" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=nfs" +- variable: artifactory.persistence.nfs.haBackupMount + default: "/backup" + type: string + label: NFS Backup Directory + description: "NFS backup directory" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=nfs" +- variable: artifactory.persistence.nfs.dataDir + default: "/var/opt/jfrog/artifactory-ha" + type: string + label: HA Data Directory + description: "HA data directory" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=nfs" +- variable: artifactory.persistence.nfs.backupDir + default: "/var/opt/jfrog/artifactory-backup" + type: string + label: HA Backup Directory + description: "HA backup directory" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=nfs" +- variable: artifactory.persistence.nfs.capacity + default: "200Gi" + type: string + label: NFS PVC Size + description: "NFS PVC size" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=nfs" + +# Google storage settings +- variable: artifactory.persistence.googleStorage.bucketName + default: "artifactory-ha-gcp" + type: string + label: Google Storage Bucket Name + description: "Google storage bucket name" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=google-storage" +- variable: artifactory.persistence.googleStorage.identity + 
default: "" + type: string + label: Google Storage Service Account ID + description: "Google Storage service account id" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=google-storage" +- variable: artifactory.persistence.googleStorage.credential + default: "" + type: string + label: Google Storage Service Account Key + description: "Google Storage service account key" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=google-storage" +- variable: artifactory.persistence.googleStorage.path + default: "artifactory-ha/filestore" + type: string + label: Google Storage Path In Bucket + description: "Google Storage path in bucket" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=google-storage" +# awsS3 storage settings +- variable: artifactory.persistence.awsS3.bucketName + default: "artifactory-ha-aws" + type: string + label: AWS S3 Bucket Name + description: "AWS S3 bucket name" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=aws-s3" +- variable: artifactory.persistence.awsS3.region + default: "" + type: string + label: AWS S3 Bucket Region + description: "AWS S3 bucket region" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=aws-s3" +- variable: artifactory.persistence.awsS3.identity + default: "" + type: string + label: AWS S3 AWS_ACCESS_KEY_ID + description: "AWS S3 AWS_ACCESS_KEY_ID" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=aws-s3" +- variable: artifactory.persistence.awsS3.credential + default: "" + type: string + label: AWS S3 AWS_SECRET_ACCESS_KEY + description: "AWS S3 AWS_SECRET_ACCESS_KEY" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=aws-s3" +- variable: artifactory.persistence.awsS3.path + default: "artifactory-ha/filestore" + type: string + label: AWS S3 Path In Bucket + description: "AWS S3 path in bucket" + group: "Artifactory Storage" + show_if: "artifactory.persistence.type=aws-s3" 
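The storage questions above all map to keys under `artifactory.persistence` in the chart's values. As a sketch only (the key paths are taken from the questions above; the bucket name and region are illustrative placeholders, and credentials would normally come from a secret rather than plain values), an equivalent values override selecting the AWS S3 filestore might look like:

```yaml
# Hedged example values override -- not shipped with the chart.
# Key paths mirror the questions.yml variables above; values are placeholders.
artifactory:
  persistence:
    type: aws-s3
    awsS3:
      bucketName: my-artifactory-bucket   # placeholder bucket name
      region: us-east-1                   # placeholder region
      path: artifactory-ha/filestore      # default path from the question above
```

Passing such a file with `-f` has the same effect as answering the corresponding questions in the Rancher UI.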
+ +# Database Settings +- variable: postgresql.enabled + default: true + description: "Enable PostgreSQL" + type: boolean + required: true + label: Enable PostgreSQL + group: "Database Settings" + show_subquestion_if: true + subquestions: + - variable: postgresql.postgresqlPassword + default: "" + description: "PostgreSQL password" + type: password + required: true + label: PostgreSQL Password + group: "Database Settings" + show_if: "postgresql.enabled=true" + - variable: postgresql.persistence.size + default: 20Gi + description: "PostgreSQL persistent volume size" + type: string + label: PostgreSQL Persistent Volume Size + show_if: "postgresql.enabled=true" + - variable: postgresql.persistence.storageClass + default: "" + description: "If undefined or null, uses the default StorageClass. Defaults to null" + type: storageclass + label: Default StorageClass for PostgreSQL + show_if: "postgresql.enabled=true" + - variable: postgresql.resources.requests.cpu + default: "200m" + description: "PostgreSQL initial cpu request" + type: string + label: PostgreSQL Initial CPU Request + show_if: "postgresql.enabled=true" + - variable: postgresql.resources.requests.memory + default: "500Mi" + description: "PostgreSQL initial memory request" + type: string + label: PostgreSQL Initial Memory Request + show_if: "postgresql.enabled=true" + - variable: postgresql.resources.limits.cpu + default: "1" + description: "PostgreSQL cpu limit" + type: string + label: PostgreSQL CPU Limit + show_if: "postgresql.enabled=true" + - variable: postgresql.resources.limits.memory + default: "1Gi" + description: "PostgreSQL memory limit" + type: string + label: PostgreSQL Memory Limit + show_if: "postgresql.enabled=true" +- variable: database.type + default: "postgresql" + description: "External database type (postgresql, mysql, oracle or mssql)" + type: enum + required: true + label: External Database Type + group: "Database Settings" + show_if: "postgresql.enabled=false" + options: + - "postgresql" 
+ - "mysql" + - "oracle" + - "mssql" +- variable: database.url + default: "" + description: "External database URL. If you set the URL, leave host and port empty" + type: string + label: External Database URL + group: "Database Settings" + show_if: "postgresql.enabled=false" +- variable: database.host + default: "" + description: "External database hostname" + type: string + label: External Database Hostname + group: "Database Settings" + show_if: "postgresql.enabled=false" +- variable: database.port + default: "" + description: "External database port" + type: string + label: External Database Port + group: "Database Settings" + show_if: "postgresql.enabled=false" +- variable: database.user + default: "" + description: "External database username" + type: string + label: External Database Username + group: "Database Settings" + show_if: "postgresql.enabled=false" +- variable: database.password + default: "" + description: "External database password" + type: password + label: External Database Password + group: "Database Settings" + show_if: "postgresql.enabled=false" + +# Advanced Settings +- variable: advancedOptions + default: false + description: "Show advanced configurations" + label: Show Advanced Configurations + type: boolean + show_subquestion_if: true + group: "Advanced Options" + subquestions: + - variable: artifactory.primary.resources.requests.cpu + default: "500m" + description: "Artifactory primary node initial cpu request" + type: string + label: Artifactory Primary Node Initial CPU Request + - variable: artifactory.primary.resources.requests.memory + default: "1Gi" + description: "Artifactory primary node initial memory request" + type: string + label: Artifactory Primary Node Initial Memory Request + - variable: artifactory.primary.javaOpts.xms + default: "1g" + description: "Artifactory primary node java Xms size" + type: string + label: Artifactory Primary Node Java Xms Size + - variable: artifactory.primary.resources.limits.cpu + default: "2" + 
description: "Artifactory primary node cpu limit" + type: string + label: Artifactory Primary Node CPU Limit + - variable: artifactory.primary.resources.limits.memory + default: "4Gi" + description: "Artifactory primary node memory limit" + type: string + label: Artifactory Primary Node Memory Limit + - variable: artifactory.primary.javaOpts.xmx + default: "4g" + description: "Artifactory primary node java Xmx size" + type: string + label: Artifactory Primary Node Java Xmx Size + - variable: artifactory.node.resources.requests.cpu + default: "500m" + description: "Artifactory member node initial cpu request" + type: string + label: Artifactory Member Node Initial CPU Request + - variable: artifactory.node.resources.requests.memory + default: "2Gi" + description: "Artifactory member node initial memory request" + type: string + label: Artifactory Member Node Initial Memory Request + - variable: artifactory.node.javaOpts.xms + default: "1g" + description: "Artifactory member node java Xms size" + type: string + label: Artifactory Member Node Java Xms Size + - variable: artifactory.node.resources.limits.cpu + default: "2" + description: "Artifactory member node cpu limit" + type: string + label: Artifactory Member Node CPU Limit + - variable: artifactory.node.resources.limits.memory + default: "4Gi" + description: "Artifactory member node memory limit" + type: string + label: Artifactory Member Node Memory Limit + - variable: artifactory.node.javaOpts.xmx + default: "4g" + description: "Artifactory member node java Xmx size" + type: string + label: Artifactory Member Node Java Xmx Size + +# Internal Settings +- variable: installerInfo + default: '\{\"productId\": \"RancherHelm_artifactory-ha/7.17.5\", \"features\": \[\{\"featureId\": \"Partner/ACC-007246\"\}\]\}' + type: string + group: "Internal Settings (Do not modify)" diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-2xlarge-extra-config.yaml 
b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-2xlarge-extra-config.yaml new file mode 100644 index 0000000000..6afc491dc8 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-2xlarge-extra-config.yaml @@ -0,0 +1,44 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=200 + -Dartifactory.async.poolMaxQueueSize=100000 + -Dartifactory.http.client.max.total.connections=150 + -Dartifactory.http.client.max.connections.per.route=150 + -Dartifactory.access.client.max.connections=200 + -Dartifactory.metadata.event.operator.threads=5 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=1048576 + -XX:MaxDirectMemorySize=1024m + + tomcat: + connector: + maxThreads: 800 + extraConfig: 'acceptCount="1200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 200 + +access: + tomcat: + connector: + maxThreads: 200 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + + database: + maxOpenConnections: 200 + +metadata: + database: + maxOpenConnections: 200 + diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-2xlarge.yaml b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-2xlarge.yaml new file mode 100644 index 0000000000..02cf7f94e4 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-2xlarge.yaml @@ -0,0 +1,127 @@ +############################################################## +# The 2xlarge sizing +# This size is intended for very large 
organizations. It can be increased with adding replicas +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 6 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "4" + memory: 20Gi + limits: + # cpu: "20" + memory: 24Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "16" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +router: + resources: + requests: + cpu: "1" + memory: 1Gi + limits: + # cpu: "6" + memory: 2Gi + +frontend: + resources: + requests: + cpu: "1" + memory: 500Mi + limits: + # cpu: "5" + memory: 1Gi + +metadata: + resources: + requests: + cpu: "1" + memory: 500Mi + limits: + # cpu: "5" + memory: 2Gi + +event: + resources: + requests: + cpu: 200m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +access: + resources: + requests: + cpu: 1 + memory: 2Gi + limits: + # cpu: 2 + memory: 4Gi + +observability: + resources: + requests: + cpu: 200m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +jfconnect: + resources: + requests: + cpu: 100m + memory: 100Mi + limits: + # cpu: "1" + memory: 250Mi + +nginx: + replicaCount: 3 + disableProxyBuffering: true + resources: + requests: + cpu: "4" + memory: "6Gi" + limits: + # cpu: "14" + memory: "8Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "5000" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + 
memory: 256Gi + cpu: "64" + limits: + memory: 256Gi + # cpu: "128" diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-large-extra-config.yaml b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-large-extra-config.yaml new file mode 100644 index 0000000000..fac24ad687 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-large-extra-config.yaml @@ -0,0 +1,44 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=65 + -Dartifactory.async.corePoolSize=80 + -Dartifactory.async.poolMaxQueueSize=20000 + -Dartifactory.http.client.max.total.connections=100 + -Dartifactory.http.client.max.connections.per.route=100 + -Dartifactory.access.client.max.connections=125 + -Dartifactory.metadata.event.operator.threads=4 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=524288 + -XX:MaxDirectMemorySize=512m + + tomcat: + connector: + maxThreads: 500 + extraConfig: 'acceptCount="800" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 100 + +access: + tomcat: + connector: + maxThreads: 125 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + + database: + maxOpenConnections: 100 + +metadata: + database: + maxOpenConnections: 100 + diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-large.yaml b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-large.yaml new file mode 100644 index 0000000000..504edf1ed8 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-large.yaml @@ -0,0 
+1,127 @@ +############################################################## +# The large sizing +# This size is intended for large organizations. It can be increased with adding replicas or moving to the xlarge sizing +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 3 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "2" + memory: 10Gi + limits: + # cpu: "14" + memory: 12Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "8" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 1 + memory: 2Gi + limits: + # cpu: 2 + memory: 3Gi + +router: + resources: + requests: + cpu: 200m + memory: 400Mi + limits: + # cpu: "4" + memory: 1Gi + +frontend: + resources: + requests: + cpu: 200m + memory: 300Mi + limits: + # cpu: "3" + memory: 1Gi + +metadata: + resources: + requests: + cpu: 200m + memory: 200Mi + limits: + # cpu: "4" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + disableProxyBuffering: true + resources: + requests: + cpu: "1" + memory: "500Mi" + limits: + # cpu: "4" + memory: "1Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "600" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + 
requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 64Gi + cpu: "16" + limits: + memory: 64Gi + # cpu: "32" diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-medium-extra-config.yaml b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-medium-extra-config.yaml new file mode 100644 index 0000000000..b2b20b198b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-medium-extra-config.yaml @@ -0,0 +1,45 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=40 + -Dartifactory.async.poolMaxQueueSize=10000 + -Dartifactory.http.client.max.total.connections=50 + -Dartifactory.http.client.max.connections.per.route=50 + -Dartifactory.access.client.max.connections=75 + -Dartifactory.metadata.event.operator.threads=3 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=262144 + -XX:MaxDirectMemorySize=256m + + tomcat: + connector: + maxThreads: 300 + extraConfig: 'acceptCount="600" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 50 + +access: + tomcat: + connector: + maxThreads: 75 + + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + + database: + maxOpenConnections: 50 + +metadata: + database: + maxOpenConnections: 50 + diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-medium.yaml 
b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-medium.yaml new file mode 100644 index 0000000000..93b79788df --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-medium.yaml @@ -0,0 +1,127 @@ +############################################################## +# The medium sizing +# This size is just 2 replicas of the small size. Vertical sizing of all services is not changed +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 2 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 4Gi + limits: + # cpu: "10" + memory: 5Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +router: + resources: + requests: + cpu: 100m + memory: 250Mi + limits: + # cpu: "1" + memory: 500Mi + +frontend: + resources: + requests: + cpu: 100m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +access: + resources: + requests: + cpu: 1 + memory: 1.5Gi + limits: + # cpu: 1.5 + memory: 2Gi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + disableProxyBuffering: true + resources: + requests: + cpu: "100m" + memory: "100Mi" + limits: + # cpu: "2" + memory: "500Mi" + +postgresql: + 
postgresqlExtendedConf: + maxConnections: "200" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 32Gi + cpu: "8" + limits: + memory: 32Gi + # cpu: "16" \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-small-extra-config.yaml b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-small-extra-config.yaml new file mode 100644 index 0000000000..e8329f1a3e --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-small-extra-config.yaml @@ -0,0 +1,43 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=40 + -Dartifactory.async.poolMaxQueueSize=10000 + -Dartifactory.http.client.max.total.connections=50 + -Dartifactory.http.client.max.connections.per.route=50 + -Dartifactory.access.client.max.connections=75 + -Dartifactory.metadata.event.operator.threads=3 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=262144 + -XX:MaxDirectMemorySize=256m + + tomcat: + connector: + maxThreads: 300 + extraConfig: 'acceptCount="600" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 50 + +access: + tomcat: + connector: + maxThreads: 75 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 
+ database: + maxOpenConnections: 50 + +metadata: + database: + maxOpenConnections: 50 + diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-small.yaml b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-small.yaml new file mode 100644 index 0000000000..b75a22323f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-small.yaml @@ -0,0 +1,127 @@ +############################################################## +# The small sizing +# This is the size recommended for running Artifactory for small teams +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 1 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 4Gi + limits: + # cpu: "10" + memory: 5Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +router: + resources: + requests: + cpu: 100m + memory: 250Mi + limits: + # cpu: "1" + memory: 500Mi + +access: + resources: + requests: + cpu: 500m + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +frontend: + resources: + requests: + cpu: 100m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 1 + 
disableProxyBuffering: true + resources: + requests: + cpu: "100m" + memory: "100Mi" + limits: + # cpu: "2" + memory: "500Mi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "100" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 16Gi + cpu: "4" + limits: + memory: 16Gi + # cpu: "10" diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xlarge-extra-config.yaml b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xlarge-extra-config.yaml new file mode 100644 index 0000000000..8d04850ad8 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xlarge-extra-config.yaml @@ -0,0 +1,42 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=65 + -Dartifactory.async.corePoolSize=160 + -Dartifactory.async.poolMaxQueueSize=50000 + -Dartifactory.http.client.max.total.connections=150 + -Dartifactory.http.client.max.connections.per.route=150 + -Dartifactory.access.client.max.connections=150 + -Dartifactory.metadata.event.operator.threads=5 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=1048576 + -XX:MaxDirectMemorySize=1024m + tomcat: + connector: + maxThreads: 600 + extraConfig: 'acceptCount="1200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 150 + 
+access: + tomcat: + connector: + maxThreads: 150 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 150 + +metadata: + database: + maxOpenConnections: 150 + diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xlarge.yaml b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xlarge.yaml new file mode 100644 index 0000000000..550bd051d6 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xlarge.yaml @@ -0,0 +1,127 @@ +############################################################## +# The xlarge sizing +# This size is intended for very large organizations. It can be increased with adding replicas +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. 
+ replicaCount: 4 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "2" + memory: 14Gi + limits: + # cpu: "14" + memory: 16Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "16" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 1 + memory: 2Gi + limits: + # cpu: 2 + memory: 4Gi + +router: + resources: + requests: + cpu: 200m + memory: 500Mi + limits: + # cpu: "4" + memory: 1Gi + +frontend: + resources: + requests: + cpu: 200m + memory: 300Mi + limits: + # cpu: "3" + memory: 1Gi + +metadata: + resources: + requests: + cpu: 200m + memory: 200Mi + limits: + # cpu: "4" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + disableProxyBuffering: true + resources: + requests: + cpu: "4" + memory: "4Gi" + limits: + # cpu: "12" + memory: "8Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "2000" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 128Gi + cpu: "32" + limits: + memory: 128Gi + # cpu: "64" diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xsmall-extra-config.yaml b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xsmall-extra-config.yaml new file mode 100644 index 0000000000..1371e87b8d --- /dev/null +++ 
b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xsmall-extra-config.yaml @@ -0,0 +1,43 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file are taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + primary: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=10 + -Dartifactory.async.poolMaxQueueSize=2000 + -Dartifactory.http.client.max.total.connections=20 + -Dartifactory.http.client.max.connections.per.route=20 + -Dartifactory.access.client.max.connections=15 + -Dartifactory.metadata.event.operator.threads=2 + -XX:MaxMetaspaceSize=400m + -XX:CompressedClassSpaceSize=96m + -Djdk.nio.maxCachedBufferSize=131072 + -XX:MaxDirectMemorySize=128m + tomcat: + connector: + maxThreads: 50 + extraConfig: 'acceptCount="200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 15 + +access: + tomcat: + connector: + maxThreads: 15 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 15 + +metadata: + database: + maxOpenConnections: 15 + diff --git a/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xsmall.yaml b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xsmall.yaml new file mode 100644 index 0000000000..3f7b07138b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/sizing/artifactory-xsmall.yaml @@ -0,0 +1,127 @@ +############################################################## +# The xsmall sizing +# This is the minimum size recommended for running Artifactory +############################################################## +splitServicesToContainers: true +artifactory: + primary: + # 
Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 1 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 3Gi + limits: + # cpu: "10" + memory: 4Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 500m + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +router: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +frontend: + resources: + requests: + cpu: 50m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 1 + disableProxyBuffering: true + resources: + requests: + cpu: "50m" + memory: "50Mi" + limits: + # cpu: "1" + memory: "250Mi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "50" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 8Gi + cpu: "2" + limits: + memory: 8Gi + # cpu: "8" \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/NOTES.txt 
b/charts/jfrog/artifactory-ha/107.90.14/templates/NOTES.txt new file mode 100644 index 0000000000..30dfab8b8a --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/NOTES.txt @@ -0,0 +1,149 @@ +Congratulations. You have just deployed JFrog Artifactory HA! + +{{- if .Values.artifactory.masterKey }} +{{- if and (not .Values.artifactory.masterKeySecretName) (eq .Values.artifactory.masterKey "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF") }} + + +***************************************** WARNING ****************************************** +* Your Artifactory master key is still set to the provided example: * +* artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF * +* * +* You should change this to your own generated key: * +* $ export MASTER_KEY=$(openssl rand -hex 32) * +* $ echo ${MASTER_KEY} * +* * +* Pass the created master key to helm with '--set artifactory.masterKey=${MASTER_KEY}' * +* * +* Alternatively, you can use a pre-existing secret with a key called master-key with * +* '--set artifactory.masterKeySecretName=${SECRET_NAME}' * +******************************************************************************************** +{{- end }} +{{- end }} + +{{- if .Values.artifactory.joinKey }} +{{- if eq .Values.artifactory.joinKey "EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE" }} + + +***************************************** WARNING ****************************************** +* Your Artifactory join key is still set to the provided example: * +* artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE * +* * +* You should change this to your own generated key: * +* $ export JOIN_KEY=$(openssl rand -hex 32) * +* $ echo ${JOIN_KEY} * +* * +* Pass the created join key to helm with '--set artifactory.joinKey=${JOIN_KEY}' * +* * +******************************************************************************************** +{{- end }} +{{- end }} + + +{{- if .Values.artifactory.setSecurityContext }}
+****************************************** WARNING ********************************************** +* From chart version 107.84.x, `setSecurityContext` has been renamed to `podSecurityContext`. * + Please change your values.yaml before upgrading. For more info, refer to the 107.84.x changelog * +************************************************************************************************* +{{- end }} + +{{- if and (or (or (or (or (or ( or ( or ( or (or (or ( or (or .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName) .Values.systemYamlOverride.existingSecret) (or .Values.artifactory.customCertificates.enabled .Values.global.customCertificates.enabled)) .Values.aws.licenseConfigSecretName) .Values.artifactory.persistence.customBinarystoreXmlSecret) .Values.access.customCertificatesSecretName) .Values.systemYamlOverride.existingSecret) .Values.artifactory.license.secret) .Values.artifactory.userPluginSecrets) (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey)) (and .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName)) (or .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName)) .Values.artifactory.unifiedSecretInstallation }} +****************************************** WARNING ************************************************************************************************** +* The unifiedSecretInstallation flag is currently enabled, which creates the unified secret. The existing secrets will continue as separate secrets.* +* Update the values.yaml with the existing secrets to add them to the unified secret.
* +***************************************************************************************************************************************************** +{{- end }} + +{{- if .Values.postgresql.enabled }} + +DATABASE: +To extract the database password, run the following: +export DB_PASSWORD=$(kubectl get --namespace {{ .Release.Namespace }} $(kubectl get secret --namespace {{ .Release.Namespace }} -o name | grep postgresql) -o jsonpath="{.data.postgresql-password}" | base64 --decode) +echo ${DB_PASSWORD} +{{- end }} + +SETUP: +1. Get the Artifactory IP and URL + + {{- if contains "NodePort" .Values.nginx.service.type }} + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "artifactory-ha.nginx.fullname" . }}) + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + echo http://$NODE_IP:$NODE_PORT/ + + {{- else if contains "LoadBalancer" .Values.nginx.service.type }} + NOTE: It may take a few minutes for the LoadBalancer public IP to be available! + + You can watch the status of the service by running 'kubectl get svc -w {{ template "artifactory-ha.nginx.fullname" . }}' + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "artifactory-ha.nginx.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + echo http://$SERVICE_IP/ + + {{- else if contains "ClusterIP" .Values.nginx.service.type }} + export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "component={{ .Values.nginx.name }}" -o jsonpath="{.items[0].metadata.name}") + kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME 8080:80 + echo http://127.0.0.1:8080 + + {{- end }} + +2. Open Artifactory in your browser + Default credentials for Artifactory: + user: admin + password: password + + {{- if .Values.artifactory.license.secret }} + +3.
Artifactory license(s) is deployed as a Kubernetes secret. This method is relevant for initial deployment only! + Updating the license should be done via the Artifactory UI or REST API. If you want to keep managing the Artifactory license using the same method, you can use artifactory.copyOnEveryStartup in values.yaml. + + {{- else }} + +3. Add HA licenses to activate Artifactory HA through the Artifactory UI + NOTE: Each Artifactory node requires a valid license. See https://www.jfrog.com/confluence/display/RTF/HA+Installation+and+Setup for more details. + + {{- end }} + +{{ if or .Values.artifactory.primary.javaOpts.jmx.enabled .Values.artifactory.node.javaOpts.jmx.enabled }} +JMX configuration: +{{- if not (contains "LoadBalancer" .Values.artifactory.service.type) }} +If you want to access JMX from your computer with jconsole, you should set ".Values.artifactory.service.type=LoadBalancer" !!! +{{ end }} + +1. Get the Artifactory service IP: +{{- if .Values.artifactory.primary.javaOpts.jmx.enabled }} +export PRIMARY_SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "artifactory-ha.primary.name" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +{{- end }} +{{- if .Values.artifactory.node.javaOpts.jmx.enabled }} +export MEMBER_SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "artifactory-ha.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') +{{- end }} + +2. Map the service name to the service IP in /etc/hosts: +{{- if .Values.artifactory.primary.javaOpts.jmx.enabled }} +sudo sh -c "echo \"${PRIMARY_SERVICE_IP} {{ template "artifactory-ha.primary.name" . }}\" >> /etc/hosts" +{{- end }} +{{- if .Values.artifactory.node.javaOpts.jmx.enabled }} +sudo sh -c "echo \"${MEMBER_SERVICE_IP} {{ template "artifactory-ha.fullname" . }}\" >> /etc/hosts" +{{- end }} + +3. Launch jconsole: +{{- if .Values.artifactory.primary.javaOpts.jmx.enabled }} +jconsole {{ template "artifactory-ha.primary.name" .
}}:{{ .Values.artifactory.primary.javaOpts.jmx.port }} +{{- end }} +{{- if .Values.artifactory.node.javaOpts.jmx.enabled }} +jconsole {{ template "artifactory-ha.fullname" . }}:{{ .Values.artifactory.node.javaOpts.jmx.port }} +{{- end }} +{{- end }} + + +{{- if ge (.Values.artifactory.node.replicaCount | int) 1 }} +***************************************** WARNING ***************************************************************************** +* Member node(s) are currently enabled; they will be deprecated in upcoming releases * +* It is recommended to upgrade from primary-members to primary-only. * +* It can be done by deploying the chart ( >=107.59.x) with the new values. Also, please refer to the changelog of the 107.59.x chart * +* More Info: https://jfrog.com/help/r/jfrog-installation-setup-documentation/cloud-native-high-availability * +******************************************************************************************************************************* +{{- end }} + +{{- if and .Values.nginx.enabled .Values.ingress.hosts }} +***************************************** WARNING ***************************************************************************** +* When nginx is enabled, .Values.ingress.hosts will be deprecated in upcoming releases * +* It is recommended to use nginx.hosts instead of ingress.hosts +******************************************************************************************************************************* +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/_helpers.tpl b/charts/jfrog/artifactory-ha/107.90.14/templates/_helpers.tpl new file mode 100644 index 0000000000..d6fb229fed --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/_helpers.tpl @@ -0,0 +1,563 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart.
+*/}} +{{- define "artifactory-ha.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +The primary node name +*/}} +{{- define "artifactory-ha.primary.name" -}} +{{- if .Values.nameOverride -}} +{{- printf "%s-primary" .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := .Release.Name | trunc 29 -}} +{{- printf "%s-%s-primary" $name .Chart.Name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} + +{{/* +The member node name +*/}} +{{- define "artifactory-ha.node.name" -}} +{{- if .Values.nameOverride -}} +{{- printf "%s-member" .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := .Release.Name | trunc 29 -}} +{{- printf "%s-%s-member" $name .Chart.Name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} + +{{/* +Expand the name nginx service. +*/}} +{{- define "artifactory-ha.nginx.name" -}} +{{- default .Values.nginx.name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "artifactory-ha.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. 
+*/}} +{{- define "artifactory-ha.nginx.fullname" -}} +{{- if .Values.nginx.fullnameOverride -}} +{{- .Values.nginx.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nginx.name -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create the name of the service account to use +*/}} +{{- define "artifactory-ha.serviceAccountName" -}} +{{- if .Values.serviceAccount.create -}} +{{ default (include "artifactory-ha.fullname" .) .Values.serviceAccount.name }} +{{- else -}} +{{ default "default" .Values.serviceAccount.name }} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "artifactory-ha.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Generate SSL certificates +*/}} +{{- define "artifactory-ha.gen-certs" -}} +{{- $altNames := list ( printf "%s.%s" (include "artifactory-ha.fullname" .) .Release.Namespace ) ( printf "%s.%s.svc" (include "artifactory-ha.fullname" .) .Release.Namespace ) -}} +{{- $ca := genCA "artifactory-ca" 365 -}} +{{- $cert := genSignedCert ( include "artifactory-ha.fullname" . 
) nil $altNames 365 $ca -}} +tls.crt: {{ $cert.Cert | b64enc }} +tls.key: {{ $cert.Key | b64enc }} +{{- end -}} + +{{/* +Scheme (http/https) based on Access or Router TLS enabled/disabled +*/}} +{{- define "artifactory-ha.scheme" -}} +{{- if or .Values.access.accessConfig.security.tls .Values.router.tlsEnabled -}} +{{- printf "%s" "https" -}} +{{- else -}} +{{- printf "%s" "http" -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve joinKey value +*/}} +{{- define "artifactory-ha.joinKey" -}} +{{- if .Values.global.joinKey -}} +{{- .Values.global.joinKey -}} +{{- else if .Values.artifactory.joinKey -}} +{{- .Values.artifactory.joinKey -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve jfConnectToken value +*/}} +{{- define "artifactory-ha.jfConnectToken" -}} +{{- .Values.artifactory.jfConnectToken -}} +{{- end -}} + +{{/* +Resolve masterKey value +*/}} +{{- define "artifactory-ha.masterKey" -}} +{{- if .Values.global.masterKey -}} +{{- .Values.global.masterKey -}} +{{- else if .Values.artifactory.masterKey -}} +{{- .Values.artifactory.masterKey -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve joinKeySecretName value +*/}} +{{- define "artifactory-ha.joinKeySecretName" -}} +{{- if .Values.global.joinKeySecretName -}} +{{- .Values.global.joinKeySecretName -}} +{{- else if .Values.artifactory.joinKeySecretName -}} +{{- .Values.artifactory.joinKeySecretName -}} +{{- else -}} +{{ include "artifactory-ha.fullname" . }} +{{- end -}} +{{- end -}} + +{{/* +Resolve jfConnectTokenSecretName value +*/}} +{{- define "artifactory-ha.jfConnectTokenSecretName" -}} +{{- if .Values.artifactory.jfConnectTokenSecretName -}} +{{- .Values.artifactory.jfConnectTokenSecretName -}} +{{- else -}} +{{ include "artifactory-ha.fullname" . 
}} +{{- end -}} +{{- end -}} + +{{/* +Resolve masterKeySecretName value +*/}} +{{- define "artifactory-ha.masterKeySecretName" -}} +{{- if .Values.global.masterKeySecretName -}} +{{- .Values.global.masterKeySecretName -}} +{{- else if .Values.artifactory.masterKeySecretName -}} +{{- .Values.artifactory.masterKeySecretName -}} +{{- else -}} +{{ include "artifactory-ha.fullname" . }} +{{- end -}} +{{- end -}} + +{{/* +Resolve imagePullSecrets value +*/}} +{{- define "artifactory-ha.imagePullSecrets" -}} +{{- if .Values.global.imagePullSecrets }} +imagePullSecrets: +{{- range .Values.global.imagePullSecrets }} + - name: {{ . }} +{{- end }} +{{- else if .Values.imagePullSecrets }} +imagePullSecrets: +{{- range .Values.imagePullSecrets }} + - name: {{ . }} +{{- end }} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainersBegin value +*/}} +{{- define "artifactory-ha.customInitContainersBegin" -}} +{{- if .Values.global.customInitContainersBegin -}} +{{- .Values.global.customInitContainersBegin -}} +{{- end -}} +{{- if .Values.artifactory.customInitContainersBegin -}} +{{- .Values.artifactory.customInitContainersBegin -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainers value +*/}} +{{- define "artifactory-ha.customInitContainers" -}} +{{- if .Values.global.customInitContainers -}} +{{- .Values.global.customInitContainers -}} +{{- end -}} +{{- if .Values.artifactory.customInitContainers -}} +{{- .Values.artifactory.customInitContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumes value +*/}} +{{- define "artifactory-ha.customVolumes" -}} +{{- if .Values.global.customVolumes -}} +{{- .Values.global.customVolumes -}} +{{- end -}} +{{- if .Values.artifactory.customVolumes -}} +{{- .Values.artifactory.customVolumes -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve unifiedCustomSecretVolumeName value +*/}} +{{- define "artifactory-ha.unifiedCustomSecretVolumeName" -}} +{{- printf "%s-%s" (include "artifactory-ha.name" .) 
("unified-secret-volume") | trunc 63 -}} +{{- end -}} + +{{/* +Check the Duplication of volume names for secrets. If unifiedSecretInstallation is enabled then the method is checking for volume names, +if the volume exists in customVolume then an extra volume with the same name will not be getting added in unifiedSecretInstallation case.*/}} +{{- define "artifactory-ha.checkDuplicateUnifiedCustomVolume" -}} +{{- if or .Values.global.customVolumes .Values.artifactory.customVolumes -}} +{{- $val := (tpl (include "artifactory-ha.customVolumes" .) .) | toJson -}} +{{- contains (include "artifactory-ha.unifiedCustomSecretVolumeName" .) $val | toString -}} +{{- else -}} +{{- printf "%s" "false" -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumeMounts value +*/}} +{{- define "artifactory-ha.customVolumeMounts" -}} +{{- if .Values.global.customVolumeMounts -}} +{{- .Values.global.customVolumeMounts -}} +{{- end -}} +{{- if .Values.artifactory.customVolumeMounts -}} +{{- .Values.artifactory.customVolumeMounts -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customSidecarContainers value +*/}} +{{- define "artifactory-ha.customSidecarContainers" -}} +{{- if .Values.global.customSidecarContainers -}} +{{- .Values.global.customSidecarContainers -}} +{{- end -}} +{{- if .Values.artifactory.customSidecarContainers -}} +{{- .Values.artifactory.customSidecarContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper artifactory chart image names +*/}} +{{- define "artifactory-ha.getImageInfoByValue" -}} +{{- $dot := index . 0 }} +{{- $indexReference := index . 
1 }} +{{- $registryName := index $dot.Values $indexReference "image" "registry" -}} +{{- $repositoryName := index $dot.Values $indexReference "image" "repository" -}} +{{- $tag := "" -}} +{{- if and (eq $indexReference "artifactory") (hasKey $dot.Values "artifactoryService") }} + {{- if default false $dot.Values.artifactoryService.enabled }} + {{- $indexReference = "artifactoryService" -}} + {{- $tag = default $dot.Chart.Annotations.artifactoryServiceVersion (index $dot.Values $indexReference "image" "tag") | toString -}} + {{- $repositoryName = index $dot.Values $indexReference "image" "repository" -}} + {{- else -}} + {{- $tag = default $dot.Chart.AppVersion (index $dot.Values $indexReference "image" "tag") | toString -}} + {{- end -}} +{{- else -}} + {{- $tag = default $dot.Chart.AppVersion (index $dot.Values $indexReference "image" "tag") | toString -}} +{{- end -}} +{{- if $dot.Values.global }} + {{- if and $dot.Values.splitServicesToContainers $dot.Values.global.versions.router (eq $indexReference "router") }} + {{- $tag = $dot.Values.global.versions.router | toString -}} + {{- end -}} + {{- if and $dot.Values.global.versions.initContainers (eq $indexReference "initContainers") }} + {{- $tag = $dot.Values.global.versions.initContainers | toString -}} + {{- end -}} + {{- if $dot.Values.global.versions.artifactory }} + {{- if or (eq $indexReference "artifactory") (eq $indexReference "metadata") (eq $indexReference "nginx") (eq $indexReference "observability") }} + {{- $tag = $dot.Values.global.versions.artifactory | toString -}} + {{- end -}} + {{- end -}} + {{- if $dot.Values.global.imageRegistry }} + {{- printf "%s/%s:%s" $dot.Values.global.imageRegistry $repositoryName $tag -}} + {{- else -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} + {{- end -}} +{{- else -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper artifactory app version +*/}} +{{- define 
"artifactory-ha.app.version" -}} +{{- $tag := (splitList ":" ((include "artifactory-ha.getImageInfoByValue" (list . "artifactory" )))) | last | toString -}} +{{- printf "%s" $tag -}} +{{- end -}} + +{{/* +Custom certificate copy command +*/}} +{{- define "artifactory-ha.copyCustomCerts" -}} +echo "Copy custom certificates to {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted"; +mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted; +for file in $(ls -1 /tmp/certs/* | grep -v .key | grep -v ":" | grep -v grep); do if [ -f "${file}" ]; then cp -v ${file} {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted; fi done; +if [ -f {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/tls.crt ]; then mv -v {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/tls.crt {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/ca.crt; fi; +{{- end -}} + +{{/* +Circle of trust certificates copy command +*/}} +{{- define "artifactory.copyCircleOfTrustCertsCerts" -}} +echo "Copy circle of trust certificates to {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted"; +mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; +for file in $(ls -1 /tmp/circleoftrustcerts/* | grep -v .key | grep -v ":" | grep -v grep); do if [ -f "${file}" ]; then cp -v ${file} {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; fi done; +{{- end -}} + +{{/* +Resolve requiredServiceTypes value +*/}} +{{- define "artifactory-ha.router.requiredServiceTypes" -}} +{{- $requiredTypes := "jfrt,jfac" -}} +{{- if not .Values.access.enabled -}} + {{- $requiredTypes = "jfrt" -}} +{{- end -}} +{{- if .Values.observability.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfob" -}} +{{- end -}} +{{- if .Values.metadata.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfmd" -}} +{{- end -}} 
+{{- if .Values.event.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfevt" -}} +{{- end -}} +{{- if .Values.frontend.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jffe" -}} +{{- end -}} +{{- if .Values.jfconnect.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfcon" -}} +{{- end -}} +{{- if .Values.evidence.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfevd" -}} +{{- end -}} +{{- if .Values.mc.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfmc" -}} +{{- end -}} +{{- $requiredTypes -}} +{{- end -}} + +{{/* +nginx scheme (http/https) +*/}} +{{- define "nginx.scheme" -}} +{{- if .Values.nginx.http.enabled -}} +{{- printf "%s" "http" -}} +{{- else -}} +{{- printf "%s" "https" -}} +{{- end -}} +{{- end -}} + + +{{/* +nginx command +*/}} +{{- define "nginx.command" -}} +{{- if .Values.nginx.customCommand }} +{{ toYaml .Values.nginx.customCommand }} +{{- end }} +{{- end -}} + +{{/* +nginx port (8080/8443) based on http/https enabled +*/}} +{{- define "nginx.port" -}} +{{- if .Values.nginx.http.enabled -}} +{{- .Values.nginx.http.internalPort -}} +{{- else -}} +{{- .Values.nginx.https.internalPort -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainers value +*/}} +{{- define "artifactory.nginx.customInitContainers" -}} +{{- if .Values.nginx.customInitContainers -}} +{{- .Values.nginx.customInitContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumes value +*/}} +{{- define "artifactory.nginx.customVolumes" -}} +{{- if .Values.nginx.customVolumes -}} +{{- .Values.nginx.customVolumes -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumeMounts nginx value +*/}} +{{- define "artifactory.nginx.customVolumeMounts" -}} +{{- if .Values.nginx.customVolumeMounts -}} +{{- .Values.nginx.customVolumeMounts -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customSidecarContainers value +*/}} +{{- define "artifactory.nginx.customSidecarContainers" -}} +{{- if 
.Values.nginx.customSidecarContainers -}} +{{- .Values.nginx.customSidecarContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve Artifactory pod primary node selector value +*/}} +{{- define "artifactory.nodeSelector" -}} +nodeSelector: +{{- if .Values.global.nodeSelector }} +{{ toYaml .Values.global.nodeSelector | indent 2 }} +{{- else if .Values.artifactory.primary.nodeSelector }} +{{ toYaml .Values.artifactory.primary.nodeSelector | indent 2 }} +{{- end -}} +{{- end -}} + +{{/* +Resolve Artifactory pod node nodeSelector value +*/}} +{{- define "artifactory.node.nodeSelector" -}} +nodeSelector: +{{- if .Values.global.nodeSelector }} +{{ toYaml .Values.global.nodeSelector | indent 2 }} +{{- else if .Values.artifactory.node.nodeSelector }} +{{ toYaml .Values.artifactory.node.nodeSelector | indent 2 }} +{{- end -}} +{{- end -}} + +{{/* +Resolve Nginx pods node selector value +*/}} +{{- define "nginx.nodeSelector" -}} +nodeSelector: +{{- if .Values.global.nodeSelector }} +{{ toYaml .Values.global.nodeSelector | indent 2 }} +{{- else if .Values.nginx.nodeSelector }} +{{ toYaml .Values.nginx.nodeSelector | indent 2 }} +{{- end -}} +{{- end -}} + +{{/* +Calculate the systemYaml from structured and unstructured text input +*/}} +{{- define "artifactory.finalSystemYaml" -}} +{{ tpl (mergeOverwrite (include "artifactory.systemYaml" . | fromYaml) .Values.artifactory.extraSystemYaml | toYaml) . }} +{{- end -}} + +{{/* +Calculate the systemYaml from the unstructured text input +*/}} +{{- define "artifactory.systemYaml" -}} +{{ include (print $.Template.BasePath "/_system-yaml-render.tpl") . }} +{{- end -}} + +{{/* +Metrics enabled +*/}} +{{- define "metrics.enabled" -}} +shared: + metrics: + enabled: true +{{- end }} + +{{/* +Resolve artifactory metrics +*/}} +{{- define "artifactory.metrics" -}} +{{- if .Values.artifactory.openMetrics -}} +{{- if .Values.artifactory.openMetrics.enabled -}} +{{ include "metrics.enabled" . 
}} +{{- if .Values.artifactory.openMetrics.filebeat }} +{{- if .Values.artifactory.openMetrics.filebeat.enabled }} +{{ include "metrics.enabled" . }} + filebeat: +{{ tpl (.Values.artifactory.openMetrics.filebeat | toYaml) . | indent 6 }} +{{- end -}} +{{- end -}} +{{- end -}} +{{- else if .Values.artifactory.metrics -}} +{{- if .Values.artifactory.metrics.enabled -}} +{{ include "metrics.enabled" . }} +{{- if .Values.artifactory.metrics.filebeat }} +{{- if .Values.artifactory.metrics.filebeat.enabled }} +{{ include "metrics.enabled" . }} + filebeat: +{{ tpl (.Values.artifactory.metrics.filebeat | toYaml) . | indent 6 }} +{{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve unified secret prepend release name +*/}} +{{- define "artifactory.unifiedSecretPrependReleaseName" -}} +{{- if .Values.artifactory.unifiedSecretPrependReleaseName }} +{{- printf "%s" (include "artifactory-ha.fullname" .) -}} +{{- else }} +{{- printf "%s" (include "artifactory-ha.name" .) -}} +{{- end }} +{{- end }} + +{{/* +Resolve nginx hosts value +*/}} +{{- define "artifactory.nginx.hosts" -}} +{{- if .Values.ingress.hosts }} +{{- range .Values.ingress.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?<shost>.+)\.{{ . }} + {{- end -}} +{{- end -}} +{{- else if .Values.nginx.hosts }} +{{- range .Values.nginx.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?<shost>.+)\.{{ . }} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/_system-yaml-render.tpl b/charts/jfrog/artifactory-ha/107.90.14/templates/_system-yaml-render.tpl new file mode 100644 index 0000000000..deaa773ea9 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/_system-yaml-render.tpl @@ -0,0 +1,5 @@ +{{- if .Values.artifactory.systemYaml -}} +{{- tpl .Values.artifactory.systemYaml . -}} +{{- else -}} +{{ (tpl ( $.Files.Get "files/system.yaml" ) .) 
}} +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/additional-resources.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/additional-resources.yaml new file mode 100644 index 0000000000..c4d06f08ad --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/additional-resources.yaml @@ -0,0 +1,3 @@ +{{ if .Values.additionalResources }} +{{ tpl .Values.additionalResources . }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/admin-bootstrap-creds.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/admin-bootstrap-creds.yaml new file mode 100644 index 0000000000..40d91f75e9 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/admin-bootstrap-creds.yaml @@ -0,0 +1,15 @@ +{{- if not (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} +{{- if and .Values.artifactory.admin.password (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-bootstrap-creds + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + bootstrap.creds: {{ (printf "%s@%s=%s" .Values.artifactory.admin.username .Values.artifactory.admin.ip .Values.artifactory.admin.password) | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-access-config.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-access-config.yaml new file mode 100644 index 0000000000..0b96a337d4 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-access-config.yaml @@ -0,0 +1,15 @@ +{{- if and .Values.access.accessConfig (not .Values.artifactory.unifiedSecretInstallation) }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory-ha.fullname" . 
}}-access-config + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +type: Opaque +stringData: + access.config.patch.yml: | +{{ tpl (toYaml .Values.access.accessConfig) . | indent 4 }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-binarystore-secret.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-binarystore-secret.yaml new file mode 100644 index 0000000000..6824fe90f8 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-binarystore-secret.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.artifactory.persistence.customBinarystoreXmlSecret) (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-binarystore + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +stringData: + binarystore.xml: |- +{{- if .Values.artifactory.persistence.binarystoreXml }} +{{ tpl .Values.artifactory.persistence.binarystoreXml . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/binarystore.xml" ) . | indent 4 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-configmaps.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-configmaps.yaml new file mode 100644 index 0000000000..1385bc5782 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-configmaps.yaml @@ -0,0 +1,13 @@ +{{ if .Values.artifactory.configMaps }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory-ha.fullname" . }}-configmaps + labels: + app: {{ template "artifactory-ha.fullname" . 
}} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: +{{ tpl .Values.artifactory.configMaps . | indent 2 }} +{{ end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-custom-secrets.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-custom-secrets.yaml new file mode 100644 index 0000000000..8065fe6865 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-custom-secrets.yaml @@ -0,0 +1,19 @@ +{{- if and .Values.artifactory.customSecrets (not .Values.artifactory.unifiedSecretInstallation) }} +{{- range .Values.artifactory.customSecrets }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory-ha.fullname" $ }}-{{ .name }} + labels: + app: "{{ template "artifactory-ha.name" $ }}" + chart: "{{ template "artifactory-ha.chart" $ }}" + component: "{{ $.Values.artifactory.name }}" + heritage: {{ $.Release.Service | quote }} + release: {{ $.Release.Name | quote }} +type: Opaque +stringData: + {{ .key }}: | +{{ .data | indent 4 -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-database-secrets.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-database-secrets.yaml new file mode 100644 index 0000000000..6daf5db7bc --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-database-secrets.yaml @@ -0,0 +1,24 @@ +{{- if and (not .Values.database.secrets) (not .Values.postgresql.enabled) (not .Values.artifactory.unifiedSecretInstallation) }} +{{- if or .Values.database.url .Values.database.user .Values.database.password }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory-ha.fullname" . }}-database-creds + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +type: Opaque +data: + {{- with .Values.database.url }} + db-url: {{ tpl . $ | b64enc | quote }} + {{- end }} + {{- with .Values.database.user }} + db-user: {{ tpl . $ | b64enc | quote }} + {{- end }} + {{- with .Values.database.password }} + db-password: {{ tpl . $ | b64enc | quote }} + {{- end }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-gcp-credentials-secret.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-gcp-credentials-secret.yaml new file mode 100644 index 0000000000..d90769595b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-gcp-credentials-secret.yaml @@ -0,0 +1,16 @@ +{{- if not .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} +{{- if and (.Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled) (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-gcpcreds + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +stringData: + gcp.credentials.json: |- +{{ tpl .Values.artifactory.persistence.googleStorage.gcpServiceAccount.config . | indent 4 }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-installer-info.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-installer-info.yaml new file mode 100644 index 0000000000..0dff9dc86f --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-installer-info.yaml @@ -0,0 +1,16 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-installer-info + labels: + app: {{ template "artifactory-ha.name" . 
}} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + installer-info.json: | +{{- if .Values.installerInfo -}} +{{- tpl .Values.installerInfo . | nindent 4 -}} +{{- else -}} +{{ (tpl ( .Files.Get "files/installer-info.json" | nindent 4 ) .) }} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-license-secret.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-license-secret.yaml new file mode 100644 index 0000000000..73f900863b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-license-secret.yaml @@ -0,0 +1,16 @@ +{{ if and (not .Values.artifactory.unifiedSecretInstallation) (not .Values.artifactory.license.secret) (not .Values.artifactory.license.licenseKey) }} +{{- with .Values.artifactory.license.licenseKey }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory-ha.fullname" $ }}-license + labels: + app: {{ template "artifactory-ha.name" $ }} + chart: {{ template "artifactory-ha.chart" $ }} + heritage: {{ $.Release.Service }} + release: {{ $.Release.Name }} +type: Opaque +data: + artifactory.lic: {{ . | b64enc | quote }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-migration-scripts.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-migration-scripts.yaml new file mode 100644 index 0000000000..fe40f980fc --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-migration-scripts.yaml @@ -0,0 +1,18 @@ +{{- if .Values.artifactory.migration.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory-ha.fullname" . }}-migration-scripts + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + migrate.sh: | +{{ .Files.Get "files/migrate.sh" | indent 4 }} + migrationHelmInfo.yaml: | +{{ .Files.Get "files/migrationHelmInfo.yaml" | indent 4 }} + migrationStatus.sh: | +{{ .Files.Get "files/migrationStatus.sh" | indent 4 }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-networkpolicy.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-networkpolicy.yaml new file mode 100644 index 0000000000..9924448f04 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-networkpolicy.yaml @@ -0,0 +1,34 @@ +{{- range .Values.networkpolicy }} +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: {{ template "artifactory-ha.fullname" $ }}-{{ .name }}-networkpolicy + labels: + app: {{ template "artifactory-ha.name" $ }} + chart: {{ template "artifactory-ha.chart" $ }} + release: {{ $.Release.Name }} + heritage: {{ $.Release.Service }} +spec: +{{- if .podSelector }} + podSelector: +{{ .podSelector | toYaml | trimSuffix "\n" | indent 4 -}} +{{ else }} + podSelector: {} +{{- end }} + policyTypes: + {{- if .ingress }} + - Ingress + {{- end }} + {{- if .egress }} + - Egress + {{- end }} +{{- if .ingress }} + ingress: +{{ .ingress | toYaml | trimSuffix "\n" | indent 2 -}} +{{- end }} +{{- if .egress }} + egress: +{{ .egress | toYaml | trimSuffix "\n" | indent 2 -}} +{{- end }} +--- +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-nfs-pvc.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-nfs-pvc.yaml new file mode 100644 index 0000000000..6ed7d82f6b --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-nfs-pvc.yaml @@ -0,0 +1,101 @@ +{{- if eq .Values.artifactory.persistence.type "nfs" }} +### Artifactory HA data +apiVersion: v1 +kind: PersistentVolume +metadata: + name: {{ template 
"artifactory-ha.fullname" . }}-data-pv + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + id: {{ template "artifactory-ha.name" . }}-data-pv + type: nfs-volume +spec: + {{- if .Values.artifactory.persistence.nfs.mountOptions }} + mountOptions: +{{ toYaml .Values.artifactory.persistence.nfs.mountOptions | indent 4 }} + {{- end }} + capacity: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + nfs: + server: {{ .Values.artifactory.persistence.nfs.ip }} + path: "{{ .Values.artifactory.persistence.nfs.haDataMount }}" + readOnly: false +--- +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-data-pvc + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + type: nfs-volume +spec: + accessModes: + - ReadWriteOnce + storageClassName: "" + resources: + requests: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + selector: + matchLabels: + id: {{ template "artifactory-ha.name" . }}-data-pv + app: {{ template "artifactory-ha.name" . }} + release: {{ .Release.Name }} +--- +### Artifactory HA backup +apiVersion: v1 +kind: PersistentVolume +metadata: + name: {{ template "artifactory-ha.fullname" . }}-backup-pv + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + id: {{ template "artifactory-ha.name" . 
}}-backup-pv + type: nfs-volume +spec: + {{- if .Values.artifactory.persistence.nfs.mountOptions }} + mountOptions: +{{ toYaml .Values.artifactory.persistence.nfs.mountOptions | indent 4 }} + {{- end }} + capacity: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + nfs: + server: {{ .Values.artifactory.persistence.nfs.ip }} + path: "{{ .Values.artifactory.persistence.nfs.haBackupMount }}" + readOnly: false +--- +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: {{ template "artifactory-ha.fullname" . }}-backup-pvc + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + type: nfs-volume +spec: + accessModes: + - ReadWriteOnce + storageClassName: "" + resources: + requests: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + selector: + matchLabels: + id: {{ template "artifactory-ha.name" . }}-backup-pv + app: {{ template "artifactory-ha.name" . }} + release: {{ .Release.Name }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-node-pdb.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-node-pdb.yaml new file mode 100644 index 0000000000..46c6dac21d --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/artifactory-node-pdb.yaml @@ -0,0 +1,26 @@ +{{- if gt (.Values.artifactory.node.replicaCount | int) 0 -}} +{{- if .Values.artifactory.node.minAvailable -}} +{{- if semverCompare " + mkdir -p {{ tpl .Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir . 
}}; + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + volumeMounts: + - mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + name: volume + {{- end }} + {{- end }} + {{- if .Values.artifactory.deleteDBPropertiesOnStartup }} + - name: "delete-db-properties" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'bash' + - '-c' + - 'rm -fv {{ .Values.artifactory.persistence.mountPath }}/etc/db.properties' + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + volumeMounts: + - mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + name: volume + {{- end }} + {{- end }} + {{- if and .Values.artifactory.node.waitForPrimaryStartup.enabled }} + - name: "wait-for-primary" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - 'bash' + - '-c' + - > + echo "Waiting for primary node to be ready..."; + {{- if and .Values.artifactory.node.waitForPrimaryStartup.enabled .Values.artifactory.node.waitForPrimaryStartup.time }} + echo "Sleeping to allow time for primary node to come up"; + sleep {{ .Values.artifactory.node.waitForPrimaryStartup.time }}; + {{- else }} + ready=false; + while ! $ready; do echo Primary not ready. 
Waiting...; + timeout 2s bash -c " + if [[ -e "{{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml" ]]; then chmod 644 {{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml; fi; + echo "Copy system.yaml to {{ .Values.artifactory.persistence.mountPath }}/etc"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; + {{- if .Values.systemYamlOverride.existingSecret }} + cp -fv /tmp/etc/{{ .Values.systemYamlOverride.dataKey }} {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- else }} + cp -fv /tmp/etc/system.yaml {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- end }} + echo "Copy binarystore.xml file"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory; + cp -fv /tmp/etc/artifactory/binarystore.xml {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory/binarystore.xml; + echo "Removing join.key file"; + rm -fv {{ .Values.artifactory.persistence.mountPath }}/etc/security/join.key; + {{- if .Values.access.resetAccessCAKeys }} + echo "Resetting Access CA Keys - load from database"; + {{- end }} + {{- if .Values.access.customCertificatesSecretName }} + echo "Load custom certificates from database"; + {{- end }} + {{- if or .Values.artifactory.masterKey .Values.global.masterKey .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + echo "Copy masterKey to {{ .Values.artifactory.persistence.mountPath }}/etc/security"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/security; + echo -n ${ARTIFACTORY_MASTER_KEY} > {{ .Values.artifactory.persistence.mountPath }}/etc/security/master.key; + env: + - name: ARTIFACTORY_MASTER_KEY + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) (or .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName) }} + name: {{ include 
"artifactory-ha.masterKeySecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: master-key + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + + ######################## SystemYaml ######################### + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.systemYamlOverride.existingSecret }} + - name: systemyaml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + {{- if .Values.systemYamlOverride.existingSecret }} + mountPath: "/tmp/etc/{{.Values.systemYamlOverride.dataKey}}" + subPath: {{ .Values.systemYamlOverride.dataKey }} + {{- else }} + mountPath: "/tmp/etc/system.yaml" + subPath: system.yaml + {{- end }} + + ######################## Binarystore ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## CustomCertificates ########################## + {{- if or .Values.artifactory.customCertificates.enabled .Values.global.customCertificates.enabled }} + - name: copy-custom-certificates + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory-ha.copyCustomCerts" . 
| indent 10 }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: ca-certs + mountPath: "/tmp/certs" + {{- end }} + + {{- if .Values.artifactory.circleOfTrustCertificatesSecret }} + - name: copy-circle-of-trust-certificates + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory.copyCircleOfTrustCertsCerts" . | indent 10 }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: circle-of-trust-certs + mountPath: "/tmp/circleoftrustcerts" + {{- end }} + + {{- if .Values.waitForDatabase }} + {{- if or .Values.postgresql.enabled }} + - name: "wait-for-db" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - /bin/bash + - -c + - | + echo "Waiting for postgresql to come up" + ready=false; + while ! $ready; do echo waiting; + timeout 2s bash -c " + {{- if .Values.artifactory.migration.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.artifactory.migration.preStartCommand . }}; + {{- end }} + scriptsPath="/opt/jfrog/artifactory/app/bin"; + mkdir -p $scriptsPath; + echo "Copy migration scripts and Run migration"; + cp -fv /tmp/migrate.sh $scriptsPath/migrate.sh; + cp -fv /tmp/migrationHelmInfo.yaml $scriptsPath/migrationHelmInfo.yaml; + cp -fv /tmp/migrationStatus.sh $scriptsPath/migrationStatus.sh; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/log; + bash $scriptsPath/migrationStatus.sh {{ include "artifactory-ha.app.version" . 
}} {{ .Values.artifactory.migration.timeoutSeconds }} > >(tee {{ .Values.artifactory.persistence.mountPath }}/log/helm-migration.log) 2>&1; + resources: +{{ toYaml .Values.artifactory.node.resources | indent 10 }} + env: + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . 
}} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} + - name: JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: migration-scripts + mountPath: "/tmp/migrate.sh" + subPath: migrate.sh + - name: migration-scripts + mountPath: "/tmp/migrationHelmInfo.yaml" + subPath: migrationHelmInfo.yaml + - name: migration-scripts + mountPath: "/tmp/migrationStatus.sh" + subPath: migrationStatus.sh + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . 
| indent 8 }} + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + {{- end }} + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + +{{- end }} + {{- if .Values.hostAliases }} + hostAliases: +{{ toYaml .Values.hostAliases | indent 6 }} + {{- end }} + containers: + {{- if .Values.splitServicesToContainers }} + - name: {{ .Values.router.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"router") }} + imagePullPolicy: {{ .Values.router.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/router/app/bin/entrypoint-router.sh + {{- with .Values.router.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES + value: {{ include "artifactory-ha.router.requiredServiceTypes" . }} +{{- with .Values.router.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - name: http + containerPort: {{ .Values.router.internalPort }} + volumeMounts: + - name: volume + mountPath: {{ .Values.router.persistence.mountPath | quote }} +{{- with .Values.router.customVolumeMounts }} +{{ tpl . $ | indent 8 }} +{{- end }} + resources: +{{ toYaml .Values.router.resources | indent 10 }} + {{- if .Values.router.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.router.startupProbe.config . | indent 10 }} + {{- end }} +{{- if .Values.router.readinessProbe.enabled }} + readinessProbe: +{{ tpl .Values.router.readinessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.router.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.router.livenessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.enabled }} + - name: {{ .Values.frontend.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/third-party/node/bin/node /opt/jfrog/artifactory/app/frontend/bin/server/dist/bundle.js /opt/jfrog/artifactory/app/frontend + {{- with .Values.frontend.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + - name : JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.frontend.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.frontend.resources | indent 10 }} + {{- if .Values.frontend.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.frontend.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.frontend.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.evidence.enabled }} + - name: {{ .Values.evidence.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/evidence/bin/jf-evidence start + {{- with .Values.evidence.lifecycle }} + lifecycle: +{{ toYaml . 
| indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . 
}}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.evidence.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.evidence.internalPort }} + name: http-evidence + - containerPort: {{ .Values.evidence.externalPort }} + name: grpc-evidence + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.evidence.resources | indent 10 }} + {{- if .Values.evidence.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.evidence.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.evidence.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.evidence.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.metadata.enabled }} + - name: {{ .Values.metadata.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "metadata") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/metadata/bin/jf-metadata start + {{- with .Values.metadata.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . 
}} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.metadata.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . 
| indent 8 }} + {{- end }} + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.metadata.resources | indent 10 }} + {{- if .Values.metadata.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.metadata.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.metadata.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.metadata.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.event.enabled }} + - name: {{ .Values.event.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/event/bin/jf-event start + {{- with .Values.event.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.event.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.event.resources | indent 10 }} + {{- if .Values.event.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.event.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.event.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.event.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.jfconnect.enabled }} + - name: {{ .Values.jfconnect.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/jfconnect/bin/jf-connect start + {{- with .Values.jfconnect.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.jfconnect.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.jfconnect.resources | indent 10 }} + {{- if .Values.jfconnect.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.jfconnect.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.jfconnect.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.jfconnect.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.federation.enabled .Values.federation.embedded }} + - name: {{ .Values.federation.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/third-party/java/bin/java {{ .Values.federation.extraJavaOpts }} -jar /opt/jfrog/artifactory/app/rtfs/lib/jf-rtfs + {{- with .Values.federation.lifecycle }} + lifecycle: +{{ toYaml . 
| indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_RTFS_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} +{{- with .Values.federation.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.federation.resources | indent 10 }} + {{- if .Values.federation.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.federation.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.federation.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.federation.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.observability.enabled }} + - name: {{ .Values.observability.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"observability") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/observability/bin/jf-observability start + {{- with .Values.observability.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.observability.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.observability.resources | indent 10 }} + {{- if .Values.observability.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.observability.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.observability.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.observability.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.access.enabled (not (.Values.access.runOnArtifactoryTomcat | default false)) }} + - name: {{ .Values.access.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.access.resources }} + resources: +{{ toYaml .Values.access.resources | indent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + set -e; + {{- if .Values.access.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.access.preStartCommand . 
}}; + {{- end }} + exec /opt/jfrog/artifactory/app/access/bin/entrypoint-access.sh + {{- with .Values.access.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . 
}} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.access.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.artifactory.customPersistentPodVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentPodVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentPodVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: awsmp-product-license + mountPath: "/var/run/secrets/product-license" + {{- end }} + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + + ######################## Artifactory persistence fs ########################## + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: 
artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + + ######################## Artifactory ConfigMap ########################## + {{- if .Values.artifactory.configMapName }} + - name: bootstrap-config + mountPath: "/bootstrap/" + {{- end }} + + ######################## Artifactory license ########################## + {{- if or .Values.artifactory.license.secret .Values.artifactory.license.licenseKey }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.license.secret }} + - name: artifactory-license + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/artifactory_bootstrap/artifactory.cluster.license" + {{- if .Values.artifactory.license.secret }} + subPath: {{ .Values.artifactory.license.dataKey }} + {{- else if .Values.artifactory.license.licenseKey }} + subPath: artifactory.lic + {{- end }} + {{- end }} + {{- end }} + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . | indent 8 }} + {{- end }} + {{- if .Values.access.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.access.startupProbe.config . | indent 10 }} + {{- end }} + {{- if semverCompare " + set -e; + {{- range .Values.artifactory.copyOnEveryStartup }} + {{- $targetPath := printf "%s/%s" $.Values.artifactory.persistence.mountPath .target }} + {{- $baseDirectory := regexFind ".*/" $targetPath }} + mkdir -p {{ $baseDirectory }}; + cp -Lrf {{ .source }} {{ $.Values.artifactory.persistence.mountPath }}/{{ .target }}; + {{- end }} + {{- if .Values.artifactory.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.artifactory.preStartCommand . }}; + {{- end }} + {{- with .Values.artifactory.node.preStartCommand }} + echo "Running member node specific custom preStartCommand command"; + {{ tpl . $ }}; + {{- end }} + exec /entrypoint-artifactory.sh + {{- with .Values.artifactory.lifecycle }} + lifecycle: +{{ toYaml . 
| indent 10 }} + {{- end }} + env: + {{- if .Values.aws.license.enabled }} + - name: IS_AWS_LICENSE + value: "true" + - name: AWS_REGION + value: {{ .Values.aws.region | quote }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: AWS_WEB_IDENTITY_REFRESH_TOKEN_FILE + value: "/var/run/secrets/product-license/license_token" + - name: AWS_ROLE_ARN + valueFrom: + secretKeyRef: + name: {{ .Values.aws.licenseConfigSecretName }} + key: iam_role + {{- end }} + {{- end }} + {{- if .Values.splitServicesToContainers }} + - name : JF_ROUTER_ENABLED + value: "true" + - name : JF_ROUTER_SERVICE_ENABLED + value: "false" + - name : JF_EVENT_ENABLED + value: "false" + - name : JF_METADATA_ENABLED + value: "false" + - name : JF_FRONTEND_ENABLED + value: "false" + - name: JF_FEDERATION_ENABLED + value: "false" + - name : JF_OBSERVABILITY_ENABLED + value: "false" + - name : JF_JFCONNECT_SERVICE_ENABLED + value: "false" + - name : JF_EVIDENCE_ENABLED + value: "false" + {{- if not (.Values.access.runOnArtifactoryTomcat | default false) }} + - name : JF_ACCESS_ENABLED + value: "false" + {{- end}} + {{- end }} + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} + - name: JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.artifactory.internalPort }} + name: http + - containerPort: {{ .Values.artifactory.internalArtifactoryPort }} + name: http-internal + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + {{- if .Values.artifactory.node.javaOpts.jmx.enabled }} + - containerPort: {{ .Values.artifactory.node.javaOpts.jmx.port }} + name: tcp-jmx + {{- end }} + {{- if .Values.artifactory.ssh.enabled }} + - containerPort: {{ .Values.artifactory.ssh.internalPort }} + name: tcp-ssh + {{- end }} + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.artifactory.customPersistentPodVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentPodVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentPodVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: awsmp-product-license + mountPath: "/var/run/secrets/product-license" + {{- end }} + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + + ######################## Artifactory persistence fs ########################## + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + + 
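The file-system volume mounts above are driven entirely by `artifactory.persistence.*` values. As a rough orientation aid — a sketch only, with key names taken from the template expressions above while the claim count and directory paths are illustrative placeholders, not verified chart documentation — the corresponding values override looks like:

```yaml
artifactory:
  persistence:
    enabled: true
    mountPath: /var/opt/jfrog/artifactory      # mounted into the pod as the "volume" volume
    type: file-system                          # selects the file-system persistence branch
    fileSystem:
      existingSharedClaim:
        enabled: true
        numberOfExistingClaims: 2              # mounts artifactory-ha-data-0 and artifactory-ha-data-1
        dataDir: /var/opt/jfrog/artifactory-ha          # illustrative path
        backupDir: /var/opt/jfrog/artifactory-backup    # illustrative path
```

Per the `range` block above, claim number `n` is mounted at `<dataDir>/filestore<n>`, and a separate `artifactory-ha-backup` claim is mounted at `backupDir`.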
######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + + ######################## Artifactory ConfigMap ########################## + {{- if .Values.artifactory.configMapName }} + - name: bootstrap-config + mountPath: "/bootstrap/" + {{- end }} + + ######################## Artifactory license ########################## + {{- if or .Values.artifactory.license.secret .Values.artifactory.license.licenseKey }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.license.secret }} + - name: artifactory-license + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/artifactory_bootstrap/artifactory.cluster.license" + {{- if .Values.artifactory.license.secret }} + subPath: {{ .Values.artifactory.license.dataKey }} + {{- else if .Values.artifactory.license.licenseKey }} + subPath: artifactory.lic + {{- end }} + {{- end }} + {{- end }} + - name: installer-info + mountPath: "/artifactory_bootstrap/info/installer-info.json" + subPath: installer-info.json + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . | indent 8 }} + {{- end }} + resources: +{{ toYaml .Values.artifactory.node.resources | indent 10 }} + {{- if .Values.artifactory.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.artifactory.startupProbe.config . | indent 10 }} + {{- end }} + {{- if and (not .Values.splitServicesToContainers) (semverCompare "= 107.79.x), just set databaseUpgradeReady=true \n" .Values.databaseUpgradeReady | quote }} +{{- end }} +{{- if .Values.artifactory.postStartCommand }} + {{- fail ".Values.artifactory.postStartCommand is not supported and should be replaced with .Values.artifactory.lifecycle.postStart.exec.command" }} +{{- end }} +{{- if eq .Values.artifactory.persistence.type "aws-s3" }} + {{- fail "\nPersistence storage type 'aws-s3' is deprecated and is not supported and should be replaced with 'aws-s3-v3'" }} +{{- end }} +{{- if or .Values.artifactory.persistence.googleStorage.identity .Values.artifactory.persistence.googleStorage.credential }} + {{- fail "\nGCP Bucket Authentication with Identity and Credential is deprecated" }} +{{- end }} +{{- if (eq (.Values.artifactory.setSecurityContext | toString) "false" ) }} + {{- fail "\n You need to set security context at the pod level. .Values.artifactory.setSecurityContext is no longer supported. 
Replace it with .Values.artifactory.podSecurityContext" }} +{{- end }} +{{- if or .Values.artifactory.uid .Values.artifactory.gid }} +{{- if or (not (eq (.Values.artifactory.uid | toString) "1030" )) (not (eq (.Values.artifactory.gid | toString) "1030" )) }} + {{- fail "\n .Values.artifactory.uid and .Values.artifactory.gid are no longer supported. You need to set these values at the pod security context level. Replace them with .Values.artifactory.podSecurityContext.runAsUser, .Values.artifactory.podSecurityContext.runAsGroup and .Values.artifactory.podSecurityContext.fsGroup" }} +{{- end }} +{{- end }} +{{- if or .Values.artifactory.fsGroupChangePolicy .Values.artifactory.seLinuxOptions }} + {{- fail "\n .Values.artifactory.fsGroupChangePolicy and .Values.artifactory.seLinuxOptions are no longer supported. You need to set these values at the pod security context level. Replace them with .Values.artifactory.podSecurityContext.fsGroupChangePolicy and .Values.artifactory.podSecurityContext.seLinuxOptions" }} +{{- end }} +{{- if .Values.initContainerImage }} + {{- fail "\n .Values.initContainerImage is no longer supported. Replace it with .Values.initContainers.image.registry .Values.initContainers.image.repository and .Values.initContainers.image.tag" }} +{{- end }} +{{- with .Values.artifactory.statefulset.annotations }} + annotations: +{{ toYaml . | indent 4 }} +{{- end }} +spec: + serviceName: {{ template "artifactory-ha.primary.name" . }} + replicas: {{ .Values.artifactory.primary.replicaCount }} + updateStrategy: {{- toYaml .Values.artifactory.primary.updateStrategy | nindent 4}} + selector: + matchLabels: + app: {{ template "artifactory-ha.name" . }} + role: {{ template "artifactory-ha.primary.name" . }} + release: {{ .Release.Name }} + template: + metadata: + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + role: {{ template "artifactory-ha.primary.name" . 
}} + component: {{ .Values.artifactory.name }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + {{- with .Values.artifactory.primary.labels }} +{{ toYaml . | indent 8 }} + {{- end }} + annotations: + {{- if not .Values.artifactory.unifiedSecretInstallation }} + checksum/database-secrets: {{ include (print $.Template.BasePath "/artifactory-database-secrets.yaml") . | sha256sum }} + checksum/binarystore: {{ include (print $.Template.BasePath "/artifactory-binarystore-secret.yaml") . | sha256sum }} + checksum/systemyaml: {{ include (print $.Template.BasePath "/artifactory-system-yaml.yaml") . | sha256sum }} + {{- if .Values.access.accessConfig }} + checksum/access-config: {{ include (print $.Template.BasePath "/artifactory-access-config.yaml") . | sha256sum }} + {{- end }} + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + checksum/gcpcredentials: {{ include (print $.Template.BasePath "/artifactory-gcp-credentials-secret.yaml") . | sha256sum }} + {{- end }} + {{- if not (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} + checksum/admin-creds: {{ include (print $.Template.BasePath "/admin-bootstrap-creds.yaml") . | sha256sum }} + {{- end }} + {{- else }} + checksum/artifactory-unified-secret: {{ include (print $.Template.BasePath "/artifactory-unified-secret.yaml") . | sha256sum }} + {{- end }} + {{- with .Values.artifactory.annotations }} +{{ toYaml . | indent 8 }} + {{- end }} + spec: + {{- if .Values.artifactory.schedulerName }} + schedulerName: {{ .Values.artifactory.schedulerName | quote }} + {{- end }} + {{- if .Values.artifactory.priorityClass.existingPriorityClass }} + priorityClassName: {{ .Values.artifactory.priorityClass.existingPriorityClass }} + {{- else -}} + {{- if .Values.artifactory.priorityClass.create }} + priorityClassName: {{ default (include "artifactory-ha.fullname" .) 
.Values.artifactory.priorityClass.name }} + {{- end }} + {{- end }} + serviceAccountName: {{ template "artifactory-ha.serviceAccountName" . }} + terminationGracePeriodSeconds: {{ add .Values.artifactory.terminationGracePeriodSeconds 10 }} + {{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }} +{{- include "artifactory-ha.imagePullSecrets" . | indent 6 }} + {{- end }} + {{- if .Values.artifactory.podSecurityContext.enabled }} + securityContext: {{- omit .Values.artifactory.podSecurityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.artifactory.topologySpreadConstraints }} + topologySpreadConstraints: +{{ tpl (toYaml .Values.artifactory.topologySpreadConstraints) . | indent 8 }} + {{- end }} + initContainers: + {{- if or .Values.artifactory.customInitContainersBegin .Values.global.customInitContainersBegin }} +{{ tpl (include "artifactory-ha.customInitContainersBegin" .) . | indent 6 }} + {{- end }} + {{- if .Values.artifactory.persistence.enabled }} + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + - name: "create-artifactory-data-dir" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > + mkdir -p {{ tpl .Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir . }}; + volumeMounts: + - mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + name: volume + {{- end }} + {{- end }} + {{- if .Values.artifactory.deleteDBPropertiesOnStartup }} + - name: "delete-db-properties" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - 'rm -fv {{ .Values.artifactory.persistence.mountPath }}/etc/db.properties' + volumeMounts: + - mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + name: volume + {{- end }} + {{- if or (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) .Values.artifactory.admin.password }} + - name: "access-bootstrap-creds" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > + echo "Preparing {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access; + cp -Lrf /tmp/access/bootstrap.creds {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds; + chmod 600 {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds; + volumeMounts: + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + {{- if or (not .Values.artifactory.unifiedSecretInstallation) (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} + - name: access-bootstrap-creds + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/tmp/access/bootstrap.creds" + {{- if and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey }} + subPath: {{ .Values.artifactory.admin.dataKey }} + {{- else }} + subPath: bootstrap.creds + {{- end }} + {{- end }} + {{- end }} + - name: 'copy-system-configurations' + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - '/bin/bash' + - '-c' + - > + if [[ -e "{{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml" ]]; then chmod 644 {{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml; fi; + echo "Copy system.yaml to {{ .Values.artifactory.persistence.mountPath }}/etc"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; + {{- if .Values.systemYamlOverride.existingSecret }} + cp -fv /tmp/etc/{{ .Values.systemYamlOverride.dataKey }} {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- else }} + cp -fv /tmp/etc/system.yaml {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- end }} + echo "Copy binarystore.xml file"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory; + cp -fv /tmp/etc/artifactory/binarystore.xml {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory/binarystore.xml; + {{- if .Values.access.accessConfig }} + echo "Copy access.config.patch.yml to {{ .Values.artifactory.persistence.mountPath }}/etc/access"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access; + cp -fv /tmp/etc/access.config.patch.yml {{ .Values.artifactory.persistence.mountPath 
}}/etc/access/access.config.patch.yml; + {{- end }} + {{- if .Values.access.resetAccessCAKeys }} + echo "Resetting Access CA Keys"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys; + touch {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/reset_ca_keys; + {{- end }} + {{- if .Values.access.customCertificatesSecretName }} + echo "Copying custom certificates to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys; + cp -fv /tmp/etc/tls.crt {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/ca.crt; + cp -fv /tmp/etc/tls.key {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/ca.private.key; + {{- end }} + {{- if or .Values.artifactory.joinKey .Values.global.joinKey .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + echo "Copy joinKey to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security; + echo -n ${ARTIFACTORY_JOIN_KEY} > {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security/join.key; + {{- end }} + {{- if or .Values.artifactory.jfConnectToken .Values.artifactory.jfConnectTokenSecretName }} + echo "Copy jfConnectToken to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/jfconnect/registration_token"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/jfconnect/; + echo -n ${ARTIFACTORY_JFCONNECT_TOKEN} > {{ .Values.artifactory.persistence.mountPath }}/bootstrap/jfconnect/registration_token; + {{- end }} + {{- if or .Values.artifactory.masterKey .Values.global.masterKey .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + echo "Copy masterKey to {{ .Values.artifactory.persistence.mountPath }}/etc/security"; + mkdir -p {{ 
.Values.artifactory.persistence.mountPath }}/etc/security; + echo -n ${ARTIFACTORY_MASTER_KEY} > {{ .Values.artifactory.persistence.mountPath }}/etc/security/master.key; + {{- end }} + env: + {{- if or .Values.artifactory.joinKey .Values.global.joinKey .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + - name: ARTIFACTORY_JOIN_KEY + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + name: {{ include "artifactory-ha.joinKeySecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: join-key + {{- end }} + {{- if or .Values.artifactory.jfConnectToken .Values.artifactory.jfConnectTokenSecretName }} + - name: ARTIFACTORY_JFCONNECT_TOKEN + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.jfConnectTokenSecretName }} + name: {{ include "artifactory-ha.jfConnectTokenSecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: jfconnect-token + {{- end }} + {{- if or .Values.artifactory.masterKey .Values.global.masterKey .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + - name: ARTIFACTORY_MASTER_KEY + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + name: {{ include "artifactory-ha.masterKeySecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: master-key + {{- end }} + + ######################## Volume Mounts For copy-system-configurations ########################## + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + + ######################## SystemYaml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.systemYamlOverride.existingSecret }} + - name: systemyaml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + {{- if .Values.systemYamlOverride.existingSecret }} + mountPath: "/tmp/etc/{{.Values.systemYamlOverride.dataKey}}" + subPath: {{ .Values.systemYamlOverride.dataKey }} + {{- else }} + mountPath: "/tmp/etc/system.yaml" + subPath: system.yaml + {{- end }} + + ######################## Binarystore ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Access config ########################## + {{- if .Values.access.accessConfig }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + - name: access-config + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/tmp/etc/access.config.patch.yml" + subPath: access.config.patch.yml + {{- end }} + + ######################## Access certs external secret ########################## + {{- if .Values.access.customCertificatesSecretName }} + - name: access-certs + mountPath: "/tmp/etc/tls.crt" + subPath: tls.crt + - name: access-certs + mountPath: "/tmp/etc/tls.key" + subPath: tls.key + {{- end }} + + {{- if or .Values.artifactory.customCertificates.enabled .Values.global.customCertificates.enabled }} + - name: copy-custom-certificates + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory-ha.copyCustomCerts" . | indent 10 }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: ca-certs + mountPath: "/tmp/certs" + {{- end }} + + {{- if .Values.artifactory.circleOfTrustCertificatesSecret }} + - name: copy-circle-of-trust-certificates + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory.copyCircleOfTrustCertsCerts" . 
| indent 10 }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: circle-of-trust-certs + mountPath: "/tmp/circleoftrustcerts" + {{- end }} + + {{- if .Values.waitForDatabase }} + {{- if or .Values.postgresql.enabled }} + - name: "wait-for-db" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - /bin/bash + - -c + - | + echo "Waiting for postgresql to come up" + ready=false; + while ! $ready; do echo waiting; + timeout 2s bash -c " + {{- if .Values.artifactory.migration.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.artifactory.migration.preStartCommand . }}; + {{- end }} + scriptsPath="/opt/jfrog/artifactory/app/bin"; + mkdir -p $scriptsPath; + echo "Copy migration scripts and Run migration"; + cp -fv /tmp/migrate.sh $scriptsPath/migrate.sh; + cp -fv /tmp/migrationHelmInfo.yaml $scriptsPath/migrationHelmInfo.yaml; + cp -fv /tmp/migrationStatus.sh $scriptsPath/migrationStatus.sh; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/log; + bash $scriptsPath/migrationStatus.sh {{ include "artifactory-ha.app.version" . }} {{ .Values.artifactory.migration.timeoutSeconds }} > >(tee {{ .Values.artifactory.persistence.mountPath }}/log/helm-migration.log) 2>&1; + env: + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . 
}} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} + - name: JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + volumeMounts: + - name: migration-scripts + mountPath: "/tmp/migrate.sh" + subPath: migrate.sh + - name: migration-scripts + mountPath: "/tmp/migrationHelmInfo.yaml" + subPath: migrationHelmInfo.yaml + - name: migration-scripts + mountPath: "/tmp/migrationStatus.sh" + subPath: migrationStatus.sh + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + + ######################## Artifactory persistence fs ########################## + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + + ######################## CustomVolumeMounts ########################## + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . 
| indent 8 }} + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + {{- end }} + +{{- end }} + + {{- if .Values.hostAliases }} + hostAliases: +{{ toYaml .Values.hostAliases | indent 6 }} + {{- end }} + containers: + {{- if .Values.splitServicesToContainers }} + - name: {{ .Values.router.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"router") }} + imagePullPolicy: {{ .Values.router.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/router/app/bin/entrypoint-router.sh; + {{- with .Values.router.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES + value: {{ include "artifactory-ha.router.requiredServiceTypes" . }} +{{- with .Values.router.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - name: http + containerPort: {{ .Values.router.internalPort }} + volumeMounts: + - name: volume + mountPath: {{ .Values.router.persistence.mountPath | quote }} +{{- with .Values.router.customVolumeMounts }} +{{ tpl . $ | indent 8 }} +{{- end }} + resources: +{{ toYaml .Values.router.resources | indent 10 }} + {{- if .Values.router.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.router.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.router.readinessProbe.enabled }} + readinessProbe: +{{ tpl .Values.router.readinessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.router.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.router.livenessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.enabled }} + - name: {{ .Values.frontend.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/third-party/node/bin/node /opt/jfrog/artifactory/app/frontend/bin/server/dist/bundle.js /opt/jfrog/artifactory/app/frontend + {{- with .Values.frontend.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + - name : JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.frontend.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.frontend.resources | indent 10 }} + {{- if .Values.frontend.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.frontend.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.frontend.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.evidence.enabled }} + - name: {{ .Values.evidence.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/evidence/bin/jf-evidence start + {{- with .Values.evidence.lifecycle }} + lifecycle: +{{ toYaml . 
| indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . 
}}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.evidence.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.evidence.internalPort }} + name: http-evidence + - containerPort: {{ .Values.evidence.externalPort }} + name: grpc-evidence + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.evidence.resources | indent 10 }} + {{- if .Values.evidence.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.evidence.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.evidence.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.evidence.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.metadata.enabled }} + - name: {{ .Values.metadata.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "metadata") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/metadata/bin/jf-metadata start + {{- with .Values.metadata.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . 
}} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.metadata.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . 
| indent 8 }} + {{- end }} + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.metadata.resources | indent 10 }} + {{- if .Values.metadata.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.metadata.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.metadata.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.metadata.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.event.enabled }} + - name: {{ .Values.event.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/event/bin/jf-event start + {{- with .Values.event.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.event.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.event.resources | indent 10 }} + {{- if .Values.event.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.event.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.event.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.event.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.jfconnect.enabled }} + - name: {{ .Values.jfconnect.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/jfconnect/bin/jf-connect start + {{- with .Values.jfconnect.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.jfconnect.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.jfconnect.resources | indent 10 }} + {{- if .Values.jfconnect.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.jfconnect.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.jfconnect.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.jfconnect.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.federation.enabled .Values.federation.embedded }} + - name: {{ .Values.federation.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/third-party/java/bin/java {{ .Values.federation.extraJavaOpts }} -jar /opt/jfrog/artifactory/app/rtfs/lib/jf-rtfs + {{- with .Values.federation.lifecycle }} + lifecycle: +{{ toYaml . 
| indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + # TODO - Password,Url,Username - should be derived from env variable +{{- with .Values.federation.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.federation.resources | indent 10 }} + {{- if .Values.federation.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.federation.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.federation.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.federation.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.observability.enabled }} + - name: {{ .Values.observability.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "observability") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/observability/bin/jf-observability start + {{- with .Values.observability.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.observability.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.observability.resources | indent 10 }} + {{- if .Values.observability.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.observability.startupProbe.config . 
| indent 10 }} + {{- end }} + {{- if .Values.observability.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.observability.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.access.enabled (not (.Values.access.runOnArtifactoryTomcat | default false)) }} + - name: {{ .Values.access.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.access.resources }} + resources: +{{ toYaml .Values.access.resources | indent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + set -e; + {{- if .Values.access.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.access.preStartCommand . }}; + {{- end }} + exec /opt/jfrog/artifactory/app/access/bin/entrypoint-access.sh + {{- with .Values.access.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.access.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.artifactory.customPersistentPodVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentPodVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentPodVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: awsmp-product-license + mountPath: "/var/run/secrets/product-license" + {{- end }} + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + + ######################## Artifactory persistence fs ########################## + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) 
.Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + + + ######################## Artifactory license ########################## + {{- if or .Values.artifactory.license.secret .Values.artifactory.license.licenseKey }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.license.secret }} + - name: artifactory-license + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/artifactory.cluster.license" + {{- if .Values.artifactory.license.secret }} + subPath: {{ .Values.artifactory.license.dataKey }} + {{- else if .Values.artifactory.license.licenseKey }} + subPath: artifactory.lic + {{- end }} + {{- end }} + {{- end }} + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . | indent 8 }} + {{- end }} + {{- if .Values.access.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.access.startupProbe.config . 
| indent 10 }} + {{- end }} + {{- if semverCompare " + set -e; + if [ -d /artifactory_extra_conf ] && [ -d /artifactory_bootstrap ]; then + echo "Copying bootstrap config from /artifactory_extra_conf to /artifactory_bootstrap"; + cp -Lrfv /artifactory_extra_conf/ /artifactory_bootstrap/; + fi; + {{- if .Values.artifactory.configMapName }} + echo "Copying bootstrap configs"; + cp -Lrf /bootstrap/* /artifactory_bootstrap/; + {{- end }} + {{- if .Values.artifactory.userPluginSecrets }} + echo "Copying plugins"; + cp -Lrf /tmp/plugin/*/* /artifactory_bootstrap/plugins; + {{- end }} + {{- range .Values.artifactory.copyOnEveryStartup }} + {{- $targetPath := printf "%s/%s" $.Values.artifactory.persistence.mountPath .target }} + {{- $baseDirectory := regexFind ".*/" $targetPath }} + mkdir -p {{ $baseDirectory }}; + cp -Lrf {{ .source }} {{ $.Values.artifactory.persistence.mountPath }}/{{ .target }}; + {{- end }} + {{- with .Values.artifactory.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl . $ }}; + {{- end }} + {{- with .Values.artifactory.primary.preStartCommand }} + echo "Running primary specific custom preStartCommand command"; + {{ tpl . $ }}; + {{- end }} + exec /entrypoint-artifactory.sh + {{- with .Values.artifactory.lifecycle }} + lifecycle: +{{ toYaml . 
| indent 10 }} + {{- end }} + env: + {{- if .Values.aws.license.enabled }} + - name: IS_AWS_LICENSE + value: "true" + - name: AWS_REGION + value: {{ .Values.aws.region | quote }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: AWS_WEB_IDENTITY_REFRESH_TOKEN_FILE + value: "/var/run/secrets/product-license/license_token" + - name: AWS_ROLE_ARN + valueFrom: + secretKeyRef: + name: {{ .Values.aws.licenseConfigSecretName }} + key: iam_role + {{- end }} + {{- end }} + {{- if .Values.splitServicesToContainers }} + - name : JF_ROUTER_ENABLED + value: "true" + - name : JF_ROUTER_SERVICE_ENABLED + value: "false" + - name : JF_EVENT_ENABLED + value: "false" + - name : JF_METADATA_ENABLED + value: "false" + - name : JF_FRONTEND_ENABLED + value: "false" + - name: JF_FEDERATION_ENABLED + value: "false" + - name : JF_OBSERVABILITY_ENABLED + value: "false" + - name : JF_JFCONNECT_SERVICE_ENABLED + value: "false" + - name : JF_EVIDENCE_ENABLED + value: "false" + {{- if not (.Values.access.runOnArtifactoryTomcat | default false) }} + - name : JF_ACCESS_ENABLED + value: "false" + {{- end}} + {{- end }} + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory-ha.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} + - name: JF_SHARED_NODE_HAENABLED + value: "true" +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.artifactory.internalPort }} + name: http + - containerPort: {{ .Values.artifactory.internalArtifactoryPort }} + name: http-internal + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + {{- if .Values.artifactory.primary.javaOpts.jmx.enabled }} + - containerPort: {{ .Values.artifactory.primary.javaOpts.jmx.port }} + name: tcp-jmx + {{- end }} + {{- if .Values.artifactory.ssh.enabled }} + - containerPort: {{ .Values.artifactory.ssh.internalPort }} + name: tcp-ssh + {{- end }} + + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.artifactory.customPersistentPodVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentPodVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentPodVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: awsmp-product-license + mountPath: "/var/run/secrets/product-license" + {{- end }} + {{- if .Values.artifactory.userPluginSecrets }} + - name: bootstrap-plugins + mountPath: "/artifactory_bootstrap/plugins/" + {{- range .Values.artifactory.userPluginSecrets }} + - name: {{ tpl . $ }} + mountPath: "/tmp/plugin/{{ tpl . 
$ }}" + {{- end }} + {{- end }} + - name: volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + + ######################## Artifactory persistence fs ########################## + {{- if eq .Values.artifactory.persistence.type "file-system" }} + {{- if .Values.artifactory.persistence.fileSystem.existingSharedClaim.enabled }} + {{- range $sharedClaimNumber, $e := until (.Values.artifactory.persistence.fileSystem.existingSharedClaim.numberOfExistingClaims|int) }} + - name: artifactory-ha-data-{{ $sharedClaimNumber }} + mountPath: "{{ tpl $.Values.artifactory.persistence.fileSystem.existingSharedClaim.dataDir $ }}/filestore{{ $sharedClaimNumber }}" + {{- end }} + - name: artifactory-ha-backup + mountPath: "{{ $.Values.artifactory.persistence.fileSystem.existingSharedClaim.backupDir }}" + {{- end }} + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-ha-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-ha-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystoreXml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence googleStorage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + {{- end }} + + ######################## Artifactory configMapName ########################## + {{- if .Values.artifactory.configMapName }} + - name: bootstrap-config + mountPath: "/bootstrap/" + {{- end }} + + ######################## Artifactory license ########################## + {{- if or .Values.artifactory.license.secret .Values.artifactory.license.licenseKey }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.license.secret }} + - name: artifactory-license + {{- else }} + - name: {{ include "artifactory-ha.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/artifactory.cluster.license" + {{- if .Values.artifactory.license.secret }} + subPath: {{ .Values.artifactory.license.dataKey }} + {{- else if .Values.artifactory.license.licenseKey }} + subPath: artifactory.lic + {{- end }} + {{- end }} + + - name: installer-info + mountPath: "/artifactory_bootstrap/info/installer-info.json" + subPath: installer-info.json + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory-ha.customVolumeMounts" .) . 
| indent 8 }} + {{- end }} + resources: +{{ toYaml .Values.artifactory.primary.resources | indent 10 }} + {{- if .Values.artifactory.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.artifactory.startupProbe.config . | indent 10 }} + {{- end }} + {{- if and (not .Values.splitServicesToContainers) (semverCompare "=1.18.0-0" .Capabilities.KubeVersion.GitVersion) }} + ingressClassName: {{ .Values.ingress.className }} + {{- end }} + {{- if .Values.ingress.defaultBackend.enabled }} + {{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1" }} + defaultBackend: + service: + name: {{ $serviceName }} + port: + number: {{ $servicePort }} + {{- else }} + backend: + serviceName: {{ $serviceName }} + servicePort: {{ $servicePort }} + {{- end }} + {{- end }} + rules: +{{- if .Values.ingress.hosts }} + {{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1" }} + {{- range $host := .Values.ingress.hosts }} + - host: {{ $host | quote }} + http: + paths: + - path: {{ $.Values.ingress.routerPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $servicePort }} + {{- if not $.Values.ingress.disableRouterBypass }} + - path: {{ $.Values.ingress.artifactoryPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $artifactoryServicePort }} + {{- end }} + {{- if and $.Values.federation.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" $.Values.artifactory.image.repository)) }} + - path: {{ $.Values.ingress.rtfsPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $.Values.federation.internalPort }} + {{- end }} + {{- end }} + {{- else }} + {{- range $host := .Values.ingress.hosts }} + - host: {{ $host | quote }} + http: + paths: + - path: {{ $.Values.ingress.routerPath }} + backend: + serviceName: {{ $serviceName }} + servicePort: {{ $servicePort }} + - path: {{ $.Values.ingress.artifactoryPath }} + 
backend: + serviceName: {{ $serviceName }} + servicePort: {{ $artifactoryServicePort }} + {{- end }} + {{- end }} +{{- end -}} + {{- with .Values.ingress.additionalRules }} +{{ tpl . $ | indent 2 }} + {{- end }} + {{- if .Values.ingress.tls }} + tls: +{{ toYaml .Values.ingress.tls | indent 4 }} + {{- end -}} + +{{- if .Values.customIngress }} +--- +{{ .Values.customIngress | toYaml | trimSuffix "\n" }} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/logger-configmap.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/logger-configmap.yaml new file mode 100644 index 0000000000..d3597905d9 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/logger-configmap.yaml @@ -0,0 +1,63 @@ +{{- if or .Values.artifactory.loggers .Values.artifactory.catalinaLoggers }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory-ha.fullname" . }}-logger + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + tail-log.sh: | + #!/bin/sh + + LOG_DIR=$1 + LOG_NAME=$2 + PID= + + # Wait for log dir to appear + while [ ! -d ${LOG_DIR} ]; do + sleep 1 + done + + cd ${LOG_DIR} + + LOG_PREFIX=$(echo ${LOG_NAME} | sed 's/.log$//g') + + # Find the log to tail + LOG_FILE=$(ls -1t ./${LOG_PREFIX}.log 2>/dev/null) + + # Wait for the log file + while [ -z "${LOG_FILE}" ]; do + sleep 1 + LOG_FILE=$(ls -1t ./${LOG_PREFIX}.log 2>/dev/null) + done + + echo "Log file ${LOG_FILE} is ready!" + + # Get inode number + INODE_ID=$(ls -i ${LOG_FILE}) + + # echo "Tailing ${LOG_FILE}" + tail -F ${LOG_FILE} & + PID=$! 
+ + # Loop forever to see if a new log was created + while true; do + # Check inode number + NEW_INODE_ID=$(ls -i ${LOG_FILE}) + + # If inode number changed, this means log was rotated and need to start a new tail + if [ "${INODE_ID}" != "${NEW_INODE_ID}" ]; then + kill -9 ${PID} 2>/dev/null + INODE_ID="${NEW_INODE_ID}" + + # Start a new tail + tail -F ${LOG_FILE} & + PID=$! + fi + sleep 1 + done + +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-artifactory-conf.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-artifactory-conf.yaml new file mode 100644 index 0000000000..97ae5f27be --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-artifactory-conf.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.nginx.customArtifactoryConfigMap) .Values.nginx.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory-ha.fullname" . }}-nginx-artifactory-conf + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + artifactory.conf: | +{{- if .Values.nginx.artifactoryConf }} +{{ tpl .Values.nginx.artifactoryConf . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/nginx-artifactory-conf.yaml" ) . | indent 4 }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-certificate-secret.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-certificate-secret.yaml new file mode 100644 index 0000000000..29c77ad5a4 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-certificate-secret.yaml @@ -0,0 +1,14 @@ +{{- if and (not .Values.nginx.tlsSecretName) .Values.nginx.enabled .Values.nginx.https.enabled }} +apiVersion: v1 +kind: Secret +type: kubernetes.io/tls +metadata: + name: {{ template "artifactory-ha.fullname" . 
}}-nginx-certificate + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: +{{ ( include "artifactory-ha.gen-certs" . ) | indent 2 }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-conf.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-conf.yaml new file mode 100644 index 0000000000..4f0d65f25c --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-conf.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.nginx.customConfigMap) .Values.nginx.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory-ha.fullname" . }}-nginx-conf + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + nginx.conf: | +{{- if .Values.nginx.mainConf }} +{{ tpl .Values.nginx.mainConf . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/nginx-main-conf.yaml" ) . | indent 4 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-deployment.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-deployment.yaml new file mode 100644 index 0000000000..d43689b8cd --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-deployment.yaml @@ -0,0 +1,221 @@ +{{- if .Values.nginx.enabled -}} +{{- $serviceName := include "artifactory-ha.fullname" . -}} +{{- $servicePort := .Values.artifactory.externalPort -}} +apiVersion: apps/v1 +kind: {{ .Values.nginx.kind }} +metadata: + name: {{ template "artifactory-ha.nginx.fullname" . }} + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + component: {{ .Values.nginx.name }} +{{- if .Values.nginx.labels }} +{{ toYaml .Values.nginx.labels | indent 4 }} +{{- end }} +{{- with .Values.nginx.deployment.annotations }} + annotations: +{{ toYaml . | indent 4 }} +{{- end }} +spec: +{{- if ne .Values.nginx.kind "DaemonSet" }} + replicas: {{ .Values.nginx.replicaCount }} +{{- end }} + selector: + matchLabels: + app: {{ template "artifactory-ha.name" . }} + release: {{ .Release.Name }} + component: {{ .Values.nginx.name }} + template: + metadata: + annotations: + checksum/nginx-conf: {{ include (print $.Template.BasePath "/nginx-conf.yaml") . | sha256sum }} + checksum/nginx-artifactory-conf: {{ include (print $.Template.BasePath "/nginx-artifactory-conf.yaml") . | sha256sum }} + {{- range $key, $value := .Values.nginx.annotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} + labels: + app: {{ template "artifactory-ha.name" . }} + chart: {{ template "artifactory-ha.chart" . }} + component: {{ .Values.nginx.name }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +{{- if .Values.nginx.labels }} +{{ toYaml .Values.nginx.labels | indent 8 }} +{{- end }} + spec: + {{- if .Values.nginx.podSecurityContext.enabled }} + securityContext: {{- omit .Values.nginx.podSecurityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + serviceAccountName: {{ template "artifactory-ha.serviceAccountName" . }} + terminationGracePeriodSeconds: {{ .Values.nginx.terminationGracePeriodSeconds }} + {{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }} +{{- include "artifactory-ha.imagePullSecrets" . | indent 6 }} + {{- end }} + {{- if .Values.nginx.priorityClassName }} + priorityClassName: {{ .Values.nginx.priorityClassName | quote }} + {{- end }} + {{- if .Values.nginx.topologySpreadConstraints }} + topologySpreadConstraints: +{{ tpl (toYaml .Values.nginx.topologySpreadConstraints) . 
| indent 8 }} + {{- end }} + initContainers: + {{- if .Values.nginx.customInitContainers }} +{{ tpl (include "artifactory.nginx.customInitContainers" .) . | indent 6 }} + {{- end }} + - name: "setup" + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.imagePullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/sh' + - '-c' + - > + rm -rfv {{ .Values.nginx.persistence.mountPath }}/lost+found; + mkdir -p {{ .Values.nginx.persistence.mountPath }}/logs; + resources: + {{- toYaml .Values.initContainers.resources | nindent 10 }} + volumeMounts: + - mountPath: {{ .Values.nginx.persistence.mountPath | quote }} + name: nginx-volume + containers: + - name: {{ .Values.nginx.name }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list . "nginx") }} + imagePullPolicy: {{ .Values.nginx.image.pullPolicy }} + {{- if .Values.nginx.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.nginx.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.nginx.customCommand }} + command: +{{- tpl (include "nginx.command" .) . 
| indent 10 }} + {{- end }} + ports: +{{ if .Values.nginx.customPorts }} +{{ toYaml .Values.nginx.customPorts | indent 8 }} +{{ end }} + # DEPRECATION NOTE: The following is to maintain support for values pre 1.3.1 and + # will be cleaned up in a later version + {{- if .Values.nginx.http }} + {{- if .Values.nginx.http.enabled }} + - containerPort: {{ .Values.nginx.http.internalPort }} + name: http + {{- end }} + {{- else }} # DEPRECATED + - containerPort: {{ .Values.nginx.internalPortHttp }} + name: http-internal + {{- end }} + {{- if .Values.nginx.https }} + {{- if .Values.nginx.https.enabled }} + - containerPort: {{ .Values.nginx.https.internalPort }} + name: https + {{- end }} + {{- else }} # DEPRECATED + - containerPort: {{ .Values.nginx.internalPortHttps }} + name: https-internal + {{- end }} + {{- if .Values.artifactory.ssh.enabled }} + - containerPort: {{ .Values.nginx.ssh.internalPort }} + name: tcp-ssh + {{- end }} + {{- with .Values.nginx.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + volumeMounts: + - name: nginx-conf + mountPath: /etc/nginx/nginx.conf + subPath: nginx.conf + - name: nginx-artifactory-conf + mountPath: "{{ .Values.nginx.persistence.mountPath }}/conf.d/" + - name: nginx-volume + mountPath: {{ .Values.nginx.persistence.mountPath | quote }} + {{- if .Values.nginx.https.enabled }} + - name: ssl-certificates + mountPath: "{{ .Values.nginx.persistence.mountPath }}/ssl" + {{- end }} + {{- if .Values.nginx.customVolumeMounts }} +{{ tpl (include "artifactory.nginx.customVolumeMounts" .) . | indent 8 }} + {{- end }} + resources: +{{ toYaml .Values.nginx.resources | indent 10 }} + {{- if .Values.nginx.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.nginx.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.nginx.readinessProbe.enabled }} + readinessProbe: +{{ tpl .Values.nginx.readinessProbe.config . 
| indent 10 }} + {{- end }} + {{- if .Values.nginx.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.nginx.livenessProbe.config . | indent 10 }} + {{- end }} + {{- $mountPath := .Values.nginx.persistence.mountPath }} + {{- range .Values.nginx.loggers }} + - name: {{ . | replace "_" "-" | replace "." "-" }} + image: {{ include "artifactory-ha.getImageInfoByValue" (list $ "initContainers") }} + imagePullPolicy: {{ $.Values.initContainers.image.pullPolicy }} + command: + - tail + args: + - '-F' + - '{{ $mountPath }}/logs/{{ . }}' + volumeMounts: + - name: nginx-volume + mountPath: {{ $mountPath }} + resources: +{{ toYaml $.Values.nginx.loggersResources | indent 10 }} + {{- end }} + {{- if .Values.nginx.customSidecarContainers }} +{{ tpl (include "artifactory.nginx.customSidecarContainers" .) . | indent 6 }} + {{- end }} + {{- if or .Values.nginx.nodeSelector .Values.global.nodeSelector }} +{{ tpl (include "nginx.nodeSelector" .) . | indent 6 }} + {{- end }} + {{- with .Values.nginx.affinity }} + affinity: +{{ toYaml . | indent 8 }} + {{- end }} + {{- with .Values.nginx.tolerations }} + tolerations: +{{ toYaml . | indent 8 }} + {{- end }} + volumes: + {{- if .Values.nginx.customVolumes }} +{{ tpl (include "artifactory.nginx.customVolumes" .) . | indent 6 }} + {{- end }} + - name: nginx-conf + configMap: + {{- if .Values.nginx.customConfigMap }} + name: {{ .Values.nginx.customConfigMap }} + {{- else }} + name: {{ template "artifactory-ha.fullname" . }}-nginx-conf + {{- end }} + - name: nginx-artifactory-conf + configMap: + {{- if .Values.nginx.customArtifactoryConfigMap }} + name: {{ .Values.nginx.customArtifactoryConfigMap }} + {{- else }} + name: {{ template "artifactory-ha.fullname" . }}-nginx-artifactory-conf + {{- end }} + + - name: nginx-volume + {{- if .Values.nginx.persistence.enabled }} + persistentVolumeClaim: + claimName: {{ .Values.nginx.persistence.existingClaim | default (include "artifactory-ha.nginx.fullname" .) 
}} + {{- else }} + emptyDir: {} + {{- end }} + {{- if .Values.nginx.https.enabled }} + - name: ssl-certificates + secret: + {{- if .Values.nginx.tlsSecretName }} + secretName: {{ .Values.nginx.tlsSecretName }} + {{- else }} + secretName: {{ template "artifactory-ha.fullname" . }}-nginx-certificate + {{- end }} + {{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-pdb.yaml b/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-pdb.yaml new file mode 100644 index 0000000000..0aed993682 --- /dev/null +++ b/charts/jfrog/artifactory-ha/107.90.14/templates/nginx-pdb.yaml @@ -0,0 +1,23 @@ +{{- if .Values.nginx.enabled -}} +{{- if semverCompare " --from-literal=license_token=${TOKEN} --from-literal=iam_role=${ROLE_ARN}` +aws: + license: + enabled: false + licenseConfigSecretName: + region: us-east-1 +## Container Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container +## @param containerSecurityContext.enabled Enabled containers' Security Context +## @param containerSecurityContext.runAsNonRoot Set container's Security Context runAsNonRoot +## @param containerSecurityContext.privileged Set container's Security Context privileged +## @param containerSecurityContext.allowPrivilegeEscalation Set container's Security Context allowPrivilegeEscalation +## @param containerSecurityContext.capabilities.drop List of capabilities to be dropped +## @param containerSecurityContext.seccompProfile.type Set container's Security Context seccomp profile +## +containerSecurityContext: + enabled: true + runAsNonRoot: true + privileged: false + allowPrivilegeEscalation: false + seccompProfile: + type: RuntimeDefault + capabilities: + drop: + - ALL +## The following router settings are to configure only when splitServicesToContainers set to true +router: + name: router + image: + registry: releases-docker.jfrog.io + repository: jfrog/router + 
tag: 7.118.3 + pullPolicy: IfNotPresent + serviceRegistry: + ## Service registry (Access) TLS verification skipped if enabled + insecure: false + internalPort: 8082 + externalPort: 8082 + tlsEnabled: false + ## Extra environment variables that can be used to tune router to your needs. + ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: MY_ENV_VAR + # value: "" + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "1Gi" + # cpu: "1" + + ## Add lifecycle hooks for router container + lifecycle: + ## From Artifactory versions 7.52.x, Wait for Artifactory to complete any open uploads or downloads before terminating + preStop: + exec: + command: ["sh", "-c", "while [[ $(curl --fail --silent --connect-timeout 2 http://localhost:8081/artifactory/api/v1/system/liveness) =~ OK ]]; do echo Artifactory is still alive; sleep 2; done"] + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + ## Add custom volumesMounts + customVolumeMounts: | + # - name: custom-script + # mountPath: /scripts/script.sh + # subPath: script.sh + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} {{ include "artifactory-ha.scheme" . }}://localhost:{{ .Values.router.internalPort }}/router/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " prepended. 
+ unifiedSecretPrependReleaseName: true + image: + registry: releases-docker.jfrog.io + repository: jfrog/artifactory-pro + # tag: + pullPolicy: IfNotPresent + ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ + schedulerName: + ## Create a priority class for the Artifactory pods or use an existing one + ## NOTE - Maximum allowed value of a user defined priority is 1000000000 + priorityClass: + create: false + value: 1000000000 + ## Override default name + # name: + ## Use an existing priority class + # existingPriorityClass: + ## Delete the db.properties file in ARTIFACTORY_HOME/etc/db.properties + deleteDBPropertiesOnStartup: true + database: + maxOpenConnections: 80 + tomcat: + maintenanceConnector: + port: 8091 + connector: + maxThreads: 200 + sendReasonPhrase: false + extraConfig: 'acceptCount="400"' + ## certificates added to this secret will be copied to $JFROG_HOME/artifactory/var/etc/security/keys/trusted directory + customCertificates: + enabled: false + # certificateSecretName: + ## Support for metrics is only available for Artifactory 7.7.x (appVersions) and above. + ## To enable set `.Values.artifactory.metrics.enabled` to `true` + ## Note: `openMetrics` was deprecated as part of 7.87.x and renamed to `metrics` + ## Refer - https://www.jfrog.com/confluence/display/JFROG/Open+Metrics + metrics: + enabled: false + ## Settings for pushing metrics to Insight - enable filebeat to true + filebeat: + enabled: false + log: + enabled: false + ## Log level for filebeat. Possible values: debug, info, warning, or error.
+ level: "info" + ## Elasticsearch details for filebeat to connect + elasticsearch: + url: "Elasticsearch URL where JFrog Insight is installed. For example, http://:8082" + username: "" + password: "" + ## Support for Cold Artifact Storage + ## set 'coldStorage.enabled' to 'true' only for the Artifactory instance that you are designating as the Cold instance + ## Refer - https://jfrog.com/help/r/jfrog-platform-administration-documentation/setting-up-cold-artifact-storage + coldStorage: + enabled: false + ## This directory is intended for use with NFS eventual configuration for HA + ## When enabling this section, the system.yaml will include the haDataDir section. + ## The location of Artifactory Data directory and Artifactory Filestore will be modified accordingly and will be shared among all nodes. + ## It's recommended to leave haDataDir disabled, and the default BinarystoreXml will set the Filestore location as configured in artifactory.persistence.nfs.dataDir. + haDataDir: + enabled: false + path: + haBackupDir: + enabled: false + path: + ## Files to copy to ARTIFACTORY_HOME/ on each Artifactory startup + ## Note: From 107.46.x chart versions, copyOnEveryStartup is not needed for binarystore.xml, it is always copied via initContainers + copyOnEveryStartup: + ## Absolute path + # - source: /artifactory_bootstrap/artifactory.cluster.license + ## Relative to ARTIFACTORY_HOME/ + # target: etc/artifactory/ + + ## Sidecar containers for tailing Artifactory logs + loggers: [] + # - access-audit.log + # - access-request.log + # - access-security-audit.log + # - access-service.log + # - artifactory-access.log + # - artifactory-event.log + # - artifactory-import-export.log + # - artifactory-request.log + # - artifactory-service.log + # - frontend-request.log + # - frontend-service.log + # - metadata-request.log + # - metadata-service.log + # - router-request.log + # - router-service.log + # - router-traefik.log + # - derby.log + + ## Loggers containers resources +
loggersResources: {} + # requests: + # memory: "10Mi" + # cpu: "10m" + # limits: + # memory: "100Mi" + # cpu: "50m" + + ## Sidecar containers for tailing Tomcat (catalina) logs + catalinaLoggers: [] + # - tomcat-catalina.log + # - tomcat-localhost.log + + ## Tomcat (catalina) loggers resources + catalinaLoggersResources: {} + # requests: + # memory: "10Mi" + # cpu: "10m" + # limits: + # memory: "100Mi" + # cpu: "50m" + + ## Migration support from 6.x to 7.x. + migration: + enabled: false + timeoutSeconds: 3600 + ## Extra pre-start command in migration Init Container to install JDBC driver for MySql/MariaDb/Oracle + # preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && curl -o /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar" + ## Add custom init containers execution before predefined init containers + customInitContainersBegin: | + # - name: "custom-setup" + # image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'touch {{ .Values.artifactory.persistence.mountPath }}/example-custom-setup' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: volume + ## Add custom init containers + ## Add custom init containers execution after predefined init containers + customInitContainers: | + # - name: "custom-systemyaml-setup" + # image: {{ include "artifactory-ha.getImageInfoByValue" (list . 
"initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'curl -o {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml https:///systemyaml' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: volume + ## Add custom sidecar containers + ## - The provided example uses a custom volume (customVolumes) + ## - The provided example shows running container as root (id 0) + customSidecarContainers: | + # - name: "sidecar-list-etc" + # image: {{ include "artifactory-ha.getImageInfoByValue" (list . "initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'sh /scripts/script.sh' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: volume + # - mountPath: "/scripts/script.sh" + # name: custom-script + # subPath: script.sh + # resources: + # requests: + # memory: "32Mi" + # cpu: "50m" + # limits: + # memory: "128Mi" + # cpu: "100m" + ## Add custom volumes + ## If .Values.artifactory.unifiedSecretInstallation is true then secret name should be '{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret'. 
+ customVolumes: | + # - name: custom-script + # configMap: + # name: custom-script + ## Add custom volumesMounts + customVolumeMounts: | + # - name: custom-script + # mountPath: "/scripts/script.sh" + # subPath: script.sh + # - name: posthook-start + # mountPath: "/scripts/posthook-start.sh" + # subPath: posthook-start.sh + # - name: prehook-start + # mountPath: "/scripts/prehook-start.sh" + # subPath: prehook-start.sh + ## Add custom persistent volume mounts - Available to the entire namespace + customPersistentVolumeClaim: {} + # name: + # mountPath: + # accessModes: + # - "-" + # size: + # storageClassName: + + ## Artifactory HA requires a unique master key. Each Artifactory node must have the same master key! + ## You can generate one with the command: "openssl rand -hex 32" + ## Pass it to helm with '--set artifactory.masterKey=${MASTER_KEY}' + ## Alternatively, you can use a pre-existing secret with a key called master-key by specifying masterKeySecretName + ## IMPORTANT: You should NOT use the example masterKey for a production deployment! + ## IMPORTANT: This is mandatory for a fresh install of 7.x (App version) + # masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + # masterKeySecretName: + + ## Join Key to connect other services to Artifactory. + ## IMPORTANT: Setting this value overrides the existing joinKey + ## IMPORTANT: You should NOT use the example joinKey for a production deployment!
+ # joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + ## Alternatively, you can use a pre-existing secret with a key called join-key by specifying joinKeySecretName + # joinKeySecretName: + + ## Registration Token for JFConnect + # jfConnectToken: + ## Alternatively, you can use a pre-existing secret with a key called jfconnect-token by specifying jfConnectTokenSecretName + # jfConnectTokenSecretName: + + ## Add custom secrets - secret per file + ## If .Values.artifactory.unifiedSecretInstallation is true then secret name should be '{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret' common to all secrets + customSecrets: + # - name: custom-secret + # key: custom-secret.yaml + # data: > + # custom_secret_config: + # parameter1: value1 + # parameter2: value2 + # - name: custom-secret2 + # key: custom-secret2.config + # data: | + # here the custom secret 2 config + + ## If false, all service console logs will not redirect to a common console.log + consoleLog: false + ## admin allows setting the password for the default admin user. + ## See: https://www.jfrog.com/confluence/display/JFROG/Users+and+Groups#UsersandGroups-RecreatingtheDefaultAdminUserrecreate + admin: + ip: "127.0.0.1" + username: "admin" + password: + secret: + dataKey: + ## Artifactory license. + license: + ## licenseKey is the license key in plain text. Use either this or the license.secret setting + licenseKey: + ## If artifactory.license.secret is passed, it will be mounted as + ## ARTIFACTORY_HOME/etc/artifactory.cluster.license and loaded at run time. + secret: + ## The dataKey should be the name of the secret data key created.
+ dataKey: + ## Create configMap with artifactory.config.import.xml and security.import.xml and pass name of configMap in following parameter + configMapName: + ## Add any list of configmaps to Artifactory + configMaps: | + # posthook-start.sh: |- + # echo "This is a post start script" + # posthook-end.sh: |- + # echo "This is a post end script" + ## List of secrets for Artifactory user plugins. + ## One Secret per plugin's files. + userPluginSecrets: + # - archive-old-artifacts + # - build-cleanup + # - webhook + # - '{{ template "my-chart.fullname" . }}' + + ## Extra pre-start command to install JDBC driver for MySql/MariaDb/Oracle + # preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && curl -o /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar" + + ## Add lifecycle hooks for artifactory container + lifecycle: {} + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + # preStop: + # exec: + # command: ["/bin/sh","-c","echo Hello from the preStop handler"] + + ## Extra environment variables that can be used to tune Artifactory to your needs. 
+ ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: SERVER_XML_ARTIFACTORY_PORT + # value: "8081" + # - name: SERVER_XML_ARTIFACTORY_MAX_THREADS + # value: "200" + # - name: SERVER_XML_ACCESS_MAX_THREADS + # value: "50" + # - name: SERVER_XML_ARTIFACTORY_EXTRA_CONFIG + # value: "" + # - name: SERVER_XML_ACCESS_EXTRA_CONFIG + # value: "" + # - name: SERVER_XML_EXTRA_CONNECTOR + # value: "" + # - name: DB_POOL_MAX_ACTIVE + # value: "100" + # - name: DB_POOL_MAX_IDLE + # value: "10" + # - name: MY_SECRET_ENV_VAR + # valueFrom: + # secretKeyRef: + # name: my-secret-name + # key: my-secret-key + + ## System YAML entries now reside under files/system.yaml. + ## You can provide the specific values that you want to add or override under 'artifactory.extraSystemYaml'. + ## For example: + ## extraSystemYaml: + ## shared: + ## node: + ## id: my-instance + ## The entries provided under 'artifactory.extraSystemYaml' are merged with files/system.yaml to create the final system.yaml. + ## If you have already provided system.yaml under 'artifactory.systemYaml', the values in that entry take precedence over files/system.yaml + ## You can modify specific entries with your own value under `artifactory.extraSystemYaml`. The values under extraSystemYaml override the values under 'artifactory.systemYaml' and files/system.yaml + extraSystemYaml: {} + ## systemYaml is intentionally commented and the previous content has been moved under files/system.yaml. + ## You have to add all the entries of the system.yaml file here, and they override the values in files/system.yaml. + # systemYaml: + + ## IMPORTANT: If overriding artifactory.internalPort: + ## DO NOT use a port lower than 1024 as Artifactory runs as non-root and cannot bind to ports lower than 1024!
+ externalPort: 8082 + internalPort: 8082 + externalArtifactoryPort: 8081 + internalArtifactoryPort: 8081 + terminationGracePeriodSeconds: 30 + ## Pod Security Context + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + ## @param artifactory.podSecurityContext.enabled Enable security context + ## @param artifactory.podSecurityContext.runAsNonRoot Set pod's Security Context runAsNonRoot + ## @param artifactory.podSecurityContext.runAsUser User ID for the pod + ## @param artifactory.podSecurityContext.runAsGroup Group ID for the pod + ## @param artifactory.podSecurityContext.fsGroup Group ID for the pod + ## + podSecurityContext: + enabled: true + runAsNonRoot: true + runAsUser: 1030 + runAsGroup: 1030 + fsGroup: 1030 + # fsGroupChangePolicy: "Always" + # seLinuxOptions: {} + ## The following settings are to configure the frequency of the liveness and startup probes. + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.artifactory.tomcat.maintenanceConnector.port }}/artifactory/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + # storageClassName: "-" + + ## Set the persistence storage type.
This will apply the matching binarystore.xml to Artifactory config + ## Supported types are: + ## file-system (default) + ## nfs + ## google-storage + ## google-storage-v2 + ## google-storage-v2-direct (Recommended for GCS - Google Cloud Storage) + ## aws-s3-v3 + ## s3-storage-v3-direct (Recommended for AWS S3) + ## s3-storage-v3-archive + ## azure-blob + ## azure-blob-storage-direct + ## azure-blob-storage-v2-direct (Recommended for Azure Blob Storage) + type: file-system + ## Use binarystoreXml to provide a custom binarystore.xml + ## This is intentionally commented out; the previous content of binarystoreXml has been moved under files/binarystore.xml + ## binarystoreXml: + + ## For artifactory.persistence.type file-system + fileSystem: + ## Need to have the following set + existingSharedClaim: + enabled: false + numberOfExistingClaims: 1 + ## Should be a child directory of {{ .Values.artifactory.persistence.mountPath }} + dataDir: "{{ .Values.artifactory.persistence.mountPath }}/artifactory-data" + backupDir: "/var/opt/jfrog/artifactory-backup" + ## You may also use existing shared claims for the data and backup storage. This allows storage (NAS for example) to be used for Data and Backup dirs which are safe to share across multiple artifactory nodes. + ## You may specify numberOfExistingClaims to indicate how many of these existing shared claims to mount. (Default = 1) + ## Create PVCs with ReadWriteMany that match the naming conventions: + ## {{ template "artifactory-ha.fullname" . }}-data-pvc- + ## {{ template "artifactory-ha.fullname" . }}-backup-pvc + ## Example (using numberOfExistingClaims: 2) + ## myexample-data-pvc-0 + ## myexample-data-pvc-1 + ## myexample-backup-pvc + ## Note: While you need two PVCs fronting two PVs, multiple PVs can be attached to the same storage in many cases allowing you to share an underlying drive.
+ ## For artifactory.persistence.type nfs + ## If using NFS as the shared storage, you must have a running NFS server that is accessible by your Kubernetes + ## cluster nodes. + ## Need to have the following set + nfs: + ## Must pass actual IP of NFS server with '--set artifactory.persistence.nfs.ip=${NFS_IP}' + ip: + haDataMount: "/data" + haBackupMount: "/backup" + dataDir: "/var/opt/jfrog/artifactory-ha" + backupDir: "/var/opt/jfrog/artifactory-backup" + capacity: 200Gi + mountOptions: [] + ## For artifactory.persistence.type google-storage, google-storage-v2, google-storage-v2-direct + googleStorage: + ## When using GCP buckets as your binary store (Available with enterprise license only) + gcpServiceAccount: + enabled: false + ## Use either an existing secret prepared in advance or put the config (replace the content) in the values + ## ref: https://github.com/jfrog/charts/blob/master/stable/artifactory-ha/README.md#google-storage + # customSecretName: + # config: | + # { + # "type": "service_account", + # "project_id": "", + # "private_key_id": "?????", + # "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n", + # "client_email": "???@j.iam.gserviceaccount.com", + # "client_id": "???????", + # "auth_uri": "https://accounts.google.com/o/oauth2/auth", + # "token_uri": "https://oauth2.googleapis.com/token", + # "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", + # "client_x509_cert_url": "https://www.googleapis.com/robot/v1....." + # } + endpoint: commondatastorage.googleapis.com + httpsOnly: false + ## Set a unique bucket name + bucketName: "artifactory-ha-gcp" + ## GCP Bucket Authentication with Identity and Credential is deprecated.
+ ## identity: + ## credential: + path: "artifactory-ha/filestore" + bucketExists: false + useInstanceCredentials: false + enableSignedUrlRedirect: false + ## For artifactory.persistence.type aws-s3-v3, s3-storage-v3-direct, s3-storage-v3-archive + awsS3V3: + testConnection: false + identity: + credential: + region: + bucketName: artifactory-aws + path: artifactory/filestore + endpoint: + port: + useHttp: + maxConnections: 50 + connectionTimeout: + socketTimeout: + kmsServerSideEncryptionKeyId: + kmsKeyRegion: + kmsCryptoMode: + useInstanceCredentials: true + usePresigning: false + signatureExpirySeconds: 300 + signedUrlExpirySeconds: 30 + cloudFrontDomainName: + cloudFrontKeyPairId: + cloudFrontPrivateKey: + enableSignedUrlRedirect: false + enablePathStyleAccess: false + multiPartLimit: + multipartElementSize: + ## For artifactory.persistence.type azure-blob, azure-blob-storage-direct, azure-blob-storage-v2-direct + azureBlob: + accountName: + accountKey: + endpoint: + containerName: + multiPartLimit: 100000000 + multipartElementSize: 50000000 + testConnection: false + service: + name: artifactory + type: ClusterIP + ## @param service.ipFamilyPolicy Controller Service ipFamilyPolicy (optional, cloud specific) + ## This can be either SingleStack, PreferDualStack or RequireDualStack + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilyPolicy: "" + ## @param service.ipFamilies Controller Service ipFamilies (optional, cloud specific) + ## This can be either ["IPv4"], ["IPv6"], ["IPv4", "IPv6"] or ["IPv6", "IPv4"] + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilies: [] + ## For supporting whitelist on the Artifactory service (useful if setting service.type=LoadBalancer) + ## Set this to a list of IP CIDR ranges + ## Example: loadBalancerSourceRanges: ['10.10.10.5/32', '10.11.10.5/32'] + ## or pass from helm command line + ## Example: helm install ... 
--set artifactory.service.loadBalancerSourceRanges='{10.10.10.5/32,10.11.10.5/32}'
+ loadBalancerSourceRanges: []
+ annotations: {}
+ ## Which nodes in the cluster should be in the external load balancer pool (have external traffic routed to them)
+ ## Supported pool values
+ ## members
+ ## all
+ pool: members
+ ## If the type is NodePort you can set a fixed port
+ # nodePort: 32082
+ statefulset:
+ annotations: {}
+ ssh:
+ enabled: false
+ internalPort: 1339
+ externalPort: 1339
+ annotations: {}
+ ## Spread Artifactory pods evenly across your nodes or some other topology
+ ## Note this applies to both the primary and replicas
+ topologySpreadConstraints: []
+ # - maxSkew: 1
+ # topologyKey: kubernetes.io/hostname
+ # whenUnsatisfiable: DoNotSchedule
+ # labelSelector:
+ # matchLabels:
+ # app: '{{ template "artifactory-ha.name" . }}'
+ # role: '{{ template "artifactory-ha.name" . }}'
+ # release: "{{ .Release.Name }}"
+
+ ## Type specific configurations.
+ ## There is a difference between the primary and the member nodes.
+ ## Customising their resources and java parameters is done here.
+ primary:
+ name: artifactory-ha-primary
+ ## preStartCommand specific to the primary node, to be run after artifactory.preStartCommand
+ # preStartCommand:
+ labels: {}
+ persistence:
+ ## Set existingClaim to true or false
+ ## If true, you must prepare a PVC with a name such as `volume-myrelease-artifactory-ha-primary-0`
+ existingClaim: false
+ replicaCount: 3
+ # minAvailable: 1
+
+ updateStrategy:
+ type: RollingUpdate
+ ## Resources for the primary node
+ resources: {}
+ # requests:
+ # memory: "1Gi"
+ # cpu: "500m"
+ # limits:
+ # memory: "2Gi"
+ # cpu: "1"
+ ## The following Java options are passed to the java process running Artifactory primary node.
+ ## You should set them according to the resources set above + javaOpts: + # xms: "1g" + # xmx: "2g" + # corePoolSize: 24 + jmx: + enabled: false + port: 9010 + host: + ssl: false + # When authenticate is true, accessFile and passwordFile are required + authenticate: false + accessFile: + passwordFile: + # other: "" + nodeSelector: {} + tolerations: [] + affinity: {} + ## Only used if "affinity" is empty + podAntiAffinity: + ## Valid values are "soft" or "hard"; any other value indicates no anti-affinity + type: "soft" + topologyKey: "kubernetes.io/hostname" + node: + name: artifactory-ha-member + ## preStartCommand specific to the member node, to be run after artifactory.preStartCommand + # preStartCommand: + labels: {} + persistence: + ## Set existingClaim to true or false + ## If true, you must prepare a PVC with the name e.g `volume-myrelease-artifactory-ha-member-0` + existingClaim: false + replicaCount: 0 + updateStrategy: + type: RollingUpdate + minAvailable: 1 + ## Resources for the member nodes + resources: {} + # requests: + # memory: "1Gi" + # cpu: "500m" + # limits: + # memory: "2Gi" + # cpu: "1" + ## The following Java options are passed to the java process running Artifactory member nodes. + ## You should set them according to the resources set above + javaOpts: + # xms: "1g" + # xmx: "2g" + # corePoolSize: 24 + jmx: + enabled: false + port: 9010 + host: + ssl: false + # When authenticate is true, accessFile and passwordFile are required + authenticate: false + accessFile: + passwordFile: + # other: "" + nodeSelector: {} + ## Wait for Artifactory primary + waitForPrimaryStartup: + enabled: true + ## Setting time will override the built in test and will just wait the set time + time: + tolerations: [] + ## Complete specification of the "affinity" of the member nodes; if this is non-empty, + ## "podAntiAffinity" values are not used. 
+ affinity: {} + ## Only used if "affinity" is empty + podAntiAffinity: + ## Valid values are "soft" or "hard"; any other value indicates no anti-affinity + type: "soft" + topologyKey: "kubernetes.io/hostname" +frontend: + name: frontend + enabled: true + internalPort: 8070 + ## Extra environment variables that can be used to tune frontend to your needs. + ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: MY_ENV_VAR + # value: "" + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "1Gi" + # cpu: "1" + ## Session settings + session: + ## Time in minutes after which the frontend token will need to be refreshed + timeoutMinutes: '30' + ## Add lifecycle hooks for frontend container + lifecycle: {} + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + # preStop: + # exec: + # command: ["/bin/sh","-c","echo Hello from the preStop handler"] + + ## The following settings are to configure the frequency of the liveness and startup probes when splitServicesToContainers set to true + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.frontend.internalPort }}/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " --cert=ca.crt --key=ca.private.key` + # customCertificatesSecretName: + + ## When resetAccessCAKeys is true, Access will regenerate the CA certificate and matching private key + # resetAccessCAKeys: false + database: + maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 50 + sendReasonPhrase: false + extraConfig: 'acceptCount="100"' + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:8040/access/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " /var/opt/jfrog/nginx/message"] + # preStop: + # exec: + # 
command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"] + + ## Sidecar containers for tailing Nginx logs + loggers: [] + # - access.log + # - error.log + + ## Loggers containers resources + loggersResources: {} + # requests: + # memory: "64Mi" + # cpu: "25m" + # limits: + # memory: "128Mi" + # cpu: "50m" + + ## Logs options + logs: + stderr: false + stdout: false + level: warn + ## A list of custom ports to expose on the NGINX pod. Follows the conventional Kubernetes yaml syntax for container ports. + customPorts: [] + # - containerPort: 8066 + # name: docker + + ## The nginx main conf was moved to files/nginx-main-conf.yaml. This key is commented out to keep support for the old configuration + # mainConf: | + + ## The nginx artifactory conf was moved to files/nginx-artifactory-conf.yaml. This key is commented out to keep support for the old configuration + # artifactoryConf: | + customInitContainers: "" + customSidecarContainers: "" + customVolumes: "" + customVolumeMounts: "" + customCommand: + ## allows overwriting the command for the nginx container. 
+ ## defaults to [ 'nginx', '-g', 'daemon off;' ] + + service: + ## For minikube, set this to NodePort, elsewhere use LoadBalancer + type: LoadBalancer + ssloffload: false + ## @param service.ipFamilyPolicy Controller Service ipFamilyPolicy (optional, cloud specific) + ## This can be either SingleStack, PreferDualStack or RequireDualStack + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilyPolicy: "" + ## @param service.ipFamilies Controller Service ipFamilies (optional, cloud specific) + ## This can be either ["IPv4"], ["IPv6"], ["IPv4", "IPv6"] or ["IPv6", "IPv4"] + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilies: [] + ## For supporting whitelist on the Nginx LoadBalancer service + ## Set this to a list of IP CIDR ranges + ## Example: loadBalancerSourceRanges: ['10.10.10.5/32', '10.11.10.5/32'] + ## or pass from helm command line + ## Example: helm install ... --set nginx.service.loadBalancerSourceRanges='{10.10.10.5/32,10.11.10.5/32}' + loadBalancerSourceRanges: [] + ## Provide static ip address + loadBalancerIP: + ## There are two available options: "Cluster" (default) and "Local". + externalTrafficPolicy: Cluster + labels: {} + # label-key: label-value + ## If the type is NodePort you can set a fixed port + # nodePort: 32082 + ## A list of custom ports to be exposed on nginx service. Follows the conventional Kubernetes yaml syntax for service ports. 
+ customPorts: [] + # - port: 8066 + # targetPort: 8066 + # protocol: TCP + # name: docker + + annotations: {} + ## Renamed nginx internalPort 80,443 to 8080,8443 to support openshift + http: + enabled: true + externalPort: 80 + internalPort: 8080 + https: + enabled: true + externalPort: 443 + internalPort: 8443 + ## DEPRECATED: The following will be replaced by L1065-L1076 in a future release + # externalPortHttp: 80 + # internalPortHttp: 8080 + # externalPortHttps: 443 + # internalPortHttps: 8443 + + ssh: + internalPort: 1339 + externalPort: 1339 + ## The following settings are to configure the frequency of the liveness and readiness probes. + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} {{ include "nginx.scheme" . }}://localhost:{{ include "nginx.port" . }}/ + initialDelaySeconds: {{ if semverCompare " + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + # storageClassName: "-" + resources: {} + # requests: + # memory: "250Mi" + # cpu: "100m" + # limits: + # memory: "250Mi" + # cpu: "500m" + + nodeSelector: {} + tolerations: [] + affinity: {} +## Filebeat Sidecar container +## The provided filebeat configuration is for Artifactory logs. It assumes you have a logstash installed and configured properly. 
+filebeat: + enabled: false + name: artifactory-filebeat + image: + repository: "docker.elastic.co/beats/filebeat" + version: 7.16.2 + logstashUrl: "logstash:5044" + terminationGracePeriod: 10 + livenessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + curl --fail 127.0.0.1:5066 + failureThreshold: 3 + initialDelaySeconds: 10 + periodSeconds: 10 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + filebeat test output + failureThreshold: 3 + initialDelaySeconds: 10 + periodSeconds: 10 + timeoutSeconds: 5 + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "100Mi" + # cpu: "100m" + + filebeatYml: | + logging.level: info + path.data: {{ .Values.artifactory.persistence.mountPath }}/log/filebeat + name: artifactory-filebeat + queue.spool: + file: + permissions: 0760 + filebeat.inputs: + - type: log + enabled: true + close_eof: ${CLOSE:false} + paths: + - {{ .Values.artifactory.persistence.mountPath }}/log/*.log + fields: + service: "jfrt" + log_type: "artifactory" + output: + logstash: + hosts: ["{{ .Values.filebeat.logstashUrl }}"] +## Allows to add additional kubernetes resources +## Use --- as a separator between multiple resources +## For an example, refer - https://github.com/jfrog/log-analytics-prometheus/blob/master/helm/artifactory-ha-values.yaml +additionalResources: "" +## Adding entries to a Pod's /etc/hosts file +## For an example, refer - https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases +hostAliases: [] +# - ip: "127.0.0.1" +# hostnames: +# - "foo.local" +# - "bar.local" +# - ip: "10.1.2.3" +# hostnames: +# - "foo.remote" +# - "bar.remote" + +## Toggling this feature is seamless and requires helm upgrade +## will enable all microservices to run in different containers in a single pod (by default it is true) +splitServicesToContainers: true +## Specify common probes parameters +probes: + 
timeoutSeconds: 5 diff --git a/charts/jfrog/artifactory-jcr/107.90.14/CHANGELOG.md b/charts/jfrog/artifactory-jcr/107.90.14/CHANGELOG.md new file mode 100644 index 0000000000..3a984d30f2 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/CHANGELOG.md @@ -0,0 +1,206 @@ +# JFrog Container Registry Chart Changelog +All changes to this chart will be documented in this file. + +## [107.90.14] - Feb 20, 2024 +* Updated `artifactory.installerInfo` content + +## [107.80.0] - Feb 1, 2024 +* Updated README.md to create a namespace using `--create-namespace` as part of helm install + +## [107.74.0] - Nov 23, 2023 +* **IMPORTANT** +* Added min kubeVersion ">= 1.19.0-0" in chart.yaml + +## [107.66.0] - Jul 20, 2023 +* Disabled federation services when splitServicesToContainers=true + +## [107.45.0] - Aug 25, 2022 +* Included event service as mandatory and remove the flag from values.yaml + +## [107.41.0] - Jul 22, 2022 +* Bumping chart version to align with app version +* Disabled jfconnect and event services when splitServicesToContainers=true + +## [107.19.4] - May 27, 2021 +* Bumping chart version to align with app version +* Update dependency Artifactory chart version to 107.19.4 + +## [4.0.0] - Apr 22, 2021 +* **Breaking change:** +* Increased default postgresql persistence size to `200Gi` +* Update postgresql tag version to `13.2.0-debian-10-r55` +* Update postgresql chart version to `10.3.18` in chart.yaml - [10.x Upgrade Notes](https://github.com/bitnami/charts/tree/master/bitnami/postgresql#to-1000) +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x/12.x's postgresql.image.tag, previous postgresql.persistence.size and databaseUpgradeReady=true +* **IMPORTANT** +* This chart is only helm v3 compatible. 
+* Update dependency Artifactory chart version to 12.0.0 (Artifactory 7.18.3) + +## [3.8.0] - Apr 5, 2021 +* **IMPORTANT** +* Added `charts.jfrog.io` as default JFrog Helm repository +* Update dependency Artifactory chart version to 11.13.0 (Artifactory 7.17.5) + +## [3.7.0] - Mar 31, 2021 +* Update dependency Artifactory chart version to 11.12.2 (Artifactory 7.17.4) + +## [3.6.0] - Mar 15, 2021 +* Update dependency Artifactory chart version to 11.10.0 (Artifactory 7.16.3) + +## [3.5.1] - Mar 03, 2021 +* Update dependency Artifactory chart version to 11.9.3 (Artifactory 7.15.4) + +## [3.5.0] - Feb 18, 2021 +* Update dependency Artifactory chart version to 11.9.0 (Artifactory 7.15.3) + +## [3.4.1] - Feb 08, 2021 +* Update dependency Artifactory chart version to 11.8.0 (Artifactory 7.12.8) + +## [3.4.0] - Jan 4, 2020 +* Update dependency Artifactory chart version to 11.7.4 (Artifactory 7.12.5) + +## [3.3.1] - Dec 1, 2020 +* Update dependency Artifactory chart version to 11.5.4 (Artifactory 7.11.5) + +## [3.3.0] - Nov 23, 2020 +* Update dependency Artifactory chart version to 11.5.2 (Artifactory 7.11.2) + +## [3.2.2] - Nov 9, 2020 +* Update dependency Artifactory chart version to 11.4.5 (Artifactory 7.10.6) + +## [3.2.1] - Nov 2, 2020 +* Update dependency Artifactory chart version to 11.4.4 (Artifactory 7.10.5) + +## [3.2.0] - Oct 19, 2020 +* Update dependency Artifactory chart version to 11.4.0 (Artifactory 7.10.2) + +## [3.1.0] - Sep 30, 2020 +* Update dependency Artifactory chart version to 11.1.0 (Artifactory 7.9.0) + +## [3.0.2] - Sep 23, 2020 +* Updates readme + +## [3.0.1] - Sep 15, 2020 +* Update dependency Artifactory chart version to 11.0.1 (Artifactory 7.7.8) + +## [3.0.0] - Sep 14, 2020 +* **Breaking change:** Added `image.registry` and changed `image.version` to `image.tag` for docker images +* Update dependency Artifactory chart version to 11.0.0 (Artifactory 7.7.3) + +## [2.5.1] - Jul 29, 2020 +* Update dependency Artifactory chart version to 10.0.12 
(Artifactory 7.6.3)
+
+## [2.5.0] - Jul 10, 2020
+* Update dependency Artifactory chart version to 10.0.3 (Artifactory 7.6.2)
+* **IMPORTANT**
+* Added ChartCenter Helm repository in README
+
+## [2.4.0] - Jun 30, 2020
+* Update dependency Artifactory chart version to 9.6.0 (Artifactory 7.6.1)
+
+## [2.3.1] - Jun 12, 2020
+* Update dependency Artifactory chart version to 9.5.2 (Artifactory 7.5.7)
+
+## [2.3.0] - Jun 1, 2020
+* Update dependency Artifactory chart version to 9.5.0 (Artifactory 7.5.5)
+
+## [2.2.5] - May 27, 2020
+* Update dependency Artifactory chart version to 9.4.9 (Artifactory 7.4.3)
+
+## [2.2.4] - May 20, 2020
+* Update dependency Artifactory chart version to 9.4.6 (Artifactory 7.4.3)
+
+## [2.2.3] - May 07, 2020
+* Update dependency Artifactory chart version to 9.4.5 (Artifactory 7.4.3)
+* Add `installerInfo` string format
+
+## [2.2.2] - Apr 28, 2020
+* Update dependency Artifactory chart version to 9.4.4 (Artifactory 7.4.3)
+
+## [2.2.1] - Apr 27, 2020
+* Update dependency Artifactory chart version to 9.4.3 (Artifactory 7.4.1)
+
+## [2.2.0] - Apr 14, 2020
+* Update dependency Artifactory chart version to 9.4.0 (Artifactory 7.4.1)
+
+## [2.1.6] - Apr 13, 2020
+* Update dependency Artifactory chart version to 9.3.1 (Artifactory 7.3.2)
+
+## [2.1.5] - Apr 8, 2020
+* Update dependency Artifactory chart version to 9.2.8 (Artifactory 7.3.2)
+
+## [2.1.4] - Mar 30, 2020
+* Update dependency Artifactory chart version to 9.2.3 (Artifactory 7.3.2)
+
+## [2.1.3] - Mar 30, 2020
+* Update dependency Artifactory chart version to 9.2.1 (Artifactory 7.3.2)
+
+## [2.1.2] - Mar 26, 2020
+* Update dependency Artifactory chart version to 9.1.5 (Artifactory 7.3.2)
+
+## [2.1.1] - Mar 25, 2020
+* Update dependency Artifactory chart version to 9.1.4 (Artifactory 7.3.2)
+
+## [2.1.0] - Mar 23, 2020
+* Update dependency Artifactory chart version to 9.1.3
(Artifactory 7.3.2)
+
+## [2.0.13] - Mar 19, 2020
+* Update dependency Artifactory chart version to 9.0.28 (Artifactory 7.2.1)
+
+## [2.0.12] - Mar 17, 2020
+* Update dependency Artifactory chart version to 9.0.26 (Artifactory 7.2.1)
+
+## [2.0.11] - Mar 11, 2020
+* Unified charts public release
+
+## [2.0.10] - Mar 8, 2020
+* Update dependency Artifactory chart version to 9.0.20 (Artifactory 7.2.1)
+
+## [2.0.9] - Feb 26, 2020
+* Update dependency Artifactory chart version to 9.0.15 (Artifactory 7.2.1)
+
+## [2.0.0] - Feb 12, 2020
+* Update dependency Artifactory chart version to 9.0.0 (Artifactory 7.0.0)
+
+## [1.1.1] - Feb 3, 2020
+* Update dependency Artifactory chart version to 8.4.4
+
+## [1.1.0] - Jan 19, 2020
+* Update dependency Artifactory chart version to 8.4.1 (Artifactory 6.17.0)
+
+## [1.0.1] - Dec 31, 2019
+* Update dependency Artifactory chart version to 8.3.5
+
+## [1.0.0] - Dec 23, 2019
+* Update dependency Artifactory chart version to 8.3.3
+
+## [0.2.1] - Dec 12, 2019
+* Update dependency Artifactory chart version to 8.3.1
+
+## [0.2.0] - Dec 1, 2019
+* Updated Artifactory version to 6.16.0
+
+## [0.1.5] - Nov 28, 2019
+* Update dependency Artifactory chart version to 8.2.6
+
+## [0.1.4] - Nov 20, 2019
+* Update Readme
+
+## [0.1.3] - Nov 20, 2019
+* Fix JCR logo url
+* Update dependency to Artifactory 8.2.2 chart
+
+## [0.1.2] - Nov 20, 2019
+* Update JCR logo
+
+## [0.1.1] - Nov 20, 2019
+* Add `appVersion` to Chart.yaml
+
+## [0.1.0] - Nov 20, 2019
+* Initial release of the JFrog Container Registry helm chart
diff --git a/charts/jfrog/artifactory-jcr/107.90.14/Chart.yaml b/charts/jfrog/artifactory-jcr/107.90.14/Chart.yaml
new file mode 100644
index 0000000000..d0da8ca665
--- /dev/null
+++ b/charts/jfrog/artifactory-jcr/107.90.14/Chart.yaml
@@ -0,0 +1,30 @@
+annotations:
+ catalog.cattle.io/certified: partner
+
catalog.cattle.io/display-name: JFrog Container Registry + catalog.cattle.io/kube-version: '>= 1.19.0-0' + catalog.cattle.io/release-name: artifactory-jcr +apiVersion: v2 +appVersion: 7.90.14 +dependencies: +- name: artifactory + repository: file://charts/artifactory + version: 107.90.14 +description: JFrog Container Registry +home: https://jfrog.com/container-registry/ +icon: file://assets/icons/artifactory-jcr.png +keywords: +- artifactory +- jfrog +- container +- registry +- devops +- jfrog-container-registry +kubeVersion: '>= 1.19.0-0' +maintainers: +- email: helm@jfrog.com + name: Chart Maintainers at JFrog +name: artifactory-jcr +sources: +- https://github.com/jfrog/charts +type: application +version: 107.90.14 diff --git a/charts/jfrog/artifactory-jcr/107.90.14/LICENSE b/charts/jfrog/artifactory-jcr/107.90.14/LICENSE new file mode 100644 index 0000000000..8dada3edaf --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. 
+ + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/charts/jfrog/artifactory-jcr/107.90.14/README.md b/charts/jfrog/artifactory-jcr/107.90.14/README.md new file mode 100644 index 0000000000..c0051e61d1 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/README.md @@ -0,0 +1,125 @@ +# JFrog Container Registry Helm Chart + +JFrog Container Registry is a free Artifactory edition with Docker and Helm repositories support. + +**Heads up: Our Helm Chart docs are moving to our main documentation site. For Artifactory installers, see [Installing Artifactory](https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory).** + +## Prerequisites Details + +* Kubernetes 1.19+ + +## Chart Details +This chart will do the following: + +* Deploy JFrog Container Registry +* Deploy an optional Nginx server +* Deploy an optional PostgreSQL Database +* Optionally expose Artifactory with Ingress [Ingress documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/) + +## Installing the Chart + +### Add JFrog Helm repository + +Before installing JFrog helm charts, you need to add the [JFrog helm repository](https://charts.jfrog.io) to your helm client. 
+ +```bash +helm repo add jfrog https://charts.jfrog.io +helm repo update +``` + +### Install Chart +To install the chart with the release name `jfrog-container-registry`: +```bash +helm upgrade --install jfrog-container-registry --set artifactory.postgresql.postgresqlPassword= jfrog/artifactory-jcr --namespace artifactory-jcr --create-namespace +``` + +### Accessing JFrog Container Registry +**NOTE:** If using artifactory or nginx service type `LoadBalancer`, it might take a few minutes for JFrog Container Registry's public IP to become available. + +### Updating JFrog Container Registry +Once you have a new chart version, you can upgrade your deployment with +```bash +helm upgrade jfrog-container-registry jfrog/artifactory-jcr --namespace artifactory-jcr --create-namespace +``` + +### Special Upgrade Notes +#### Artifactory upgrade from 6.x to 7.x (App Version) +Artifactory 6.x to 7.x upgrade requires a one-time migration process. This is done automatically on pod startup if needed. +In extreme cases, it's possible to configure the migration timeout with the following configuration. The provided default should be more than enough for the migration to complete. +```yaml +artifactory: + artifactory: + # Migration support from 6.x to 7.x + migration: + enabled: true + timeoutSeconds: 3600 +``` +* Note: If you are upgrading from 1.x to 3.x and above chart versions, please delete the existing PostgreSQL StatefulSet before upgrading the chart, due to breaking changes in the PostgreSQL subchart. +```bash +kubectl delete statefulsets -postgresql +``` +* For more details about artifactory chart upgrades, refer [here](https://github.com/jfrog/charts/blob/master/stable/artifactory/UPGRADE_NOTES.md) + +### Deleting JFrog Container Registry + +```bash +helm delete jfrog-container-registry --namespace artifactory-jcr +``` + +This will delete your JFrog Container Registry deployment.
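The `--set` flag shown in the install command above can also be captured in a values file, which is easier to keep under version control. A minimal sketch, assuming nothing beyond the key path already used in that command; the file name and password value are illustrative placeholders:

```yaml
# jcr-values.yaml (hypothetical file name)
artifactory:
  postgresql:
    # Placeholder -- replace with your own PostgreSQL password
    postgresqlPassword: "change-me"
```

It can then be passed with `helm upgrade --install jfrog-container-registry -f jcr-values.yaml jfrog/artifactory-jcr --namespace artifactory-jcr --create-namespace`.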
+**NOTE:** You might have left behind persistent volumes. You should explicitly delete them with +```bash +kubectl delete pvc ... +kubectl delete pv ... +``` + +## Database +The JFrog Container Registry chart comes with PostgreSQL deployed by default.
+ +For details on the PostgreSQL configuration or customising the database, look at the options described in the [Artifactory helm chart](https://github.com/jfrog/charts/tree/master/stable/artifactory). + +### Ingress and TLS +To get Helm to create an ingress object with a hostname, add these `--set` flags to your Helm command: +```bash +helm upgrade --install jfrog-container-registry \ + --set artifactory.nginx.enabled=false \ + --set artifactory.ingress.enabled=true \ + --set artifactory.ingress.hosts[0]="artifactory.company.com" \ + --set artifactory.artifactory.service.type=NodePort \ + jfrog/artifactory-jcr --namespace artifactory-jcr --create-namespace +``` + +To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret in the namespace: + +```bash +kubectl create secret tls artifactory-tls --cert=path/to/tls.cert --key=path/to/tls.key +``` + +Include the secret's name, along with the desired hostnames, in the Artifactory Ingress TLS section of your custom `values.yaml` file: + +```yaml +artifactory: + artifactory: + ingress: + ## If true, Artifactory Ingress will be created + ## + enabled: true + + ## Artifactory Ingress hostnames + ## Must be provided if Ingress is enabled + ## + hosts: + - jfrog-container-registry.domain.com + annotations: + kubernetes.io/tls-acme: "true" + ## Artifactory Ingress TLS configuration + ## Secrets must be manually created in the namespace + ## + tls: + - secretName: artifactory-tls + hosts: + - jfrog-container-registry.domain.com +``` + +## Useful links +https://www.jfrog.com +https://www.jfrog.com/confluence/ diff --git a/charts/jfrog/artifactory-jcr/107.90.14/app-readme.md b/charts/jfrog/artifactory-jcr/107.90.14/app-readme.md new file mode 100644 index 0000000000..9d9b7d85fc --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/app-readme.md @@ -0,0 +1,18 @@ +# JFrog Container Registry Helm Chart + +Universal Repository Manager supporting all
major packaging formats, build tools and CI servers. + +## Chart Details +This chart will do the following: + +* Deploy JFrog Container Registry +* Deploy an optional Nginx server +* Optionally expose Artifactory with Ingress [Ingress documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/) + + +## Useful links +Blog: [Herd Trust Into Your Rancher Labs Multi-Cloud Strategy with Artifactory](https://jfrog.com/blog/herd-trust-into-your-rancher-labs-multi-cloud-strategy-with-artifactory/) + +## Activate Your Artifactory Instance +Don't have a license? Please send an email to [rancher-jfrog-licenses@jfrog.com](mailto:rancher-jfrog-licenses@jfrog.com) to get it. + diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/.helmignore b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/.helmignore new file mode 100644 index 0000000000..b6e97f07fb --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/.helmignore @@ -0,0 +1,24 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +OWNERS + +tests/ \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/CHANGELOG.md b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/CHANGELOG.md new file mode 100644 index 0000000000..61f0d3d9a2 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/CHANGELOG.md @@ -0,0 +1,1365 @@ +# JFrog Artifactory Chart Changelog +All changes to this chart will be documented in this file. 
+ +## [107.90.14] - July 18, 2024 +* Fixed an issue where adding a colon in the image registry breaks deployment [GH-1892](https://github.com/jfrog/charts/pull/1892) +* Added new `nginx.hosts` to use Nginx server_name directive instead of `ingress.hosts` +* Added a deprecation notice of ingress.hosts when `nginx.enabled` is true +* Added new evidence service +* Corrected database connection values based on sizing +* **IMPORTANT** +* Separated access from artifactory tomcat to run on its own dedicated tomcat + * With this change access will be running in its own dedicated container + * This gives the ability to control resources and java options specific to access + This can be done by passing the following: + `access.javaOpts.other` + `access.resources` + `access.extraEnvironmentVariables` +* Updated the example link for downloading the DB driver +* Added Binary Provider recommendations + +## [107.89.0] - June 7, 2024 +* Fixed the indentation of the commented-out sections in the values.yaml file +* Fixed sizing values by removing `JF_SHARED_NODE_HAENABLED` in xsmall/small configurations + +## [107.88.0] - May 29, 2024 +* **IMPORTANT** +* Refactored `nginx.artifactoryConf` and `nginx.mainConf` configuration (moved to files/nginx-artifactory-conf.yaml and files/nginx-main-conf.yaml instead of keys in values.yaml) + +## [107.87.0] - May 29, 2024 +* Renamed `.Values.artifactory.openMetrics` to `.Values.artifactory.metrics` + +## [107.85.0] - May 29, 2024 +* Changed `migration.enabled` to false by default.
For 6.x to 7.x migration, this flag needs to be set to `true` + +## [107.84.0] - May 29, 2024 +* Added image section for `initContainers` instead of `initContainerImage` +* Renamed `router.image.imagePullPolicy` to `router.image.pullPolicy` +* Removed image section for `loggers` +* Added support for `global.versions.initContainers` to override `initContainers.image.tag` +* Fixed an issue with extraSystemYaml merge +* **IMPORTANT** +* Renamed `artifactory.setSecurityContext` to `artifactory.podSecurityContext` +* Renamed `artifactory.uid` to `artifactory.podSecurityContext.runAsUser` +* Renamed `artifactory.gid` to `artifactory.podSecurityContext.runAsGroup` and `artifactory.podSecurityContext.fsGroup` +* Renamed `artifactory.fsGroupChangePolicy` to `artifactory.podSecurityContext.fsGroupChangePolicy` +* Renamed `artifactory.seLinuxOptions` to `artifactory.podSecurityContext.seLinuxOptions` +* Added flag `allowNonPostgresql`, defaults to false +* Update postgresql tag version to `15.6.0-debian-12-r5` +* Added a check if `initContainerImage` exists +* Fixed an issue to generate unified secret to support artifactory fullname [GH-1882](https://github.com/jfrog/charts/issues/1882) +* Fixed an issue with template render on loggers [GH-1883](https://github.com/jfrog/charts/issues/1883) +* Fixed resource constraints for "setup" initContainer of nginx deployment [GH-962](https://github.com/jfrog/charts/issues/962) +* Added `.Values.artifactory.unifiedSecretPrependReleaseName` for unified secret to prepend release name +* Fixed maxCacheSize and cacheProviderDir mix-up under azure-blob-storage-v2-direct template in binarystore.xml + +## [107.82.0] - Mar 04, 2024 +* Added `disableRouterBypass` flag as an experimental feature, to disable the artifactoryPath /artifactory/ and route all traffic through the Router.
+* Removed Replicator service + +## [107.81.0] - Feb 20, 2024 +* **IMPORTANT** +* Refactored systemYaml configuration (moved to files/system.yaml instead of key in values.yaml) +* Added ability to provide `extraSystemYaml` configuration in values.yaml which will merge with the existing system yaml when `systemYamlOverride` is not given [GH-1848](https://github.com/jfrog/charts/pull/1848) +* Added option to modify the new cache configs, maxFileSizeLimit and skipDuringUpload +* Added IPV4/IPV6 Dualstack flag support for Artifactory and nginx service +* Added `singleStackIPv6Cluster` flag, which manages the Nginx configuration to enable listening on IPv6 and proxying. +* Fixing broken link for creating additional kubernetes resources. Refer [here](https://github.com/jfrog/log-analytics-prometheus/blob/master/helm/artifactory-values.yaml) +* Refactored installerInfo configuration (moved to files/installer-info.json instead of key in values.yaml) + +## [107.80.0] - Feb 20, 2024 +* Updated README.md to create a namespace using `--create-namespace` as part of helm install + +## [107.79.0] - Feb 20, 2024 +* **IMPORTANT** +* Added `unifiedSecretInstallation` flag which enables single unified secret holding all internal (chart) secrets to `true` by default +* Added support for azure-blob-storage-v2-direct config +* Added option to set Nginx to write access_log to container STDOUT +* **Important change:** +* Update postgresql tag version to `15.2.0-debian-11-r23` +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! 
+* If this is an upgrade and you are using the default bundled PostgreSQL (`postgresql.enabled=true`), you need to pass the previous 9.x/10.x/12.x/13.x's postgresql.image.tag, the previous postgresql.persistence.size, and databaseUpgradeReady=true + +## [107.77.0] - April 22, 2024 +* Removed integration service +* Added recommended postgresql sizing configurations under sizing directory +* Updated artifactory-federation (probes, port, embedded mode) +* Fixed - Removed duplicate keys of the sizing yaml file +* Fixed broken nginx port [GH-1860](https://github.com/jfrog/charts/issues/1860) +* Added nginx.customCommand to use custom commands for the nginx container + +## [107.76.0] - Dec 13, 2023 +* Added connectionTimeout and socketTimeout parameters under AWSS3 binarystore section +* Reduced nginx startupProbe initialDelaySeconds + +## [107.74.0] - Nov 30, 2023 +* Added recommended sizing configurations under sizing directory, please refer [here](README.md/#apply-sizing-configurations-to-the-chart) +* **IMPORTANT** +* Added min kubeVersion ">= 1.19.0-0" in chart.yaml + +## [107.70.0] - Nov 30, 2023 +* Fixed - StatefulSet pod annotations changed from range to toYaml [GH-1828](https://github.com/jfrog/charts/issues/1828) +* Fixed - Invalid format for awsS3V3 `multiPartLimit,multipartElementSize` in binarystore.xml.
+* Fixed - SecurityContext with runAsGroup in artifactory [GH-1838](https://github.com/jfrog/charts/issues/1838) +* Added support for custom labels in the Nginx pods [GH-1836](https://github.com/jfrog/charts/pull/1836) +* Added podSecurityContext and containerSecurityContext for nginx +* Added support for nginx on openshift, set `podSecurityContext` and `containerSecurityContext` to false +* Renamed nginx internalPort 80,443 to 8080,8443 to support openshift + +## [107.69.0] - Sep 18, 2023 +* Adjust rtfs context +* Fixed - Metadata service does not respect customVolumeMounts for DB CAs [GH-1815](https://github.com/jfrog/charts/issues/1815) + +## [107.68.8] - Sep 18, 2023 +* Reverted - Enabled `unifiedSecretInstallation` by default [GH-1819](https://github.com/jfrog/charts/issues/1819) +* Removed openshift condition check from NOTES.txt + +## [107.68.7] - Aug 28, 2023 +* Enabled `unifiedSecretInstallation` by default + +## [107.67.0] - Aug 28, 2023 +* Add 'extraJavaOpts' and 'port' values to federation service + +## [107.66.0] - Aug 28, 2023 +* Added federation service container in artifactory +* Add rtfs service to ingress in artifactory + +## [107.64.0] - Aug 28, 2023 +* Added support to configure event.webhooks within generated system.yaml +* Fixed an issue to generate ssl certificate should support artifactory fullname +* Added binarystore.xml template to persistence storage type `nfs`. The default Filestore location configured according to artifactory.persistence.nfs.dataDir. +* Added 'multiPartLimit' and 'multipartElementSize' parameters to awsS3V3 binary providers. +* Increased default Artifactory Tomcat acceptCount config to 400 +* Fixed Illegal Strict-Transport-Security header in nginx config + +## [107.63.0] - Aug 28, 2023 +* Added support for Openshift by adding the securityContext in container level. +* **IMPORTANT** +* Disable securityContext in container and pod level to deploy postgres on openshift. 
+* Fixed support for fsGroup in non-openshift environment and runAsGroup in openshift environment. +* Fixed - Helm Template Error when using artifactory.loggers [GH-1791](https://github.com/jfrog/charts/issues/1791) +* Removed the nginx disable condition for openshift +* Fixed jfconnect disabling as micro-service on splitcontainers [GH-1806](https://github.com/jfrog/charts/issues/1806) + +## [107.62.0] - Jun 5, 2023 +* Upgraded to autoscaling/v2 +* Added support for 'port' and 'useHttp' parameters for s3-storage-v3 binary provider [GH-1767](https://github.com/jfrog/charts/issues/1767) + +## [107.61.0] - May 31, 2023 +* Added new binary provider `google-storage-v2-direct` +* Added missing parameter 'enableSignedUrlRedirect' to 'googleStorage' + +## [107.60.0] - May 31, 2023 +* Enabled `splitServicesToContainers` to true by default +* Updated the recommended values for small, medium and large installations to support the 'splitServicesToContainers' + +## [107.59.0] - May 31, 2023 +* Fixed reference of `terminationGracePeriodSeconds` +* Added support for Cold Artifact Storage as part of the systemYaml configuration (disabled by default) +* Added new binary provider `s3-storage-v3-archive` +* Fixed jfconnect disabling as micro-service on non-splitcontainers +* Fixed wrong cache-fs provider ID of cluster-s3-storage-v3 in the binarystore.xml [GH-1772](https://github.com/jfrog/charts/issues/1772) + +## [107.58.0] - Mar 23, 2023 +* Updated postgresql multi-arch tag version to `13.10.0-debian-11-r14` +* Removed obsolete remove-lost-found initContainer +* Added env JF_SHARED_NODE_HAENABLED under frontend when running in the container split mode + +## [107.57.0] - Mar 02, 2023 +* Updated initContainerImage and logger image to `ubi9/ubi-minimal:9.1.0.1793` + +## [107.55.0] - Jan 31, 2023 +* Updated initContainerImage and logger image to `ubi9/ubi-minimal:9.1.0.1760` +* Added a custom preStop to Artifactory router for allowing graceful termination to complete + +## [107.53.0]
- Jan 20, 2023 +* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.7.1049` + +## [107.50.0] - Jan 20, 2023 +* Updated postgresql tag version to `13.9.0-debian-11-11` +* Fixed an issue for capabilities check of ingress +* Updated jfrogUrl text path in migrate.sh file +* Added a note that from 107.46.x chart versions, `copyOnEveryStartup` is not needed for binarystore.xml, it is always copied via initContainers. For more Info, Refer [GH-1723](https://github.com/jfrog/charts/issues/1723) + +## [107.49.0] - Jan 16, 2023 +* Added support for setting `seLinuxOptions` in `securityContext` [GH-1699](https://github.com/jfrog/charts/pull/1699) +* Added option to enable/disable proxy_request_buffering and proxy_buffering_off [GH-1686](https://github.com/jfrog/charts/pull/1686) +* Updated initContainerImage and logger image to `ubi8/ubi-minimal:8.7.1049` + +## [107.48.0] - Oct 27, 2022 +* Updated router version to `7.51.0` + +## [107.47.0] - Sep 29, 2022 +* Updated initContainerImage to `ubi8/ubi-minimal:8.6-941` +* Added support for annotations for artifactory statefulset and nginx deployment [GH-1665](https://github.com/jfrog/charts/pull/1665) +* Updated router version to `7.49.0` + +## [107.46.0] - Sep 14, 2022 +* **IMPORTANT** +* Added support for lifecycle hooks for all containers, changed `artifactory.postStartCommand` to `.Values.artifactory.lifecycle.postStart.exec.command` +* Updated initContainerImage to `ubi8/ubi-minimal:8.6-902` +* Update nginx configuration to allow websocket requests when using pipelines +* Fixed an issue to allow artifactory to make direct API calls to store instead via jfconnect service when `splitServicesToContainers=true` +* Refactor binarystore.xml configuration (moved to `files/binarystore.xml` instead of key in values.yaml) +* Added new binary providers `cluster-s3-storage-v3`, `s3-storage-v3-direct`, `azure-blob-storage-direct`, `google-storage-v2` +* Deprecated (removed) `aws-s3` binary provider [JetS3t 
library](https://www.jfrog.com/confluence/display/JFROG/Configuring+the+Filestore#ConfiguringtheFilestore-BinaryProvider) +* Deprecated (removed) `google-storage` binary provider and force persistence storage type `google-storage` to work with `google-storage-v2` only +* Copy binarystore.xml in init Container to fix existing persistence on file system in clear text +* Removed obsolete `.Values.artifactory.binarystore.enabled` key +* Removed `newProbes.enabled`, default to new probes +* Added nginx.customCommand using inotifyd to reload nginx's config upon ssl secret or configmap changes [GH-1640](https://github.com/jfrog/charts/pull/1640) + +## [107.43.0] - Aug 25, 2022 +* Added flag `artifactory.replicator.ingress.enabled` to enable/disable ingress for replicator +* Updated initContainerImage to `ubi8/ubi-minimal:8.6-854` +* Updated router version to `7.45.0` +* Added flag `artifactory.schedulerName` to set for the pods the value of schedulerName field [GH-1606](https://github.com/jfrog/charts/issues/1606) +* Enabled TLS based on access or router in values.yaml + +## [107.42.0] - Aug 25, 2022 +* Enabled database creds secret to use from unified secret +* Updated router version to `7.42.0` +* Fix duplicate volumes for userPluginSecrets [GH-1650](https://github.com/jfrog/charts/issues/1650) +* Added support to truncate (> 63 chars) for unifiedCustomSecretVolumeName + +## [107.41.0] - June 27, 2022 +* Added support for nginx.terminationGracePeriodSeconds [GH-1645](https://github.com/jfrog/charts/issues/1645) +* Use an alternate command for `find` to copy custom certificates +* Added support for circle of trust using `circleOfTrustCertificatesSecret` secret name [GH-1623](https://github.com/jfrog/charts/pull/1623) + +## [107.40.0] - June 16, 2022 +* Added support for PodDisruptionBudget [GH-1618](https://github.com/jfrog/charts/issues/1618) +* From artifactory 7.38.x, joinKey can be retrieved from Admin > User Management > Settings in UI +* Allow templating for pod
annotations [GH-1634](https://github.com/jfrog/charts/pull/1634) +* Fixed `customPersistentPodVolumeClaim` name to `customPersistentVolumeClaim` +* Added flags to control enable/disable infra services in splitServicesToContainers + +## [107.39.0] - May 31, 2022 +* Fix default `artifactory.async.corePoolSize` [GH-1612](https://github.com/jfrog/charts/issues/1612) +* Added support of nginx annotations +* Reduce startupProbe `initialDelaySeconds` +* Align all liveness and readiness probes failureThreshold to `5` +* Added new flag `unifiedSecretInstallation` to enable a single unified secret holding all the artifactory secrets +* Updated router version to `7.38.0` +* Add support for NFS config with directories `haBackupDir` and `haDataDir` +* Fixed - disable jfconnect on oss/jcr/cpp flavours [GH-1630](https://github.com/jfrog/charts/issues/1630) + +## [107.38.0] - May 04, 2022 +* Added support for `global.nodeSelector` to artifactory and nginx pods +* Updated router version to `7.36.1` +* Added support for custom global probes timeout +* Updated frontend container command +* Added topologySpreadConstraints to artifactory and nginx, and add lifecycle hooks to nginx [GH-1596](https://github.com/jfrog/charts/pull/1596) +* Added support of extraEnvironmentVariables for all infra services containers +* Enabled the consumption (jfconnect) flag by default +* Fix jfconnect disabling on non-splitcontainers + +## [107.37.0] - Mar 08, 2022 +* Added support for customPorts in nginx deployment +* Bugfix - Wrong proxy_pass configurations for /artifactory/ in the default artifactory.conf +* Added signedUrlExpirySeconds option to artifactory.persistence.type aws-S3-V3 +* Updated router version to `7.35.0` +* Added useInstanceCredentials,enableSignedUrlRedirect option to google-storage-v2 +* Changed dependency charts repo to `charts.jfrog.io` + +## [107.36.0] - Mar 03, 2022 +* Remove pdn tracker which starts replicator service +* Added silent option for curl probes +* Added
readiness health check for the artifactory container for k8s version < 1.20 +* Fix property file migration issue to system.yaml 6.x to 7.x + +## [107.35.0] - Feb 08, 2022 +* Updated router version to `7.32.1` + +## [107.33.0] - Jan 11, 2022 +* Add more user friendly support for anti-affinity +* Pod anti-affinity is now enabled by default (soft rule) +* Readme fixes +* Added support for setting `fsGroupChangePolicy` +* Added nginx customInitContainers, customVolumes, customSidecarContainers [GH-1565](https://github.com/jfrog/charts/pull/1565) +* Updated router version to `7.30.0` + +## [107.32.0] - Dec 22, 2021 +* Updated logger image to `jfrog/ubi-minimal:8.5-204` +* Added default `8091` as `artifactory.tomcat.maintenanceConnector.port` for probes check +* Refactored probes to replace httpGet probes with basic exec + curl +* Refactored `database-creds` secret to create only when database values are passed +* Added new endpoints for probes `/artifactory/api/v1/system/liveness` and `/artifactory/api/v1/system/readiness` +* Enabled `newProbes:true` by default to use these endpoints +* Fix filebeat sidecar spool file permissions +* Updated filebeat sidecar container to `7.16.2` + +## [107.31.0] - Dec 17, 2021 +* Added support for HorizontalPodAutoscaler apiVersion `autoscaling/v2beta2` +* Remove integration service feature flag to make it mandatory service +* Update postgresql tag version to `13.4.0-debian-10-r39` +* Fixed `artifactory.resources` indentation in `migration-artifactory` init container [GH-1562](https://github.com/jfrog/charts/issues/1562) +* Refactored `router.requiredServiceTypes` to support platform chart + +## [107.30.0] - Nov 30, 2021 +* Fixed incorrect permission for filebeat.yaml +* Updated healthcheck (liveness/readiness) api for integration service +* Disable readiness health check for the artifactory container when running in the container split mode +* Ability to start replicator on enabling pdn tracker + +## [107.29.0] - Nov 26, 2021 +* Added 
integration service container in artifactory +* Add support for Ingress Class Name in Ingress Spec [GH-1516](https://github.com/jfrog/charts/pull/1516) +* Fixed chart values to use curl instead of wget [GH-1529](https://github.com/jfrog/charts/issues/1529) +* Updated nginx config to allow websockets when pipelines is enabled +* Moved router.topology.local.requireqservicetypes from system.yaml to router as environment variable +* Added jfconnect in system.yaml +* Updated artifactory container's health probes to use artifactory api on rt-split +* Updated initContainerImage to `jfrog/ubi-minimal:8.5-204` +* Updated router version to `7.28.2` +* Set Jfconnect enabled to `false` in the artifactory container when running in the container split mode + +## [107.28.0] - Nov 11, 2021 +* Added default cpu and memory values in initContainers +* Updated router version to `7.26.0` +* Updated (`rbac.create` and `serviceAccount.create` to false by default) for least privileges +* Fixed incorrect data type for `Values.router.serviceRegistry.insecure` in default values.yaml [GH-1514](https://github.com/jfrog/charts/pull/1514/files) +* **IMPORTANT** +* Changed init-container images from `alpine` to `ubi8/ubi-minimal` +* Added support for AWS License Manager using `.Values.aws.licenseConfigSecretName` + +## [107.27.0] - Oct 6, 2021 +* **Breaking change** +* Aligned probe structure (moved probes variables under config block) +* Added support for new probes (set to false by default) +* Bugfix - Invalid format for `multiPartLimit,multipartElementSize,maxCacheSize` in binarystore.xml [GH-1466](https://github.com/jfrog/charts/issues/1466) +* Added missioncontrol container in artifactory +* Dropped NET_RAW capability for the containers +* Added resources to migration-artifactory init container +* Added resources to all rt split containers +* Updated router version to `7.25.1` +* Added support for Ingress networking.k8s.io/v1/Ingress for k8s >=1.22
[GH-1487](https://github.com/jfrog/charts/pull/1487) +* Added min kubeVersion ">= 1.14.0-0" in chart.yaml +* Update alpine tag version to `3.14.2` +* Update busybox tag version to `1.33.1` +* Artifactory chart support for cluster license + +## [107.26.0] - Aug 23, 2021 +* Added Observability container (only when `splitServicesToContainers` is enabled) +* Support for high availability (when replicaCount > 1) +* Added min kubeVersion ">= 1.12.0-0" in chart.yaml + +## [107.25.0] - Aug 13, 2021 +* Updated readme of chart to point to wiki. Refer [Installing Artifactory](https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory) +* Added startupProbe and livenessProbe for RT-split containers +* Updated router version to 7.24.1 +* Added security hardening fixes +* Enabled startup probes for k8s >= 1.20.x +* Changed network policy to allow all ingress and egress traffic +* Added Observability changes +* Added support for global.versions.router (only when `splitServicesToContainers` is enabled) + +## [107.24.0] - July 27, 2021 +* Support global and product specific tags at the same time +* Added support for artifactory containers split + +## [107.23.0] - July 8, 2021 +* Bug fix - logger sideCar picks up Wrong File in helm +* Allow filebeat metrics configuration in values.yaml + +## [107.22.0] - July 6, 2021 +* Update alpine tag version to `3.14.0` +* Added `nodePort` support to artifactory-service and nginx-service templates +* Removed redundant `terminationGracePeriodSeconds` in statefulset +* Increased `startupProbe.failureThreshold` time + +## [107.21.3] - July 2, 2021 +* Added ability to change sendreasonphrase value in server.xml via system yaml + +## [107.19.3] - May 20, 2021 +* Fix broken support for startupProbe for k8s < 1.18.x +* Added support for `nameOverride` and `fullnameOverride` in values.yaml + +## [107.18.6] - April 29, 2021 +* Bumping chart version to align with app version +* Add `securityContext` option on nginx container + +## [12.0.0] - 
April 22, 2021 +* **Breaking change:** +* Increased default postgresql persistence size to `200Gi` +* Update postgresql tag version to `13.2.0-debian-10-r55` +* Update postgresql chart version to `10.3.18` in chart.yaml - [10.x Upgrade Notes](https://github.com/bitnami/charts/tree/master/bitnami/postgresql#to-1000) +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x/12.x's postgresql.image.tag, previous postgresql.persistence.size and databaseUpgradeReady=true +* **IMPORTANT** +* This chart is only helm v3 compatible. +* Fixed filebeat-configmap naming +* Explicitly set ServiceAccount `automountServiceAccountToken` to 'true' +* Update alpine tag version to `3.13.5` + +## [11.13.2] - April 15, 2021 +* Updated Artifactory version to 7.17.9 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.9) + +## [11.13.1] - April 6, 2021 +* Updated Artifactory version to 7.17.6 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.6) +* Update alpine tag version to `3.13.4` + +## [11.13.0] - April 5, 2021 +* **IMPORTANT** +* Added `charts.jfrog.io` as default JFrog Helm repository +* Updated Artifactory version to 7.17.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.5) + +## [11.12.2] - Mar 31, 2021 +* Updated Artifactory version to 7.17.4 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.17.4) + +## [11.12.1] - Mar 30, 2021 +* Updated Artifactory version to 7.17.3 +* Add `timeoutSeconds` to all exec probes - Please refer 
[here](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes) + +## [11.12.0] - Mar 24, 2021 +* Updated Artifactory version to 7.17.2 +* Optimized startupProbe time + +## [11.11.0] - Mar 18, 2021 +* Add support to startupProbe + +## [11.10.0] - Mar 15, 2021 +* Updated Artifactory version to 7.16.3 + +## [11.9.5] - Mar 09, 2021 +* Added HSTS header to nginx conf + +## [11.9.4] - Mar 9, 2021 +* Removed bintray URL references in the chart + +## [11.9.3] - Mar 04, 2021 +* Updated Artifactory version to 7.15.4 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.15.4) + +## [11.9.2] - Mar 04, 2021 +* Fixed creation of nginx-certificate-secret when Nginx is disabled + +## [11.9.1] - Feb 19, 2021 +* Update busybox tag version to `1.32.1` + +## [11.9.0] - Feb 18, 2021 +* Updated Artifactory version to 7.15.3 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.15.3) +* Add option to specify update strategy for Artifactory statefulset + +## [11.8.1] - Feb 11, 2021 +* Exposed "multiPartLimit" and "multipartElementSize" for the Azure Blob Storage Binary Provider + +## [11.8.0] - Feb 08, 2021 +* Updated Artifactory version to 7.12.8 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.12.8) +* Support for custom certificates using secrets +* **Important:** Switched docker images download from `docker.bintray.io` to `releases-docker.jfrog.io` +* Update alpine tag version to `3.13.1` + +## [11.7.8] - Jan 25, 2021 +* Add support for hostAliases + +## [11.7.7] - Jan 11, 2021 +* Fix failures when using creds file for configuring google storage + +## [11.7.6] - Jan 11, 2021 +* Updated Artifactory version to 7.12.6 - [Release
Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.12.6) + +## [11.7.5] - Jan 07, 2021 +* Added support for optional tracker dedicated ingress `.Values.artifactory.replicator.trackerIngress.enabled` (defaults to false) + +## [11.7.4] - Jan 04, 2021 +* Fixed gid support for statefulset + +## [11.7.3] - Dec 31, 2020 +* Added gid support for statefulset +* Add setSecurityContext flag to allow securityContext block to be removed from artifactory statefulset + +## [11.7.2] - Dec 29, 2020 +* **Important:** Removed `.Values.metrics` and `.Values.fluentd` (Fluentd and Prometheus integrations) +* Add support for creating additional kubernetes resources - [refer here](https://github.com/jfrog/log-analytics-prometheus/blob/master/artifactory-values.yaml) +* Updated Artifactory version to 7.12.5 + +## [11.7.1] - Dec 21, 2020 +* Updated Artifactory version to 7.12.3 + +## [11.7.0] - Dec 18, 2020 +* Updated Artifactory version to 7.12.2 +* Added `.Values.artifactory.openMetrics.enabled` + +## [11.6.1] - Dec 11, 2020 +* Added configurable `.Values.global.versions.artifactory` in values.yaml + +## [11.6.0] - Dec 10, 2020 +* Update postgresql tag version to `12.5.0-debian-10-r25` +* Fixed `artifactory.persistence.googleStorage.endpoint` from `storage.googleapis.com` to `commondatastorage.googleapis.com` +* Updated chart maintainers email + +## [11.5.5] - Dec 4, 2020 +* **Important:** Renamed `.Values.systemYaml` to `.Values.systemYamlOverride` + +## [11.5.4] - Dec 1, 2020 +* Improve error message returned when attempting helm upgrade command + +## [11.5.3] - Nov 30, 2020 +* Updated Artifactory version to 7.11.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.11) + +## [11.5.2] - Nov 23, 2020 +* Updated Artifactory version to 7.11.2 - [Release 
Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.11) +* Updated port naming on services and pods to allow for istio protocol discovery +* Change semverCompare checks to support hosted Kubernetes +* Add flag to disable creation of ServiceMonitor when enabling prometheus metrics +* Prevent the PostHook command from being executed if the user did not specify a command in the values file +* Fix issue with tls file generation when nginx.https.enabled is false + +## [11.5.1] - Nov 19, 2020 +* Updated Artifactory version to 7.11.2 +* Bugfix - access.config.import.xml overriding Access Federation configurations + +## [11.5.0] - Nov 17, 2020 +* Updated Artifactory version to 7.11.1 +* Update alpine tag version to `3.12.1` + +## [11.4.6] - Nov 10, 2020 +* Pass system.yaml via external secret for advanced use cases +* Added support for custom ingress +* Bugfix - stateful set not picking up changes to database secrets + +## [11.4.5] - Nov 9, 2020 +* Updated Artifactory version to 7.10.6 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.6) + +## [11.4.4] - Nov 2, 2020 +* Add enablePathStyleAccess property for aws-s3-v3 binary provider template + +## [11.4.3] - Nov 2, 2020 +* Updated Artifactory version to 7.10.5 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.5) + +## [11.4.2] - Oct 22, 2020 +* Chown bug fix where Linux capability cannot chown all files causing log line warnings +* Fix Frontend timeout linting issue + +## [11.4.1] - Oct 20, 2020 +* Add flag to disable prepare-custom-persistent-volume init container + +## [11.4.0] - Oct 19, 2020 +* Updated Artifactory version to 7.10.2 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.10.2) + +## [11.3.2] - Oct 15, 2020 +* Add 
support to specify priorityClassName for nginx deployment + +## [11.3.1] - Oct 9, 2020 +* Add support for customInitContainersBegin + +## [11.3.0] - Oct 7, 2020 +* Updated Artifactory version to 7.9.1 +* **Breaking change:** Fix `storageClass` to correct `storageClassName` in values.yaml + +## [11.2.0] - Oct 5, 2020 +* Expose Prometheus metrics via a ServiceMonitor +* Parse log files for metric data with Fluentd + +## [11.1.0] - Sep 30, 2020 +* Updated Artifactory version to 7.9.0 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.9) +* Added support for resources in init container + +## [11.0.11] - Sep 25, 2020 +* Update to use linux capability CAP_CHOWN instead of a root-based init container to avoid any use of root containers to pass Red Hat security requirements + +## [11.0.10] - Sep 28, 2020 +* Setting chart coordinates in mitigation yaml + +## [11.0.9] - Sep 25, 2020 +* Update filebeat version to `7.9.2` + +## [11.0.8] - Sep 24, 2020 +* Fixed issue where container startup still waits for DB when setting `waitForDatabase: false` + +## [11.0.7] - Sep 22, 2020 +* Readme updates + +## [11.0.6] - Sep 22, 2020 +* Fix lint issue in mitigation yaml + +## [11.0.5] - Sep 22, 2020 +* Fix broken mitigation yaml + +## [11.0.4] - Sep 21, 2020 +* Added mitigation yaml for Artifactory - [More info](https://github.com/jfrog/chartcenter/blob/master/docs/securitymitigationspec.md) + +## [11.0.3] - Sep 17, 2020 +* Added configurable session (UI) timeout in frontend microservice + +## [11.0.2] - Sep 17, 2020 +* Added proper required text to be shown during postgres upgrades + +## [11.0.1] - Sep 14, 2020 +* Updated Artifactory version to 7.7.8 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.7.8) + +## [11.0.0] - Sep 2, 2020 +* **Breaking change:** Changed `imagePullSecrets` values from string to list. 
+* **Breaking change:** Added `image.registry` and changed `image.version` to `image.tag` for docker images +* Added support for global values +* Updated maintainers in chart.yaml +* Update postgresql tag version to `12.3.0-debian-10-r71` +* Update postgresql chart version to `9.3.4` in requirements.yaml - [9.x Upgrade Notes](https://github.com/bitnami/charts/tree/master/bitnami/postgresql#900) +* **IMPORTANT** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass previous 9.x/10.x's postgresql.image.tag and databaseUpgradeReady=true + +## [10.1.0] - Aug 13, 2020 +* Updated Artifactory version to 7.7.3 - [Release Notes](https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.7) + +## [10.0.15] - Aug 10, 2020 +* Added enableSignedUrlRedirect for persistent storage type aws-s3-v3. + +## [10.0.14] - Jul 31, 2020 +* Update the README section on Nginx SSL termination to reflect the actual YAML structure. + +## [10.0.13] - Jul 30, 2020 +* Added condition to disable the migration scripts. + +## [10.0.12] - Jul 28, 2020 +* Document Artifactory node affinity. + +## [10.0.11] - Jul 28, 2020 +* Added maxConnections for persistent storage type aws-s3-v3. + +## [10.0.10] - Jul 28, 2020 +* Bugfix / support for userPluginSecrets with Artifactory 7 + +## [10.0.9] - Jul 27, 2020 +* Add tpl to external database secrets +* Modified `scheme` to `artifactory.scheme` + +## [10.0.8] - Jul 23, 2020 +* Added condition to disable the migration init container. + +## [10.0.7] - Jul 21, 2020 +* Updated Artifactory Chart to add node and primary labels to pods and service objects. 
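The bundled-PostgreSQL upgrade notes in the 11.0.0 and 10.0.0 entries both require passing the previously deployed image tag together with `databaseUpgradeReady=true`. A minimal values-override sketch (the tag below is illustrative; use the tag your existing PostgreSQL is actually running, and verify it before upgrading):

```yaml
# Hypothetical upgrade override for the bundled PostgreSQL subchart.
postgresql:
  image:
    tag: 9.6.18-debian-10-r7   # tag of the currently deployed PostgreSQL, not necessarily this one
databaseUpgradeReady: true     # acknowledges the database upgrade prerequisites were reviewed
```

Supplied via `-f` (or equivalent `--set` flags) on `helm upgrade`.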
+ +## [10.0.6] - Jul 20, 2020 +* Support custom CA and certificates + +## [10.0.5] - Jul 13, 2020 +* Updated Artifactory version to 7.6.3 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.6.3 +* Fixed MySQL database jar path in `preStartCommand` in README + +## [10.0.4] - Jul 10, 2020 +* Move some postgresql values to where they should be according to the subchart + +## [10.0.3] - Jul 8, 2020 +* Set Artifactory access client connections to the same value as the access threads + +## [10.0.2] - Jul 6, 2020 +* Updated Artifactory version to 7.6.2 +* **IMPORTANT** +* Added ChartCenter Helm repository in README + +## [10.0.1] - Jul 01, 2020 +* Add dedicated ingress object for Replicator service when enabled + +## [10.0.0] - Jun 30, 2020 +* Update postgresql tag version to `10.13.0-debian-10-r38` +* Update alpine tag version to `3.12` +* Update busybox tag version to `1.31.1` +* **IMPORTANT** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! 
+* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you need to pass postgresql.image.tag=9.6.18-debian-10-r7 and databaseUpgradeReady=true + +## [9.6.0] - Jun 29, 2020 +* Updated Artifactory version to 7.6.1 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.6.1 +* Add tpl for external database secrets + +## [9.5.5] - Jun 25, 2020 +* Stop loading the Nginx stream module because it is now a core module + +## [9.5.4] - Jun 25, 2020 +* Notes.txt update - add --namespace parameter + +## [9.5.3] - Jun 11, 2020 +* Support list of custom secrets + +## [9.5.2] - Jun 12, 2020 +* Updated Artifactory version to 7.5.7 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.5.7 + +## [9.5.1] - Jun 8, 2020 +* Readme update - configuring Artifactory with oracledb + +## [9.5.0] - Jun 1, 2020 +* Updated Artifactory version to 7.5.5 - https://www.jfrog.com/confluence/display/JFROG/Artifactory+Release+Notes#ArtifactoryReleaseNotes-Artifactory7.5 +* Fixes bootstrap configMap permission issue +* Update postgresql tag version to `9.6.18-debian-10-r7` + +## [9.4.9] - May 27, 2020 +* Added Tomcat maxThreads & acceptCount + +## [9.4.8] - May 25, 2020 +* Fixed postgresql README `image` Parameters + +## [9.4.7] - May 24, 2020 +* Fixed typo in README regarding migration timeout + +## [9.4.6] - May 19, 2020 +* Added metadata maxOpenConnections + +## [9.4.5] - May 07, 2020 +* Fix `installerInfo` string format + +## [9.4.4] - Apr 27, 2020 +* Updated Artifactory version to 7.4.3 + +## [9.4.3] - Apr 26, 2020 +* Change order of the customInitContainers to run before the "migration-artifactory" initContainer. 
+ +## [9.4.2] - Apr 24, 2020 +* Fix `artifactory.persistence.awsS3V3.useInstanceCredentials` incorrect conditional logic +* Bump postgresql tag version to `9.6.17-debian-10-r72` in values.yaml + +## [9.4.1] - Apr 16, 2020 +* Custom volumes in migration init container. + +## [9.4.0] - Apr 14, 2020 +* Updated Artifactory version to 7.4.1 + +## [9.3.1] - April 13, 2020 +* Update README with helm v3 commands + +## [9.3.0] - April 10, 2020 +* Use dependency charts from `https://charts.bitnami.com/bitnami` +* Bump postgresql chart version to `8.7.3` in requirements.yaml +* Bump postgresql tag version to `9.6.17-debian-10-r21` in values.yaml + +## [9.2.9] - Apr 8, 2020 +* Added recommended ingress annotation to avoid 413 errors + +## [9.2.8] - Apr 8, 2020 +* Moved migration scripts under `files` directory +* Support preStartCommand in migration Init container as `artifactory.migration.preStartCommand` + +## [9.2.7] - Apr 6, 2020 +* Fix cache size (should be 5gb instead of 50gb since volume claim is only 20gb). 
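The `artifactory.migration.preStartCommand` hook from the 9.2.8 entry above can be set as a values override. A sketch under assumptions: the `migration.enabled` toggle mirrors the "condition to disable the migration init container" mentioned in 10.0.8, and the command itself is a placeholder:

```yaml
artifactory:
  migration:
    enabled: true
    # Hypothetical command; runs in the migration init container before migration starts
    preStartCommand: "echo 'preparing migration'"
```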
+ +## [9.2.6] - Apr 1, 2020 +* Support masterKey and joinKey as secrets + +## [9.2.5] - Apr 1, 2020 +* Fix readme to use `-hex 32` instead of `-hex 16` + +## [9.2.4] - Mar 31, 2020 +* Change the way the artifactory `command:` is set so it will properly pass a SIGTERM to java + +## [9.2.3] - Mar 29, 2020 +* Add Nginx log options: stderr as logfile and log level + +## [9.2.2] - Mar 30, 2020 +* Use the same defaulting mechanism used for the artifactory version used elsewhere in the chart + +## [9.2.1] - Mar 29, 2020 +* Fix logger sidecar configurations to support new file system layout and new log names + +## [9.2.0] - Mar 29, 2020 +* Fix broken admin user bootstrap configuration +* **Breaking change:** renamed `artifactory.accessAdmin` to `artifactory.admin` + +## [9.1.5] - Mar 26, 2020 +* Fix volumeClaimTemplate issue + +## [9.1.4] - Mar 25, 2020 +* Fix volume name used by filebeat container + +## [9.1.3] - Mar 24, 2020 +* Use `postgresqlExtendedConf` for setting custom PostgreSQL configuration (instead of `postgresqlConfiguration`) + +## [9.1.2] - Mar 22, 2020 +* Support for SSL offload in Nginx service (LoadBalancer) layer. Introduced `nginx.service.ssloffload` field with boolean type. + +## [9.1.1] - Mar 23, 2020 +* Moved installer info to values.yaml so it is fully customizable + +## [9.1.0] - Mar 23, 2020 +* Updated Artifactory version to 7.3.2 + +## [9.0.29] - Mar 20, 2020 +* Add support for masterKey trim during 6.x to 7.x migration if the 6.x masterKey is 32-byte hex (64 characters) + +## [9.0.28] - Mar 18, 2020 +* Increased Nginx proxy_buffers size + +## [9.0.27] - Mar 17, 2020 +* Changed all single quotes to double quotes in values files +* useInstanceCredentials variable was declared in S3 settings but not used in chart. Now it is being used. 
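For the masterKey/joinKey handling above (9.2.6 secrets support, 9.2.5's `-hex 32` fix), a sketch of the plain-value form; the key names follow the chart's `artifactory.*` convention but should be verified against your chart version's values.yaml:

```yaml
artifactory:
  # 64-character hex strings, e.g. generated with: openssl rand -hex 32
  masterKey: "<64-char-hex>"
  joinKey: "<64-char-hex>"
```

Per 9.2.6, secret-based variants exist as an alternative to inlining the keys.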
+ +## [9.0.26] - Mar 17, 2020 +* Fix rendering of Service Account annotations + +## [9.0.25] - Mar 16, 2020 +* Update Artifactory readme with extra ingress annotations needed for Artifactory to be set as SSO provider + +## [9.0.24] - Mar 16, 2020 +* Add Unsupported message from 6.18 to 7.2.x (migration) + +## [9.0.23] - Mar 12, 2020 +* Fix README.md rendering issue + +## [9.0.22] - Mar 11, 2020 +* Upgrade Docs update + +## [9.0.21] - Mar 11, 2020 +* Unified charts public release + +## [9.0.20] - Mar 6, 2020 +* Fix path to `/artifactory_bootstrap` +* Add support for controlling the name of the ingress and allow to set more than one cname + +## [9.0.19] - Mar 4, 2020 +* Add support for disabling `consoleLog` in `system.yaml` file + +## [9.0.18] - Feb 28, 2020 +* Add support to process `valueFrom` for extraEnvironmentVariables + +## [9.0.17] - Feb 26, 2020 +* Fix join key secret naming + +## [9.0.16] - Feb 26, 2020 +* Store join key to secret + +## [9.0.15] - Feb 26, 2020 +* Updated Artifactory version to 7.2.1 + +## [9.0.10] - Feb 07, 2020 +* Remove protection flag `databaseUpgradeReady` which was added to check internal postgres upgrade + +## [9.0.0] - Feb 07, 2020 +* Updated Artifactory version to 7.0.0 + +## [8.4.8] - Feb 13, 2020 +* Add support for SSH authentication to Artifactory + +## [8.4.7] - Feb 11, 2020 +* Change Artifactory service port name to be hard-coded to `http` instead of using `{{ .Release.Name }}` + +## [8.4.6] - Feb 9, 2020 +* Add support for `tpl` in the `postStartCommand` + +## [8.4.5] - Feb 4, 2020 +* Support customisable Nginx kind + +## [8.4.4] - Feb 2, 2020 +* Add a comment stating that it is recommended to use an external PostgreSQL with a static password for production installations + +## [8.4.3] - Jan 30, 2020 +* Add the option to configure resources for the logger containers + +## [8.4.2] - Jan 26, 2020 +* Improve `database.user` and `database.password` logic in order to support more use cases and make the configuration less repetitive 
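The `database.*` keys improved in the 8.4.2 entry above (together with `database.url` from the 7.7.2 entry further down) combine roughly like this for an external PostgreSQL; host, database name, and credentials are placeholders:

```yaml
postgresql:
  enabled: false   # skip the bundled PostgreSQL subchart
database:
  type: postgresql
  url: jdbc:postgresql://db.example.com:5432/artifactory
  user: artifactory
  password: "change-me"
```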
+ +## [8.4.1] - Jan 19, 2020 +* Fix replicator port config in nginx replicator configmap + +## [8.4.0] - Jan 19, 2020 +* Updated Artifactory version to 6.17.0 + +## [8.3.6] - Jan 16, 2020 +* Added example for external nginx-ingress + +## [8.3.5] - Dec 30, 2019 +* Fix for nginx probes failing when launched with http disabled + +## [8.3.4] - Dec 24, 2019 +* Better support for custom `artifactory.internalPort` + +## [8.3.3] - Dec 23, 2019 +* Mark empty map values with `{}` + +## [8.3.2] - Dec 16, 2019 +* Fix for toggling nginx service ports + +## [8.3.1] - Dec 12, 2019 +* Add support for toggling nginx service ports + +## [8.3.0] - Dec 1, 2019 +* Updated Artifactory version to 6.16.0 + +## [8.2.6] - Nov 28, 2019 +* Add support for using existing PriorityClass + +## [8.2.5] - Nov 27, 2019 +* Add support for PriorityClass + +## [8.2.4] - Nov 21, 2019 +* Add an option to use a file system cache-fs with the file-system binarystore template + +## [8.2.3] - Nov 20, 2019 +* Update Artifactory Readme + +## [8.2.2] - Nov 20, 2019 +* Update Artifactory logo + +## [8.2.1] - Nov 18, 2019 +* Add the option to provide service account annotations (in order to support features such as https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html) + +## [8.2.0] - Nov 18, 2019 +* Updated Artifactory version to 6.15.0 + +## [8.1.11] - Nov 17, 2019 +* Do not provide a default master key. 
Allow it to be auto-generated by Artifactory on first startup + +## [8.1.10] - Nov 17, 2019 +* Fix creation of double slash in nginx artifactory configuration + +## [8.1.9] - Nov 14, 2019 +* Set explicit `postgresql.postgresqlPassword=""` to avoid helm v3 error + +## [8.1.8] - Nov 12, 2019 +* Updated Artifactory version to 6.14.1 + +## [8.1.7] - Nov 9, 2019 +* Additional documentation for masterKey + +## [8.1.6] - Nov 10, 2019 +* Update PostgreSQL chart version to 7.0.1 +* Use formal PostgreSQL configuration format + +## [8.1.5] - Nov 8, 2019 +* Add support for `artifactory.service.loadBalancerSourceRanges` for whitelisting when setting `artifactory.service.type=LoadBalancer` + +## [8.1.4] - Nov 6, 2019 +* Add support for any type of environment variable by using `extraEnvironmentVariables` as-is + +## [8.1.3] - Nov 6, 2019 +* Add nodeselector support for Postgresql + +## [8.1.2] - Nov 5, 2019 +* Add support for the aws-s3-v3 filestore, which adds support for pod IAM roles + +## [8.1.1] - Nov 4, 2019 +* When using `copyOnEveryStartup`, make sure that the target base directories are created before copying the files + +## [8.1.0] - Nov 3, 2019 +* Updated Artifactory version to 6.14.0 + +## [8.0.1] - Nov 3, 2019 +* Make sure the artifactory pod exits when one of the pre-start stages fails + +## [8.0.0] - Oct 27, 2019 +**IMPORTANT - BREAKING CHANGES!**
+**DOWNTIME MIGHT BE REQUIRED FOR AN UPGRADE!** +* If this is a new deployment or you already use an external database (`postgresql.enabled=false`), these changes **do not affect you**! +* If this is an upgrade and you are using the default PostgreSQL (`postgresql.enabled=true`), you must use the upgrade instructions in [UPGRADE_NOTES.md](UPGRADE_NOTES.md)! +* PostgreSQL subchart was upgraded to version `6.5.x`. This version is **not backward compatible** with the old version (`0.9.5`)! +* Note the following **PostgreSQL** Helm chart changes + * The chart configuration has changed! See [values.yaml](values.yaml) for the new keys used + * **PostgreSQL** is deployed as a StatefulSet + * See [PostgreSQL helm chart](https://hub.helm.sh/charts/stable/postgresql) for all available configurations + +## [7.18.3] - Oct 24, 2019 +* Change the preStartCommand to support templating + +## [7.18.2] - Oct 21, 2019 +* Add support for setting `artifactory.labels` +* Add support for setting `nginx.labels` + +## [7.18.1] - Oct 10, 2019 +* Updated Artifactory version to 6.13.1 + +## [7.18.0] - Oct 7, 2019 +* Updated Artifactory version to 6.13.0 + +## [7.17.5] - Sep 24, 2019 +* Option to skip wait-for-db init container with '--set waitForDatabase=false' + +## [7.17.4] - Sep 11, 2019 +* Updated Artifactory version to 6.12.2 + +## [7.17.3] - Sep 9, 2019 +* Updated Artifactory version to 6.12.1 + +## [7.17.2] - Aug 22, 2019 +* Fix the nginx server_name directive used with ingress.hosts + +## [7.17.1] - Aug 21, 2019 +* Enable the Artifactory container's liveness and readiness probes + +## [7.17.0] - Aug 21, 2019 +* Updated Artifactory version to 6.12.0 + +## [7.16.11] - Aug 14, 2019 +* Updated Artifactory version to 6.11.6 + +## [7.16.10] - Aug 11, 2019 +* Fix Ingress routing and add an example + +## [7.16.9] - Aug 5, 2019 +* Do not mount `access/etc/bootstrap.creds` unless user specifies a custom password or secret (Access already generates a random password if one is not provided) +* If 
custom `bootstrap.creds` is provided (using keys or custom secret), prepare it with an init container so the temp file does not persist + +## [7.16.8] - Aug 4, 2019 +* Improve binarystore config + 1. Convert to a secret + 2. Move config to values.yaml + 3. Support an external secret + +## [7.16.7] - Jul 29, 2019 +* Don't create the nginx configmaps when nginx.enabled is false + +## [7.16.6] - Jul 24, 2019 +* Simplify nginx setup and shorten initial wait for probes + +## [7.16.5] - Jul 22, 2019 +* Change Ingress API to be compatible with recent kubernetes versions + +## [7.16.4] - Jul 22, 2019 +* Updated Artifactory version to 6.11.3 + +## [7.16.3] - Jul 11, 2019 +* Add ingress.hosts to the Nginx server_name directive when ingress is enabled to help with Docker repository sub domain configuration + +## [7.16.2] - Jul 3, 2019 +* Fix values key in reverse proxy example + +## [7.16.1] - Jul 1, 2019 +* Updated Artifactory version to 6.11.1 + +## [7.16.0] - Jun 27, 2019 +* Update Artifactory version to 6.11 and add restart to Artifactory when bootstrap.creds file has been modified + +## [7.15.8] - Jun 27, 2019 +* Add the option for changing nginx config using values.yaml and remove outdated reverse proxy documentation + +## [7.15.6] - Jun 24, 2019 +* Update chart maintainers + +## [7.15.5] - Jun 24, 2019 +* Change Nginx to point to the artifactory externalPort + +## [7.15.4] - Jun 23, 2019 +* Add the option to provide an IP for the access-admin endpoints + +## [7.15.3] - Jun 23, 2019 +* Add values files for small, medium and large installations + +## [7.15.2] - Jun 20, 2019 +* Add missing terminationGracePeriodSeconds to values.yaml + +## [7.15.1] - Jun 19, 2019 +* Updated Artifactory version to 6.10.4 + +## [7.15.0] - Jun 17, 2019 +* Use configmaps for nginx configuration and remove nginx postStart command + +## [7.14.8] - Jun 18, 2019 +* Add the option to provide additional ingress rules + +## [7.14.7] - Jun 14, 2019 +* Updated readme with improved external database 
setup example + +## [7.14.6] - Jun 11, 2019 +* Updated Artifactory version to 6.10.3 +* Updated installer-info template + +## [7.14.5] - Jun 6, 2019 +* Updated Google Cloud Storage API URL and https settings + +## [7.14.4] - Jun 5, 2019 +* Delete the db.properties file on Artifactory startup + +## [7.14.3] - Jun 3, 2019 +* Updated Artifactory version to 6.10.2 + +## [7.14.2] - May 21, 2019 +* Updated Artifactory version to 6.10.1 + +## [7.14.1] - May 19, 2019 +* Fix missing logger image tag + +## [7.14.0] - May 7, 2019 +* Updated Artifactory version to 6.10.0 + +## [7.13.21] - May 5, 2019 +* Add support for setting `artifactory.async.corePoolSize` + +## [7.13.20] - May 2, 2019 +* Remove unused property `artifactory.releasebundle.feature.enabled` + +## [7.13.19] - May 1, 2019 +* Fix indentation issue with the replicator system property + +## [7.13.18] - Apr 30, 2019 +* Add support for JMX monitoring + +## [7.13.17] - Apr 25, 2019 +* Added support for `cacheProviderDir` + +## [7.13.16] - Apr 18, 2019 +* Changing API StatefulSet version to `v1` and permission fix for custom `artifactory.conf` for Nginx + +## [7.13.15] - Apr 16, 2019 +* Updated documentation for Reverse Proxy Configuration + +## [7.13.14] - Apr 15, 2019 +* Added support for `customVolumeMounts` + +## [7.13.13] - Apr 12, 2019 +* Added support for `bucketExists` flag for googleStorage + +## [7.13.12] - Apr 11, 2019 +* Replace `curl` examples with `wget` due to the new base image + +## [7.13.11] - Apr 07, 2019 +* Add support for providing the Artifactory license as a parameter + +## [7.13.10] - Apr 10, 2019 +* Updated Artifactory version to 6.9.1 + +## [7.13.9] - Apr 04, 2019 +* Add support for templated extraEnvironmentVariables + +## [7.13.8] - Apr 07, 2019 +* Change network policy API group + +## [7.13.7] - Apr 04, 2019 +* Bugfix for userPluginSecrets + +## [7.13.6] - Apr 4, 2019 +* Add information about upgrading Artifactory with auto-generated postgres password + +## [7.13.5] - Apr 03, 2019 +* 
Added installer info + +## [7.13.4] - Apr 03, 2019 +* Allow secret names for user plugins to contain template language + +## [7.13.3] - Apr 02, 2019 +* Allow NetworkPolicy configurations (defaults to allow all) + +## [7.13.2] - Apr 01, 2019 +* Add support for user plugin secret + +## [7.13.1] - Mar 27, 2019 +* Add the option to copy a list of files to ARTIFACTORY_HOME on startup + +## [7.13.0] - Mar 26, 2019 +* Updated Artifactory version to 6.9.0 + +## [7.12.18] - Mar 25, 2019 +* Add CI tests for persistence, ingress support and nginx + +## [7.12.17] - Mar 22, 2019 +* Add the option to change the default access-admin password + +## [7.12.16] - Mar 22, 2019 +* Added support for `.Probe.path` to customise the paths used for health probes + +## [7.12.15] - Mar 21, 2019 +* Added support for `artifactory.customSidecarContainers` to create custom sidecar containers +* Added support for `artifactory.customVolumes` to create custom volumes + +## [7.12.14] - Mar 21, 2019 +* Make ingress path configurable + +## [7.12.13] - Mar 19, 2019 +* Move the copy of bootstrap config from postStart to preStart + +## [7.12.12] - Mar 19, 2019 +* Fix existingClaim example + +## [7.12.11] - Mar 18, 2019 +* Add information about nginx persistence + +## [7.12.10] - Mar 15, 2019 +* Wait for nginx configuration file before using it + +## [7.12.9] - Mar 15, 2019 +* Revert securityContext changes since they were causing issues + +## [7.12.8] - Mar 15, 2019 +* Fix issue #247 (init container failing to run) + +## [7.12.7] - Mar 14, 2019 +* Updated Artifactory version to 6.8.7 +* Add support for Artifactory-CE for C++ + +## [7.12.6] - Mar 13, 2019 +* Move securityContext to container level + +## [7.12.5] - Mar 11, 2019 +* Updated Artifactory version to 6.8.6 + +## [7.12.4] - Mar 8, 2019 +* Fix existingClaim option + +## [7.12.3] - Mar 5, 2019 +* Updated Artifactory version to 6.8.4 + +## [7.12.2] - Mar 4, 2019 +* Add support for catalina logs sidecars + +## [7.12.1] - Feb 27, 2019 +* Updated 
Artifactory version to 6.8.3 + +## [7.12.0] - Feb 25, 2019 +* Add nginx support for tail sidecars + +## [7.11.1] - Feb 20, 2019 +* Added support for enterprise storage + +## [7.10.2] - Feb 19, 2019 +* Updated Artifactory version to 6.8.2 + +## [7.10.1] - Feb 17, 2019 +* Updated Artifactory version to 6.8.1 +* Add example of `SERVER_XML_EXTRA_CONNECTOR` usage + +## [7.10.0] - Feb 15, 2019 +* Updated Artifactory version to 6.8.0 + +## [7.9.6] - Feb 13, 2019 +* Updated Artifactory version to 6.7.3 + +## [7.9.5] - Feb 12, 2019 +* Add support for tail sidecars to view logs from k8s api + +## [7.9.4] - Feb 6, 2019 +* Fix support for customizing statefulset `terminationGracePeriodSeconds` + +## [7.9.3] - Feb 5, 2019 +* Add instructions on how to deploy Artifactory with embedded Derby database + +## [7.9.2] - Feb 5, 2019 +* Add support for customizing statefulset `terminationGracePeriodSeconds` + +## [7.9.1] - Feb 3, 2019 +* Updated Artifactory version to 6.7.2 + +## [7.9.0] - Jan 23, 2019 +* Updated Artifactory version to 6.7.0 + +## [7.8.9] - Jan 22, 2019 +* Added support for `artifactory.customInitContainers` to create custom init containers + +## [7.8.8] - Jan 17, 2019 +* Added support for values ingress.labels + +## [7.8.7] - Jan 16, 2019 +* Mount replicator.yaml (config) directly to /replicator_extra_conf + +## [7.8.6] - Jan 13, 2019 +* Fix documentation about nginx group id + +## [7.8.5] - Jan 13, 2019 +* Updated Artifactory version to 6.6.5 + +## [7.8.4] - Jan 8, 2019 +* Make artifactory.replicator.publicUrl required when the replicator is enabled + +## [7.8.3] - Jan 1, 2019 +* Updated Artifactory version to 6.6.3 +* Add support for `artifactory.extraEnvironmentVariables` to pass more environment variables to Artifactory + +## [7.8.2] - Dec 28, 2018 +* Fix location `replicator.yaml` is copied to + +## [7.8.1] - Dec 27, 2018 +* Updated Artifactory version to 6.6.1 + +## [7.8.0] - Dec 20, 2018 +* Updated Artifactory version to 6.6.0 + +## [7.7.13] - Dec 17, 2018 +* 
Updated Artifactory version to 6.5.13 + +## [7.7.12] - Dec 12, 2018 +* Fix documentation about Artifactory license setup using secret + +## [7.7.11] - Dec 10, 2018 +* Fix issue when using existing claim + +## [7.7.10] - Dec 5, 2018 +* Remove Distribution certificates creation. + +## [7.7.9] - Nov 30, 2018 +* Updated Artifactory version to 6.5.9 + +## [7.7.8] - Nov 29, 2018 +* Updated postgresql version to 9.6.11 + +## [7.7.7] - Nov 27, 2018 +* Updated Artifactory version to 6.5.8 + +## [7.7.6] - Nov 19, 2018 +* Added support for configMap to use custom Reverse Proxy Configuration with Nginx + +## [7.7.5] - Nov 14, 2018 +* Fix location of `nodeSelector`, `affinity` and `tolerations` + +## [7.7.4] - Nov 14, 2018 +* Updated Artifactory version to 6.5.3 + +## [7.7.3] - Nov 12, 2018 +* Support artifactory.preStartCommand for running command before entrypoint starts + +## [7.7.2] - Nov 7, 2018 +* Support database.url parameter (DB_URL) + +## [7.7.1] - Oct 29, 2018 +* Change probes port to 8040 (so they will not be blocked when all tomcat threads on 8081 are exhausted) + +## [7.7.0] - Oct 28, 2018 +* Update postgresql chart to version 0.9.5 to be able to use `postgresConfig` options + +## [7.6.8] - Oct 23, 2018 +* Fix providing external secret for database credentials + +## [7.6.7] - Oct 23, 2018 +* Allow user to configure externalTrafficPolicy for LoadBalancer + +## [7.6.6] - Oct 22, 2018 +* Updated ingress annotation support (with examples) to support docker registry v2 + +## [7.6.5] - Oct 21, 2018 +* Updated Artifactory version to 6.5.2 + +## [7.6.4] - Oct 19, 2018 +* Allow providing pre-existing secret containing master key +* Allow arbitrary annotations on primary and member node pods +* Enforce size limits when using local storage with `emptyDir` +* Allow providing pre-existing secrets containing external database credentials + +## [7.6.3] - Oct 18, 2018 +* Updated Artifactory version to 6.5.1 + +## [7.6.2] - Oct 17, 2018 +* Add Apache 2.0 license + +## [7.6.1] - 
Oct 11, 2018 +* Supports master-key in the secrets and stateful-set +* Allows ingress default `backend` to be enabled or disabled (defaults to enabled) + +## [7.6.0] - Oct 11, 2018 +* Updated Artifactory version to 6.5.0 + +## [7.5.4] - Oct 9, 2018 +* Quote ingress hosts to support wildcard names + +## [7.5.3] - Oct 4, 2018 +* Add PostgreSQL resources template + +## [7.5.2] - Oct 2, 2018 +* Add `helm repo add jfrog https://charts.jfrog.io` to README + +## [7.5.1] - Oct 2, 2018 +* Set Artifactory to 6.4.1 + +## [7.5.0] - Sep 27, 2018 +* Set Artifactory to 6.4.0 + +## [7.4.3] - Sep 26, 2018 +* Add ci/test-values.yaml + +## [7.4.2] - Sep 2, 2018 +* Updated Artifactory version to 6.3.2 +* Removed unused PVC + +## [7.4.0] - Aug 22, 2018 +* Added support to run as non root +* Updated Artifactory version to 6.2.0 + +## [7.3.0] - Aug 22, 2018 +* Enabled RBAC Support +* Added support for PostStartCommand (To download Database JDBC connector) +* Increased postgresql max_connections +* Added support for `nginx.conf` ConfigMap +* Updated Artifactory version to 6.1.0 diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/Chart.lock b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/Chart.lock new file mode 100644 index 0000000000..8064c323b5 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: postgresql + repository: https://charts.jfrog.io/ + version: 10.3.18 +digest: sha256:404ce007353baaf92a6c5f24b249d5b336c232e5fd2c29f8a0e4d0095a09fd53 +generated: "2022-03-08T08:53:16.293311+05:30" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/Chart.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/Chart.yaml new file mode 100644 index 0000000000..58bbbab3fe --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/Chart.yaml @@ -0,0 +1,24 @@ +apiVersion: v2 +appVersion: 7.90.14 +dependencies: +- condition: postgresql.enabled + 
name: postgresql + repository: https://charts.jfrog.io/ + version: 10.3.18 +description: Universal Repository Manager supporting all major packaging formats, + build tools and CI servers. +home: https://www.jfrog.com/artifactory/ +icon: https://raw.githubusercontent.com/jfrog/charts/master/stable/artifactory/logo/artifactory-logo.png +keywords: +- artifactory +- jfrog +- devops +kubeVersion: '>= 1.19.0-0' +maintainers: +- email: installers@jfrog.com + name: Chart Maintainers at JFrog +name: artifactory +sources: +- https://github.com/jfrog/charts +type: application +version: 107.90.14 diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/LICENSE b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/LICENSE new file mode 100644 index 0000000000..8dada3edaf --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/LICENSE @@ -0,0 +1,201 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. 
+ + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. 
For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. 
If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. 
You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. 
Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. 
+ + Copyright {yyyy} {name of copyright owner} + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/README.md b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/README.md new file mode 100644 index 0000000000..da3304ee58 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/README.md @@ -0,0 +1,59 @@ +# JFrog Artifactory Helm Chart + +**IMPORTANT!** Our Helm Chart docs have moved to our main documentation site. Below you will find the basic instructions for installing, uninstalling, and deleting Artifactory. For all other information, refer to [Installing Artifactory](https://www.jfrog.com/confluence/display/JFROG/Installing+Artifactory#InstallingArtifactory-HelmInstallation). + +## Prerequisites +* Kubernetes 1.19+ +* Artifactory Pro trial license [get one from here](https://www.jfrog.com/artifactory/free-trial/) + +## Chart Details +This chart will do the following: + +* Deploy Artifactory-Pro/Artifactory-Edge (or OSS/CE if custom image is set) +* Deploy a PostgreSQL database using the stable/postgresql chart (can be changed) **NOTE:** For production grade installations it is recommended to use an external PostgreSQL. 
+* Deploy an optional Nginx server +* Optionally expose Artifactory with Ingress [Ingress documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/) + +## Installing the Chart + +### Add JFrog Helm repository + +Before installing JFrog helm charts, you need to add the [JFrog helm repository](https://charts.jfrog.io) to your helm client: + +```bash +helm repo add jfrog https://charts.jfrog.io +helm repo update +``` + +### Install Chart +To install the chart with the release name `artifactory`: +```bash +helm upgrade --install artifactory jfrog/artifactory --namespace artifactory --create-namespace +``` + +### Apply Sizing configurations to the Chart +To apply the chart with recommended sizing configurations: +For small configurations: +```bash +helm upgrade --install artifactory jfrog/artifactory -f sizing/artifactory-small-extra-config.yaml -f sizing/artifactory-small.yaml --namespace artifactory --create-namespace +``` + +## Uninstalling Artifactory + +Uninstall is supported only on Helm v3 and above. + +Uninstall Artifactory using the following command. + +```bash +helm uninstall artifactory && sleep 90 && kubectl delete pvc -l app=artifactory +``` + +## Deleting Artifactory + +**IMPORTANT:** Deleting Artifactory will also delete your data volumes and you will lose all of your data. You must back up all this information before deletion. You do not need to uninstall Artifactory before deleting it. + +To delete Artifactory, use the following command. + +```bash +helm delete artifactory --namespace artifactory +``` diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/.helmignore b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/.helmignore new file mode 100644 index 0000000000..f0c1319444 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. 
+# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/Chart.lock b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/Chart.lock new file mode 100644 index 0000000000..3687f52df5 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common + repository: https://charts.bitnami.com/bitnami + version: 1.4.2 +digest: sha256:dce0349883107e3ff103f4f17d3af4ad1ea3c7993551b1c28865867d3e53d37c +generated: "2021-03-30T09:13:28.360322819Z" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/Chart.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/Chart.yaml new file mode 100644 index 0000000000..4b197b2071 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/Chart.yaml @@ -0,0 +1,29 @@ +annotations: + category: Database +apiVersion: v2 +appVersion: 11.11.0 +dependencies: +- name: common + repository: https://charts.bitnami.com/bitnami + version: 1.x.x +description: Chart for PostgreSQL, an object-relational database management system + (ORDBMS) with an emphasis on extensibility and on standards-compliance. 
+home: https://github.com/bitnami/charts/tree/master/bitnami/postgresql +icon: https://bitnami.com/assets/stacks/postgresql/img/postgresql-stack-220x234.png +keywords: +- postgresql +- postgres +- database +- sql +- replication +- cluster +maintainers: +- email: containers@bitnami.com + name: Bitnami +- email: cedric@desaintmartin.fr + name: desaintmartin +name: postgresql +sources: +- https://github.com/bitnami/bitnami-docker-postgresql +- https://www.postgresql.org/ +version: 10.3.18 diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/README.md b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/README.md new file mode 100644 index 0000000000..63d3605bb8 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/README.md @@ -0,0 +1,770 @@ +# PostgreSQL + +[PostgreSQL](https://www.postgresql.org/) is an object-relational database management system (ORDBMS) with an emphasis on extensibility and on standards-compliance. + +For HA, please see [this repo](https://github.com/bitnami/charts/tree/master/bitnami/postgresql-ha) + +## TL;DR + +```console +$ helm repo add bitnami https://charts.bitnami.com/bitnami +$ helm install my-release bitnami/postgresql +``` + +## Introduction + +This chart bootstraps a [PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager. + +Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This chart has been tested to work with NGINX Ingress, cert-manager, fluentd and Prometheus on top of the [BKPR](https://kubeprod.io/). 
+ +## Prerequisites + +- Kubernetes 1.12+ +- Helm 3.1.0 +- PV provisioner support in the underlying infrastructure + +## Installing the Chart +To install the chart with the release name `my-release`: + +```console +$ helm install my-release bitnami/postgresql +``` + +The command deploys PostgreSQL on the Kubernetes cluster in the default configuration. The [Parameters](#parameters) section lists the parameters that can be configured during installation. + +> **Tip**: List all releases using `helm list` + +## Uninstalling the Chart + +To uninstall/delete the `my-release` deployment: + +```console +$ helm delete my-release +``` + +The command removes all the Kubernetes components but the PVCs associated with the chart and deletes the release. + +To delete the PVCs associated with `my-release`: + +```console +$ kubectl delete pvc -l release=my-release +``` + +> **Note**: Deleting the PVCs will delete PostgreSQL data as well. Please be cautious before doing it. + +## Parameters + +The following table lists the configurable parameters of the PostgreSQL chart and their default values. 
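As an illustration (not part of the chart itself), a handful of these parameters can be collected into a custom values file; the keys below come from the parameter table, while every value — including the secret name `my-postgresql-creds` — is a placeholder:

```yaml
# custom-values.yaml -- illustrative sketch only; adjust values for your cluster.
postgresqlUsername: myuser            # non-"postgres" name creates a non-admin user
postgresqlDatabase: mydatabase
existingSecret: my-postgresql-creds   # hypothetical secret; must contain the
                                      # postgresql-password key (see table below)
persistence:
  size: 20Gi
replication:
  enabled: true
  readReplicas: 2
```

It would then be applied with `helm install my-release -f custom-values.yaml bitnami/postgresql`; individual keys can also be set inline, e.g. `--set persistence.size=20Gi`.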
+ +| Parameter | Description | Default | +|-----------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------| +| `global.imageRegistry` | Global Docker Image registry | `nil` | +| `global.postgresql.postgresqlDatabase` | PostgreSQL database (overrides `postgresqlDatabase`) | `nil` | +| `global.postgresql.postgresqlUsername` | PostgreSQL username (overrides `postgresqlUsername`) | `nil` | +| `global.postgresql.existingSecret` | Name of existing secret to use for PostgreSQL passwords (overrides `existingSecret`) | `nil` | +| `global.postgresql.postgresqlPassword` | PostgreSQL admin password (overrides `postgresqlPassword`) | `nil` | +| `global.postgresql.servicePort` | PostgreSQL port (overrides `service.port`) | `nil` | +| `global.postgresql.replicationPassword` | Replication user password (overrides `replication.password`) | `nil` | +| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` (does not add image pull secrets to deployed pods) | +| `global.storageClass` | Global storage class for dynamic provisioning | `nil` | +| `image.registry` | PostgreSQL Image registry | `docker.io` | +| `image.repository` | PostgreSQL Image name | `bitnami/postgresql` | +| `image.tag` | PostgreSQL Image tag | `{TAG_NAME}` | +| `image.pullPolicy` | PostgreSQL Image pull policy | `IfNotPresent` | +| `image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) | +| `image.debug` | Specify if debug values should 
be set | `false` | +| `nameOverride` | String to partially override common.names.fullname template with a string (will prepend the release name) | `nil` | +| `fullnameOverride` | String to fully override common.names.fullname template with a string | `nil` | +| `volumePermissions.enabled` | Enable init container that changes volume permissions in the data directory (for cases where the default k8s `runAsUser` and `fsUser` values do not work) | `false` | +| `volumePermissions.image.registry` | Init container volume-permissions image registry | `docker.io` | +| `volumePermissions.image.repository` | Init container volume-permissions image name | `bitnami/bitnami-shell` | +| `volumePermissions.image.tag` | Init container volume-permissions image tag | `"10"` | +| `volumePermissions.image.pullPolicy` | Init container volume-permissions image pull policy | `Always` | +| `volumePermissions.securityContext.*` | Other container security context to be included as-is in the container spec | `{}` | +| `volumePermissions.securityContext.runAsUser` | User ID for the init container (when facing issues in OpenShift or uid unknown, try value "auto") | `0` | +| `usePasswordFile` | Have the secrets mounted as a file instead of env vars | `false` | +| `ldap.enabled` | Enable LDAP support | `false` | +| `ldap.existingSecret` | Name of existing secret to use for LDAP passwords | `nil` | +| `ldap.url` | LDAP URL beginning in the form `ldap[s]://host[:port]/basedn[?[attribute][?[scope][?[filter]]]]` | `nil` | +| `ldap.server` | IP address or name of the LDAP server. | `nil` | +| `ldap.port` | Port number on the LDAP server to connect to | `nil` | +| `ldap.scheme` | Set to `ldaps` to use LDAPS. 
| `nil` | +| `ldap.tls` | Set to `1` to use TLS encryption | `nil` | +| `ldap.prefix` | String to prepend to the user name when forming the DN to bind | `nil` | +| `ldap.suffix` | String to append to the user name when forming the DN to bind | `nil` | +| `ldap.search_attr` | Attribute to match against the user name in the search | `nil` | +| `ldap.search_filter` | The search filter to use when doing search+bind authentication | `nil` | +| `ldap.baseDN` | Root DN to begin the search for the user in | `nil` | +| `ldap.bindDN` | DN of user to bind to LDAP | `nil` | +| `ldap.bind_password` | Password for the user to bind to LDAP | `nil` | +| `replication.enabled` | Enable replication | `false` | +| `replication.user` | Replication user | `repl_user` | +| `replication.password` | Replication user password | `repl_password` | +| `replication.readReplicas` | Number of read replicas | `1` | +| `replication.synchronousCommit` | Set synchronous commit mode. Allowed values: `on`, `remote_apply`, `remote_write`, `local` and `off` | `off` | +| `replication.numSynchronousReplicas` | Number of replicas that will have synchronous replication. Note: Cannot be greater than `replication.readReplicas`. | `0` | +| `replication.applicationName` | Cluster application name. Useful for advanced replication settings | `my_application` | +| `existingSecret` | Name of existing secret to use for PostgreSQL passwords. The secret has to contain the keys `postgresql-password` which is the password for `postgresqlUsername` when it is different from `postgres`, `postgresql-postgres-password` which will override `postgresqlPassword`, `postgresql-replication-password` which will override `replication.password` and `postgresql-ldap-password` which will be used to authenticate on LDAP. The value is evaluated as a template. | `nil` | +| `postgresqlPostgresPassword` | PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`, in which case `postgres` is the admin username). 
| _random 10 character alphanumeric string_ | +| `postgresqlUsername` | PostgreSQL user (creates a non-admin user when `postgresqlUsername` is not `postgres`) | `postgres` | +| `postgresqlPassword` | PostgreSQL user password | _random 10 character alphanumeric string_ | +| `postgresqlDatabase` | PostgreSQL database | `nil` | +| `postgresqlDataDir` | PostgreSQL data dir folder | `/bitnami/postgresql` (same value as persistence.mountPath) | +| `extraEnv` | Any extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `[]` | +| `extraEnvVarsCM` | Name of a Config Map containing extra environment variables you would like to pass on to the pod. The value is evaluated as a template. | `nil` | +| `postgresqlInitdbArgs` | PostgreSQL initdb extra arguments | `nil` | +| `postgresqlInitdbWalDir` | PostgreSQL location for transaction log | `nil` | +| `postgresqlConfiguration` | Runtime Config Parameters | `nil` | +| `postgresqlExtendedConf` | Extended Runtime Config Parameters (appended to main or default configuration) | `nil` | +| `pgHbaConfiguration` | Content of pg_hba.conf | `nil (do not create pg_hba.conf)` | +| `postgresqlSharedPreloadLibraries` | Shared preload libraries (comma-separated list) | `pgaudit` | +| `postgresqlMaxConnections` | Maximum total connections | `nil` | +| `postgresqlPostgresConnectionLimit` | Maximum total connections for the postgres user | `nil` | +| `postgresqlDbUserConnectionLimit` | Maximum total connections for the non-admin user | `nil` | +| `postgresqlTcpKeepalivesInterval` | TCP keepalives interval | `nil` | +| `postgresqlTcpKeepalivesIdle` | TCP keepalives idle | `nil` | +| `postgresqlTcpKeepalivesCount` | TCP keepalives count | `nil` | +| `postgresqlStatementTimeout` | Statement timeout | `nil` | +| `postgresqlPghbaRemoveFilters` | Comma-separated list of patterns to remove from the pg_hba.conf file | `nil` | +| `customStartupProbe` | Override default startup probe | `nil` | +| 
`customLivenessProbe` | Override default liveness probe | `nil` | +| `customReadinessProbe` | Override default readiness probe | `nil` | +| `audit.logHostname` | Add client hostnames to the log file | `false` | +| `audit.logConnections` | Add client log-in operations to the log file | `false` | +| `audit.logDisconnections` | Add client log-outs operations to the log file | `false` | +| `audit.pgAuditLog` | Add operations to log using the pgAudit extension | `nil` | +| `audit.clientMinMessages` | Message log level to share with the user | `nil` | +| `audit.logLinePrefix` | Template string for the log line prefix | `nil` | +| `audit.logTimezone` | Timezone for the log timestamps | `nil` | +| `configurationConfigMap` | ConfigMap with the PostgreSQL configuration files (Note: Overrides `postgresqlConfiguration` and `pgHbaConfiguration`). The value is evaluated as a template. | `nil` | +| `extendedConfConfigMap` | ConfigMap with the extended PostgreSQL configuration files. The value is evaluated as a template. | `nil` | +| `initdbScripts` | Dictionary of initdb scripts | `nil` | +| `initdbUser` | PostgreSQL user to execute the .sql and sql.gz scripts | `nil` | +| `initdbPassword` | Password for the user specified in `initdbUser` | `nil` | +| `initdbScriptsConfigMap` | ConfigMap with the initdb scripts (Note: Overrides `initdbScripts`). The value is evaluated as a template. | `nil` | +| `initdbScriptsSecret` | Secret with initdb scripts that contain sensitive information (Note: can be used with `initdbScriptsConfigMap` or `initdbScripts`). The value is evaluated as a template. 
| `nil` | +| `service.type` | Kubernetes Service type | `ClusterIP` | +| `service.port` | PostgreSQL port | `5432` | +| `service.nodePort` | Kubernetes Service nodePort | `nil` | +| `service.annotations` | Annotations for PostgreSQL service | `{}` (evaluated as a template) | +| `service.loadBalancerIP` | loadBalancerIP if service type is `LoadBalancer` | `nil` | +| `service.loadBalancerSourceRanges` | Address that are allowed when svc is LoadBalancer | `[]` (evaluated as a template) | +| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` | +| `shmVolume.enabled` | Enable emptyDir volume for /dev/shm for primary and read replica(s) Pod(s) | `true` | +| `shmVolume.chmod.enabled` | Run at init chmod 777 of the /dev/shm (ignored if `volumePermissions.enabled` is `false`) | `true` | +| `persistence.enabled` | Enable persistence using PVC | `true` | +| `persistence.existingClaim` | Provide an existing `PersistentVolumeClaim`, the value is evaluated as a template. | `nil` | +| `persistence.mountPath` | Path to mount the volume at | `/bitnami/postgresql` | +| `persistence.subPath` | Subdirectory of the volume to mount at | `""` | +| `persistence.storageClass` | PVC Storage Class for PostgreSQL volume | `nil` | +| `persistence.accessModes` | PVC Access Mode for PostgreSQL volume | `[ReadWriteOnce]` | +| `persistence.size` | PVC Storage Request for PostgreSQL volume | `8Gi` | +| `persistence.annotations` | Annotations for the PVC | `{}` | +| `persistence.selector` | Selector to match an existing Persistent Volume (this value is evaluated as a template) | `{}` | +| `commonAnnotations` | Annotations to be added to all deployed resources (rendered as a template) | `{}` | +| `primary.podAffinityPreset` | PostgreSQL primary pod affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `primary.podAntiAffinityPreset` | PostgreSQL primary pod anti-affinity preset. Ignored if `primary.affinity` is set. 
Allowed values: `soft` or `hard` | `soft` | +| `primary.nodeAffinityPreset.type` | PostgreSQL primary node affinity preset type. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `primary.nodeAffinityPreset.key` | PostgreSQL primary node label key to match. Ignored if `primary.affinity` is set. | `""` | +| `primary.nodeAffinityPreset.values` | PostgreSQL primary node label values to match. Ignored if `primary.affinity` is set. | `[]` | +| `primary.affinity` | Affinity for PostgreSQL primary pods assignment | `{}` (evaluated as a template) | +| `primary.nodeSelector` | Node labels for PostgreSQL primary pods assignment | `{}` (evaluated as a template) | +| `primary.tolerations` | Tolerations for PostgreSQL primary pods assignment | `[]` (evaluated as a template) | +| `primary.annotations` | Map of annotations to add to the statefulset (postgresql primary) | `{}` | +| `primary.labels` | Map of labels to add to the statefulset (postgresql primary) | `{}` | +| `primary.podAnnotations` | Map of annotations to add to the pods (postgresql primary) | `{}` | +| `primary.podLabels` | Map of labels to add to the pods (postgresql primary) | `{}` | +| `primary.priorityClassName` | Priority Class to use for each pod (postgresql primary) | `nil` | +| `primary.extraInitContainers` | Additional init containers to add to the pods (postgresql primary) | `[]` | +| `primary.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql primary) | `[]` | +| `primary.extraVolumes` | Additional volumes to add to the pods (postgresql primary) | `[]` | +| `primary.sidecars` | Add additional containers to the pod | `[]` | +| `primary.service.type` | Allows using a different service type for primary | `nil` | +| `primary.service.nodePort` | Allows using a different nodePort for primary | `nil` | +| `primary.service.clusterIP` | Allows using a different clusterIP for primary | `nil` | +| `primaryAsStandBy.enabled` | Whether to enable current 
cluster's primary as standby server of another cluster or not. | `false` | +| `primaryAsStandBy.primaryHost` | The Host of replication primary in the other cluster. | `nil` | +| `primaryAsStandBy.primaryPort` | The Port of replication primary in the other cluster. | `nil` | +| `readReplicas.podAffinityPreset` | PostgreSQL read only pod affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `readReplicas.podAntiAffinityPreset` | PostgreSQL read only pod anti-affinity preset. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `soft` | +| `readReplicas.nodeAffinityPreset.type` | PostgreSQL read only node affinity preset type. Ignored if `primary.affinity` is set. Allowed values: `soft` or `hard` | `""` | +| `readReplicas.nodeAffinityPreset.key` | PostgreSQL read only node label key to match. Ignored if `primary.affinity` is set. | `""` | +| `readReplicas.nodeAffinityPreset.values` | PostgreSQL read only node label values to match. Ignored if `primary.affinity` is set. | `[]` | +| `readReplicas.affinity` | Affinity for PostgreSQL read only pods assignment | `{}` (evaluated as a template) | +| `readReplicas.nodeSelector` | Node labels for PostgreSQL read only pods assignment | `{}` (evaluated as a template) | +| `readReplicas.anotations` | Map of annotations to add to the statefulsets (postgresql readReplicas) | `{}` | +| `readReplicas.resources` | CPU/Memory resource requests/limits override for readReplicas. Will fall back to `values.resources` if not defined. 
| `{}` | +| `readReplicas.labels` | Map of labels to add to the statefulsets (postgresql readReplicas) | `{}` | +| `readReplicas.podAnnotations` | Map of annotations to add to the pods (postgresql readReplicas) | `{}` | +| `readReplicas.podLabels` | Map of labels to add to the pods (postgresql readReplicas) | `{}` | +| `readReplicas.priorityClassName` | Priority Class to use for each pod (postgresql readReplicas) | `nil` | +| `readReplicas.extraInitContainers` | Additional init containers to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.extraVolumeMounts` | Additional volume mounts to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.extraVolumes` | Additional volumes to add to the pods (postgresql readReplicas) | `[]` | +| `readReplicas.sidecars` | Add additional containers to the pod | `[]` | +| `readReplicas.service.type` | Allows using a different service type for readReplicas | `nil` | +| `readReplicas.service.nodePort` | Allows using a different nodePort for readReplicas | `nil` | +| `readReplicas.service.clusterIP` | Allows using a different clusterIP for readReplicas | `nil` | +| `readReplicas.persistence.enabled` | Whether to enable readReplicas persistence | `true` | +| `terminationGracePeriodSeconds` | Seconds the pod needs to terminate gracefully | `nil` | +| `resources` | CPU/Memory resource requests/limits | Memory: `256Mi`, CPU: `250m` | +| `securityContext.*` | Other pod security context to be included as-is in the pod spec | `{}` | +| `securityContext.enabled` | Enable security context | `true` | +| `securityContext.fsGroup` | Group ID for the pod | `1001` | +| `containerSecurityContext.*` | Other container security context to be included as-is in the container spec | `{}` | +| `containerSecurityContext.enabled` | Enable container security context | `true` | +| `containerSecurityContext.runAsUser` | User ID for the container | `1001` | +| `serviceAccount.enabled` | Enable service account (Note: 
Service Account will only be automatically created if `serviceAccount.name` is not set) | `false` | +| `serviceAccount.name` | Name of existing service account | `nil` | +| `networkPolicy.enabled` | Enable NetworkPolicy | `false` | +| `networkPolicy.allowExternal` | Don't require client label for connections | `true` | +| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which ingress traffic could be allowed | `{}` | +| `startupProbe.enabled` | Enable startupProbe | `false` | +| `startupProbe.initialDelaySeconds` | Delay before startup probe is initiated | 30 | +| `startupProbe.periodSeconds` | How often to perform the probe | 15 | +| `startupProbe.timeoutSeconds` | When the probe times out | 5 | +| `startupProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 10 | +| `startupProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed. | 1 | +| `livenessProbe.enabled` | Enable livenessProbe | `true` | +| `livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 | +| `livenessProbe.periodSeconds` | How often to perform the probe | 10 | +| `livenessProbe.timeoutSeconds` | When the probe times out | 5 | +| `livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. 
| 6 | +| `livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `readinessProbe.enabled` | Enable readinessProbe | `true` | +| `readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 5 | +| `readinessProbe.periodSeconds` | How often to perform the probe | 10 | +| `readinessProbe.timeoutSeconds` | When the probe times out | 5 | +| `readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | +| `readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `tls.enabled` | Enable TLS traffic support | `false` | +| `tls.preferServerCiphers` | Whether to use the server's TLS cipher preferences rather than the client's | `true` | +| `tls.certificatesSecret` | Name of an existing secret that contains the certificates | `nil` | +| `tls.certFilename` | Certificate filename | `""` | +| `tls.certKeyFilename` | Certificate key filename | `""` | +| `tls.certCAFilename` | CA Certificate filename. If provided, PostgreSQL will authenticate TLS/SSL clients by requesting a certificate from them. 
| `nil` | +| `tls.crlFilename` | File containing a Certificate Revocation List | `nil` | +| `metrics.enabled` | Start a prometheus exporter | `false` | +| `metrics.service.type` | Kubernetes Service type | `ClusterIP` | +| `metrics.service.clusterIP` | Static clusterIP or None for headless services | `nil` | +| `metrics.service.annotations` | Additional annotations for metrics exporter pod | `{ prometheus.io/scrape: "true", prometheus.io/port: "9187"}` | +| `metrics.service.loadBalancerIP` | loadBalancerIP if metrics service type is `LoadBalancer` | `nil` | +| `metrics.serviceMonitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` | +| `metrics.serviceMonitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | `{}` | +| `metrics.serviceMonitor.namespace` | Optional namespace in which to create ServiceMonitor | `nil` | +| `metrics.serviceMonitor.interval` | Scrape interval. If not set, the Prometheus default scrape interval is used | `nil` | +| `metrics.serviceMonitor.scrapeTimeout` | Scrape timeout. If not set, the Prometheus default scrape timeout is used | `nil` | +| `metrics.prometheusRule.enabled` | Set this to true to create prometheusRules for Prometheus operator | `false` | +| `metrics.prometheusRule.additionalLabels` | Additional labels that can be used so prometheusRules will be discovered by Prometheus | `{}` | +| `metrics.prometheusRule.namespace` | namespace where prometheusRules resource should be created | the same namespace as postgresql | +| `metrics.prometheusRule.rules` | [rules](https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/) to be created, check values for an example. 
| `[]` | +| `metrics.image.registry` | PostgreSQL Exporter Image registry | `docker.io` | +| `metrics.image.repository` | PostgreSQL Exporter Image name | `bitnami/postgres-exporter` | +| `metrics.image.tag` | PostgreSQL Exporter Image tag | `{TAG_NAME}` | +| `metrics.image.pullPolicy` | PostgreSQL Exporter Image pull policy | `IfNotPresent` | +| `metrics.image.pullSecrets` | Specify Image pull secrets | `nil` (does not add image pull secrets to deployed pods) | +| `metrics.customMetrics` | Additional custom metrics | `nil` | +| `metrics.extraEnvVars` | Extra environment variables to add to exporter | `{}` (evaluated as a template) | +| `metrics.securityContext.*` | Other container security context to be included as-is in the container spec | `{}` | +| `metrics.securityContext.enabled` | Enable security context for metrics | `false` | +| `metrics.securityContext.runAsUser` | User ID for the container for metrics | `1001` | +| `metrics.livenessProbe.initialDelaySeconds` | Delay before liveness probe is initiated | 30 | +| `metrics.livenessProbe.periodSeconds` | How often to perform the probe | 10 | +| `metrics.livenessProbe.timeoutSeconds` | When the probe times out | 5 | +| `metrics.livenessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. | 6 | +| `metrics.livenessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `metrics.readinessProbe.enabled` | Enable readinessProbe | `true` | +| `metrics.readinessProbe.initialDelaySeconds` | Delay before readiness probe is initiated | 5 | +| `metrics.readinessProbe.periodSeconds` | How often to perform the probe | 10 | +| `metrics.readinessProbe.timeoutSeconds` | When the probe times out | 5 | +| `metrics.readinessProbe.failureThreshold` | Minimum consecutive failures for the probe to be considered failed after having succeeded. 
| 6 | +| `metrics.readinessProbe.successThreshold` | Minimum consecutive successes for the probe to be considered successful after having failed | 1 | +| `updateStrategy` | Update strategy policy | `{type: "RollingUpdate"}` | +| `psp.create` | Create Pod Security Policy | `false` | +| `rbac.create` | Create Role and RoleBinding (required for PSP to work) | `false` | +| `extraDeploy` | Array of extra objects to deploy with the release (evaluated as a template). | `nil` | + +Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example, + +```console +$ helm install my-release \ + --set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \ + bitnami/postgresql +``` + +The above command sets the PostgreSQL `postgres` account password to `secretpassword`. Additionally it creates a database named `my-database`. + +> NOTE: Once this chart is deployed, it is not possible to change the application's access credentials, such as usernames or passwords, using Helm. To change these application credentials after deployment, delete any persistent volumes (PVs) used by the chart and re-deploy it, or use the application's built-in administrative tools if available. + +Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example, + +```console +$ helm install my-release -f values.yaml bitnami/postgresql +``` + +> **Tip**: You can use the default [values.yaml](values.yaml) + +## Configuration and installation details + +### [Rolling VS Immutable tags](https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/) + +It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image. 
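For instance, a `values.yaml` fragment pinning the chart to immutable tags might look like the following (the tag values shown are illustrative; pick real tags from the Docker Hub pages linked above):

```yaml
image:
  registry: docker.io
  repository: bitnami/postgresql
  # An immutable tag includes the revision suffix and is never re-pushed
  tag: 11.11.0-debian-10-r31
metrics:
  image:
    repository: bitnami/postgres-exporter
    tag: 0.9.0-debian-10-r43
```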
+ +Bitnami releases a new chart updating its containers whenever a new version of the main container is available, or when significant changes or critical vulnerabilities exist. + +### Customizing primary and read replica services in a replicated configuration + +At the top level, there is a service object which defines the services for both primary and readReplicas. For deeper customization, there are service objects for both the primary and read types individually. This allows you to override the values in the top-level service object so that the primary and read can be of different service types and with different clusterIPs / nodePorts. Also, in the case you want the primary and read to be of type NodePort, you will need to set the nodePorts to different values to prevent a collision. The values that are deeper in the primary.service or readReplicas.service objects will take precedence over the top-level service object. + +### Change PostgreSQL version + +To modify the PostgreSQL version used in this chart you can specify a [valid image tag](https://hub.docker.com/r/bitnami/postgresql/tags/) using the `image.tag` parameter. For example, `image.tag=X.Y.Z`. This approach is also applicable to other images like exporters. + +### postgresql.conf / pg_hba.conf files as configMap + +This Helm chart also supports customizing the whole configuration file. + +Add your custom file to "files/postgresql.conf" in your working directory. This file will be mounted as configMap to the containers and it will be used for configuring the PostgreSQL server. + +Alternatively, you can add additional PostgreSQL configuration parameters using the `postgresqlExtendedConf` parameter as a dict, using camelCase, e.g. {"sharedBuffers": "500MB"}. Alternatively, to replace the entire default configuration use `postgresqlConfiguration`. + +In addition to these options, you can also set an external ConfigMap with all the configuration files. This is done by setting the `configurationConfigMap` parameter. 
Note that this will override the two previous options. + +### Allow settings to be loaded from files other than the default `postgresql.conf` + +If you don't want to provide the whole PostgreSQL configuration file and only specify certain parameters, you can add your extended `.conf` files to "files/conf.d/" in your working directory. +Those files will be mounted as configMap to the containers adding/overwriting the default configuration using the `include_dir` directive that allows settings to be loaded from files other than the default `postgresql.conf`. + +Alternatively, you can also set an external ConfigMap with all the extra configuration files. This is done by setting the `extendedConfConfigMap` parameter. Note that this will override the previous option. + +### Initialize a fresh instance + +The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image allows you to use your custom scripts to initialize a fresh instance. In order to execute the scripts, they must be located inside the chart folder `files/docker-entrypoint-initdb.d` so they can be consumed as a ConfigMap. + +Alternatively, you can specify custom scripts using the `initdbScripts` parameter as dict. + +In addition to these options, you can also set an external ConfigMap with all the initialization scripts. This is done by setting the `initdbScriptsConfigMap` parameter. Note that this will override the two previous options. If your initialization scripts contain sensitive information such as credentials or passwords, you can use the `initdbScriptsSecret` parameter. + +The allowed extensions are `.sh`, `.sql` and `.sql.gz`. + +### Securing traffic using TLS + +TLS support can be enabled in the chart by specifying the `tls.` parameters while creating a release. The following parameters should be configured to properly enable the TLS support in the chart: + +- `tls.enabled`: Enable TLS support. 
Defaults to `false` +- `tls.certificatesSecret`: Name of an existing secret that contains the certificates. No defaults. +- `tls.certFilename`: Certificate filename. No defaults. +- `tls.certKeyFilename`: Certificate key filename. No defaults. + +For example: + +* First, create the secret with the certificate files: + + ```console + kubectl create secret generic certificates-tls-secret --from-file=./cert.crt --from-file=./cert.key --from-file=./ca.crt + ``` + +* Then, use the following parameters: + + ```console + volumePermissions.enabled=true + tls.enabled=true + tls.certificatesSecret="certificates-tls-secret" + tls.certFilename="cert.crt" + tls.certKeyFilename="cert.key" + ``` + + > Note TLS and VolumePermissions: PostgreSQL requires certain permissions on sensitive files (such as certificate keys) to start up. Due to an ongoing [issue](https://github.com/kubernetes/kubernetes/issues/57923) regarding Kubernetes permissions and the use of `containerSecurityContext.runAsUser`, you must enable `volumePermissions` to ensure everything works as expected. + +### Sidecars + +If you need additional containers to run within the same pod as PostgreSQL (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec. + +```yaml +# For the PostgreSQL primary +primary: + sidecars: + - name: your-image-name + image: your-image + imagePullPolicy: Always + ports: + - name: portname + containerPort: 1234 +# For the PostgreSQL replicas +readReplicas: + sidecars: + - name: your-image-name + image: your-image + imagePullPolicy: Always + ports: + - name: portname + containerPort: 1234 +``` + +### Metrics + +The chart can optionally start a metrics exporter for [prometheus](https://prometheus.io). 
The metrics endpoint (port 9187) is not exposed, and it is expected that the metrics are collected from inside the k8s cluster using something similar to what is described in the [example Prometheus scrape configuration](https://github.com/prometheus/prometheus/blob/master/documentation/examples/prometheus-kubernetes.yml). + +The exporter allows creating custom metrics from additional SQL queries. See the Chart's `values.yaml` for an example and consult the [exporters documentation](https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file) for more details. + +### Use of global variables + +In more complex scenarios, we may have the following tree of dependencies + +``` + +--------------+ + | | + +------------+ Chart 1 +-----------+ + | | | | + | --------+------+ | + | | | + | | | + | | | + | | | + v v v ++-------+------+ +--------+------+ +--------+------+ +| | | | | | +| PostgreSQL | | Sub-chart 1 | | Sub-chart 2 | +| | | | | | ++--------------+ +---------------+ +---------------+ +``` + +The three charts shown above are dependencies of the parent chart, Chart 1. However, subcharts 1 and 2 may need to connect to PostgreSQL as well. In order to do so, subcharts 1 and 2 need to know the PostgreSQL credentials, so one option could be to deploy Chart 1 with the following parameters: + +``` +postgresql.postgresqlPassword=testtest +subchart1.postgresql.postgresqlPassword=testtest +subchart2.postgresql.postgresqlPassword=testtest +postgresql.postgresqlDatabase=db1 +subchart1.postgresql.postgresqlDatabase=db1 +subchart2.postgresql.postgresqlDatabase=db1 +``` + +If the number of dependent sub-charts increases, installing the chart with parameters can become increasingly difficult. An alternative would be to set the credentials using global variables as follows: + +``` +global.postgresql.postgresqlPassword=testtest +global.postgresql.postgresqlDatabase=db1 +``` + +This way, the credentials will be available in all of the subcharts. 
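Equivalently, the same global values can be set in a `values.yaml` file for Chart 1 (using the placeholder credentials from the example above):

```yaml
global:
  postgresql:
    postgresqlPassword: testtest
    postgresqlDatabase: db1
```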
+ +## Persistence + +The [Bitnami PostgreSQL](https://github.com/bitnami/bitnami-docker-postgresql) image stores the PostgreSQL data and configurations at the `/bitnami/postgresql` path of the container. + +Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube. +See the [Parameters](#parameters) section to configure the PVC or to disable persistence. + +If the volume already contains data, synchronization to the standby nodes will fail for all commits; see [this code](https://github.com/bitnami/bitnami-docker-postgresql/blob/8725fe1d7d30ebe8d9a16e9175d05f7ad9260c93/9.6/debian-9/rootfs/libpostgresql.sh#L518-L556) for details. If you need that data, convert it to SQL and import it after `helm install` finishes. + +## NetworkPolicy + +To enable network policy for PostgreSQL, install [a networking plugin that implements the Kubernetes NetworkPolicy spec](https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy#before-you-begin), and set `networkPolicy.enabled` to `true`. + +For Kubernetes v1.5 & v1.6, you must also turn on NetworkPolicy by setting the DefaultDeny namespace annotation. Note: this will enforce policy for _all_ pods in the namespace: + +```console +$ kubectl annotate namespace default "net.beta.kubernetes.io/network-policy={\"ingress\":{\"isolation\":\"DefaultDeny\"}}" +``` + +With NetworkPolicy enabled, traffic will be limited to just port 5432. + +For more precise policy, set `networkPolicy.allowExternal=false`. This will only allow pods with the generated client label to connect to PostgreSQL. +This label will be displayed in the output of a successful install. + +## Differences between Bitnami PostgreSQL image and [Docker Official](https://hub.docker.com/_/postgres) image + +- The Docker Official PostgreSQL image does not support replication. If you pass any replication environment variable, it will be ignored. 
The only environment variables supported by the Docker Official image are POSTGRES_USER, POSTGRES_DB, POSTGRES_PASSWORD, POSTGRES_INITDB_ARGS, POSTGRES_INITDB_WALDIR and PGDATA. All the remaining environment variables are specific to the Bitnami PostgreSQL image. +- The Bitnami PostgreSQL image is non-root by default. This requires that you run the pod with `securityContext` and updates the permissions of the volume with an `initContainer`. A key benefit of this configuration is that the pod follows security best practices and is prepared to run on Kubernetes distributions with hard security constraints like OpenShift. +- For OpenShift, one may either define the runAsUser and fsGroup accordingly, or try this more dynamic option: volumePermissions.securityContext.runAsUser="auto",securityContext.enabled=false,containerSecurityContext.enabled=false,shmVolume.chmod.enabled=false + +### Deploy chart using Docker Official PostgreSQL Image + +From chart version 4.0.0, it is possible to use this chart with the Docker Official PostgreSQL image. +Besides specifying the new Docker repository and tag, it is important to modify the PostgreSQL data directory and volume mount point. Basically, the PostgreSQL data dir cannot be the mount point directly; it has to be a subdirectory. + +``` +image.repository=postgres +image.tag=10.6 +postgresqlDataDir=/data/pgdata +persistence.mountPath=/data/ +``` + +### Setting Pod's affinity + +This chart allows you to set your custom affinity using the `XXX.affinity` parameter(s). Find more information about Pod's affinity in the [kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity). + +As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the [bitnami/common](https://github.com/bitnami/charts/tree/master/bitnami/common#affinities) chart. 
To do so, set the `XXX.podAffinityPreset`, `XXX.podAntiAffinityPreset`, or `XXX.nodeAffinityPreset` parameters. + +## Troubleshooting + +Find more information about how to deal with common errors related to Bitnami’s Helm charts in [this troubleshooting guide](https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues). + +## Upgrading + +It's necessary to specify the existing passwords while performing an upgrade to ensure the secrets are not updated with invalid randomly generated passwords. Remember to specify the existing values of the `postgresqlPassword` and `replication.password` parameters when upgrading the chart: + +```bash +$ helm upgrade my-release bitnami/postgresql \ + --set postgresqlPassword=[POSTGRESQL_PASSWORD] \ + --set replication.password=[REPLICATION_PASSWORD] +``` + +> Note: you need to substitute the placeholders _[POSTGRESQL_PASSWORD]_ and _[REPLICATION_PASSWORD]_ with the values obtained from instructions in the installation notes. + +### To 10.0.0 + +[On November 13, 2020, Helm v2 support formally ended](https://github.com/helm/charts#status-of-the-project). This major version is the result of the changes applied to the Helm Chart to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL. + +**What changes were introduced in this major version?** + +- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3); this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field. 
+- Move dependency information from the *requirements.yaml* to the *Chart.yaml* +- After running `helm dependency update`, a *Chart.lock* file is generated containing the same structure used in the previous *requirements.lock* +- The different fields present in the *Chart.yaml* file have been ordered alphabetically in a homogeneous way for all Bitnami Helm Charts. + +**Considerations when upgrading to this version** + +- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore +- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3 + +**Useful links** + +- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/ +- https://helm.sh/docs/topics/v2_v3_migration/ +- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/ + +#### Breaking changes + +- The term `master` has been replaced with `primary` and `slave` with `readReplicas` throughout the chart. Role names have changed from `master` and `slave` to `primary` and `read`. + +To upgrade to `10.0.0`, reuse the PVCs used to hold the PostgreSQL data on your previous release. To do so, follow the instructions below (the following example assumes that the release name is `postgresql`): + +> NOTE: Please create a backup of your database before running any of those actions. 
+ +Obtain the credentials and the names of the PVCs used to hold the PostgreSQL data on your current release: + +```console +$ export POSTGRESQL_PASSWORD=$(kubectl get secret --namespace default postgresql -o jsonpath="{.data.postgresql-password}" | base64 --decode) +$ export POSTGRESQL_PVC=$(kubectl get pvc -l app.kubernetes.io/instance=postgresql,role=master -o jsonpath="{.items[0].metadata.name}") +``` + +Delete the PostgreSQL statefulset. Notice the option `--cascade=false`: + +```console +$ kubectl delete statefulsets.apps postgresql-postgresql --cascade=false +``` + +Now the upgrade works: + +```console +$ helm upgrade postgresql bitnami/postgresql --set postgresqlPassword=$POSTGRESQL_PASSWORD --set persistence.existingClaim=$POSTGRESQL_PVC +``` + +You will have to delete the existing PostgreSQL pod; the new statefulset is going to create a replacement: + +```console +$ kubectl delete pod postgresql-postgresql-0 +``` + +Finally, you should see the lines below in PostgreSQL container logs: + +```console +$ kubectl logs $(kubectl get pods -l app.kubernetes.io/instance=postgresql,app.kubernetes.io/name=postgresql,role=primary -o jsonpath="{.items[0].metadata.name}") +... +postgresql 08:05:12.59 INFO ==> Deploying PostgreSQL with persisted data... +... +``` + +### To 9.0.0 + +In this version the chart was adapted to follow the Helm label best practices; see [PR 3021](https://github.com/bitnami/charts/pull/3021). That means backward compatibility is not guaranteed when upgrading the chart to this major version. + +As a workaround, you can delete the existing statefulset (using the `--cascade=false` flag, pods are not deleted) before upgrading the chart. 
For example, this can be a valid workflow: + +- Deploy an old version (8.X.X) + +```console +$ helm install postgresql bitnami/postgresql --version 8.10.14 +``` + +- Old version is up and running + +```console +$ helm ls +NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION +postgresql default 1 2020-08-04 13:39:54.783480286 +0000 UTC deployed postgresql-8.10.14 11.8.0 + +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +postgresql-postgresql-0 1/1 Running 0 76s +``` + +- The upgrade to the latest one (9.X.X) is going to fail + +```console +$ helm upgrade postgresql bitnami/postgresql +Error: UPGRADE FAILED: cannot patch "postgresql-postgresql" with kind StatefulSet: StatefulSet.apps "postgresql-postgresql" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden +``` + +- Delete the statefulset + +```console +$ kubectl delete statefulsets.apps --cascade=false postgresql-postgresql +statefulset.apps "postgresql-postgresql" deleted +``` + +- Now the upgrade works + +```console +$ helm upgrade postgresql bitnami/postgresql +$ helm ls +NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION +postgresql default 3 2020-08-04 13:42:08.020385884 +0000 UTC deployed postgresql-9.1.2 11.8.0 +``` + +- We can kill the existing pod and the new statefulset is going to create a new one: + +```console +$ kubectl delete pod postgresql-postgresql-0 +pod "postgresql-postgresql-0" deleted + +$ kubectl get pods +NAME READY STATUS RESTARTS AGE +postgresql-postgresql-0 1/1 Running 0 19s +``` + +Please, note that without the `--cascade=false` both objects (statefulset and pod) are going to be removed and both objects will be deployed again with the `helm upgrade` command + +### To 8.0.0 + +Prefixes the port names with their protocols to comply with Istio conventions. + +If you depend on the port names in your setup, make sure to update them to reflect this change. 
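As a sketch of what this convention looks like, a Service port that was previously named after the application is now prefixed with its protocol (the exact names come from the rendered templates; the fragment below is illustrative):

```yaml
# Fragment of a rendered Service spec (illustrative)
ports:
  - name: tcp-postgresql   # formerly "postgresql"
    port: 5432
    targetPort: tcp-postgresql
```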
+ +### To 7.1.0 + +Adds support for LDAP configuration. + +### To 7.0.0 + +Helm performs a lookup for the object based on its group (apps), version (v1), and kind (Deployment), also known as its GroupVersionKind, or GVK. Changing the GVK is considered a compatibility breaker from Kubernetes' point of view, so you cannot "upgrade" those objects to the new GVK in-place. Earlier versions of Helm 3 did not perform the lookup correctly, which has since been fixed to match the spec. + +In https://github.com/helm/charts/pull/17281 the `apiVersion` of the statefulset resources was updated to `apps/v1` in line with the API deprecations, resulting in compatibility breakage. + +This major version bump signifies this change. + +### To 6.5.7 + +In this version, the chart will use PostgreSQL with the Postgis extension included. The version used with PostgreSQL versions 10, 11 and 12 is Postgis 2.5. It has been compiled with the following dependencies: + +- protobuf +- protobuf-c +- json-c +- geos +- proj + +### To 5.0.0 + +In this version, the **chart is using PostgreSQL 11 instead of PostgreSQL 10**. You can find the main differences and notable changes in the following links: [https://www.postgresql.org/about/news/1894/](https://www.postgresql.org/about/news/1894/) and [https://www.postgresql.org/about/featurematrix/](https://www.postgresql.org/about/featurematrix/). + +For major releases of PostgreSQL, the internal data storage format is subject to change, complicating upgrades; you may see errors like the following one in the logs: + +```console +Welcome to the Bitnami postgresql container +Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-postgresql +Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-postgresql/issues +Send us your feedback at containers@bitnami.com + +INFO ==> ** Starting PostgreSQL setup ** +INFO ==> Validating settings in POSTGRESQL_* env vars.. 
+INFO ==> Initializing PostgreSQL database... +INFO ==> postgresql.conf file not detected. Generating it... +INFO ==> pg_hba.conf file not detected. Generating it... +INFO ==> Deploying PostgreSQL with persisted data... +INFO ==> Configuring replication parameters +INFO ==> Loading custom scripts... +INFO ==> Enabling remote connections +INFO ==> Stopping PostgreSQL... +INFO ==> ** PostgreSQL setup finished! ** + +INFO ==> ** Starting PostgreSQL ** + [1] FATAL: database files are incompatible with server + [1] DETAIL: The data directory was initialized by PostgreSQL version 10, which is not compatible with this version 11.3. +``` + +In this case, you should migrate the data from the old chart to the new one following an approach similar to that described in [this section](https://www.postgresql.org/docs/current/upgrading.html#UPGRADING-VIA-PGDUMPALL) from the official documentation. Basically, create a database dump in the old chart, then move and restore it in the new one. + +### To 4.0.0 + +Starting from version `10.7.0-r68`, this chart uses the Bitnami PostgreSQL container by default. This version moves the initialization logic from node.js to bash. This new version of the chart requires setting the `POSTGRES_PASSWORD` in the slaves as well, in order to properly configure the `pg_hba.conf` file. Users of previous versions of the chart are advised to upgrade immediately. + +IMPORTANT: If you do not want to upgrade the chart version, then make sure you use the `10.7.0-r68` version of the container. Otherwise, you will get this error: + +``` +The POSTGRESQL_PASSWORD environment variable is empty or not set. Set the environment variable ALLOW_EMPTY_PASSWORD=yes to allow the container to be started with blank passwords. This is recommended only for development +``` + +### To 3.0.0 + +This release makes it possible to specify different nodeSelector, affinity and tolerations for master and slave pods.
+It also fixes an issue with the `postgresql.master.fullname` helper template not obeying fullnameOverride. + +#### Breaking changes + +- `affinity` has been renamed to `master.affinity` and `slave.affinity`. +- `tolerations` has been renamed to `master.tolerations` and `slave.tolerations`. +- `nodeSelector` has been renamed to `master.nodeSelector` and `slave.nodeSelector`. + +### To 2.0.0 + +In order to upgrade from the `0.X.X` branch to `1.X.X`, follow the steps below: + +- Obtain the service name (`SERVICE_NAME`) and password (`OLD_PASSWORD`) of the existing postgresql chart. You can find the instructions to obtain the password in the NOTES.txt; the service name can be obtained by running: + +```console +$ kubectl get svc +``` + +- Install (not upgrade) the new version + +```console +$ helm repo update +$ helm install my-release bitnami/postgresql +``` + +- Connect to the new pod (you can obtain the name by running `kubectl get pods`): + +```console +$ kubectl exec -it NAME bash +``` + +- Once logged in, create a dump file from the previous database using `pg_dump`; for that, connect to the previous postgresql chart: + +```console +$ pg_dump -h SERVICE_NAME -U postgres DATABASE_NAME > /tmp/backup.sql +``` + +After running the above command you will be prompted for a password; this is the password of the previous chart (`OLD_PASSWORD`). +This operation could take some time depending on the database size. + +- Once you have the backup file, you can restore it with a command like the one below: + +```console +$ psql -U postgres DATABASE_NAME < /tmp/backup.sql +``` + +In this case, you are accessing the local postgresql, so the password should be the new one (you can find it in NOTES.txt). + +If you want to restore the database and the database schema does not exist, it is necessary to first follow the steps described below.
+ +```console +$ psql -U postgres +postgres=# drop database DATABASE_NAME; +postgres=# create database DATABASE_NAME; +postgres=# create user USER_NAME; +postgres=# alter role USER_NAME with password 'BITNAMI_USER_PASSWORD'; +postgres=# grant all privileges on database DATABASE_NAME to USER_NAME; +postgres=# alter database DATABASE_NAME owner to USER_NAME; +``` diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/.helmignore b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/.helmignore new file mode 100644 index 0000000000..50af031725 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/.helmignore @@ -0,0 +1,22 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/Chart.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/Chart.yaml new file mode 100644 index 0000000000..bcc3808d08 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/Chart.yaml @@ -0,0 +1,23 @@ +annotations: + category: Infrastructure +apiVersion: v2 +appVersion: 1.4.2 +description: A Library Helm Chart for grouping common logic between bitnami charts. + This chart is not deployable by itself. 
+home: https://github.com/bitnami/charts/tree/master/bitnami/common +icon: https://bitnami.com/downloads/logos/bitnami-mark.png +keywords: +- common +- helper +- template +- function +- bitnami +maintainers: +- email: containers@bitnami.com + name: Bitnami +name: common +sources: +- https://github.com/bitnami/charts +- http://www.bitnami.com/ +type: library +version: 1.4.2 diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/README.md b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/README.md new file mode 100644 index 0000000000..7287cbb5fc --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/README.md @@ -0,0 +1,322 @@ +# Bitnami Common Library Chart + +A [Helm Library Chart](https://helm.sh/docs/topics/library_charts/#helm) for grouping common logic between bitnami charts. + +## TL;DR + +```yaml +dependencies: + - name: common + version: 0.x.x + repository: https://charts.bitnami.com/bitnami +``` + +```bash +$ helm dependency update +``` + +```yaml +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "common.names.fullname" . }} +data: + myvalue: "Hello World" +``` + +## Introduction + +This chart provides common template helpers that can be used to develop new charts using the [Helm](https://helm.sh) package manager. + +Bitnami charts can be used with [Kubeapps](https://kubeapps.com/) for deployment and management of Helm Charts in clusters. This Helm chart has been tested on top of [Bitnami Kubernetes Production Runtime](https://kubeprod.io/) (BKPR). Deploy BKPR to get automated TLS certificates, logging and monitoring for your applications. + +## Prerequisites + +- Kubernetes 1.12+ +- Helm 3.1.0 + +## Parameters + +The following tables list the helpers available in the library, scoped by section.
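The helpers below are consumed from a parent chart's templates with Helm's `include` function. As a minimal sketch (the Service manifest is illustrative, assuming `common` is declared as a dependency as in the TL;DR above):

```yaml
# templates/svc.yaml in a chart that depends on the common library chart
apiVersion: v1
kind: Service
metadata:
  # Default fully qualified app name, honoring nameOverride/fullnameOverride
  name: {{ include "common.names.fullname" . }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
spec:
  # Immutable selector labels from the same library
  selector: {{- include "common.labels.matchLabels" . | nindent 4 }}
  ports:
    - name: http
      port: 80
```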
+ +### Affinities + +| Helper identifier | Description | Expected Input | +|-------------------------------|------------------------------------------------------|------------------------------------------------| +| `common.affinities.node.soft` | Return a soft nodeAffinity definition | `dict "key" "FOO" "values" (list "BAR" "BAZ")` | +| `common.affinities.node.hard` | Return a hard nodeAffinity definition | `dict "key" "FOO" "values" (list "BAR" "BAZ")` | +| `common.affinities.pod.soft` | Return a soft podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $` | +| `common.affinities.pod.hard` | Return a hard podAffinity/podAntiAffinity definition | `dict "component" "FOO" "context" $` | + +### Capabilities + +| Helper identifier | Description | Expected Input | +|----------------------------------------------|------------------------------------------------------------------------------------------------|-------------------| +| `common.capabilities.kubeVersion` | Return the target Kubernetes version (using client default if .Values.kubeVersion is not set). | `.` Chart context | +| `common.capabilities.deployment.apiVersion` | Return the appropriate apiVersion for deployment. | `.` Chart context | +| `common.capabilities.statefulset.apiVersion` | Return the appropriate apiVersion for statefulset. | `.` Chart context | +| `common.capabilities.ingress.apiVersion` | Return the appropriate apiVersion for ingress. | `.` Chart context | +| `common.capabilities.rbac.apiVersion` | Return the appropriate apiVersion for RBAC resources. | `.` Chart context | +| `common.capabilities.crd.apiVersion` | Return the appropriate apiVersion for CRDs. 
| `.` Chart context | +| `common.capabilities.supportsHelmVersion` | Returns true if the used Helm version is 3.3+ | `.` Chart context | + +### Errors + +| Helper identifier | Description | Expected Input | +|-----------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------| +| `common.errors.upgrade.passwords.empty` | It will ensure required passwords are given when we are upgrading a chart. If `validationErrors` is not empty it will throw an error and will stop the upgrade action. | `dict "validationErrors" (list $validationError00 $validationError01) "context" $` | + +### Images + +| Helper identifier | Description | Expected Input | +|-----------------------------|------------------------------------------------------|---------------------------------------------------------------------------------------------------------| +| `common.images.image` | Return the proper and full image name | `dict "imageRoot" .Values.path.to.the.image "global" $`, see [ImageRoot](#imageroot) for the structure. 
| +| `common.images.pullSecrets` | Return the proper Docker Image Registry Secret Names | `dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global` | + +### Ingress + +| Helper identifier | Description | Expected Input | +|-------------------|-------------|----------------| +| `common.ingress.backend` | Generate a proper Ingress backend entry depending on the API version | `dict "serviceName" "foo" "servicePort" "bar"`, see the [Ingress deprecation notice](https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/) for the syntax differences | + +### Labels + +| Helper identifier | Description | Expected Input | +|-------------------|-------------|----------------| +| `common.labels.standard` | Return Kubernetes standard labels | `.` Chart context | +| `common.labels.matchLabels` | Return the labels to use on `deploy.spec.selector.matchLabels` and `svc.spec.selector` | `.` Chart context | + +### Names + +| Helper identifier | Description | Expected Input | +|-------------------|-------------|----------------| +| `common.names.name` | Expand the name of the chart or use `.Values.nameOverride` | `.` Chart context | +| `common.names.fullname` | Create a default fully qualified app name.
| `.` Chart context | +| `common.names.chart` | Chart name plus version | `.` Chart context | + +### Secrets + +| Helper identifier | Description | Expected Input | +|-------------------|-------------|----------------| +| `common.secrets.name` | Generate the name of the secret. | `dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $`, see [ExistingSecret](#existingsecret) for the structure. | +| `common.secrets.key` | Generate secret key. | `dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName"`, see [ExistingSecret](#existingsecret) for the structure. | +| `common.passwords.manage` | Generate a secret password or retrieve one if already created. | `dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $`; the length, strong and chartName fields are optional. | +| `common.secrets.exists` | Returns whether a previously generated secret already exists. | `dict "secret" "secret-name" "context" $` | + +### Storage + +| Helper identifier | Description | Expected Input | +|-------------------|-------------|----------------| +| `common.storage.class` | Return the proper Storage Class | `dict "persistence" .Values.path.to.the.persistence "global" $`, see [Persistence](#persistence) for the structure.
| + +### TplValues + +| Helper identifier | Description | Expected Input | +|-------------------|-------------|----------------| +| `common.tplvalues.render` | Renders a value that contains a template | `dict "value" .Values.path.to.the.Value "context" $`; value is the value that should be rendered as a template, and context is frequently the chart context (`$` or `.`) | + +### Utils + +| Helper identifier | Description | Expected Input | +|-------------------|-------------|----------------| +| `common.utils.fieldToEnvVar` | Build an environment variable name given a field. | `dict "field" "my-password"` | +| `common.utils.secret.getvalue` | Print instructions to get a secret value. | `dict "secret" "secret-name" "field" "secret-value-field" "context" $` | +| `common.utils.getValueFromKey` | Get a value from the `.Values` object given its key path | `dict "key" "path.to.key" "context" $` | +| `common.utils.getKeyFromList` | Return the first `.Values` key with a defined value, or the first key of the list if none are defined | `dict "keys" (list "path.to.key1" "path.to.key2") "context" $` | + +### Validations + +| Helper identifier | Description | Expected Input | +|-------------------|-------------|----------------| +| `common.validations.values.single.empty` | Validate that a value is not empty.
| `dict "valueKey" "path.to.value" "secret" "secret.name" "field" "my-password" "subchart" "subchart" "context" $`; secret, field and subchart are optional. If they are given, the helper will also generate instructions on how to retrieve the current value. See [ValidateValue](#validatevalue) | +| `common.validations.values.multiple.empty` | Validate that multiple values are not empty. It returns a shared error for all the values. | `dict "required" (list $validateValueConf00 $validateValueConf01) "context" $`. See [ValidateValue](#validatevalue) | +| `common.validations.values.mariadb.passwords` | This helper ensures the required passwords for MariaDB are not empty. It returns a shared error for all the values. | `dict "secret" "mariadb-secret" "subchart" "true" "context" $`; the subchart field is optional (true or false), depending on whether the helper is used from the mariadb chart itself or from a parent chart. | +| `common.validations.values.postgresql.passwords` | This helper ensures the required passwords for PostgreSQL are not empty. It returns a shared error for all the values. | `dict "secret" "postgresql-secret" "subchart" "true" "context" $`; the subchart field is optional (true or false), depending on whether the helper is used from the postgresql chart itself or from a parent chart. | +| `common.validations.values.redis.passwords` | This helper ensures the required passwords for Redis™ are not empty. It returns a shared error for all the values. | `dict "secret" "redis-secret" "subchart" "true" "context" $`; the subchart field is optional (true or false), depending on whether the helper is used from the redis chart itself or from a parent chart. | +| `common.validations.values.cassandra.passwords` | This helper ensures the required passwords for Cassandra are not empty. It returns a shared error for all the values. | `dict "secret" "cassandra-secret" "subchart" "true" "context" $`; the subchart field is optional (true or false), depending on whether the helper is used from the cassandra chart itself or from a parent chart.
| +| `common.validations.values.mongodb.passwords` | This helper ensures the required passwords for MongoDB® are not empty. It returns a shared error for all the values. | `dict "secret" "mongodb-secret" "subchart" "true" "context" $`; the subchart field is optional (true or false), depending on whether the helper is used from the mongodb chart itself or from a parent chart. | + +### Warnings + +| Helper identifier | Description | Expected Input | +|-------------------|-------------|----------------| +| `common.warnings.rollingTag` | Warning about using a rolling tag. | `ImageRoot`, see [ImageRoot](#imageroot) for the structure. | + +## Special input schemas + +### ImageRoot + +```yaml +registry: + type: string + description: Docker registry where the image is located + example: docker.io + +repository: + type: string + description: Repository and image name + example: bitnami/nginx + +tag: + type: string + description: image tag + example: 1.16.1-debian-10-r63 + +pullPolicy: + type: string + description: Specify an imagePullPolicy. Defaults to 'Always' if the image tag is 'latest', else 'IfNotPresent' + +pullSecrets: + type: array + items: + type: string + description: Optionally specify an array of imagePullSecrets. + +debug: + type: boolean + description: Set to true if you would like to see extra information in the logs + example: false + +## An instance would be: +# registry: docker.io +# repository: bitnami/nginx +# tag: 1.16.1-debian-10-r63 +# pullPolicy: IfNotPresent +# debug: false +``` + +### Persistence + +```yaml +enabled: + type: boolean + description: Whether to enable persistence. + example: true + +storageClass: + type: string + description: Persistent Volume Storage Class. If set to "-", storageClassName is set to "", which disables dynamic provisioning. + example: "-" + +accessMode: + type: string + description: Access mode for the Persistent Volume Storage.
+ example: ReadWriteOnce + +size: + type: string + description: Size of the Persistent Volume Storage. + example: 8Gi + +path: + type: string + description: Path to be persisted. + example: /bitnami + +## An instance would be: +# enabled: true +# storageClass: "-" +# accessMode: ReadWriteOnce +# size: 8Gi +# path: /bitnami +``` + +### ExistingSecret + +```yaml +name: + type: string + description: Name of the existing secret. + example: mySecret +keyMapping: + description: Mapping between the expected key name and the name of the key in the existing secret. + type: object + +## An instance would be: +# name: mySecret +# keyMapping: +# password: myPasswordKey +``` + +#### Example of use + +When we store sensitive data for a deployment in a secret, sometimes we want to give users the possibility of using their existing secrets. + +```yaml +# templates/secret.yaml +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "common.names.fullname" . }} + labels: + app: {{ include "common.names.fullname" . }} +type: Opaque +data: + password: {{ .Values.password | b64enc | quote }} + +# templates/dpl.yaml +--- +... + env: + - name: PASSWORD + valueFrom: + secretKeyRef: + name: {{ include "common.secrets.name" (dict "existingSecret" .Values.existingSecret "context" $) }} + key: {{ include "common.secrets.key" (dict "existingSecret" .Values.existingSecret "key" "password") }} +...
+ +# values.yaml +--- +name: mySecret +keyMapping: + password: myPasswordKey +``` + +### ValidateValue + +#### NOTES.txt + +```console +{{- $validateValueConf00 := (dict "valueKey" "path.to.value00" "secret" "secretName" "field" "password-00") -}} +{{- $validateValueConf01 := (dict "valueKey" "path.to.value01" "secret" "secretName" "field" "password-01") -}} + +{{ include "common.validations.values.multiple.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }} +``` + +If we force those values to be empty we will see some alerts + +```console +$ helm install test mychart --set path.to.value00="",path.to.value01="" + 'path.to.value00' must not be empty, please add '--set path.to.value00=$PASSWORD_00' to the command. To get the current value: + + export PASSWORD_00=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-00}" | base64 --decode) + + 'path.to.value01' must not be empty, please add '--set path.to.value01=$PASSWORD_01' to the command. To get the current value: + + export PASSWORD_01=$(kubectl get secret --namespace default secretName -o jsonpath="{.data.password-01}" | base64 --decode) +``` + +## Upgrading + +### To 1.0.0 + +[On November 13, 2020, Helm v2 support was formally finished](https://github.com/helm/charts#status-of-the-project), this major version is the result of the required changes applied to the Helm Chart to be able to incorporate the different features added in Helm v3 and to be consistent with the Helm project itself regarding the Helm v2 EOL. + +**What changes were introduced in this major version?** + +- Previous versions of this Helm Chart use `apiVersion: v1` (installable by both Helm 2 and 3), this Helm Chart was updated to `apiVersion: v2` (installable by Helm 3 only). [Here](https://helm.sh/docs/topics/charts/#the-apiversion-field) you can find more information about the `apiVersion` field. +- Use `type: library`. 
[Here](https://v3.helm.sh/docs/faq/#library-chart-support) you can find more information. +- The different fields present in the *Chart.yaml* file have been ordered alphabetically in a homogeneous way for all the Bitnami Helm Charts + +**Considerations when upgrading to this version** + +- If you want to upgrade to this version from a previous one installed with Helm v3, you shouldn't face any issues +- If you want to upgrade to this version using Helm v2, this scenario is not supported as this version doesn't support Helm v2 anymore +- If you installed the previous version with Helm v2 and want to upgrade to this version with Helm v3, please refer to the [official Helm documentation](https://helm.sh/docs/topics/v2_v3_migration/#migration-use-cases) about migrating from Helm v2 to v3 + +**Useful links** + +- https://docs.bitnami.com/tutorials/resolve-helm2-helm3-post-migration-issues/ +- https://helm.sh/docs/topics/v2_v3_migration/ +- https://helm.sh/blog/migrate-from-helm-v2-to-helm-v3/ diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_affinities.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_affinities.tpl new file mode 100644 index 0000000000..493a6dc7e4 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_affinities.tpl @@ -0,0 +1,94 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Return a soft nodeAffinity definition +{{ include "common.affinities.nodes.soft" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.nodes.soft" -}} +preferredDuringSchedulingIgnoredDuringExecution: + - preference: + matchExpressions: + - key: {{ .key }} + operator: In + values: + {{- range .values }} + - {{ .
}} + {{- end }} + weight: 1 +{{- end -}} + +{{/* +Return a hard nodeAffinity definition +{{ include "common.affinities.nodes.hard" (dict "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.nodes.hard" -}} +requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: {{ .key }} + operator: In + values: + {{- range .values }} + - {{ . }} + {{- end }} +{{- end -}} + +{{/* +Return a nodeAffinity definition +{{ include "common.affinities.nodes" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.nodes" -}} + {{- if eq .type "soft" }} + {{- include "common.affinities.nodes.soft" . -}} + {{- else if eq .type "hard" }} + {{- include "common.affinities.nodes.hard" . -}} + {{- end -}} +{{- end -}} + +{{/* +Return a soft podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods.soft" (dict "component" "FOO" "context" $) -}} +*/}} +{{- define "common.affinities.pods.soft" -}} +{{- $component := default "" .component -}} +preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 10 }} + {{- if not (empty $component) }} + {{ printf "app.kubernetes.io/component: %s" $component }} + {{- end }} + namespaces: + - {{ .context.Release.Namespace | quote }} + topologyKey: kubernetes.io/hostname + weight: 1 +{{- end -}} + +{{/* +Return a hard podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods.hard" (dict "component" "FOO" "context" $) -}} +*/}} +{{- define "common.affinities.pods.hard" -}} +{{- $component := default "" .component -}} +requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchLabels: {{- (include "common.labels.matchLabels" .context) | nindent 8 }} + {{- if not (empty $component) }} + {{ printf "app.kubernetes.io/component: %s" $component }} + {{- end }} + namespaces: + - {{ 
.context.Release.Namespace | quote }} + topologyKey: kubernetes.io/hostname +{{- end -}} + +{{/* +Return a podAffinity/podAntiAffinity definition +{{ include "common.affinities.pods" (dict "type" "soft" "key" "FOO" "values" (list "BAR" "BAZ")) -}} +*/}} +{{- define "common.affinities.pods" -}} + {{- if eq .type "soft" }} + {{- include "common.affinities.pods.soft" . -}} + {{- else if eq .type "hard" }} + {{- include "common.affinities.pods.hard" . -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_capabilities.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_capabilities.tpl new file mode 100644 index 0000000000..4dde56a38d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_capabilities.tpl @@ -0,0 +1,95 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Return the target Kubernetes version +*/}} +{{- define "common.capabilities.kubeVersion" -}} +{{- if .Values.global }} + {{- if .Values.global.kubeVersion }} + {{- .Values.global.kubeVersion -}} + {{- else }} + {{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}} + {{- end -}} +{{- else }} +{{- default .Capabilities.KubeVersion.Version .Values.kubeVersion -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for deployment. +*/}} +{{- define "common.capabilities.deployment.apiVersion" -}} +{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "extensions/v1beta1" -}} +{{- else -}} +{{- print "apps/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for statefulset. +*/}} +{{- define "common.capabilities.statefulset.apiVersion" -}} +{{- if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) 
-}} +{{- print "apps/v1beta1" -}} +{{- else -}} +{{- print "apps/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for ingress. +*/}} +{{- define "common.capabilities.ingress.apiVersion" -}} +{{- if .Values.ingress -}} +{{- if .Values.ingress.apiVersion -}} +{{- .Values.ingress.apiVersion -}} +{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "extensions/v1beta1" -}} +{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "networking.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "networking.k8s.io/v1" -}} +{{- end }} +{{- else if semverCompare "<1.14-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "extensions/v1beta1" -}} +{{- else if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "networking.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "networking.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for RBAC resources. +*/}} +{{- define "common.capabilities.rbac.apiVersion" -}} +{{- if semverCompare "<1.17-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "rbac.authorization.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "rbac.authorization.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for CRDs. +*/}} +{{- define "common.capabilities.crd.apiVersion" -}} +{{- if semverCompare "<1.19-0" (include "common.capabilities.kubeVersion" .) -}} +{{- print "apiextensions.k8s.io/v1beta1" -}} +{{- else -}} +{{- print "apiextensions.k8s.io/v1" -}} +{{- end -}} +{{- end -}} + +{{/* +Returns true if the used Helm version is 3.3+. +A way to check the used Helm version was not introduced until version 3.3.0 with .Capabilities.HelmVersion, which contains an additional "{}}" structure. 
+This check is introduced as a regexMatch instead of {{ if .Capabilities.HelmVersion }} because checking for the key HelmVersion in <3.3 results in an "interface not found" error. +**To be removed when the catalog's minimum Helm version is 3.3** +*/}} +{{- define "common.capabilities.supportsHelmVersion" -}} +{{- if regexMatch "{(v[0-9])*[^}]*}}$" (.Capabilities | toString ) }} + {{- true -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_errors.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_errors.tpl new file mode 100644 index 0000000000..a79cc2e322 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_errors.tpl @@ -0,0 +1,23 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Throw an error when upgrading using empty password values that must not be empty. + +Usage: +{{- $validationError00 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password00" "secret" "secretName" "field" "password-00") -}} +{{- $validationError01 := include "common.validations.values.single.empty" (dict "valueKey" "path.to.password01" "secret" "secretName" "field" "password-01") -}} +{{ include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $validationError00 $validationError01) "context" $) }} + +Required password params: + - validationErrors - String - Required. List of validation strings to be returned; if it is empty, no error is thrown. + - context - Context - Required. Parent context. +*/}} +{{- define "common.errors.upgrade.passwords.empty" -}} + {{- $validationErrors := join "" .validationErrors -}} + {{- if and $validationErrors .context.Release.IsUpgrade -}} + {{- $errorString := "\nPASSWORDS ERROR: You must provide your current passwords when upgrading the release."
-}} + {{- $errorString = print $errorString "\n Note that even after reinstallation, old credentials may be needed as they may be kept in persistent volume claims." -}} + {{- $errorString = print $errorString "\n Further information can be obtained at https://docs.bitnami.com/general/how-to/troubleshoot-helm-chart-issues/#credential-errors-while-upgrading-chart-releases" -}} + {{- $errorString = print $errorString "\n%s" -}} + {{- printf $errorString $validationErrors | fail -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_images.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_images.tpl new file mode 100644 index 0000000000..60f04fd6e2 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_images.tpl @@ -0,0 +1,47 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Return the proper image name +{{ include "common.images.image" ( dict "imageRoot" .Values.path.to.the.image "global" $) }} +*/}} +{{- define "common.images.image" -}} +{{- $registryName := .imageRoot.registry -}} +{{- $repositoryName := .imageRoot.repository -}} +{{- $tag := .imageRoot.tag | toString -}} +{{- if .global }} + {{- if .global.imageRegistry }} + {{- $registryName = .global.imageRegistry -}} + {{- end -}} +{{- end -}} +{{- if $registryName }} +{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} +{{- else -}} +{{- printf "%s:%s" $repositoryName $tag -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper Docker Image Registry Secret Names +{{ include "common.images.pullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "global" .Values.global) }} +*/}} +{{- define "common.images.pullSecrets" -}} + {{- $pullSecrets := list }} + + {{- if .global }} + {{- range .global.imagePullSecrets -}} + {{- $pullSecrets = append $pullSecrets . 
-}} + {{- end -}} + {{- end -}} + + {{- range .images -}} + {{- range .pullSecrets -}} + {{- $pullSecrets = append $pullSecrets . -}} + {{- end -}} + {{- end -}} + + {{- if (not (empty $pullSecrets)) }} +imagePullSecrets: + {{- range $pullSecrets }} + - name: {{ . }} + {{- end }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_ingress.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_ingress.tpl new file mode 100644 index 0000000000..622ef50e3c --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_ingress.tpl @@ -0,0 +1,42 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Generate backend entry that is compatible with all Kubernetes API versions. + +Usage: +{{ include "common.ingress.backend" (dict "serviceName" "backendName" "servicePort" "backendPort" "context" $) }} + +Params: + - serviceName - String. Name of an existing service backend + - servicePort - String/Int. Port name (or number) of the service. It will be translated to different yaml depending if it is a string or an integer. + - context - Dict - Required. The context for the template evaluation. +*/}} +{{- define "common.ingress.backend" -}} +{{- $apiVersion := (include "common.capabilities.ingress.apiVersion" .context) -}} +{{- if or (eq $apiVersion "extensions/v1beta1") (eq $apiVersion "networking.k8s.io/v1beta1") -}} +serviceName: {{ .serviceName }} +servicePort: {{ .servicePort }} +{{- else -}} +service: + name: {{ .serviceName }} + port: + {{- if typeIs "string" .servicePort }} + name: {{ .servicePort }} + {{- else if typeIs "int" .servicePort }} + number: {{ .servicePort }} + {{- end }} +{{- end -}} +{{- end -}} + +{{/* +Print "true" if the API pathType field is supported +Usage: +{{ include "common.ingress.supportsPathType" . 
}} +*/}} +{{- define "common.ingress.supportsPathType" -}} +{{- if (semverCompare "<1.18-0" (include "common.capabilities.kubeVersion" .)) -}} +{{- print "false" -}} +{{- else -}} +{{- print "true" -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_labels.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_labels.tpl new file mode 100644 index 0000000000..252066c7e2 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_labels.tpl @@ -0,0 +1,18 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Kubernetes standard labels +*/}} +{{- define "common.labels.standard" -}} +app.kubernetes.io/name: {{ include "common.names.name" . }} +helm.sh/chart: {{ include "common.names.chart" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end -}} + +{{/* +Labels to use on deploy.spec.selector.matchLabels and svc.spec.selector +*/}} +{{- define "common.labels.matchLabels" -}} +app.kubernetes.io/name: {{ include "common.names.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_names.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_names.tpl new file mode 100644 index 0000000000..adf2a74f48 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_names.tpl @@ -0,0 +1,32 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "common.names.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. 
+*/}} +{{- define "common.names.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "common.names.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_secrets.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_secrets.tpl new file mode 100644 index 0000000000..60b84a7019 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_secrets.tpl @@ -0,0 +1,129 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Generate secret name. + +Usage: +{{ include "common.secrets.name" (dict "existingSecret" .Values.path.to.the.existingSecret "defaultNameSuffix" "mySuffix" "context" $) }} + +Params: + - existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user + to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility. + +info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret + - defaultNameSuffix - String - Optional. It is used only if we have several secrets in the same deployment. + - context - Dict - Required. 
The context for the template evaluation. +*/}} +{{- define "common.secrets.name" -}} +{{- $name := (include "common.names.fullname" .context) -}} + +{{- if .defaultNameSuffix -}} +{{- $name = printf "%s-%s" $name .defaultNameSuffix | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{- with .existingSecret -}} +{{- if not (typeIs "string" .) -}} +{{- with .name -}} +{{- $name = . -}} +{{- end -}} +{{- else -}} +{{- $name = . -}} +{{- end -}} +{{- end -}} + +{{- printf "%s" $name -}} +{{- end -}} + +{{/* +Generate secret key. + +Usage: +{{ include "common.secrets.key" (dict "existingSecret" .Values.path.to.the.existingSecret "key" "keyName") }} + +Params: + - existingSecret - ExistingSecret/String - Optional. The path to the existing secrets in the values.yaml given by the user + to be used instead of the default one. Allows for it to be of type String (just the secret name) for backwards compatibility. + +info: https://github.com/bitnami/charts/tree/master/bitnami/common#existingsecret + - key - String - Required. Name of the key in the secret. +*/}} +{{- define "common.secrets.key" -}} +{{- $key := .key -}} + +{{- if .existingSecret -}} + {{- if not (typeIs "string" .existingSecret) -}} + {{- if .existingSecret.keyMapping -}} + {{- $key = index .existingSecret.keyMapping $.key -}} + {{- end -}} + {{- end }} +{{- end -}} + +{{- printf "%s" $key -}} +{{- end -}} + +{{/* +Generate secret password or retrieve one if already created. + +Usage: +{{ include "common.secrets.passwords.manage" (dict "secret" "secret-name" "key" "keyName" "providedValues" (list "path.to.password1" "path.to.password2") "length" 10 "strong" false "chartName" "chartName" "context" $) }} + +Params: + - secret - String - Required - Name of the 'Secret' resource where the password is stored. + - key - String - Required - Name of the key in the secret. + - providedValues - List - Required - The path to the validating value in the values.yaml, e.g: "mysql.password". 
Will pick first parameter with a defined value. + - length - int - Optional - Length of the generated random password. + - strong - Boolean - Optional - Whether to add symbols to the generated random password. + - chartName - String - Optional - Name of the chart used when said chart is deployed as a subchart. + - context - Context - Required - Parent context. +*/}} +{{- define "common.secrets.passwords.manage" -}} + +{{- $password := "" }} +{{- $subchart := "" }} +{{- $chartName := default "" .chartName }} +{{- $passwordLength := default 10 .length }} +{{- $providedPasswordKey := include "common.utils.getKeyFromList" (dict "keys" .providedValues "context" $.context) }} +{{- $providedPasswordValue := include "common.utils.getValueFromKey" (dict "key" $providedPasswordKey "context" $.context) }} +{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }} +{{- if $secret }} + {{- if index $secret.data .key }} + {{- $password = index $secret.data .key }} + {{- end -}} +{{- else if $providedPasswordValue }} + {{- $password = $providedPasswordValue | toString | b64enc | quote }} +{{- else }} + + {{- if .context.Values.enabled }} + {{- $subchart = $chartName }} + {{- end -}} + + {{- $requiredPassword := dict "valueKey" $providedPasswordKey "secret" .secret "field" .key "subchart" $subchart "context" $.context -}} + {{- $requiredPasswordError := include "common.validations.values.single.empty" $requiredPassword -}} + {{- $passwordValidationErrors := list $requiredPasswordError -}} + {{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" $passwordValidationErrors "context" $.context) -}} + + {{- if .strong }} + {{- $subStr := list (lower (randAlpha 1)) (randNumeric 1) (upper (randAlpha 1)) | join "_" }} + {{- $password = randAscii $passwordLength }} + {{- $password = regexReplaceAllLiteral "\\W" $password "@" | substr 5 $passwordLength }} + {{- $password = printf "%s%s" $subStr $password | toString | shuffle | b64enc | quote }} + {{- 
else }} + {{- $password = randAlphaNum $passwordLength | b64enc | quote }} + {{- end }} +{{- end -}} +{{- printf "%s" $password -}} +{{- end -}} + +{{/* +Returns whether a previous generated secret already exists + +Usage: +{{ include "common.secrets.exists" (dict "secret" "secret-name" "context" $) }} + +Params: + - secret - String - Required - Name of the 'Secret' resource where the password is stored. + - context - Context - Required - Parent context. +*/}} +{{- define "common.secrets.exists" -}} +{{- $secret := (lookup "v1" "Secret" $.context.Release.Namespace .secret) }} +{{- if $secret }} + {{- true -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_storage.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_storage.tpl new file mode 100644 index 0000000000..60e2a844f6 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_storage.tpl @@ -0,0 +1,23 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Return the proper Storage Class +{{ include "common.storage.class" ( dict "persistence" .Values.path.to.the.persistence "global" $) }} +*/}} +{{- define "common.storage.class" -}} + +{{- $storageClass := .persistence.storageClass -}} +{{- if .global -}} + {{- if .global.storageClass -}} + {{- $storageClass = .global.storageClass -}} + {{- end -}} +{{- end -}} + +{{- if $storageClass -}} + {{- if (eq "-" $storageClass) -}} + {{- printf "storageClassName: \"\"" -}} + {{- else }} + {{- printf "storageClassName: %s" $storageClass -}} + {{- end -}} +{{- end -}} + +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_tplvalues.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_tplvalues.tpl new file mode 100644 index 
0000000000..2db166851b --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_tplvalues.tpl @@ -0,0 +1,13 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Renders a value that contains a template. +Usage: +{{ include "common.tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $) }} +*/}} +{{- define "common.tplvalues.render" -}} + {{- if typeIs "string" .value }} + {{- tpl .value .context }} + {{- else }} + {{- tpl (.value | toYaml) .context }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_utils.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_utils.tpl new file mode 100644 index 0000000000..ea083a249f --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_utils.tpl @@ -0,0 +1,62 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Print instructions to get a secret value. +Usage: +{{ include "common.utils.secret.getvalue" (dict "secret" "secret-name" "field" "secret-value-field" "context" $) }} +*/}} +{{- define "common.utils.secret.getvalue" -}} +{{- $varname := include "common.utils.fieldToEnvVar" . -}} +export {{ $varname }}=$(kubectl get secret --namespace {{ .context.Release.Namespace | quote }} {{ .secret }} -o jsonpath="{.data.{{ .field }}}" | base64 --decode) +{{- end -}} + +{{/* +Build env var name given a field +Usage: +{{ include "common.utils.fieldToEnvVar" (dict "field" "my-password") }} +*/}} +{{- define "common.utils.fieldToEnvVar" -}} + {{- $fieldNameSplit := splitList "-" .field -}} + {{- $upperCaseFieldNameSplit := list -}} + + {{- range $fieldNameSplit -}} + {{- $upperCaseFieldNameSplit = append $upperCaseFieldNameSplit ( upper .
) -}} + {{- end -}} + + {{ join "_" $upperCaseFieldNameSplit }} +{{- end -}} + +{{/* +Gets a value from .Values given a key path +Usage: +{{ include "common.utils.getValueFromKey" (dict "key" "path.to.key" "context" $) }} +*/}} +{{- define "common.utils.getValueFromKey" -}} +{{- $splitKey := splitList "." .key -}} +{{- $value := "" -}} +{{- $latestObj := $.context.Values -}} +{{- range $splitKey -}} + {{- if not $latestObj -}} + {{- printf "please review the entire path of '%s' exists in values" $.key | fail -}} + {{- end -}} + {{- $value = ( index $latestObj . ) -}} + {{- $latestObj = $value -}} +{{- end -}} +{{- printf "%v" (default "" $value) -}} +{{- end -}} + +{{/* +Returns the first .Values key with a defined value, or the first key in the list if none are defined +Usage: +{{ include "common.utils.getKeyFromList" (dict "keys" (list "path.to.key1" "path.to.key2") "context" $) }} +*/}} +{{- define "common.utils.getKeyFromList" -}} +{{- $key := first .keys -}} +{{- $reverseKeys := reverse .keys }} +{{- range $reverseKeys }} + {{- $value := include "common.utils.getValueFromKey" (dict "key" . "context" $.context ) }} + {{- if $value -}} + {{- $key = . }} + {{- end -}} +{{- end -}} +{{- printf "%s" $key -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_warnings.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_warnings.tpl new file mode 100644 index 0000000000..ae10fa41ee --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/_warnings.tpl @@ -0,0 +1,14 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Warning about using rolling tag.
+Usage: +{{ include "common.warnings.rollingTag" .Values.path.to.the.imageRoot }} +*/}} +{{- define "common.warnings.rollingTag" -}} + +{{- if and (contains "bitnami/" .repository) (not (.tag | toString | regexFind "-r\\d+$|sha256:")) }} +WARNING: Rolling tag detected ({{ .repository }}:{{ .tag }}), please note that it is strongly recommended to avoid using rolling tags in a production environment. ++info https://docs.bitnami.com/containers/how-to/understand-rolling-tags-containers/ +{{- end }} + +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_cassandra.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_cassandra.tpl new file mode 100644 index 0000000000..8679ddffb1 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_cassandra.tpl @@ -0,0 +1,72 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate Cassandra required passwords are not empty. + +Usage: +{{ include "common.validations.values.cassandra.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where Cassandra values are stored, e.g: "cassandra-passwords-secret" + - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.cassandra.passwords" -}} + {{- $existingSecret := include "common.cassandra.values.existingSecret" . -}} + {{- $enabled := include "common.cassandra.values.enabled" . -}} + {{- $dbUserPrefix := include "common.cassandra.values.key.dbUser" . 
-}} + {{- $valueKeyPassword := printf "%s.password" $dbUserPrefix -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "cassandra-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.cassandra.values.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. Default: false +*/}} +{{- define "common.cassandra.values.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.cassandra.dbUser.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.dbUser.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled cassandra. + +Usage: +{{ include "common.cassandra.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.cassandra.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.cassandra.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key dbUser + +Usage: +{{ include "common.cassandra.values.key.dbUser" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether Cassandra is used as subchart or not. 
Default: false +*/}} +{{- define "common.cassandra.values.key.dbUser" -}} + {{- if .subchart -}} + cassandra.dbUser + {{- else -}} + dbUser + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mariadb.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mariadb.tpl new file mode 100644 index 0000000000..bb5ed7253d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mariadb.tpl @@ -0,0 +1,103 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate MariaDB required passwords are not empty. + +Usage: +{{ include "common.validations.values.mariadb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where MariaDB values are stored, e.g: "mysql-passwords-secret" + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.mariadb.passwords" -}} + {{- $existingSecret := include "common.mariadb.values.auth.existingSecret" . -}} + {{- $enabled := include "common.mariadb.values.enabled" . -}} + {{- $architecture := include "common.mariadb.values.architecture" . -}} + {{- $authPrefix := include "common.mariadb.values.key.auth" . 
-}} + {{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}} + {{- $valueKeyUsername := printf "%s.username" $authPrefix -}} + {{- $valueKeyPassword := printf "%s.password" $authPrefix -}} + {{- $valueKeyReplicationPassword := printf "%s.replicationPassword" $authPrefix -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mariadb-root-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}} + + {{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }} + {{- if not (empty $valueUsername) -}} + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mariadb-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + {{- end -}} + + {{- if (eq $architecture "replication") -}} + {{- $requiredReplicationPassword := dict "valueKey" $valueKeyReplicationPassword "secret" .secret "field" "mariadb-replication-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredReplicationPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.mariadb.values.auth.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.mariadb.values.auth.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.mariadb.auth.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.auth.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled mariadb. 
+ +Usage: +{{ include "common.mariadb.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.mariadb.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.mariadb.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for architecture + +Usage: +{{ include "common.mariadb.values.architecture" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.mariadb.values.architecture" -}} + {{- if .subchart -}} + {{- .context.Values.mariadb.architecture -}} + {{- else -}} + {{- .context.Values.architecture -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key auth + +Usage: +{{ include "common.mariadb.values.key.auth" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MariaDB is used as subchart or not. Default: false +*/}} +{{- define "common.mariadb.values.key.auth" -}} + {{- if .subchart -}} + mariadb.auth + {{- else -}} + auth + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mongodb.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mongodb.tpl new file mode 100644 index 0000000000..7d5ecbccb4 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_mongodb.tpl @@ -0,0 +1,108 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate MongoDB(R) required passwords are not empty. + +Usage: +{{ include "common.validations.values.mongodb.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. 
Name of the secret where MongoDB(R) values are stored, e.g: "mongodb-passwords-secret" + - subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.mongodb.passwords" -}} + {{- $existingSecret := include "common.mongodb.values.auth.existingSecret" . -}} + {{- $enabled := include "common.mongodb.values.enabled" . -}} + {{- $authPrefix := include "common.mongodb.values.key.auth" . -}} + {{- $architecture := include "common.mongodb.values.architecture" . -}} + {{- $valueKeyRootPassword := printf "%s.rootPassword" $authPrefix -}} + {{- $valueKeyUsername := printf "%s.username" $authPrefix -}} + {{- $valueKeyDatabase := printf "%s.database" $authPrefix -}} + {{- $valueKeyPassword := printf "%s.password" $authPrefix -}} + {{- $valueKeyReplicaSetKey := printf "%s.replicaSetKey" $authPrefix -}} + {{- $valueKeyAuthEnabled := printf "%s.enabled" $authPrefix -}} + + {{- $authEnabled := include "common.utils.getValueFromKey" (dict "key" $valueKeyAuthEnabled "context" .context) -}} + + {{- if and (not $existingSecret) (eq $enabled "true") (eq $authEnabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredRootPassword := dict "valueKey" $valueKeyRootPassword "secret" .secret "field" "mongodb-root-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRootPassword -}} + + {{- $valueUsername := include "common.utils.getValueFromKey" (dict "key" $valueKeyUsername "context" .context) }} + {{- $valueDatabase := include "common.utils.getValueFromKey" (dict "key" $valueKeyDatabase "context" .context) }} + {{- if and $valueUsername $valueDatabase -}} + {{- $requiredPassword := dict "valueKey" $valueKeyPassword "secret" .secret "field" "mongodb-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPassword -}} + {{- end -}} + + {{- if (eq $architecture "replicaset") -}} + {{- $requiredReplicaSetKey := dict "valueKey" $valueKeyReplicaSetKey "secret" 
.secret "field" "mongodb-replica-set-key" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredReplicaSetKey -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.mongodb.values.auth.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false +*/}} +{{- define "common.mongodb.values.auth.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.mongodb.auth.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.auth.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled mongodb. + +Usage: +{{ include "common.mongodb.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.mongodb.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.mongodb.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key auth + +Usage: +{{ include "common.mongodb.values.key.auth" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not. Default: false +*/}} +{{- define "common.mongodb.values.key.auth" -}} + {{- if .subchart -}} + mongodb.auth + {{- else -}} + auth + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for architecture + +Usage: +{{ include "common.mongodb.values.architecture" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether MongoDB(R) is used as subchart or not.
Default: false +*/}} +{{- define "common.mongodb.values.architecture" -}} + {{- if .subchart -}} + {{- .context.Values.mongodb.architecture -}} + {{- else -}} + {{- .context.Values.architecture -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_postgresql.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_postgresql.tpl new file mode 100644 index 0000000000..992bcd3908 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_postgresql.tpl @@ -0,0 +1,131 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate PostgreSQL required passwords are not empty. + +Usage: +{{ include "common.validations.values.postgresql.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where postgresql values are stored, e.g: "postgresql-passwords-secret" + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.postgresql.passwords" -}} + {{- $existingSecret := include "common.postgresql.values.existingSecret" . -}} + {{- $enabled := include "common.postgresql.values.enabled" . -}} + {{- $valueKeyPostgresqlPassword := include "common.postgresql.values.key.postgressPassword" . -}} + {{- $valueKeyPostgresqlReplicationEnabled := include "common.postgresql.values.key.replicationPassword" . 
-}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $requiredPostgresqlPassword := dict "valueKey" $valueKeyPostgresqlPassword "secret" .secret "field" "postgresql-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlPassword -}} + + {{- $enabledReplication := include "common.postgresql.values.enabled.replication" . -}} + {{- if (eq $enabledReplication "true") -}} + {{- $requiredPostgresqlReplicationPassword := dict "valueKey" $valueKeyPostgresqlReplicationEnabled "secret" .secret "field" "postgresql-replication-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredPostgresqlReplicationPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to decide whether evaluate global values. + +Usage: +{{ include "common.postgresql.values.use.global" (dict "key" "key-of-global" "context" $) }} +Params: + - key - String - Required. Field to be evaluated within global, e.g: "existingSecret" +*/}} +{{- define "common.postgresql.values.use.global" -}} + {{- if .context.Values.global -}} + {{- if .context.Values.global.postgresql -}} + {{- index .context.Values.global.postgresql .key | quote -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for existingSecret. 
+ +Usage: +{{ include "common.postgresql.values.existingSecret" (dict "context" $) }} +*/}} +{{- define "common.postgresql.values.existingSecret" -}} + {{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "existingSecret" "context" .context) -}} + + {{- if .subchart -}} + {{- default (.context.Values.postgresql.existingSecret | quote) $globalValue -}} + {{- else -}} + {{- default (.context.Values.existingSecret | quote) $globalValue -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled postgresql. + +Usage: +{{ include "common.postgresql.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.postgresql.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.postgresql.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key postgressPassword. + +Usage: +{{ include "common.postgresql.values.key.postgressPassword" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.postgresql.values.key.postgressPassword" -}} + {{- $globalValue := include "common.postgresql.values.use.global" (dict "key" "postgresqlUsername" "context" .context) -}} + + {{- if not $globalValue -}} + {{- if .subchart -}} + postgresql.postgresqlPassword + {{- else -}} + postgresqlPassword + {{- end -}} + {{- else -}} + global.postgresql.postgresqlPassword + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled.replication. + +Usage: +{{ include "common.postgresql.values.enabled.replication" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. 
Default: false +*/}} +{{- define "common.postgresql.values.enabled.replication" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.postgresql.replication.enabled -}} + {{- else -}} + {{- printf "%v" .context.Values.replication.enabled -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for the key replication.password. + +Usage: +{{ include "common.postgresql.values.key.replicationPassword" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether postgresql is used as subchart or not. Default: false +*/}} +{{- define "common.postgresql.values.key.replicationPassword" -}} + {{- if .subchart -}} + postgresql.replication.password + {{- else -}} + replication.password + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_redis.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_redis.tpl new file mode 100644 index 0000000000..3e2a47c039 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_redis.tpl @@ -0,0 +1,72 @@ + +{{/* vim: set filetype=mustache: */}} +{{/* +Validate Redis(TM) required passwords are not empty. + +Usage: +{{ include "common.validations.values.redis.passwords" (dict "secret" "secretName" "subchart" false "context" $) }} +Params: + - secret - String - Required. Name of the secret where redis values are stored, e.g: "redis-passwords-secret" + - subchart - Boolean - Optional. Whether redis is used as subchart or not. Default: false +*/}} +{{- define "common.validations.values.redis.passwords" -}} + {{- $existingSecret := include "common.redis.values.existingSecret" . -}} + {{- $enabled := include "common.redis.values.enabled" . -}} + {{- $valueKeyPrefix := include "common.redis.values.keys.prefix" . 
-}} + {{- $valueKeyRedisPassword := printf "%s%s" $valueKeyPrefix "password" -}} + {{- $valueKeyRedisUsePassword := printf "%s%s" $valueKeyPrefix "usePassword" -}} + + {{- if and (not $existingSecret) (eq $enabled "true") -}} + {{- $requiredPasswords := list -}} + + {{- $usePassword := include "common.utils.getValueFromKey" (dict "key" $valueKeyRedisUsePassword "context" .context) -}} + {{- if eq $usePassword "true" -}} + {{- $requiredRedisPassword := dict "valueKey" $valueKeyRedisPassword "secret" .secret "field" "redis-password" -}} + {{- $requiredPasswords = append $requiredPasswords $requiredRedisPassword -}} + {{- end -}} + + {{- include "common.validations.values.multiple.empty" (dict "required" $requiredPasswords "context" .context) -}} + {{- end -}} +{{- end -}} + +{{/* +Redis Auxiliary function to get the right value for existingSecret. + +Usage: +{{ include "common.redis.values.existingSecret" (dict "context" $) }} +Params: + - subchart - Boolean - Optional. Whether Redis(TM) is used as subchart or not. Default: false +*/}} +{{- define "common.redis.values.existingSecret" -}} + {{- if .subchart -}} + {{- .context.Values.redis.existingSecret | quote -}} + {{- else -}} + {{- .context.Values.existingSecret | quote -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right value for enabled redis. + +Usage: +{{ include "common.redis.values.enabled" (dict "context" $) }} +*/}} +{{- define "common.redis.values.enabled" -}} + {{- if .subchart -}} + {{- printf "%v" .context.Values.redis.enabled -}} + {{- else -}} + {{- printf "%v" (not .context.Values.enabled) -}} + {{- end -}} +{{- end -}} + +{{/* +Auxiliary function to get the right prefix path for the values + +Usage: +{{ include "common.redis.values.key.prefix" (dict "subchart" "true" "context" $) }} +Params: + - subchart - Boolean - Optional. Whether redis is used as subchart or not. 
Default: false +*/}} +{{- define "common.redis.values.keys.prefix" -}} + {{- if .subchart -}}redis.{{- else -}}{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_validations.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_validations.tpl new file mode 100644 index 0000000000..9a814cf40d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/templates/validations/_validations.tpl @@ -0,0 +1,46 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Validate values must not be empty. + +Usage: +{{- $validateValueConf00 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-00") -}} +{{- $validateValueConf01 := (dict "valueKey" "path.to.value" "secret" "secretName" "field" "password-01") -}} +{{ include "common.validations.values.empty" (dict "required" (list $validateValueConf00 $validateValueConf01) "context" $) }} + +Validate value params: + - valueKey - String - Required. The path to the validating value in the values.yaml, e.g: "mysql.password" + - secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret" + - field - String - Optional. Name of the field in the secret data, e.g: "mysql-password" +*/}} +{{- define "common.validations.values.multiple.empty" -}} + {{- range .required -}} + {{- include "common.validations.values.single.empty" (dict "valueKey" .valueKey "secret" .secret "field" .field "context" $.context) -}} + {{- end -}} +{{- end -}} + +{{/* +Validate a value must not be empty. + +Usage: +{{ include "common.validations.value.empty" (dict "valueKey" "mariadb.password" "secret" "secretName" "field" "my-password" "subchart" "subchart" "context" $) }} + +Validate value params: + - valueKey - String - Required. 
The path to the validating value in the values.yaml, e.g: "mysql.password" + - secret - String - Optional. Name of the secret where the validating value is generated/stored, e.g: "mysql-passwords-secret" + - field - String - Optional. Name of the field in the secret data, e.g: "mysql-password" + - subchart - String - Optional - Name of the subchart that the validated password is part of. +*/}} +{{- define "common.validations.values.single.empty" -}} + {{- $value := include "common.utils.getValueFromKey" (dict "key" .valueKey "context" .context) }} + {{- $subchart := ternary "" (printf "%s." .subchart) (empty .subchart) }} + + {{- if not $value -}} + {{- $varname := "my-value" -}} + {{- $getCurrentValue := "" -}} + {{- if and .secret .field -}} + {{- $varname = include "common.utils.fieldToEnvVar" . -}} + {{- $getCurrentValue = printf " To get the current value:\n\n %s\n" (include "common.utils.secret.getvalue" .) -}} + {{- end -}} + {{- printf "\n '%s' must not be empty, please add '--set %s%s=$%s' to the command.%s" .valueKey $subchart .valueKey $varname $getCurrentValue -}} + {{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/values.yaml new file mode 100644 index 0000000000..9ecdc93f58 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/charts/common/values.yaml @@ -0,0 +1,3 @@ +## bitnami/common +## It is required by CI/CD tools and processes. 
+exampleValue: common-chart diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/ci/commonAnnotations.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/ci/commonAnnotations.yaml new file mode 100644 index 0000000000..97e18a4cc0 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/ci/commonAnnotations.yaml @@ -0,0 +1,3 @@ +commonAnnotations: + helm.sh/hook: "\"pre-install, pre-upgrade\"" + helm.sh/hook-weight: "-1" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/ci/default-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/ci/default-values.yaml new file mode 100644 index 0000000000..fc2ba605ad --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/ci/default-values.yaml @@ -0,0 +1 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/ci/shmvolume-disabled-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/ci/shmvolume-disabled-values.yaml new file mode 100644 index 0000000000..347d3b40a8 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/ci/shmvolume-disabled-values.yaml @@ -0,0 +1,2 @@ +shmVolume: + enabled: false diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/files/README.md b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/files/README.md new file mode 100644 index 0000000000..1813a2feaa --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/files/README.md @@ -0,0 +1 @@ +Copy here your postgresql.conf and/or pg_hba.conf files to use it as a config map. 
diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/files/conf.d/README.md b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/files/conf.d/README.md new file mode 100644 index 0000000000..184c1875d5 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/files/conf.d/README.md @@ -0,0 +1,4 @@ +If you don't want to provide the whole configuration file and only specify certain parameters, you can copy your extended `.conf` files here. +These files will be injected as config maps and will add to/overwrite the default configuration using the `include_dir` directive, which allows settings to be loaded from files other than the default `postgresql.conf`. + +More info in the [bitnami-docker-postgresql README](https://github.com/bitnami/bitnami-docker-postgresql#configuration-file). diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/files/docker-entrypoint-initdb.d/README.md b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/files/docker-entrypoint-initdb.d/README.md new file mode 100644 index 0000000000..cba38091e0 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/files/docker-entrypoint-initdb.d/README.md @@ -0,0 +1,3 @@ +You can copy your custom `.sh`, `.sql` or `.sql.gz` files here so they are executed during the first boot of the image. + +More info in the [bitnami-docker-postgresql](https://github.com/bitnami/bitnami-docker-postgresql#initializing-a-new-instance) repository. 
\ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/NOTES.txt b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/NOTES.txt new file mode 100644 index 0000000000..4e98958c13 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/NOTES.txt @@ -0,0 +1,59 @@ +** Please be patient while the chart is being deployed ** + +PostgreSQL can be accessed via port {{ template "postgresql.port" . }} on the following DNS name from within your cluster: + + {{ template "common.names.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local - Read/Write connection +{{- if .Values.replication.enabled }} + {{ template "common.names.fullname" . }}-read.{{ .Release.Namespace }}.svc.cluster.local - Read only connection +{{- end }} + +{{- if not (eq (include "postgresql.username" .) "postgres") }} + +To get the password for "postgres" run: + + export POSTGRES_ADMIN_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-postgres-password}" | base64 --decode) +{{- end }} + +To get the password for "{{ template "postgresql.username" . }}" run: + + export POSTGRES_PASSWORD=$(kubectl get secret --namespace {{ .Release.Namespace }} {{ template "postgresql.secretName" . }} -o jsonpath="{.data.postgresql-password}" | base64 --decode) + +To connect to your database run the following command: + + kubectl run {{ template "common.names.fullname" . }}-client --rm --tty -i --restart='Never' --namespace {{ .Release.Namespace }} --image {{ template "postgresql.image" . }} --env="PGPASSWORD=$POSTGRES_PASSWORD" {{- if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }} + --labels="{{ template "common.names.fullname" . }}-client=true" {{- end }} --command -- psql --host {{ template "common.names.fullname" . 
}} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . }} + +{{ if and (.Values.networkPolicy.enabled) (not .Values.networkPolicy.allowExternal) }} +Note: Since NetworkPolicy is enabled, only pods with label "{{ template "common.names.fullname" . }}-client=true" will be able to connect to this PostgreSQL cluster. +{{- end }} + +To connect to your database from outside the cluster execute the following commands: + +{{- if contains "NodePort" .Values.service.type }} + + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "common.names.fullname" . }}) + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $NODE_IP --port $NODE_PORT -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} + +{{- else if contains "LoadBalancer" .Values.service.type }} + + NOTE: It may take a few minutes for the LoadBalancer IP to be available. + Watch the status with: 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "common.names.fullname" . }}' + + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "common.names.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host $SERVICE_IP --port {{ template "postgresql.port" . 
}} -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} + +{{- else if contains "ClusterIP" .Values.service.type }} + + kubectl port-forward --namespace {{ .Release.Namespace }} svc/{{ template "common.names.fullname" . }} {{ template "postgresql.port" . }}:{{ template "postgresql.port" . }} & + {{ if (include "postgresql.password" . ) }}PGPASSWORD="$POSTGRES_PASSWORD" {{ end }}psql --host 127.0.0.1 -U {{ .Values.postgresqlUsername }} -d {{- if .Values.postgresqlDatabase }} {{ .Values.postgresqlDatabase }}{{- else }} postgres{{- end }} -p {{ template "postgresql.port" . }} + +{{- end }} + +{{- include "postgresql.validateValues" . -}} + +{{- include "common.warnings.rollingTag" .Values.image -}} + +{{- $passwordValidationErrors := include "common.validations.values.postgresql.passwords" (dict "secret" (include "common.names.fullname" .) "context" $) -}} + +{{- include "common.errors.upgrade.passwords.empty" (dict "validationErrors" (list $passwordValidationErrors) "context" $) -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/_helpers.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/_helpers.tpl new file mode 100644 index 0000000000..1f98efe789 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/_helpers.tpl @@ -0,0 +1,337 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Expand the name of the chart. +*/}} +{{- define "postgresql.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 
+*/}} +{{- define "postgresql.primary.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- $fullname := default (printf "%s-%s" .Release.Name $name) .Values.fullnameOverride -}} +{{- if .Values.replication.enabled -}} +{{- printf "%s-%s" $fullname "primary" | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s" $fullname | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper PostgreSQL image name +*/}} +{{- define "postgresql.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper PostgreSQL metrics image name +*/}} +{{- define "postgresql.metrics.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.metrics.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper image name (for the init container volume-permissions image) +*/}} +{{- define "postgresql.volumePermissions.image" -}} +{{ include "common.images.image" (dict "imageRoot" .Values.volumePermissions.image "global" .Values.global) }} +{{- end -}} + +{{/* +Return the proper Docker Image Registry Secret Names +*/}} +{{- define "postgresql.imagePullSecrets" -}} +{{ include "common.images.pullSecrets" (dict "images" (list .Values.image .Values.metrics.image .Values.volumePermissions.image) "global" .Values.global) }} +{{- end -}} + +{{/* +Return PostgreSQL postgres user password +*/}} +{{- define "postgresql.postgres.password" -}} +{{- if .Values.global.postgresql.postgresqlPostgresPassword }} + {{- .Values.global.postgresql.postgresqlPostgresPassword -}} +{{- else if .Values.postgresqlPostgresPassword -}} + {{- .Values.postgresqlPostgresPassword -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL password +*/}} +{{- define "postgresql.password" -}} +{{- if .Values.global.postgresql.postgresqlPassword }} + {{- .Values.global.postgresql.postgresqlPassword -}} +{{- else if 
.Values.postgresqlPassword -}} + {{- .Values.postgresqlPassword -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL replication password +*/}} +{{- define "postgresql.replication.password" -}} +{{- if .Values.global.postgresql.replicationPassword }} + {{- .Values.global.postgresql.replicationPassword -}} +{{- else if .Values.replication.password -}} + {{- .Values.replication.password -}} +{{- else -}} + {{- randAlphaNum 10 -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL username +*/}} +{{- define "postgresql.username" -}} +{{- if .Values.global.postgresql.postgresqlUsername }} + {{- .Values.global.postgresql.postgresqlUsername -}} +{{- else -}} + {{- .Values.postgresqlUsername -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL replication username +*/}} +{{- define "postgresql.replication.username" -}} +{{- if .Values.global.postgresql.replicationUser }} + {{- .Values.global.postgresql.replicationUser -}} +{{- else -}} + {{- .Values.replication.user -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL port +*/}} +{{- define "postgresql.port" -}} +{{- if .Values.global.postgresql.servicePort }} + {{- .Values.global.postgresql.servicePort -}} +{{- else -}} + {{- .Values.service.port -}} +{{- end -}} +{{- end -}} + +{{/* +Return PostgreSQL created database +*/}} +{{- define "postgresql.database" -}} +{{- if .Values.global.postgresql.postgresqlDatabase }} + {{- .Values.global.postgresql.postgresqlDatabase -}} +{{- else if .Values.postgresqlDatabase -}} + {{- .Values.postgresqlDatabase -}} +{{- end -}} +{{- end -}} + +{{/* +Get the password secret. +*/}} +{{- define "postgresql.secretName" -}} +{{- if .Values.global.postgresql.existingSecret }} + {{- printf "%s" (tpl .Values.global.postgresql.existingSecret $) -}} +{{- else if .Values.existingSecret -}} + {{- printf "%s" (tpl .Values.existingSecret $) -}} +{{- else -}} + {{- printf "%s" (include "common.names.fullname" .) 
-}} +{{- end -}} +{{- end -}} + +{{/* +Return true if we should use an existingSecret. +*/}} +{{- define "postgresql.useExistingSecret" -}} +{{- if or .Values.global.postgresql.existingSecret .Values.existingSecret -}} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Return true if a secret object should be created +*/}} +{{- define "postgresql.createSecret" -}} +{{- if not (include "postgresql.useExistingSecret" .) -}} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Get the configuration ConfigMap name. +*/}} +{{- define "postgresql.configurationCM" -}} +{{- if .Values.configurationConfigMap -}} +{{- printf "%s" (tpl .Values.configurationConfigMap $) -}} +{{- else -}} +{{- printf "%s-configuration" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Get the extended configuration ConfigMap name. +*/}} +{{- define "postgresql.extendedConfigurationCM" -}} +{{- if .Values.extendedConfConfigMap -}} +{{- printf "%s" (tpl .Values.extendedConfConfigMap $) -}} +{{- else -}} +{{- printf "%s-extended-configuration" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Return true if a configmap should be mounted with PostgreSQL configuration +*/}} +{{- define "postgresql.mountConfigurationCM" -}} +{{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + {{- true -}} +{{- end -}} +{{- end -}} + +{{/* +Get the initialization scripts ConfigMap name. +*/}} +{{- define "postgresql.initdbScriptsCM" -}} +{{- if .Values.initdbScriptsConfigMap -}} +{{- printf "%s" (tpl .Values.initdbScriptsConfigMap $) -}} +{{- else -}} +{{- printf "%s-init-scripts" (include "common.names.fullname" .) -}} +{{- end -}} +{{- end -}} + +{{/* +Get the initialization scripts Secret name. 
+*/}} +{{- define "postgresql.initdbScriptsSecret" -}} +{{- printf "%s" (tpl .Values.initdbScriptsSecret $) -}} +{{- end -}} + +{{/* +Get the metrics ConfigMap name. +*/}} +{{- define "postgresql.metricsCM" -}} +{{- printf "%s-metrics" (include "common.names.fullname" .) -}} +{{- end -}} + +{{/* +Get the readiness probe command +*/}} +{{- define "postgresql.readinessProbeCommand" -}} +- | +{{- if (include "postgresql.database" .) }} + exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} +{{- else }} + exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }} +{{- end }} +{{- if contains "bitnami/" .Values.image.repository }} + [ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ] +{{- end -}} +{{- end -}} + +{{/* +Compile all warnings into a single message, and call fail. +*/}} +{{- define "postgresql.validateValues" -}} +{{- $messages := list -}} +{{- $messages := append $messages (include "postgresql.validateValues.ldapConfigurationMethod" .) -}} +{{- $messages := append $messages (include "postgresql.validateValues.psp" .) -}} +{{- $messages := append $messages (include "postgresql.validateValues.tls" .) 
-}} +{{- $messages := without $messages "" -}} +{{- $message := join "\n" $messages -}} + +{{- if $message -}} +{{- printf "\nVALUES VALIDATION:\n%s" $message | fail -}} +{{- end -}} +{{- end -}} + +{{/* +Validate values of Postgresql - If ldap.url is used then you don't need the other settings for ldap +*/}} +{{- define "postgresql.validateValues.ldapConfigurationMethod" -}} +{{- if and .Values.ldap.enabled (and (not (empty .Values.ldap.url)) (not (empty .Values.ldap.server))) }} +postgresql: ldap.url, ldap.server + You cannot set both `ldap.url` and `ldap.server` at the same time. + Please provide a unique way to configure LDAP. + More info at https://www.postgresql.org/docs/current/auth-ldap.html +{{- end -}} +{{- end -}} + +{{/* +Validate values of Postgresql - If PSP is enabled RBAC should be enabled too +*/}} +{{- define "postgresql.validateValues.psp" -}} +{{- if and .Values.psp.create (not .Values.rbac.create) }} +postgresql: psp.create, rbac.create + RBAC should be enabled if PSP is enabled in order for PSP to work. + More info at https://kubernetes.io/docs/concepts/policy/pod-security-policy/#authorizing-policies +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for podsecuritypolicy. +*/}} +{{- define "podsecuritypolicy.apiVersion" -}} +{{- if semverCompare "<1.10-0" .Capabilities.KubeVersion.GitVersion -}} +{{- print "extensions/v1beta1" -}} +{{- else -}} +{{- print "policy/v1beta1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for networkpolicy. 
+*/}} +{{- define "postgresql.networkPolicy.apiVersion" -}} +{{- if semverCompare ">=1.4-0, <1.7-0" .Capabilities.KubeVersion.GitVersion -}} +"extensions/v1beta1" +{{- else if semverCompare "^1.7-0" .Capabilities.KubeVersion.GitVersion -}} +"networking.k8s.io/v1" +{{- end -}} +{{- end -}} + +{{/* +Validate values of Postgresql TLS - When TLS is enabled, so must be VolumePermissions +*/}} +{{- define "postgresql.validateValues.tls" -}} +{{- if and .Values.tls.enabled (not .Values.volumePermissions.enabled) }} +postgresql: tls.enabled, volumePermissions.enabled + When TLS is enabled you must enable volumePermissions as well to ensure certificate files have + the right permissions. +{{- end -}} +{{- end -}} + +{{/* +Return the path to the cert file. +*/}} +{{- define "postgresql.tlsCert" -}} +{{- required "Certificate filename is required when TLS is enabled" .Values.tls.certFilename | printf "/opt/bitnami/postgresql/certs/%s" -}} +{{- end -}} + +{{/* +Return the path to the cert key file. +*/}} +{{- define "postgresql.tlsCertKey" -}} +{{- required "Certificate Key filename is required when TLS is enabled" .Values.tls.certKeyFilename | printf "/opt/bitnami/postgresql/certs/%s" -}} +{{- end -}} + +{{/* +Return the path to the CA cert file. +*/}} +{{- define "postgresql.tlsCACert" -}} +{{- printf "/opt/bitnami/postgresql/certs/%s" .Values.tls.certCAFilename -}} +{{- end -}} + +{{/* +Return the path to the CRL file. 
+*/}} +{{- define "postgresql.tlsCRL" -}} +{{- if .Values.tls.crlFilename -}} +{{- printf "/opt/bitnami/postgresql/certs/%s" .Values.tls.crlFilename -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/configmap.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/configmap.yaml new file mode 100644 index 0000000000..3a5ea18ae9 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/configmap.yaml @@ -0,0 +1,31 @@ +{{ if and (or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration) (not .Values.configurationConfigMap) }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-configuration + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: +{{- if (.Files.Glob "files/postgresql.conf") }} +{{ (.Files.Glob "files/postgresql.conf").AsConfig | indent 2 }} +{{- else if .Values.postgresqlConfiguration }} + postgresql.conf: | +{{- range $key, $value := default dict .Values.postgresqlConfiguration }} + {{- if kindIs "string" $value }} + {{ $key | snakecase }} = '{{ $value }}' + {{- else }} + {{ $key | snakecase }} = {{ $value }} + {{- end }} +{{- end }} +{{- end }} +{{- if (.Files.Glob "files/pg_hba.conf") }} +{{ (.Files.Glob "files/pg_hba.conf").AsConfig | indent 2 }} +{{- else if .Values.pgHbaConfiguration }} + pg_hba.conf: | +{{ .Values.pgHbaConfiguration | indent 4 }} +{{- end }} +{{ end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/extended-config-configmap.yaml 
b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/extended-config-configmap.yaml new file mode 100644 index 0000000000..b0dad253b5 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/extended-config-configmap.yaml @@ -0,0 +1,26 @@ +{{- if and (or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf) (not .Values.extendedConfConfigMap)}} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-extended-configuration + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: +{{- with .Files.Glob "files/conf.d/*.conf" }} +{{ .AsConfig | indent 2 }} +{{- end }} +{{ with .Values.postgresqlExtendedConf }} + override.conf: | +{{- range $key, $value := . }} + {{- if kindIs "string" $value }} + {{ $key | snakecase }} = '{{ $value }}' + {{- else }} + {{ $key | snakecase }} = {{ $value }} + {{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/extra-list.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/extra-list.yaml new file mode 100644 index 0000000000..9ac65f9e16 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/extra-list.yaml @@ -0,0 +1,4 @@ +{{- range .Values.extraDeploy }} +--- +{{ include "common.tplvalues.render" (dict "value" . 
"context" $) }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/initialization-configmap.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/initialization-configmap.yaml new file mode 100644 index 0000000000..7796c67a93 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/initialization-configmap.yaml @@ -0,0 +1,25 @@ +{{- if and (or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScripts) (not .Values.initdbScriptsConfigMap) }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "common.names.fullname" . }}-init-scripts + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.sql.gz" }} +binaryData: +{{- range $path, $bytes := . }} + {{ base $path }}: {{ $.Files.Get $path | b64enc | quote }} +{{- end }} +{{- end }} +data: +{{- with .Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql}" }} +{{ .AsConfig | indent 2 }} +{{- end }} +{{- with .Values.initdbScripts }} +{{ toYaml . | indent 2 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/metrics-configmap.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/metrics-configmap.yaml new file mode 100644 index 0000000000..fa539582bb --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/metrics-configmap.yaml @@ -0,0 +1,14 @@ +{{- if and .Values.metrics.enabled .Values.metrics.customMetrics }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "postgresql.metricsCM" . 
}} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +data: + custom-metrics.yaml: {{ toYaml .Values.metrics.customMetrics | quote }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/metrics-svc.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/metrics-svc.yaml new file mode 100644 index 0000000000..af8b67e2ff --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/metrics-svc.yaml @@ -0,0 +1,26 @@ +{{- if .Values.metrics.enabled }} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-metrics + labels: + {{- include "common.labels.standard" . | nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- toYaml .Values.metrics.service.annotations | nindent 4 }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ .Values.metrics.service.type }} + {{- if and (eq .Values.metrics.service.type "LoadBalancer") .Values.metrics.service.loadBalancerIP }} + loadBalancerIP: {{ .Values.metrics.service.loadBalancerIP }} + {{- end }} + ports: + - name: http-metrics + port: 9187 + targetPort: http-metrics + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }} + role: primary +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/networkpolicy.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/networkpolicy.yaml new file mode 100644 index 0000000000..4f2740ea0c --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/networkpolicy.yaml @@ -0,0 +1,39 @@ +{{- if .Values.networkPolicy.enabled }} +kind: NetworkPolicy +apiVersion: {{ template "postgresql.networkPolicy.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + podSelector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + ingress: + # Allow inbound connections + - ports: + - port: {{ template "postgresql.port" . }} + {{- if not .Values.networkPolicy.allowExternal }} + from: + - podSelector: + matchLabels: + {{ template "common.names.fullname" . }}-client: "true" + {{- if .Values.networkPolicy.explicitNamespacesSelector }} + namespaceSelector: +{{ toYaml .Values.networkPolicy.explicitNamespacesSelector | indent 12 }} + {{- end }} + - podSelector: + matchLabels: + {{- include "common.labels.matchLabels" . 
| nindent 14 }} + role: read + {{- end }} + {{- if .Values.metrics.enabled }} + # Allow prometheus scrapes + - ports: + - port: 9187 + {{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/podsecuritypolicy.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/podsecuritypolicy.yaml new file mode 100644 index 0000000000..0c49694fad --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/podsecuritypolicy.yaml @@ -0,0 +1,38 @@ +{{- if .Values.psp.create }} +apiVersion: {{ include "podsecuritypolicy.apiVersion" . }} +kind: PodSecurityPolicy +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + privileged: false + volumes: + - 'configMap' + - 'secret' + - 'persistentVolumeClaim' + - 'emptyDir' + - 'projected' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + rule: 'RunAsAny' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + - min: 1 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/prometheusrule.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/prometheusrule.yaml new file mode 100644 index 0000000000..d0f408c78f --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/prometheusrule.yaml @@ -0,0 +1,23 @@ +{{- if and .Values.metrics.enabled .Values.metrics.prometheusRule.enabled }} +apiVersion: 
monitoring.coreos.com/v1 +kind: PrometheusRule +metadata: + name: {{ template "common.names.fullname" . }} +{{- with .Values.metrics.prometheusRule.namespace }} + namespace: {{ . }} +{{- end }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- with .Values.metrics.prometheusRule.additionalLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} +spec: +{{- with .Values.metrics.prometheusRule.rules }} + groups: + - name: {{ template "postgresql.name" $ }} + rules: {{ tpl (toYaml .) $ | nindent 8 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/role.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/role.yaml new file mode 100644 index 0000000000..017a5716b1 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/role.yaml @@ -0,0 +1,20 @@ +{{- if .Values.rbac.create }} +kind: Role +apiVersion: {{ include "common.capabilities.rbac.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +rules: + {{- if .Values.psp.create }} + - apiGroups: ["extensions"] + resources: ["podsecuritypolicies"] + verbs: ["use"] + resourceNames: + - {{ template "common.names.fullname" . 
}} + {{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/rolebinding.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/rolebinding.yaml new file mode 100644 index 0000000000..189775a15a --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/rolebinding.yaml @@ -0,0 +1,20 @@ +{{- if .Values.rbac.create }} +kind: RoleBinding +apiVersion: {{ include "common.capabilities.rbac.apiVersion" . }} +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +roleRef: + kind: Role + name: {{ template "common.names.fullname" . }} + apiGroup: rbac.authorization.k8s.io +subjects: + - kind: ServiceAccount + name: {{ default (include "common.names.fullname" . ) .Values.serviceAccount.name }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/secrets.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/secrets.yaml new file mode 100644 index 0000000000..d492cd593b --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/secrets.yaml @@ -0,0 +1,24 @@ +{{- if (include "postgresql.createSecret" .) }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +type: Opaque +data: + {{- if not (eq (include "postgresql.username" .) "postgres") }} + postgresql-postgres-password: {{ include "postgresql.postgres.password" . | b64enc | quote }} + {{- end }} + postgresql-password: {{ include "postgresql.password" . | b64enc | quote }} + {{- if .Values.replication.enabled }} + postgresql-replication-password: {{ include "postgresql.replication.password" . | b64enc | quote }} + {{- end }} + {{- if (and .Values.ldap.enabled .Values.ldap.bind_password)}} + postgresql-ldap-password: {{ .Values.ldap.bind_password | b64enc | quote }} + {{- end }} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/serviceaccount.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/serviceaccount.yaml new file mode 100644 index 0000000000..03f0f50e7d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/serviceaccount.yaml @@ -0,0 +1,12 @@ +{{- if and (.Values.serviceAccount.enabled) (not .Values.serviceAccount.name) }} +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + {{- include "common.labels.standard" . | nindent 4 }} + name: {{ template "common.names.fullname" . 
}} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/servicemonitor.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/servicemonitor.yaml new file mode 100644 index 0000000000..587ce85b87 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/servicemonitor.yaml @@ -0,0 +1,33 @@ +{{- if and .Values.metrics.enabled .Values.metrics.serviceMonitor.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ include "common.names.fullname" . }} + {{- if .Values.metrics.serviceMonitor.namespace }} + namespace: {{ .Values.metrics.serviceMonitor.namespace }} + {{- end }} + labels: + {{- include "common.labels.standard" . | nindent 4 }} + {{- if .Values.metrics.serviceMonitor.additionalLabels }} + {{- toYaml .Values.metrics.serviceMonitor.additionalLabels | nindent 4 }} + {{- end }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + +spec: + endpoints: + - port: http-metrics + {{- if .Values.metrics.serviceMonitor.interval }} + interval: {{ .Values.metrics.serviceMonitor.interval }} + {{- end }} + {{- if .Values.metrics.serviceMonitor.scrapeTimeout }} + scrapeTimeout: {{ .Values.metrics.serviceMonitor.scrapeTimeout }} + {{- end }} + namespaceSelector: + matchNames: + - {{ .Release.Namespace }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . 
| nindent 6 }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/statefulset-readreplicas.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/statefulset-readreplicas.yaml new file mode 100644 index 0000000000..b038299bf6 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/statefulset-readreplicas.yaml @@ -0,0 +1,411 @@ +{{- if .Values.replication.enabled }} +{{- $readReplicasResources := coalesce .Values.readReplicas.resources .Values.resources -}} +apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }} +kind: StatefulSet +metadata: + name: "{{ template "common.names.fullname" . }}-read" + labels: {{- include "common.labels.standard" . | nindent 4 }} + app.kubernetes.io/component: read +{{- with .Values.readReplicas.labels }} +{{ toYaml . | indent 4 }} +{{- end }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- with .Values.readReplicas.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + serviceName: {{ template "common.names.fullname" . }}-headless + replicas: {{ .Values.replication.readReplicas }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + role: read + template: + metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . | nindent 8 }} + app.kubernetes.io/component: read + role: read +{{- with .Values.readReplicas.podLabels }} +{{ toYaml . | indent 8 }} +{{- end }} +{{- with .Values.readReplicas.podAnnotations }} + annotations: +{{ toYaml . | indent 8 }} +{{- end }} + spec: + {{- if .Values.schedulerName }} + schedulerName: "{{ .Values.schedulerName }}" + {{- end }} +{{- include "postgresql.imagePullSecrets" . 
| indent 6 }} + {{- if .Values.readReplicas.affinity }} + affinity: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.affinity "context" $) | nindent 8 }} + {{- else }} + affinity: + podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.readReplicas.podAffinityPreset "component" "read" "context" $) | nindent 10 }} + podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.readReplicas.podAntiAffinityPreset "component" "read" "context" $) | nindent 10 }} + nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.readReplicas.nodeAffinityPreset.type "key" .Values.readReplicas.nodeAffinityPreset.key "values" .Values.readReplicas.nodeAffinityPreset.values) | nindent 10 }} + {{- end }} + {{- if .Values.readReplicas.nodeSelector }} + nodeSelector: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.nodeSelector "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.readReplicas.tolerations }} + tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.readReplicas.tolerations "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.terminationGracePeriodSeconds }} + terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} + {{- end }} + {{- if .Values.securityContext.enabled }} + securityContext: {{- omit .Values.securityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.serviceAccount.enabled }} + serviceAccountName: {{ default (include "common.names.fullname" . 
) .Values.serviceAccount.name}} + {{- end }} + {{- if or .Values.readReplicas.extraInitContainers (and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled))) }} + initContainers: + {{- if and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled) .Values.tls.enabled) }} + - name: init-chmod-data + image: {{ template "postgresql.volumePermissions.image" . }} + imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }} + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + command: + - /bin/sh + - -cx + - | + {{- if .Values.persistence.enabled }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown `id -u`:`id -G | cut -d " " -f2` {{ .Values.persistence.mountPath }} + {{- else }} + chown {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} {{ .Values.persistence.mountPath }} + {{- end }} + mkdir -p {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + chmod 700 {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 {{- if not (include "postgresql.mountConfigurationCM" .) 
}} -not -name "conf" {{- end }} -not -name ".snapshot" -not -name "lost+found" | \ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + xargs chown -R `id -u`:`id -G | cut -d " " -f2` + {{- else }} + xargs chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} + {{- end }} + {{- end }} + {{- if and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled }} + chmod -R 777 /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + cp /tmp/certs/* /opt/bitnami/postgresql/certs/ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown -R `id -u`:`id -G | cut -d " " -f2` /opt/bitnami/postgresql/certs/ + {{- else }} + chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} /opt/bitnami/postgresql/certs/ + {{- end }} + chmod 600 {{ template "postgresql.tlsCertKey" . }} + {{- end }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + securityContext: {{- omit .Values.volumePermissions.securityContext "runAsUser" | toYaml | nindent 12 }} + {{- else }} + securityContext: {{- .Values.volumePermissions.securityContext | toYaml | nindent 12 }} + {{- end }} + volumeMounts: + {{ if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + mountPath: /tmp/certs + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + {{- end }} + {{- end }} + {{- if .Values.readReplicas.extraInitContainers }} + {{- include "common.tplvalues.render" ( dict "value" .Values.readReplicas.extraInitContainers "context" $ ) | nindent 8 }} + {{- end }} + {{- end }} + {{- if .Values.readReplicas.priorityClassName }} + priorityClassName: {{ 
.Values.readReplicas.priorityClassName }} + {{- end }} + containers: + - name: {{ template "common.names.fullname" . }} + image: {{ template "postgresql.image" . }} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + {{- if $readReplicasResources }} + resources: {{- toYaml $readReplicasResources | nindent 12 }} + {{- end }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + - name: BITNAMI_DEBUG + value: {{ ternary "true" "false" .Values.image.debug | quote }} + - name: POSTGRESQL_VOLUME_DIR + value: "{{ .Values.persistence.mountPath }}" + - name: POSTGRESQL_PORT_NUMBER + value: "{{ template "postgresql.port" . }}" + {{- if .Values.persistence.mountPath }} + - name: PGDATA + value: {{ .Values.postgresqlDataDir | quote }} + {{- end }} + - name: POSTGRES_REPLICATION_MODE + value: "slave" + - name: POSTGRES_REPLICATION_USER + value: {{ include "postgresql.replication.username" . | quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_REPLICATION_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password" + {{- else }} + - name: POSTGRES_REPLICATION_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-replication-password + {{- end }} + - name: POSTGRES_CLUSTER_APP_NAME + value: {{ .Values.replication.applicationName }} + - name: POSTGRES_MASTER_HOST + value: {{ template "common.names.fullname" . }} + - name: POSTGRES_MASTER_PORT_NUMBER + value: {{ include "postgresql.port" . | quote }} + {{- if not (eq (include "postgresql.username" .) "postgres") }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-postgres-password" + {{- else }} + - name: POSTGRES_POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . 
}} + key: postgresql-postgres-password + {{- end }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-password + {{- end }} + - name: POSTGRESQL_ENABLE_TLS + value: {{ ternary "yes" "no" .Values.tls.enabled | quote }} + {{- if .Values.tls.enabled }} + - name: POSTGRESQL_TLS_PREFER_SERVER_CIPHERS + value: {{ ternary "yes" "no" .Values.tls.preferServerCiphers | quote }} + - name: POSTGRESQL_TLS_CERT_FILE + value: {{ template "postgresql.tlsCert" . }} + - name: POSTGRESQL_TLS_KEY_FILE + value: {{ template "postgresql.tlsCertKey" . }} + {{- if .Values.tls.certCAFilename }} + - name: POSTGRESQL_TLS_CA_FILE + value: {{ template "postgresql.tlsCACert" . }} + {{- end }} + {{- if .Values.tls.crlFilename }} + - name: POSTGRESQL_TLS_CRL_FILE + value: {{ template "postgresql.tlsCRL" . 
}} + {{- end }} + {{- end }} + - name: POSTGRESQL_LOG_HOSTNAME + value: {{ .Values.audit.logHostname | quote }} + - name: POSTGRESQL_LOG_CONNECTIONS + value: {{ .Values.audit.logConnections | quote }} + - name: POSTGRESQL_LOG_DISCONNECTIONS + value: {{ .Values.audit.logDisconnections | quote }} + {{- if .Values.audit.logLinePrefix }} + - name: POSTGRESQL_LOG_LINE_PREFIX + value: {{ .Values.audit.logLinePrefix | quote }} + {{- end }} + {{- if .Values.audit.logTimezone }} + - name: POSTGRESQL_LOG_TIMEZONE + value: {{ .Values.audit.logTimezone | quote }} + {{- end }} + {{- if .Values.audit.pgAuditLog }} + - name: POSTGRESQL_PGAUDIT_LOG + value: {{ .Values.audit.pgAuditLog | quote }} + {{- end }} + - name: POSTGRESQL_PGAUDIT_LOG_CATALOG + value: {{ .Values.audit.pgAuditLogCatalog | quote }} + - name: POSTGRESQL_CLIENT_MIN_MESSAGES + value: {{ .Values.audit.clientMinMessages | quote }} + - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES + value: {{ .Values.postgresqlSharedPreloadLibraries | quote }} + {{- if .Values.postgresqlMaxConnections }} + - name: POSTGRESQL_MAX_CONNECTIONS + value: {{ .Values.postgresqlMaxConnections | quote }} + {{- end }} + {{- if .Values.postgresqlPostgresConnectionLimit }} + - name: POSTGRESQL_POSTGRES_CONNECTION_LIMIT + value: {{ .Values.postgresqlPostgresConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlDbUserConnectionLimit }} + - name: POSTGRESQL_USERNAME_CONNECTION_LIMIT + value: {{ .Values.postgresqlDbUserConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesInterval }} + - name: POSTGRESQL_TCP_KEEPALIVES_INTERVAL + value: {{ .Values.postgresqlTcpKeepalivesInterval | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesIdle }} + - name: POSTGRESQL_TCP_KEEPALIVES_IDLE + value: {{ .Values.postgresqlTcpKeepalivesIdle | quote }} + {{- end }} + {{- if .Values.postgresqlStatementTimeout }} + - name: POSTGRESQL_STATEMENT_TIMEOUT + value: {{ .Values.postgresqlStatementTimeout | quote }} + {{- end }} 
+ {{- if .Values.postgresqlTcpKeepalivesCount }} + - name: POSTGRESQL_TCP_KEEPALIVES_COUNT + value: {{ .Values.postgresqlTcpKeepalivesCount | quote }} + {{- end }} + {{- if .Values.postgresqlPghbaRemoveFilters }} + - name: POSTGRESQL_PGHBA_REMOVE_FILTERS + value: {{ .Values.postgresqlPghbaRemoveFilters | quote }} + {{- end }} + ports: + - name: tcp-postgresql + containerPort: {{ template "postgresql.port" . }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- end }} + initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.livenessProbe.failureThreshold }} + {{- else if .Values.customLivenessProbe }} + livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + exec: + command: + - /bin/sh + - -c + - -e + {{- include "postgresql.readinessProbeCommand" . 
| nindent 16 }} + initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.readinessProbe.failureThreshold }} + {{- else if .Values.customReadinessProbe }} + readinessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customReadinessProbe "context" $) | nindent 12 }} + {{- end }} + volumeMounts: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{ end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + mountPath: /bitnami/postgresql/conf/conf.d/ + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + - name: postgresql-config + mountPath: /bitnami/postgresql/conf + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.readReplicas.extraVolumeMounts }} + {{- toYaml .Values.readReplicas.extraVolumeMounts | nindent 12 }} + {{- end }} +{{- if .Values.readReplicas.sidecars }} +{{- include "common.tplvalues.render" ( dict "value" .Values.readReplicas.sidecars "context" $ ) | nindent 8 }} +{{- end }} + volumes: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + secret: + secretName: {{ template "postgresql.secretName" . 
}} + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap}} + - name: postgresql-config + configMap: + name: {{ template "postgresql.configurationCM" . }} + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + configMap: + name: {{ template "postgresql.extendedConfigurationCM" . }} + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + secret: + secretName: {{ required "A secret containing TLS certificates is required when TLS is enabled" .Values.tls.certificatesSecret }} + - name: postgresql-certificates + emptyDir: {} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + emptyDir: + medium: Memory + sizeLimit: 1Gi + {{- end }} + {{- if or (not .Values.persistence.enabled) (not .Values.readReplicas.persistence.enabled) }} + - name: data + emptyDir: {} + {{- end }} + {{- if .Values.readReplicas.extraVolumes }} + {{- toYaml .Values.readReplicas.extraVolumes | nindent 8 }} + {{- end }} + updateStrategy: + type: {{ .Values.updateStrategy.type }} + {{- if (eq "Recreate" .Values.updateStrategy.type) }} + rollingUpdate: null + {{- end }} +{{- if and .Values.persistence.enabled .Values.readReplicas.persistence.enabled }} + volumeClaimTemplates: + - metadata: + name: data + {{- with .Values.persistence.annotations }} + annotations: + {{- range $key, $value := . }} + {{ $key }}: {{ $value }} + {{- end }} + {{- end }} + spec: + accessModes: + {{- range .Values.persistence.accessModes }} + - {{ . 
| quote }} + {{- end }} + resources: + requests: + storage: {{ .Values.persistence.size | quote }} + {{ include "common.storage.class" (dict "persistence" .Values.persistence "global" .Values.global) }} + + {{- if .Values.persistence.selector }} + selector: {{- include "common.tplvalues.render" (dict "value" .Values.persistence.selector "context" $) | nindent 10 }} + {{- end -}} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/statefulset.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/statefulset.yaml new file mode 100644 index 0000000000..f8163fd99f --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/statefulset.yaml @@ -0,0 +1,609 @@ +apiVersion: {{ include "common.capabilities.statefulset.apiVersion" . }} +kind: StatefulSet +metadata: + name: {{ template "postgresql.primary.fullname" . }} + labels: {{- include "common.labels.standard" . | nindent 4 }} + app.kubernetes.io/component: primary + {{- with .Values.primary.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- with .Values.primary.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + serviceName: {{ template "common.names.fullname" . }}-headless + replicas: 1 + updateStrategy: + type: {{ .Values.updateStrategy.type }} + {{- if (eq "Recreate" .Values.updateStrategy.type) }} + rollingUpdate: null + {{- end }} + selector: + matchLabels: + {{- include "common.labels.matchLabels" . | nindent 6 }} + role: primary + template: + metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . 
| nindent 8 }} + role: primary + app.kubernetes.io/component: primary + {{- with .Values.primary.podLabels }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.primary.podAnnotations }} + annotations: {{- toYaml . | nindent 8 }} + {{- end }} + spec: + {{- if .Values.schedulerName }} + schedulerName: "{{ .Values.schedulerName }}" + {{- end }} +{{- include "postgresql.imagePullSecrets" . | indent 6 }} + {{- if .Values.primary.affinity }} + affinity: {{- include "common.tplvalues.render" (dict "value" .Values.primary.affinity "context" $) | nindent 8 }} + {{- else }} + affinity: + podAffinity: {{- include "common.affinities.pods" (dict "type" .Values.primary.podAffinityPreset "component" "primary" "context" $) | nindent 10 }} + podAntiAffinity: {{- include "common.affinities.pods" (dict "type" .Values.primary.podAntiAffinityPreset "component" "primary" "context" $) | nindent 10 }} + nodeAffinity: {{- include "common.affinities.nodes" (dict "type" .Values.primary.nodeAffinityPreset.type "key" .Values.primary.nodeAffinityPreset.key "values" .Values.primary.nodeAffinityPreset.values) | nindent 10 }} + {{- end }} + {{- if .Values.primary.nodeSelector }} + nodeSelector: {{- include "common.tplvalues.render" (dict "value" .Values.primary.nodeSelector "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.primary.tolerations }} + tolerations: {{- include "common.tplvalues.render" (dict "value" .Values.primary.tolerations "context" $) | nindent 8 }} + {{- end }} + {{- if .Values.terminationGracePeriodSeconds }} + terminationGracePeriodSeconds: {{ .Values.terminationGracePeriodSeconds }} + {{- end }} + {{- if .Values.securityContext.enabled }} + securityContext: {{- omit .Values.securityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.serviceAccount.enabled }} + serviceAccountName: {{ default (include "common.names.fullname" . 
) .Values.serviceAccount.name }} + {{- end }} + {{- if or .Values.primary.extraInitContainers (and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled))) }} + initContainers: + {{- if and .Values.volumePermissions.enabled (or .Values.persistence.enabled (and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled) .Values.tls.enabled) }} + - name: init-chmod-data + image: {{ template "postgresql.volumePermissions.image" . }} + imagePullPolicy: {{ .Values.volumePermissions.image.pullPolicy | quote }} + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + command: + - /bin/sh + - -cx + - | + {{- if .Values.persistence.enabled }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown `id -u`:`id -G | cut -d " " -f2` {{ .Values.persistence.mountPath }} + {{- else }} + chown {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} {{ .Values.persistence.mountPath }} + {{- end }} + mkdir -p {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + chmod 700 {{ .Values.persistence.mountPath }}/data {{- if (include "postgresql.mountConfigurationCM" .) }} {{ .Values.persistence.mountPath }}/conf {{- end }} + find {{ .Values.persistence.mountPath }} -mindepth 1 -maxdepth 1 {{- if not (include "postgresql.mountConfigurationCM" .) 
}} -not -name "conf" {{- end }} -not -name ".snapshot" -not -name "lost+found" | \ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + xargs chown -R `id -u`:`id -G | cut -d " " -f2` + {{- else }} + xargs chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} + {{- end }} + {{- end }} + {{- if and .Values.shmVolume.enabled .Values.shmVolume.chmod.enabled }} + chmod -R 777 /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + cp /tmp/certs/* /opt/bitnami/postgresql/certs/ + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + chown -R `id -u`:`id -G | cut -d " " -f2` /opt/bitnami/postgresql/certs/ + {{- else }} + chown -R {{ .Values.containerSecurityContext.runAsUser }}:{{ .Values.securityContext.fsGroup }} /opt/bitnami/postgresql/certs/ + {{- end }} + chmod 600 {{ template "postgresql.tlsCertKey" . }} + {{- end }} + {{- if eq ( toString ( .Values.volumePermissions.securityContext.runAsUser )) "auto" }} + securityContext: {{- omit .Values.volumePermissions.securityContext "runAsUser" | toYaml | nindent 12 }} + {{- else }} + securityContext: {{- .Values.volumePermissions.securityContext | toYaml | nindent 12 }} + {{- end }} + volumeMounts: + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + mountPath: /tmp/certs + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + {{- end }} + {{- end }} + {{- if .Values.primary.extraInitContainers }} + {{- include "common.tplvalues.render" ( dict "value" .Values.primary.extraInitContainers "context" $ ) | nindent 8 }} + {{- end }} + {{- end }} + {{- if .Values.primary.priorityClassName }} + priorityClassName: {{ 
.Values.primary.priorityClassName }} + {{- end }} + containers: + - name: {{ template "common.names.fullname" . }} + image: {{ template "postgresql.image" . }} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + {{- if .Values.resources }} + resources: {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + - name: BITNAMI_DEBUG + value: {{ ternary "true" "false" .Values.image.debug | quote }} + - name: POSTGRESQL_PORT_NUMBER + value: "{{ template "postgresql.port" . }}" + - name: POSTGRESQL_VOLUME_DIR + value: "{{ .Values.persistence.mountPath }}" + {{- if .Values.postgresqlInitdbArgs }} + - name: POSTGRES_INITDB_ARGS + value: {{ .Values.postgresqlInitdbArgs | quote }} + {{- end }} + {{- if .Values.postgresqlInitdbWalDir }} + - name: POSTGRES_INITDB_WALDIR + value: {{ .Values.postgresqlInitdbWalDir | quote }} + {{- end }} + {{- if .Values.initdbUser }} + - name: POSTGRESQL_INITSCRIPTS_USERNAME + value: {{ .Values.initdbUser }} + {{- end }} + {{- if .Values.initdbPassword }} + - name: POSTGRESQL_INITSCRIPTS_PASSWORD + value: {{ .Values.initdbPassword }} + {{- end }} + {{- if .Values.persistence.mountPath }} + - name: PGDATA + value: {{ .Values.postgresqlDataDir | quote }} + {{- end }} + {{- if .Values.primaryAsStandBy.enabled }} + - name: POSTGRES_MASTER_HOST + value: {{ .Values.primaryAsStandBy.primaryHost }} + - name: POSTGRES_MASTER_PORT_NUMBER + value: {{ .Values.primaryAsStandBy.primaryPort | quote }} + {{- end }} + {{- if or .Values.replication.enabled .Values.primaryAsStandBy.enabled }} + - name: POSTGRES_REPLICATION_MODE + {{- if .Values.primaryAsStandBy.enabled }} + value: "slave" + {{- else }} + value: "master" + {{- end }} + - name: POSTGRES_REPLICATION_USER + value: {{ include "postgresql.replication.username" . 
| quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_REPLICATION_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-replication-password" + {{- else }} + - name: POSTGRES_REPLICATION_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-replication-password + {{- end }} + {{- if not (eq .Values.replication.synchronousCommit "off")}} + - name: POSTGRES_SYNCHRONOUS_COMMIT_MODE + value: {{ .Values.replication.synchronousCommit | quote }} + - name: POSTGRES_NUM_SYNCHRONOUS_REPLICAS + value: {{ .Values.replication.numSynchronousReplicas | quote }} + {{- end }} + - name: POSTGRES_CLUSTER_APP_NAME + value: {{ .Values.replication.applicationName }} + {{- end }} + {{- if not (eq (include "postgresql.username" .) "postgres") }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-postgres-password" + {{- else }} + - name: POSTGRES_POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-postgres-password + {{- end }} + {{- end }} + - name: POSTGRES_USER + value: {{ include "postgresql.username" . | quote }} + {{- if .Values.usePasswordFile }} + - name: POSTGRES_PASSWORD_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: POSTGRES_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-password + {{- end }} + {{- if (include "postgresql.database" .) }} + - name: POSTGRES_DB + value: {{ (include "postgresql.database" .) 
| quote }} + {{- end }} + {{- if .Values.extraEnv }} + {{- include "common.tplvalues.render" (dict "value" .Values.extraEnv "context" $) | nindent 12 }} + {{- end }} + - name: POSTGRESQL_ENABLE_LDAP + value: {{ ternary "yes" "no" .Values.ldap.enabled | quote }} + {{- if .Values.ldap.enabled }} + - name: POSTGRESQL_LDAP_SERVER + value: {{ .Values.ldap.server }} + - name: POSTGRESQL_LDAP_PORT + value: {{ .Values.ldap.port | quote }} + - name: POSTGRESQL_LDAP_SCHEME + value: {{ .Values.ldap.scheme }} + {{- if .Values.ldap.tls }} + - name: POSTGRESQL_LDAP_TLS + value: "1" + {{- end }} + - name: POSTGRESQL_LDAP_PREFIX + value: {{ .Values.ldap.prefix | quote }} + - name: POSTGRESQL_LDAP_SUFFIX + value: {{ .Values.ldap.suffix | quote }} + - name: POSTGRESQL_LDAP_BASE_DN + value: {{ .Values.ldap.baseDN }} + - name: POSTGRESQL_LDAP_BIND_DN + value: {{ .Values.ldap.bindDN }} + {{- if (not (empty .Values.ldap.bind_password)) }} + - name: POSTGRESQL_LDAP_BIND_PASSWORD + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-ldap-password + {{- end}} + - name: POSTGRESQL_LDAP_SEARCH_ATTR + value: {{ .Values.ldap.search_attr }} + - name: POSTGRESQL_LDAP_SEARCH_FILTER + value: {{ .Values.ldap.search_filter }} + - name: POSTGRESQL_LDAP_URL + value: {{ .Values.ldap.url }} + {{- end}} + - name: POSTGRESQL_ENABLE_TLS + value: {{ ternary "yes" "no" .Values.tls.enabled | quote }} + {{- if .Values.tls.enabled }} + - name: POSTGRESQL_TLS_PREFER_SERVER_CIPHERS + value: {{ ternary "yes" "no" .Values.tls.preferServerCiphers | quote }} + - name: POSTGRESQL_TLS_CERT_FILE + value: {{ template "postgresql.tlsCert" . }} + - name: POSTGRESQL_TLS_KEY_FILE + value: {{ template "postgresql.tlsCertKey" . }} + {{- if .Values.tls.certCAFilename }} + - name: POSTGRESQL_TLS_CA_FILE + value: {{ template "postgresql.tlsCACert" . }} + {{- end }} + {{- if .Values.tls.crlFilename }} + - name: POSTGRESQL_TLS_CRL_FILE + value: {{ template "postgresql.tlsCRL" . 
}} + {{- end }} + {{- end }} + - name: POSTGRESQL_LOG_HOSTNAME + value: {{ .Values.audit.logHostname | quote }} + - name: POSTGRESQL_LOG_CONNECTIONS + value: {{ .Values.audit.logConnections | quote }} + - name: POSTGRESQL_LOG_DISCONNECTIONS + value: {{ .Values.audit.logDisconnections | quote }} + {{- if .Values.audit.logLinePrefix }} + - name: POSTGRESQL_LOG_LINE_PREFIX + value: {{ .Values.audit.logLinePrefix | quote }} + {{- end }} + {{- if .Values.audit.logTimezone }} + - name: POSTGRESQL_LOG_TIMEZONE + value: {{ .Values.audit.logTimezone | quote }} + {{- end }} + {{- if .Values.audit.pgAuditLog }} + - name: POSTGRESQL_PGAUDIT_LOG + value: {{ .Values.audit.pgAuditLog | quote }} + {{- end }} + - name: POSTGRESQL_PGAUDIT_LOG_CATALOG + value: {{ .Values.audit.pgAuditLogCatalog | quote }} + - name: POSTGRESQL_CLIENT_MIN_MESSAGES + value: {{ .Values.audit.clientMinMessages | quote }} + - name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES + value: {{ .Values.postgresqlSharedPreloadLibraries | quote }} + {{- if .Values.postgresqlMaxConnections }} + - name: POSTGRESQL_MAX_CONNECTIONS + value: {{ .Values.postgresqlMaxConnections | quote }} + {{- end }} + {{- if .Values.postgresqlPostgresConnectionLimit }} + - name: POSTGRESQL_POSTGRES_CONNECTION_LIMIT + value: {{ .Values.postgresqlPostgresConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlDbUserConnectionLimit }} + - name: POSTGRESQL_USERNAME_CONNECTION_LIMIT + value: {{ .Values.postgresqlDbUserConnectionLimit | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesInterval }} + - name: POSTGRESQL_TCP_KEEPALIVES_INTERVAL + value: {{ .Values.postgresqlTcpKeepalivesInterval | quote }} + {{- end }} + {{- if .Values.postgresqlTcpKeepalivesIdle }} + - name: POSTGRESQL_TCP_KEEPALIVES_IDLE + value: {{ .Values.postgresqlTcpKeepalivesIdle | quote }} + {{- end }} + {{- if .Values.postgresqlStatementTimeout }} + - name: POSTGRESQL_STATEMENT_TIMEOUT + value: {{ .Values.postgresqlStatementTimeout | quote }} + {{- end }} 
+ {{- if .Values.postgresqlTcpKeepalivesCount }} + - name: POSTGRESQL_TCP_KEEPALIVES_COUNT + value: {{ .Values.postgresqlTcpKeepalivesCount | quote }} + {{- end }} + {{- if .Values.postgresqlPghbaRemoveFilters }} + - name: POSTGRESQL_PGHBA_REMOVE_FILTERS + value: {{ .Values.postgresqlPghbaRemoveFilters | quote }} + {{- end }} + {{- if .Values.extraEnvVarsCM }} + envFrom: + - configMapRef: + name: {{ tpl .Values.extraEnvVarsCM . }} + {{- end }} + ports: + - name: tcp-postgresql + containerPort: {{ template "postgresql.port" . }} + {{- if .Values.startupProbe.enabled }} + startupProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- end }} + initialDelaySeconds: {{ .Values.startupProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.startupProbe.periodSeconds }} + timeoutSeconds: {{ .Values.startupProbe.timeoutSeconds }} + successThreshold: {{ .Values.startupProbe.successThreshold }} + failureThreshold: {{ .Values.startupProbe.failureThreshold }} + {{- else if .Values.customStartupProbe }} + startupProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customStartupProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + exec: + command: + - /bin/sh + - -c + {{- if (include "postgresql.database" .) 
}} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} -d "dbname={{ include "postgresql.database" . }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}{{- end }}" -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- else }} + - exec pg_isready -U {{ include "postgresql.username" . | quote }} {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} -d "sslcert={{ include "postgresql.tlsCert" . }} sslkey={{ include "postgresql.tlsCertKey" . }}"{{- end }} -h 127.0.0.1 -p {{ template "postgresql.port" . }} + {{- end }} + initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.livenessProbe.failureThreshold }} + {{- else if .Values.customLivenessProbe }} + livenessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customLivenessProbe "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + exec: + command: + - /bin/sh + - -c + - -e + {{- include "postgresql.readinessProbeCommand" . 
| nindent 16 }} + initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.readinessProbe.failureThreshold }} + {{- else if .Values.customReadinessProbe }} + readinessProbe: {{- include "common.tplvalues.render" (dict "value" .Values.customReadinessProbe "context" $) | nindent 12 }} + {{- end }} + volumeMounts: + {{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }} + - name: custom-init-scripts + mountPath: /docker-entrypoint-initdb.d/ + {{- end }} + {{- if .Values.initdbScriptsSecret }} + - name: custom-init-scripts-secret + mountPath: /docker-entrypoint-initdb.d/secret + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + mountPath: /bitnami/postgresql/conf/conf.d/ + {{- end }} + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + mountPath: /dev/shm + {{- end }} + {{- if .Values.persistence.enabled }} + - name: data + mountPath: {{ .Values.persistence.mountPath }} + subPath: {{ .Values.persistence.subPath }} + {{- end }} + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap }} + - name: postgresql-config + mountPath: /bitnami/postgresql/conf + {{- end }} + {{- if .Values.primary.extraVolumeMounts }} + {{- toYaml .Values.primary.extraVolumeMounts | nindent 12 }} + {{- end 
}} +{{- if .Values.primary.sidecars }} +{{- include "common.tplvalues.render" ( dict "value" .Values.primary.sidecars "context" $ ) | nindent 8 }} +{{- end }} +{{- if .Values.metrics.enabled }} + - name: metrics + image: {{ template "postgresql.metrics.image" . }} + imagePullPolicy: {{ .Values.metrics.image.pullPolicy | quote }} + {{- if .Values.metrics.securityContext.enabled }} + securityContext: {{- omit .Values.metrics.securityContext "enabled" | toYaml | nindent 12 }} + {{- end }} + env: + {{- $database := required "In order to enable metrics you need to specify a database (.Values.postgresqlDatabase or .Values.global.postgresql.postgresqlDatabase)" (include "postgresql.database" .) }} + {{- $sslmode := ternary "require" "disable" .Values.tls.enabled }} + {{- if and .Values.tls.enabled .Values.tls.certCAFilename }} + - name: DATA_SOURCE_NAME + value: {{ printf "host=127.0.0.1 port=%d user=%s sslmode=%s sslcert=%s sslkey=%s" (int (include "postgresql.port" .)) (include "postgresql.username" .) $sslmode (include "postgresql.tlsCert" .) (include "postgresql.tlsCertKey" .) }} + {{- else }} + - name: DATA_SOURCE_URI + value: {{ printf "127.0.0.1:%d/%s?sslmode=%s" (int (include "postgresql.port" .)) $database $sslmode }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: DATA_SOURCE_PASS_FILE + value: "/opt/bitnami/postgresql/secrets/postgresql-password" + {{- else }} + - name: DATA_SOURCE_PASS + valueFrom: + secretKeyRef: + name: {{ template "postgresql.secretName" . }} + key: postgresql-password + {{- end }} + - name: DATA_SOURCE_USER + value: {{ template "postgresql.username" . 
}} + {{- if .Values.metrics.extraEnvVars }} + {{- include "common.tplvalues.render" (dict "value" .Values.metrics.extraEnvVars "context" $) | nindent 12 }} + {{- end }} + {{- if .Values.livenessProbe.enabled }} + livenessProbe: + httpGet: + path: / + port: http-metrics + initialDelaySeconds: {{ .Values.metrics.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.metrics.livenessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.metrics.livenessProbe.timeoutSeconds }} + successThreshold: {{ .Values.metrics.livenessProbe.successThreshold }} + failureThreshold: {{ .Values.metrics.livenessProbe.failureThreshold }} + {{- end }} + {{- if .Values.readinessProbe.enabled }} + readinessProbe: + httpGet: + path: / + port: http-metrics + initialDelaySeconds: {{ .Values.metrics.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.metrics.readinessProbe.periodSeconds }} + timeoutSeconds: {{ .Values.metrics.readinessProbe.timeoutSeconds }} + successThreshold: {{ .Values.metrics.readinessProbe.successThreshold }} + failureThreshold: {{ .Values.metrics.readinessProbe.failureThreshold }} + {{- end }} + volumeMounts: + {{- if .Values.usePasswordFile }} + - name: postgresql-password + mountPath: /opt/bitnami/postgresql/secrets/ + {{- end }} + {{- if .Values.tls.enabled }} + - name: postgresql-certificates + mountPath: /opt/bitnami/postgresql/certs + readOnly: true + {{- end }} + {{- if .Values.metrics.customMetrics }} + - name: custom-metrics + mountPath: /conf + readOnly: true + args: ["--extend.query-path", "/conf/custom-metrics.yaml"] + {{- end }} + ports: + - name: http-metrics + containerPort: 9187 + {{- if .Values.metrics.resources }} + resources: {{- toYaml .Values.metrics.resources | nindent 12 }} + {{- end }} +{{- end }} + volumes: + {{- if or (.Files.Glob "files/postgresql.conf") (.Files.Glob "files/pg_hba.conf") .Values.postgresqlConfiguration .Values.pgHbaConfiguration .Values.configurationConfigMap}} + - name: postgresql-config + configMap: + 
name: {{ template "postgresql.configurationCM" . }} + {{- end }} + {{- if or (.Files.Glob "files/conf.d/*.conf") .Values.postgresqlExtendedConf .Values.extendedConfConfigMap }} + - name: postgresql-extended-config + configMap: + name: {{ template "postgresql.extendedConfigurationCM" . }} + {{- end }} + {{- if .Values.usePasswordFile }} + - name: postgresql-password + secret: + secretName: {{ template "postgresql.secretName" . }} + {{- end }} + {{- if or (.Files.Glob "files/docker-entrypoint-initdb.d/*.{sh,sql,sql.gz}") .Values.initdbScriptsConfigMap .Values.initdbScripts }} + - name: custom-init-scripts + configMap: + name: {{ template "postgresql.initdbScriptsCM" . }} + {{- end }} + {{- if .Values.initdbScriptsSecret }} + - name: custom-init-scripts-secret + secret: + secretName: {{ template "postgresql.initdbScriptsSecret" . }} + {{- end }} + {{- if .Values.tls.enabled }} + - name: raw-certificates + secret: + secretName: {{ required "A secret containing TLS certificates is required when TLS is enabled" .Values.tls.certificatesSecret }} + - name: postgresql-certificates + emptyDir: {} + {{- end }} + {{- if .Values.primary.extraVolumes }} + {{- toYaml .Values.primary.extraVolumes | nindent 8 }} + {{- end }} + {{- if and .Values.metrics.enabled .Values.metrics.customMetrics }} + - name: custom-metrics + configMap: + name: {{ template "postgresql.metricsCM" . }} + {{- end }} + {{- if .Values.shmVolume.enabled }} + - name: dshm + emptyDir: + medium: Memory + sizeLimit: 1Gi + {{- end }} +{{- if and .Values.persistence.enabled .Values.persistence.existingClaim }} + - name: data + persistentVolumeClaim: +{{- with .Values.persistence.existingClaim }} + claimName: {{ tpl . 
$ }} +{{- end }} +{{- else if not .Values.persistence.enabled }} + - name: data + emptyDir: {} +{{- else if and .Values.persistence.enabled (not .Values.persistence.existingClaim) }} + volumeClaimTemplates: + - metadata: + name: data + {{- with .Values.persistence.annotations }} + annotations: + {{- range $key, $value := . }} + {{ $key }}: {{ $value }} + {{- end }} + {{- end }} + spec: + accessModes: + {{- range .Values.persistence.accessModes }} + - {{ . | quote }} + {{- end }} + resources: + requests: + storage: {{ .Values.persistence.size | quote }} + {{ include "common.storage.class" (dict "persistence" .Values.persistence "global" .Values.global) }} + {{- if .Values.persistence.selector }} + selector: {{- include "common.tplvalues.render" (dict "value" .Values.persistence.selector "context" $) | nindent 10 }} + {{- end -}} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/svc-headless.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/svc-headless.yaml new file mode 100644 index 0000000000..6f5f3b9ee4 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/svc-headless.yaml @@ -0,0 +1,28 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-headless + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + {{- if .Values.commonAnnotations }} + annotations: {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + # Use this annotation in addition to the actual publishNotReadyAddresses + # field below because the annotation will stop being respected soon but the + # field is broken in some versions of Kubernetes: + # https://github.com/kubernetes/kubernetes/issues/58662 + service.alpha.kubernetes.io/tolerate-unready-endpoints: "true" + namespace: {{ .Release.Namespace }} +spec: + type: ClusterIP + clusterIP: None + # We want all pods in the StatefulSet to have their addresses published for + # the sake of the other Postgresql pods even before they're ready, since they + # have to be able to talk to each other in order to become ready. + publishNotReadyAddresses: true + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/svc-read.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/svc-read.yaml new file mode 100644 index 0000000000..56195ea1e6 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/svc-read.yaml @@ -0,0 +1,43 @@ +{{- if .Values.replication.enabled }} +{{- $serviceAnnotations := coalesce .Values.readReplicas.service.annotations .Values.service.annotations -}} +{{- $serviceType := coalesce .Values.readReplicas.service.type .Values.service.type -}} +{{- $serviceLoadBalancerIP := coalesce .Values.readReplicas.service.loadBalancerIP .Values.service.loadBalancerIP -}} +{{- $serviceLoadBalancerSourceRanges := coalesce .Values.readReplicas.service.loadBalancerSourceRanges .Values.service.loadBalancerSourceRanges -}} +{{- $serviceClusterIP := coalesce .Values.readReplicas.service.clusterIP .Values.service.clusterIP -}} +{{- $serviceNodePort := coalesce .Values.readReplicas.service.nodePort .Values.service.nodePort -}} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }}-read + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- if $serviceAnnotations }} + {{- include "common.tplvalues.render" (dict "value" $serviceAnnotations "context" $) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ $serviceType }} + {{- if and $serviceLoadBalancerIP (eq $serviceType "LoadBalancer") }} + loadBalancerIP: {{ $serviceLoadBalancerIP }} + {{- end }} + {{- if and (eq $serviceType "LoadBalancer") $serviceLoadBalancerSourceRanges }} + loadBalancerSourceRanges: {{- include "common.tplvalues.render" (dict "value" $serviceLoadBalancerSourceRanges "context" $) | nindent 4 }} + {{- end }} + {{- if and (eq $serviceType "ClusterIP") $serviceClusterIP }} + clusterIP: {{ $serviceClusterIP }} + {{- end }} + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + {{- if $serviceNodePort }} + nodePort: {{ $serviceNodePort }} + {{- end }} + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }} + role: read +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/svc.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/svc.yaml new file mode 100644 index 0000000000..a29431b6a4 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/templates/svc.yaml @@ -0,0 +1,41 @@ +{{- $serviceAnnotations := coalesce .Values.primary.service.annotations .Values.service.annotations -}} +{{- $serviceType := coalesce .Values.primary.service.type .Values.service.type -}} +{{- $serviceLoadBalancerIP := coalesce .Values.primary.service.loadBalancerIP .Values.service.loadBalancerIP -}} +{{- $serviceLoadBalancerSourceRanges := coalesce .Values.primary.service.loadBalancerSourceRanges .Values.service.loadBalancerSourceRanges -}} +{{- $serviceClusterIP := coalesce .Values.primary.service.clusterIP .Values.service.clusterIP -}} +{{- $serviceNodePort := coalesce .Values.primary.service.nodePort .Values.service.nodePort -}} +apiVersion: v1 +kind: Service +metadata: + name: {{ template "common.names.fullname" . }} + labels: + {{- include "common.labels.standard" . 
| nindent 4 }} + annotations: + {{- if .Values.commonAnnotations }} + {{- include "common.tplvalues.render" ( dict "value" .Values.commonAnnotations "context" $ ) | nindent 4 }} + {{- end }} + {{- if $serviceAnnotations }} + {{- include "common.tplvalues.render" (dict "value" $serviceAnnotations "context" $) | nindent 4 }} + {{- end }} + namespace: {{ .Release.Namespace }} +spec: + type: {{ $serviceType }} + {{- if and $serviceLoadBalancerIP (eq $serviceType "LoadBalancer") }} + loadBalancerIP: {{ $serviceLoadBalancerIP }} + {{- end }} + {{- if and (eq $serviceType "LoadBalancer") $serviceLoadBalancerSourceRanges }} + loadBalancerSourceRanges: {{- include "common.tplvalues.render" (dict "value" $serviceLoadBalancerSourceRanges "context" $) | nindent 4 }} + {{- end }} + {{- if and (eq $serviceType "ClusterIP") $serviceClusterIP }} + clusterIP: {{ $serviceClusterIP }} + {{- end }} + ports: + - name: tcp-postgresql + port: {{ template "postgresql.port" . }} + targetPort: tcp-postgresql + {{- if $serviceNodePort }} + nodePort: {{ $serviceNodePort }} + {{- end }} + selector: + {{- include "common.labels.matchLabels" . 
| nindent 4 }} + role: primary diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/values.schema.json b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/values.schema.json new file mode 100644 index 0000000000..66a2a9dd06 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/values.schema.json @@ -0,0 +1,103 @@ +{ + "$schema": "http://json-schema.org/schema#", + "type": "object", + "properties": { + "postgresqlUsername": { + "type": "string", + "title": "Admin user", + "form": true + }, + "postgresqlPassword": { + "type": "string", + "title": "Password", + "form": true + }, + "persistence": { + "type": "object", + "properties": { + "size": { + "type": "string", + "title": "Persistent Volume Size", + "form": true, + "render": "slider", + "sliderMin": 1, + "sliderMax": 100, + "sliderUnit": "Gi" + } + } + }, + "resources": { + "type": "object", + "title": "Required Resources", + "description": "Configure resource requests", + "form": true, + "properties": { + "requests": { + "type": "object", + "properties": { + "memory": { + "type": "string", + "form": true, + "render": "slider", + "title": "Memory Request", + "sliderMin": 10, + "sliderMax": 2048, + "sliderUnit": "Mi" + }, + "cpu": { + "type": "string", + "form": true, + "render": "slider", + "title": "CPU Request", + "sliderMin": 10, + "sliderMax": 2000, + "sliderUnit": "m" + } + } + } + } + }, + "replication": { + "type": "object", + "form": true, + "title": "Replication Details", + "properties": { + "enabled": { + "type": "boolean", + "title": "Enable Replication", + "form": true + }, + "readReplicas": { + "type": "integer", + "title": "read Replicas", + "form": true, + "hidden": { + "value": false, + "path": "replication/enabled" + } + } + } + }, + "volumePermissions": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "form": true, + "title": "Enable Init Containers", + 
"description": "Change the owner of the persistent volume mountpoint to RunAsUser:fsGroup" + } + } + }, + "metrics": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "title": "Configure metrics exporter", + "form": true + } + } + } + } +} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/values.yaml new file mode 100644 index 0000000000..82ce092344 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/charts/postgresql/values.yaml @@ -0,0 +1,824 @@ +## Global Docker image parameters +## Please note that this will override the image parameters, including dependencies, configured to use the global value +## Currently available global Docker image parameters: imageRegistry and imagePullSecrets +## +global: + postgresql: {} +# imageRegistry: myRegistryName +# imagePullSecrets: +# - myRegistryKeySecretName +# storageClass: myStorageClass + +## Bitnami PostgreSQL image version +## ref: https://hub.docker.com/r/bitnami/postgresql/tags/ +## +image: + registry: docker.io + repository: bitnami/postgresql + tag: 11.11.0-debian-10-r71 + ## Specify an imagePullPolicy + ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' + ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images + ## + pullPolicy: IfNotPresent + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace.
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + + ## Set to true if you would like to see extra information in the logs + ## It turns on BASH and/or NAMI debugging in the image + ## + debug: false + +## String to partially override common.names.fullname template (will maintain the release name) +## +# nameOverride: + +## String to fully override common.names.fullname template +## +# fullnameOverride: + +## +## Init containers parameters: +## volumePermissions: Change the owner of the persistent volume mountpoint to RunAsUser:fsGroup +## +volumePermissions: + enabled: false + image: + registry: docker.io + repository: bitnami/bitnami-shell + tag: "10" + ## Specify an imagePullPolicy + ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' + ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images + ## + pullPolicy: Always + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + ## Init container Security Context + ## Note: the chown of the data folder is done to securityContext.runAsUser + ## and not the below volumePermissions.securityContext.runAsUser + ## When runAsUser is set to the special value "auto", the init container will try to chown the + ## data folder to an auto-determined user and group, using the commands: `id -u`:`id -G | cut -d" " -f2` + ## "auto" is especially useful for OpenShift, which has SCCs with dynamic user IDs (and 0 is not allowed). + ## You may want to use volumePermissions.securityContext.runAsUser="auto" in combination with + ## pod securityContext.enabled=false and shmVolume.chmod.enabled=false + ## + securityContext: + runAsUser: 0 + +## Use an alternate scheduler, e.g. "stork".
+## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ +## +# schedulerName: + +## Pod Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ +## +securityContext: + enabled: true + fsGroup: 1001 + +## Container Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ +## +containerSecurityContext: + enabled: true + runAsUser: 1001 + +## Pod Service Account +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ +## +serviceAccount: + enabled: false + ## Name of an already existing service account. Setting this value disables the automatic service account creation. + # name: + +## Pod Security Policy +## ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ +## +psp: + create: false + +## Creates role for ServiceAccount +## Required for PSP +## +rbac: + create: false + +replication: + enabled: false + user: repl_user + password: repl_password + readReplicas: 1 + ## Set synchronous commit mode: on, off, remote_apply, remote_write and local + ## ref: https://www.postgresql.org/docs/9.6/runtime-config-wal.html#GUC-WAL-LEVEL + synchronousCommit: 'off' + ## From the number of `readReplicas` defined above, set the number of those that will have synchronous replication + ## NOTE: It cannot be > readReplicas + numSynchronousReplicas: 0 + ## Replication Cluster application name. Useful for defining multiple replication policies + ## + applicationName: my_application + +## PostgreSQL admin password (used when `postgresqlUsername` is not `postgres`) +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-user-on-first-run (see note!) 
+# postgresqlPostgresPassword: + +## PostgreSQL user (has superuser privileges if username is `postgres`) +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run +## +postgresqlUsername: postgres + +## PostgreSQL password +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#setting-the-root-password-on-first-run +## +# postgresqlPassword: + +## PostgreSQL password using existing secret +## existingSecret: secret +## + +## Mount PostgreSQL secret as a file instead of passing environment variable +# usePasswordFile: false + +## Create a database +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md#creating-a-database-on-first-run +## +# postgresqlDatabase: + +## PostgreSQL data dir +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +postgresqlDataDir: /bitnami/postgresql/data + +## An array to add extra environment variables +## For example: +## extraEnv: +## - name: FOO +## value: "bar" +## +# extraEnv: +extraEnv: [] + +## Name of a ConfigMap containing extra env vars +## +# extraEnvVarsCM: + +## Specify extra initdb args +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +# postgresqlInitdbArgs: + +## Specify a custom location for the PostgreSQL transaction log +## ref: https://github.com/bitnami/bitnami-docker-postgresql/blob/master/README.md +## +# postgresqlInitdbWalDir: + +## PostgreSQL configuration +## Specify runtime configuration parameters as a dict, using camelCase, e.g. 
+## {"sharedBuffers": "500MB"} +## Alternatively, you can put your postgresql.conf under the files/ directory +## ref: https://www.postgresql.org/docs/current/static/runtime-config.html +## +# postgresqlConfiguration: + +## PostgreSQL extended configuration +## As above, but _appended_ to the main configuration +## Alternatively, you can put your *.conf under the files/conf.d/ directory +## https://github.com/bitnami/bitnami-docker-postgresql#allow-settings-to-be-loaded-from-files-other-than-the-default-postgresqlconf +## +# postgresqlExtendedConf: + +## Configure current cluster's primary server to be the standby server in other cluster. +## This will allow cross cluster replication and provide cross cluster high availability. +## You will need to configure pgHbaConfiguration if you want to enable this feature with local cluster replication enabled. +## +primaryAsStandBy: + enabled: false + # primaryHost: + # primaryPort: + +## PostgreSQL client authentication configuration +## Specify content for pg_hba.conf +## Default: do not create pg_hba.conf +## Alternatively, you can put your pg_hba.conf under the files/ directory +# pgHbaConfiguration: |- +# local all all trust +# host all all localhost trust +# host mydatabase mysuser 192.168.0.0/24 md5 + +## ConfigMap with PostgreSQL configuration +## NOTE: This will override postgresqlConfiguration and pgHbaConfiguration +# configurationConfigMap: + +## ConfigMap with PostgreSQL extended configuration +# extendedConfConfigMap: + +## initdb scripts +## Specify dictionary of scripts to be run at first boot +## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory +## +# initdbScripts: +# my_init_script.sh: | +# #!/bin/sh +# echo "Do something." 
+ +## ConfigMap with scripts to be run at first boot +## NOTE: This will override initdbScripts +# initdbScriptsConfigMap: + +## Secret with scripts to be run at first boot (in case it contains sensitive information) +## NOTE: This can work along initdbScripts or initdbScriptsConfigMap +# initdbScriptsSecret: + +## Specify the PostgreSQL username and password to execute the initdb scripts +# initdbUser: +# initdbPassword: + +## Audit settings +## https://github.com/bitnami/bitnami-docker-postgresql#auditing +## +audit: + ## Log client hostnames + ## + logHostname: false + ## Log connections to the server + ## + logConnections: false + ## Log disconnections + ## + logDisconnections: false + ## Operation to audit using pgAudit (default if not set) + ## + pgAuditLog: "" + ## Log catalog using pgAudit + ## + pgAuditLogCatalog: "off" + ## Log level for clients + ## + clientMinMessages: error + ## Template for log line prefix (default if not set) + ## + logLinePrefix: "" + ## Log timezone + ## + logTimezone: "" + +## Shared preload libraries +## +postgresqlSharedPreloadLibraries: "pgaudit" + +## Maximum total connections +## +postgresqlMaxConnections: + +## Maximum connections for the postgres user +## +postgresqlPostgresConnectionLimit: + +## Maximum connections for the created user +## +postgresqlDbUserConnectionLimit: + +## TCP keepalives interval +## +postgresqlTcpKeepalivesInterval: + +## TCP keepalives idle +## +postgresqlTcpKeepalivesIdle: + +## TCP keepalives count +## +postgresqlTcpKeepalivesCount: + +## Statement timeout +## +postgresqlStatementTimeout: + +## Remove pg_hba.conf lines with the following comma-separated patterns +## (cannot be used with custom pg_hba.conf) +## +postgresqlPghbaRemoveFilters: + +## Optional duration in seconds the pod needs to terminate gracefully. 
+## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods +## +# terminationGracePeriodSeconds: 30 + +## LDAP configuration +## +ldap: + enabled: false + url: '' + server: '' + port: '' + prefix: '' + suffix: '' + baseDN: '' + bindDN: '' + bind_password: + search_attr: '' + search_filter: '' + scheme: '' + tls: {} + +## PostgreSQL service configuration +## +service: + ## PostgreSQL service type + ## + type: ClusterIP + # clusterIP: None + port: 5432 + + ## Specify the nodePort value for the LoadBalancer and NodePort service types. + ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport + ## + # nodePort: + + ## Provide any additional annotations which may be required. Evaluated as a template. + ## + annotations: {} + ## Set the LoadBalancer service type to internal only. + ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer + ## + # loadBalancerIP: + ## Load Balancer sources. Evaluated as a template. + ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service + ## + # loadBalancerSourceRanges: + # - 10.10.10.0/24 + +## Start primary and read(s) pod(s) without limitations on shm memory. +## By default docker and containerd (and possibly other container runtimes) +## limit `/dev/shm` to `64M` (see e.g. the +## [docker issue](https://github.com/docker-library/postgres/issues/416) and the +## [containerd issue](https://github.com/containerd/containerd/issues/3654)), +## which may not be enough if PostgreSQL uses parallel workers heavily. +## +shmVolume: + ## Set `shmVolume.enabled` to `true` to mount a new tmpfs volume to remove + ## this limitation. + ## + enabled: true + ## Set to `true` to `chmod 777 /dev/shm` in an initContainer.
+ ## This option is ignored if `volumePermissions.enabled` is `false` + ## + chmod: + enabled: true + +## PostgreSQL data Persistent Volume Storage Class +## If defined, storageClassName: +## If set to "-", storageClassName: "", which disables dynamic provisioning +## If undefined (the default) or set to null, no storageClassName spec is +## set, choosing the default provisioner. (gp2 on AWS, standard on +## GKE, AWS & OpenStack) +## +persistence: + enabled: true + ## A manually managed Persistent Volume and Claim + ## If defined, PVC must be created manually before volume will be bound + ## The value is evaluated as a template, so, for example, the name can depend on .Release or .Chart + ## + # existingClaim: + + ## The path the volume will be mounted at, useful when using different + ## PostgreSQL images. + ## + mountPath: /bitnami/postgresql + + ## The subdirectory of the volume to mount to, useful in dev environments + ## and one PV for multiple services. + ## + subPath: '' + + # storageClass: "-" + accessModes: + - ReadWriteOnce + size: 8Gi + annotations: {} + ## selector can be used to match an existing PersistentVolume + ## selector: + ## matchLabels: + ## app: my-app + selector: {} + +## updateStrategy for PostgreSQL StatefulSet and its reads StatefulSets +## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies +## +updateStrategy: + type: RollingUpdate + +## +## PostgreSQL Primary parameters +## +primary: + ## PostgreSQL Primary pod affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAffinityPreset: "" + + ## PostgreSQL Primary pod anti-affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAntiAffinityPreset: soft + + ## PostgreSQL Primary node affinity preset + ## ref: 
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity + ## Allowed values: soft, hard + ## + nodeAffinityPreset: + ## Node affinity type + ## Allowed values: soft, hard + type: "" + ## Node label key to match + ## E.g. + ## key: "kubernetes.io/e2e-az-name" + ## + key: "" + ## Node label values to match + ## E.g. + ## values: + ## - e2e-az1 + ## - e2e-az2 + ## + values: [] + + ## Affinity for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity + ## Note: primary.podAffinityPreset, primary.podAntiAffinityPreset, and primary.nodeAffinityPreset will be ignored when it's set + ## + affinity: {} + + ## Node labels for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/user-guide/node-selection/ + ## + nodeSelector: {} + + ## Tolerations for PostgreSQL primary pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + ## + tolerations: [] + + labels: {} + annotations: {} + podLabels: {} + podAnnotations: {} + priorityClassName: '' + ## Extra init containers + ## Example + ## + ## extraInitContainers: + ## - name: do-something + ## image: busybox + ## command: ['do', 'something'] + ## + extraInitContainers: [] + + ## Additional PostgreSQL primary Volume mounts + ## + extraVolumeMounts: [] + ## Additional PostgreSQL primary Volumes + ## + extraVolumes: [] + ## Add sidecars to the pod + ## + ## For example: + ## sidecars: + ## - name: your-image-name + ## image: your-image + ## imagePullPolicy: Always + ## ports: + ## - name: portname + ## containerPort: 1234 + ## + sidecars: [] + + ## Override the service configuration for primary + ## + service: {} + # type: + # nodePort: + # clusterIP: + +## +## PostgreSQL read only replica parameters +## +readReplicas: + ## PostgreSQL read only pod affinity preset + ## ref: 
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAffinityPreset: "" + + ## PostgreSQL read only pod anti-affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity + ## Allowed values: soft, hard + ## + podAntiAffinityPreset: soft + + ## PostgreSQL read only node affinity preset + ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity + ## Allowed values: soft, hard + ## + nodeAffinityPreset: + ## Node affinity type + ## Allowed values: soft, hard + type: "" + ## Node label key to match + ## E.g. + ## key: "kubernetes.io/e2e-az-name" + ## + key: "" + ## Node label values to match + ## E.g. + ## values: + ## - e2e-az1 + ## - e2e-az2 + ## + values: [] + + ## Affinity for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity + ## Note: readReplicas.podAffinityPreset, readReplicas.podAntiAffinityPreset, and readReplicas.nodeAffinityPreset will be ignored when it's set + ## + affinity: {} + + ## Node labels for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/user-guide/node-selection/ + ## + nodeSelector: {} + + ## Tolerations for PostgreSQL read only pods assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + ## + tolerations: [] + labels: {} + annotations: {} + podLabels: {} + podAnnotations: {} + priorityClassName: '' + + ## Extra init containers + ## Example + ## + ## extraInitContainers: + ## - name: do-something + ## image: busybox + ## command: ['do', 'something'] + ## + extraInitContainers: [] + + ## Additional PostgreSQL read replicas Volume mounts + ## + extraVolumeMounts: [] + + ## Additional PostgreSQL read replicas Volumes + ## + extraVolumes: [] + + ## Add sidecars to the pod + ## + ## For 
example: + ## sidecars: + ## - name: your-image-name + ## image: your-image + ## imagePullPolicy: Always + ## ports: + ## - name: portname + ## containerPort: 1234 + ## + sidecars: [] + + ## Override the service configuration for read + ## + service: {} + # type: + # nodePort: + # clusterIP: + + ## Whether to enable PostgreSQL read replicas data Persistent + ## + persistence: + enabled: true + + # Override the resource configuration for read replicas + resources: {} + # requests: + # memory: 256Mi + # cpu: 250m + +## Configure resource requests and limits +## ref: http://kubernetes.io/docs/user-guide/compute-resources/ +## +resources: + requests: + memory: 256Mi + cpu: 250m + +## Add annotations to all the deployed resources +## +commonAnnotations: {} + +networkPolicy: + ## Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now. + ## + enabled: false + + ## The Policy model to apply. When set to false, only pods with the correct + ## client label will have network access to the port PostgreSQL is listening + ## on. When true, PostgreSQL will accept connections from any source + ## (with the correct destination port). + ## + allowExternal: true + + ## if explicitNamespacesSelector is missing or set to {}, only client Pods that are in the networkPolicy's namespace + ## and that match other criteria, the ones that have the good label, can reach the DB. + ## But sometimes, we want the DB to be accessible to clients from other namespaces, in this case, we can use this + ## LabelSelector to select these namespaces, note that the networkPolicy's namespace should also be explicitly added. 
+ ## + ## Example: + ## explicitNamespacesSelector: + ## matchLabels: + ## role: frontend + ## matchExpressions: + ## - {key: role, operator: In, values: [frontend]} + ## + explicitNamespacesSelector: {} + +## Configure extra options for startup, liveness and readiness probes +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes +## +startupProbe: + enabled: false + initialDelaySeconds: 30 + periodSeconds: 15 + timeoutSeconds: 5 + failureThreshold: 10 + successThreshold: 1 + +livenessProbe: + enabled: true + initialDelaySeconds: 30 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +readinessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +## Custom Startup probe +## +customStartupProbe: {} + +## Custom Liveness probe +## +customLivenessProbe: {} + +## Custom Readiness probe +## +customReadinessProbe: {} + +## +## TLS configuration +## +tls: + # Enable TLS traffic + enabled: false + # + # Whether to use the server's TLS cipher preferences rather than the client's.
+ preferServerCiphers: true + # + # Name of the Secret that contains the certificates + certificatesSecret: '' + # + # Certificate filename + certFilename: '' + # + # Certificate Key filename + certKeyFilename: '' + # + # CA Certificate filename + # If provided, PostgreSQL will authenticate TLS/SSL clients by requesting them a certificate + # ref: https://www.postgresql.org/docs/9.6/auth-methods.html + certCAFilename: + # + # File containing a Certificate Revocation List + crlFilename: + +## Configure metrics exporter +## +metrics: + enabled: false + # resources: {} + service: + type: ClusterIP + annotations: + prometheus.io/scrape: 'true' + prometheus.io/port: '9187' + loadBalancerIP: + serviceMonitor: + enabled: false + additionalLabels: {} + # namespace: monitoring + # interval: 30s + # scrapeTimeout: 10s + ## Custom PrometheusRule to be defined + ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart + ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions + ## + prometheusRule: + enabled: false + additionalLabels: {} + namespace: '' + ## These are just examples rules, please adapt them to your needs. + ## Make sure to constraint the rules to the current postgresql service. + ## rules: + ## - alert: HugeReplicationLag + ## expr: pg_replication_lag{service="{{ template "common.names.fullname" . }}-metrics"} / 3600 > 1 + ## for: 1m + ## labels: + ## severity: critical + ## annotations: + ## description: replication for {{ template "common.names.fullname" . }} PostgreSQL is lagging by {{ "{{ $value }}" }} hour(s). + ## summary: PostgreSQL replication is lagging by {{ "{{ $value }}" }} hour(s). + ## + rules: [] + + image: + registry: docker.io + repository: bitnami/postgres-exporter + tag: 0.9.0-debian-10-r43 + pullPolicy: IfNotPresent + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. 
+ ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## + # pullSecrets: + # - myRegistryKeySecretName + ## Define additional custom metrics + ## ref: https://github.com/wrouesnel/postgres_exporter#adding-new-metrics-via-a-config-file + # customMetrics: + # pg_database: + # query: "SELECT d.datname AS name, CASE WHEN pg_catalog.has_database_privilege(d.datname, 'CONNECT') THEN pg_catalog.pg_database_size(d.datname) ELSE 0 END AS size_bytes FROM pg_catalog.pg_database d where datname not in ('template0', 'template1', 'postgres')" + # metrics: + # - name: + # usage: "LABEL" + # description: "Name of the database" + # - size_bytes: + # usage: "GAUGE" + # description: "Size of the database in bytes" + # + ## An array to add extra env vars to configure postgres-exporter + ## see: https://github.com/wrouesnel/postgres_exporter#environment-variables + ## For example: + # extraEnvVars: + # - name: PG_EXPORTER_DISABLE_DEFAULT_METRICS + # value: "true" + extraEnvVars: {} + + ## Pod Security Context + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + ## + securityContext: + enabled: false + runAsUser: 1001 + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes) + ## Configure extra options for liveness and readiness probes + ## + livenessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + + readinessProbe: + enabled: true + initialDelaySeconds: 5 + periodSeconds: 10 + timeoutSeconds: 5 + failureThreshold: 6 + successThreshold: 1 + +## Array with extra yaml to deploy with the chart. 
Evaluated as a template +## +extraDeploy: [] diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/access-tls-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/access-tls-values.yaml new file mode 100644 index 0000000000..1a8c4698d2 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/access-tls-values.yaml @@ -0,0 +1,24 @@ +databaseUpgradeReady: true +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" +access: + accessConfig: + security: + tls: true + resetAccessCAKeys: true diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/default-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/default-values.yaml new file mode 100644 index 0000000000..fc34693998 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/default-values.yaml @@ -0,0 +1,21 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. 
+databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/derby-test-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/derby-test-values.yaml new file mode 100644 index 0000000000..82ff485457 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/derby-test-values.yaml @@ -0,0 +1,19 @@ +databaseUpgradeReady: true + +postgresql: + enabled: false +artifactory: + podSecurityContext: + fsGroupChangePolicy: "OnRootMismatch" + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/global-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/global-values.yaml new file mode 100644 index 0000000000..33bbf04a20 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/global-values.yaml @@ -0,0 +1,247 @@ +databaseUpgradeReady: true +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + customInitContainersBegin: | + - name: "custom-init-begin-local" + image: {{ include "artifactory.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in local" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: artifactory-volume + customInitContainers: | + - name: "custom-init-local" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in local" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: artifactory-volume + # Add custom volumes + customVolumes: | + - name: custom-script-local + emptyDir: + sizeLimit: 100Mi + # Add custom volumesMounts + customVolumeMounts: | + - name: custom-script-local + mountPath: "/scriptslocal" + # Add custom sidecar containers + customSidecarContainers: | + - name: "sidecar-list-local" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in local' >> /scriptslocal/sidecarlocal.txt; cat /scriptslocal/sidecarlocal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scriptslocal" + name: custom-script-local + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + +global: + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + customInitContainersBegin: | + - name: "custom-init-begin-global" + image: {{ include "artifactory.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in global" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: artifactory-volume + customInitContainers: | + - name: "custom-init-global" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in global" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: artifactory-volume + # Add custom volumes + customVolumes: | + - name: custom-script-global + emptyDir: + sizeLimit: 100Mi + # Add custom volumesMounts + customVolumeMounts: | + - name: custom-script-global + mountPath: "/scripts" + # Add custom sidecar containers + customSidecarContainers: | + - name: "sidecar-list-global" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in global' >> /scripts/sidecarglobal.txt; cat /scripts/sidecarglobal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scripts" + name: custom-script-global + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + +nginx: + customInitContainers: | + - name: "custom-init-begin-nginx" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + command: + - 'sh' + - '-c' + - echo "running in nginx" + volumeMounts: + - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + name: custom-script-local + customSidecarContainers: | + - name: "sidecar-list-nginx" + image: {{ include "artifactory.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - NET_RAW + command: ["sh","-c","echo 'Sidecar is running in local' >> /scriptslocal/sidecarlocal.txt; cat /scriptslocal/sidecarlocal.txt; while true; do sleep 30; done"] + volumeMounts: + - mountPath: "/scriptslocal" + name: custom-script-local + resources: + requests: + memory: "32Mi" + cpu: "50m" + limits: + memory: "128Mi" + cpu: "100m" + # Add custom volumes + customVolumes: | + - name: custom-script-local + emptyDir: + sizeLimit: 100Mi + + artifactoryConf: | + {{- if .Values.nginx.https.enabled }} + ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; + ssl_certificate {{ .Values.nginx.persistence.mountPath }}/ssl/tls.crt; + ssl_certificate_key {{ .Values.nginx.persistence.mountPath }}/ssl/tls.key; + ssl_session_cache shared:SSL:1m; + ssl_prefer_server_ciphers on; + {{- end }} + ## server configuration + server { + listen 8088; + {{- if .Values.nginx.internalPortHttps }} + listen {{ .Values.nginx.internalPortHttps }} ssl; + {{- else -}} + {{- if .Values.nginx.https.enabled }} + listen {{ .Values.nginx.https.internalPort }} ssl; + {{- end }} + {{- end }} + {{- if .Values.nginx.internalPortHttp }} + listen {{ .Values.nginx.internalPortHttp }}; + {{- else -}} + {{- if .Values.nginx.http.enabled }} + listen {{ .Values.nginx.http.internalPort }}; + {{- end }} + {{- end }} + server_name ~(?<repo>.+)\.{{ include "artifactory.fullname" . }} {{ include "artifactory.fullname" . }} + {{- range .Values.ingress.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?<repo>.+)\.{{ .
}} + {{- end -}} + {{- end -}}; + + if ($http_x_forwarded_proto = '') { + set $http_x_forwarded_proto $scheme; + } + ## Application specific logs + ## access_log /var/log/nginx/artifactory-access.log timing; + ## error_log /var/log/nginx/artifactory-error.log; + rewrite ^/artifactory/?$ / redirect; + if ( $repo != "" ) { + rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break; + } + chunked_transfer_encoding on; + client_max_body_size 0; + + location / { + proxy_read_timeout 900; + proxy_pass_header Server; + proxy_cookie_path ~*^/.* /; + proxy_pass {{ include "artifactory.scheme" . }}://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalPort }}/; + {{- if .Values.nginx.service.ssloffload}} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host; + {{- else }} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$server_port; + proxy_set_header X-Forwarded-Port $server_port; + {{- end }} + proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto; + proxy_set_header Host $http_host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + + location /artifactory/ { + if ( $request_uri ~ ^/artifactory/(.*)$ ) { + proxy_pass {{ include "artifactory.scheme" . }}://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/$1; + } + proxy_pass {{ include "artifactory.scheme" . }}://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/; + } + } + } + + ## A list of custom ports to expose on the NGINX pod. Follows the conventional Kubernetes yaml syntax for container ports. + customPorts: + - containerPort: 8088 + name: http2 + service: + ## A list of custom ports to expose through the Ingress controller service. Follows the conventional Kubernetes yaml syntax for service ports. 
+ customPorts: + - port: 8088 + targetPort: 8088 + protocol: TCP + name: http2 diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/large-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/large-values.yaml new file mode 100644 index 0000000000..94a485d6f4 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/large-values.yaml @@ -0,0 +1,82 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + database: + maxOpenConnections: 150 + tomcat: + connector: + maxThreads: 300 + resources: + requests: + memory: "6Gi" + cpu: "2" + limits: + memory: "10Gi" + cpu: "8" + javaOpts: + xms: "8g" + xmx: "10g" +access: + database: + maxOpenConnections: 150 + tomcat: + connector: + maxThreads: 100 +router: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 150 + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/loggers-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/loggers-values.yaml new file mode 100644 index 0000000000..03c94be953 --- /dev/null +++ 
b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/loggers-values.yaml @@ -0,0 +1,43 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + + loggers: + - access-audit.log + - access-request.log + - access-security-audit.log + - access-service.log + - artifactory-access.log + - artifactory-event.log + - artifactory-import-export.log + - artifactory-request.log + - artifactory-service.log + - frontend-request.log + - frontend-service.log + - metadata-request.log + - metadata-service.log + - router-request.log + - router-service.log + - router-traefik.log + + catalinaLoggers: + - tomcat-catalina.log + - tomcat-localhost.log diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/medium-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/medium-values.yaml new file mode 100644 index 0000000000..35044dc36d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/medium-values.yaml @@ -0,0 +1,82 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. 
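The `ci/*-values.yaml` variants in this diff (small/medium/large sizing, loggers, rtsplit, and so on) follow the chart-testing (`ct`) convention: every file matching `ci/*-values.yaml` drives one install test against the chart. A minimal sketch of the equivalent plain-helm invocation; the `ci_install_cmd` helper is illustrative only, not part of the chart or of `ct`:

```shell
# Illustrative helper (assumption, not part of the chart): chart-testing runs
# one install test per ci/*-values.yaml file; this builds the equivalent
# `helm install` command for a single variant.
ci_install_cmd() {
  # $1: chart directory, $2: one ci values file
  echo "helm install --dry-run --generate-name $1 -f $2"
}

ci_install_cmd "./artifactory" "./artifactory/ci/medium-values.yaml"
```

Running this for each variant in turn is effectively what `ct install` does internally when it discovers the `ci/` directory.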
+databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + database: + maxOpenConnections: 100 + tomcat: + connector: + maxThreads: 200 + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "8Gi" + cpu: "6" + javaOpts: + xms: "6g" + xmx: "8g" +access: + database: + maxOpenConnections: 100 + tomcat: + connector: + maxThreads: 50 +router: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 100 + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "200Mi" + cpu: "200m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/migration-disabled-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/migration-disabled-values.yaml new file mode 100644 index 0000000000..092756fb65 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/migration-disabled-values.yaml @@ -0,0 +1,21 @@ +databaseUpgradeReady: true +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + migration: + enabled: false + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + 
javaOpts: + xms: "4g" + xmx: "4g" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/nginx-autoreload-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/nginx-autoreload-values.yaml new file mode 100644 index 0000000000..09616c5bf4 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/nginx-autoreload-values.yaml @@ -0,0 +1,42 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + +nginx: + customVolumes: | + - name: scripts + configMap: + name: {{ template "artifactory.fullname" . 
}}-nginx-scripts + defaultMode: 0550 + customVolumeMounts: | + - name: scripts + mountPath: /var/opt/jfrog/nginx/scripts/ + customCommand: + - /bin/sh + - -c + - | + # watch for configmap changes + /sbin/inotifyd /var/opt/jfrog/nginx/scripts/configreloader.sh {{ .Values.nginx.persistence.mountPath -}}/conf.d:n & + {{ if .Values.nginx.https.enabled -}} + # watch for tls secret changes + /sbin/inotifyd /var/opt/jfrog/nginx/scripts/configreloader.sh {{ .Values.nginx.persistence.mountPath -}}/ssl:n & + {{ end -}} + nginx -g 'daemon off;' diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/rtsplit-values-access-tls-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/rtsplit-values-access-tls-values.yaml new file mode 100644 index 0000000000..a38969a8ff --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/rtsplit-values-access-tls-values.yaml @@ -0,0 +1,96 @@ +databaseUpgradeReady: true +artifactory: + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + +access: + accessConfig: + security: + tls: true + resetAccessCAKeys: true + +postgresql: + postgresqlPassword: password + postgresqlExtendedConf: + maxConnections: 102 + persistence: + enabled: false + +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true + +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false +jfconnect: + enabled: true + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +mc: + enabled: true +splitServicesToContainers: true + +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "100Mi" + 
cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/rtsplit-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/rtsplit-values.yaml new file mode 100644 index 0000000000..057ae9bf36 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/rtsplit-values.yaml @@ -0,0 +1,151 @@ +databaseUpgradeReady: true +artifactory: + replicaCount: 1 + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + + # Add lifecycle hooks for artifactory container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the artifactory postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the artifactory postStart handler >> /tmp/message"] + +postgresql: + postgresqlPassword: password + postgresqlExtendedConf: + maxConnections: 100 + persistence: + enabled: false + +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true + +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false +jfconnect: + enabled: true + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for jfconect container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the jfconnect postStart handler >> /tmp/message"] + preStop: + exec: + 
command: ["/bin/sh", "-c", "echo Hello from the jfconnect postStart handler >> /tmp/message"] + + +mc: + enabled: true +splitServicesToContainers: true + +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for router container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the router postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the router postStart handler >> /tmp/message"] + +frontend: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + # Add lifecycle hooks for frontend container + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the frontend postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the frontend postStart handler >> /tmp/message"] + +metadata: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the metadata postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the metadata postStart handler >> /tmp/message"] + +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the event postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the event postStart handler >> /tmp/message"] + +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" + lifecycle: + postStart: + exec: + command: ["/bin/sh", "-c", "echo Hello from the observability postStart handler >> /tmp/message"] + preStop: + exec: + command: ["/bin/sh", "-c", "echo Hello from the observability postStart handler >> /tmp/message"] diff 
--git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/small-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/small-values.yaml new file mode 100644 index 0000000000..70d77790ab --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/small-values.yaml @@ -0,0 +1,82 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +databaseUpgradeReady: true + +# To Fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrade the release +postgresql: + postgresqlPassword: password + persistence: + enabled: false +artifactory: + persistence: + enabled: false + database: + maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 200 + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "6g" +access: + database: + maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 50 +router: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +frontend: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +metadata: + database: + maxOpenConnections: 80 + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +event: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +jfconnect: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" +observability: + resources: + requests: + memory: "100Mi" + cpu: "100m" + limits: + memory: "1Gi" + cpu: "1" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/test-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/test-values.yaml new file mode 100644 index 0000000000..d2beb0eff6 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/ci/test-values.yaml @@ -0,0 +1,84 
@@ +databaseUpgradeReady: true +artifactory: + replicaCount: 3 + joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + unifiedSecretInstallation: false + metrics: + enabled: true + persistence: + enabled: false + resources: + requests: + memory: "4Gi" + cpu: "2" + limits: + memory: "6Gi" + cpu: "4" + javaOpts: + xms: "4g" + xmx: "4g" + statefulset: + annotations: + artifactory: test + +postgresql: + postgresqlPassword: password + postgresqlExtendedConf: + maxConnections: 100 + persistence: + enabled: false + +rbac: + create: true +serviceAccount: + create: true + automountServiceAccountToken: true + +ingress: + enabled: true + className: "testclass" + hosts: + - demonow.xyz +nginx: + enabled: false + +jfconnect: + enabled: false + +autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 3 + targetCPUUtilizationPercentage: 70 + +## filebeat sidecar +filebeat: + enabled: true + filebeatYml: | + logging.level: info + path.data: {{ .Values.artifactory.persistence.mountPath }}/log/filebeat + name: artifactory-filebeat + queue.spool: + file: + permissions: 0760 + filebeat.inputs: + - type: log + enabled: true + close_eof: ${CLOSE:false} + paths: + - {{ .Values.artifactory.persistence.mountPath }}/log/*.log + fields: + service: "jfrt" + log_type: "artifactory" + output.file: + path: "/tmp/filebeat" + filename: filebeat + readinessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + curl --fail 127.0.0.1:5066 diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/binarystore.xml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/binarystore.xml new file mode 100644 index 0000000000..e396e0a418 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/binarystore.xml @@ -0,0 +1,426 @@ +{{- if eq .Values.artifactory.persistence.type "nfs" -}} + + {{- if (.Values.artifactory.persistence.maxCacheSize) }} + + + + + + {{- 
else }} + + + + {{- end }} + + {{- if .Values.artifactory.persistence.maxCacheSize }} + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + + + {{ .Values.artifactory.persistence.nfs.dataDir }}/filestore + + +{{- end }} +{{- if eq .Values.artifactory.persistence.type "file-system" -}} + + + + {{- if .Values.artifactory.persistence.fileSystem.cache.enabled }} + + {{- end }} + + {{- if .Values.artifactory.persistence.fileSystem.cache.enabled }} + + {{- end }} + + + {{- if .Values.artifactory.persistence.fileSystem.cache.enabled }} + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + +{{- end }} +{{- if eq .Values.artifactory.persistence.type "cluster-file-system" -}} + + + + + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + {{ .Values.artifactory.persistence.lenientLimit }} + 2 + + + + + + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + + + shard-fs-1 + local + + + + + 30 + tester-remote1 + 
10000 + remote + + + +{{- end }} +{{- if or (eq .Values.artifactory.persistence.type "google-storage") (eq .Values.artifactory.persistence.type "google-storage-v2") (eq .Values.artifactory.persistence.type "cluster-google-storage-v2") (eq .Values.artifactory.persistence.type "google-storage-v2-direct") }} + + + {{- if or (eq .Values.artifactory.persistence.type "google-storage") (eq .Values.artifactory.persistence.type "google-storage-v2") }} + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "cluster-google-storage-v2" }} + + + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + {{ .Values.artifactory.persistence.lenientLimit }} + 2 + + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "google-storage-v2-direct" }} + + + + + + {{- end }} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{- if eq .Values.artifactory.persistence.type "cluster-google-storage-v2" }} + + local + + + + 30 + 10000 + remote + + {{- end }} + + + {{- if .Values.artifactory.persistence.googleStorage.useInstanceCredentials }} + true + {{- else }} + false + {{- end }} + {{ .Values.artifactory.persistence.googleStorage.enableSignedUrlRedirect }} + google-cloud-storage + {{ .Values.artifactory.persistence.googleStorage.endpoint }} + {{ .Values.artifactory.persistence.googleStorage.httpsOnly }} + {{ .Values.artifactory.persistence.googleStorage.bucketName }} + {{ .Values.artifactory.persistence.googleStorage.path }} + {{ .Values.artifactory.persistence.googleStorage.bucketExists }} + + +{{- end }} +{{- if or (eq .Values.artifactory.persistence.type "aws-s3-v3") (eq 
.Values.artifactory.persistence.type "s3-storage-v3-direct") (eq .Values.artifactory.persistence.type "cluster-s3-storage-v3") (eq .Values.artifactory.persistence.type "s3-storage-v3-archive") }} + + + {{- if eq .Values.artifactory.persistence.type "aws-s3-v3" }} + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "s3-storage-v3-direct" }} + + + + + + {{- else if eq .Values.artifactory.persistence.type "cluster-s3-storage-v3" }} + + + + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "s3-storage-v3-archive" }} + + + + + + + {{- end }} + + {{- if or (eq .Values.artifactory.persistence.type "aws-s3-v3") (eq .Values.artifactory.persistence.type "s3-storage-v3-direct") (eq .Values.artifactory.persistence.type "cluster-s3-storage-v3") }} + + + {{ .Values.artifactory.persistence.maxCacheSize | int64}} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + {{- end }} + + {{- if eq .Values.artifactory.persistence.type "cluster-s3-storage-v3" }} + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + {{ .Values.artifactory.persistence.lenientLimit }} + + + + + remote + + + + local + + {{- end }} + + {{- with .Values.artifactory.persistence.awsS3V3 }} + + {{ .testConnection }} + {{- if .identity }} + {{ .identity }} + {{- end }} + {{- if .credential }} + {{ .credential }} + {{- end }} + {{ .region }} + {{ .bucketName }} + {{ .path }} + {{ .endpoint }} + {{- with .port }} + {{ . }} + {{- end }} + {{- with .useHttp }} + {{ . }} + {{- end }} + {{- with .maxConnections }} + {{ . }} + {{- end }} + {{- with .connectionTimeout }} + {{ . }} + {{- end }} + {{- with .socketTimeout }} + {{ . 
}} + {{- end }} + {{- with .kmsServerSideEncryptionKeyId }} + {{ . }} + {{- end }} + {{- with .kmsKeyRegion }} + {{ . }} + {{- end }} + {{- with .kmsCryptoMode }} + {{ . }} + {{- end }} + {{- if .useInstanceCredentials }} + true + {{- else }} + false + {{- end }} + {{ .usePresigning }} + {{ .signatureExpirySeconds }} + {{ .signedUrlExpirySeconds }} + {{- with .cloudFrontDomainName }} + {{ . }} + {{- end }} + {{- with .cloudFrontKeyPairId }} + {{ . }} + {{- end }} + {{- with .cloudFrontPrivateKey }} + {{ . }} + {{- end }} + {{- with .enableSignedUrlRedirect }} + {{ . }} + {{- end }} + {{- with .enablePathStyleAccess }} + {{ . }} + {{- end }} + {{- with .multiPartLimit }} + {{ . | int64 }} + {{- end }} + {{- with .multipartElementSize }} + {{ . | int64 }} + {{- end }} + + {{- end }} + +{{- end }} + +{{- if or (eq .Values.artifactory.persistence.type "azure-blob") (eq .Values.artifactory.persistence.type "azure-blob-storage-direct") (eq .Values.artifactory.persistence.type "cluster-azure-blob-storage") }} + + + {{- if or (eq .Values.artifactory.persistence.type "azure-blob") }} + + + + + + + + + + {{- else if eq .Values.artifactory.persistence.type "azure-blob-storage-direct" }} + + + + + + {{- else if eq .Values.artifactory.persistence.type "cluster-azure-blob-storage" }} + + + + + + + + + + + + + {{- end }} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{- if eq .Values.artifactory.persistence.type "cluster-azure-blob-storage" }} + + + crossNetworkStrategy + crossNetworkStrategy + {{ .Values.artifactory.persistence.redundancy }} + {{ .Values.artifactory.persistence.lenientLimit }} + + + + remote + + + local + + {{- end }} 
+ + + {{ .Values.artifactory.persistence.azureBlob.accountName }} + {{ .Values.artifactory.persistence.azureBlob.accountKey }} + {{ .Values.artifactory.persistence.azureBlob.endpoint }} + {{ .Values.artifactory.persistence.azureBlob.containerName }} + {{ .Values.artifactory.persistence.azureBlob.multiPartLimit | int64 }} + {{ .Values.artifactory.persistence.azureBlob.multipartElementSize | int64 }} + {{ .Values.artifactory.persistence.azureBlob.testConnection }} + + +{{- end }} +{{- if eq .Values.artifactory.persistence.type "azure-blob-storage-v2-direct" -}} + + + + {{ .Values.artifactory.persistence.maxCacheSize | int64 }} + {{ .Values.artifactory.persistence.cacheProviderDir }} + {{- if .Values.artifactory.persistence.maxFileSizeLimit }} + {{.Values.artifactory.persistence.maxFileSizeLimit | int64}} + {{- end }} + {{- if .Values.artifactory.persistence.skipDuringUpload }} + {{.Values.artifactory.persistence.skipDuringUpload}} + {{- end }} + + + {{ .Values.artifactory.persistence.azureBlob.accountName }} + {{ .Values.artifactory.persistence.azureBlob.accountKey }} + {{ .Values.artifactory.persistence.azureBlob.endpoint }} + {{ .Values.artifactory.persistence.azureBlob.containerName }} + {{ .Values.artifactory.persistence.azureBlob.multiPartLimit | int64 }} + {{ .Values.artifactory.persistence.azureBlob.multipartElementSize | int64 }} + {{ .Values.artifactory.persistence.azureBlob.testConnection }} + + +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/installer-info.json b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/installer-info.json new file mode 100644 index 0000000000..79f42ed16d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/installer-info.json @@ -0,0 +1,32 @@ +{ + "productId": "Helm_artifactory/{{ .Chart.Version }}", + "features": [ + { + "featureId": "Platform/{{ printf "%s-%s" "kubernetes" .Capabilities.KubeVersion.Version }}" + }, 
+ { + "featureId": "Database/{{ .Values.database.type }}" + }, + { + "featureId": "PostgreSQL_Enabled/{{ .Values.postgresql.enabled }}" + }, + { + "featureId": "Nginx_Enabled/{{ .Values.nginx.enabled }}" + }, + { + "featureId": "ArtifactoryPersistence_Type/{{ .Values.artifactory.persistence.type }}" + }, + { + "featureId": "SplitServicesToContainers_Enabled/{{ .Values.splitServicesToContainers }}" + }, + { + "featureId": "UnifiedSecretInstallation_Enabled/{{ .Values.artifactory.unifiedSecretInstallation }}" + }, + { + "featureId": "Filebeat_Enabled/{{ .Values.filebeat.enabled }}" + }, + { + "featureId": "ReplicaCount/{{ .Values.artifactory.replicaCount }}" + } + ] +} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/migrate.sh b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/migrate.sh new file mode 100644 index 0000000000..ba44160f47 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/migrate.sh @@ -0,0 +1,4311 @@ +#!/bin/bash + +# Flags +FLAG_Y="y" +FLAG_N="n" +FLAGS_Y_N="$FLAG_Y $FLAG_N" +FLAG_NOT_APPLICABLE="_NA_" + +CURRENT_VERSION=$1 + +WRAPPER_SCRIPT_TYPE_RPMDEB="RPMDEB" +WRAPPER_SCRIPT_TYPE_DOCKER_COMPOSE="DOCKERCOMPOSE" + +SENSITIVE_KEY_VALUE="__sensitive_key_hidden___" + +# Shared system keys +SYS_KEY_SHARED_JFROGURL="shared.jfrogUrl" +SYS_KEY_SHARED_SECURITY_JOINKEY="shared.security.joinKey" +SYS_KEY_SHARED_SECURITY_MASTERKEY="shared.security.masterKey" + +SYS_KEY_SHARED_NODE_ID="shared.node.id" +SYS_KEY_SHARED_JAVAHOME="shared.javaHome" + +SYS_KEY_SHARED_DATABASE_TYPE="shared.database.type" +SYS_KEY_SHARED_DATABASE_TYPE_VALUE_POSTGRES="postgresql" +SYS_KEY_SHARED_DATABASE_DRIVER="shared.database.driver" +SYS_KEY_SHARED_DATABASE_URL="shared.database.url" +SYS_KEY_SHARED_DATABASE_USERNAME="shared.database.username" +SYS_KEY_SHARED_DATABASE_PASSWORD="shared.database.password" + +SYS_KEY_SHARED_ELASTICSEARCH_URL="shared.elasticsearch.url" 
+SYS_KEY_SHARED_ELASTICSEARCH_USERNAME="shared.elasticsearch.username" +SYS_KEY_SHARED_ELASTICSEARCH_PASSWORD="shared.elasticsearch.password" +SYS_KEY_SHARED_ELASTICSEARCH_CLUSTERSETUP="shared.elasticsearch.clusterSetup" +SYS_KEY_SHARED_ELASTICSEARCH_UNICASTFILE="shared.elasticsearch.unicastFile" +SYS_KEY_SHARED_ELASTICSEARCH_CLUSTERSETUP_VALUE="YES" + +# Define this in product specific script. Should contain the path to unitcast file +# File used by insight server to write cluster active nodes info. This will be read by elasticsearch +#SYS_KEY_SHARED_ELASTICSEARCH_UNICASTFILE_VALUE="" + +SYS_KEY_RABBITMQ_ACTIVE_NODE_NAME="shared.rabbitMq.active.node.name" +SYS_KEY_RABBITMQ_ACTIVE_NODE_IP="shared.rabbitMq.active.node.ip" + +# Filenames +FILE_NAME_SYSTEM_YAML="system.yaml" +FILE_NAME_JOIN_KEY="join.key" +FILE_NAME_MASTER_KEY="master.key" +FILE_NAME_INSTALLER_YAML="installer.yaml" + +# Global constants used in business logic +NODE_TYPE_STANDALONE="standalone" +NODE_TYPE_CLUSTER_NODE="node" +NODE_TYPE_DATABASE="database" + +# External(isable) databases +DATABASE_POSTGRES="POSTGRES" +DATABASE_ELASTICSEARCH="ELASTICSEARCH" +DATABASE_RABBITMQ="RABBITMQ" + +POSTGRES_LABEL="PostgreSQL" +ELASTICSEARCH_LABEL="Elasticsearch" +RABBITMQ_LABEL="Rabbitmq" + +ARTIFACTORY_LABEL="Artifactory" +JFMC_LABEL="Mission Control" +DISTRIBUTION_LABEL="Distribution" +XRAY_LABEL="Xray" + +POSTGRES_CONTAINER="postgres" +ELASTICSEARCH_CONTAINER="elasticsearch" +RABBITMQ_CONTAINER="rabbitmq" +REDIS_CONTAINER="redis" + +#Adding a small timeout before a read ensures it is positioned correctly in the screen +read_timeout=0.5 + +# Options related to data directory location +PROMPT_DATA_DIR_LOCATION="Installation Directory" +KEY_DATA_DIR_LOCATION="installer.data_dir" + +SYS_KEY_SHARED_NODE_HAENABLED="shared.node.haEnabled" +PROMPT_ADD_TO_CLUSTER="Are you adding an additional node to an existing product cluster?" 
+KEY_ADD_TO_CLUSTER="installer.ha" +VALID_VALUES_ADD_TO_CLUSTER="$FLAGS_Y_N" + +MESSAGE_POSTGRES_INSTALL="The installer can install a $POSTGRES_LABEL database, or you can connect to an existing compatible $POSTGRES_LABEL database\n(compatible databases: https://www.jfrog.com/confluence/display/JFROG/System+Requirements#SystemRequirements-RequirementsMatrix)" +PROMPT_POSTGRES_INSTALL="Do you want to install $POSTGRES_LABEL?" +KEY_POSTGRES_INSTALL="installer.install_postgresql" +VALID_VALUES_POSTGRES_INSTALL="$FLAGS_Y_N" + +# Postgres connection details +RPM_DEB_POSTGRES_HOME_DEFAULT="/var/opt/jfrog/postgres" +RPM_DEB_MESSAGE_STANDALONE_POSTGRES_DATA="$POSTGRES_LABEL home will have data and its configuration" +RPM_DEB_PROMPT_STANDALONE_POSTGRES_DATA="Type desired $POSTGRES_LABEL home location" +RPM_DEB_KEY_STANDALONE_POSTGRES_DATA="installer.postgresql.home" + +MESSAGE_DATABASE_URL="Provide the database connection details" +PROMPT_DATABASE_URL(){ + local databaseURlExample= + case "$PRODUCT_NAME" in + $ARTIFACTORY_LABEL) + databaseURlExample="jdbc:postgresql://:/artifactory" + ;; + $JFMC_LABEL) + databaseURlExample="postgresql://:/mission_control?sslmode=disable" + ;; + $DISTRIBUTION_LABEL) + databaseURlExample="jdbc:postgresql://:/distribution?sslmode=disable" + ;; + $XRAY_LABEL) + databaseURlExample="postgres://:/xraydb?sslmode=disable" + ;; + esac + if [ -z "$databaseURlExample" ]; then + echo -n "$POSTGRES_LABEL URL" # For consistency with username and password + return + fi + echo -n "$POSTGRES_LABEL url. 
Example: [$databaseURlExample]" +} +REGEX_DATABASE_URL(){ + local databaseURlExample= + case "$PRODUCT_NAME" in + $ARTIFACTORY_LABEL) + databaseURlExample="jdbc:postgresql://.*/artifactory.*" + ;; + $JFMC_LABEL) + databaseURlExample="postgresql://.*/mission_control.*" + ;; + $DISTRIBUTION_LABEL) + databaseURlExample="jdbc:postgresql://.*/distribution.*" + ;; + $XRAY_LABEL) + databaseURlExample="postgres://.*/xraydb.*" + ;; + esac + echo -n "^$databaseURlExample\$" +} +ERROR_MESSAGE_DATABASE_URL="Invalid $POSTGRES_LABEL URL" +KEY_DATABASE_URL="$SYS_KEY_SHARED_DATABASE_URL" +#NOTE: It is important to display the label. Since the message may be hidden if URL is known +PROMPT_DATABASE_USERNAME="$POSTGRES_LABEL username" +KEY_DATABASE_USERNAME="$SYS_KEY_SHARED_DATABASE_USERNAME" +#NOTE: It is important to display the label. Since the message may be hidden if URL is known +PROMPT_DATABASE_PASSWORD="$POSTGRES_LABEL password" +KEY_DATABASE_PASSWORD="$SYS_KEY_SHARED_DATABASE_PASSWORD" +IS_SENSITIVE_DATABASE_PASSWORD="$FLAG_Y" + +MESSAGE_STANDALONE_ELASTICSEARCH_INSTALL="The installer can install a $ELASTICSEARCH_LABEL database or you can connect to an existing compatible $ELASTICSEARCH_LABEL database" +PROMPT_STANDALONE_ELASTICSEARCH_INSTALL="Do you want to install $ELASTICSEARCH_LABEL?" 
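The `REGEX_DATABASE_URL` function above emits a product-specific ERE pattern (e.g. `^jdbc:postgresql://.*/artifactory.*$` for Artifactory). A hedged sketch of how such a pattern is typically applied with bash's `[[ =~ ]]` operator — the URLs below are made-up example values, not taken from the installer:

```shell
#!/bin/bash
# Applying an ERE pattern like the ones REGEX_DATABASE_URL emits.
# The pattern and URLs below are illustrative example values.
pattern='^jdbc:postgresql://.*/artifactory.*$'

validateUrl() {
    local url="$1"
    # The pattern variable must be unquoted so =~ treats it as a regex
    if [[ "$url" =~ $pattern ]]; then
        echo "valid"
    else
        echo "invalid"
    fi
}

validateUrl "jdbc:postgresql://db.internal:5432/artifactory"  # prints: valid
validateUrl "mysql://db.internal:3306/artifactory"            # prints: invalid
```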
+KEY_STANDALONE_ELASTICSEARCH_INSTALL="installer.install_elasticsearch" +VALID_VALUES_STANDALONE_ELASTICSEARCH_INSTALL="$FLAGS_Y_N" + +# Elasticsearch connection details +MESSAGE_ELASTICSEARCH_DETAILS="Provide the $ELASTICSEARCH_LABEL connection details" +PROMPT_ELASTICSEARCH_URL="$ELASTICSEARCH_LABEL URL" +KEY_ELASTICSEARCH_URL="$SYS_KEY_SHARED_ELASTICSEARCH_URL" + +PROMPT_ELASTICSEARCH_USERNAME="$ELASTICSEARCH_LABEL username" +KEY_ELASTICSEARCH_USERNAME="$SYS_KEY_SHARED_ELASTICSEARCH_USERNAME" + +PROMPT_ELASTICSEARCH_PASSWORD="$ELASTICSEARCH_LABEL password" +KEY_ELASTICSEARCH_PASSWORD="$SYS_KEY_SHARED_ELASTICSEARCH_PASSWORD" +IS_SENSITIVE_ELASTICSEARCH_PASSWORD="$FLAG_Y" + +# Cluster related questions +MESSAGE_CLUSTER_MASTER_KEY="Provide the cluster's master key. It can be found in the data directory of the first node under /etc/security/master.key" +PROMPT_CLUSTER_MASTER_KEY="Master Key" +KEY_CLUSTER_MASTER_KEY="$SYS_KEY_SHARED_SECURITY_MASTERKEY" +IS_SENSITIVE_CLUSTER_MASTER_KEY="$FLAG_Y" + +MESSAGE_JOIN_KEY="The Join key is the secret key used to establish trust between services in the JFrog Platform.\n(You can copy the Join Key from Admin > User Management > Settings)" +PROMPT_JOIN_KEY="Join Key" +KEY_JOIN_KEY="$SYS_KEY_SHARED_SECURITY_JOINKEY" +IS_SENSITIVE_JOIN_KEY="$FLAG_Y" +REGEX_JOIN_KEY="^[a-zA-Z0-9]{16,}\$" +ERROR_MESSAGE_JOIN_KEY="Invalid Join Key" + +# Rabbitmq related cluster information +MESSAGE_RABBITMQ_ACTIVE_NODE_NAME="Provide an active ${RABBITMQ_LABEL} node name. 
Run the command [ hostname -s ] on any of the existing nodes in the product cluster to get this" +PROMPT_RABBITMQ_ACTIVE_NODE_NAME="${RABBITMQ_LABEL} active node name" +KEY_RABBITMQ_ACTIVE_NODE_NAME="$SYS_KEY_RABBITMQ_ACTIVE_NODE_NAME" + +# Rabbitmq related cluster information (necessary only for docker-compose) +PROMPT_RABBITMQ_ACTIVE_NODE_IP="${RABBITMQ_LABEL} active node ip" +KEY_RABBITMQ_ACTIVE_NODE_IP="$SYS_KEY_RABBITMQ_ACTIVE_NODE_IP" + +MESSAGE_JFROGURL(){ + echo -e "The JFrog URL allows ${PRODUCT_NAME} to connect to a JFrog Platform Instance.\n(You can copy the JFrog URL from Administration > User Management > Settings > Connection details)" +} +PROMPT_JFROGURL="JFrog URL" +KEY_JFROGURL="$SYS_KEY_SHARED_JFROGURL" +REGEX_JFROGURL="^https?://.*:{0,}[0-9]{0,4}\$" +ERROR_MESSAGE_JFROGURL="Invalid JFrog URL" + + +# Set this to FLAG_Y on upgrade +IS_UPGRADE="${FLAG_N}" + +# This belongs in JFMC but is the ONLY one that needs it so keeping it here for now. Can be made into a method and overridden if necessary +MESSAGE_MULTIPLE_PG_SCHEME="Please setup $POSTGRES_LABEL with schema as described in https://www.jfrog.com/confluence/display/JFROG/Installing+Mission+Control" + +_getMethodOutputOrVariableValue() { + unset EFFECTIVE_MESSAGE + local keyToSearch=$1 + local effectiveMessage= + local result="0" + # logSilly "Searching for method: [$keyToSearch]" + LC_ALL=C type "$keyToSearch" > /dev/null 2>&1 || result="$?" + if [[ "$result" == "0" ]]; then + # logSilly "Found method for [$keyToSearch]" + EFFECTIVE_MESSAGE="$($keyToSearch)" + return + fi + eval EFFECTIVE_MESSAGE=\${$keyToSearch} + if [ ! 
-z "$EFFECTIVE_MESSAGE" ]; then + return + fi + # logSilly "Didn't find method or variable for [$keyToSearch]" +} + + +# REF https://misc.flogisoft.com/bash/tip_colors_and_formatting +cClear="\e[0m" +cBlue="\e[38;5;69m" +cRedDull="\e[1;31m" +cYellow="\e[1;33m" +cRedBright="\e[38;5;197m" +cBold="\e[1m" + + +_loggerGetModeRaw() { + local MODE="$1" + case $MODE in + INFO) + printf "" + ;; + DEBUG) + printf "%s" "[${MODE}] " + ;; + WARN) + printf "${cRedDull}%s%s%s${cClear}" "[" "${MODE}" "] " + ;; + ERROR) + printf "${cRedBright}%s%s%s${cClear}" "[" "${MODE}" "] " + ;; + esac +} + + +_loggerGetMode() { + local MODE="$1" + case $MODE in + INFO) + printf "${cBlue}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + DEBUG) + printf "%-7s" "[${MODE}]" + ;; + WARN) + printf "${cRedDull}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + ERROR) + printf "${cRedBright}%s%-5s%s${cClear}" "[" "${MODE}" "]" + ;; + esac +} + +# Capitalises the first letter of the message +_loggerGetMessage() { + local originalMessage="$*" + local firstChar=$(echo "${originalMessage:0:1}" | awk '{ print toupper($0) }') + local restOfMessage="${originalMessage:1}" + echo "$firstChar$restOfMessage" +} + +# The spec also says content should be left-trimmed but this is not necessary in our case. We don't reach the limit. 
+_loggerGetStackTrace() { + printf "%s%-30s%s" "[" "$1:$2" "]" +} + +_loggerGetThread() { + printf "%s" "[main]" +} + +_loggerGetServiceType() { + printf "%s%-5s%s" "[" "shell" "]" +} + +#Trace ID is not applicable to scripts +_loggerGetTraceID() { + printf "%s" "[]" +} + +logRaw() { + echo "" + printf "$1" + echo "" +} + +logBold(){ + echo "" + printf "${cBold}$1${cClear}" + echo "" +} + +# The date binary works differently based on whether it is GNU/BSD +is_date_supported=0 +date --version > /dev/null 2>&1 || is_date_supported=1 +IS_GNU=$(echo $is_date_supported) + +_loggerGetTimestamp() { + if [ "${IS_GNU}" == "0" ]; then + echo -n $(date -u +%FT%T.%3NZ) + else + echo -n $(date -u +%FT%T.000Z) + fi +} + +# https://www.shellscript.sh/tips/spinner/ +_spin() +{ + spinner="/|\\-/|\\-" + while : + do + for i in `seq 0 7` + do + echo -n "${spinner:$i:1}" + echo -en "\010" + sleep 1 + done + done +} + +showSpinner() { + # Start the Spinner: + _spin & + # Make a note of its Process ID (PID): + SPIN_PID=$! + # Kill the spinner on any signal, including our own exit. 
+ trap "kill -9 $SPIN_PID" `seq 0 15` &> /dev/null || return 0 +} + +stopSpinner() { + local occurrences=$(ps -ef | grep -wc "${SPIN_PID}") + let "occurrences+=0" + # validate that it is present (2 since this search itself will show up in the results) + if [ $occurrences -gt 1 ]; then + kill -9 $SPIN_PID &>/dev/null || return 0 + wait $SPIN_PID &>/dev/null + fi +} + +_getEffectiveMessage(){ + local MESSAGE="$1" + local MODE=${2-"INFO"} + + if [ -z "$CONTEXT" ]; then + CONTEXT=$(caller) + fi + + _EFFECTIVE_MESSAGE= + if [ -z "$LOG_BEHAVIOR_ADD_META" ]; then + _EFFECTIVE_MESSAGE="$(_loggerGetModeRaw $MODE)$(_loggerGetMessage $MESSAGE)" + else + local SERVICE_TYPE="script" + local TRACE_ID="" + local THREAD="main" + + local CONTEXT_LINE=$(echo "$CONTEXT" | awk '{print $1}') + local CONTEXT_FILE=$(echo "$CONTEXT" | awk -F"/" '{print $NF}') + + _EFFECTIVE_MESSAGE="$(_loggerGetTimestamp) $(_loggerGetServiceType) $(_loggerGetMode $MODE) $(_loggerGetTraceID) $(_loggerGetStackTrace $CONTEXT_FILE $CONTEXT_LINE) $(_loggerGetThread) - $(_loggerGetMessage $MESSAGE)" + fi + CONTEXT= +} + +# Important - don't call any log method from this method. Will become an infinite loop. Use echo to debug +_logToFile() { + local MODE=${1-"INFO"} + local targetFile="$LOG_BEHAVIOR_ADD_REDIRECTION" + # IF the file isn't passed, abort + if [ -z "$targetFile" ]; then + return + fi + # IF this is not being run in verbose mode and mode is debug or lower, abort + if [ "${VERBOSE_MODE}" != "$FLAG_Y" ] && [ "${VERBOSE_MODE}" != "true" ] && [ "${VERBOSE_MODE}" != "debug" ]; then + if [ "$MODE" == "DEBUG" ] || [ "$MODE" == "SILLY" ]; then + return + fi + fi + + # Create the file if it doesn't exist + if [ ! 
-f "${targetFile}" ]; then + return + # touch $targetFile > /dev/null 2>&1 || true + fi + # # Make it readable + # chmod 640 $targetFile > /dev/null 2>&1 || true + + # Log contents + printf "%s\n" "$_EFFECTIVE_MESSAGE" >> "$targetFile" || true +} + +logger() { + if [ "$LOG_BEHAVIOR_ADD_NEW_LINE" == "$FLAG_Y" ]; then + echo "" + fi + _getEffectiveMessage "$@" + local MODE=${2-"INFO"} + printf "%s\n" "$_EFFECTIVE_MESSAGE" + _logToFile "$MODE" +} + +logDebug(){ + VERBOSE_MODE=${VERBOSE_MODE-"false"} + CONTEXT=$(caller) + if [ "${VERBOSE_MODE}" == "$FLAG_Y" ] || [ "${VERBOSE_MODE}" == "true" ] || [ "${VERBOSE_MODE}" == "debug" ];then + logger "$1" "DEBUG" + else + logger "$1" "DEBUG" >&6 + fi + CONTEXT= +} + +logSilly(){ + VERBOSE_MODE=${VERBOSE_MODE-"false"} + CONTEXT=$(caller) + if [ "${VERBOSE_MODE}" == "silly" ];then + logger "$1" "DEBUG" + else + logger "$1" "DEBUG" >&6 + fi + CONTEXT= +} + +logError() { + CONTEXT=$(caller) + logger "$1" "ERROR" + CONTEXT= +} + +errorExit () { + CONTEXT=$(caller) + logger "$1" "ERROR" + CONTEXT= + exit 1 +} + +warn () { + CONTEXT=$(caller) + logger "$1" "WARN" + CONTEXT= +} + +note () { + CONTEXT=$(caller) + logger "$1" "NOTE" + CONTEXT= +} + +bannerStart() { + title=$1 + echo + echo -e "\033[1m${title}\033[0m" + echo +} + +bannerSection() { + title=$1 + echo + echo -e "******************************** ${title} ********************************" + echo +} + +bannerSubSection() { + title=$1 + echo + echo -e "************** ${title} *******************" + echo +} + +bannerMessge() { + title=$1 + echo + echo -e "********************************" + echo -e "${title}" + echo -e "********************************" + echo +} + +setRed () { + local input="$1" + echo -e \\033[31m${input}\\033[0m +} +setGreen () { + local input="$1" + echo -e \\033[32m${input}\\033[0m +} +setYellow () { + local input="$1" + echo -e \\033[33m${input}\\033[0m +} + +logger_addLinebreak () { + echo -e "---\n" +} + +bannerImportant() { + title=$1 + local 
bold="\033[1m" + local noColour="\033[0m" + echo + echo -e "${bold}######################################## IMPORTANT ########################################${noColour}" + echo -e "${bold}${title}${noColour}" + echo -e "${bold}###########################################################################################${noColour}" + echo +} + +bannerEnd() { + #TODO pass a title and calculate length dynamically so that start and end look alike + echo + echo "*****************************************************************************" + echo +} + +banner() { + title=$1 + content=$2 + bannerStart "${title}" + echo -e "$content" +} + +# The logic below helps us redirect content we'd normally hide to the log file. + # + # We have several commands which clutter the console with output and so use + # `cmd > /dev/null` - this redirects the command's output to null. + # + # However, the information we just hid maybe useful for support. Using the code pattern + # `cmd >&6` (instead of `cmd> >/dev/null` ), the command's output is hidden from the console + # but redirected to the installation log file + # + +#Default value of 6 is just null +exec 6>>/dev/null +redirectLogsToFile() { + echo "" + # local file=$1 + + # [ ! -z "${file}" ] || return 0 + + # local logDir=$(dirname "$file") + + # if [ ! 
-f "${file}" ]; then + # [ -d "${logDir}" ] || mkdir -p ${logDir} || \ + # ( echo "WARNING : Could not create parent directory (${logDir}) to redirect console log : ${file}" ; return 0 ) + # fi + + # #6 now points to the log file + # exec 6>>${file} + # #reference https://unix.stackexchange.com/questions/145651/using-exec-and-tee-to-redirect-logs-to-stdout-and-a-log-file-in-the-same-time + # exec 2>&1 > >(tee -a "${file}") +} + +# Check if a given key contains any sensitive string as part of it +# Based on the result, the caller can decide whether its value can be displayed or not +# Sample usage : isKeySensitive "${key}" && displayValue="******" || displayValue=${value} +isKeySensitive(){ + local key=$1 + local sensitiveKeys="password|secret|key|token" + + if [ -z "${key}" ]; then + return 1 + else + local lowercaseKey=$(echo "${key}" | tr '[:upper:]' '[:lower:]' 2>/dev/null) + [[ "${lowercaseKey}" =~ ${sensitiveKeys} ]] && return 0 || return 1 + fi +} + +getPrintableValueOfKey(){ + local displayValue= + local key="$1" + if [ -z "$key" ]; then + # This is actually an incorrect usage of this method but any logging will cause unexpected content in the caller + echo -n "" + return + fi + + local value="$2" + isKeySensitive "${key}" && displayValue="$SENSITIVE_KEY_VALUE" || displayValue="${value}" + echo -n $displayValue +} + +_createConsoleLog(){ + if [ -z "${JF_PRODUCT_HOME}" ]; then + return + fi + local targetFile="${JF_PRODUCT_HOME}/var/log/console.log" + mkdir -p "${JF_PRODUCT_HOME}/var/log" || true + if [ ! -f ${targetFile} ]; then + touch $targetFile > /dev/null 2>&1 || true + fi + chmod 640 $targetFile > /dev/null 2>&1 || true +} + +# Output from the application's logs is piped to this method. It checks a configuration variable to determine if content should be logged to +# the common console.log file +redirectServiceLogsToFile() { + + local result="0" + # check if the function getSystemValue exists + LC_ALL=C type getSystemValue > /dev/null 2>&1 || result="$?" 
+ if [[ "$result" != "0" ]]; then + warn "Couldn't find the systemYamlHelper. Skipping log redirection" + return 0 + fi + + getSystemValue "shared.consoleLog" "NOT_SET" + if [[ "${YAML_VALUE}" == "false" ]]; then + logger "Redirection is set to false. Skipping log redirection" + return 0; + fi + + if [ -z "${JF_PRODUCT_HOME}" ] || [ "${JF_PRODUCT_HOME}" == "" ]; then + warn "JF_PRODUCT_HOME is unavailable. Skipping log redirection" + return 0 + fi + + local targetFile="${JF_PRODUCT_HOME}/var/log/console.log" + + _createConsoleLog + + while read -r line; do + printf '%s\n' "${line}" >> $targetFile || return 0 # Don't want to log anything - might clutter the screen + done +} + +## Display environment variables starting with JF_ along with its value +## Value of sensitive keys will be displayed as "******" +## +## Sample Display : +## +## ======================== +## JF Environment variables +## ======================== +## +## JF_SHARED_NODE_ID : locahost +## JF_SHARED_JOINKEY : ****** +## +## +displayEnv() { + local JFEnv=$(printenv | grep ^JF_ 2>/dev/null) + local key= + local value= + + if [ -z "${JFEnv}" ]; then + return + fi + + cat << ENV_START_MESSAGE + +======================== +JF Environment variables +======================== +ENV_START_MESSAGE + + for entry in ${JFEnv}; do + key=$(echo "${entry}" | awk -F'=' '{print $1}') + value=$(echo "${entry}" | awk -F'=' '{print $2}') + + isKeySensitive "${key}" && value="******" || value=${value} + + printf "\n%-35s%s" "${key}" " : ${value}" + done + echo; +} + +_addLogRotateConfiguration() { + logDebug "Method ${FUNCNAME[0]}" + # mandatory inputs + local confFile="$1" + local logFile="$2" + + # Method available in _ioOperations.sh + LC_ALL=C type io_setYQPath > /dev/null 2>&1 || return 1 + + io_setYQPath + + # Method available in _systemYamlHelper.sh + LC_ALL=C type getSystemValue > /dev/null 2>&1 || return 1 + + local frequency="daily" + local archiveFolder="archived" + + local compressLogFiles= + getSystemValue 
"shared.logging.rotation.compress" "true" + if [[ "${YAML_VALUE}" == "true" ]]; then + compressLogFiles="compress" + fi + + getSystemValue "shared.logging.rotation.maxFiles" "10" + local noOfBackupFiles="${YAML_VALUE}" + + getSystemValue "shared.logging.rotation.maxSizeMb" "25" + local sizeOfFile="${YAML_VALUE}M" + + logDebug "Adding logrotate configuration for [$logFile] to [$confFile]" + + # Add configuration to file + local confContent=$(cat << LOGROTATECONF +$logFile { + $frequency + missingok + rotate $noOfBackupFiles + $compressLogFiles + notifempty + olddir $archiveFolder + dateext + extension .log + dateformat -%Y-%m-%d + size ${sizeOfFile} +} +LOGROTATECONF +) + echo "${confContent}" > ${confFile} || return 1 +} + +_operationIsBySameUser() { + local targetUser="$1" + local currentUserID=$(id -u) + local currentUserName=$(id -un) + + if [ $currentUserID == $targetUser ] || [ $currentUserName == $targetUser ]; then + echo -n "yes" + else + echo -n "no" + fi +} + +_addCronJobForLogrotate() { + logDebug "Method ${FUNCNAME[0]}" + + # Abort if logrotate is not available + [ "$(io_commandExists 'crontab')" != "yes" ] && warn "cron is not available" && return 1 + + # mandatory inputs + local productHome="$1" + local confFile="$2" + local cronJobOwner="$3" + + # We want to use our binary if possible. It may be more recent than the one in the OS + local logrotateBinary="$productHome/app/third-party/logrotate/logrotate" + + if [ ! -f "$logrotateBinary" ]; then + logrotateBinary="logrotate" + [ "$(io_commandExists 'logrotate')" != "yes" ] && warn "logrotate is not available" && return 1 + fi + local cmd="$logrotateBinary ${confFile} --state $productHome/var/etc/logrotate/logrotate-state" #--verbose + + id -u $cronJobOwner > /dev/null 2>&1 || { warn "User $cronJobOwner does not exist. 
Aborting logrotate configuration" && return 1; } + + # Remove the existing line + removeLogRotation "$productHome" "$cronJobOwner" || true + + # Run logrotate daily at 23:55 hours + local cronInterval="55 23 * * * $cmd" + + local standaloneMode=$(_operationIsBySameUser "$cronJobOwner") + + # If this is standalone mode, we cannot use -u - the user running this process may not have the necessary privileges + if [ "$standaloneMode" == "no" ]; then + (crontab -l -u $cronJobOwner 2>/dev/null; echo "$cronInterval") | crontab -u $cronJobOwner - + else + (crontab -l 2>/dev/null; echo "$cronInterval") | crontab - + fi +} + +## Configure logrotate for a product +## Failure conditions: +## If logrotation could not be setup for some reason +## Parameters: +## $1: The product name +## $2: The product home +## Depends on global: none +## Updates global: none +## Returns: NA + +configureLogRotation() { + logDebug "Method ${FUNCNAME[0]}" + + # mandatory inputs + local productName="$1" + if [ -z $productName ]; then + warn "Incorrect usage. A product name is necessary for configuring log rotation" && return 1 + fi + + local productHome="$2" + if [ -z $productHome ]; then + warn "Incorrect usage. A product home folder is necessary for configuring log rotation" && return 1 + fi + + local logFile="${productHome}/var/log/console.log" + if [[ $(uname) == "Darwin" ]]; then + logger "Log rotation for [$logFile] has not been configured. Please setup manually" + return 0 + fi + + local userID="$3" + if [ -z $userID ]; then + warn "Incorrect usage. A userID is necessary for configuring log rotation" && return 1 + fi + + local groupID=${4:-$userID} + local logConfigOwner=${5:-$userID} + + logDebug "Configuring log rotation as user [$userID], group [$groupID], effective cron User [$logConfigOwner]" + + local errorMessage="Could not configure logrotate. 
Please configure log rotation of the file: [$logFile] manually" + + local confFile="${productHome}/var/etc/logrotate/logrotate.conf" + + # TODO move to recursive method + createDir "${productHome}" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/log" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/log/archived" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + + # TODO move to recursive method + createDir "${productHome}/var/etc" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + createDir "${productHome}/var/etc/logrotate" "$logConfigOwner" || { warn "${errorMessage}" && return 1; } + + # conf file should be owned by the user running the script + createFile "${confFile}" "${logConfigOwner}" || { warn "Could not create configuration file [$confFile]" && return 1; } + + _addLogRotateConfiguration "${confFile}" "${logFile}" "$userID" "$groupID" || { warn "${errorMessage}" && return 1; } + _addCronJobForLogrotate "${productHome}" "${confFile}" "${logConfigOwner}" || { warn "${errorMessage}" && return 1; } +} + +_pauseExecution() { + if [ "${VERBOSE_MODE}" == "debug" ]; then + + local breakPoint="$1" + if [ ! -z "$breakPoint" ]; then + printf "${cBlue}Breakpoint${cClear} [$breakPoint] " + echo "" + fi + printf "${cBlue}Press enter once you are ready to continue${cClear}" + read -s choice + echo "" + fi +} + +# removeLogRotation "$productHome" "$cronJobOwner" || true +removeLogRotation() { + logDebug "Method ${FUNCNAME[0]}" + if [[ $(uname) == "Darwin" ]]; then + logDebug "Not implemented for Darwin." 
+ return 0 + fi + local productHome="$1" + local cronJobOwner="$2" + local standaloneMode=$(_operationIsBySameUser "$cronJobOwner") + + local confFile="${productHome}/var/etc/logrotate/logrotate.conf" + + if [ "$standaloneMode" == "no" ]; then + crontab -l -u $cronJobOwner 2>/dev/null | grep -v "$confFile" | crontab -u $cronJobOwner - + else + crontab -l 2>/dev/null | grep -v "$confFile" | crontab - + fi +} + +# NOTE: This method does not check the configuration to see if redirection is necessary. +# This is intentional. If we don't redirect, tomcat logs might get redirected to a folder/file +# that does not exist, causing the service itself to not start +setupTomcatRedirection() { + logDebug "Method ${FUNCNAME[0]}" + local consoleLog="${JF_PRODUCT_HOME}/var/log/console.log" + _createConsoleLog + export CATALINA_OUT="${consoleLog}" +} + +setupScriptLogsRedirection() { + logDebug "Method ${FUNCNAME[0]}" + if [ -z "${JF_PRODUCT_HOME}" ]; then + logDebug "No JF_PRODUCT_HOME. Returning" + return + fi + # Create the console.log file if it is not already present + # _createConsoleLog || true + # # Ensure any logs (logger/logError/warn) also get redirected to the console.log + # # Using installer.log as a temporary fix. Please change this to console.log once INST-291 is fixed + export LOG_BEHAVIOR_ADD_REDIRECTION="${JF_PRODUCT_HOME}/var/log/console.log" + export LOG_BEHAVIOR_ADD_META="$FLAG_Y" +} + +# Returns Y if this method is run inside a container +isRunningInsideAContainer() { + local check1=$(grep -sq 'docker\|kubepods' /proc/1/cgroup; echo $?) + local check2=$(grep -sq 'containers' /proc/self/mountinfo; echo $?) 
+ if [[ $check1 == 0 || $check2 == 0 || -f "/.dockerenv" ]]; then + echo -n "$FLAG_Y" + else + echo -n "$FLAG_N" + fi +} + +POSTGRES_USER=999 +NGINX_USER=104 +NGINX_GROUP=107 +ES_USER=1000 +REDIS_USER=999 +MONGO_USER=999 +RABBITMQ_USER=999 +LOG_FILE_PERMISSION=640 +PID_FILE_PERMISSION=644 + +# Copy file +copyFile(){ + local source=$1 + local target=$2 + local mode=${3:-overwrite} + local enableVerbose=${4:-"${FLAG_N}"} + local verboseFlag="" + + if [ ! -z "${enableVerbose}" ] && [ "${enableVerbose}" == "${FLAG_Y}" ]; then + verboseFlag="-v" + fi + + if [[ ! ( $source && $target ) ]]; then + warn "Source and target are mandatory to copy a file" + return 1 + fi + + if [[ -f "${target}" ]]; then + [[ "$mode" = "overwrite" ]] && ( cp ${verboseFlag} -f "$source" "$target" || errorExit "Unable to copy file, command : cp -f ${source} ${target}") || true + else + cp ${verboseFlag} -f "$source" "$target" || errorExit "Unable to copy file, command : cp -f ${source} ${target}" + fi +} + +# Copy files recursively from a given source directory to a destination directory +# This method will copy but will NOT overwrite +# Destination will be created if it's not available +copyFilesNoOverwrite(){ + local src=$1 + local dest=$2 + local enableVerboseCopy="${3:-${FLAG_Y}}" + + if [[ -z "${src}" || -z "${dest}" ]]; then + return + fi + + if [ -d "${src}" ] && [ "$(ls -A ${src})" ]; then + local relativeFilePath="" + local targetFilePath="" + + for file in $(find ${src} -type f 2>/dev/null) ; do + # Derive relative path and attach it to destination + # Example : + # src=/extra_config + # dest=/var/opt/jfrog/artifactory/etc + # file=/extra_config/config.xml + # relativeFilePath=/config.xml + # targetFilePath=/var/opt/jfrog/artifactory/etc/config.xml + relativeFilePath=${file/${src}/} + targetFilePath=${dest}${relativeFilePath} + + createDir "$(dirname "$targetFilePath")" + copyFile "${file}" "${targetFilePath}" "no_overwrite" "${enableVerboseCopy}" + done + fi +} + +# TODO : WINDOWS ? 
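The path derivation inside `copyFilesNoOverwrite` above relies on bash's `${var/pattern/}` substitution to strip the source prefix from each discovered file, keeping the leading slash so the result can be appended directly to the destination. A standalone sketch of that step (the paths are example values in the spirit of the script's own comment):

```shell
#!/bin/bash
# Demonstrates the ${file/${src}/} prefix-strip used by copyFilesNoOverwrite.
# Paths are illustrative example values.
src="/extra_config"
dest="/var/opt/jfrog/artifactory/etc"
file="/extra_config/sub/config.xml"

# Remove the first occurrence of $src from $file, then prepend $dest
relativeFilePath=${file/${src}/}
targetFilePath=${dest}${relativeFilePath}

echo "$relativeFilePath"  # prints: /sub/config.xml
echo "$targetFilePath"    # prints: /var/opt/jfrog/artifactory/etc/sub/config.xml
```

Note that `${file/${src}/}` removes the first occurrence of `$src` anywhere in the string, not strictly a prefix; it works here because `find` emits paths that start with `$src`.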
+# Check the max open files and open processes set on the system +checkULimits () { + local minMaxOpenFiles=${1:-32000} + local minMaxOpenProcesses=${2:-1024} + local setValue=${3:-true} + local warningMsgForFiles=${4} + local warningMsgForProcesses=${5} + + logger "Checking open files and processes limits" + + local currentMaxOpenFiles=$(ulimit -n) + logger "Current max open files is $currentMaxOpenFiles" + if [ ${currentMaxOpenFiles} != "unlimited" ] && [ "$currentMaxOpenFiles" -lt "$minMaxOpenFiles" ]; then + if [ "${setValue}" == "true" ]; then + ulimit -n "${minMaxOpenFiles}" >/dev/null 2>&1 || warn "Max number of open files $currentMaxOpenFiles is low!" + [ -z "${warningMsgForFiles}" ] || warn "${warningMsgForFiles}" + else + errorExit "Max number of open files $currentMaxOpenFiles, is too low. Cannot run the application!" + fi + fi + + local currentMaxOpenProcesses=$(ulimit -u) + logger "Current max open processes is $currentMaxOpenProcesses" + if [ "$currentMaxOpenProcesses" != "unlimited" ] && [ "$currentMaxOpenProcesses" -lt "$minMaxOpenProcesses" ]; then + if [ "${setValue}" == "true" ]; then + ulimit -u "${minMaxOpenProcesses}" >/dev/null 2>&1 || warn "Max number of open processes $currentMaxOpenProcesses is low!" + [ -z "${warningMsgForProcesses}" ] || warn "${warningMsgForProcesses}" + else + errorExit "Max number of open processes $currentMaxOpenProcesses, is too low. Cannot run the application!" + fi + fi +} + +createDirs() { + local appDataDir=$1 + local serviceName=$2 + local folders="backup bootstrap data etc logs work" + + [ -z "${appDataDir}" ] && errorExit "An application directory is mandatory to create its data structure" || true + [ -z "${serviceName}" ] && errorExit "A service name is mandatory to create service data structure" || true + + for folder in ${folders} + do + folder=${appDataDir}/${folder}/${serviceName} + if [ ! 
-d "${folder}" ]; then + logger "Creating folder : ${folder}" + mkdir -p "${folder}" || errorExit "Failed to create ${folder}" + fi + done +} + + +testReadWritePermissions () { + local dir_to_check=$1 + local error=false + + [ -d ${dir_to_check} ] || errorExit "'${dir_to_check}' is not a directory" + + local test_file=${dir_to_check}/test-permissions + + # Write file + if echo test > ${test_file} 1> /dev/null 2>&1; then + # Write succeeded. Testing read... + if cat ${test_file} > /dev/null; then + rm -f ${test_file} + else + error=true + fi + else + error=true + fi + + if [ ${error} == true ]; then + return 1 + else + return 0 + fi +} + +# Test directory has read/write permissions for current user +testDirectoryPermissions () { + local dir_to_check=$1 + local error=false + + [ -d ${dir_to_check} ] || errorExit "'${dir_to_check}' is not a directory" + + local u_id=$(id -u) + local id_str="id ${u_id}" + + logger "Testing directory ${dir_to_check} has read/write permissions for user ${id_str}" + + if ! 
testReadWritePermissions ${dir_to_check}; then + error=true + fi + + if [ "${error}" == true ]; then + local stat_data=$(stat -Lc "Directory: %n, permissions: %a, owner: %U, group: %G" ${dir_to_check}) + logger "###########################################################" + logger "${dir_to_check} DOES NOT have proper permissions for user ${id_str}" + logger "${stat_data}" + logger "Mounted directory must have read/write permissions for user ${id_str}" + logger "###########################################################" + errorExit "Directory ${dir_to_check} has bad permissions for user ${id_str}" + fi + logger "Permissions for ${dir_to_check} are good" +} + +# Utility method to create a directory path recursively with chown feature as +# Failure conditions: +## Exits if unable to create a directory +# Parameters: +## $1: Root directory from where the path can be created +## $2: List of recursive child directories separated by space +## $3: user who should own the directory. Optional +## $4: group who should own the directory. Optional +# Depends on global: none +# Updates global: none +# Returns: NA +# +# Usage: +# createRecursiveDir "/opt/jfrog/product/var" "bootstrap tomcat lib" "user_name" "group_name" +createRecursiveDir(){ + local rootDir=$1 + local pathDirs=$2 + local user=$3 + local group=${4:-${user}} + local fullPath= + + [ ! -z "${rootDir}" ] || return 0 + + createDir "${rootDir}" "${user}" "${group}" + + [ ! -z "${pathDirs}" ] || return 0 + + fullPath=${rootDir} + + for dir in ${pathDirs}; do + fullPath=${fullPath}/${dir} + createDir "${fullPath}" "${user}" "${group}" + done +} + +# Utility method to create a directory +# Failure conditions: +## Exits if unable to create a directory +# Parameters: +## $1: directory to create +## $2: user who should own the directory. Optional +## $3: group who should own the directory. 
Optional +# Depends on global: none +# Updates global: none +# Returns: NA + +createDir(){ + local dirName="$1" + local printMessage=no + logSilly "Method ${FUNCNAME[0]} invoked with [$dirName]" + [ -z "${dirName}" ] && return + + logDebug "Attempting to create ${dirName}" + mkdir -p "${dirName}" || errorExit "Unable to create directory: [${dirName}]" + local userID="$2" + local groupID=${3:-$userID} + + # If UID/GID is passed, chown the folder + if [ ! -z "$userID" ] && [ ! -z "$groupID" ]; then + # Earlier, this line would have returned 1 if it failed. Now it just warns. + # This is intentional. Earlier, this line would NOT be reached if the folder already existed. + # Since it will always come to this line and the script may be running as a non-root user, this method will just warn if + # setting permissions fails (so as to not affect any existing flows) + io_setOwnershipNonRecursive "$dirName" "$userID" "$groupID" || warn "Could not set owner of [$dirName] to [$userID:$groupID]" + fi + # logging message to print created dir with user and group + local logMessage=${4:-$printMessage} + if [[ "${logMessage}" == "yes" ]]; then + logger "Successfully created directory [${dirName}]. 
Owner: [${userID}:${groupID}]" + fi +} + +removeSoftLinkAndCreateDir () { + local dirName="$1" + local userID="$2" + local groupID="$3" + local logMessage="$4" + removeSoftLink "${dirName}" + createDir "${dirName}" "${userID}" "${groupID}" "${logMessage}" +} + +# Utility method to remove a soft link +removeSoftLink () { + local dirName="$1" + if [[ -L "${dirName}" ]]; then + targetLink=$(readlink -f "${dirName}") + logger "Removing the symlink [${dirName}] pointing to [${targetLink}]" + rm -f "${dirName}" + fi +} + +# Check Directory exist in the path +checkDirExists () { + local directoryPath="$1" + + [[ -d "${directoryPath}" ]] && echo -n "true" || echo -n "false" +} + + +# Utility method to create a file +# Failure conditions: +# Parameters: +## $1: file to create +# Depends on global: none +# Updates global: none +# Returns: NA + +createFile(){ + local fileName="$1" + logSilly "Method ${FUNCNAME[0]} [$fileName]" + [ -f "${fileName}" ] && return 0 + touch "${fileName}" || return 1 + + local userID="$2" + local groupID=${3:-$userID} + + # If UID/GID is passed, chown the folder + if [ ! -z "$userID" ] && [ ! 
-z "$groupID" ]; then + io_setOwnership "$fileName" "$userID" "$groupID" || return 1 + fi +} + +# Check File exist in the filePath +# IMPORTANT- DON'T ADD LOGGING to this method +checkFileExists () { + local filePath="$1" + + [[ -f "${filePath}" ]] && echo -n "true" || echo -n "false" +} + +# Check for directories contains any (files or sub directories) +# IMPORTANT- DON'T ADD LOGGING to this method +checkDirContents () { + local directoryPath="$1" + if [[ "$(ls -1 "${directoryPath}" | wc -l)" -gt 0 ]]; then + echo -n "true" + else + echo -n "false" + fi +} + +# Check contents exist in directory +# IMPORTANT- DON'T ADD LOGGING to this method +checkContentExists () { + local source="$1" + + if [[ "$(checkDirContents "${source}")" != "true" ]]; then + echo -n "false" + else + echo -n "true" + fi +} + +# Resolve the variable +# IMPORTANT- DON'T ADD LOGGING to this method +evalVariable () { + local output="$1" + local input="$2" + + eval "${output}"=\${"${input}"} + eval echo \${"${output}"} +} + +# Usage: if [ "$(io_commandExists 'curl')" == "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_commandExists() { + local commandToExecute="$1" + hash "${commandToExecute}" 2>/dev/null + local rt=$? 
+ if [ "$rt" == 0 ]; then echo -n "yes"; else echo -n "no"; fi +} + +# Usage: if [ "$(io_curlExists)" != "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_curlExists() { + io_commandExists "curl" +} + + +io_hasMatch() { + logSilly "Method ${FUNCNAME[0]}" + local result=0 + logDebug "Executing [echo \"$1\" | grep \"$2\" >/dev/null 2>&1]" + echo "$1" | grep "$2" >/dev/null 2>&1 || result=1 + return $result +} + +# Utility method to check if the string passed (usually a connection url) corresponds to this machine itself +# Failure conditions: None +# Parameters: +## $1: string to check against +# Depends on global: none +# Updates global: IS_LOCALHOST with value "yes/no" +# Returns: NA + +io_getIsLocalhost() { + logSilly "Method ${FUNCNAME[0]}" + IS_LOCALHOST="$FLAG_N" + local inputString="$1" + logDebug "Parsing [$inputString] to check if we are dealing with this machine itself" + + io_hasMatch "$inputString" "localhost" && { + logDebug "Found localhost. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for localhost" + + local hostIP=$(io_getPublicHostIP) + io_hasMatch "$inputString" "$hostIP" && { + logDebug "Found $hostIP. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostIP" + + local hostID=$(io_getPublicHostID) + io_hasMatch "$inputString" "$hostID" && { + logDebug "Found $hostID. Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostID" + + local hostName=$(io_getPublicHostName) + io_hasMatch "$inputString" "$hostName" && { + logDebug "Found $hostName. 
Returning [$FLAG_Y]" + IS_LOCALHOST="$FLAG_Y" && return; + } || logDebug "Did not find match for $hostName" + +} + +# Usage: if [ "$(io_tarExists)" != "yes" ] +# IMPORTANT- DON'T ADD LOGGING to this method +io_tarExists() { + io_commandExists "tar" +} + +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostIP() { + local OS_TYPE=$(uname) + local publicHostIP= + if [ "${OS_TYPE}" == "Darwin" ]; then + ipStatus=$(ifconfig en0 | grep "status" | awk '{print$2}') + if [ "${ipStatus}" == "active" ]; then + publicHostIP=$(ifconfig en0 | grep inet | grep -v inet6 | awk '{print $2}') + else + errorExit "Host IP could not be resolved!" + fi + elif [ "${OS_TYPE}" == "Linux" ]; then + publicHostIP=$(hostname -i 2>/dev/null || echo "127.0.0.1") + fi + publicHostIP=$(echo "${publicHostIP}" | awk '{print $1}') + echo -n "${publicHostIP}" +} + +# Will return the short host name (up to the first dot) +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostName() { + echo -n "$(hostname -s)" +} + +# Will return the full host name (use this as much as possible) +# IMPORTANT- DON'T ADD LOGGING to this method +io_getPublicHostID() { + echo -n "$(hostname)" +} + +# Utility method to backup a file +# Failure conditions: NA +# Parameters: filePath +# Depends on global: none, +# Updates global: none +# Returns: NA +io_backupFile() { + logSilly "Method ${FUNCNAME[0]}" + fileName="$1" + if [ ! 
-f "${filePath}" ]; then + logDebug "No file: [${filePath}] to backup" + return + fi + dateTime=$(date +"%Y-%m-%d-%H-%M-%S") + targetFileName="${fileName}.backup.${dateTime}" + yes | \cp -f "$fileName" "${targetFileName}" + logger "File [${fileName}] backedup as [${targetFileName}]" +} + +# Reference https://stackoverflow.com/questions/4023830/how-to-compare-two-strings-in-dot-separated-version-format-in-bash/4025065#4025065 +is_number() { + case "$BASH_VERSION" in + 3.1.*) + PATTERN='\^\[0-9\]+\$' + ;; + *) + PATTERN='^[0-9]+$' + ;; + esac + + [[ "$1" =~ $PATTERN ]] +} + +io_compareVersions() { + if [[ $# != 2 ]] + then + echo "Usage: min_version current minimum" + return + fi + + A="${1%%.*}" + B="${2%%.*}" + + if [[ "$A" != "$1" && "$B" != "$2" && "$A" == "$B" ]] + then + io_compareVersions "${1#*.}" "${2#*.}" + else + if is_number "$A" && is_number "$B" + then + if [[ "$A" -eq "$B" ]]; then + echo "0" + elif [[ "$A" -gt "$B" ]]; then + echo "1" + elif [[ "$A" -lt "$B" ]]; then + echo "-1" + fi + fi + fi +} + +# Reference https://stackoverflow.com/questions/369758/how-to-trim-whitespace-from-a-bash-variable +# Strip all leading and trailing spaces +# IMPORTANT- DON'T ADD LOGGING to this method +io_trim() { + local var="$1" + # remove leading whitespace characters + var="${var#"${var%%[![:space:]]*}"}" + # remove trailing whitespace characters + var="${var%"${var##*[![:space:]]}"}" + echo -n "$var" +} + +# temporary function will be removing it ASAP +# search for string and replace text in file +replaceText_migration_hook () { + local regexString="$1" + local replaceText="$2" + local file="$3" + + if [[ "$(checkFileExists "${file}")" != "true" ]]; then + return + fi + if [[ $(uname) == "Darwin" ]]; then + sed -i '' -e "s/${regexString}/${replaceText}/" "${file}" || warn "Failed to replace the text in ${file}" + else + sed -i -e "s/${regexString}/${replaceText}/" "${file}" || warn "Failed to replace the text in ${file}" + fi +} + +# search for string and replace 
text in file
+replaceText () {
+    local regexString="$1"
+    local replaceText="$2"
+    local file="$3"
+
+    if [[ "$(checkFileExists "${file}")" != "true" ]]; then
+        return
+    fi
+    if [[ $(uname) == "Darwin" ]]; then
+        sed -i '' -e "s#${regexString}#${replaceText}#" "${file}" || warn "Failed to replace the text in ${file}"
+    else
+        sed -i -e "s#${regexString}#${replaceText}#" "${file}" || warn "Failed to replace the text in ${file}"
+    fi
+    logDebug "Replaced [$regexString] with [$replaceText] in [$file]"
+}
+
+# search for string and prepend text in file
+prependText () {
+    local regexString="$1"
+    local text="$2"
+    local file="$3"
+
+    if [[ "$(checkFileExists "${file}")" != "true" ]]; then
+        return
+    fi
+    if [[ $(uname) == "Darwin" ]]; then
+        sed -i '' -e '/'"${regexString}"'/i\'$'\n\\'"${text}"''$'\n' "${file}" || warn "Failed to prepend the text in ${file}"
+    else
+        sed -i -e '/'"${regexString}"'/i\'$'\n\\'"${text}"''$'\n' "${file}" || warn "Failed to prepend the text in ${file}"
+    fi
+}
+
+# add text to beginning of the file
+addText () {
+    local text="$1"
+    local file="$2"
+
+    if [[ "$(checkFileExists "${file}")" != "true" ]]; then
+        return
+    fi
+    if [[ $(uname) == "Darwin" ]]; then
+        sed -i '' -e '1s/^/'"${text}"'\'$'\n/' "${file}" || warn "Failed to add the text in ${file}"
+    else
+        sed -i -e '1s/^/'"${text}"'\'$'\n/' "${file}" || warn "Failed to add the text in ${file}"
+    fi
+}
+
+# Replace firstString with secondString in value.
+# The sed invocation here is portable, so no OS-specific branching is needed.
+io_replaceString () {
+    local value="$1"
+    local firstString="$2"
+    local secondString="$3"
+    local separator=${4:-"/"}
+    local updateValue=
+    updateValue=$(echo "${value}" | sed "s${separator}${firstString}${separator}${secondString}${separator}")
+    echo -n "${updateValue}"
+}
+
+_findYQ() {
+    # logSilly "Method ${FUNCNAME[0]}" (Intentionally not logging. 
Does not add value) + local parentDir="$1" + if [ -z "$parentDir" ]; then + return + fi + logDebug "Executing command [find "${parentDir}" -name third-party -type d]" + local yq=$(find "${parentDir}" -name third-party -type d) + if [ -d "${yq}/yq" ]; then + export YQ_PATH="${yq}/yq" + fi +} + + +io_setYQPath() { + # logSilly "Method ${FUNCNAME[0]}" (Intentionally not logging. Does not add value) + if [ "$(io_commandExists 'yq')" == "yes" ]; then + return + fi + + if [ ! -z "${JF_PRODUCT_HOME}" ] && [ -d "${JF_PRODUCT_HOME}" ]; then + _findYQ "${JF_PRODUCT_HOME}" + fi + + if [ -z "${YQ_PATH}" ] && [ ! -z "${COMPOSE_HOME}" ] && [ -d "${COMPOSE_HOME}" ]; then + _findYQ "${COMPOSE_HOME}" + fi + # TODO We can remove this block after all the code is restructured. + if [ -z "${YQ_PATH}" ] && [ ! -z "${SCRIPT_HOME}" ] && [ -d "${SCRIPT_HOME}" ]; then + _findYQ "${SCRIPT_HOME}" + fi + +} + +io_getLinuxDistribution() { + LINUX_DISTRIBUTION= + + # Make sure running on Linux + [ $(uname -s) != "Linux" ] && return + + # Find out what Linux distribution we are on + + cat /etc/*-release | grep -i Red >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat || true + + # OS 6.x + cat /etc/issue.net | grep Red >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat || true + + # OS 7.x + cat /etc/*-release | grep -i centos >/dev/null 2>&1 && LINUX_DISTRIBUTION=CentOS && LINUX_DISTRIBUTION_VER="7" || true + + # OS 8.x + grep -q -i "release 8" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="8" || true + + # OS 7.x + grep -q -i "release 7" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="7" || true + + # OS 6.x + grep -q -i "release 6" /etc/redhat-release >/dev/null 2>&1 && LINUX_DISTRIBUTION_VER="6" || true + + cat /etc/*-release | grep -i Red | grep -i 'VERSION=7' >/dev/null 2>&1 && LINUX_DISTRIBUTION=RedHat && LINUX_DISTRIBUTION_VER="7" || true + + cat /etc/*-release | grep -i debian >/dev/null 2>&1 && LINUX_DISTRIBUTION=Debian || true + + cat /etc/*-release | grep -i ubuntu 
>/dev/null 2>&1 && LINUX_DISTRIBUTION=Ubuntu || true +} + +## Utility method to check ownership of folders/files +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If file is not owned by the user & group +## Parameters: + ## user + ## group + ## folder to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac +io_checkOwner () { + logSilly "Method ${FUNCNAME[0]}" + local osType=$(uname) + + if [ "${osType}" != "Linux" ]; then + logDebug "Unsupported OS. Skipping check" + return 0 + fi + + local file_to_check=$1 + local user_id_to_check=$2 + + + if [ -z "$user_id_to_check" ] || [ -z "$file_to_check" ]; then + errorExit "Invalid invocation of method. Missing mandatory inputs" + fi + + local group_id_to_check=${3:-$user_id_to_check} + local check_user_name=${4:-"no"} + + logDebug "Checking permissions on [$file_to_check] for user [$user_id_to_check] & group [$group_id_to_check]" + + local stat= + + if [ "${check_user_name}" == "yes" ]; then + stat=( $(stat -Lc "%U %G" ${file_to_check}) ) + else + stat=( $(stat -Lc "%u %g" ${file_to_check}) ) + fi + + local user_id=${stat[0]} + local group_id=${stat[1]} + + if [[ "${user_id}" != "${user_id_to_check}" ]] || [[ "${group_id}" != "${group_id_to_check}" ]] ; then + logDebug "Ownership mismatch. 
[${file_to_check}] is not owned by [${user_id_to_check}:${group_id_to_check}]" + return 1 + else + return 0 + fi +} + +## Utility method to change ownership of a file/folder - NON recursive +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If chown operation fails - returns 1 +## Parameters: + ## user + ## group + ## file to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac + +io_setOwnershipNonRecursive() { + + local osType=$(uname) + if [ "${osType}" != "Linux" ]; then + return + fi + + local targetFile=$1 + local user=$2 + + if [ -z "$user" ] || [ -z "$targetFile" ]; then + errorExit "Invalid invocation of method. Missing mandatory inputs" + fi + + local group=${3:-$user} + logDebug "Method ${FUNCNAME[0]}. Executing [chown ${user}:${group} ${targetFile}]" + chown ${user}:${group} ${targetFile} || return 1 +} + +## Utility method to change ownership of a file. +## IMPORTANT +## If being called on a folder, should ONLY be called for fresh folders or may cause performance issues +## Failure conditions: + ## If invoked with incorrect inputs - FATAL + ## If chown operation fails - returns 1 +## Parameters: + ## user + ## group + ## file to chown +## Globals: none +## Returns: none +## NOTE: The method does NOTHING if the OS is Mac + +io_setOwnership() { + + local osType=$(uname) + if [ "${osType}" != "Linux" ]; then + return + fi + + local targetFile=$1 + local user=$2 + + if [ -z "$user" ] || [ -z "$targetFile" ]; then + errorExit "Invalid invocation of method. Missing mandatory inputs" + fi + + local group=${3:-$user} + logDebug "Method ${FUNCNAME[0]}. 
Executing [chown -R ${user}:${group} ${targetFile}]" + chown -R ${user}:${group} ${targetFile} || return 1 +} + +## Utility method to create third party folder structure necessary for Postgres +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## POSTGRESQL_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createPostgresDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${POSTGRESQL_DATA_ROOT}" ] && return 0 + + logDebug "Property [${POSTGRESQL_DATA_ROOT}] exists. Proceeding" + + createDir "${POSTGRESQL_DATA_ROOT}/data" + io_setOwnership "${POSTGRESQL_DATA_ROOT}" "${POSTGRES_USER}" "${POSTGRES_USER}" || errorExit "Setting ownership of [${POSTGRESQL_DATA_ROOT}] to [${POSTGRES_USER}:${POSTGRES_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Nginx +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## NGINX_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createNginxDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${NGINX_DATA_ROOT}" ] && return 0 + + logDebug "Property [${NGINX_DATA_ROOT}] exists. Proceeding" + + createDir "${NGINX_DATA_ROOT}" + io_setOwnership "${NGINX_DATA_ROOT}" "${NGINX_USER}" "${NGINX_GROUP}" || errorExit "Setting ownership of [${NGINX_DATA_ROOT}] to [${NGINX_USER}:${NGINX_GROUP}] failed" +} + +## Utility method to create third party folder structure necessary for ElasticSearch +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## ELASTIC_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createElasticSearchDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${ELASTIC_DATA_ROOT}" ] && return 0 + + logDebug "Property [${ELASTIC_DATA_ROOT}] exists. 
Proceeding" + + createDir "${ELASTIC_DATA_ROOT}/data" + io_setOwnership "${ELASTIC_DATA_ROOT}" "${ES_USER}" "${ES_USER}" || errorExit "Setting ownership of [${ELASTIC_DATA_ROOT}] to [${ES_USER}:${ES_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Redis +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## REDIS_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createRedisDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${REDIS_DATA_ROOT}" ] && return 0 + + logDebug "Property [${REDIS_DATA_ROOT}] exists. Proceeding" + + createDir "${REDIS_DATA_ROOT}" + io_setOwnership "${REDIS_DATA_ROOT}" "${REDIS_USER}" "${REDIS_USER}" || errorExit "Setting ownership of [${REDIS_DATA_ROOT}] to [${REDIS_USER}:${REDIS_USER}] failed" +} + +## Utility method to create third party folder structure necessary for Mongo +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## MONGODB_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createMongoDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${MONGODB_DATA_ROOT}" ] && return 0 + + logDebug "Property [${MONGODB_DATA_ROOT}] exists. 
Proceeding" + + createDir "${MONGODB_DATA_ROOT}/logs" + createDir "${MONGODB_DATA_ROOT}/configdb" + createDir "${MONGODB_DATA_ROOT}/db" + io_setOwnership "${MONGODB_DATA_ROOT}" "${MONGO_USER}" "${MONGO_USER}" || errorExit "Setting ownership of [${MONGODB_DATA_ROOT}] to [${MONGO_USER}:${MONGO_USER}] failed" +} + +## Utility method to create third party folder structure necessary for RabbitMQ +## Failure conditions: +## If creation of directory or assigning permissions fails +## Parameters: none +## Globals: +## RABBITMQ_DATA_ROOT +## Returns: none +## NOTE: The method does NOTHING if the folder already exists +io_createRabbitMQDir() { + logDebug "Method ${FUNCNAME[0]}" + [ -z "${RABBITMQ_DATA_ROOT}" ] && return 0 + + logDebug "Property [${RABBITMQ_DATA_ROOT}] exists. Proceeding" + + createDir "${RABBITMQ_DATA_ROOT}" + io_setOwnership "${RABBITMQ_DATA_ROOT}" "${RABBITMQ_USER}" "${RABBITMQ_USER}" || errorExit "Setting ownership of [${RABBITMQ_DATA_ROOT}] to [${RABBITMQ_USER}:${RABBITMQ_USER}] failed" +} + +# Add or replace a property in provided properties file +addOrReplaceProperty() { + local propertyName=$1 + local propertyValue=$2 + local propertiesPath=$3 + local delimiter=${4:-"="} + + # Return if any of the inputs are empty + [[ -z "$propertyName" || "$propertyName" == "" ]] && return + [[ -z "$propertyValue" || "$propertyValue" == "" ]] && return + [[ -z "$propertiesPath" || "$propertiesPath" == "" ]] && return + + grep "^${propertyName}\s*${delimiter}.*$" ${propertiesPath} > /dev/null 2>&1 + [ $? 
-ne 0 ] && echo -e "\n${propertyName}${delimiter}${propertyValue}" >> ${propertiesPath} + sed -i -e "s|^${propertyName}\s*${delimiter}.*$|${propertyName}${delimiter}${propertyValue}|g;" ${propertiesPath} +} + +# Set property only if its not set +io_setPropertyNoOverride(){ + local propertyName=$1 + local propertyValue=$2 + local propertiesPath=$3 + + # Return if any of the inputs are empty + [[ -z "$propertyName" || "$propertyName" == "" ]] && return + [[ -z "$propertyValue" || "$propertyValue" == "" ]] && return + [[ -z "$propertiesPath" || "$propertiesPath" == "" ]] && return + + grep "^${propertyName}:" ${propertiesPath} > /dev/null 2>&1 + if [ $? -ne 0 ]; then + echo -e "${propertyName}: ${propertyValue}" >> ${propertiesPath} || warn "Setting property ${propertyName}: ${propertyValue} in [ ${propertiesPath} ] failed" + else + logger "Skipping update of property : ${propertyName}" >&6 + fi +} + +# Add a line to a file if it doesn't already exist +addLine() { + local line_to_add=$1 + local target_file=$2 + logger "Trying to add line $1 to $2" >&6 2>&1 + cat "$target_file" | grep -F "$line_to_add" -wq >&6 2>&1 + if [ $? != 0 ]; then + logger "Line does not exist and will be added" >&6 2>&1 + echo $line_to_add >> $target_file || errorExit "Could not update $target_file" + fi +} + +# Utility method to check if a value (first parameter) exists in an array (2nd parameter) +# 1st parameter "value to find" +# 2nd parameter "The array to search in. 
Please pass a string with each value separated by space" +# Example: containsElement "y" "y Y n N" +containsElement () { + local searchElement=$1 + local searchArray=($2) + local found=1 + for elementInIndex in "${searchArray[@]}";do + if [[ $elementInIndex == $searchElement ]]; then + found=0 + fi + done + return $found +} + +# Utility method to get user's choice +# 1st parameter "what to ask the user" +# 2nd parameter "what choices to accept, separated by spaces" +# 3rd parameter "what is the default choice (to use if the user simply presses Enter)" +# Example 'getUserChoice "Are you feeling lucky? Punk!" "y n Y N" "y"' +getUserChoice(){ + configureLogOutput + read_timeout=${read_timeout:-0.5} + local choice="na" + local text_to_display=$1 + local choices=$2 + local default_choice=$3 + users_choice= + + until containsElement "$choice" "$choices"; do + echo "";echo ""; + sleep $read_timeout #This ensures correct placement of the question. + read -p "$text_to_display :" choice + : ${choice:=$default_choice} + done + users_choice=$choice + echo -e "\n$text_to_display: $users_choice" >&6 + sleep $read_timeout #This ensures correct logging +} + +setFilePermission () { + local permission=$1 + local file=$2 + chmod "${permission}" "${file}" || warn "Setting permission ${permission} to file [ ${file} ] failed" +} + + +#setting required paths +setAppDir (){ + SCRIPT_DIR=$(dirname $0) + SCRIPT_HOME="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + APP_DIR="`cd "${SCRIPT_HOME}";pwd`" +} + +ZIP_TYPE="zip" +COMPOSE_TYPE="compose" +HELM_TYPE="helm" +RPM_TYPE="rpm" +DEB_TYPE="debian" + +sourceScript () { + local file="$1" + + [ ! -z "${file}" ] || errorExit "target file is not passed to source a file" + + if [ ! 
-f "${file}" ]; then + errorExit "${file} file is not found" + else + source "${file}" || errorExit "Unable to source ${file}, please check if the user ${USER} has permissions to perform this action" + fi +} +# Source required helpers +initHelpers () { + local systemYamlHelper="${APP_DIR}/systemYamlHelper.sh" + local thirdPartyDir=$(find ${APP_DIR}/.. -name third-party -type d) + export YQ_PATH="${thirdPartyDir}/yq" + LIBXML2_PATH="${thirdPartyDir}/libxml2/bin/xmllint" + export LD_LIBRARY_PATH="${thirdPartyDir}/libxml2/lib" + sourceScript "${systemYamlHelper}" +} +# Check migration info yaml file available in the path +checkMigrationInfoYaml () { + + if [[ -f "${APP_DIR}/migrationHelmInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationHelmInfo.yaml" + INSTALLER="${HELM_TYPE}" + elif [[ -f "${APP_DIR}/migrationZipInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationZipInfo.yaml" + INSTALLER="${ZIP_TYPE}" + elif [[ -f "${APP_DIR}/migrationRpmInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationRpmInfo.yaml" + INSTALLER="${RPM_TYPE}" + elif [[ -f "${APP_DIR}/migrationDebInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationDebInfo.yaml" + INSTALLER="${DEB_TYPE}" + elif [[ -f "${APP_DIR}/migrationComposeInfo.yaml" ]]; then + MIGRATION_SYSTEM_YAML_INFO="${APP_DIR}/migrationComposeInfo.yaml" + INSTALLER="${COMPOSE_TYPE}" + else + errorExit "File migration Info yaml does not exist in [${APP_DIR}]" + fi +} + +retrieveYamlValue () { + local yamlPath="$1" + local value="$2" + local output="$3" + local message="$4" + + [[ -z "${yamlPath}" ]] && errorExit "yamlPath is mandatory to get value from ${MIGRATION_SYSTEM_YAML_INFO}" + + getYamlValue "${yamlPath}" "${MIGRATION_SYSTEM_YAML_INFO}" "false" + value="${YAML_VALUE}" + if [[ -z "${value}" ]]; then + if [[ "${output}" == "Warning" ]]; then + warn "Empty value for ${yamlPath} in [${MIGRATION_SYSTEM_YAML_INFO}]" + elif [[ "${output}" == "Skip" ]]; then + return 
+ else + errorExit "${message}" + fi + fi +} + +checkEnv () { + + if [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + # check Environment JF_PRODUCT_HOME is set before migration + NEW_DATA_DIR="$(evalVariable "NEW_DATA_DIR" "JF_PRODUCT_HOME")" + if [[ -z "${NEW_DATA_DIR}" ]]; then + errorExit "Environment variable JF_PRODUCT_HOME is not set, this is required to perform Migration" + fi + # appending var directory to $JF_PRODUCT_HOME + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + elif [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + getCustomDataDir_hook + NEW_DATA_DIR="${OLD_DATA_DIR}" + if [[ -z "${NEW_DATA_DIR}" ]] && [[ -z "${OLD_DATA_DIR}" ]]; then + errorExit "Could not find ${PROMPT_DATA_DIR_LOCATION} to perform Migration" + fi + else + # check Environment JF_ROOT_DATA_DIR is set before migration + OLD_DATA_DIR="$(evalVariable "OLD_DATA_DIR" "JF_ROOT_DATA_DIR")" + # check Environment JF_ROOT_DATA_DIR is set before migration + NEW_DATA_DIR="$(evalVariable "NEW_DATA_DIR" "JF_ROOT_DATA_DIR")" + if [[ -z "${NEW_DATA_DIR}" ]] && [[ -z "${OLD_DATA_DIR}" ]]; then + errorExit "Could not find ${PROMPT_DATA_DIR_LOCATION} to perform Migration" + fi + # appending var directory to $JF_PRODUCT_HOME + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + fi + +} + +getDataDir () { + + if [[ "${INSTALLER}" == "${ZIP_TYPE}" || "${INSTALLER}" == "${COMPOSE_TYPE}"|| "${INSTALLER}" == "${HELM_TYPE}" ]]; then + checkEnv + else + getCustomDataDir_hook + NEW_DATA_DIR="`cd "${APP_DIR}"/../../;pwd`" + NEW_DATA_DIR="${NEW_DATA_DIR}/var" + fi +} + +# Retrieve Product name from MIGRATION_SYSTEM_YAML_INFO +getProduct () { + retrieveYamlValue "migration.product" "${YAML_VALUE}" "Fail" "Empty value under ${yamlPath} in [${MIGRATION_SYSTEM_YAML_INFO}]" + PRODUCT="${YAML_VALUE}" + PRODUCT=$(echo "${PRODUCT}" | tr '[:upper:]' '[:lower:]' 2>/dev/null) + if [[ "${PRODUCT}" != "artifactory" && "${PRODUCT}" != "distribution" && "${PRODUCT}" != "xray" ]]; then + errorExit "migration.product in [${MIGRATION_SYSTEM_YAML_INFO}] is 
not correct, please set it based on the product: ARTIFACTORY, DISTRIBUTION or XRAY"
+    fi
+    if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then
+        JF_USER="${PRODUCT}"
+    fi
+}
+# Compare product version with minProductVersion and maxProductVersion
+migrateCheckVersion () {
+    local productVersion="$1"
+    local minProductVersion="$2"
+    local maxProductVersion="$3"
+    local productVersion618="6.18.0"
+    local unSupportedProductVersions7=("7.2.0 7.2.1")
+
+    if [[ "$(io_compareVersions "${productVersion}" "${maxProductVersion}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${maxProductVersion}")" -eq 1 ]]; then
+        logger "Migration not necessary. ${PRODUCT} is already ${productVersion}"
+        exit 11
+    elif [[ "$(io_compareVersions "${productVersion}" "${minProductVersion}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${minProductVersion}")" -eq 1 ]]; then
+        if [[ ("$(io_compareVersions "${productVersion}" "${productVersion618}")" -eq 0 || "$(io_compareVersions "${productVersion}" "${productVersion618}")" -eq 1) && " ${unSupportedProductVersions7[@]} " =~ " ${CURRENT_VERSION} " ]]; then
+            touch /tmp/error
+            errorExit "Current ${PRODUCT} version (${productVersion}) does not support migration to ${CURRENT_VERSION}"
+        else
+            bannerStart "Detected ${PRODUCT} ${productVersion}, initiating migration"
+        fi
+    else
+        logger "Current ${PRODUCT} ${productVersion} version is not supported for migration"
+        exit 1
+    fi
+}
+
+getProductVersion () {
+    local minProductVersion="$1"
+    local maxProductVersion="$2"
+    local newfilePath="$3"
+    local oldfilePath="$4"
+    local propertyInDocker="$5"
+    local property="$6"
+    local productVersion=
+    local status=
+
+    if [[ "$INSTALLER" == "${COMPOSE_TYPE}" ]]; then
+        if [[ -f "${oldfilePath}" ]]; then
+            if [[ "${PRODUCT}" == "artifactory" ]]; then
+                productVersion="$(readKey "${property}" "${oldfilePath}")"
+            else
+                productVersion="$(cat "${oldfilePath}")"
+            fi
+            status="success"
+        elif [[ -f "${newfilePath}" ]]; then
+            productVersion="$(readKey 
"${propertyInDocker}" "${newfilePath}")" + status="fail" + else + logger "File [${oldfilePath}] or [${newfilePath}] not found to get current version." + exit 0 + fi + elif [[ "$INSTALLER" == "${HELM_TYPE}" ]]; then + if [[ -f "${oldfilePath}" ]]; then + if [[ "${PRODUCT}" == "artifactory" ]]; then + productVersion="$(readKey "${property}" "${oldfilePath}")" + else + productVersion="$(cat "${oldfilePath}")" + fi + status="success" + else + productVersion="${CURRENT_VERSION}" + [[ -z "${productVersion}" || "${productVersion}" == "" ]] && logger "${PRODUCT} CURRENT_VERSION is not set" && exit 0 + fi + else + if [[ -f "${newfilePath}" ]]; then + productVersion="$(readKey "${property}" "${newfilePath}")" + status="fail" + elif [[ -f "${oldfilePath}" ]]; then + productVersion="$(readKey "${property}" "${oldfilePath}")" + status="success" + else + if [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + logger "File [${newfilePath}] not found to get current version." + else + logger "File [${oldfilePath}] or [${newfilePath}] not found to get current version." + fi + exit 0 + fi + fi + if [[ -z "${productVersion}" || "${productVersion}" == "" ]]; then + [[ "${status}" == "success" ]] && logger "No version found in file [${oldfilePath}]." + [[ "${status}" == "fail" ]] && logger "No version found in file [${newfilePath}]." + exit 0 + fi + + migrateCheckVersion "${productVersion}" "${minProductVersion}" "${maxProductVersion}" +} + +readKey () { + local property="$1" + local file="$2" + local version= + + while IFS='=' read -r key value || [ -n "${key}" ]; + do + [[ ! "${key}" =~ \#.* && ! -z "${key}" && ! 
-z "${value}" ]] + key="$(io_trim "${key}")" + if [[ "${key}" == "${property}" ]]; then + version="${value}" && check=true && break + else + check=false + fi + done < "${file}" + if [[ "${check}" == "false" ]]; then + return + fi + echo "${version}" +} + +# create Log directory +createLogDir () { + if [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" + fi +} + +# Creating migration log file +creationMigrateLog () { + local LOG_FILE_NAME="migration.log" + createLogDir + local MIGRATION_LOG_FILE="${NEW_DATA_DIR}/log/${LOG_FILE_NAME}" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + MIGRATION_LOG_FILE="${SCRIPT_HOME}/${LOG_FILE_NAME}" + fi + touch "${MIGRATION_LOG_FILE}" + setFilePermission "${LOG_FILE_PERMISSION}" "${MIGRATION_LOG_FILE}" + exec &> >(tee -a "${MIGRATION_LOG_FILE}") +} +# Set path where system.yaml should create +setSystemYamlPath () { + SYSTEM_YAML_PATH="${NEW_DATA_DIR}/etc/system.yaml" + if [[ "${INSTALLER}" != "${HELM_TYPE}" ]]; then + logger "system.yaml will be created in path [${SYSTEM_YAML_PATH}]" + fi +} +# Create directory +createDirectory () { + local directory="$1" + local output="$2" + local check=false + local message="Could not create directory ${directory}, please check if the user ${USER} has permissions to perform this action" + removeSoftLink "${directory}" + mkdir -p "${directory}" && check=true || check=false + if [[ "${check}" == "false" ]]; then + if [[ "${output}" == "Warning" ]]; then + warn "${message}" + else + errorExit "${message}" + fi + fi + setOwnershipBasedOnInstaller "${directory}" +} + +setOwnershipBasedOnInstaller () { + local directory="$1" + if [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + chown -R ${USER_TO_CHECK}:${GROUP_TO_CHECK} "${directory}" || warn "Setting 
ownership on $directory failed" + elif [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + io_setOwnership "${directory}" "${JF_USER}" "${JF_USER}" + fi +} + +getUserAndGroup () { + local file="$1" + read uid gid <<<$(stat -c '%U %G' ${file}) + USER_TO_CHECK="${uid}" + GROUP_TO_CHECK="${gid}" +} + +# set ownership +getUserAndGroupFromFile () { + case $PRODUCT in + artifactory) + getUserAndGroup "/etc/opt/jfrog/artifactory/artifactory.properties" + ;; + distribution) + getUserAndGroup "${OLD_DATA_DIR}/etc/versions.properties" + ;; + xray) + getUserAndGroup "${OLD_DATA_DIR}/security/master.key" + ;; + esac +} + +# creating required directories +createRequiredDirs () { + bannerSubSection "CREATING REQUIRED DIRECTORIES" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc/security" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log/archived" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/work" "${JF_USER}" "${JF_USER}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/backup" "${JF_USER}" "${JF_USER}" "yes" + io_setOwnership "${NEW_DATA_DIR}" "${JF_USER}" "${JF_USER}" + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data/postgres" "${POSTGRES_USER}" "${POSTGRES_USER}" "yes" + fi + elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + getUserAndGroupFromFile + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/etc/security" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/data" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/log/archived" 
"${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/work" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + removeSoftLinkAndCreateDir "${NEW_DATA_DIR}/backup" "${USER_TO_CHECK}" "${GROUP_TO_CHECK}" "yes" + fi +} + +# Check that the entry in the map is in the correct key=value format +checkMapEntry () { + local entry="$1" + + [[ "${entry}" != *"="* ]] && echo -n "false" || echo -n "true" +} +# Check if a value is empty and warn +warnIfEmpty () { + local filePath="$1" + local yamlPath="$2" + local check= + + if [[ -z "${filePath}" ]]; then + warn "Empty value in yamlpath [${yamlPath}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + check=false + else + check=true + fi + echo "${check}" +} + +logCopyStatus () { + local status="$1" + local logMessage="$2" + local warnMessage="$3" + + [[ "${status}" == "success" ]] && logger "${logMessage}" + [[ "${status}" == "fail" ]] && warn "${warnMessage}" +} +# copy contents from source to destination +copyCmd () { + local source="$1" + local target="$2" + local mode="$3" + local status= + + case $mode in + unique) + cp -up "${source}"/* "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied directory contents from [${source}] to [${target}]" "Failed to copy directory contents from [${source}] to [${target}]" + ;; + specific) + cp -pf "${source}" "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied file [${source}] to [${target}]" "Failed to copy file [${source}] to [${target}]" + ;; + patternFiles) + cp -pf "${source}"* "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied files matching [${source}*] to [${target}]" "Failed to copy files matching [${source}*] to [${target}]" + ;; + full) + cp -prf "${source}"/* "${target}"/ && status="success" || status="fail" + logCopyStatus "${status}" "Successfully copied directory contents from [${source}] to [${target}]" "Failed to copy directory contents from [${source}] to 
[${target}]" + ;; + esac +} +# Check contents exist in source before copying +copyOnContentExist () { + local source="$1" + local target="$2" + local mode="$3" + + if [[ "$(checkContentExists "${source}")" == "true" ]]; then + copyCmd "${source}" "${target}" "${mode}" + else + logger "No contents to copy from [${source}]" + fi +} + +# move source to destination +moveCmd () { + local source="$1" + local target="$2" + local status= + + mv -f "${source}" "${target}" && status="success" || status="fail" + [[ "${status}" == "success" ]] && logger "Successfully moved directory [${source}] to [${target}]" + [[ "${status}" == "fail" ]] && warn "Failed to move directory [${source}] to [${target}]" +} + +# symlink target to source +symlinkCmd () { + local source="$1" + local target="$2" + local symlinkSubDir="$3" + local check=false + + if [[ "${symlinkSubDir}" == "subDir" ]]; then + ln -sf "${source}"/* "${target}" && check=true || check=false + else + ln -sf "${source}" "${target}" && check=true || check=false + fi + + [[ "${check}" == "true" ]] && logger "Successfully symlinked directory [${target}] to old [${source}]" + [[ "${check}" == "false" ]] && warn "Symlink operation failed" +} +# Check contents exist in source before symlinking +symlinkOnExist () { + local source="$1" + local target="$2" + local symlinkSubDir="$3" + + if [[ "$(checkContentExists "${source}")" == "true" ]]; then + if [[ "${symlinkSubDir}" == "subDir" ]]; then + symlinkCmd "${source}" "${target}" "subDir" + else + symlinkCmd "${source}" "${target}" + fi + else + logger "No contents to symlink from [${source}]" + fi +} + +prependDir () { + local absolutePath="$1" + local fullPath="$2" + local sourcePath= + + if [[ "${absolutePath}" = \/* ]]; then + sourcePath="${absolutePath}" + else + sourcePath="${fullPath}" + fi + echo "${sourcePath}" +} + +getFirstEntry (){ + local entry="$1" + + [[ -z "${entry}" ]] && return + echo "${entry}" | awk -F"=" '{print $1}' +} + +getSecondEntry () { + local entry="$1" 
+ + [[ -z "${entry}" ]] && return + echo "${entry}" | awk -F"=" '{print $2}' +} +# To get absolutePath +pathResolver () { + local directoryPath="$1" + local dataDir= + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + retrieveYamlValue "migration.oldDataDir" "oldDataDir" "Warning" + dataDir="${YAML_VALUE}" + cd "${dataDir}" + else + cd "${OLD_DATA_DIR}" + fi + absoluteDir="`cd "${directoryPath}";pwd`" + echo "${absoluteDir}" +} + +checkPathResolver () { + local value="$1" + + if [[ "${value}" == \/* ]]; then + value="${value}" + else + value="$(pathResolver "${value}")" + fi + echo "${value}" +} + +propertyMigrate () { + local entry="$1" + local filePath="$2" + local fileName="$3" + local check=false + + local yamlPath="$(getFirstEntry "${entry}")" + local property="$(getSecondEntry "${entry}")" + if [[ -z "${property}" ]]; then + warn "Property is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + if [[ -z "${yamlPath}" ]]; then + warn "yamlPath is empty for [${property}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + local keyValues=$(cat "${NEW_DATA_DIR}/${filePath}/${fileName}" | grep "^[^#]" | grep "[*=*]") + for i in ${keyValues}; do + key=$(echo "${i}" | awk -F"=" '{print $1}') + value=$(echo "${i}" | cut -f 2- -d '=') + [ -z "${key}" ] && continue + [ -z "${value}" ] && continue + if [[ "${key}" == "${property}" ]]; then + if [[ "${PRODUCT}" == "artifactory" ]]; then + value="$(migrateResolveDerbyPath "${key}" "${value}")" + value="$(migrateResolveHaDirPath "${key}" "${value}")" + if [[ "${INSTALLER}" != "${DOCKER_TYPE}" ]]; then + value="$(updatePostgresUrlString_Hook "${yamlPath}" "${value}")" + fi + fi + if [[ "${key}" == "context.url" ]]; then + local ip=$(echo "${value}" | awk -F/ '{print $3}' | sed 's/:.*//') + setSystemValue "shared.node.ip" "${ip}" "${SYSTEM_YAML_PATH}" + logger "Setting [shared.node.ip] with [${ip}] in system.yaml" + fi + setSystemValue "${yamlPath}" 
"${value}" "${SYSTEM_YAML_PATH}" && logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" && check=true && break || check=false + fi + done + [[ "${check}" == "false" ]] && logger "Property [${property}] not found in file [${fileName}]" +} + +setHaEnabled_hook () { + echo "" +} + +migratePropertiesFiles () { + local fileList= + local filePath= + local fileName= + local map= + + retrieveYamlValue "migration.propertyFiles.files" "fileList" "Skip" + fileList="${YAML_VALUE}" + if [[ -z "${fileList}" ]]; then + return + fi + bannerSection "PROCESSING MIGRATION OF PROPERTY FILES" + for file in ${fileList}; + do + bannerSubSection "Processing Migration of $file" + retrieveYamlValue "migration.propertyFiles.$file.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + retrieveYamlValue "migration.propertyFiles.$file.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + [[ -z "${filePath}" && -z "${fileName}" ]] && continue + if [[ "$(checkFileExists "${NEW_DATA_DIR}/${filePath}/${fileName}")" == "true" ]]; then + logger "File [${fileName}] found in path [${NEW_DATA_DIR}/${filePath}]" + # setting haEnabled with true only if ha-node.properties is present + setHaEnabled_hook "${filePath}" + retrieveYamlValue "migration.propertyFiles.$file.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && continue + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + propertyMigrate "${entry}" "${filePath}" "${fileName}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e yamlPath=property" + fi + done + else + logger "File [${fileName}] was not found in path [${NEW_DATA_DIR}/${filePath}] to migrate" + fi + done +} + +createTargetDir () { + local mountDir="$1" + local target="$2" + + logger "Target directory not found [${mountDir}/${target}], creating it" + createDirectoryRecursive "${mountDir}" "${target}" "Warning" +} + 
+createDirectoryRecursive () { + local mountDir="$1" + local target="$2" + local output="$3" + local check=false + local message="Could not create directory ${mountDir}/${target}, please check if the user ${USER} has permissions to perform this action" + removeSoftLink "${mountDir}/${target}" + local directory=$(echo "${target}" | tr '/' ' ' ) + local targetDir="${mountDir}" + for dir in ${directory}; + do + targetDir="${targetDir}/${dir}" + mkdir -p "${targetDir}" && check=true || check=false + setOwnershipBasedOnInstaller "${targetDir}" + done + if [[ "${check}" == "false" ]]; then + if [[ "${output}" == "Warning" ]]; then + warn "${message}" + else + errorExit "${message}" + fi + fi +} + +copyOperation () { + local source="$1" + local target="$2" + local mode="$3" + local check=false + local targetDataDir= + local targetLink= + local date= + + # prepend OLD_DATA_DIR only if source is relative path + source="$(prependDir "${source}" "${OLD_DATA_DIR}/${source}")" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + copyLogMessage "${mode}" + # remove source if it is a symlink + if [[ -L "${source}" ]]; then + targetLink=$(readlink -f "${source}") + logger "Removing the symlink [${source}] pointing to [${targetLink}]" + rm -f "${source}" + source=${targetLink} + fi + if [[ "$(checkDirExists "${source}")" != "true" ]]; then + logger "Source [${source}] directory not found in path" + return + fi + if [[ "$(checkDirContents "${source}")" != "true" ]]; then + logger "No contents to copy from [${source}]" + return + fi + if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then + createTargetDir "${targetDataDir}" "${target}" + fi + copyOnContentExist "${source}" "${targetDataDir}/${target}" "${mode}" +} + +copySpecificFiles () { + local source="$1" + local target="$2" + local mode="$3" + + # prepend OLD_DATA_DIR only if source is relative path + source="$(prependDir 
"${source}" "${OLD_DATA_DIR}/${source}")" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + copyLogMessage "${mode}" + if [[ "$(checkFileExists "${source}")" != "true" ]]; then + logger "Source file [${source}] does not exist in path" + return + fi + if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then + createTargetDir "${targetDataDir}" "${target}" + fi + copyCmd "${source}" "${targetDataDir}/${target}" "${mode}" +} + +copyPatternMatchingFiles () { + local source="$1" + local target="$2" + local mode="$3" + local sourcePath="${4}" + + # prepend OLD_DATA_DIR only if source is relative path + sourcePath="$(prependDir "${sourcePath}" "${OLD_DATA_DIR}/${sourcePath}")" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + copyLogMessage "${mode}" + if [[ "$(checkDirExists "${sourcePath}")" != "true" ]]; then + logger "Source [${sourcePath}] directory not found in path" + return + fi + if ls "${sourcePath}/${source}"* 1> /dev/null 2>&1; then + if [[ "$(checkDirExists "${targetDataDir}/${target}")" != "true" ]]; then + createTargetDir "${targetDataDir}" "${target}" + fi + copyCmd "${sourcePath}/${source}" "${targetDataDir}/${target}" "${mode}" + else + logger "Source file [${sourcePath}/${source}*] does not exist in path" + fi +} + +copyLogMessage () { + local mode="$1" + case $mode in + specific) + logger "Copy file [${source}] to target [${targetDataDir}/${target}]" + ;; + patternFiles) + logger "Copy files matching [${sourcePath}/${source}*] to target [${targetDataDir}/${target}]" + ;; + full) + logger "Copy directory contents from source [${source}] to target [${targetDataDir}/${target}]" + ;; + unique) + logger "Copy directory contents from source [${source}] to target [${targetDataDir}/${target}]" + ;; + esac +} + +copyBannerMessages () { + local mode="$1" 
+ local textMode="$2" + case $mode in + specific) + bannerSection "COPY ${textMode} FILES" + ;; + patternFiles) + bannerSection "COPY MATCHING ${textMode}" + ;; + full) + bannerSection "COPY ${textMode} DIRECTORIES CONTENTS" + ;; + unique) + bannerSection "COPY ${textMode} DIRECTORIES CONTENTS" + ;; + esac +} + +invokeCopyFunctions () { + local mode="$1" + local source="$2" + local target="$3" + + case $mode in + specific) + copySpecificFiles "${source}" "${target}" "${mode}" + ;; + patternFiles) + retrieveYamlValue "migration.${copyFormat}.sourcePath" "map" "Warning" + local sourcePath="${YAML_VALUE}" + copyPatternMatchingFiles "${source}" "${target}" "${mode}" "${sourcePath}" + ;; + full) + copyOperation "${source}" "${target}" "${mode}" + ;; + unique) + copyOperation "${source}" "${target}" "${mode}" + ;; + esac +} +# Copies contents from source directory and target directory +copyDataDirectories () { + local copyFormat="$1" + local mode="$2" + local map= + local source= + local target= + local textMode= + local targetDataDir= + local copyFormatValue= + + retrieveYamlValue "migration.${copyFormat}" "${copyFormat}" "Skip" + copyFormatValue="${YAML_VALUE}" + if [[ -z "${copyFormatValue}" ]]; then + return + fi + textMode=$(echo "${mode}" | tr '[:lower:]' '[:upper:]' 2>/dev/null) + copyBannerMessages "${mode}" "${textMode}" + retrieveYamlValue "migration.${copyFormat}.map" "map" "Warning" + map="${YAML_VALUE}" + if [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + targetDataDir="${NEW_DATA_DIR}" + else + targetDataDir="`cd "${NEW_DATA_DIR}"/../;pwd`" + fi + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + source="$(getSecondEntry "${entry}")" + target="$(getFirstEntry "${entry}")" + [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + invokeCopyFunctions 
"${mode}" "${source}" "${target}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e target=source" + fi + echo ""; + done +} + +invokeMoveFunctions () { + local source="$1" + local target="$2" + local sourceDataDir= + local targetBasename= + # prepend OLD_DATA_DIR only if source is relative path + sourceDataDir=$(prependDir "${source}" "${OLD_DATA_DIR}/${source}") + targetBasename=$(dirname "${target}") + logger "Moving directory source [${sourceDataDir}] to target [${NEW_DATA_DIR}/${target}]" + if [[ "$(checkDirExists "${sourceDataDir}")" != "true" ]]; then + logger "Directory [${sourceDataDir}] not found in path to move" + return + fi + if [[ "$(checkDirExists "${NEW_DATA_DIR}/${targetBasename}")" != "true" ]]; then + createTargetDir "${NEW_DATA_DIR}" "${targetBasename}" + moveCmd "${sourceDataDir}" "${NEW_DATA_DIR}/${target}" + else + moveCmd "${sourceDataDir}" "${NEW_DATA_DIR}/tempDir" + moveCmd "${NEW_DATA_DIR}/tempDir" "${NEW_DATA_DIR}/${target}" + fi +} + +# Move directories from source to target +moveDirectories () { + local moveDirectories= + local map= + local source= + local target= + + retrieveYamlValue "migration.moveDirectories" "moveDirectories" "Skip" + moveDirectories="${YAML_VALUE}" + if [[ -z "${moveDirectories}" ]]; then + return + fi + bannerSection "MOVE DIRECTORIES" + retrieveYamlValue "migration.moveDirectories.map" "map" "Warning" + map="${YAML_VALUE}" + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + source="$(getSecondEntry "${entry}")" + target="$(getFirstEntry "${entry}")" + [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + invokeMoveFunctions "${source}" "${target}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in 
correct format, correct format i.e target=source" + fi + echo ""; + done +} + +# Trim masterKey if its generated using hex 32 +trimMasterKey () { + local masterKeyDir=/opt/jfrog/artifactory/var/etc/security + local oldMasterKey=$(<${masterKeyDir}/master.key) + local oldMasterKey_Length=$(echo ${#oldMasterKey}) + local newMasterKey= + if [[ ${oldMasterKey_Length} -gt 32 ]]; then + bannerSection "TRIM MASTERKEY" + newMasterKey=$(echo ${oldMasterKey:0:32}) + cp ${masterKeyDir}/master.key ${masterKeyDir}/backup_master.key + logger "Original masterKey is backed up : ${masterKeyDir}/backup_master.key" + rm -rf ${masterKeyDir}/master.key + echo ${newMasterKey} > ${masterKeyDir}/master.key + logger "masterKey is trimmed : ${masterKeyDir}/master.key" + fi +} + +copyDirectories () { + + copyDataDirectories "copyFiles" "full" + copyDataDirectories "copyUniqueFiles" "unique" + copyDataDirectories "copySpecificFiles" "specific" + copyDataDirectories "copyPatternMatchingFiles" "patternFiles" +} + +symlinkDir () { + local source="$1" + local target="$2" + local targetDir= + local basename= + local targetParentDir= + + targetDir="$(dirname "${target}")" + if [[ "${targetDir}" == "${source}" ]]; then + # symlink the sub directories + createDirectory "${NEW_DATA_DIR}/${target}" "Warning" + if [[ "$(checkDirExists "${NEW_DATA_DIR}/${target}")" == "true" ]]; then + symlinkOnExist "${OLD_DATA_DIR}/${source}" "${NEW_DATA_DIR}/${target}" "subDir" + basename="$(basename "${target}")" + cd "${NEW_DATA_DIR}/${target}" && rm -f "${basename}" + fi + else + targetParentDir="$(dirname "${NEW_DATA_DIR}/${target}")" + createDirectory "${targetParentDir}" "Warning" + if [[ "$(checkDirExists "${targetParentDir}")" == "true" ]]; then + symlinkOnExist "${OLD_DATA_DIR}/${source}" "${NEW_DATA_DIR}/${target}" + fi + fi +} + +symlinkOperation () { + local source="$1" + local target="$2" + local check=false + local targetLink= + local date= + + # Check if source is a link and do symlink + if [[ -L 
"${OLD_DATA_DIR}/${source}" ]]; then + targetLink=$(readlink -f "${OLD_DATA_DIR}/${source}") + symlinkOnExist "${targetLink}" "${NEW_DATA_DIR}/${target}" + else + # check if source is directory and do symlink + if [[ "$(checkDirExists "${OLD_DATA_DIR}/${source}")" != "true" ]]; then + logger "Source [${source}] directory not found in path to symlink" + return + fi + if [[ "$(checkDirContents "${OLD_DATA_DIR}/${source}")" != "true" ]]; then + logger "No contents found in [${OLD_DATA_DIR}/${source}] to symlink" + return + fi + if [[ "$(checkDirExists "${NEW_DATA_DIR}/${target}")" != "true" ]]; then + logger "Target directory [${NEW_DATA_DIR}/${target}] does not exist to create symlink, creating it" + symlinkDir "${source}" "${target}" + else + rm -rf "${NEW_DATA_DIR}/${target}" && check=true || check=false + [[ "${check}" == "false" ]] && warn "Failed to remove contents in [${NEW_DATA_DIR}/${target}/]" + symlinkDir "${source}" "${target}" + fi + fi +} +# Creates a symlink path - Source directory to which the symbolic link should point. 
+symlinkDirectories () { + local linkFiles= + local map= + local source= + local target= + + retrieveYamlValue "migration.linkFiles" "linkFiles" "Skip" + linkFiles="${YAML_VALUE}" + if [[ -z "${linkFiles}" ]]; then + return + fi + bannerSection "SYMLINK DIRECTORIES" + retrieveYamlValue "migration.linkFiles.map" "map" "Warning" + map="${YAML_VALUE}" + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + source="$(getSecondEntry "${entry}")" + target="$(getFirstEntry "${entry}")" + logger "Symlink directory [${NEW_DATA_DIR}/${target}] to old [${OLD_DATA_DIR}/${source}]" + [[ -z "${source}" ]] && warn "source value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${target}" ]] && warn "target value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + symlinkOperation "${source}" "${target}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e target=source" + fi + echo ""; + done +} + +updateConnectionString () { + local yamlPath="$1" + local value="$2" + local mongoPath="shared.mongo.url" + local rabbitmqPath="shared.rabbitMq.url" + local postgresPath="shared.database.url" + local redisPath="shared.redis.connectionString" + local mongoConnectionString="mongo.connectionString" + local sourceKey= + local hostIp=$(io_getPublicHostIP) + local hostKey= + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + # Replace @postgres:,@mongodb:,@rabbitmq:,@redis: to @{hostIp}: (Compose Installer) + hostKey="@${hostIp}:" + case $yamlPath in + ${postgresPath}) + sourceKey="@postgres:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${mongoPath}) + sourceKey="@mongodb:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${rabbitmqPath}) + sourceKey="@rabbitmq:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${redisPath}) + 
sourceKey="@redis:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + ${mongoConnectionString}) + sourceKey="@mongodb:" + value=$(io_replaceString "${value}" "${sourceKey}" "${hostKey}") + ;; + esac + fi + echo -n "${value}" +} + +yamlMigrate () { + local entry="$1" + local sourceFile="$2" + local value= + local yamlPath= + local key= + yamlPath="$(getFirstEntry "${entry}")" + key="$(getSecondEntry "${entry}")" + if [[ -z "${key}" ]]; then + warn "key is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + if [[ -z "${yamlPath}" ]]; then + warn "yamlPath is empty for [${key}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + getYamlValue "${key}" "${sourceFile}" "false" + value="${YAML_VALUE}" + if [[ ! -z "${value}" ]]; then + value=$(updateConnectionString "${yamlPath}" "${value}") + fi + if [[ -z "${value}" ]]; then + logger "No value for [${key}] in [${sourceFile}]" + else + setSystemValue "${yamlPath}" "${value}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the key [${key}] in system.yaml" + fi +} + +migrateYamlFile () { + local files= + local filePath= + local fileName= + local sourceFile= + local map= + retrieveYamlValue "migration.yaml.files" "files" "Skip" + files="${YAML_VALUE}" + if [[ -z "${files}" ]]; then + return + fi + bannerSection "MIGRATION OF YAML FILES" + for file in $files; + do + bannerSubSection "Processing Migration of $file" + retrieveYamlValue "migration.yaml.$file.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + retrieveYamlValue "migration.yaml.$file.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + [[ -z "${filePath}" && -z "${fileName}" ]] && continue + sourceFile="${NEW_DATA_DIR}/${filePath}/${fileName}" + if [[ "$(checkFileExists "${sourceFile}")" == "true" ]]; then + logger "File [${fileName}] found in path [${NEW_DATA_DIR}/${filePath}]" + retrieveYamlValue "migration.yaml.$file.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z 
"${map}" ]] && continue + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + yamlMigrate "${entry}" "${sourceFile}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e yamlPath=key" + fi + done + else + logger "File [${fileName}] was not found in path [${NEW_DATA_DIR}/${filePath}] to migrate" + fi + done +} +# updates the key and value in system.yaml +updateYamlKeyValue () { + local entry="$1" + local value= + local yamlPath= + + yamlPath="$(getFirstEntry "${entry}")" + value="$(getSecondEntry "${entry}")" + if [[ -z "${value}" ]]; then + warn "value is empty in map [${entry}] in the file [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + if [[ -z "${yamlPath}" ]]; then + warn "yamlPath is empty for [${value}] in [${MIGRATION_SYSTEM_YAML_INFO}]" + return + fi + setSystemValue "${yamlPath}" "${value}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value [${value}] in system.yaml" +} + +updateSystemYamlFile () { + local updateSystemYaml= + local map= + + retrieveYamlValue "migration.updateSystemYaml" "updateYaml" "Skip" + updateSystemYaml="${YAML_VALUE}" + if [[ -z "${updateSystemYaml}" ]]; then + return + fi + bannerSection "UPDATE SYSTEM YAML FILE WITH KEY AND VALUES" + retrieveYamlValue "migration.updateSystemYaml.map" "map" "Warning" + map="${YAML_VALUE}" + if [[ -z "${map}" ]]; then + return + fi + for entry in $map; + do + if [[ "$(checkMapEntry "${entry}")" == "true" ]]; then + updateYamlKeyValue "${entry}" + else + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e yamlPath=key" + fi + done +} + +backupFiles_hook () { + logSilly "Method ${FUNCNAME[0]}" +} + +backupDirectory () { + local backupDir="$1" + local dir="$2" + local targetDir="$3" + local effectiveUser= + local effectiveGroup= + + if [[ "${dir}" = \/* ]]; then + dir=$(echo "${dir/\//}") + fi + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" 
|| "${INSTALLER}" == "${HELM_TYPE}" ]]; then + effectiveUser="${JF_USER}" + effectiveGroup="${JF_USER}" + elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + effectiveUser="${USER_TO_CHECK}" + effectiveGroup="${GROUP_TO_CHECK}" + fi + + removeSoftLinkAndCreateDir "${backupDir}" "${effectiveUser}" "${effectiveGroup}" "yes" + local backupDirectory="${backupDir}/${PRODUCT}" + removeSoftLinkAndCreateDir "${backupDirectory}" "${effectiveUser}" "${effectiveGroup}" "yes" + removeSoftLinkAndCreateDir "${backupDirectory}/${dir}" "${effectiveUser}" "${effectiveGroup}" "yes" + local outputCheckDirExists="$(checkDirExists "${backupDirectory}/${dir}")" + if [[ "${outputCheckDirExists}" == "true" ]]; then + copyOnContentExist "${targetDir}" "${backupDirectory}/${dir}" "full" + fi +} + +removeOldDirectory () { + local backupDir="$1" + local entry="$2" + local check=false + + # prepend OLD_DATA_DIR only if entry is relative path + local targetDir="$(prependDir "${entry}" "${OLD_DATA_DIR}/${entry}")" + local outputCheckDirExists="$(checkDirExists "${targetDir}")" + if [[ "${outputCheckDirExists}" != "true" ]]; then + logger "No [${targetDir}] directory found to delete" + echo ""; + return + fi + backupDirectory "${backupDir}" "${entry}" "${targetDir}" + rm -rf "${targetDir}" && check=true || check=false + [[ "${check}" == "true" ]] && logger "Successfully removed directory [${targetDir}]" + [[ "${check}" == "false" ]] && warn "Failed to remove directory [${targetDir}]" + echo ""; +} + +cleanUpOldDataDirectories () { + local cleanUpOldDataDir= + local map= + local entry= + + retrieveYamlValue "migration.cleanUpOldDataDir" "cleanUpOldDataDir" "Skip" + cleanUpOldDataDir="${YAML_VALUE}" + if [[ -z "${cleanUpOldDataDir}" ]]; then + return + fi + bannerSection "CLEAN UP OLD DATA DIRECTORIES" + retrieveYamlValue "migration.cleanUpOldDataDir.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && return + date="$(date +%Y%m%d%H%M)" + 
backupDir="${NEW_DATA_DIR}/backup/backup-${date}" + bannerImportant "****** Old data configurations are backed up in [${backupDir}] directory ******" + backupFiles_hook "${backupDir}/${PRODUCT}" + for entry in $map; + do + removeOldDirectory "${backupDir}" "${entry}" + done +} + +backupFiles () { + local backupDir="$1" + local dir="$2" + local targetDir="$3" + local fileName="$4" + local effectiveUser= + local effectiveGroup= + + if [[ "${dir}" = \/* ]]; then + dir=$(echo "${dir/\//}") + fi + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" ]]; then + effectiveUser="${JF_USER}" + effectiveGroup="${JF_USER}" + elif [[ "${INSTALLER}" == "${DEB_TYPE}" || "${INSTALLER}" == "${RPM_TYPE}" ]]; then + effectiveUser="${USER_TO_CHECK}" + effectiveGroup="${GROUP_TO_CHECK}" + fi + + removeSoftLinkAndCreateDir "${backupDir}" "${effectiveUser}" "${effectiveGroup}" "yes" + local backupDirectory="${backupDir}/${PRODUCT}" + removeSoftLinkAndCreateDir "${backupDirectory}" "${effectiveUser}" "${effectiveGroup}" "yes" + removeSoftLinkAndCreateDir "${backupDirectory}/${dir}" "${effectiveUser}" "${effectiveGroup}" "yes" + local outputCheckDirExists="$(checkDirExists "${backupDirectory}/${dir}")" + if [[ "${outputCheckDirExists}" == "true" ]]; then + copyCmd "${targetDir}/${fileName}" "${backupDirectory}/${dir}" "specific" + fi +} + +removeOldFiles () { + local backupDir="$1" + local directoryName="$2" + local fileName="$3" + local check=false + + # prepend OLD_DATA_DIR only if entry is relative path + local targetDir="$(prependDir "${directoryName}" "${OLD_DATA_DIR}/${directoryName}")" + local outputCheckFileExists="$(checkFileExists "${targetDir}/${fileName}")" + if [[ "${outputCheckFileExists}" != "true" ]]; then + logger "No [${targetDir}/${fileName}] file found to delete" + return + fi + backupFiles "${backupDir}" "${directoryName}" "${targetDir}" "${fileName}" + rm -f "${targetDir}/${fileName}" && check=true || check=false + [[ "${check}" == "true" ]] 
&& logger "Successfully removed file [${targetDir}/${fileName}]" + [[ "${check}" == "false" ]] && warn "Failed to remove file [${targetDir}/${fileName}]" + echo ""; +} + +cleanUpOldFiles () { + local cleanUpOldFiles= + local map= + local entry= + + retrieveYamlValue "migration.cleanUpOldFiles" "cleanUpOldFiles" "Skip" + cleanUpOldFiles="${YAML_VALUE}" + if [[ -z "${cleanUpOldFiles}" ]]; then + return + fi + bannerSection "CLEAN UP OLD FILES" + retrieveYamlValue "migration.cleanUpOldFiles.map" "map" "Warning" + map="${YAML_VALUE}" + [[ -z "${map}" ]] && return + date="$(date +%Y%m%d%H%M)" + backupDir="${NEW_DATA_DIR}/backup/backup-${date}" + bannerImportant "****** Old files are backed up in [${backupDir}] directory ******" + for entry in $map; + do + local outputCheckMapEntry="$(checkMapEntry "${entry}")" + if [[ "${outputCheckMapEntry}" != "true" ]]; then + warn "map entry [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}] is not in correct format, correct format i.e directoryName=fileName" + fi + local fileName="$(getSecondEntry "${entry}")" + local directoryName="$(getFirstEntry "${entry}")" + [[ -z "${fileName}" ]] && warn "File name value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + [[ -z "${directoryName}" ]] && warn "Directory name value is empty for [${entry}] in [${MIGRATION_SYSTEM_YAML_INFO}]" && continue + removeOldFiles "${backupDir}" "${directoryName}" "${fileName}" + echo ""; + done +} + +startMigration () { + bannerSection "STARTING MIGRATION" +} + +endMigration () { + bannerSection "MIGRATION COMPLETED SUCCESSFULLY" +} + +initialize () { + setAppDir + _pauseExecution "setAppDir" + initHelpers + _pauseExecution "initHelpers" + checkMigrationInfoYaml + _pauseExecution "checkMigrationInfoYaml" + getProduct + _pauseExecution "getProduct" + getDataDir + _pauseExecution "getDataDir" +} + +main () { + case $PRODUCT in + artifactory) + migrateArtifactory + ;; + distribution) + migrateDistribution + ;; + xray) + migrationXray + ;; + 
esac + exit 0 +} + +# Ensures meta data is logged +LOG_BEHAVIOR_ADD_META="$FLAG_Y" + + +migrateResolveDerbyPath () { + local key="$1" + local value="$2" + + if [[ "${key}" == "url" && "${value}" == *"db.home"* ]]; then + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + derbyPath="/opt/jfrog/artifactory/var/data/artifactory/derby" + value=$(echo "${value}" | sed "s|{db.home}|$derbyPath|") + else + derbyPath="${NEW_DATA_DIR}/data/artifactory/derby" + value=$(echo "${value}" | sed "s|{db.home}|$derbyPath|") + fi + fi + echo "${value}" +} + +migrateResolveHaDirPath () { + local key="$1" + local value="$2" + + if [[ "${INSTALLER}" == "${RPM_TYPE}" || "${INSTALLER}" == "${COMPOSE_TYPE}" || "${INSTALLER}" == "${HELM_TYPE}" || "${INSTALLER}" == "${DEB_TYPE}" ]]; then + if [[ "${key}" == "artifactory.ha.data.dir" || "${key}" == "artifactory.ha.backup.dir" ]]; then + value=$(checkPathResolver "${value}") + fi + fi + echo "${value}" +} +updatePostgresUrlString_Hook () { + local yamlPath="$1" + local value="$2" + local hostIp=$(io_getPublicHostIP) + local sourceKey="//postgresql:" + if [[ "${yamlPath}" == "shared.database.url" ]]; then + value=$(io_replaceString "${value}" "${sourceKey}" "//${hostIp}:" "#") + fi + echo "${value}" +} +# Check Artifactory product version +checkArtifactoryVersion () { + local minProductVersion="6.0.0" + local maxProductVersion="7.0.0" + local propertyInDocker="ARTIFACTORY_VERSION" + local property="artifactory.version" + + if [[ "${INSTALLER}" == "${COMPOSE_TYPE}" ]]; then + local newfilePath="${APP_DIR}/../.env" + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + elif [[ "${INSTALLER}" == "${HELM_TYPE}" ]]; then + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + elif [[ "${INSTALLER}" == "${ZIP_TYPE}" ]]; then + local newfilePath="${NEW_DATA_DIR}/etc/artifactory/artifactory.properties" + local oldfilePath="${OLD_DATA_DIR}/etc/artifactory.properties" + else + local 
newfilePath="${NEW_DATA_DIR}/etc/artifactory/artifactory.properties" + local oldfilePath="/etc/opt/jfrog/artifactory/artifactory.properties" + fi + + getProductVersion "${minProductVersion}" "${maxProductVersion}" "${newfilePath}" "${oldfilePath}" "${propertyInDocker}" "${property}" +} + +getCustomDataDir_hook () { + retrieveYamlValue "migration.oldDataDir" "oldDataDir" "Fail" + OLD_DATA_DIR="${YAML_VALUE}" +} + +# Get protocol value of connector +getXmlConnectorProtocol () { + local i="$1" + local filePath="$2" + local fileName="$3" + local protocolValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@protocol' ${filePath}/${fileName} 2>/dev/null |awk -F"=" '{print $2}' | tr -d '"') + echo -e "${protocolValue}" +} + +# Get all attributes of connector +getXmlConnectorAttributes () { + local i="$1" + local filePath="$2" + local fileName="$3" + local connectorAttributes=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@*' ${filePath}/${fileName} 2>/dev/null) + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + echo "${connectorAttributes}" +} + +# Get port value of connector +getXmlConnectorPort () { + local i="$1" + local filePath="$2" + local fileName="$3" + local portValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@port' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${portValue}" +} + +# Get maxThreads value of connector +getXmlConnectorMaxThreads () { + local i="$1" + local filePath="$2" + local fileName="$3" + local maxThreadValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@maxThreads' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${maxThreadValue}" +} +# Get sendReasonPhrase value of connector +getXmlConnectorSendReasonPhrase () { + local i="$1" + local filePath="$2" + local fileName="$3" + local sendReasonPhraseValue=$($LIBXML2_PATH --xpath 
'//Server/Service/Connector['$i']/@sendReasonPhrase' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + echo -e "${sendReasonPhraseValue}" +} +# Get relaxedPathChars value of connector +getXmlConnectorRelaxedPathChars () { + local i="$1" + local filePath="$2" + local fileName="$3" + local relaxedPathCharsValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@relaxedPathChars' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + # strip leading and trailing spaces + relaxedPathCharsValue=$(io_trim "${relaxedPathCharsValue}") + echo -e "${relaxedPathCharsValue}" +} +# Get relaxedQueryChars value of connector +getXmlConnectorRelaxedQueryChars () { + local i="$1" + local filePath="$2" + local fileName="$3" + local relaxedQueryCharsValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@relaxedQueryChars' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + # strip leading and trailing spaces + relaxedQueryCharsValue=$(io_trim "${relaxedQueryCharsValue}") + echo -e "${relaxedQueryCharsValue}" +} + +# Updating system.yaml with Connector port +setConnectorPort () { + local yamlPath="$1" + local valuePort="$2" + local portYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${valuePort}" ]]; then + warn "port value is empty, could not migrate to system.yaml" + return + fi + ## Getting port yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" portYamlPath "Warning" + portYamlPath="${YAML_VALUE}" + if [[ -z "${portYamlPath}" ]]; then + return + fi + setSystemValue "${portYamlPath}" "${valuePort}" "${SYSTEM_YAML_PATH}" + logger "Setting [${portYamlPath}] with value [${valuePort}] in system.yaml" +} + +# Updating system.yaml with Connector maxThreads +setConnectorMaxThread () { + local yamlPath="$1" + local threadValue="$2" + local maxThreadYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${threadValue}" ]]; then + return + fi + ## 
Getting max Threads yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" maxThreadYamlPath "Warning" + maxThreadYamlPath="${YAML_VALUE}" + if [[ -z "${maxThreadYamlPath}" ]]; then + return + fi + setSystemValue "${maxThreadYamlPath}" "${threadValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${maxThreadYamlPath}] with value [${threadValue}] in system.yaml" +} + +# Updating system.yaml with Connector sendReasonPhrase +setConnectorSendReasonPhrase () { + local yamlPath="$1" + local sendReasonPhraseValue="$2" + local sendReasonPhraseYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${sendReasonPhraseValue}" ]]; then + return + fi + ## Getting sendReasonPhrase yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" sendReasonPhraseYamlPath "Warning" + sendReasonPhraseYamlPath="${YAML_VALUE}" + if [[ -z "${sendReasonPhraseYamlPath}" ]]; then + return + fi + setSystemValue "${sendReasonPhraseYamlPath}" "${sendReasonPhraseValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${sendReasonPhraseYamlPath}] with value [${sendReasonPhraseValue}] in system.yaml" +} + +# Updating system.yaml with Connector relaxedPathChars +setConnectorRelaxedPathChars () { + local yamlPath="$1" + local relaxedPathCharsValue="$2" + local relaxedPathCharsYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${relaxedPathCharsValue}" ]]; then + return + fi + ## Getting relaxedPathChars yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" relaxedPathCharsYamlPath "Warning" + relaxedPathCharsYamlPath="${YAML_VALUE}" + if [[ -z "${relaxedPathCharsYamlPath}" ]]; then + return + fi + setSystemValue "${relaxedPathCharsYamlPath}" "${relaxedPathCharsValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${relaxedPathCharsYamlPath}] with value [${relaxedPathCharsValue}] in system.yaml" +} + +# Updating system.yaml with Connector relaxedQueryChars +setConnectorRelaxedQueryChars () { + local yamlPath="$1" + local relaxedQueryCharsValue="$2" 
+ local relaxedQueryCharsYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${relaxedQueryCharsValue}" ]]; then + return + fi + ## Getting relaxedQueryChars yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" relaxedQueryCharsYamlPath "Warning" + relaxedQueryCharsYamlPath="${YAML_VALUE}" + if [[ -z "${relaxedQueryCharsYamlPath}" ]]; then + return + fi + setSystemValue "${relaxedQueryCharsYamlPath}" "${relaxedQueryCharsValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${relaxedQueryCharsYamlPath}] with value [${relaxedQueryCharsValue}] in system.yaml" +} + +# Updating system.yaml with Connectors configurations +setConnectorExtraConfig () { + local yamlPath="$1" + local connectorAttributes="$2" + local extraConfigPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${connectorAttributes}" ]]; then + return + fi + ## Getting extraConfig yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" extraConfig "Warning" + extraConfigPath="${YAML_VALUE}" + if [[ -z "${extraConfigPath}" ]]; then + return + fi + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + setSystemValue "${extraConfigPath}" "${connectorAttributes}" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConfigPath}] with connector attributes in system.yaml" +} + +# Updating system.yaml with extra Connectors +setExtraConnector () { + local yamlPath="$1" + local extraConnector="$2" + local extraConnectorYamlPath= + if [[ -z "${yamlPath}" ]]; then + return + fi + if [[ -z "${extraConnector}" ]]; then + return + fi + ## Getting extraConnector yaml path from migration info yaml + retrieveYamlValue "${yamlPath}" extraConnectorYamlPath "Warning" + extraConnectorYamlPath="${YAML_VALUE}" + if [[ -z "${extraConnectorYamlPath}" ]]; then + return + fi + getYamlValue "${extraConnectorYamlPath}" "${SYSTEM_YAML_PATH}" "false" + local connectorExtra="${YAML_VALUE}" + if [[ -z "${connectorExtra}" ]]; then + setSystemValue 
"${extraConnectorYamlPath}" "${extraConnector}" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConnectorYamlPath}] with extra connectors in system.yaml" + else + setSystemValue "${extraConnectorYamlPath}" "\"${connectorExtra} ${extraConnector}\"" "${SYSTEM_YAML_PATH}" + logger "Setting [${extraConnectorYamlPath}] with extra connectors in system.yaml" + fi +} + +# Migrate extra connectors to system.yaml +migrateExtraConnectors () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local excludeDefaultPort="$4" + local i="$5" + local extraConfig= + local extraConnector= + if [[ "${excludeDefaultPort}" == "yes" ]]; then + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + [[ "${portValue}" != "${DEFAULT_ACCESS_PORT}" && "${portValue}" != "${DEFAULT_RT_PORT}" ]] || continue + extraConnector=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']' ${filePath}/${fileName} 2>/dev/null) + setExtraConnector "${EXTRA_CONFIG_YAMLPATH}" "${extraConnector}" + done + else + extraConnector=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']' ${filePath}/${fileName} 2>/dev/null) + setExtraConnector "${EXTRA_CONFIG_YAMLPATH}" "${extraConnector}" + fi +} + +# Migrate connector configurations +migrateConnectorConfig () { + local i="$1" + local protocolType="$2" + local portValue="$3" + local connectorPortYamlPath="$4" + local connectorMaxThreadYamlPath="$5" + local connectorAttributesYamlPath="$6" + local filePath="$7" + local fileName="$8" + local connectorSendReasonPhraseYamlPath="$9" + local connectorRelaxedPathCharsYamlPath="${10}" + local connectorRelaxedQueryCharsYamlPath="${11}" + + # migrate port + setConnectorPort "${connectorPortYamlPath}" "${portValue}" + + # migrate maxThreads + local maxThreadValue=$(getXmlConnectorMaxThreads "$i" "${filePath}" "${fileName}") + setConnectorMaxThread "${connectorMaxThreadYamlPath}" "${maxThreadValue}" + + # migrate 
sendReasonPhrase + local sendReasonPhraseValue=$(getXmlConnectorSendReasonPhrase "$i" "${filePath}" "${fileName}") + setConnectorSendReasonPhrase "${connectorSendReasonPhraseYamlPath}" "${sendReasonPhraseValue}" + + # migrate relaxedPathChars + local relaxedPathCharsValue=$(getXmlConnectorRelaxedPathChars "$i" "${filePath}" "${fileName}") + setConnectorRelaxedPathChars "${connectorRelaxedPathCharsYamlPath}" "\"${relaxedPathCharsValue}\"" + # migrate relaxedQueryChars + local relaxedQueryCharsValue=$(getXmlConnectorRelaxedQueryChars "$i" "${filePath}" "${fileName}") + setConnectorRelaxedQueryChars "${connectorRelaxedQueryCharsYamlPath}" "\"${relaxedQueryCharsValue}\"" + + # migrate all attributes to extra config except port , maxThread , sendReasonPhrase ,relaxedPathChars and relaxedQueryChars + local connectorAttributes=$(getXmlConnectorAttributes "$i" "${filePath}" "${fileName}") + connectorAttributes=$(echo "${connectorAttributes}" | sed 's/port="'${portValue}'"//g' | sed 's/maxThreads="'${maxThreadValue}'"//g' | sed 's/sendReasonPhrase="'${sendReasonPhraseValue}'"//g' | sed 's/relaxedPathChars="\'${relaxedPathCharsValue}'\"//g' | sed 's/relaxedQueryChars="\'${relaxedQueryCharsValue}'\"//g') + # strip leading and trailing spaces + connectorAttributes=$(io_trim "${connectorAttributes}") + setConnectorExtraConfig "${connectorAttributesYamlPath}" "${connectorAttributes}" +} + +# Check for default port 8040 and 8081 in connectors and migrate +migrateConnectorPort () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local defaultPort="$4" + local connectorPortYamlPath="$5" + local connectorMaxThreadYamlPath="$6" + local connectorAttributesYamlPath="$7" + local connectorSendReasonPhraseYamlPath="$8" + local connectorRelaxedPathCharsYamlPath="$9" + local connectorRelaxedQueryCharsYamlPath="${10}" + local portYamlPath= + local maxThreadYamlPath= + local status= + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local 
portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" == *AJP* ]] && continue + [[ "${portValue}" != "${defaultPort}" ]] && continue + if [[ "${portValue}" == "${DEFAULT_RT_PORT}" ]]; then + RT_DEFAULTPORT_STATUS=success + else + AC_DEFAULTPORT_STATUS=success + fi + migrateConnectorConfig "${i}" "${protocolType}" "${portValue}" "${connectorPortYamlPath}" "${connectorMaxThreadYamlPath}" "${connectorAttributesYamlPath}" "${filePath}" "${fileName}" "${connectorSendReasonPhraseYamlPath}" "${connectorRelaxedPathCharsYamlPath}" "${connectorRelaxedQueryCharsYamlPath}" + done +} + +# migrate to extra, connector having default port and protocol is AJP +migrateDefaultPortIfAjp () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local defaultPort="$4" + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" != *AJP* ]] && continue + [[ "${portValue}" != "${defaultPort}" ]] && continue + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + done + +} + +# Comparing max threads in connectors +compareMaxThreads () { + local firstConnectorMaxThread="$1" + local firstConnectorNode="$2" + local secondConnectorMaxThread="$3" + local secondConnectorNode="$4" + local filePath="$5" + local fileName="$6" + + # choose higher maxThreads connector as Artifactory. 
+ if [[ "${firstConnectorMaxThread}" -gt ${secondConnectorMaxThread} || "${firstConnectorMaxThread}" -eq ${secondConnectorMaxThread} ]]; then + # maxThread is higher in firstConnector, + # Taking firstConnector as Artifactory and SecondConnector as Access + # maxThread is equal in both connector,considering firstConnector as Artifactory and SecondConnector as Access + local rtPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + else + # maxThread is higher in SecondConnector, + # Taking SecondConnector as Artifactory and firstConnector as Access + local rtPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi +} + +# Check max threads exist to compare +maxThreadsExistToCompare () { + local 
filePath="$1" + local fileName="$2" + local connectorCount="$3" + local firstConnectorMaxThread= + local secondConnectorMaxThread= + local firstConnectorNode= + local secondConnectorNode= + local status=success + local firstnode=fail + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + if [[ ${protocolType} == *AJP* ]]; then + # Migrate Connectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + continue + fi + # store maxthreads value of each connector + if [[ ${firstnode} == "fail" ]]; then + firstConnectorMaxThread=$(getXmlConnectorMaxThreads "${i}" "${filePath}" "${fileName}") + firstConnectorNode="${i}" + firstnode=success + else + secondConnectorMaxThread=$(getXmlConnectorMaxThreads "${i}" "${filePath}" "${fileName}") + secondConnectorNode="${i}" + fi + done + [[ -z "${firstConnectorMaxThread}" ]] && status=fail + [[ -z "${secondConnectorMaxThread}" ]] && status=fail + # maxThreads is set, now compare MaxThreads + if [[ "${status}" == "success" ]]; then + compareMaxThreads "${firstConnectorMaxThread}" "${firstConnectorNode}" "${secondConnectorMaxThread}" "${secondConnectorNode}" "${filePath}" "${fileName}" + else + # Assume first connector is RT, maxThreads is not set in both connectors + local rtPortValue=$(getXmlConnectorPort "${firstConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${firstConnectorNode}" "${protocolType}" "${rtPortValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + local acPortValue=$(getXmlConnectorPort "${secondConnectorNode}" "${filePath}" "${fileName}") + migrateConnectorConfig "${secondConnectorNode}" "${protocolType}" "${acPortValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" 
"${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi +} + +migrateExtraBasedOnNonAjpCount () { + local nonAjpCount="$1" + local filePath="$2" + local fileName="$3" + local connectorCount="$4" + local i="$5" + + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + if [[ "${protocolType}" == *AJP* ]]; then + if [[ "${nonAjpCount}" -eq 1 ]]; then + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "no" "${i}" + continue + else + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + continue + fi + fi +} + +# find RT and AC Connector +findRtAndAcConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local initialAjpCount=0 + local nonAjpCount=0 + + # get the count of non AJP + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${protocolType}" != *AJP* ]] || continue + nonAjpCount=$((initialAjpCount+1)) + initialAjpCount="${nonAjpCount}" + done + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as access and artifactory connectors + # Mark port as 8040 for access + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" 
"${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + done + elif [[ "${nonAjpCount}" -eq 2 ]]; then + # compare maxThreads in both connectors + maxThreadsExistToCompare "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${nonAjpCount}" -gt 2 ]]; then + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # setting with default port in system.yaml + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +# get the count of non AJP +getCountOfNonAjp () { + local port="$1" + local connectorCount="$2" + local filePath=$3 + local fileName=$4 + local initialNonAjpCount=0 + + for ((i = 1 ; i <= "${connectorCount}" ; i++)); + do + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + local protocolType=$(getXmlConnectorProtocol "$i" "${filePath}" "${fileName}") + [[ "${portValue}" != "${port}" ]] || continue + [[ "${protocolType}" != *AJP* ]] || continue + local nonAjpCount=$((initialNonAjpCount+1)) + initialNonAjpCount="${nonAjpCount}" + done + echo -e "${initialNonAjpCount}" +} + +# Find for access connector +findAcConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + + # get the count of non AJP + local nonAjpCount=$(getCountOfNonAjp "${DEFAULT_RT_PORT}" "${connectorCount}" "${filePath}" "${fileName}") + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as access connector and mark port as that of connector + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" != 
"${DEFAULT_RT_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + fi + done + elif [[ "${nonAjpCount}" -gt 1 ]]; then + # Take RT properties into access with 8040 + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" == "${DEFAULT_RT_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + fi + done + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # Add RT connector details as access connector and mark port as 8040 + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${AC_SENDREASONPHRASE_YAMLPATH}" + setConnectorPort "${AC_PORT_YAMLPATH}" "${DEFAULT_ACCESS_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +# Find for artifactory connector +findRtConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + + # get the count of non AJP + local nonAjpCount=$(getCountOfNonAjp "${DEFAULT_ACCESS_PORT}" "${connectorCount}" "${filePath}" "${fileName}") + if [[ "${nonAjpCount}" -eq 1 ]]; then + # Add the connector found as RT connector + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ 
"${portValue}" != "${DEFAULT_ACCESS_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + fi + done + elif [[ "${nonAjpCount}" -gt 1 ]]; then + # Take access properties into artifactory with 8081 + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + migrateExtraBasedOnNonAjpCount "${nonAjpCount}" "${filePath}" "${fileName}" "${connectorCount}" "$i" + local portValue=$(getXmlConnectorPort "$i" "${filePath}" "${fileName}") + if [[ "${portValue}" == "${DEFAULT_ACCESS_PORT}" ]]; then + migrateConnectorConfig "$i" "${protocolType}" "${portValue}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${filePath}" "${fileName}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + fi + done + elif [[ "${nonAjpCount}" -eq 0 ]]; then + # Add access connector details as RT connector and mark as ${DEFAULT_RT_PORT} + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + setConnectorPort "${RT_PORT_YAMLPATH}" "${DEFAULT_RT_PORT}" + # migrateExtraConnectors + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + fi +} + +checkForTlsConnector () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + for ((i = 1 ; i <= "${connectorCount}" ; i++)) + do + local sslProtocolValue=$($LIBXML2_PATH --xpath '//Server/Service/Connector['$i']/@sslProtocol' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + if [[ 
"${sslProtocolValue}" == "TLS" ]]; then + bannerImportant "NOTE: Ignoring TLS connector during migration, modify the system.yaml to enable TLS. Original server.xml is saved in path [${filePath}/${fileName}]" + TLS_CONNECTOR_EXISTS=${FLAG_Y} + continue + fi + done +} + +# set custom tomcat server Listeners to system.yaml +setListenerConnector () { + local filePath="$1" + local fileName="$2" + local listenerCount="$3" + for ((i = 1 ; i <= "${listenerCount}" ; i++)) + do + local listenerConnector=$($LIBXML2_PATH --xpath '//Server/Listener['$i']' ${filePath}/${fileName} 2>/dev/null) + local listenerClassName=$($LIBXML2_PATH --xpath '//Server/Listener['$i']/@className' ${filePath}/${fileName} 2>/dev/null | awk -F"=" '{print $2}' | tr -d '"') + if [[ "${listenerClassName}" == *Apr* ]]; then + setExtraConnector "${EXTRA_LISTENER_CONFIG_YAMLPATH}" "${listenerConnector}" + fi + done +} +# add custom tomcat server Listeners +addTomcatServerListeners () { + local filePath="$1" + local fileName="$2" + local listenerCount="$3" + if [[ "${listenerCount}" == "0" ]]; then + logger "No listener connectors found in the [${filePath}/${fileName}], skipping migration of listener connectors" + else + setListenerConnector "${filePath}" "${fileName}" "${listenerCount}" + setSystemValue "${RT_TOMCAT_HTTPSCONNECTOR_ENABLED}" "true" "${SYSTEM_YAML_PATH}" + logger "Setting [${RT_TOMCAT_HTTPSCONNECTOR_ENABLED}] with value [true] in system.yaml" + fi +} + +# server.xml migration operations +xmlMigrateOperation () { + local filePath="$1" + local fileName="$2" + local connectorCount="$3" + local listenerCount="$4" + RT_DEFAULTPORT_STATUS=fail + AC_DEFAULTPORT_STATUS=fail + TLS_CONNECTOR_EXISTS=${FLAG_N} + + # Check for connector with TLS; if found, ignore migrating it + checkForTlsConnector "${filePath}" "${fileName}" "${connectorCount}" + if [[ "${TLS_CONNECTOR_EXISTS}" == "${FLAG_Y}" ]]; then + return + fi + addTomcatServerListeners "${filePath}" "${fileName}" "${listenerCount}" + # Migrate RT 
default port from connectors + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" "${RT_PORT_YAMLPATH}" "${RT_MAXTHREADS_YAMLPATH}" "${RT_EXTRACONFIG_YAMLPATH}" "${RT_SENDREASONPHRASE_YAMLPATH}" "${RT_RELAXEDPATHCHARS_YAMLPATH}" "${RT_RELAXEDQUERYCHARS_YAMLPATH}" + # Migrate to extra if RT default ports are AJP + migrateDefaultPortIfAjp "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_RT_PORT}" + # Migrate AC default port from connectors + migrateConnectorPort "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" "${AC_PORT_YAMLPATH}" "${AC_MAXTHREADS_YAMLPATH}" "${AC_EXTRACONFIG_YAMLPATH}" "${AC_SENDREASONPHRASE_YAMLPATH}" + # Migrate to extra if access default ports are AJP + migrateDefaultPortIfAjp "${filePath}" "${fileName}" "${connectorCount}" "${DEFAULT_ACCESS_PORT}" + + if [[ "${AC_DEFAULTPORT_STATUS}" == "success" && "${RT_DEFAULTPORT_STATUS}" == "success" ]]; then + # RT and AC default port found + logger "Artifactory 8081 and Access 8040 default port are found" + migrateExtraConnectors "${filePath}" "${fileName}" "${connectorCount}" "yes" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "success" && "${RT_DEFAULTPORT_STATUS}" == "fail" ]]; then + # Only AC default port found,find RT connector + logger "Found Access default 8040 port" + findRtConnector "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "fail" && "${RT_DEFAULTPORT_STATUS}" == "success" ]]; then + # Only RT default port found,find AC connector + logger "Found Artifactory default 8081 port" + findAcConnector "${filePath}" "${fileName}" "${connectorCount}" + elif [[ "${AC_DEFAULTPORT_STATUS}" == "fail" && "${RT_DEFAULTPORT_STATUS}" == "fail" ]]; then + # RT and AC default port not found, find connector + logger "Artifactory 8081 and Access 8040 default port are not found" + findRtAndAcConnector "${filePath}" "${fileName}" "${connectorCount}" + fi +} + +# get count of connectors +getXmlConnectorCount 
() { + local filePath="$1" + local fileName="$2" + local count=$($LIBXML2_PATH --xpath 'count(/Server/Service/Connector)' ${filePath}/${fileName}) + echo -e "${count}" +} + +# get count of listener connectors +getTomcatServerListenersCount () { + local filePath="$1" + local fileName="$2" + local count=$($LIBXML2_PATH --xpath 'count(/Server/Listener)' ${filePath}/${fileName}) + echo -e "${count}" +} + +# Migrate server.xml configuration to system.yaml +migrateXmlFile () { + local xmlFiles= + local fileName= + local filePath= + local sourceFilePath= + DEFAULT_ACCESS_PORT="8040" + DEFAULT_RT_PORT="8081" + AC_PORT_YAMLPATH="migration.xmlFiles.serverXml.access.port" + AC_MAXTHREADS_YAMLPATH="migration.xmlFiles.serverXml.access.maxThreads" + AC_SENDREASONPHRASE_YAMLPATH="migration.xmlFiles.serverXml.access.sendReasonPhrase" + AC_EXTRACONFIG_YAMLPATH="migration.xmlFiles.serverXml.access.extraConfig" + RT_PORT_YAMLPATH="migration.xmlFiles.serverXml.artifactory.port" + RT_MAXTHREADS_YAMLPATH="migration.xmlFiles.serverXml.artifactory.maxThreads" + RT_SENDREASONPHRASE_YAMLPATH='migration.xmlFiles.serverXml.artifactory.sendReasonPhrase' + RT_RELAXEDPATHCHARS_YAMLPATH='migration.xmlFiles.serverXml.artifactory.relaxedPathChars' + RT_RELAXEDQUERYCHARS_YAMLPATH='migration.xmlFiles.serverXml.artifactory.relaxedQueryChars' + RT_EXTRACONFIG_YAMLPATH="migration.xmlFiles.serverXml.artifactory.extraConfig" + ROUTER_PORT_YAMLPATH="migration.xmlFiles.serverXml.router.port" + EXTRA_CONFIG_YAMLPATH="migration.xmlFiles.serverXml.extra.config" + EXTRA_LISTENER_CONFIG_YAMLPATH="migration.xmlFiles.serverXml.extra.listener" + RT_TOMCAT_HTTPSCONNECTOR_ENABLED="artifactory.tomcat.httpsConnector.enabled" + + retrieveYamlValue "migration.xmlFiles" "xmlFiles" "Skip" + xmlFiles="${YAML_VALUE}" + if [[ -z "${xmlFiles}" ]]; then + return + fi + bannerSection "PROCESSING MIGRATION OF XML FILES" + retrieveYamlValue "migration.xmlFiles.serverXml.fileName" "fileName" "Warning" + fileName="${YAML_VALUE}" + 
if [[ -z "${fileName}" ]]; then + return + fi + bannerSubSection "Processing Migration of $fileName" + retrieveYamlValue "migration.xmlFiles.serverXml.filePath" "filePath" "Warning" + filePath="${YAML_VALUE}" + if [[ -z "${filePath}" ]]; then + return + fi + # prepend NEW_DATA_DIR only if filePath is relative path + sourceFilePath=$(prependDir "${filePath}" "${NEW_DATA_DIR}/${filePath}") + if [[ "$(checkFileExists "${sourceFilePath}/${fileName}")" == "true" ]]; then + logger "File [${fileName}] is found in path [${sourceFilePath}]" + local connectorCount=$(getXmlConnectorCount "${sourceFilePath}" "${fileName}") + if [[ "${connectorCount}" == "0" ]]; then + logger "No connectors found in the [${filePath}/${fileName}],skipping migration of xml configuration" + return + fi + local listenerCount=$(getTomcatServerListenersCount "${sourceFilePath}" "${fileName}") + xmlMigrateOperation "${sourceFilePath}" "${fileName}" "${connectorCount}" "${listenerCount}" + else + logger "File [${fileName}] is not found in path [${sourceFilePath}] to migrate" + fi +} + +compareArtifactoryUser () { + local property="$1" + local oldPropertyValue="$2" + local newPropertyValue="$3" + local yamlPath="$4" + local sourceFile="$5" + + if [[ "${oldPropertyValue}" != "${newPropertyValue}" ]]; then + setSystemValue "${yamlPath}" "${oldPropertyValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" + else + logger "No change in property [${property}] value in [${sourceFile}] to migrate" + fi +} + +migrateReplicator () { + local property="$1" + local oldPropertyValue="$2" + local yamlPath="$3" + + setSystemValue "${yamlPath}" "${oldPropertyValue}" "${SYSTEM_YAML_PATH}" + logger "Setting [${yamlPath}] with value of the property [${property}] in system.yaml" +} + +compareJavaOptions () { + local property="$1" + local oldPropertyValue="$2" + local newPropertyValue="$3" + local yamlPath="$4" + local sourceFile="$5" + local oldJavaOption= + 
local newJavaOption= + local extraJavaOption= + local check=false + local success=true + local status=true + + oldJavaOption=$(echo "${oldPropertyValue}" | awk 'BEGIN{FS=OFS="\""}{for(i=2;i.+)\.{{ include "artifactory.fullname" . }} {{ include "artifactory.fullname" . }} +{{ tpl (include "artifactory.nginx.hosts" .) . }}; + +if ($http_x_forwarded_proto = '') { + set $http_x_forwarded_proto $scheme; +} +set $host_port {{ .Values.nginx.https.externalPort }}; +if ( $scheme = "http" ) { + set $host_port {{ .Values.nginx.http.externalPort }}; +} +## Application specific logs +## access_log /var/log/nginx/artifactory-access.log timing; +## error_log /var/log/nginx/artifactory-error.log; +rewrite ^/artifactory/?$ / redirect; +if ( $repo != "" ) { + rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/$repo/$1/$2 break; +} +chunked_transfer_encoding on; +client_max_body_size 0; + +location / { + proxy_read_timeout 900; + proxy_pass_header Server; + proxy_cookie_path ~*^/.* /; + proxy_pass {{ include "artifactory.scheme" . }}://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalPort }}/; + {{- if .Values.nginx.service.ssloffload}} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host; + {{- else }} + proxy_set_header X-JFrog-Override-Base-Url $http_x_forwarded_proto://$host:$host_port; + proxy_set_header X-Forwarded-Port $server_port; + {{- end }} + proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto; + proxy_set_header Host $http_host; + proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; + {{- if .Values.nginx.disableProxyBuffering}} + proxy_http_version 1.1; + proxy_request_buffering off; + proxy_buffering off; + {{- end }} + add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always; + location /artifactory/ { + if ( $request_uri ~ ^/artifactory/(.*)$ ) { + proxy_pass http://{{ include "artifactory.fullname" . 
}}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/$1; + } + proxy_pass http://{{ include "artifactory.fullname" . }}:{{ .Values.artifactory.externalArtifactoryPort }}/artifactory/; + } + location /pipelines/ { + proxy_http_version 1.1; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_set_header Host $http_host; + {{- if .Values.router.tlsEnabled }} + proxy_pass https://{{ include "artifactory.fullname" . }}:{{ .Values.router.internalPort }}; + {{- else }} + proxy_pass http://{{ include "artifactory.fullname" . }}:{{ .Values.router.internalPort }}; + {{- end }} + } +} +} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/nginx-main-conf.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/nginx-main-conf.yaml new file mode 100644 index 0000000000..6ee7f98f9e --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/nginx-main-conf.yaml @@ -0,0 +1,83 @@ +# Main Nginx configuration file +worker_processes 4; + +{{- if .Values.nginx.logs.stderr }} +error_log stderr {{ .Values.nginx.logs.level }}; +{{- else -}} +error_log {{ .Values.nginx.persistence.mountPath }}/logs/error.log {{ .Values.nginx.logs.level }}; +{{- end }} +pid /var/run/nginx.pid; + +{{- if .Values.artifactory.ssh.enabled }} +## SSH Server Configuration +stream { + server { + {{- if .Values.nginx.singleStackIPv6Cluster }} + listen [::]:{{ .Values.nginx.ssh.internalPort }}; + {{- else -}} + listen {{ .Values.nginx.ssh.internalPort }}; + {{- end }} + proxy_pass {{ include "artifactory.fullname" . 
}}:{{ .Values.artifactory.ssh.externalPort }}; + } +} +{{- end }} + +events { + worker_connections 1024; +} + +http { + include /etc/nginx/mime.types; + default_type application/octet-stream; + + variables_hash_max_size 1024; + variables_hash_bucket_size 64; + server_names_hash_max_size 4096; + server_names_hash_bucket_size 128; + types_hash_max_size 2048; + types_hash_bucket_size 64; + proxy_read_timeout 2400s; + client_header_timeout 2400s; + client_body_timeout 2400s; + proxy_connect_timeout 75s; + proxy_send_timeout 2400s; + proxy_buffer_size 128k; + proxy_buffers 40 128k; + proxy_busy_buffers_size 128k; + proxy_temp_file_write_size 250m; + proxy_http_version 1.1; + client_body_buffer_size 128k; + + log_format main '$remote_addr - $remote_user [$time_local] "$request" ' + '$status $body_bytes_sent "$http_referer" ' + '"$http_user_agent" "$http_x_forwarded_for"'; + + log_format timing 'ip = $remote_addr ' + 'user = \"$remote_user\" ' + 'local_time = \"$time_local\" ' + 'host = $host ' + 'request = \"$request\" ' + 'status = $status ' + 'bytes = $body_bytes_sent ' + 'upstream = \"$upstream_addr\" ' + 'upstream_time = $upstream_response_time ' + 'request_time = $request_time ' + 'referer = \"$http_referer\" ' + 'UA = \"$http_user_agent\"'; + + {{- if .Values.nginx.logs.stdout }} + access_log /dev/stdout timing; + {{- else -}} + access_log {{ .Values.nginx.persistence.mountPath }}/logs/access.log timing; + {{- end }} + + sendfile on; + #tcp_nopush on; + + keepalive_timeout 65; + + #gzip on; + + include /etc/nginx/conf.d/*.conf; + +} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/system.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/system.yaml new file mode 100644 index 0000000000..053207fd01 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/files/system.yaml @@ -0,0 +1,156 @@ +router: + serviceRegistry: + insecure: {{ .Values.router.serviceRegistry.insecure }} +shared: +{{- if 
.Values.artifactory.coldStorage.enabled }} + jfrogColdStorage: + coldInstanceEnabled: true +{{- end }} +{{ tpl (include "artifactory.metrics" .) . }} + logging: + consoleLog: + enabled: {{ .Values.artifactory.consoleLog }} + extraJavaOpts: > + -Dartifactory.graceful.shutdown.max.request.duration.millis={{ mul .Values.artifactory.terminationGracePeriodSeconds 1000 }} + -Dartifactory.access.client.max.connections={{ .Values.access.tomcat.connector.maxThreads }} + {{- with .Values.artifactory.javaOpts }} + {{- if .corePoolSize }} + -Dartifactory.async.corePoolSize={{ .corePoolSize }} + {{- end }} + {{- if .xms }} + -Xms{{ .xms }} + {{- end }} + {{- if .xmx }} + -Xmx{{ .xmx }} + {{- end }} + {{- if .jmx.enabled }} + -Dcom.sun.management.jmxremote + -Dcom.sun.management.jmxremote.port={{ .jmx.port }} + -Dcom.sun.management.jmxremote.rmi.port={{ .jmx.port }} + -Dcom.sun.management.jmxremote.ssl={{ .jmx.ssl }} + {{- if .jmx.host }} + -Djava.rmi.server.hostname={{ tpl .jmx.host $ }} + {{- else }} + -Djava.rmi.server.hostname={{ template "artifactory.fullname" $ }} + {{- end }} + {{- if .jmx.authenticate }} + -Dcom.sun.management.jmxremote.authenticate=true + -Dcom.sun.management.jmxremote.access.file={{ .jmx.accessFile }} + -Dcom.sun.management.jmxremote.password.file={{ .jmx.passwordFile }} + {{- else }} + -Dcom.sun.management.jmxremote.authenticate=false + {{- end }} + {{- end }} + {{- if .other }} + {{ .other }} + {{- end }} + {{- end }} + {{- if or .Values.database.type .Values.postgresql.enabled }} + database: + allowNonPostgresql: {{ .Values.database.allowNonPostgresql }} + {{- if .Values.postgresql.enabled }} + type: postgresql + url: "jdbc:postgresql://{{ .Release.Name }}-postgresql:{{ .Values.postgresql.service.port }}/{{ .Values.postgresql.postgresqlDatabase }}" + driver: org.postgresql.Driver + username: "{{ .Values.postgresql.postgresqlUsername }}" + {{- else }} + type: "{{ .Values.database.type }}" + driver: "{{ .Values.database.driver }}" + {{- end }} + {{- 
end }} +artifactory: +{{- if or .Values.artifactory.haDataDir.enabled .Values.artifactory.haBackupDir.enabled }} + node: + {{- if .Values.artifactory.haDataDir.path }} + haDataDir: {{ .Values.artifactory.haDataDir.path }} + {{- end }} + {{- if .Values.artifactory.haBackupDir.path }} + haBackupDir: {{ .Values.artifactory.haBackupDir.path }} + {{- end }} +{{- end }} + database: + maxOpenConnections: {{ .Values.artifactory.database.maxOpenConnections }} + tomcat: + maintenanceConnector: + port: {{ .Values.artifactory.tomcat.maintenanceConnector.port }} + connector: + maxThreads: {{ .Values.artifactory.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.artifactory.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ .Values.artifactory.tomcat.connector.extraConfig }} +frontend: + session: + timeMinutes: {{ .Values.frontend.session.timeoutMinutes | quote }} +access: + runOnArtifactoryTomcat: {{ .Values.access.runOnArtifactoryTomcat | default false }} + database: + maxOpenConnections: {{ .Values.access.database.maxOpenConnections }} + {{- if not (.Values.access.runOnArtifactoryTomcat | default false) }} + extraJavaOpts: > + {{- if .Values.splitServicesToContainers }} + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=70 + {{- end }} + {{- with .Values.access.javaOpts }} + {{- if .other }} + {{ .other }} + {{- end }} + {{- end }} + {{- end }} + tomcat: + connector: + maxThreads: {{ .Values.access.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.access.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ .Values.access.tomcat.connector.extraConfig }} +{{- if .Values.mc.enabled }} +mc: + enabled: true + database: + maxOpenConnections: {{ .Values.mc.database.maxOpenConnections }} + idgenerator: + maxOpenConnections: {{ .Values.mc.idgenerator.maxOpenConnections }} + tomcat: + connector: + maxThreads: {{ .Values.mc.tomcat.connector.maxThreads }} + sendReasonPhrase: {{ .Values.mc.tomcat.connector.sendReasonPhrase }} + extraConfig: {{ 
.Values.mc.tomcat.connector.extraConfig }} +{{- end }} +metadata: + database: + maxOpenConnections: {{ .Values.metadata.database.maxOpenConnections }} +{{- if and .Values.jfconnect.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository)) }} +jfconnect: + enabled: true +{{- else }} +jfconnect: + enabled: false +jfconnect_service: + enabled: false +{{- end }} +{{- if and .Values.federation.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository)) }} +federation: + enabled: true + embedded: {{ .Values.federation.embedded }} + extraJavaOpts: {{ .Values.federation.extraJavaOpts }} + port: {{ .Values.federation.internalPort }} +rtfs: + database: + driver: org.postgresql.Driver + type: postgresql + username: {{ .Values.federation.database.username }} + password: {{ .Values.federation.database.password }} + url: jdbc:postgresql://{{ .Values.federation.database.host }}:{{ .Values.federation.database.port }}/{{ .Values.federation.database.name }} +{{- else }} +federation: + enabled: false +{{- end }} +{{- if .Values.event.webhooks }} +event: + webhooks: {{ toYaml .Values.event.webhooks | nindent 6 }} +{{- end }} +{{- if .Values.evidence.enabled }} +evidence: + enabled: true +{{- else }} +evidence: + enabled: false +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/logo/artifactory-logo.png b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/logo/artifactory-logo.png new file mode 100644 index 0000000000..fe6c23c5a7 Binary files /dev/null and b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/logo/artifactory-logo.png differ diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-2xlarge-extra-config.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-2xlarge-extra-config.yaml new file mode 100644 index 0000000000..7bccf330d9 --- /dev/null +++ 
b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-2xlarge-extra-config.yaml @@ -0,0 +1,41 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file is taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=200 + -Dartifactory.async.poolMaxQueueSize=100000 + -Dartifactory.http.client.max.total.connections=150 + -Dartifactory.http.client.max.connections.per.route=150 + -Dartifactory.access.client.max.connections=200 + -Dartifactory.metadata.event.operator.threads=5 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=1048576 + -XX:MaxDirectMemorySize=1024m + tomcat: + connector: + maxThreads: 800 + extraConfig: 'acceptCount="1200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 200 + +access: + tomcat: + connector: + maxThreads: 200 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 200 + +metadata: + database: + maxOpenConnections: 200 + diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-2xlarge.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-2xlarge.yaml new file mode 100644 index 0000000000..be477939b4 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-2xlarge.yaml @@ -0,0 +1,126 @@ +############################################################## +# The 2xlarge sizing +# This size is intended for very large organizations. 
It can be increased with adding replicas +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 6 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "4" + memory: 20Gi + limits: + # cpu: "20" + memory: 24Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "16" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 1 + memory: 2Gi + limits: + # cpu: 2 + memory: 4Gi + +router: + resources: + requests: + cpu: "1" + memory: 1Gi + limits: + # cpu: "6" + memory: 2Gi + +frontend: + resources: + requests: + cpu: "1" + memory: 500Mi + limits: + # cpu: "5" + memory: 1Gi + +metadata: + resources: + requests: + cpu: "1" + memory: 500Mi + limits: + # cpu: "5" + memory: 2Gi + +event: + resources: + requests: + cpu: 200m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +observability: + resources: + requests: + cpu: 200m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +jfconnect: + resources: + requests: + cpu: 100m + memory: 100Mi + limits: + # cpu: "1" + memory: 250Mi + +nginx: + replicaCount: 3 + disableProxyBuffering: true + resources: + requests: + cpu: "4" + memory: "6Gi" + limits: + # cpu: "14" + memory: "8Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "5000" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 256Gi + cpu: "64" + 
limits: + memory: 256Gi + # cpu: "128" \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-large-extra-config.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-large-extra-config.yaml new file mode 100644 index 0000000000..d97a85c9fd --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-large-extra-config.yaml @@ -0,0 +1,41 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file is taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=65 + -Dartifactory.async.corePoolSize=80 + -Dartifactory.async.poolMaxQueueSize=20000 + -Dartifactory.http.client.max.total.connections=100 + -Dartifactory.http.client.max.connections.per.route=100 + -Dartifactory.access.client.max.connections=125 + -Dartifactory.metadata.event.operator.threads=4 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=524288 + -XX:MaxDirectMemorySize=512m + tomcat: + connector: + maxThreads: 500 + extraConfig: 'acceptCount="800" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 100 + +access: + tomcat: + connector: + maxThreads: 125 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 100 + +metadata: + database: + maxOpenConnections: 100 + diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-large.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-large.yaml new file mode 100644 index 0000000000..80326a8e43 --- /dev/null 
+++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-large.yaml @@ -0,0 +1,126 @@ +############################################################## +# The large sizing +# This size is intended for large organizations. It can be increased with adding replicas or moving to the xlarge sizing +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 3 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "2" + memory: 10Gi + limits: + # cpu: "14" + memory: 12Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "8" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 1 + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +router: + resources: + requests: + cpu: 200m + memory: 400Mi + limits: + # cpu: "4" + memory: 1Gi + +frontend: + resources: + requests: + cpu: 200m + memory: 300Mi + limits: + # cpu: "3" + memory: 1Gi + +metadata: + resources: + requests: + cpu: 200m + memory: 200Mi + limits: + # cpu: "4" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + disableProxyBuffering: true + resources: + requests: + cpu: "1" + memory: "500Mi" + limits: + # cpu: "4" + memory: "1Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "600" + primary: + affinity: + # Require PostgreSQL pod to run on a 
different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 64Gi + cpu: "16" + limits: + memory: 64Gi + # cpu: "32" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-medium-extra-config.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-medium-extra-config.yaml new file mode 100644 index 0000000000..1c294c0439 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-medium-extra-config.yaml @@ -0,0 +1,41 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file is taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=40 + -Dartifactory.async.poolMaxQueueSize=10000 + -Dartifactory.http.client.max.total.connections=50 + -Dartifactory.http.client.max.connections.per.route=50 + -Dartifactory.access.client.max.connections=75 + -Dartifactory.metadata.event.operator.threads=3 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=262144 + -XX:MaxDirectMemorySize=256m + tomcat: + connector: + maxThreads: 300 + extraConfig: 'acceptCount="600" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 50 + +access: + tomcat: + connector: + maxThreads: 75 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 50 + +metadata: + database: + maxOpenConnections: 50 
+ diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-medium.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-medium.yaml new file mode 100644 index 0000000000..8b72150419 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-medium.yaml @@ -0,0 +1,126 @@ +############################################################## +# The medium sizing +# This size is just 2 replicas of the small size. Vertical sizing of all services is not changed +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 2 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 4Gi + limits: + # cpu: "10" + memory: 5Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 500m + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +router: + resources: + requests: + cpu: 100m + memory: 250Mi + limits: + # cpu: "1" + memory: 500Mi + +frontend: + resources: + requests: + cpu: 100m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + 
disableProxyBuffering: true + resources: + requests: + cpu: "100m" + memory: "100Mi" + limits: + # cpu: "2" + memory: "500Mi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "200" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 32Gi + cpu: "8" + limits: + memory: 32Gi + # cpu: "16" \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-small-extra-config.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-small-extra-config.yaml new file mode 100644 index 0000000000..1c294c0439 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-small-extra-config.yaml @@ -0,0 +1,41 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file is taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=40 + -Dartifactory.async.poolMaxQueueSize=10000 + -Dartifactory.http.client.max.total.connections=50 + -Dartifactory.http.client.max.connections.per.route=50 + -Dartifactory.access.client.max.connections=75 + -Dartifactory.metadata.event.operator.threads=3 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=262144 + -XX:MaxDirectMemorySize=256m + tomcat: + connector: + maxThreads: 300 + extraConfig: 'acceptCount="600" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" 
enableLookups="false"' + + database: + maxOpenConnections: 50 + +access: + tomcat: + connector: + maxThreads: 75 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 50 + +metadata: + database: + maxOpenConnections: 50 + diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-small.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-small.yaml new file mode 100644 index 0000000000..eb8d7239d8 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-small.yaml @@ -0,0 +1,124 @@ +############################################################## +# The small sizing +# This is the size recommended for running Artifactory for small teams +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. 
+ replicaCount: 1 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 4Gi + limits: + # cpu: "10" + memory: 5Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 500m + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +router: + resources: + requests: + cpu: 100m + memory: 250Mi + limits: + # cpu: "1" + memory: 500Mi + +frontend: + resources: + requests: + cpu: 100m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 100m + memory: 200Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 1 + disableProxyBuffering: true + resources: + requests: + cpu: "100m" + memory: "100Mi" + limits: + # cpu: "2" + memory: "500Mi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "100" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 16Gi + cpu: "4" + limits: + memory: 16Gi + # cpu: "10" \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xlarge-extra-config.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xlarge-extra-config.yaml new file mode 100644 index 0000000000..00e6099f20 --- /dev/null +++ 
b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xlarge-extra-config.yaml @@ -0,0 +1,41 @@ +#################################################################################### +# [WARNING] The configuration mentioned in this file is taken inside system.yaml +# hence this configuration will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=65 + -Dartifactory.async.corePoolSize=160 + -Dartifactory.async.poolMaxQueueSize=50000 + -Dartifactory.http.client.max.total.connections=150 + -Dartifactory.http.client.max.connections.per.route=150 + -Dartifactory.access.client.max.connections=150 + -Dartifactory.metadata.event.operator.threads=5 + -XX:MaxMetaspaceSize=512m + -Djdk.nio.maxCachedBufferSize=1048576 + -XX:MaxDirectMemorySize=1024m + tomcat: + connector: + maxThreads: 600 + extraConfig: 'acceptCount="1200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 150 + +access: + tomcat: + connector: + maxThreads: 150 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 150 + +metadata: + database: + maxOpenConnections: 150 + diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xlarge.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xlarge.yaml new file mode 100644 index 0000000000..e77152ee1e --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xlarge.yaml @@ -0,0 +1,126 @@ +############################################################## +# The xlarge sizing +# This size is intended for very large organizations. 
It can be increased with adding replicas +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 4 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "2" + memory: 14Gi + limits: + # cpu: "14" + memory: 16Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "16" + - name : JF_SHARED_NODE_HAENABLED + value: "true" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 500m + memory: 2Gi + limits: + # cpu: 1 + memory: 3Gi + +router: + resources: + requests: + cpu: 200m + memory: 500Mi + limits: + # cpu: "4" + memory: 1Gi + +frontend: + resources: + requests: + cpu: 200m + memory: 300Mi + limits: + # cpu: "3" + memory: 1Gi + +metadata: + resources: + requests: + cpu: 200m + memory: 200Mi + limits: + # cpu: "4" + memory: 1Gi + +event: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 100m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 2 + disableProxyBuffering: true + resources: + requests: + cpu: "4" + memory: "4Gi" + limits: + # cpu: "12" + memory: "8Gi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "2000" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 128Gi + 
cpu: "32" + limits: + memory: 128Gi + # cpu: "64" \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xsmall-extra-config.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xsmall-extra-config.yaml new file mode 100644 index 0000000000..39709b691c --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xsmall-extra-config.yaml @@ -0,0 +1,42 @@ +#################################################################################### +# [WARNING] The configurations mentioned in this file are taken inside system.yaml, +# hence they will be overridden when enabling systemYamlOverride +#################################################################################### +artifactory: + javaOpts: + other: > + -XX:InitialRAMPercentage=40 + -XX:MaxRAMPercentage=70 + -Dartifactory.async.corePoolSize=10 + -Dartifactory.async.poolMaxQueueSize=2000 + -Dartifactory.http.client.max.total.connections=20 + -Dartifactory.http.client.max.connections.per.route=20 + -Dartifactory.access.client.max.connections=15 + -Dartifactory.metadata.event.operator.threads=2 + -XX:MaxMetaspaceSize=400m + -XX:CompressedClassSpaceSize=96m + -Djdk.nio.maxCachedBufferSize=131072 + -XX:MaxDirectMemorySize=128m + tomcat: + connector: + maxThreads: 50 + extraConfig: 'acceptCount="200" acceptorThreadCount="2" compression="off" connectionLinger="-1" connectionTimeout="120000" enableLookups="false"' + + database: + maxOpenConnections: 15 + +access: + tomcat: + connector: + maxThreads: 15 + javaOpts: + other: > + -XX:InitialRAMPercentage=20 + -XX:MaxRAMPercentage=60 + database: + maxOpenConnections: 15 + +metadata: + database: + maxOpenConnections: 15 + diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xsmall.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xsmall.yaml new file mode 100644
index 0000000000..246f830a01 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/sizing/artifactory-xsmall.yaml @@ -0,0 +1,125 @@ +############################################################## +# The xsmall sizing +# This is the minimum size recommended for running Artifactory +############################################################## +splitServicesToContainers: true +artifactory: + # Enterprise and above licenses are required for setting replicaCount greater than 1. + # Count should be equal or above the total number of licenses available for artifactory. + replicaCount: 1 + + # Require multiple Artifactory pods to run on separate nodes + podAntiAffinity: + type: "hard" + + resources: + requests: + cpu: "1" + memory: 3Gi + limits: + # cpu: "10" + memory: 4Gi + + extraEnvironmentVariables: + - name: MALLOC_ARENA_MAX + value: "2" + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + +access: + resources: + requests: + cpu: 500m + memory: 1.5Gi + limits: + # cpu: 1 + memory: 2Gi + +router: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: "1" + memory: 500Mi + +frontend: + resources: + requests: + cpu: 50m + memory: 150Mi + limits: + # cpu: "2" + memory: 250Mi + +metadata: + resources: + requests: + cpu: 50m + memory: 100Mi + limits: + # cpu: "2" + memory: 1Gi + +event: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +observability: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +jfconnect: + resources: + requests: + cpu: 50m + memory: 50Mi + limits: + # cpu: 500m + memory: 250Mi + +nginx: + replicaCount: 1 + disableProxyBuffering: true + resources: + requests: + cpu: "50m" + memory: "50Mi" + limits: + # cpu: "1" + memory: "250Mi" + +postgresql: + postgresqlExtendedConf: + maxConnections: "50" + primary: + affinity: + # Require PostgreSQL pod to run on a different node than Artifactory pods + podAntiAffinity: + 
requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchExpressions: + - key: app + operator: In + values: + - artifactory + topologyKey: kubernetes.io/hostname + resources: + requests: + memory: 8Gi + cpu: "2" + limits: + memory: 8Gi + # cpu: "8" + diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/NOTES.txt b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/NOTES.txt new file mode 100644 index 0000000000..76652ac98d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/NOTES.txt @@ -0,0 +1,106 @@ +Congratulations. You have just deployed JFrog Artifactory! +{{- if .Values.artifactory.masterKey }} +{{- if and (not .Values.artifactory.masterKeySecretName) (eq .Values.artifactory.masterKey "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF") }} + + +***************************************** WARNING ****************************************** +* Your Artifactory master key is still set to the provided example: * +* artifactory.masterKey=FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF * +* * +* You should change this to your own generated key: * +* $ export MASTER_KEY=$(openssl rand -hex 32) * +* $ echo ${MASTER_KEY} * +* * +* Pass the created master key to helm with '--set artifactory.masterKey=${MASTER_KEY}' * +* * +* Alternatively, you can use a pre-existing secret with a key called master-key with * +* '--set artifactory.masterKeySecretName=${SECRET_NAME}' * +******************************************************************************************** +{{- end }} +{{- end }} + +{{- if .Values.artifactory.joinKey }} +{{- if eq .Values.artifactory.joinKey "EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE" }} + + +***************************************** WARNING ****************************************** +* Your Artifactory join key is still set to the provided example: * +* artifactory.joinKey=EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE * +* * +* You should change 
this to your own generated key: * +* $ export JOIN_KEY=$(openssl rand -hex 32) * +* $ echo ${JOIN_KEY} * +* * +* Pass the created join key to helm with '--set artifactory.joinKey=${JOIN_KEY}' * +* * +******************************************************************************************** +{{- end }} +{{- end }} + +{{- if .Values.artifactory.setSecurityContext }} +****************************************** WARNING ********************************************** +* From chart version 107.84.x, `setSecurityContext` has been renamed to `podSecurityContext`, * + please change your values.yaml before upgrading. For more info, refer to the 107.84.x changelog * +************************************************************************************************* +{{- end }} + +{{- if and (or (or (or (or (or ( or ( or ( or (or (or ( or (or .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName) .Values.systemYamlOverride.existingSecret) (or .Values.artifactory.customCertificates.enabled .Values.global.customCertificates.enabled)) .Values.aws.licenseConfigSecretName) .Values.artifactory.persistence.customBinarystoreXmlSecret) .Values.access.customCertificatesSecretName) .Values.systemYamlOverride.existingSecret) .Values.artifactory.license.secret) .Values.artifactory.userPluginSecrets) (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey)) (and .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName)) (or .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName)) .Values.artifactory.unifiedSecretInstallation }} +****************************************** WARNING ************************************************************************************************** +* The unifiedSecretInstallation flag is currently enabled, which creates the unified secret.
The existing secrets will continue as separate secrets.* +* Update the values.yaml with the existing secrets to add them to the unified secret. * +***************************************************************************************************************************************************** +{{- end }} + +1. Get the Artifactory URL by running these commands: + + {{- if .Values.ingress.enabled }} + {{- range .Values.ingress.hosts }} + http://{{ . }} + {{- end }} + + {{- else if contains "NodePort" .Values.nginx.service.type }} + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "artifactory.nginx.fullname" . }}) + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + echo http://$NODE_IP:$NODE_PORT/ + + {{- else if contains "LoadBalancer" .Values.nginx.service.type }} + + NOTE: It may take a few minutes for the LoadBalancer IP to be available. + You can watch the status of the service by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "artifactory.nginx.fullname" . }}' + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "artifactory.nginx.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + echo http://$SERVICE_IP/ + + {{- else if contains "ClusterIP" .Values.nginx.service.type }} + export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "component={{ .Values.nginx.name }}" -o jsonpath="{.items[0].metadata.name}") + echo http://127.0.0.1:{{ .Values.nginx.externalPortHttp }} + kubectl port-forward --namespace {{ .Release.Namespace }} $POD_NAME {{ .Values.nginx.externalPortHttp }}:{{ .Values.nginx.internalPortHttp }} + + {{- end }} + +2. 
Open Artifactory in your browser + Default credentials for Artifactory: + user: admin + password: password + +{{ if .Values.artifactory.javaOpts.jmx.enabled }} +JMX configuration: +{{- if not (contains "LoadBalancer" .Values.artifactory.service.type) }} +If you want to access JMX from your computer with jconsole, you should set ".Values.artifactory.service.type=LoadBalancer" !!! +{{ end }} + +1. Get the Artifactory service IP: +export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "artifactory.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + +2. Map the service name to the service IP in /etc/hosts: +sudo sh -c "echo \"${SERVICE_IP} {{ template "artifactory.fullname" . }}\" >> /etc/hosts" + +3. Launch jconsole: +jconsole {{ template "artifactory.fullname" . }}:{{ .Values.artifactory.javaOpts.jmx.port }} +{{- end }} + +{{- if and .Values.nginx.enabled .Values.ingress.hosts }} +***************************************** WARNING ***************************************************************************** +* When nginx is enabled, .Values.ingress.hosts will be deprecated in upcoming releases * +* It is recommended to use nginx.hosts instead of ingress.hosts +******************************************************************************************************************************* +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/_helpers.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/_helpers.tpl new file mode 100644 index 0000000000..7cea041f7e --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/_helpers.tpl @@ -0,0 +1,528 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "artifactory.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Expand the name of the nginx service.
+*/}} +{{- define "artifactory.nginx.name" -}} +{{- default .Chart.Name .Values.nginx.name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "artifactory.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create a default fully qualified nginx name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "artifactory.nginx.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- printf "%s-%s-%s" .Release.Name $name .Values.nginx.name | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create the name of the service account to use +*/}} +{{- define "artifactory.serviceAccountName" -}} +{{- if .Values.serviceAccount.create -}} +{{ default (include "artifactory.fullname" .) .Values.serviceAccount.name }} +{{- else -}} +{{ default "default" .Values.serviceAccount.name }} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "artifactory.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Generate SSL certificates +*/}} +{{- define "artifactory.gen-certs" -}} +{{- $altNames := list ( printf "%s.%s" (include "artifactory.fullname" .) .Release.Namespace ) ( printf "%s.%s.svc" (include "artifactory.fullname" .) 
.Release.Namespace ) -}} +{{- $ca := genCA "artifactory-ca" 365 -}} +{{- $cert := genSignedCert ( include "artifactory.fullname" . ) nil $altNames 365 $ca -}} +tls.crt: {{ $cert.Cert | b64enc }} +tls.key: {{ $cert.Key | b64enc }} +{{- end -}} + +{{/* +Scheme (http/https) based on Access or Router TLS enabled/disabled +*/}} +{{- define "artifactory.scheme" -}} +{{- if or .Values.access.accessConfig.security.tls .Values.router.tlsEnabled -}} +{{- printf "%s" "https" -}} +{{- else -}} +{{- printf "%s" "http" -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve joinKey value +*/}} +{{- define "artifactory.joinKey" -}} +{{- if .Values.global.joinKey -}} +{{- .Values.global.joinKey -}} +{{- else if .Values.artifactory.joinKey -}} +{{- .Values.artifactory.joinKey -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve jfConnectToken value +*/}} +{{- define "artifactory.jfConnectToken" -}} +{{- .Values.artifactory.jfConnectToken -}} +{{- end -}} + +{{/* +Resolve masterKey value +*/}} +{{- define "artifactory.masterKey" -}} +{{- if .Values.global.masterKey -}} +{{- .Values.global.masterKey -}} +{{- else if .Values.artifactory.masterKey -}} +{{- .Values.artifactory.masterKey -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve joinKeySecretName value +*/}} +{{- define "artifactory.joinKeySecretName" -}} +{{- if .Values.global.joinKeySecretName -}} +{{- .Values.global.joinKeySecretName -}} +{{- else if .Values.artifactory.joinKeySecretName -}} +{{- .Values.artifactory.joinKeySecretName -}} +{{- else -}} +{{ include "artifactory.fullname" . }} +{{- end -}} +{{- end -}} + +{{/* +Resolve jfConnectTokenSecretName value +*/}} +{{- define "artifactory.jfConnectTokenSecretName" -}} +{{- if .Values.artifactory.jfConnectTokenSecretName -}} +{{- .Values.artifactory.jfConnectTokenSecretName -}} +{{- else -}} +{{ include "artifactory.fullname" . 
}} +{{- end -}} +{{- end -}} + +{{/* +Resolve masterKeySecretName value +*/}} +{{- define "artifactory.masterKeySecretName" -}} +{{- if .Values.global.masterKeySecretName -}} +{{- .Values.global.masterKeySecretName -}} +{{- else if .Values.artifactory.masterKeySecretName -}} +{{- .Values.artifactory.masterKeySecretName -}} +{{- else -}} +{{ include "artifactory.fullname" . }} +{{- end -}} +{{- end -}} + +{{/* +Resolve imagePullSecrets value +*/}} +{{- define "artifactory.imagePullSecrets" -}} +{{- if .Values.global.imagePullSecrets }} +imagePullSecrets: +{{- range .Values.global.imagePullSecrets }} + - name: {{ . }} +{{- end }} +{{- else if .Values.imagePullSecrets }} +imagePullSecrets: +{{- range .Values.imagePullSecrets }} + - name: {{ . }} +{{- end }} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainersBegin value +*/}} +{{- define "artifactory.customInitContainersBegin" -}} +{{- if .Values.global.customInitContainersBegin -}} +{{- .Values.global.customInitContainersBegin -}} +{{- end -}} +{{- if .Values.artifactory.customInitContainersBegin -}} +{{- .Values.artifactory.customInitContainersBegin -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainers value +*/}} +{{- define "artifactory.customInitContainers" -}} +{{- if .Values.global.customInitContainers -}} +{{- .Values.global.customInitContainers -}} +{{- end -}} +{{- if .Values.artifactory.customInitContainers -}} +{{- .Values.artifactory.customInitContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumes value +*/}} +{{- define "artifactory.customVolumes" -}} +{{- if .Values.global.customVolumes -}} +{{- .Values.global.customVolumes -}} +{{- end -}} +{{- if .Values.artifactory.customVolumes -}} +{{- .Values.artifactory.customVolumes -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumeMounts value +*/}} +{{- define "artifactory.customVolumeMounts" -}} +{{- if .Values.global.customVolumeMounts -}} +{{- .Values.global.customVolumeMounts -}} +{{- end -}} +{{- if 
.Values.artifactory.customVolumeMounts -}} +{{- .Values.artifactory.customVolumeMounts -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customSidecarContainers value +*/}} +{{- define "artifactory.customSidecarContainers" -}} +{{- if .Values.global.customSidecarContainers -}} +{{- .Values.global.customSidecarContainers -}} +{{- end -}} +{{- if .Values.artifactory.customSidecarContainers -}} +{{- .Values.artifactory.customSidecarContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper artifactory chart image names +*/}} +{{- define "artifactory.getImageInfoByValue" -}} +{{- $dot := index . 0 }} +{{- $indexReference := index . 1 }} +{{- $registryName := index $dot.Values $indexReference "image" "registry" -}} +{{- $repositoryName := index $dot.Values $indexReference "image" "repository" -}} +{{- $tag := default $dot.Chart.AppVersion (index $dot.Values $indexReference "image" "tag") | toString -}} +{{- if $dot.Values.global }} + {{- if and $dot.Values.splitServicesToContainers $dot.Values.global.versions.router (eq $indexReference "router") }} + {{- $tag = $dot.Values.global.versions.router | toString -}} + {{- end -}} + {{- if and $dot.Values.global.versions.initContainers (eq $indexReference "initContainers") }} + {{- $tag = $dot.Values.global.versions.initContainers | toString -}} + {{- end -}} + {{- if and $dot.Values.global.versions.artifactory (or (eq $indexReference "artifactory") (eq $indexReference "nginx") ) }} + {{- $tag = $dot.Values.global.versions.artifactory | toString -}} + {{- end -}} + {{- if $dot.Values.global.imageRegistry }} + {{- printf "%s/%s:%s" $dot.Values.global.imageRegistry $repositoryName $tag -}} + {{- else -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} + {{- end -}} +{{- else -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag -}} +{{- end -}} +{{- end -}} + +{{/* +Return the proper artifactory app version +*/}} +{{- define "artifactory.app.version" -}} +{{- $tag := (splitList ":" ((include 
"artifactory.getImageInfoByValue" (list . "artifactory" )))) | last | toString -}} +{{- printf "%s" $tag -}} +{{- end -}} + +{{/* +Custom certificate copy command +*/}} +{{- define "artifactory.copyCustomCerts" -}} +echo "Copy custom certificates to {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted"; +mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted; +for file in $(ls -1 /tmp/certs/* | grep -v .key | grep -v ":" | grep -v grep); do if [ -f "${file}" ]; then cp -v ${file} {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted; fi done; +if [ -f {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/tls.crt ]; then mv -v {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/tls.crt {{ .Values.artifactory.persistence.mountPath }}/etc/security/keys/trusted/ca.crt; fi; +{{- end -}} + +{{/* +Circle of trust certificates copy command +*/}} +{{- define "artifactory.copyCircleOfTrustCertsCerts" -}} +echo "Copy circle of trust certificates to {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted"; +mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; +for file in $(ls -1 /tmp/circleoftrustcerts/* | grep -v .key | grep -v ":" | grep -v grep); do if [ -f "${file}" ]; then cp -v ${file} {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; fi done; +{{- end -}} + +{{/* +Resolve requiredServiceTypes value +*/}} +{{- define "artifactory.router.requiredServiceTypes" -}} +{{- $requiredTypes := "jfrt,jfac" -}} +{{- if not .Values.access.enabled -}} + {{- $requiredTypes = "jfrt" -}} +{{- end -}} +{{- if .Values.observability.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfob" -}} +{{- end -}} +{{- if .Values.metadata.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfmd" -}} +{{- end -}} +{{- if .Values.event.enabled -}} + {{- $requiredTypes = printf "%s,%s" 
$requiredTypes "jfevt" -}} +{{- end -}} +{{- if .Values.frontend.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jffe" -}} +{{- end -}} +{{- if .Values.jfconnect.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfcon" -}} +{{- end -}} +{{- if .Values.evidence.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfevd" -}} +{{- end -}} +{{- if .Values.mc.enabled -}} + {{- $requiredTypes = printf "%s,%s" $requiredTypes "jfmc" -}} +{{- end -}} +{{- $requiredTypes -}} +{{- end -}} + +{{/* +Check if the image is artifactory pro or not +*/}} +{{- define "artifactory.isImageProType" -}} +{{- if not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository) -}} +{{ true }} +{{- else -}} +{{ false }} +{{- end -}} +{{- end -}} + +{{/* +Check if the artifactory is using derby database +*/}} +{{- define "artifactory.isUsingDerby" -}} +{{- if and (eq (default "derby" .Values.database.type) "derby") (not .Values.postgresql.enabled) -}} +{{ true }} +{{- else -}} +{{ false }} +{{- end -}} +{{- end -}} + +{{/* +nginx scheme (http/https) +*/}} +{{- define "nginx.scheme" -}} +{{- if .Values.nginx.http.enabled -}} +{{- printf "%s" "http" -}} +{{- else -}} +{{- printf "%s" "https" -}} +{{- end -}} +{{- end -}} + +{{/* +nginx command +*/}} +{{- define "nginx.command" -}} +{{- if .Values.nginx.customCommand }} +{{ toYaml .Values.nginx.customCommand }} +{{- end }} +{{- end -}} + +{{/* +nginx port (8080/8443) based on http/https enabled +*/}} +{{- define "nginx.port" -}} +{{- if .Values.nginx.http.enabled -}} +{{- .Values.nginx.http.internalPort -}} +{{- else -}} +{{- .Values.nginx.https.internalPort -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customInitContainers value +*/}} +{{- define "artifactory.nginx.customInitContainers" -}} +{{- if .Values.nginx.customInitContainers -}} +{{- .Values.nginx.customInitContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve customVolumes value +*/}} +{{- define 
"artifactory.nginx.customVolumes" -}} +{{- if .Values.nginx.customVolumes -}} +{{- .Values.nginx.customVolumes -}} +{{- end -}} +{{- end -}} + + +{{/* +Resolve customVolumeMounts nginx value +*/}} +{{- define "artifactory.nginx.customVolumeMounts" -}} +{{- if .Values.nginx.customVolumeMounts -}} +{{- .Values.nginx.customVolumeMounts -}} +{{- end -}} +{{- end -}} + + +{{/* +Resolve customSidecarContainers value +*/}} +{{- define "artifactory.nginx.customSidecarContainers" -}} +{{- if .Values.nginx.customSidecarContainers -}} +{{- .Values.nginx.customSidecarContainers -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve Artifactory pod node selector value +*/}} +{{- define "artifactory.nodeSelector" -}} +nodeSelector: +{{- if .Values.global.nodeSelector }} +{{ toYaml .Values.global.nodeSelector | indent 2 }} +{{- else if .Values.artifactory.nodeSelector }} +{{ toYaml .Values.artifactory.nodeSelector | indent 2 }} +{{- end -}} +{{- end -}} + +{{/* +Resolve Nginx pods node selector value +*/}} +{{- define "nginx.nodeSelector" -}} +nodeSelector: +{{- if .Values.global.nodeSelector }} +{{ toYaml .Values.global.nodeSelector | indent 2 }} +{{- else if .Values.nginx.nodeSelector }} +{{ toYaml .Values.nginx.nodeSelector | indent 2 }} +{{- end -}} +{{- end -}} + +{{/* +Resolve unifiedCustomSecretVolumeName value +*/}} +{{- define "artifactory.unifiedCustomSecretVolumeName" -}} +{{- printf "%s-%s" (include "artifactory.name" .) ("unified-secret-volume") | trunc 63 -}} +{{- end -}} + +{{/* +Check the Duplication of volume names for secrets. If unifiedSecretInstallation is enabled then the method is checking for volume names, +if the volume exists in customVolume then an extra volume with the same name will not be getting added in unifiedSecretInstallation case. +*/}} +{{- define "artifactory.checkDuplicateUnifiedCustomVolume" -}} +{{- if or .Values.global.customVolumes .Values.artifactory.customVolumes -}} +{{- $val := (tpl (include "artifactory.customVolumes" .) .) 
| toJson -}} +{{- contains (include "artifactory.unifiedCustomSecretVolumeName" .) $val | toString -}} +{{- else -}} +{{- printf "%s" "false" -}} +{{- end -}} +{{- end -}} + +{{/* +Calculate the systemYaml from structured and unstructured text input +*/}} +{{- define "artifactory.finalSystemYaml" -}} +{{ tpl (mergeOverwrite (include "artifactory.systemYaml" . | fromYaml) .Values.artifactory.extraSystemYaml | toYaml) . }} +{{- end -}} + +{{/* +Calculate the systemYaml from the unstructured text input +*/}} +{{- define "artifactory.systemYaml" -}} +{{ include (print $.Template.BasePath "/_system-yaml-render.tpl") . }} +{{- end -}} + +{{/* +Metrics enabled +*/}} +{{- define "metrics.enabled" -}} +shared: + metrics: + enabled: true +{{- end }} + +{{/* +Resolve unified secret prepend release name +*/}} +{{- define "artifactory.unifiedSecretPrependReleaseName" -}} +{{- if .Values.artifactory.unifiedSecretPrependReleaseName }} +{{- printf "%s" (include "artifactory.fullname" .) -}} +{{- else }} +{{- printf "%s" (include "artifactory.name" .) -}} +{{- end }} +{{- end }} + +{{/* +Resolve artifactory metrics +*/}} +{{- define "artifactory.metrics" -}} +{{- if .Values.artifactory.openMetrics -}} +{{- if .Values.artifactory.openMetrics.enabled -}} +{{ include "metrics.enabled" . }} +{{- if .Values.artifactory.openMetrics.filebeat }} +{{- if .Values.artifactory.openMetrics.filebeat.enabled }} +{{ include "metrics.enabled" . }} + filebeat: +{{ tpl (.Values.artifactory.openMetrics.filebeat | toYaml) . | indent 6 }} +{{- end -}} +{{- end -}} +{{- end -}} +{{- else if .Values.artifactory.metrics -}} +{{- if .Values.artifactory.metrics.enabled -}} +{{ include "metrics.enabled" . }} +{{- if .Values.artifactory.metrics.filebeat }} +{{- if .Values.artifactory.metrics.filebeat.enabled }} +{{ include "metrics.enabled" . }} + filebeat: +{{ tpl (.Values.artifactory.metrics.filebeat | toYaml) . 
| indent 6 }} +{{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Resolve nginx hosts value +*/}} +{{- define "artifactory.nginx.hosts" -}} +{{- if .Values.ingress.hosts }} +{{- range .Values.ingress.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?.+)\.{{ . }} + {{- end -}} +{{- end -}} +{{- else if .Values.nginx.hosts }} +{{- range .Values.nginx.hosts -}} + {{- if contains "." . -}} + {{ "" | indent 0 }} ~(?.+)\.{{ . }} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/_system-yaml-render.tpl b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/_system-yaml-render.tpl new file mode 100644 index 0000000000..deaa773ea9 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/_system-yaml-render.tpl @@ -0,0 +1,5 @@ +{{- if .Values.artifactory.systemYaml -}} +{{- tpl .Values.artifactory.systemYaml . -}} +{{- else -}} +{{ (tpl ( $.Files.Get "files/system.yaml" ) .) }} +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/additional-resources.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/additional-resources.yaml new file mode 100644 index 0000000000..c4d06f08ad --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/additional-resources.yaml @@ -0,0 +1,3 @@ +{{ if .Values.additionalResources }} +{{ tpl .Values.additionalResources . 
}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/admin-bootstrap-creds.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/admin-bootstrap-creds.yaml new file mode 100644 index 0000000000..eb2d613c6c --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/admin-bootstrap-creds.yaml @@ -0,0 +1,15 @@ +{{- if not (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} +{{- if and .Values.artifactory.admin.password (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-bootstrap-creds + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + bootstrap.creds: {{ (printf "%s@%s=%s" .Values.artifactory.admin.username .Values.artifactory.admin.ip .Values.artifactory.admin.password) | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-access-config.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-access-config.yaml new file mode 100644 index 0000000000..4fcf85d94c --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-access-config.yaml @@ -0,0 +1,15 @@ +{{- if and .Values.access.accessConfig (not .Values.artifactory.unifiedSecretInstallation) }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory.fullname" . }}-access-config + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +type: Opaque +stringData: + access.config.patch.yml: | +{{ tpl (toYaml .Values.access.accessConfig) . 
| indent 4 }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-binarystore-secret.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-binarystore-secret.yaml new file mode 100644 index 0000000000..6b721dd4c7 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-binarystore-secret.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.artifactory.persistence.customBinarystoreXmlSecret) (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-binarystore + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +stringData: + binarystore.xml: |- +{{- if .Values.artifactory.persistence.binarystoreXml }} +{{ tpl .Values.artifactory.persistence.binarystoreXml . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/binarystore.xml" ) . | indent 4 }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-configmaps.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-configmaps.yaml new file mode 100644 index 0000000000..359fa07d25 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-configmaps.yaml @@ -0,0 +1,13 @@ +{{ if .Values.artifactory.configMaps }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory.fullname" . }}-configmaps + labels: + app: {{ template "artifactory.fullname" . }} + chart: {{ .Chart.Name }}-{{ .Chart.Version }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: +{{ tpl .Values.artifactory.configMaps . 
| indent 2 }} +{{ end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-custom-secrets.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-custom-secrets.yaml new file mode 100644 index 0000000000..4b73e79fc3 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-custom-secrets.yaml @@ -0,0 +1,19 @@ +{{- if and .Values.artifactory.customSecrets (not .Values.artifactory.unifiedSecretInstallation) }} +{{- range .Values.artifactory.customSecrets }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory.fullname" $ }}-{{ .name }} + labels: + app: "{{ template "artifactory.name" $ }}" + chart: "{{ template "artifactory.chart" $ }}" + component: "{{ $.Values.artifactory.name }}" + heritage: {{ $.Release.Service | quote }} + release: {{ $.Release.Name | quote }} +type: Opaque +stringData: + {{ .key }}: | +{{ .data | indent 4 -}} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-database-secrets.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-database-secrets.yaml new file mode 100644 index 0000000000..f98d422e95 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-database-secrets.yaml @@ -0,0 +1,24 @@ +{{- if and (not .Values.database.secrets) (not .Values.postgresql.enabled) (not .Values.artifactory.unifiedSecretInstallation) }} +{{- if or .Values.database.url .Values.database.user .Values.database.password }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory.fullname" . }}-database-creds + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +type: Opaque +data: + {{- with .Values.database.url }} + db-url: {{ tpl . $ | b64enc | quote }} + {{- end }} + {{- with .Values.database.user }} + db-user: {{ tpl . $ | b64enc | quote }} + {{- end }} + {{- with .Values.database.password }} + db-password: {{ tpl . $ | b64enc | quote }} + {{- end }} +{{- end }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-gcp-credentials-secret.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-gcp-credentials-secret.yaml new file mode 100644 index 0000000000..72dee6bb84 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-gcp-credentials-secret.yaml @@ -0,0 +1,16 @@ +{{- if not .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} +{{- if and (.Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled) (not .Values.artifactory.unifiedSecretInstallation) }} +kind: Secret +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-gcpcreds + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +stringData: + gcp.credentials.json: |- +{{ tpl .Values.artifactory.persistence.googleStorage.gcpServiceAccount.config . 
| indent 4 }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-hpa.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-hpa.yaml new file mode 100644 index 0000000000..01f8a9fb70 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-hpa.yaml @@ -0,0 +1,29 @@ +{{- if .Values.autoscaling.enabled }} + {{- if semverCompare ">=v1.23.0-0" .Capabilities.KubeVersion.Version }} +apiVersion: autoscaling/v2 + {{- else }} +apiVersion: autoscaling/v2beta2 + {{- end }} +kind: HorizontalPodAutoscaler +metadata: + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + name: {{ template "artifactory.fullname" . }} +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: StatefulSet + name: {{ template "artifactory.fullname" . }} + minReplicas: {{ .Values.autoscaling.minReplicas }} + maxReplicas: {{ .Values.autoscaling.maxReplicas }} + metrics: + - type: Resource + resource: + name: cpu + target: + type: Utilization + averageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-installer-info.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-installer-info.yaml new file mode 100644 index 0000000000..cfb95b67d3 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-installer-info.yaml @@ -0,0 +1,16 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-installer-info + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + installer-info.json: | +{{- if .Values.installerInfo -}} +{{- tpl .Values.installerInfo . | nindent 4 -}} +{{- else -}} +{{ (tpl ( .Files.Get "files/installer-info.json" | nindent 4 ) .) }} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-license-secret.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-license-secret.yaml new file mode 100644 index 0000000000..ba83aaf24d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-license-secret.yaml @@ -0,0 +1,16 @@ +{{ if and (not .Values.artifactory.unifiedSecretInstallation) (not .Values.artifactory.license.secret) (not .Values.artifactory.license.licenseKey) }} +{{- with .Values.artifactory.license.licenseKey }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "artifactory.fullname" $ }}-license + labels: + app: {{ template "artifactory.name" $ }} + chart: {{ template "artifactory.chart" $ }} + heritage: {{ $.Release.Service }} + release: {{ $.Release.Name }} +type: Opaque +data: + artifactory.lic: {{ . | b64enc | quote }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-migration-scripts.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-migration-scripts.yaml new file mode 100644 index 0000000000..4b1ba40276 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-migration-scripts.yaml @@ -0,0 +1,18 @@ +{{- if .Values.artifactory.migration.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory.fullname" . }}-migration-scripts + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . 
}} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + migrate.sh: | +{{ .Files.Get "files/migrate.sh" | indent 4 }} + migrationHelmInfo.yaml: | +{{ .Files.Get "files/migrationHelmInfo.yaml" | indent 4 }} + migrationStatus.sh: | +{{ .Files.Get "files/migrationStatus.sh" | indent 4 }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-networkpolicy.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-networkpolicy.yaml new file mode 100644 index 0000000000..d24203dc99 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-networkpolicy.yaml @@ -0,0 +1,34 @@ +{{- range .Values.networkpolicy }} +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: {{ template "artifactory.fullname" $ }}-{{ .name }}-networkpolicy + labels: + app: {{ template "artifactory.name" $ }} + chart: {{ template "artifactory.chart" $ }} + release: {{ $.Release.Name }} + heritage: {{ $.Release.Service }} +spec: +{{- if .podSelector }} + podSelector: +{{ .podSelector | toYaml | trimSuffix "\n" | indent 4 -}} +{{ else }} + podSelector: {} +{{- end }} + policyTypes: + {{- if .ingress }} + - Ingress + {{- end }} + {{- if .egress }} + - Egress + {{- end }} +{{- if .ingress }} + ingress: +{{ .ingress | toYaml | trimSuffix "\n" | indent 2 -}} +{{- end }} +{{- if .egress }} + egress: +{{ .egress | toYaml | trimSuffix "\n" | indent 2 -}} +{{- end }} +--- +{{- end -}} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-nfs-pvc.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-nfs-pvc.yaml new file mode 100644 index 0000000000..75d6d0c53d --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-nfs-pvc.yaml @@ -0,0 +1,101 @@ +{{- if eq 
.Values.artifactory.persistence.type "nfs" }} +### Artifactory HA data +apiVersion: v1 +kind: PersistentVolume +metadata: + name: {{ template "artifactory.fullname" . }}-data-pv + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + id: {{ template "artifactory.name" . }}-data-pv + type: nfs-volume +spec: + {{- if .Values.artifactory.persistence.nfs.mountOptions }} + mountOptions: +{{ toYaml .Values.artifactory.persistence.nfs.mountOptions | indent 4 }} + {{- end }} + capacity: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + nfs: + server: {{ .Values.artifactory.persistence.nfs.ip }} + path: "{{ .Values.artifactory.persistence.nfs.haDataMount }}" + readOnly: false +--- +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-data-pvc + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + type: nfs-volume +spec: + accessModes: + - ReadWriteOnce + storageClassName: "" + resources: + requests: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + selector: + matchLabels: + id: {{ template "artifactory.name" . }}-data-pv + app: {{ template "artifactory.name" . }} + release: {{ .Release.Name }} +--- +### Artifactory HA backup +apiVersion: v1 +kind: PersistentVolume +metadata: + name: {{ template "artifactory.fullname" . }}-backup-pv + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + id: {{ template "artifactory.name" . 
}}-backup-pv + type: nfs-volume +spec: + {{- if .Values.artifactory.persistence.nfs.mountOptions }} + mountOptions: +{{ toYaml .Values.artifactory.persistence.nfs.mountOptions | indent 4 }} + {{- end }} + capacity: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + accessModes: + - ReadWriteOnce + persistentVolumeReclaimPolicy: Retain + nfs: + server: {{ .Values.artifactory.persistence.nfs.ip }} + path: "{{ .Values.artifactory.persistence.nfs.haBackupMount }}" + readOnly: false +--- +kind: PersistentVolumeClaim +apiVersion: v1 +metadata: + name: {{ template "artifactory.fullname" . }}-backup-pvc + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + type: nfs-volume +spec: + accessModes: + - ReadWriteOnce + storageClassName: "" + resources: + requests: + storage: {{ .Values.artifactory.persistence.nfs.capacity }} + selector: + matchLabels: + id: {{ template "artifactory.name" . }}-backup-pv + app: {{ template "artifactory.name" . }} + release: {{ .Release.Name }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-pdb.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-pdb.yaml new file mode 100644 index 0000000000..68876d23b0 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/artifactory-pdb.yaml @@ -0,0 +1,24 @@ +{{- if .Values.artifactory.minAvailable -}} +{{- if semverCompare "= 107.79.x), just set databaseUpgradeReady=true \n" .Values.databaseUpgradeReady | quote }} +{{- end }} +{{- with .Values.artifactory.statefulset.annotations }} + annotations: +{{ toYaml . | indent 4 }} +{{- end }} +{{- if and (eq (include "artifactory.isUsingDerby" .) 
"true") (gt (.Values.artifactory.replicaCount | int64) 1) }} + {{- fail "Derby database is not supported in HA mode" }} +{{- end }} +{{- if .Values.artifactory.postStartCommand }} + {{- fail ".Values.artifactory.postStartCommand is not supported and should be replaced with .Values.artifactory.lifecycle.postStart.exec.command" }} +{{- end }} +{{- if eq .Values.artifactory.persistence.type "aws-s3" }} + {{- fail "\nPersistence storage type 'aws-s3' is deprecated and is not supported and should be replaced with 'aws-s3-v3'" }} +{{- end }} +{{- if or .Values.artifactory.persistence.googleStorage.identity .Values.artifactory.persistence.googleStorage.credential }} + {{- fail "\nGCP Bucket Authentication with Identity and Credential is deprecated" }} +{{- end }} +{{- if (eq (.Values.artifactory.setSecurityContext | toString) "false" ) }} + {{- fail "\n You need to set security context at the pod level. .Values.artifactory.setSecurityContext is no longer supported. Replace it with .Values.artifactory.podSecurityContext" }} +{{- end }} +{{- if or .Values.artifactory.uid .Values.artifactory.gid }} +{{- if or (not (eq (.Values.artifactory.uid | toString) "1030" )) (not (eq (.Values.artifactory.gid | toString) "1030" )) }} + {{- fail "\n .Values.artifactory.uid and .Values.artifactory.gid are no longer supported. You need to set these values at the pod security context level. Replace them with .Values.artifactory.podSecurityContext.runAsUser .Values.artifactory.podSecurityContext.runAsGroup and .Values.artifactory.podSecurityContext.fsGroup" }} +{{- end }} +{{- end }} +{{- if or .Values.artifactory.fsGroupChangePolicy .Values.artifactory.seLinuxOptions }} + {{- fail "\n .Values.artifactory.fsGroupChangePolicy and .Values.artifactory.seLinuxOptions are no longer supported. You need to set these values at the pod security context level. 
Replace them with .Values.artifactory.podSecurityContext.fsGroupChangePolicy and .Values.artifactory.podSecurityContext.seLinuxOptions" }} +{{- end }} +{{- if .Values.initContainerImage }} + {{- fail "\n .Values.initContainerImage is no longer supported. Replace it with .Values.initContainers.image.registry .Values.initContainers.image.repository and .Values.initContainers.image.tag" }} +{{- end }} +spec: + serviceName: {{ template "artifactory.name" . }} + replicas: {{ .Values.artifactory.replicaCount }} + updateStrategy: {{- toYaml .Values.artifactory.updateStrategy | nindent 4 }} + selector: + matchLabels: + app: {{ template "artifactory.name" . }} + role: {{ template "artifactory.name" . }} + release: {{ .Release.Name }} + template: + metadata: + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + role: {{ template "artifactory.name" . }} + component: {{ .Values.artifactory.name }} + release: {{ .Release.Name }} + {{- with .Values.artifactory.labels }} +{{ toYaml . | indent 8 }} + {{- end }} + annotations: + {{- if not .Values.artifactory.unifiedSecretInstallation }} + checksum/database-secrets: {{ include (print $.Template.BasePath "/artifactory-database-secrets.yaml") . | sha256sum }} + checksum/binarystore: {{ include (print $.Template.BasePath "/artifactory-binarystore-secret.yaml") . | sha256sum }} + checksum/systemyaml: {{ include (print $.Template.BasePath "/artifactory-system-yaml.yaml") . | sha256sum }} + {{- if .Values.access.accessConfig }} + checksum/access-config: {{ include (print $.Template.BasePath "/artifactory-access-config.yaml") . | sha256sum }} + {{- end }} + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + checksum/gcpcredentials: {{ include (print $.Template.BasePath "/artifactory-gcp-credentials-secret.yaml") . 
| sha256sum }} + {{- end }} + {{- if not (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} + checksum/admin-creds: {{ include (print $.Template.BasePath "/admin-bootstrap-creds.yaml") . | sha256sum }} + {{- end }} + {{- else }} + checksum/artifactory-unified-secret: {{ include (print $.Template.BasePath "/artifactory-unified-secret.yaml") . | sha256sum }} + {{- end }} + {{- with .Values.artifactory.annotations }} +{{ toYaml . | indent 8 }} + {{- end }} + spec: + {{- if .Values.artifactory.schedulerName }} + schedulerName: {{ .Values.artifactory.schedulerName | quote }} + {{- end }} + {{- if .Values.artifactory.priorityClass.existingPriorityClass }} + priorityClassName: {{ .Values.artifactory.priorityClass.existingPriorityClass }} + {{- else -}} + {{- if .Values.artifactory.priorityClass.create }} + priorityClassName: {{ default (include "artifactory.fullname" .) .Values.artifactory.priorityClass.name }} + {{- end }} + {{- end }} + serviceAccountName: {{ template "artifactory.serviceAccountName" . }} + terminationGracePeriodSeconds: {{ add .Values.artifactory.terminationGracePeriodSeconds 10 }} + {{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }} +{{- include "artifactory.imagePullSecrets" . | indent 6 }} + {{- end }} + {{- if .Values.artifactory.podSecurityContext.enabled }} + securityContext: {{- omit .Values.artifactory.podSecurityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.artifactory.topologySpreadConstraints }} + topologySpreadConstraints: +{{ tpl (toYaml .Values.artifactory.topologySpreadConstraints) . | indent 8 }} + {{- end }} + initContainers: + {{- if or .Values.artifactory.customInitContainersBegin .Values.global.customInitContainersBegin }} +{{ tpl (include "artifactory.customInitContainersBegin" .) . 
| indent 6 }} + {{- end }} + {{- if .Values.artifactory.persistence.enabled }} + {{- if .Values.artifactory.deleteDBPropertiesOnStartup }} + - name: "delete-db-properties" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - 'rm -fv {{ .Values.artifactory.persistence.mountPath }}/etc/db.properties' + volumeMounts: + - name: artifactory-volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + {{- end }} + {{- end }} + {{- if or (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) .Values.artifactory.admin.password }} + - name: "access-bootstrap-creds" + image: {{ include "artifactory.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > + echo "Preparing {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access; + cp -Lrf /tmp/access/bootstrap.creds {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds; + chmod 600 {{ .Values.artifactory.persistence.mountPath }}/etc/access/bootstrap.creds; + volumeMounts: + - name: artifactory-volume + mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + {{- if or (not .Values.artifactory.unifiedSecretInstallation) (and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey) }} + - name: access-bootstrap-creds + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/access/bootstrap.creds" + {{- if and .Values.artifactory.admin.secret .Values.artifactory.admin.dataKey }} + subPath: {{ .Values.artifactory.admin.dataKey }} + {{- else }} + subPath: "bootstrap.creds" + {{- end }} + {{- end }} + - name: 'copy-system-configurations' + image: {{ include "artifactory.getImageInfoByValue" (list . 
"initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - '/bin/bash' + - '-c' + - > + if [[ -e "{{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml" ]]; then chmod 644 {{ .Values.artifactory.persistence.mountPath }}/etc/filebeat.yaml; fi; + echo "Copy system.yaml to {{ .Values.artifactory.persistence.mountPath }}/etc"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access/keys/trusted; + {{- if .Values.systemYamlOverride.existingSecret }} + cp -fv /tmp/etc/{{ .Values.systemYamlOverride.dataKey }} {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- else }} + cp -fv /tmp/etc/system.yaml {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml; + {{- end }} + echo "Copy binarystore.xml file"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory; + cp -fv /tmp/etc/artifactory/binarystore.xml {{ .Values.artifactory.persistence.mountPath }}/etc/artifactory/binarystore.xml; + {{- if .Values.access.accessConfig }} + echo "Copy access.config.patch.yml to {{ .Values.artifactory.persistence.mountPath }}/etc/access"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/access; + cp -fv /tmp/etc/access.config.patch.yml {{ .Values.artifactory.persistence.mountPath }}/etc/access/access.config.patch.yml; + {{- end }} + {{- if .Values.access.resetAccessCAKeys }} + echo "Resetting Access CA Keys"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys; + touch {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/reset_ca_keys; + {{- end }} + {{- if .Values.access.customCertificatesSecretName }} + echo "Copying 
custom certificates to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys; + cp -fv /tmp/etc/tls.crt {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/ca.crt; + cp -fv /tmp/etc/tls.key {{ .Values.artifactory.persistence.mountPath }}/bootstrap/etc/access/keys/ca.private.key; + {{- end }} + {{- if or .Values.artifactory.joinKey .Values.global.joinKey .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + echo "Copy joinKey to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security; + echo -n ${ARTIFACTORY_JOIN_KEY} > {{ .Values.artifactory.persistence.mountPath }}/bootstrap/access/etc/security/join.key; + {{- end }} + {{- if or .Values.artifactory.jfConnectToken .Values.artifactory.jfConnectTokenSecretName }} + echo "Copy jfConnectToken to {{ .Values.artifactory.persistence.mountPath }}/bootstrap/jfconnect/registration_token"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/bootstrap/jfconnect/; + echo -n ${ARTIFACTORY_JFCONNECT_TOKEN} > {{ .Values.artifactory.persistence.mountPath }}/bootstrap/jfconnect/registration_token; + {{- end }} + {{- if or .Values.artifactory.masterKey .Values.global.masterKey .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + echo "Copy masterKey to {{ .Values.artifactory.persistence.mountPath }}/etc/security"; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/etc/security; + echo -n ${ARTIFACTORY_MASTER_KEY} > {{ .Values.artifactory.persistence.mountPath }}/etc/security/master.key; + {{- end }} + env: + {{- if or .Values.artifactory.joinKey .Values.global.joinKey .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + - name: ARTIFACTORY_JOIN_KEY + valueFrom: + secretKeyRef: + {{- if or (not 
.Values.artifactory.unifiedSecretInstallation) .Values.artifactory.joinKeySecretName .Values.global.joinKeySecretName }} + name: {{ include "artifactory.joinKeySecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: join-key + {{- end }} + {{- if or .Values.artifactory.jfConnectToken .Values.artifactory.jfConnectSecretName }} + - name: ARTIFACTORY_JFCONNECT_TOKEN + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.jfConnectTokenSecretName }} + name: {{ include "artifactory.jfConnectTokenSecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: jfconnect-token + {{- end }} + {{- if or .Values.artifactory.masterKey .Values.global.masterKey .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + - name: ARTIFACTORY_MASTER_KEY + valueFrom: + secretKeyRef: + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.masterKeySecretName .Values.global.masterKeySecretName }} + name: {{ include "artifactory.masterKeySecretName" . }} + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: master-key + {{- end }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.systemYamlOverride.existingSecret }} + - name: systemyaml + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + {{- if .Values.systemYamlOverride.existingSecret }} + mountPath: "/tmp/etc/{{.Values.systemYamlOverride.dataKey}}" + subPath: {{ .Values.systemYamlOverride.dataKey }} + {{- else }} + mountPath: "/tmp/etc/system.yaml" + subPath: "system.yaml" + {{- end }} + + ######################## Binarystore ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Access config ########################## + {{- if .Values.access.accessConfig }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + - name: access-config + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/access.config.patch.yml" + subPath: "access.config.patch.yml" + {{- end }} + + ######################## Access certs external secret ########################## + {{- if .Values.access.customCertificatesSecretName }} + - name: access-certs + mountPath: "/tmp/etc/tls.crt" + subPath: tls.crt + - name: access-certs + mountPath: "/tmp/etc/tls.key" + subPath: tls.key + {{- end }} + + {{- if or .Values.artifactory.customCertificates.enabled .Values.global.customCertificates.enabled }} + - name: copy-custom-certificates + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory.copyCustomCerts" . 
| indent 10 }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: ca-certs + mountPath: "/tmp/certs" + {{- end }} + + {{- if .Values.artifactory.circleOfTrustCertificatesSecret }} + - name: copy-circle-of-trust-certificates + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - 'bash' + - '-c' + - > +{{ include "artifactory.copyCircleOfTrustCertsCerts" . | indent 10 }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath }} + - name: circle-of-trust-certs + mountPath: "/tmp/circleoftrustcerts" + {{- end }} + + {{- if .Values.waitForDatabase }} + {{- if .Values.postgresql.enabled }} + - name: "wait-for-db" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + resources: +{{ toYaml .Values.initContainers.resources | indent 10 }} + command: + - /bin/bash + - -c + - | + echo "Waiting for postgresql to come up" + ready=false; + while ! $ready; do echo waiting; + timeout 2s bash -c " + {{- if .Values.artifactory.migration.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.artifactory.migration.preStartCommand . 
}}; + {{- end }} + scriptsPath="/opt/jfrog/artifactory/app/bin"; + mkdir -p $scriptsPath; + echo "Copy migration scripts and Run migration"; + cp -fv /tmp/migrate.sh $scriptsPath/migrate.sh; + cp -fv /tmp/migrationHelmInfo.yaml $scriptsPath/migrationHelmInfo.yaml; + cp -fv /tmp/migrationStatus.sh $scriptsPath/migrationStatus.sh; + mkdir -p {{ .Values.artifactory.persistence.mountPath }}/log; + bash $scriptsPath/migrationStatus.sh {{ include "artifactory.app.version" . }} {{ .Values.artifactory.migration.timeoutSeconds }} > >(tee {{ .Values.artifactory.persistence.mountPath }}/log/helm-migration.log) 2>&1; + env: + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: migration-scripts + mountPath: "/tmp/migrate.sh" + subPath: migrate.sh + - name: migration-scripts + mountPath: "/tmp/migrationHelmInfo.yaml" + subPath: migrationHelmInfo.yaml + - name: migration-scripts + mountPath: "/tmp/migrationStatus.sh" + subPath: migrationStatus.sh + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystore Xml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: "binarystore.xml" + + ######################## Artifactory persistence google storage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + {{- end }} + + ######################## CustomVolumeMounts ########################## + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory.customVolumeMounts" .) . | indent 8 }} + {{- end }} +{{- end }} + {{- if .Values.hostAliases }} + hostAliases: +{{ toYaml .Values.hostAliases | indent 6 }} + {{- end }} + containers: + {{- if .Values.splitServicesToContainers }} + - name: {{ .Values.router.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "router") }} + imagePullPolicy: {{ .Values.router.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/router/app/bin/entrypoint-router.sh + {{- with .Values.router.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_ROUTER_TOPOLOGY_LOCAL_REQUIREDSERVICETYPES + value: {{ include "artifactory.router.requiredServiceTypes" . }} +{{- with .Values.router.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + ports: + - name: http + containerPort: {{ .Values.router.internalPort }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.router.persistence.mountPath | quote }} +{{- with .Values.router.customVolumeMounts }} +{{ tpl . $ | indent 8 }} +{{- end }} + resources: +{{ toYaml .Values.router.resources | indent 10 }} + {{- if .Values.router.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.router.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.router.readinessProbe.enabled }} + readinessProbe: +{{ tpl .Values.router.readinessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.router.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.router.livenessProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.enabled }} + - name: {{ .Values.frontend.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/third-party/node/bin/node /opt/jfrog/artifactory/app/frontend/bin/server/dist/bundle.js /opt/jfrog/artifactory/app/frontend + {{- with .Values.frontend.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if and (gt (.Values.artifactory.replicaCount | int64) 1) (eq (include "artifactory.isImageProType" .) "true") (eq (include "artifactory.isUsingDerby" .) "false") }} + - name : JF_SHARED_NODE_HAENABLED + value: "true" + {{- end }} +{{- with .Values.frontend.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.frontend.resources | indent 10 }} + {{- if .Values.frontend.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.frontend.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.frontend.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.frontend.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.evidence.enabled }} + - name: {{ .Values.evidence.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/evidence/bin/jf-evidence start + {{- with .Values.evidence.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.evidence.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.evidence.internalPort }} + name: http-evidence + - containerPort: {{ .Values.evidence.externalPort }} + name: grpc-evidence + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.evidence.resources | indent 10 }} + {{- if .Values.evidence.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.evidence.startupProbe.config . 
| indent 10 }} + {{- end }} + {{- if .Values.evidence.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.evidence.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.metadata.enabled }} + - name: {{ .Values.metadata.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/metadata/bin/jf-metadata start + {{- with .Values.metadata.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . 
}}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.metadata.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory.customVolumeMounts" .) . | indent 8 }} + {{- end }} + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.metadata.resources | indent 10 }} + {{- if .Values.metadata.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.metadata.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.metadata.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.metadata.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.event.enabled }} + - name: {{ .Values.event.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . 
"artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/event/bin/jf-event start + {{- with .Values.event.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.event.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.event.resources | indent 10 }} + {{- if .Values.event.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.event.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.event.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.event.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.jfconnect.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" .Values.artifactory.image.repository)) }} + - name: {{ .Values.jfconnect.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/jfconnect/bin/jf-connect start + {{- with .Values.jfconnect.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.jfconnect.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.jfconnect.resources | indent 10 }} + {{- if .Values.jfconnect.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.jfconnect.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.jfconnect.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.jfconnect.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if and .Values.access.enabled (not (.Values.access.runOnArtifactoryTomcat | default false)) }} + - name: {{ .Values.access.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.access.resources }} + resources: +{{ toYaml .Values.access.resources | indent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + set -e; + {{- if .Values.access.preStartCommand }} + echo "Running custom preStartCommand command"; + {{ tpl .Values.access.preStartCommand . }}; + {{- end }} + exec /opt/jfrog/artifactory/app/access/bin/entrypoint-access.sh + {{- with .Values.access.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + {{- if and (gt (.Values.artifactory.replicaCount | int64) 1) (eq (include "artifactory.isImageProType" .) "true") (eq (include "artifactory.isUsingDerby" .) 
"false") }} + - name : JF_SHARED_NODE_HAENABLED + value: "true" + {{- end }} + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . }} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . 
}} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.access.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence googleStorage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + {{- end }} + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory.customVolumeMounts" .) . | indent 8 }} + {{- end }} + {{- if .Values.access.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.access.startupProbe.config . 
| indent 10 }} + {{- end }} + {{- if semverCompare " + exec /opt/jfrog/artifactory/app/third-party/java/bin/java {{ .Values.federation.extraJavaOpts }} -jar /opt/jfrog/artifactory/app/rtfs/lib/jf-rtfs + {{- with .Values.federation.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name + # TODO - Password,Url,Username - should be derived from env variable +{{- with .Values.federation.extraEnvironmentVariables }} +{{ tpl (toYaml .) $ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.federation.resources | indent 10 }} + {{- if .Values.federation.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.federation.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.federation.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.federation.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- if .Values.observability.enabled }} + - name: {{ .Values.observability.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + exec /opt/jfrog/artifactory/app/observability/bin/jf-observability start + {{- with .Values.observability.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + - name: JF_SHARED_NODE_ID + valueFrom: + fieldRef: + fieldPath: metadata.name +{{- with .Values.observability.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + volumeMounts: + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + resources: +{{ toYaml .Values.observability.resources | indent 10 }} + {{- if .Values.observability.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.observability.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.observability.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.observability.livenessProbe.config . | indent 10 }} + {{- end }} + {{- end }} + {{- end }} + - name: {{ .Values.artifactory.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "artifactory") }} + imagePullPolicy: {{ .Values.artifactory.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.artifactory.resources }} + resources: +{{ toYaml .Values.artifactory.resources | indent 10 }} + {{- end }} + command: + - '/bin/bash' + - '-c' + - > + set -e; + if [ -d /artifactory_extra_conf ] && [ -d /artifactory_bootstrap ]; then + echo "Copying bootstrap config from /artifactory_extra_conf to /artifactory_bootstrap"; + cp -Lrfv /artifactory_extra_conf/ /artifactory_bootstrap/; + fi; + {{- if .Values.artifactory.configMapName }} + echo "Copying bootstrap configs"; + cp -Lrf /bootstrap/* /artifactory_bootstrap/; + {{- end }} + {{- if .Values.artifactory.userPluginSecrets }} + echo "Copying plugins"; + cp -Lrf /tmp/plugin/*/* /artifactory_bootstrap/plugins; + {{- end }} + {{- range .Values.artifactory.copyOnEveryStartup }} + {{- $targetPath := printf "%s/%s" $.Values.artifactory.persistence.mountPath .target }} + {{- $baseDirectory := regexFind ".*/" $targetPath }} + mkdir -p {{ $baseDirectory }}; + cp -Lrf {{ .source }} {{ $.Values.artifactory.persistence.mountPath }}/{{ .target }}; + {{- end }} + {{- if .Values.artifactory.preStartCommand }} + echo "Running custom 
preStartCommand command"; + {{ tpl .Values.artifactory.preStartCommand . }}; + {{- end }} + exec /entrypoint-artifactory.sh + {{- with .Values.artifactory.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + env: + {{- if and (gt (.Values.artifactory.replicaCount | int64) 1) (eq (include "artifactory.isImageProType" .) "true") (eq (include "artifactory.isUsingDerby" .) "false") }} + - name : JF_SHARED_NODE_HAENABLED + value: "true" + {{- end }} + {{- if .Values.aws.license.enabled }} + - name: IS_AWS_LICENSE + value: "true" + - name: AWS_REGION + value: {{ .Values.aws.region | quote }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: AWS_WEB_IDENTITY_REFRESH_TOKEN_FILE + value: "/var/run/secrets/product-license/license_token" + - name: AWS_ROLE_ARN + valueFrom: + secretKeyRef: + name: {{ .Values.aws.licenseConfigSecretName }} + key: iam_role + {{- end }} + {{- end }} + {{- if .Values.splitServicesToContainers }} + - name : JF_ROUTER_ENABLED + value: "true" + - name : JF_ROUTER_SERVICE_ENABLED + value: "false" + - name : JF_EVENT_ENABLED + value: "false" + - name : JF_METADATA_ENABLED + value: "false" + - name : JF_FRONTEND_ENABLED + value: "false" + - name: JF_FEDERATION_ENABLED + value: "false" + - name : JF_OBSERVABILITY_ENABLED + value: "false" + - name : JF_JFCONNECT_SERVICE_ENABLED + value: "false" + - name : JF_EVIDENCE_ENABLED + value: "false" + {{- if not (.Values.access.runOnArtifactoryTomcat | default false) }} + - name : JF_ACCESS_ENABLED + value: "false" + {{- end}} + {{- end}} + {{- if and (not .Values.waitForDatabase) (not .Values.postgresql.enabled) }} + - name: SKIP_WAIT_FOR_EXTERNAL_DB + value: "true" + {{- end }} + {{- if or .Values.database.secrets.user .Values.database.user }} + - name: JF_SHARED_DATABASE_USERNAME + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.user }} + name: {{ tpl .Values.database.secrets.user.name . }} + key: {{ tpl .Values.database.secrets.user.key . 
}} + {{- else if .Values.database.user }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-user + {{- end }} + {{- end }} + {{ if or .Values.database.secrets.password .Values.database.password .Values.postgresql.enabled }} + - name: JF_SHARED_DATABASE_PASSWORD + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.password }} + name: {{ tpl .Values.database.secrets.password.name . }} + key: {{ tpl .Values.database.secrets.password.key . }} + {{- else if .Values.database.password }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-password + {{- else if .Values.postgresql.enabled }} + name: {{ .Release.Name }}-postgresql + key: postgresql-password + {{- end }} + {{- end }} + {{- if or .Values.database.secrets.url .Values.database.url }} + - name: JF_SHARED_DATABASE_URL + valueFrom: + secretKeyRef: + {{- if .Values.database.secrets.url }} + name: {{ tpl .Values.database.secrets.url.name . }} + key: {{ tpl .Values.database.secrets.url.key . }} + {{- else if .Values.database.url }} + {{- if not .Values.artifactory.unifiedSecretInstallation }} + name: {{ template "artifactory.fullname" . }}-database-creds + {{- else }} + name: "{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret" + {{- end }} + key: db-url + {{- end }} + {{- end }} +{{- with .Values.artifactory.extraEnvironmentVariables }} +{{ tpl (toYaml .) 
$ | indent 8 }} +{{- end }} + ports: + - containerPort: {{ .Values.artifactory.internalPort }} + name: http + - containerPort: {{ .Values.artifactory.internalArtifactoryPort }} + name: http-internal + - containerPort: {{ .Values.federation.internalPort }} + name: http-rtfs + {{- if .Values.artifactory.javaOpts.jmx.enabled }} + - containerPort: {{ .Values.artifactory.javaOpts.jmx.port }} + name: tcp-jmx + {{- end }} + {{- if .Values.artifactory.ssh.enabled }} + - containerPort: {{ .Values.artifactory.ssh.internalPort }} + name: tcp-ssh + {{- end }} + volumeMounts: + {{- if .Values.artifactory.customPersistentVolumeClaim }} + - name: {{ .Values.artifactory.customPersistentVolumeClaim.name }} + mountPath: {{ .Values.artifactory.customPersistentVolumeClaim.mountPath }} + {{- end }} + {{- if .Values.aws.licenseConfigSecretName }} + - name: awsmp-product-license + mountPath: "/var/run/secrets/product-license" + {{- end }} + {{- if .Values.artifactory.userPluginSecrets }} + - name: bootstrap-plugins + mountPath: "/artifactory_bootstrap/plugins/" + {{- range .Values.artifactory.userPluginSecrets }} + - name: {{ tpl . $ }} + mountPath: "/tmp/plugin/{{ tpl . 
$ }}" + {{- end }} + {{- end }} + - name: artifactory-volume + mountPath: {{ .Values.artifactory.persistence.mountPath | quote }} + + ######################## Artifactory config map ########################## + {{- if .Values.artifactory.configMapName }} + - name: bootstrap-config + mountPath: "/bootstrap/" + {{- end }} + + ######################## Artifactory persistence nfs ########################## + {{- if eq .Values.artifactory.persistence.type "nfs" }} + - name: artifactory-data + mountPath: "{{ .Values.artifactory.persistence.nfs.dataDir }}" + - name: artifactory-backup + mountPath: "{{ .Values.artifactory.persistence.nfs.backupDir }}" + {{- else }} + + ######################## Artifactory persistence binarystoreXml ########################## + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.customBinarystoreXmlSecret }} + - name: binarystore-xml + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/tmp/etc/artifactory/binarystore.xml" + subPath: binarystore.xml + + ######################## Artifactory persistence googleStorage ########################## + {{- if .Values.artifactory.persistence.googleStorage.gcpServiceAccount.enabled }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.persistence.googleStorage.gcpServiceAccount.customSecretName }} + - name: gcpcreds-json + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . 
}} + {{- end }} + mountPath: "/artifactory_bootstrap/gcp.credentials.json" + subPath: gcp.credentials.json + {{- end }} + {{- end }} + + ######################## Artifactory license ########################## + {{- if or .Values.artifactory.license.secret .Values.artifactory.license.licenseKey }} + {{- if or (not .Values.artifactory.unifiedSecretInstallation) .Values.artifactory.license.secret }} + - name: artifactory-license + {{- else }} + - name: {{ include "artifactory.unifiedCustomSecretVolumeName" . }} + {{- end }} + mountPath: "/artifactory_bootstrap/artifactory.cluster.license" + {{- if .Values.artifactory.license.secret }} + subPath: {{ .Values.artifactory.license.dataKey }} + {{- else if .Values.artifactory.license.licenseKey }} + subPath: artifactory.lic + {{- end }} + {{- end }} + + - name: installer-info + mountPath: "/artifactory_bootstrap/info/installer-info.json" + subPath: installer-info.json + {{- if or .Values.artifactory.customVolumeMounts .Values.global.customVolumeMounts }} +{{ tpl (include "artifactory.customVolumeMounts" .) . | indent 8 }} + {{- end }} + {{- if .Values.artifactory.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.artifactory.startupProbe.config . 
| indent 10 }} + {{- end }} + {{- if and (not .Values.splitServicesToContainers) (semverCompare "=1.18.0-0" .Capabilities.KubeVersion.GitVersion) }} + ingressClassName: {{ .Values.ingress.className }} + {{- end }} + {{- if .Values.ingress.defaultBackend.enabled }} + {{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1" }} + defaultBackend: + service: + name: {{ $serviceName }} + port: + number: {{ $servicePort }} + {{- else }} + backend: + serviceName: {{ $serviceName }} + servicePort: {{ $servicePort }} + {{- end }} + {{- end }} + rules: +{{- if .Values.ingress.hosts }} + {{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1" }} + {{- range $host := .Values.ingress.hosts }} + - host: {{ $host | quote }} + http: + paths: + - path: {{ $.Values.ingress.routerPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $servicePort }} + {{- if not $.Values.ingress.disableRouterBypass }} + - path: {{ $.Values.ingress.artifactoryPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $artifactoryServicePort }} + {{- end }} + {{- if and $.Values.federation.enabled (not (regexMatch "^.*(oss|cpp-ce|jcr).*$" $.Values.artifactory.image.repository)) }} + - path: {{ $.Values.ingress.rtfsPath }} + pathType: ImplementationSpecific + backend: + service: + name: {{ $serviceName }} + port: + number: {{ $.Values.federation.internalPort }} + {{- end }} + {{- end }} + {{- else }} + {{- range $host := .Values.ingress.hosts }} + - host: {{ $host | quote }} + http: + paths: + - path: {{ $.Values.ingress.routerPath }} + backend: + serviceName: {{ $serviceName }} + servicePort: {{ $servicePort }} + {{- if not $.Values.ingress.disableRouterBypass }} + - path: {{ $.Values.ingress.artifactoryPath }} + backend: + serviceName: {{ $serviceName }} + servicePort: {{ $artifactoryServicePort }} + {{- end }} + {{- end }} + {{- end }} +{{- end -}} + {{- with 
.Values.ingress.additionalRules }} +{{ tpl . $ | indent 2 }} + {{- end }} + + {{- if .Values.ingress.tls }} + tls: +{{ toYaml .Values.ingress.tls | indent 4 }} + {{- end -}} + +{{- if .Values.customIngress }} +--- +{{ .Values.customIngress | toYaml | trimSuffix "\n" }} +{{- end -}} +{{- end -}} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/logger-configmap.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/logger-configmap.yaml new file mode 100644 index 0000000000..41a078b024 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/logger-configmap.yaml @@ -0,0 +1,63 @@ +{{- if or .Values.artifactory.loggers .Values.artifactory.catalinaLoggers }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory.fullname" . }}-logger + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + tail-log.sh: | + #!/bin/sh + + LOG_DIR=$1 + LOG_NAME=$2 + PID= + + # Wait for log dir to appear + while [ ! -d ${LOG_DIR} ]; do + sleep 1 + done + + cd ${LOG_DIR} + + LOG_PREFIX=$(echo ${LOG_NAME} | sed 's/.log$//g') + + # Find the log to tail + LOG_FILE=$(ls -1t ./${LOG_PREFIX}.log 2>/dev/null) + + # Wait for the log file + while [ -z "${LOG_FILE}" ]; do + sleep 1 + LOG_FILE=$(ls -1t ./${LOG_PREFIX}.log 2>/dev/null) + done + + echo "Log file ${LOG_FILE} is ready!" + + # Get inode number + INODE_ID=$(ls -i ${LOG_FILE}) + + # echo "Tailing ${LOG_FILE}" + tail -F ${LOG_FILE} & + PID=$! + + # Loop forever to see if a new log was created + while true; do + # Check inode number + NEW_INODE_ID=$(ls -i ${LOG_FILE}) + + # If inode number changed, this means log was rotated and need to start a new tail + if [ "${INODE_ID}" != "${NEW_INODE_ID}" ]; then + kill -9 ${PID} 2>/dev/null + INODE_ID="${NEW_INODE_ID}" + + # Start a new tail + tail -F ${LOG_FILE} & + PID=$! 
+ fi + sleep 1 + done + +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-artifactory-conf.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-artifactory-conf.yaml new file mode 100644 index 0000000000..3434489943 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-artifactory-conf.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.nginx.customArtifactoryConfigMap) .Values.nginx.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory.fullname" . }}-nginx-artifactory-conf + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + artifactory.conf: | +{{- if .Values.nginx.artifactoryConf }} +{{ tpl .Values.nginx.artifactoryConf . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/nginx-artifactory-conf.yaml" ) . | indent 4 }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-certificate-secret.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-certificate-secret.yaml new file mode 100644 index 0000000000..f13d401747 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-certificate-secret.yaml @@ -0,0 +1,14 @@ +{{- if and (not .Values.nginx.tlsSecretName) .Values.nginx.enabled .Values.nginx.https.enabled }} +apiVersion: v1 +kind: Secret +type: kubernetes.io/tls +metadata: + name: {{ template "artifactory.fullname" . }}-nginx-certificate + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: +{{ ( include "artifactory.gen-certs" . 
) | indent 2 }} +{{- end }} diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-conf.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-conf.yaml new file mode 100644 index 0000000000..31219d58a9 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-conf.yaml @@ -0,0 +1,18 @@ +{{- if and (not .Values.nginx.customConfigMap) .Values.nginx.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "artifactory.fullname" . }}-nginx-conf + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +data: + nginx.conf: | +{{- if .Values.nginx.mainConf }} +{{ tpl .Values.nginx.mainConf . | indent 4 }} +{{- else }} +{{ tpl ( .Files.Get "files/nginx-main-conf.yaml" ) . | indent 4 }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-deployment.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-deployment.yaml new file mode 100644 index 0000000000..774bedccaf --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-deployment.yaml @@ -0,0 +1,223 @@ +{{- if .Values.nginx.enabled -}} +{{- $serviceName := include "artifactory.fullname" . -}} +{{- $servicePort := .Values.artifactory.externalPort -}} +apiVersion: apps/v1 +kind: {{ .Values.nginx.kind }} +metadata: + name: {{ template "artifactory.nginx.fullname" . }} + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} + component: {{ .Values.nginx.name }} +{{- if .Values.nginx.labels }} +{{ toYaml .Values.nginx.labels | indent 4 }} +{{- end }} +{{- with .Values.nginx.deployment.annotations }} + annotations: +{{ toYaml . 
| indent 4 }} +{{- end }} +spec: +{{- if eq .Values.nginx.kind "StatefulSet" }} + serviceName: {{ template "artifactory.nginx.fullname" . }} +{{- end }} +{{- if ne .Values.nginx.kind "DaemonSet" }} + replicas: {{ .Values.nginx.replicaCount }} +{{- end }} + selector: + matchLabels: + app: {{ template "artifactory.name" . }} + release: {{ .Release.Name }} + component: {{ .Values.nginx.name }} + template: + metadata: + annotations: + checksum/nginx-conf: {{ include (print $.Template.BasePath "/nginx-conf.yaml") . | sha256sum }} + checksum/nginx-artifactory-conf: {{ include (print $.Template.BasePath "/nginx-artifactory-conf.yaml") . | sha256sum }} + {{- range $key, $value := .Values.nginx.annotations }} + {{ $key }}: {{ $value | quote }} + {{- end }} + labels: + app: {{ template "artifactory.name" . }} + chart: {{ template "artifactory.chart" . }} + component: {{ .Values.nginx.name }} + heritage: {{ .Release.Service }} + release: {{ .Release.Name }} +{{- if .Values.nginx.labels }} +{{ toYaml .Values.nginx.labels | indent 8 }} +{{- end }} + spec: + {{- if .Values.nginx.podSecurityContext.enabled }} + securityContext: {{- omit .Values.nginx.podSecurityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + serviceAccountName: {{ template "artifactory.serviceAccountName" . }} + terminationGracePeriodSeconds: {{ .Values.nginx.terminationGracePeriodSeconds }} + {{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }} +{{- include "artifactory.imagePullSecrets" . | indent 6 }} + {{- end }} + {{- if .Values.nginx.priorityClassName }} + priorityClassName: {{ .Values.nginx.priorityClassName | quote }} + {{- end }} + {{- if .Values.nginx.topologySpreadConstraints }} + topologySpreadConstraints: +{{ tpl (toYaml .Values.nginx.topologySpreadConstraints) . | indent 8 }} + {{- end }} + initContainers: + {{- if .Values.nginx.customInitContainers }} +{{ tpl (include "artifactory.nginx.customInitContainers" .) . 
| indent 6 }} + {{- end }} + - name: "setup" + image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + {{- if .Values.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + command: + - '/bin/sh' + - '-c' + - > + rm -rfv {{ .Values.nginx.persistence.mountPath }}/lost+found; + mkdir -p {{ .Values.nginx.persistence.mountPath }}/logs; + resources: + {{- toYaml .Values.initContainers.resources | nindent 10 }} + volumeMounts: + - mountPath: {{ .Values.nginx.persistence.mountPath | quote }} + name: nginx-volume + containers: + - name: {{ .Values.nginx.name }} + image: {{ include "artifactory.getImageInfoByValue" (list . "nginx") }} + imagePullPolicy: {{ .Values.nginx.image.pullPolicy }} + {{- if .Values.nginx.containerSecurityContext.enabled }} + securityContext: {{- omit .Values.nginx.containerSecurityContext "enabled" | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.nginx.customCommand }} + command: +{{- tpl (include "nginx.command" .) . 
| indent 10 }} + {{- end }} + ports: +{{ if .Values.nginx.customPorts }} +{{ toYaml .Values.nginx.customPorts | indent 8 }} +{{ end }} + # DEPRECATION NOTE: The following is to maintain support for values pre 1.3.1 and + # will be cleaned up in a later version + {{- if .Values.nginx.http }} + {{- if .Values.nginx.http.enabled }} + - containerPort: {{ .Values.nginx.http.internalPort }} + name: http + {{- end }} + {{- else }} # DEPRECATED + - containerPort: {{ .Values.nginx.internalPortHttp }} + name: http-internal + {{- end }} + {{- if .Values.nginx.https }} + {{- if .Values.nginx.https.enabled }} + - containerPort: {{ .Values.nginx.https.internalPort }} + name: https + {{- end }} + {{- else }} # DEPRECATED + - containerPort: {{ .Values.nginx.internalPortHttps }} + name: https-internal + {{- end }} + {{- if .Values.artifactory.ssh.enabled }} + - containerPort: {{ .Values.nginx.ssh.internalPort }} + name: tcp-ssh + {{- end }} + {{- with .Values.nginx.lifecycle }} + lifecycle: +{{ toYaml . | indent 10 }} + {{- end }} + volumeMounts: + - name: nginx-conf + mountPath: /etc/nginx/nginx.conf + subPath: nginx.conf + - name: nginx-artifactory-conf + mountPath: "{{ .Values.nginx.persistence.mountPath }}/conf.d/" + - name: nginx-volume + mountPath: {{ .Values.nginx.persistence.mountPath | quote }} + {{- if .Values.nginx.https.enabled }} + - name: ssl-certificates + mountPath: "{{ .Values.nginx.persistence.mountPath }}/ssl" + {{- end }} + {{- if .Values.nginx.customVolumeMounts }} +{{ tpl (include "artifactory.nginx.customVolumeMounts" .) . | indent 8 }} + {{- end }} + resources: +{{ toYaml .Values.nginx.resources | indent 10 }} + {{- if .Values.nginx.startupProbe.enabled }} + startupProbe: +{{ tpl .Values.nginx.startupProbe.config . | indent 10 }} + {{- end }} + {{- if .Values.nginx.readinessProbe.enabled }} + readinessProbe: +{{ tpl .Values.nginx.readinessProbe.config . 
| indent 10 }} + {{- end }} + {{- if .Values.nginx.livenessProbe.enabled }} + livenessProbe: +{{ tpl .Values.nginx.livenessProbe.config . | indent 10 }} + {{- end }} + {{- $mountPath := .Values.nginx.persistence.mountPath }} + {{- range .Values.nginx.loggers }} + - name: {{ . | replace "_" "-" | replace "." "-" }} + image: {{ include "artifactory.getImageInfoByValue" (list $ "initContainers") }} + imagePullPolicy: {{ $.Values.initContainers.image.pullPolicy }} + command: + - tail + args: + - '-F' + - '{{ $mountPath }}/logs/{{ . }}' + volumeMounts: + - name: nginx-volume + mountPath: {{ $mountPath }} + resources: +{{ toYaml $.Values.nginx.loggersResources | indent 10 }} + {{- end }} + {{- if .Values.nginx.customSidecarContainers }} +{{ tpl (include "artifactory.nginx.customSidecarContainers" .) . | indent 6 }} + {{- end }} + {{- if or .Values.nginx.nodeSelector .Values.global.nodeSelector }} +{{ tpl (include "nginx.nodeSelector" .) . | indent 6 }} + {{- end }} + {{- with .Values.nginx.affinity }} + affinity: +{{ toYaml . | indent 8 }} + {{- end }} + {{- with .Values.nginx.tolerations }} + tolerations: +{{ toYaml . | indent 8 }} + {{- end }} + volumes: + {{- if .Values.nginx.customVolumes }} +{{ tpl (include "artifactory.nginx.customVolumes" .) . | indent 6 }} + {{- end }} + - name: nginx-conf + configMap: + {{- if .Values.nginx.customConfigMap }} + name: {{ .Values.nginx.customConfigMap }} + {{- else }} + name: {{ template "artifactory.fullname" . }}-nginx-conf + {{- end }} + - name: nginx-artifactory-conf + configMap: + {{- if .Values.nginx.customArtifactoryConfigMap }} + name: {{ .Values.nginx.customArtifactoryConfigMap }} + {{- else }} + name: {{ template "artifactory.fullname" . }}-nginx-artifactory-conf + {{- end }} + - name: nginx-volume + {{- if .Values.nginx.persistence.enabled }} + persistentVolumeClaim: + claimName: {{ .Values.nginx.persistence.existingClaim | default (include "artifactory.nginx.fullname" .) 
}} + {{- else }} + emptyDir: {} + {{- end }} + {{- if .Values.nginx.https.enabled }} + - name: ssl-certificates + secret: + {{- if .Values.nginx.tlsSecretName }} + secretName: {{ .Values.nginx.tlsSecretName }} + {{- else }} + secretName: {{ template "artifactory.fullname" . }}-nginx-certificate + {{- end }} + {{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-pdb.yaml b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-pdb.yaml new file mode 100644 index 0000000000..dff0c23a3e --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/charts/artifactory/templates/nginx-pdb.yaml @@ -0,0 +1,23 @@ +{{- if .Values.nginx.enabled -}} +{{- if semverCompare "; + # kubernetes.io/tls-acme: "true" + # nginx.ingress.kubernetes.io/proxy-body-size: "0" + labels: {} + # traffic-type: external + # traffic-type: internal + tls: [] + ## Secrets must be manually created in the namespace. + # - secretName: chart-example-tls + # hosts: + # - artifactory.domain.example + + ## Additional ingress rules + additionalRules: [] + ## This is an experimental feature, enabling this feature will route all traffic through the Router. 
+ disableRouterBypass: false +## Allows to add custom ingress +customIngress: "" +networkpolicy: [] +## Allows all ingress and egress +# - name: artifactory +# podSelector: +# matchLabels: +# app: artifactory +# egress: +# - {} +# ingress: +# - {} +## Uncomment to allow only artifactory pods to communicate with postgresql (if postgresql.enabled is true) +# - name: postgresql +# podSelector: +# matchLabels: +# app: postgresql +# ingress: +# - from: +# - podSelector: +# matchLabels: +# app: artifactory + +## Apply horizontal pod auto scaling on artifactory pods +## Ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/ +autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 3 + targetCPUUtilizationPercentage: 70 +## You can use a pre-existing secret with keys license_token and iam_role by specifying licenseConfigSecretName +## Example : Create a generic secret using `kubectl create secret generic --from-literal=license_token=${TOKEN} --from-literal=iam_role=${ROLE_ARN}` +aws: + license: + enabled: false + licenseConfigSecretName: + region: us-east-1 +## Container Security Context +## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container +## @param containerSecurityContext.enabled Enabled containers' Security Context +## @param containerSecurityContext.runAsNonRoot Set container's Security Context runAsNonRoot +## @param containerSecurityContext.privileged Set container's Security Context privileged +## @param containerSecurityContext.allowPrivilegeEscalation Set container's Security Context allowPrivilegeEscalation +## @param containerSecurityContext.capabilities.drop List of capabilities to be dropped +## @param containerSecurityContext.seccompProfile.type Set container's Security Context seccomp profile +## +containerSecurityContext: + enabled: true + runAsNonRoot: true + privileged: false + allowPrivilegeEscalation: false + seccompProfile: + type: RuntimeDefault + 
capabilities: + drop: + - ALL +## The following router settings are to configure only when splitServicesToContainers set to true +router: + name: router + image: + registry: releases-docker.jfrog.io + repository: jfrog/router + tag: 7.118.3 + pullPolicy: IfNotPresent + serviceRegistry: + ## Service registry (Access) TLS verification skipped if enabled + insecure: false + internalPort: 8082 + externalPort: 8082 + tlsEnabled: false + ## Extra environment variables that can be used to tune router to your needs. + ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: MY_ENV_VAR + # value: "" + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "1Gi" + # cpu: "1" + + ## Add lifecycle hooks for router container + lifecycle: + ## From Artifactory versions 7.52.x, Wait for Artifactory to complete any open uploads or downloads before terminating + preStop: + exec: + command: ["sh", "-c", "while [[ $(curl --fail --silent --connect-timeout 2 http://localhost:8081/artifactory/api/v1/system/liveness) =~ OK ]]; do echo Artifactory is still alive; sleep 2; done"] + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + ## Add custom volumesMounts + customVolumeMounts: | + # - name: custom-script + # mountPath: /scripts/script.sh + # subPath: script.sh + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} {{ include "artifactory.scheme" . }}://localhost:{{ .Values.router.internalPort }}/router/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " prepended. + unifiedSecretPrependReleaseName: true + ## For HA installation, set this value > 1. This is only supported in Artifactory 7.25.x (appVersions) and above. 
+ replicaCount: 1 + # minAvailable: 1 + + ## Note that by default we use appVersion to get image tag/version + image: + registry: releases-docker.jfrog.io + repository: jfrog/artifactory-pro + # tag: + pullPolicy: IfNotPresent + labels: {} + updateStrategy: + type: RollingUpdate + ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ + schedulerName: + ## Create a priority class for the Artifactory pod or use an existing one + ## NOTE - Maximum allowed value of a user defined priority is 1000000000 + priorityClass: + create: false + value: 1000000000 + ## Override default name + # name: + ## Use an existing priority class + # existingPriorityClass: + ## Spread Artifactory pods evenly across your nodes or some other topology + topologySpreadConstraints: [] + # - maxSkew: 1 + # topologyKey: kubernetes.io/hostname + # whenUnsatisfiable: DoNotSchedule + # labelSelector: + # matchLabels: + # app: '{{ template "artifactory.name" . }}' + # role: '{{ template "artifactory.name" . }}' + # release: "{{ .Release.Name }}" + + ## Delete the db.properties file in ARTIFACTORY_HOME/etc/db.properties + deleteDBPropertiesOnStartup: true + ## certificates added to this secret will be copied to $JFROG_HOME/artifactory/var/etc/security/keys/trusted directory + customCertificates: + enabled: false + # certificateSecretName: + database: + maxOpenConnections: 80 + tomcat: + maintenanceConnector: + port: 8091 + connector: + maxThreads: 200 + sendReasonPhrase: false + extraConfig: 'acceptCount="400"' + ## Support for metrics is only available for Artifactory 7.7.x (appVersions) and above. 
+ ## To enable set `.Values.artifactory.metrics.enabled` to `true` + ## Note: Deprecated openMetrics as part of 7.87.x and renamed to `metrics` + ## Refer - https://www.jfrog.com/confluence/display/JFROG/Open+Metrics + metrics: + enabled: false + ## Settings for pushing metrics to Insight - enable filebeat to true + filebeat: + enabled: false + log: + enabled: false + ## Log level for filebeat. Possible values: debug, info, warning, or error. + level: "info" + ## Elasticsearch details for filebeat to connect + elasticsearch: + url: "Elasticsearch url where JFrog Insight is installed. For example, http://:8082" + username: "" + password: "" + ## Support for Cold Artifact Storage + ## set 'coldStorage.enabled' to 'true' only for the Artifactory instance that you are designating as the Cold instance + ## Refer - https://jfrog.com/help/r/jfrog-platform-administration-documentation/setting-up-cold-artifact-storage + coldStorage: + enabled: false + ## This directory is intended for use with NFS eventual configuration for HA + haDataDir: + enabled: false + path: + haBackupDir: + enabled: false + path: + ## Files to copy to ARTIFACTORY_HOME/ on each Artifactory startup + ## Note: From 107.46.x chart versions, copyOnEveryStartup is not needed for binarystore.xml, it is always copied via initContainers + copyOnEveryStartup: + ## Absolute path + # - source: /artifactory_bootstrap/artifactory.lic + ## Relative to ARTIFACTORY_HOME/ + # target: etc/artifactory/ + + ## Sidecar containers for tailing Artifactory logs + loggers: [] + # - access-audit.log + # - access-request.log + # - access-security-audit.log + # - access-service.log + # - artifactory-access.log + # - artifactory-event.log + # - artifactory-import-export.log + # - artifactory-request.log + # - artifactory-service.log + # - frontend-request.log + # - frontend-service.log + # - metadata-request.log + # - metadata-service.log + # - router-request.log + # - router-service.log + # - router-traefik.log + # - derby.log + + 
## Loggers containers resources + loggersResources: {} + # requests: + # memory: "10Mi" + # cpu: "10m" + # limits: + # memory: "100Mi" + # cpu: "50m" + + ## Sidecar containers for tailing Tomcat (catalina) logs + catalinaLoggers: [] + # - tomcat-catalina.log + # - tomcat-localhost.log + + ## Tomcat (catalina) loggers resources + catalinaLoggersResources: {} + # requests: + # memory: "10Mi" + # cpu: "10m" + # limits: + # memory: "100Mi" + # cpu: "50m" + + ## Migration support from 6.x to 7.x + migration: + enabled: false + timeoutSeconds: 3600 + ## Extra pre-start command in migration Init Container to install JDBC driver for MySql/MariaDb/Oracle + # preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && curl -o /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar" + ## Add custom init containers execution before predefined init containers + customInitContainersBegin: | + # - name: "custom-setup" + # image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'touch {{ .Values.artifactory.persistence.mountPath }}/example-custom-setup' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: artifactory-volume + ## Add custom init containers execution after predefined init containers + customInitContainers: | + # - name: "custom-systemyaml-setup" + # image: {{ include "artifactory.getImageInfoByValue" (list . 
"initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'curl -o {{ .Values.artifactory.persistence.mountPath }}/etc/system.yaml https:///systemyaml' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: artifactory-volume + ## Add custom sidecar containers + ## - The provided example uses a custom volume (customVolumes) + customSidecarContainers: | + # - name: "sidecar-list-etc" + # image: {{ include "artifactory.getImageInfoByValue" (list . "initContainers") }} + # imagePullPolicy: {{ .Values.initContainers.image.pullPolicy }} + # securityContext: + # runAsNonRoot: true + # allowPrivilegeEscalation: false + # capabilities: + # drop: + # - NET_RAW + # command: + # - 'sh' + # - '-c' + # - 'sh /scripts/script.sh' + # volumeMounts: + # - mountPath: "{{ .Values.artifactory.persistence.mountPath }}" + # name: artifactory-volume + # - mountPath: "/scripts/script.sh" + # name: custom-script + # subPath: script.sh + # resources: + # requests: + # memory: "32Mi" + # cpu: "50m" + # limits: + # memory: "128Mi" + # cpu: "100m" + ## Add custom volumes + ## If .Values.artifactory.unifiedSecretInstallation is true then secret name should be '{{ template "artifactory.unifiedSecretPrependReleaseName" . 
}}-unified-secret' + customVolumes: | + # - name: custom-script + # configMap: + # name: custom-script + ## Add custom volumeMounts + customVolumeMounts: | + # - name: custom-script + # mountPath: "/scripts/script.sh" + # subPath: script.sh + # - name: posthook-start + # mountPath: "/scripts/posthook-start.sh" + # subPath: posthook-start.sh + # - name: prehook-start + # mountPath: "/scripts/prehook-start.sh" + # subPath: prehook-start.sh + ## Add custom persistent volume mounts - Available to the entire namespace + customPersistentVolumeClaim: {} + # name: + # mountPath: + # accessModes: + # - "-" + # size: + # storageClassName: + + ## Artifactory license. + license: + ## licenseKey is the license key in plain text. Use either this or the license.secret setting + licenseKey: + ## If artifactory.license.secret is passed, it will be mounted as + ## ARTIFACTORY_HOME/etc/artifactory.lic and loaded at run time. + secret: + ## The dataKey should be the name of the secret data key created. + dataKey: + ## Create configMap with artifactory.config.import.xml and security.import.xml and pass name of configMap in following parameter + configMapName: + ## Add any list of configmaps to Artifactory + configMaps: | + # posthook-start.sh: |- + # echo "This is a post start script" + # posthook-end.sh: |- + # echo "This is a post end script" + ## List of secrets for Artifactory user plugins. + ## One Secret per plugin's files. + userPluginSecrets: + # - archive-old-artifacts + # - build-cleanup + # - webhook + # - '{{ template "my-chart.fullname" . }}' + + ## Artifactory requires a unique master key. + ## You can generate one with the command: "openssl rand -hex 32" + ## An initial one is auto-generated by Artifactory on first startup. 
+ # masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF + ## Alternatively, you can use a pre-existing secret with a key called master-key by specifying masterKeySecretName + # masterKeySecretName: + + ## Join Key to connect other services to Artifactory + ## IMPORTANT: Setting this value overrides the existing joinKey + ## IMPORTANT: You should NOT use the example joinKey for a production deployment! + # joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE + ## Alternatively, you can use a pre-existing secret with a key called join-key by specifying joinKeySecretName + # joinKeySecretName: + + ## Registration Token for JFConnect + # jfConnectToken: + ## Alternatively, you can use a pre-existing secret with a key called jfconnect-token by specifying jfConnectTokenSecretName + # jfConnectTokenSecretName: + + ## Add custom secrets - secret per file + ## If .Values.artifactory.unifiedSecretInstallation is true then secret name should be '{{ template "artifactory.unifiedSecretPrependReleaseName" . }}-unified-secret' common to all secrets + customSecrets: + # - name: custom-secret + # key: custom-secret.yaml + # data: > + # custom_secret_config: + # parameter1: value1 + # parameter2: value2 + # - name: custom-secret2 + # key: custom-secret2.config + # data: | + # here the custom secret 2 config + + ## If false, all service console logs will not redirect to a common console.log + consoleLog: false + ## admin allows setting the password for the default admin user. 
+ ## See: https://www.jfrog.com/confluence/display/JFROG/Users+and+Groups#UsersandGroups-RecreatingtheDefaultAdminUserrecreate + admin: + ip: "127.0.0.1" + username: "admin" + password: + secret: + dataKey: + ## Extra pre-start command to install JDBC driver for MySql/MariaDb/Oracle + # preStartCommand: "mkdir -p /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib; cd /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib && curl -o /opt/jfrog/artifactory/var/bootstrap/artifactory/tomcat/lib/mysql-connector-java-5.1.41.jar https://repo1.maven.org/maven2/mysql/mysql-connector-java/5.1.41/mysql-connector-java-5.1.41.jar" + + ## Add lifecycle hooks for artifactory container + lifecycle: {} + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + # preStop: + # exec: + # command: ["/bin/sh","-c","echo Hello from the preStop handler"] + + ## Extra environment variables that can be used to tune Artifactory to your needs. + ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: SERVER_XML_ARTIFACTORY_PORT + # value: "8081" + # - name: SERVER_XML_ARTIFACTORY_MAX_THREADS + # value: "200" + # - name: SERVER_XML_ACCESS_MAX_THREADS + # value: "50" + # - name: SERVER_XML_ARTIFACTORY_EXTRA_CONFIG + # value: "" + # - name: SERVER_XML_ACCESS_EXTRA_CONFIG + # value: "" + # - name: SERVER_XML_EXTRA_CONNECTOR + # value: "" + # - name: DB_POOL_MAX_ACTIVE + # value: "100" + # - name: DB_POOL_MAX_IDLE + # value: "10" + # - name: MY_SECRET_ENV_VAR + # valueFrom: + # secretKeyRef: + # name: my-secret-name + # key: my-secret-key + + ## System YAML entries now reside under files/system.yaml. + ## You can provide the specific values that you want to add or override under 'artifactory.extraSystemYaml'. + ## For example: + ## extraSystemYaml: + ## shared: + ## node: + ## id: my-instance + ## The entries provided under 'artifactory.extraSystemYaml' are merged with files/system.yaml to create the final system.yaml. 
+ ## If you have already provided system.yaml under 'artifactory.systemYaml', the values in that entry take precedence over files/system.yaml + ## You can modify specific entries with your own value under `artifactory.extraSystemYaml`. The values under extraSystemYaml override the values under 'artifactory.systemYaml' and files/system.yaml + extraSystemYaml: {} + ## systemYaml is intentionally commented and the previous content has been moved under files/system.yaml. + ## You have to add all the entries of the system.yaml file here, and it overrides the values in files/system.yaml. + # systemYaml: + annotations: {} + service: + name: artifactory + type: ClusterIP + ## @param service.ipFamilyPolicy Controller Service ipFamilyPolicy (optional, cloud specific) + ## This can be either SingleStack, PreferDualStack or RequireDualStack + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilyPolicy: "" + ## @param service.ipFamilies Controller Service ipFamilies (optional, cloud specific) + ## This can be either ["IPv4"], ["IPv6"], ["IPv4", "IPv6"] or ["IPv6", "IPv4"] + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilies: [] + ## For supporting whitelist on the Artifactory service (useful if setting service.type=LoadBalancer) + ## Set this to a list of IP CIDR ranges + ## Example: loadBalancerSourceRanges: ['10.10.10.5/32', '10.11.10.5/32'] + ## or pass from helm command line + ## Example: helm install ... --set nginx.service.loadBalancerSourceRanges='{10.10.10.5/32,10.11.10.5/32}' + loadBalancerSourceRanges: [] + annotations: {} + ## If the type is NodePort you can set a fixed port + # nodePort: 32082 + statefulset: + annotations: {} + ## IMPORTANT: If overriding artifactory.internalPort: + ## DO NOT use a port lower than 1024 as Artifactory runs as non-root and cannot bind to ports lower than 1024! 
+ externalPort: 8082 + internalPort: 8082 + externalArtifactoryPort: 8081 + internalArtifactoryPort: 8081 + terminationGracePeriodSeconds: 30 + ## Pod Security Context + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ + ## @param artifactory.podSecurityContext.enabled Enable security context + ## @param artifactory.podSecurityContext.runAsNonRoot Set pod's Security Context runAsNonRoot + ## @param artifactory.podSecurityContext.runAsUser User ID for the pod + ## @param artifactory.podSecurityContext.runASGroup Group ID for the pod + ## @param artifactory.podSecurityContext.fsGroup Group ID for the pod + ## + podSecurityContext: + enabled: true + runAsNonRoot: true + runAsUser: 1030 + runAsGroup: 1030 + fsGroup: 1030 + # fsGroupChangePolicy: "Always" + # seLinuxOptions: {} + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.artifactory.tomcat.maintenanceConnector.port }}/artifactory/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare "", + # "private_key_id": "?????", + # "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n", + # "client_email": "???@j.iam.gserviceaccount.com", + # "client_id": "???????", + # "auth_uri": "https://accounts.google.com/o/oauth2/auth", + # "token_uri": "https://oauth2.googleapis.com/token", + # "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs", + # "client_x509_cert_url": "https://www.googleapis.com/robot/v1....." + # } + endpoint: commondatastorage.googleapis.com + httpsOnly: false + ## Set a unique bucket name + bucketName: "artifactory-gcp" + ## GCP Bucket Authentication with Identity and Credential is deprecated. 
+ ## identity: + ## credential: + path: "artifactory/filestore" + bucketExists: false + useInstanceCredentials: false + enableSignedUrlRedirect: false + ## For artifactory.persistence.type aws-s3-v3, s3-storage-v3-direct, cluster-s3-storage-v3, s3-storage-v3-archive + awsS3V3: + testConnection: false + identity: + credential: + region: + bucketName: artifactory-aws + path: artifactory/filestore + endpoint: + port: + useHttp: + maxConnections: 50 + connectionTimeout: + socketTimeout: + kmsServerSideEncryptionKeyId: + kmsKeyRegion: + kmsCryptoMode: + useInstanceCredentials: true + usePresigning: false + signatureExpirySeconds: 300 + signedUrlExpirySeconds: 30 + cloudFrontDomainName: + cloudFrontKeyPairId: + cloudFrontPrivateKey: + enableSignedUrlRedirect: false + enablePathStyleAccess: false + multiPartLimit: + multipartElementSize: + ## For artifactory.persistence.type azure-blob, azure-blob-storage-direct, cluster-azure-blob-storage, azure-blob-storage-v2-direct + azureBlob: + accountName: + accountKey: + endpoint: + containerName: + multiPartLimit: 100000000 + multipartElementSize: 50000000 + testConnection: false + ## artifactory data Persistent Volume Storage Class + ## If defined, storageClassName: + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + # storageClassName: "-" + ## Annotations for the Persistent Volume Claim + annotations: {} + ## Uncomment the following resources definitions or pass them from command line + ## to control the cpu and memory resources allocated by the Kubernetes cluster + resources: {} + # requests: + # memory: "1Gi" + # cpu: "500m" + # limits: + # memory: "2Gi" + # cpu: "1" + ## The following Java options are passed to the java process running Artifactory. 
+ ## You should set them according to the resources set above + javaOpts: + # xms: "1g" + # xmx: "2g" + jmx: + enabled: false + port: 9010 + host: + ssl: false + ## When authenticate is true, accessFile and passwordFile are required + authenticate: false + accessFile: + passwordFile: + # corePoolSize: 24 + # other: "" + nodeSelector: {} + tolerations: [] + affinity: {} + ## Only used if "affinity" is empty + podAntiAffinity: + ## Valid values are "soft" or "hard"; any other value indicates no anti-affinity + type: "soft" + topologyKey: "kubernetes.io/hostname" + ssh: + enabled: false + internalPort: 1339 + externalPort: 1339 +frontend: + name: frontend + enabled: true + internalPort: 8070 + ## Extra environment variables that can be used to tune frontend to your needs. + ## Uncomment and set value as needed + extraEnvironmentVariables: + # - name: MY_ENV_VAR + # value: "" + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "1Gi" + # cpu: "1" + + ## Add lifecycle hooks for frontend container + lifecycle: {} + # postStart: + # exec: + # command: ["/bin/sh", "-c", "echo Hello from the postStart handler"] + # preStop: + # exec: + # command: ["/bin/sh","-c","echo Hello from the preStop handler"] + + ## Session settings + session: + ## Time in minutes after which the frontend token will need to be refreshed + timeoutMinutes: '30' + ## The following settings are to configure the frequency of the liveness and startup probes when splitServicesToContainers set to true + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:{{ .Values.frontend.internalPort }}/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " --cert=ca.crt --key=ca.private.key` + # customCertificatesSecretName: + + ## When resetAccessCAKeys is true, Access will regenerate the CA certificate and matching private key + # resetAccessCAKeys: false + database: 
+ maxOpenConnections: 80 + tomcat: + connector: + maxThreads: 50 + sendReasonPhrase: false + extraConfig: 'acceptCount="100"' + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl --fail --max-time {{ .Values.probes.timeoutSeconds }} http://localhost:8040/access/api/v1/system/liveness + initialDelaySeconds: {{ if semverCompare " /var/opt/jfrog/nginx/message"] + # preStop: + # exec: + # command: ["/bin/sh","-c","nginx -s quit; while killall -0 nginx; do sleep 1; done"] + + ## Sidecar containers for tailing Nginx logs + loggers: [] + # - access.log + # - error.log + + ## Loggers containers resources + loggersResources: {} + # requests: + # memory: "64Mi" + # cpu: "25m" + # limits: + # memory: "128Mi" + # cpu: "50m" + + ## Logs options + logs: + stderr: false + stdout: false + level: warn + ## A list of custom ports to expose on the NGINX pod. Follows the conventional Kubernetes yaml syntax for container ports. + customPorts: [] + # - containerPort: 8066 + # name: docker + + ## The nginx main conf was moved to files/nginx-main-conf.yaml. This key is commented out to keep support for the old configuration + # mainConf: | + + ## The nginx artifactory conf was moved to files/nginx-artifactory-conf.yaml. This key is commented out to keep support for the old configuration + # artifactoryConf: | + customInitContainers: "" + customSidecarContainers: "" + customVolumes: "" + customVolumeMounts: "" + customCommand: + ## allows overwriting the command for the nginx container. 
+ ## defaults to [ 'nginx', '-g', 'daemon off;' ] + + service: + ## For minikube, set this to NodePort, elsewhere use LoadBalancer + type: LoadBalancer + ssloffload: false + ## @param service.ipFamilyPolicy Controller Service ipFamilyPolicy (optional, cloud specific) + ## This can be either SingleStack, PreferDualStack or RequireDualStack + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilyPolicy: "" + ## @param service.ipFamilies Controller Service ipFamilies (optional, cloud specific) + ## This can be either ["IPv4"], ["IPv6"], ["IPv4", "IPv6"] or ["IPv6", "IPv4"] + ## ref: https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services + ## + ipFamilies: [] + ## For supporting whitelist on the Nginx LoadBalancer service + ## Set this to a list of IP CIDR ranges + ## Example: loadBalancerSourceRanges: ['10.10.10.5/32', '10.11.10.5/32'] + ## or pass from helm command line + ## Example: helm install ... --set nginx.service.loadBalancerSourceRanges='{10.10.10.5/32,10.11.10.5/32}' + loadBalancerSourceRanges: [] + annotations: {} + ## Provide static ip address + loadBalancerIP: + ## There are two available options: "Cluster" (default) and "Local". + externalTrafficPolicy: Cluster + ## If the type is NodePort you can set a fixed port + # nodePort: 32082 + ## A list of custom ports to be exposed on nginx service. Follows the conventional Kubernetes yaml syntax for service ports. 
+ customPorts: [] + # - port: 8066 + # targetPort: 8066 + # protocol: TCP + # name: docker + ## Renamed nginx internalPort 80,443 to 8080,8443 to support openshift + http: + enabled: true + externalPort: 80 + internalPort: 8080 + https: + enabled: true + externalPort: 443 + internalPort: 8443 + ssh: + internalPort: 1339 + externalPort: 1339 + ## DEPRECATED: The following will be removed in a future release + # externalPortHttp: 8080 + # internalPortHttp: 8080 + # externalPortHttps: 8443 + # internalPortHttps: 8443 + + ## The following settings are to configure the frequency of the liveness and readiness probes. + livenessProbe: + enabled: true + config: | + exec: + command: + - sh + - -c + - curl -s -k --fail --max-time {{ .Values.probes.timeoutSeconds }} {{ include "nginx.scheme" . }}://localhost:{{ include "nginx.port" . }}/ + initialDelaySeconds: {{ if semverCompare " + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + # storageClassName: "-" + resources: {} + # requests: + # memory: "250Mi" + # cpu: "100m" + # limits: + # memory: "250Mi" + # cpu: "500m" + nodeSelector: {} + tolerations: [] + affinity: {} +## Database configurations +## Use the wait-for-db init container. 
Set to false to skip +waitForDatabase: true +## Configuration values for the PostgreSQL dependency sub-chart +## ref: https://github.com/bitnami/charts/blob/master/bitnami/postgresql/README.md +postgresql: + enabled: true + image: + registry: releases-docker.jfrog.io + repository: bitnami/postgresql + tag: 15.6.0-debian-11-r16 + postgresqlUsername: artifactory + postgresqlPassword: "" + postgresqlDatabase: artifactory + postgresqlExtendedConf: + listenAddresses: "*" + maxConnections: "1500" + persistence: + enabled: true + size: 200Gi + # existingClaim: + service: + port: 5432 + primary: + nodeSelector: {} + affinity: {} + tolerations: [] + readReplicas: + nodeSelector: {} + affinity: {} + tolerations: [] + resources: {} + securityContext: + enabled: true + containerSecurityContext: + enabled: true + # requests: + # memory: "512Mi" + # cpu: "100m" + # limits: + # memory: "1Gi" + # cpu: "500m" +## If NOT using the PostgreSQL in this chart (postgresql.enabled=false), +## specify custom database details here or leave empty and Artifactory will use embedded derby +database: + ## To run Artifactory with any database other than PostgreSQL allowNonPostgresql set to true. + allowNonPostgresql: false + type: + driver: + ## If you set the url, leave host and port empty + url: + ## If you would like this chart to create the secret containing the db + ## password, use these values + user: + password: + ## If you have existing Kubernetes secrets containing db credentials, use + ## these values + secrets: {} + # user: + # name: "rds-artifactory" + # key: "db-user" + # password: + # name: "rds-artifactory" + # key: "db-password" + # url: + # name: "rds-artifactory" + # key: "db-url" +## Filebeat Sidecar container +## The provided filebeat configuration is for Artifactory logs. It assumes you have a logstash installed and configured properly. 
+filebeat: + enabled: false + name: artifactory-filebeat + image: + repository: "docker.elastic.co/beats/filebeat" + version: 7.16.2 + logstashUrl: "logstash:5044" + livenessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + curl --fail 127.0.0.1:5066 + failureThreshold: 3 + initialDelaySeconds: 10 + periodSeconds: 10 + timeoutSeconds: 5 + readinessProbe: + exec: + command: + - sh + - -c + - | + #!/usr/bin/env bash -e + filebeat test output + failureThreshold: 3 + initialDelaySeconds: 10 + periodSeconds: 10 + timeoutSeconds: 5 + resources: {} + # requests: + # memory: "100Mi" + # cpu: "100m" + # limits: + # memory: "100Mi" + # cpu: "100m" + + filebeatYml: | + logging.level: info + path.data: {{ .Values.artifactory.persistence.mountPath }}/log/filebeat + name: artifactory-filebeat + queue.spool: + file: + permissions: 0760 + filebeat.inputs: + - type: log + enabled: true + close_eof: ${CLOSE:false} + paths: + - {{ .Values.artifactory.persistence.mountPath }}/log/*.log + fields: + service: "jfrt" + log_type: "artifactory" + output: + logstash: + hosts: ["{{ .Values.filebeat.logstashUrl }}"] +## Allows to add additional kubernetes resources +## Use --- as a separator between multiple resources +## For an example, refer - https://github.com/jfrog/log-analytics-prometheus/blob/master/helm/artifactory-values.yaml +additionalResources: "" +## Adding entries to a Pod's /etc/hosts file +## For an example, refer - https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases +hostAliases: [] +# - ip: "127.0.0.1" +# hostnames: +# - "foo.local" +# - "bar.local" +# - ip: "10.1.2.3" +# hostnames: +# - "foo.remote" +# - "bar.remote" + +## Toggling this feature is seamless and requires helm upgrade +## will enable all microservices to run in different containers in a single pod (by default it is true) +splitServicesToContainers: true +## Specify common probes parameters +probes: + timeoutSeconds: 5 diff --git 
a/charts/jfrog/artifactory-jcr/107.90.14/ci/default-values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/ci/default-values.yaml new file mode 100644 index 0000000000..86355d3b3c --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/ci/default-values.yaml @@ -0,0 +1,7 @@ +# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. +artifactory: + databaseUpgradeReady: true + + # To fix ct tool --reuse-values - PASSWORDS ERROR: you must provide your current passwords when upgrading the release + postgresql: + postgresqlPassword: password diff --git a/charts/jfrog/artifactory-jcr/107.90.14/logo/jcr-logo.png b/charts/jfrog/artifactory-jcr/107.90.14/logo/jcr-logo.png new file mode 100644 index 0000000000..b1e312e321 Binary files /dev/null and b/charts/jfrog/artifactory-jcr/107.90.14/logo/jcr-logo.png differ diff --git a/charts/jfrog/artifactory-jcr/107.90.14/questions.yml b/charts/jfrog/artifactory-jcr/107.90.14/questions.yml new file mode 100644 index 0000000000..9cde428709 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/questions.yml @@ -0,0 +1,271 @@ +questions: +# Advanced Settings +- variable: artifactory.artifactory.masterKey + default: "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF" + description: "Artifactory master key. 
For security reasons, we strongly recommend you generate your own master key using this command: 'openssl rand -hex 32'" + type: string + label: Artifactory master key + group: "Security Settings" + +# Container Images +- variable: defaultImage + default: true + description: "Use default Docker image" + label: Use Default Image + type: boolean + show_subquestion_if: false + group: "Container Images" + subquestions: + - variable: artifactory.artifactory.image.repository + default: "docker.bintray.io/jfrog/artifactory-jcr" + description: "JFrog Container Registry image name" + type: string + label: JFrog Container Registry Image Name + - variable: artifactory.artifactory.image.version + default: "7.6.3" + description: "JFrog Container Registry image tag" + type: string + label: JFrog Container Registry Image Tag + - variable: artifactory.imagePullSecrets + description: "Image Pull Secret" + type: string + label: Image Pull Secret + +# Services and LoadBalancing Settings +- variable: artifactory.ingress.enabled + default: false + description: "Expose app using Layer 7 Load Balancer - ingress" + type: boolean + label: Expose app using Layer 7 Load Balancer + show_subquestion_if: true + group: "Services and Load Balancing" + required: true + subquestions: + - variable: artifactory.ingress.hosts[0] + default: "xip.io" + description: "Hostname to your artifactory installation" + type: hostname + required: true + label: Hostname + +# Nginx Settings +- variable: artifactory.nginx.enabled + default: true + description: "Enable nginx server" + type: boolean + label: Enable Nginx Server + group: "Services and Load Balancing" + required: true + show_if: "artifactory.ingress.enabled=false" +- variable: artifactory.nginx.service.type + default: "LoadBalancer" + description: "Nginx service type" + type: enum + required: true + label: Nginx Service Type + show_if: "artifactory.nginx.enabled=true&&artifactory.ingress.enabled=false" + group: "Services and Load Balancing" + options: + 
- "ClusterIP" + - "NodePort" + - "LoadBalancer" +- variable: artifactory.nginx.service.loadBalancerIP + default: "" + description: "Provide Static IP to configure with Nginx" + type: string + label: Config Nginx LoadBalancer IP + show_if: "artifactory.nginx.enabled=true&&artifactory.nginx.service.type=LoadBalancer&&artifactory.ingress.enabled=false" + group: "Services and Load Balancing" +- variable: artifactory.nginx.tlsSecretName + default: "" + description: "Provide SSL Secret name to configure with Nginx" + type: string + label: Config Nginx SSL Secret + show_if: "artifactory.nginx.enabled=true&&artifactory.ingress.enabled=false" + group: "Services and Load Balancing" +- variable: artifactory.nginx.customArtifactoryConfigMap + default: "" + description: "Provide configMap name to configure Nginx with custom `artifactory.conf`" + type: string + label: ConfigMap for Nginx Artifactory Config + show_if: "artifactory.nginx.enabled=true&&artifactory.ingress.enabled=false" + group: "Services and Load Balancing" + +# Database Settings +- variable: artifactory.postgresql.enabled + default: true + description: "Enable PostgreSQL" + type: boolean + required: true + label: Enable PostgreSQL + group: "Database Settings" + show_subquestion_if: true + subquestions: + - variable: artifactory.postgresql.postgresqlPassword + default: "" + description: "PostgreSQL password" + type: password + required: true + label: PostgreSQL Password + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=true" + - variable: artifactory.postgresql.persistence.size + default: 20Gi + description: "PostgreSQL persistent volume size" + type: string + label: PostgreSQL Persistent Volume Size + show_if: "artifactory.postgresql.enabled=true" + - variable: artifactory.postgresql.persistence.storageClass + default: "" + description: "If undefined or null, uses the default StorageClass. 
Defaults to null" + type: storageclass + label: Default StorageClass for PostgreSQL + show_if: "artifactory.postgresql.enabled=true" + - variable: artifactory.postgresql.resources.requests.cpu + default: "200m" + description: "PostgreSQL initial cpu request" + type: string + label: PostgreSQL Initial CPU Request + show_if: "artifactory.postgresql.enabled=true" + - variable: artifactory.postgresql.resources.requests.memory + default: "500Mi" + description: "PostgreSQL initial memory request" + type: string + label: PostgreSQL Initial Memory Request + show_if: "artifactory.postgresql.enabled=true" + - variable: artifactory.postgresql.resources.limits.cpu + default: "1" + description: "PostgreSQL cpu limit" + type: string + label: PostgreSQL CPU Limit + show_if: "artifactory.postgresql.enabled=true" + - variable: artifactory.postgresql.resources.limits.memory + default: "1Gi" + description: "PostgreSQL memory limit" + type: string + label: PostgreSQL Memory Limit + show_if: "artifactory.postgresql.enabled=true" +- variable: artifactory.database.type + default: "postgresql" + description: "External database type (postgresql, mysql, oracle or mssql)" + type: enum + required: true + label: External Database Type + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" + options: + - "postgresql" + - "mysql" + - "oracle" + - "mssql" +- variable: artifactory.database.url + default: "" + description: "External database URL. 
If you set the url, leave host and port empty" + type: string + label: External Database URL + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" +- variable: artifactory.database.host + default: "" + description: "External database hostname" + type: string + label: External Database Hostname + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" +- variable: artifactory.database.port + default: "" + description: "External database port" + type: string + label: External Database Port + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" +- variable: artifactory.database.user + default: "" + description: "External database username" + type: string + label: External Database Username + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" +- variable: artifactory.database.password + default: "" + description: "External database password" + type: password + label: External Database Password + group: "Database Settings" + show_if: "artifactory.postgresql.enabled=false" + +# Advanced Settings +- variable: artifactory.advancedOptions + default: false + description: "Show advanced configurations" + label: Show Advanced Configurations + type: boolean + show_subquestion_if: true + group: "Advanced Options" + subquestions: + - variable: artifactory.artifactory.primary.resources.requests.cpu + default: "500m" + description: "Artifactory primary node initial cpu request" + type: string + label: Artifactory Primary Node Initial CPU Request + - variable: artifactory.artifactory.primary.resources.requests.memory + default: "1Gi" + description: "Artifactory primary node initial memory request" + type: string + label: Artifactory Primary Node Initial Memory Request + - variable: artifactory.artifactory.primary.javaOpts.xms + default: "1g" + description: "Artifactory primary node java Xms size" + type: string + label: Artifactory Primary Node Java Xms Size + - variable: 
artifactory.artifactory.primary.resources.limits.cpu + default: "2" + description: "Artifactory primary node cpu limit" + type: string + label: Artifactory Primary Node CPU Limit + - variable: artifactory.artifactory.primary.resources.limits.memory + default: "4Gi" + description: "Artifactory primary node memory limit" + type: string + label: Artifactory Primary Node Memory Limit + - variable: artifactory.artifactory.primary.javaOpts.xmx + default: "4g" + description: "Artifactory primary node java Xmx size" + type: string + label: Artifactory Primary Node Java Xmx Size + - variable: artifactory.artifactory.node.resources.requests.cpu + default: "500m" + description: "Artifactory member node initial cpu request" + type: string + label: Artifactory Member Node Initial CPU Request + - variable: artifactory.artifactory.node.resources.requests.memory + default: "2Gi" + description: "Artifactory member node initial memory request" + type: string + label: Artifactory Member Node Initial Memory Request + - variable: artifactory.artifactory.node.javaOpts.xms + default: "1g" + description: "Artifactory member node java Xms size" + type: string + label: Artifactory Member Node Java Xms Size + - variable: artifactory.artifactory.node.resources.limits.cpu + default: "2" + description: "Artifactory member node cpu limit" + type: string + label: Artifactory Member Node CPU Limit + - variable: artifactory.artifactory.node.resources.limits.memory + default: "4Gi" + description: "Artifactory member node memory limit" + type: string + label: Artifactory Member Node Memory Limit + - variable: artifactory.artifactory.node.javaOpts.xmx + default: "4g" + description: "Artifactory member node java Xmx size" + type: string + label: Artifactory Member Node Java Xmx Size + +# Internal Settings +- variable: installerInfo + default: '\{\"productId\": \"RancherHelm_artifactory-jcr/7.6.3\", \"features\": \[\{\"featureId\": \"Partner/ACC-007246\"\}\]\}' + type: string + group: "Internal Settings 
(Do not modify)" diff --git a/charts/jfrog/artifactory-jcr/107.90.14/templates/NOTES.txt b/charts/jfrog/artifactory-jcr/107.90.14/templates/NOTES.txt new file mode 100644 index 0000000000..035bf84174 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/templates/NOTES.txt @@ -0,0 +1 @@ +Congratulations. You have just deployed JFrog Container Registry! diff --git a/charts/jfrog/artifactory-jcr/107.90.14/values.yaml b/charts/jfrog/artifactory-jcr/107.90.14/values.yaml new file mode 100644 index 0000000000..a96b4f7d22 --- /dev/null +++ b/charts/jfrog/artifactory-jcr/107.90.14/values.yaml @@ -0,0 +1,75 @@ +# Default values for artifactory-jcr. +# This is a YAML-formatted file. + +# Beware when changing values here. You should know what you are doing! +# Access the values with {{ .Values.key.subkey }} + +# This chart is based on the main artifactory chart with some customizations. +# See all supported configuration keys in https://github.com/jfrog/charts/tree/master/stable/artifactory + +## All values are under the 'artifactory' sub chart. +artifactory: + ## Artifactory + ## See full list of supported Artifactory options and documentation in artifactory chart: https://github.com/jfrog/charts/tree/master/stable/artifactory + artifactory: + ## Default tag is from the artifactory sub-chart in the requirements.yaml + image: + registry: releases-docker.jfrog.io + repository: jfrog/artifactory-jcr + # tag: + ## Uncomment the following resources definitions or pass them from command line + ## to control the cpu and memory resources allocated by the Kubernetes cluster + resources: {} + # requests: + # memory: "1Gi" + # cpu: "500m" + # limits: + # memory: "4Gi" + # cpu: "1" + ## The following Java options are passed to the java process running Artifactory. + ## You should set them according to the resources set above. + ## IMPORTANT: Make sure resources.limits.memory is at least 1G more than Xmx. 
+ javaOpts: {} + # xms: "1g" + # xmx: "3g" + # other: "" + installer: + platform: jcr-helm + installerInfo: '{"productId":"Helm_artifactory-jcr/{{ .Chart.Version }}","features":[{"featureId":"Platform/{{ printf "%s-%s" "kubernetes" .Capabilities.KubeVersion.Version }}"},{"featureId":"Database/{{ .Values.database.type }}"},{"featureId":"PostgreSQL_Enabled/{{ .Values.postgresql.enabled }}"},{"featureId":"Nginx_Enabled/{{ .Values.nginx.enabled }}"},{"featureId":"ArtifactoryPersistence_Type/{{ .Values.artifactory.persistence.type }}"},{"featureId":"SplitServicesToContainers_Enabled/{{ .Values.splitServicesToContainers }}"},{"featureId":"UnifiedSecretInstallation_Enabled/{{ .Values.artifactory.unifiedSecretInstallation }}"},{"featureId":"Filebeat_Enabled/{{ .Values.filebeat.enabled }}"},{"featureId":"ReplicaCount/{{ .Values.artifactory.replicaCount }}"}]}' + ## Nginx + ## See full list of supported Nginx options and documentation in artifactory chart: https://github.com/jfrog/charts/tree/master/stable/artifactory + nginx: + enabled: true + tlsSecretName: "" + service: + type: LoadBalancer + ## Ingress + ## See full list of supported Ingress options and documentation in artifactory chart: https://github.com/jfrog/charts/tree/master/stable/artifactory + ingress: + enabled: false + tls: + ## PostgreSQL + ## See list of supported postgresql options and documentation in artifactory chart: https://github.com/jfrog/charts/tree/master/stable/artifactory + ## Configuration values for the PostgreSQL dependency sub-chart + ## ref: https://github.com/bitnami/charts/blob/master/bitnami/postgresql/README.md + postgresql: + enabled: true + ## This key is required for upgrades to protect old PostgreSQL chart's breaking changes. + databaseUpgradeReady: "yes" + ## If NOT using the PostgreSQL in this chart (artifactory.postgresql.enabled=false), + ## specify custom database details here or leave empty and Artifactory will use embedded derby. 
+ ## See full list of database options and documentation in artifactory chart: https://github.com/jfrog/charts/tree/master/stable/artifactory + # database: + jfconnect: + enabled: false + federation: + enabled: false +## Enable the PostgreSQL sub chart +postgresql: + enabled: true +router: + image: + tag: 7.118.3 +initContainers: + image: + tag: 9.4.949.1716471857 diff --git a/charts/kasten/k10/7.0.1101/Chart.lock b/charts/kasten/k10/7.0.1101/Chart.lock new file mode 100644 index 0000000000..3e85f0fa3f --- /dev/null +++ b/charts/kasten/k10/7.0.1101/Chart.lock @@ -0,0 +1,9 @@ +dependencies: +- name: grafana + repository: "" + version: 8.5.0 +- name: prometheus + repository: "" + version: 25.24.1 +digest: sha256:916a3d0c3da91cf0344b61267d86bf7f95531cae2582fe57f1f28f7f04db67f7 +generated: "2024-10-08T19:50:05.986249602Z" diff --git a/charts/kasten/k10/7.0.1101/Chart.yaml b/charts/kasten/k10/7.0.1101/Chart.yaml new file mode 100644 index 0000000000..f1f6c8c9bd --- /dev/null +++ b/charts/kasten/k10/7.0.1101/Chart.yaml @@ -0,0 +1,25 @@ +annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: K10 + catalog.cattle.io/kube-version: '>= 1.17.0-0' + catalog.cattle.io/release-name: k10 +apiVersion: v2 +appVersion: 7.0.11 +dependencies: +- condition: grafana.enabled + name: grafana + repository: "" + version: 8.5.0 +- condition: prometheus.server.enabled + name: prometheus + repository: "" + version: 25.24.1 +description: Kasten’s K10 Data Management Platform +home: https://kasten.io/ +icon: file://assets/icons/k10.png +kubeVersion: '>= 1.17.0-0' +maintainers: +- email: contact@kasten.io + name: kastenIO +name: k10 +version: 7.0.1101 diff --git a/charts/kasten/k10/7.0.1101/README.md b/charts/kasten/k10/7.0.1101/README.md new file mode 100644 index 0000000000..62659cf7e7 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/README.md @@ -0,0 +1,342 @@ +# Kasten's K10 Helm chart. 
+ +[Kasten's K10](https://docs.kasten.io/) is a data lifecycle management system for all your +container-based applications. + +## TL;DR; + +```console +$ helm install k10 kasten/k10 --namespace=kasten-io +``` +Additionally, K10 images are available in Platform One's **Iron Bank** hardened container registry. +To install using these images, follow the instructions found +[here](https://docs.kasten.io/latest/install/ironbank.html). + +## Introduction + +This chart bootstraps Kasten's K10 platform on a [Kubernetes](http://kubernetes.io) cluster using +the [Helm](https://helm.sh) package manager. + +## Prerequisites + +- Kubernetes 1.23 - 1.26 + +## Installing the Chart + +To install the chart on a [GKE](https://cloud.google.com/container-engine/) cluster + +```console +$ helm install k10 kasten/k10 --namespace=kasten-io +``` + +To install the chart on an [AWS](https://aws.amazon.com/) [kops](https://github.com/kubernetes/kops)-created cluster + +```console +$ helm install k10 kasten/k10 --namespace=kasten-io --set secrets.awsAccessKeyId="${AWS_ACCESS_KEY_ID}" \ + --set secrets.awsSecretAccessKey="${AWS_SECRET_ACCESS_KEY}" +``` + +> **Tip**: List all releases using `helm list` + +## Uninstalling the Chart + +To uninstall/delete the `k10` application: + +```console +$ helm uninstall k10 --namespace=kasten-io +``` + +## Configuration + +The following table lists the configurable parameters of the K10 +chart and their default values. + +Parameter | Description | Default +--- | --- | --- +`eula.accept` | Whether to accept the EULA before installation | `false` +`eula.company` | Company name. Required field if EULA is accepted | `None` +`eula.email` | Contact email. 
Required field if EULA is accepted | `None` +`license` | License string obtained from Kasten | `None` +`rbac.create` | Whether to enable RBAC with a specific cluster role and binding for K10 | `true` +`scc.create` | Whether to create a SecurityContextConstraints for K10 ServiceAccounts | `false` +`scc.priority` | Sets the SecurityContextConstraints priority | `15` +`services.dashboardbff.hostNetwork` | Whether the dashboardbff pods may use the node network | `false` +`services.executor.hostNetwork` | Whether the executor pods may use the node network | `false` +`services.executor.workerCount` | Specifies count of running executor workers | 8 +`services.executor.maxConcurrentRestoreCsiSnapshots` | Limit of concurrent restore CSI snapshots operations per each restore action | 3 +`services.executor.maxConcurrentRestoreGenericVolumeSnapshots` | Limit of concurrent restore generic volume snapshots operations per each restore action | 3 +`services.executor.maxConcurrentRestoreWorkloads` | Limit of concurrent restore workloads operations per each restore action | 3 +`services.aggregatedapis.hostNetwork` | Whether the aggregatedapis pods may use the node network | `false` +`serviceAccount.create`| Specifies whether a ServiceAccount should be created | `true` +`serviceAccount.name` | The name of the ServiceAccount to use. If not set, a name is derived using the release and chart names. | `None` +`ingress.create` | Specifies whether the K10 dashboard should be exposed via ingress | `false` +`ingress.name` | Optional name of the Ingress object for the K10 dashboard. If not set, the name is formed using the release name. 
| `{Release.Name}-ingress` +`ingress.class` | Cluster ingress controller class: `nginx`, `GCE` | `None` +`ingress.host` | FQDN (e.g., `k10.example.com`) for name-based virtual host | `None` +`ingress.urlPath` | URL path for K10 Dashboard (e.g., `/k10`) | `Release.Name` +`ingress.pathType` | Specifies the path type for the ingress resource | `ImplementationSpecific` +`ingress.annotations` | Additional Ingress object annotations | `{}` +`ingress.tls.enabled` | Configures a TLS use for `ingress.host` | `false` +`ingress.tls.secretName` | Optional TLS secret name | `None` +`ingress.defaultBackend.service.enabled` | Configures the default backend backed by a service for the K10 dashboard Ingress (mutually exclusive setting with `ingress.defaultBackend.resource.enabled`). | `false` +`ingress.defaultBackend.service.name` | The name of a service referenced by the default backend (required if the service-backed default backend is used). | `None` +`ingress.defaultBackend.service.port.name` | The port name of a service referenced by the default backend (mutually exclusive setting with port `number`, required if the service-backed default backend is used). | `None` +`ingress.defaultBackend.service.port.number` | The port number of a service referenced by the default backend (mutually exclusive setting with port `name`, required if the service-backed default backend is used). | `None` +`ingress.defaultBackend.resource.enabled` | Configures the default backend backed by a resource for the K10 dashboard Ingress (mutually exclusive setting with `ingress.defaultBackend.service.enabled`). | `false` +`ingress.defaultBackend.resource.apiGroup` | Optional API group of a resource backing the default backend. | `''` +`ingress.defaultBackend.resource.kind` | The type of a resource being referenced by the default backend (required if the resource default backend is used). 
| `None` +`ingress.defaultBackend.resource.name` | The name of a resource being referenced by the default backend (required if the resource default backend is used). | `None` +`global.persistence.size` | Default global size of volumes for K10 persistent services | `20Gi` +`global.persistence.catalog.size` | Size of a volume for catalog service | `global.persistence.size` +`global.persistence.jobs.size` | Size of a volume for jobs service | `global.persistence.size` +`global.persistence.logging.size` | Size of a volume for logging service | `global.persistence.size` +`global.persistence.metering.size` | Size of a volume for metering service | `global.persistence.size` +`global.persistence.storageClass` | Specified StorageClassName will be used for PVCs | `None` +`global.podLabels` | Configures custom labels to be set to all Kasten pods | `None` +`global.podAnnotations` | Configures custom annotations to be set to all Kasten pods | `None` +`global.airgapped.repository` | Specify the helm repository for offline (airgapped) installation | `''` +`global.imagePullSecret` | Provide secret which contains docker config for private repository. Use `k10-ecr` when secrets.dockerConfigPath is used. 
| `''` +`global.prometheus.external.host` | Provide external prometheus host name | `''` +`global.prometheus.external.port` | Provide external prometheus port number | `''` +`global.prometheus.external.baseURL` | Provide Base URL of external prometheus | `''` +`global.network.enable_ipv6` | Enable `IPv6` support for K10 | `false` +`google.workloadIdentityFederation.enabled` | Enable Google Workload Identity Federation for K10 | `false` +`google.workloadIdentityFederation.idp.type` | Identity Provider type for Google Workload Identity Federation for K10 | `''` +`google.workloadIdentityFederation.idp.aud` | Audience for whom the ID Token from Identity Provider is intended | `''` +`secrets.awsAccessKeyId` | AWS access key ID (required for AWS deployment) | `None` +`secrets.awsSecretAccessKey` | AWS access key secret | `None` +`secrets.awsIamRole` | ARN of the AWS IAM role assumed by K10 to perform any AWS operation. | `None` +`secrets.awsClientSecretName` | The secret that contains AWS access key ID, AWS access key secret and AWS IAM role for AWS | `None` +`secrets.googleApiKey` | Non-default base64 encoded GCP Service Account key | `None` +`secrets.googleProjectId` | Sets Google Project ID other than the one used in the GCP Service Account | `None` +`secrets.azureTenantId` | Azure tenant ID (required for Azure deployment) | `None` +`secrets.azureClientId` | Azure Service App ID | `None` +`secrets.azureClientSecret` | Azure Service APP secret | `None` +`secrets.azureClientSecretName` | The secret that contains ClientID, ClientSecret and TenantID for Azure | `None` +`secrets.azureResourceGroup` | Resource Group name that was created for the Kubernetes cluster | `None` +`secrets.azureSubscriptionID` | Subscription ID in your Azure tenant | `None` +`secrets.azureResourceMgrEndpoint` | Resource management endpoint for the Azure Stack instance | `None` +`secrets.azureADEndpoint` | Azure Active Directory login endpoint | `None` +`secrets.azureADResourceID` | Azure Active 
Directory resource ID to obtain AD tokens | `None` +`secrets.microsoftEntraIDEndpoint` | Microsoft Entra ID login endpoint | `None` +`secrets.microsoftEntraIDResourceID` | Microsoft Entra ID resource ID to obtain AD tokens | `None` +`secrets.azureCloudEnvID` | Azure Cloud Environment ID | `None` +`secrets.vsphereEndpoint` | vSphere endpoint for login | `None` +`secrets.vsphereUsername` | vSphere username for login | `None` +`secrets.vspherePassword` | vSphere password for login | `None` +`secrets.vsphereClientSecretName` | The secret that contains vSphere username, vSphere password and vSphere endpoint | `None` +`secrets.dockerConfig` | Set base64 encoded docker config to use for image pull operations. Alternative to the ``secrets.dockerConfigPath`` | `None` +`secrets.dockerConfigPath` | Use ``--set-file secrets.dockerConfigPath=path_to_docker_config.yaml`` to specify docker config for image pull. Will be overwritten if ``secrets.dockerConfig`` is set | `None` +`cacertconfigmap.name` | Name of the ConfigMap that contains a certificate for a trusted root certificate authority | `None` +`clusterName` | Cluster name for better logs visibility | `None` +`metering.awsRegion` | Sets AWS_REGION for metering service | `None` +`metering.mode` | Control license reporting (set to `airgap` for private-network installs) | `None` +`metering.reportCollectionPeriod` | Sets metric report collection period (in seconds) | `1800` +`metering.reportPushPeriod` | Sets metric report push period (in seconds) | `3600` +`metering.promoID` | Sets K10 promotion ID from marketing campaigns | `None` +`metering.awsMarketplace` | Sets AWS cloud metering license mode | `false` +`metering.awsManagedLicense` | Sets AWS managed license mode | `false` +`metering.redhatMarketplacePayg` | Sets Red Hat cloud metering license mode | `false` +`metering.licenseConfigSecretName` | Sets AWS managed license config secret | `None` +`externalGateway.create` | Configures an external gateway for K10 API services | 
`false` +`externalGateway.annotations` | Standard annotations for the services | `None` +`externalGateway.fqdn.name` | Domain name for the K10 API services | `None` +`externalGateway.fqdn.type` | Supported gateway type: `route53-mapper` or `external-dns` | `None` +`externalGateway.awsSSLCertARN` | ARN for the AWS ACM SSL certificate used in the K10 API server | `None` +`auth.basicAuth.enabled` | Configures basic authentication for the K10 dashboard | `false` +`auth.basicAuth.htpasswd` | A username and password pair separated by a colon character | `None` +`auth.basicAuth.secretName` | Name of an existing Secret that contains a file generated with htpasswd | `None` +`auth.k10AdminGroups` | A list of groups whose members are granted admin level access to K10's dashboard | `None` +`auth.k10AdminUsers` | A list of users who are granted admin level access to K10's dashboard | `None` +`auth.tokenAuth.enabled` | Configures token based authentication for the K10 dashboard | `false` +`auth.oidcAuth.enabled` | Configures Open ID Connect based authentication for the K10 dashboard | `false` +`auth.oidcAuth.providerURL` | URL for the OIDC Provider | `None` +`auth.oidcAuth.redirectURL` | URL to the K10 gateway service | `None` +`auth.oidcAuth.scopes` | Space separated OIDC scopes required for userinfo. 
Example: "profile email" | `None` +`auth.oidcAuth.prompt` | The type of prompt to be used during authentication (none, consent, login or select_account) | `select_account` +`auth.oidcAuth.clientID` | Client ID given by the OIDC provider for K10 | `None` +`auth.oidcAuth.clientSecret` | Client secret given by the OIDC provider for K10 | `None` +`auth.oidcAuth.clientSecretName` | The secret that contains the Client ID and Client secret given by the OIDC provider for K10 | `None` +`auth.oidcAuth.usernameClaim` | The claim to be used as the username | `sub` +`auth.oidcAuth.usernamePrefix` | Prefix that has to be used with the username obtained from the username claim | `None` +`auth.oidcAuth.groupClaim` | Name of a custom OpenID Connect claim for specifying user groups | `None` +`auth.oidcAuth.groupPrefix` | All groups will be prefixed with this value to prevent conflicts | `None` +`auth.oidcAuth.sessionDuration` | Maximum OIDC session duration | `1h` +`auth.oidcAuth.refreshTokenSupport` | Enable OIDC Refresh Token support | `false` +`auth.openshift.enabled` | Enables access to the K10 dashboard by authenticating with the OpenShift OAuth server | `false` +`auth.openshift.serviceAccount` | Name of the service account that represents an OAuth client | `None` +`auth.openshift.clientSecret` | The token corresponding to the service account | `None` +`auth.openshift.clientSecretName` | The secret that contains the token corresponding to the service account | `None` +`auth.openshift.dashboardURL` | The URL used for accessing K10's dashboard | `None` +`auth.openshift.openshiftURL` | The URL for accessing OpenShift's API server | `None` +`auth.openshift.insecureCA` | To turn off SSL verification of connections to OpenShift | `false` +`auth.openshift.useServiceAccountCA` | Set this to true to use the CA certificate corresponding to the Service Account ``auth.openshift.serviceAccount`` usually found at ``/var/run/secrets/kubernetes.io/serviceaccount/ca.crt`` | `false` 
+`auth.openshift.caCertsAutoExtraction` | Set this to false to disable the OCP CA certificates automatic extraction to the K10 namespace | `true` +`auth.ldap.enabled` | Configures Active Directory/LDAP based authentication for the K10 dashboard | `false` +`auth.ldap.restartPod` | To force a restart of the authentication service pod (useful when updating authentication config) | `false` +`auth.ldap.dashboardURL` | The URL used for accessing K10's dashboard | `None` +`auth.ldap.host` | Host and optional port of the AD/LDAP server in the form `host:port` | `None` +`auth.ldap.insecureNoSSL` | Required if the AD/LDAP host is not using TLS | `false` +`auth.ldap.insecureSkipVerifySSL` | To turn off SSL verification of connections to the AD/LDAP host | `false` +`auth.ldap.startTLS` | When set to true, ldap:// is used to connect to the server followed by creation of a TLS session. When set to false, ldaps:// is used. | `false` +`auth.ldap.bindDN` | The Distinguished Name(username) used for connecting to the AD/LDAP host | `None` +`auth.ldap.bindPW` | The password corresponding to the `bindDN` for connecting to the AD/LDAP host | `None` +`auth.ldap.bindPWSecretName` | The name of the secret that contains the password corresponding to the `bindDN` for connecting to the AD/LDAP host | `None` +`auth.ldap.userSearch.baseDN` | The base Distinguished Name to start the AD/LDAP search from | `None` +`auth.ldap.userSearch.filter` | Optional filter to apply when searching the directory | `None` +`auth.ldap.userSearch.username` | Attribute used for comparing user entries when searching the directory | `None` +`auth.ldap.userSearch.idAttr` | AD/LDAP attribute in a user's entry that should map to the user ID field in a token | `None` +`auth.ldap.userSearch.emailAttr` | AD/LDAP attribute in a user's entry that should map to the email field in a token | `None` +`auth.ldap.userSearch.nameAttr` | AD/LDAP attribute in a user's entry that should map to the name field in a token | `None` 
+`auth.ldap.userSearch.preferredUsernameAttr` | AD/LDAP attribute in a user's entry that should map to the preferred_username field in a token | `None` +`auth.ldap.groupSearch.baseDN` | The base Distinguished Name to start the AD/LDAP group search from | `None` +`auth.ldap.groupSearch.filter` | Optional filter to apply when searching the directory for groups | `None` +`auth.ldap.groupSearch.nameAttr` | The AD/LDAP attribute that represents a group's name in the directory | `None` +`auth.ldap.groupSearch.userMatchers` | List of field pairs that are used to match a user to a group. | `None` +`auth.ldap.groupSearch.userMatchers.userAttr` | Attribute in the user's entry that must match with the `groupAttr` while searching for groups | `None` +`auth.ldap.groupSearch.userMatchers.groupAttr` | Attribute in the group's entry that must match with the `userAttr` while searching for groups | `None` +`auth.groupAllowList` | A list of groups whose members are allowed access to K10's dashboard | `None` +`services.securityContext` | Custom [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) for K10 service containers | `{"runAsUser" : 1000, "fsGroup": 1000}` +`services.securityContext.runAsUser` | User ID K10 service containers run as| `1000` +`services.securityContext.runAsGroup` | Group ID K10 service containers run as| `1000` +`services.securityContext.fsGroup` | FSGroup that owns K10 service container volumes | `1000` +`siem.logging.cluster.enabled` | Whether to enable writing K10 audit event logs to stdout (standard output) | `true` +`siem.logging.cloud.path` | Directory path for saving audit logs in a cloud object store | `k10audit/` +`siem.logging.cloud.awsS3.enabled` | Whether to enable sending K10 audit event logs to AWS S3 | `true` +`injectKanisterSidecar.enabled` | Enable Kanister sidecar injection for workload pods | `false` +`injectKanisterSidecar.namespaceSelector.matchLabels` | Set of labels to select namespaces in which 
sidecar injection is enabled for workloads | `{}` +`injectKanisterSidecar.objectSelector.matchLabels` | Set of labels to filter workload objects in which the sidecar is injected | `{}` +`injectKanisterSidecar.webhookServer.port` | Port number on which the mutating webhook server accepts requests | `8080` +`gateway.insecureDisableSSLVerify` | Specifies whether to disable SSL verification for gateway pods | `false` +`gateway.exposeAdminPort` | Specifies whether to expose the Admin port for the gateway service | `true` +`gateway.resources.[requests\|limits].[cpu\|memory]` | Resource requests and limits for the gateway pod | `{}` +`gateway.service.externalPort` | Specifies the gateway service's external port | `80` +`genericVolumeSnapshot.resources.[requests\|limits].[cpu\|memory]` | Resource requests and limits for Generic Volume Snapshot restore pods | `{}` +`multicluster.enabled` | Choose whether to enable the multi-cluster system components and capabilities | `true` +`multicluster.primary.create` | Choose whether to set up the cluster as a multi-cluster primary | `false` +`multicluster.primary.name` | Primary cluster name | `''` +`multicluster.primary.ingressURL` | Primary cluster dashboard URL | `''` +`prometheus.k10image.registry` | (optional) Set Prometheus image registry. | `gcr.io` +`prometheus.k10image.repository` | (optional) Set Prometheus image repository. | `kasten-images` +`prometheus.rbac.create` | (optional) Whether to create Prometheus RBAC configuration.
Warning - this action will allow prometheus to scrape pods in all k8s namespaces | `false` +`prometheus.alertmanager.enabled` | DEPRECATED: (optional) Enable Prometheus `alertmanager` service | `false` +`prometheus.alertmanager.serviceAccount.create` | DEPRECATED: (optional) Set true to create ServiceAccount for `alertmanager` | `false` +`prometheus.networkPolicy.enabled` | DEPRECATED: (optional) Enable Prometheus `networkPolicy` | `false` +`prometheus.prometheus-node-exporter.enabled` | DEPRECATED: (optional) Enable Prometheus `node-exporter` | `false` +`prometheus.prometheus-node-exporter.serviceAccount.create` | DEPRECATED: (optional) Set true to create ServiceAccount for `prometheus-node-exporter` | `false` +`prometheus.prometheus-pushgateway.enabled` | DEPRECATED: (optional) Enable Prometheus `pushgateway` | `false` +`prometheus.prometheus-pushgateway.serviceAccount.create` | DEPRECATED: (optional) Set true to create ServiceAccount for `prometheus-pushgateway` | `false` +`prometheus.scrapeCAdvisor` | DEPRECATED: (optional) Enable Prometheus ScrapeCAdvisor | `false` +`prometheus.server.enabled` | (optional) If false, K10's Prometheus server will not be created, reducing the dashboard's functionality. 
| `true` +`prometheus.server.securityContext.runAsUser` | (optional) Set security context `runAsUser` ID for Prometheus server pod | `65534` +`prometheus.server.securityContext.runAsNonRoot` | (optional) Enable security context `runAsNonRoot` for Prometheus server pod | `true` +`prometheus.server.securityContext.runAsGroup` | (optional) Set security context `runAsGroup` ID for Prometheus server pod | `65534` +`prometheus.server.securityContext.fsGroup` | (optional) Set security context `fsGroup` ID for Prometheus server pod | `65534` +`prometheus.server.retention` | (optional) K10 Prometheus data retention | `"30d"` +`prometheus.server.strategy.rollingUpdate.maxSurge` | DEPRECATED: (optional) The number of Prometheus server pods that can be created above the desired amount of pods during an update | `"100%"` +`prometheus.server.strategy.rollingUpdate.maxUnavailable` | DEPRECATED: (optional) The number of Prometheus server pods that can be unavailable during the upgrade process | `"100%"` +`prometheus.server.strategy.type` | DEPRECATED: (optional) Change default deployment strategy for Prometheus server | `"RollingUpdate"` +`prometheus.server.persistentVolume.enabled` | DEPRECATED: (optional) If true, K10 Prometheus server will create a Persistent Volume Claim | `true` +`prometheus.server.persistentVolume.size` | (optional) K10 Prometheus server data Persistent Volume size | `30Gi` +`prometheus.server.persistentVolume.storageClass` | (optional) StorageClassName used to create Prometheus PVC. 
Setting this option overwrites the global StorageClass value | `""` +`prometheus.server.configMapOverrideName` | DEPRECATED: (optional) Prometheus configmap name to override the default generated name | `k10-prometheus-config` +`prometheus.server.fullnameOverride` | (optional) Prometheus deployment name to override the default generated name | `prometheus-server` +`prometheus.server.baseURL` | (optional) K10 Prometheus external URL path at which the server can be accessed | `/k10/prometheus/` +`prometheus.server.prefixURL` | (optional) K10 Prometheus prefix slug at which the server can be accessed | `/k10/prometheus/` +`prometheus.server.serviceAccounts.server.create` | DEPRECATED: (optional) Set true to create a ServiceAccount for the Prometheus server service | `true` +`grafana.enabled` | (optional) If false, Grafana will not be available | `true` +`resources...[requests\|limits].[cpu\|memory]` | Overrides the default K10 [container resource requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) | varies depending on the container +`route.enabled` | Specifies whether the K10 dashboard should be exposed via route | `false` +`route.host` | FQDN (e.g., `.k10.example.com`) for name-based virtual host | `""` +`route.path` | URL path for the K10 Dashboard (e.g., `/k10`) | `/` +`route.annotations` | Additional Route object annotations | `{}` +`route.labels` | Additional Route object labels | `{}` +`route.tls.enabled` | Configures TLS for `route.host` | `false` +`route.tls.insecureEdgeTerminationPolicy` | Specifies behavior for insecure scheme traffic | `Redirect` +`route.tls.termination` | Specifies the TLS termination of the route | `edge` +`apigateway.serviceResolver` | Specifies the resolver used for service discovery in the API gateway (`dns` or `endpoint`) | `dns` +`limiter.concurrentSnapConversions` | Limit of concurrent snapshots to convert during export | `3` +`limiter.genericVolumeSnapshots` | Limit of concurrent generic volume
snapshot create operations | `10` +`limiter.genericVolumeCopies` | Limit of concurrent generic volume snapshot copy operations | `10` +`limiter.genericVolumeRestores` | Limit of concurrent generic volume snapshot restore operations | `10` +`limiter.csiSnapshots` | Limit of concurrent CSI snapshot create operations | `10` +`limiter.providerSnapshots` | Limit of concurrent cloud provider create operations | `10` +`limiter.imageCopies` | Limit of concurrent image copy operations | `10` +`cluster.domainName` | Specifies the domain name of the cluster | `""` +`kanister.backupTimeout` | Specifies timeout to set on Kanister backup operations | `45` +`kanister.restoreTimeout` | Specifies timeout to set on Kanister restore operations | `600` +`kanister.deleteTimeout` | Specifies timeout to set on Kanister delete operations | `45` +`kanister.hookTimeout` | Specifies timeout to set on Kanister pre-hook and post-hook operations | `20` +`kanister.checkRepoTimeout` | Specifies timeout to set on Kanister checkRepo operations | `20` +`kanister.statsTimeout` | Specifies timeout to set on Kanister stats operations | `20` +`kanister.efsPostRestoreTimeout` | Specifies timeout to set on Kanister efsPostRestore operations | `45` +`kanister.podReadyWaitTimeout` | Specifies a timeout to wait for Kanister pods to reach the ready state during K10 operations | `15` +`awsConfig.assumeRoleDuration` | Duration of a session token generated by AWS for an IAM role. The minimum value is 15 minutes and the maximum value is the maximum duration setting for that IAM role. For documentation about how to view and edit the maximum session duration for an IAM role see https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session. 
The value accepts a number along with a single character ``m`` (for minutes) or ``h`` (for hours). Examples: `60m` or `2h` | `''` +`awsConfig.efsBackupVaultName` | Specifies the AWS EFS backup vault name | `k10vault` +`vmWare.taskTimeoutMin` | Specifies the timeout for VMware operations | `60` +`encryption.primaryKey.awsCmkKeyId` | Specifies the AWS CMK key ID for encrypting the K10 Primary Key | `None` +`garbagecollector.daemonPeriod` | Sets the garbage collection period (in seconds) | `21600` +`garbagecollector.keepMaxActions` | Sets the maximum number of actions to keep | `1000` +`garbagecollector.actions.enabled` | Enables action collectors | `false` +`kubeVirtVMs.snapshot.unfreezeTimeout` | Defines the time duration within which the VMs must be unfrozen while backing them up. See the [Go duration format](https://pkg.go.dev/time#ParseDuration) for details | `5m` +`excludedApps` | Specifies a list of applications to be excluded from the dashboard & compliance considerations. Format should be a YAML array | `["kube-system", "kube-ingress", "kube-node-lease", "kube-public", "kube-rook-ceph"]` +`kanisterPodMetricSidecar.enabled` | Enable the sidecar container to gather metrics from ephemeral pods | `true` +`kanisterPodMetricSidecar.metricLifetime` | Check periodically for metrics that should be removed | `2m` +`kanisterPodMetricSidecar.pushGatewayInterval` | Set the interval for pushing metrics to Prometheus | `30s` +`kanisterPodMetricSidecar.resources.[requests\|limits].[cpu\|memory]` | Resource requests and limits for the Kanister pod metric sidecar | `{}` +`maxJobWaitDuration` | Set a maximum duration of waiting for child jobs. If the execution of the subordinate jobs exceeds this value, the parent job will be canceled.
If no value is set, a default of 10 hours will be used | `None` +`forceRootInKanisterHooks` | Forces Kanister Execution Hooks to run with root privileges | `true` +`defaultPriorityClassName` | Specifies the default [priority class](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) name for all K10 deployments and ephemeral pods | `None` +`priorityClassName.` | Overrides the default [priority class](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) name for the specified deployment | `{}` +`ephemeralPVCOverhead` | Set the percentage increase for the ephemeral Persistent Volume Claim's storage request, e.g. PVC size = (file raw size) * (1 + `ephemeralPVCOverhead`) | `0.1` +`datastore.parallelUploads` | Specifies how many files can be uploaded in parallel to the data store | `8` +`datastore.parallelDownloads` | Specifies how many files can be downloaded in parallel from the data store | `8` +`kastenDisasterRecovery.quickMode.enabled` | Enables K10 Quick Disaster Recovery | `false` +`fips.enabled` | Specifies whether K10 should run in FIPS mode of operation | `false` +`workerPodCRDs.enabled` | Specifies whether K10 should use `ActionPodSpec` for granular resource control of worker pods | `false` +`workerPodCRDs.resourcesRequests.maxCPU` | Max CPU that may be set in `ActionPodSpec` | `''` +`workerPodCRDs.resourcesRequests.maxMemory` | Max memory that may be set in `ActionPodSpec` | `''` +`workerPodCRDs.defaultActionPodSpec.name` | The name of the `ActionPodSpec` that will be used by default for worker pod resources. | `''` +`workerPodCRDs.defaultActionPodSpec.namespace` | The namespace of the `ActionPodSpec` that will be used by default for worker pod resources. | `''` + + + +## Helm tips and tricks + +Values can also be set via a YAML file instead of `--set`.
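+For example, several of the dashboard ingress parameters documented above can be grouped in such a file. This is only an illustrative sketch; the hostname and TLS secret name below are placeholders, not values shipped with the chart: + +```yaml +ingress: + create: true + class: nginx + host: k10.example.com # placeholder FQDN for the dashboard + urlPath: k10 + tls: + enabled: true + secretName: k10-dashboard-tls # placeholder name of an existing TLS secret +``` +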
+First, copy/paste values into a file (e.g., my_values.yaml): + +```yaml +secrets: + awsAccessKeyId: ${AWS_ACCESS_KEY_ID} + awsSecretAccessKey: ${AWS_SECRET_ACCESS_KEY} +``` + +and then run: + +```bash + envsubst < my_values.yaml > my_values_out.yaml && helm install k10 kasten/k10 -f my_values_out.yaml +``` + +To set a single value from a file, `--set-file` may be used over `--set`: + +```bash + helm install k10 kasten/k10 --set-file license=my_license.lic +``` + + +To use non-default GCP ServiceAccount (SA) credentials, the credentials JSON file needs to be encoded into a base64 +string: + +```bash + sa_key=$(base64 -w0 sa-key.json) + helm install k10 kasten/k10 --namespace=kasten-io --set secrets.googleApiKey=$sa_key +``` + +If the Google Service Account belongs to a project other than the one in which the cluster +is located, then the project's ID of the cluster must be also provided during the installation: + +```bash + sa_key=$(base64 -w0 sa-key.json) + helm install k10 kasten/k10 --namespace=kasten-io --set secrets.googleApiKey=$sa_key --set secrets.googleProjectId= +``` diff --git a/charts/kasten/k10/7.0.1101/app-readme.md b/charts/kasten/k10/7.0.1101/app-readme.md new file mode 100644 index 0000000000..1b221891be --- /dev/null +++ b/charts/kasten/k10/7.0.1101/app-readme.md @@ -0,0 +1,5 @@ +The K10 data management platform, purpose-built for Kubernetes, provides enterprise operations teams an easy-to-use, scalable, and secure system for backup/restore, disaster recovery, and mobility of Kubernetes applications. + +K10’s application-centric approach and deep integrations with relational and NoSQL databases, Kubernetes distributions, and all clouds provide teams the freedom of infrastructure choice without sacrificing operational simplicity. 
Policy-driven and extensible, K10 provides a native Kubernetes API and includes features such as full-spectrum consistency, database integrations, automatic application discovery, multi-cloud mobility, and a powerful web-based user interface. + +For more information, refer to the docs [https://docs.kasten.io/](https://docs.kasten.io/) diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/.helmignore b/charts/kasten/k10/7.0.1101/charts/grafana/.helmignore new file mode 100644 index 0000000000..8cade1318f --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.vscode +.project +.idea/ +*.tmproj +OWNERS diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/Chart.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/Chart.yaml new file mode 100644 index 0000000000..e334ca5cb5 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/Chart.yaml @@ -0,0 +1,35 @@ +annotations: + artifacthub.io/license: Apache-2.0 + artifacthub.io/links: | + - name: Chart Source + url: https://github.com/grafana/helm-charts + - name: Upstream Project + url: https://github.com/grafana/grafana +apiVersion: v2 +appVersion: 11.1.5 +description: The leading tool for querying and visualizing time series and metrics. 
+home: https://grafana.com +icon: https://artifacthub.io/image/b4fed1a7-6c8f-4945-b99d-096efa3e4116 +keywords: +- monitoring +- metric +kubeVersion: ^1.8.0-0 +maintainers: +- email: zanhsieh@gmail.com + name: zanhsieh +- email: rluckie@cisco.com + name: rtluckie +- email: maor.friedman@redhat.com + name: maorfr +- email: miroslav.hadzhiev@gmail.com + name: Xtigyro +- email: mail@torstenwalter.de + name: torstenwalter +- email: github@jkroepke.de + name: jkroepke +name: grafana +sources: +- https://github.com/grafana/grafana +- https://github.com/grafana/helm-charts +type: application +version: 8.5.0 diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/README.md b/charts/kasten/k10/7.0.1101/charts/grafana/README.md new file mode 100644 index 0000000000..2c9609e122 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/README.md @@ -0,0 +1,783 @@ +# Grafana Helm Chart + +* Installs the web dashboarding system [Grafana](http://grafana.org/) + +## Get Repo Info + +```console +helm repo add grafana https://grafana.github.io/helm-charts +helm repo update +``` + +_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._ + +## Installing the Chart + +To install the chart with the release name `my-release`: + +```console +helm install my-release grafana/grafana +``` + +## Uninstalling the Chart + +To uninstall/delete the my-release deployment: + +```console +helm delete my-release +``` + +The command removes all the Kubernetes components associated with the chart and deletes the release. + +## Upgrading an existing Release to a new major version + +A major chart version change (like v1.2.3 -> v2.0.0) indicates that there is an +incompatible breaking change needing manual actions. + +### To 4.0.0 (And 3.12.1) + +This version requires Helm >= 2.12.0. + +### To 5.0.0 + +You have to add --force to your helm upgrade command as the labels of the chart have changed. + +### To 6.0.0 + +This version requires Helm >= 3.1.0. 
+ +### To 7.0.0 + +For consistency with other Helm charts, the `global.image.registry` parameter was renamed +to `global.imageRegistry`. If you were not previously setting `global.image.registry`, no action +is required on upgrade. If you were previously setting `global.image.registry`, you will +need to instead set `global.imageRegistry`. + +## Configuration + +| Parameter | Description | Default | +|-------------------------------------------|-----------------------------------------------|---------------------------------------------------------| +| `replicas` | Number of nodes | `1` | +| `podDisruptionBudget.minAvailable` | Pod disruption minimum available | `nil` | +| `podDisruptionBudget.maxUnavailable` | Pod disruption maximum unavailable | `nil` | +| `podDisruptionBudget.apiVersion` | Pod disruption apiVersion | `nil` | +| `deploymentStrategy` | Deployment strategy | `{ "type": "RollingUpdate" }` | +| `livenessProbe` | Liveness Probe settings | `{ "httpGet": { "path": "/api/health", "port": 3000 } "initialDelaySeconds": 60, "timeoutSeconds": 30, "failureThreshold": 10 }` | +| `readinessProbe` | Readiness Probe settings | `{ "httpGet": { "path": "/api/health", "port": 3000 } }`| +| `securityContext` | Deployment securityContext | `{"runAsUser": 472, "runAsGroup": 472, "fsGroup": 472}` | +| `priorityClassName` | Name of Priority Class to assign pods | `nil` | +| `image.registry` | Image registry | `docker.io` | +| `image.repository` | Image repository | `grafana/grafana` | +| `image.tag` | Overrides the Grafana image tag whose default is the chart appVersion (`Must be >= 5.0.0`) | `` | +| `image.sha` | Image sha (optional) | `` | +| `image.pullPolicy` | Image pull policy | `IfNotPresent` | +| `image.pullSecrets` | Image pull secrets (can be templated) | `[]` | +| `service.enabled` | Enable grafana service | `true` | +| `service.ipFamilies` | Kubernetes service IP families | `[]` | +| `service.ipFamilyPolicy` | Kubernetes service IP family policy | `""` | +| 
`service.type` | Kubernetes service type | `ClusterIP` | +| `service.port` | Kubernetes port where service is exposed | `80` | +| `service.portName` | Name of the port on the service | `service` | +| `service.appProtocol` | Adds the appProtocol field to the service | `` | +| `service.targetPort` | Internal service is port | `3000` | +| `service.nodePort` | Kubernetes service nodePort | `nil` | +| `service.annotations` | Service annotations (can be templated) | `{}` | +| `service.labels` | Custom labels | `{}` | +| `service.clusterIP` | internal cluster service IP | `nil` | +| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `nil` | +| `service.loadBalancerSourceRanges` | list of IP CIDRs allowed access to lb (if supported) | `[]` | +| `service.externalIPs` | service external IP addresses | `[]` | +| `service.externalTrafficPolicy` | change the default externalTrafficPolicy | `nil` | +| `headlessService` | Create a headless service | `false` | +| `extraExposePorts` | Additional service ports for sidecar containers| `[]` | +| `hostAliases` | adds rules to the pod's /etc/hosts | `[]` | +| `ingress.enabled` | Enables Ingress | `false` | +| `ingress.annotations` | Ingress annotations (values are templated) | `{}` | +| `ingress.labels` | Custom labels | `{}` | +| `ingress.path` | Ingress accepted path | `/` | +| `ingress.pathType` | Ingress type of path | `Prefix` | +| `ingress.hosts` | Ingress accepted hostnames | `["chart-example.local"]` | +| `ingress.extraPaths` | Ingress extra paths to prepend to every host configuration. Useful when configuring [custom actions with AWS ALB Ingress Controller](https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.6/guide/ingress/annotations/#actions). Requires `ingress.hosts` to have one or more host entries. | `[]` | +| `ingress.tls` | Ingress TLS configuration | `[]` | +| `ingress.ingressClassName` | Ingress Class Name. 
MAY be required for Kubernetes versions >= 1.18 | `""` | +| `resources` | CPU/Memory resource requests/limits | `{}` | +| `nodeSelector` | Node labels for pod assignment | `{}` | +| `tolerations` | Toleration labels for pod assignment | `[]` | +| `affinity` | Affinity settings for pod assignment | `{}` | +| `extraInitContainers` | Init containers to add to the grafana pod | `{}` | +| `extraContainers` | Sidecar containers to add to the grafana pod | `""` | +| `extraContainerVolumes` | Volumes that can be mounted in sidecar containers | `[]` | +| `extraLabels` | Custom labels for all manifests | `{}` | +| `schedulerName` | Name of the k8s scheduler (other than default) | `nil` | +| `persistence.enabled` | Use persistent volume to store data | `false` | +| `persistence.type` | Type of persistence (`pvc` or `statefulset`) | `pvc` | +| `persistence.size` | Size of persistent volume claim | `10Gi` | +| `persistence.existingClaim` | Use an existing PVC to persist data (can be templated) | `nil` | +| `persistence.storageClassName` | Type of persistent volume claim | `nil` | +| `persistence.accessModes` | Persistence access modes | `[ReadWriteOnce]` | +| `persistence.annotations` | PersistentVolumeClaim annotations | `{}` | +| `persistence.finalizers` | PersistentVolumeClaim finalizers | `[ "kubernetes.io/pvc-protection" ]` | +| `persistence.extraPvcLabels` | Extra labels to apply to a PVC. 
| `{}` | +| `persistence.subPath` | Mount a sub dir of the persistent volume (can be templated) | `nil` | +| `persistence.inMemory.enabled` | If persistence is not enabled, whether to mount the local storage in-memory to improve performance | `false` | +| `persistence.inMemory.sizeLimit` | SizeLimit for the in-memory local storage | `nil` | +| `persistence.disableWarning` | Hide NOTES warning, useful when persisting to a database | `false` | +| `initChownData.enabled` | If false, don't reset data ownership at startup | true | +| `initChownData.image.registry` | init-chown-data container image registry | `docker.io` | +| `initChownData.image.repository` | init-chown-data container image repository | `busybox` | +| `initChownData.image.tag` | init-chown-data container image tag | `1.31.1` | +| `initChownData.image.sha` | init-chown-data container image sha (optional)| `""` | +| `initChownData.image.pullPolicy` | init-chown-data container image pull policy | `IfNotPresent` | +| `initChownData.resources` | init-chown-data pod resource requests & limits | `{}` | +| `schedulerName` | Alternate scheduler name | `nil` | +| `env` | Extra environment variables passed to pods | `{}` | +| `envValueFrom` | Environment variables from alternate sources. See the API docs on [EnvVarSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#envvarsource-v1-core) for format details. Can be templated | `{}` | +| `envFromSecret` | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `""` | +| `envFromSecrets` | List of Kubernetes secrets (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `[]` | +| `envFromConfigMaps` | List of Kubernetes ConfigMaps (must be manually created in the same namespace) containing values to be added to the environment. 
Can be templated | `[]` |
+| `envRenderSecret` | Sensitive environment variables passed to pods and stored as secret. (passed through [tpl](https://helm.sh/docs/howto/charts_tips_and_tricks/#using-the-tpl-function)) | `{}` |
+| `enableServiceLinks` | Inject Kubernetes services as environment variables. | `true` |
+| `extraSecretMounts` | Additional grafana server secret mounts | `[]` |
+| `extraVolumeMounts` | Additional grafana server volume mounts | `[]` |
+| `extraVolumes` | Additional Grafana server volumes | `[]` |
+| `automountServiceAccountToken` | Mount the service account token on the grafana pod. Mandatory if sidecars are enabled | `true` |
+| `createConfigmap` | Enable creating the grafana configmap | `true` |
+| `extraConfigmapMounts` | Additional grafana server configMap volume mounts (values are templated) | `[]` |
+| `extraEmptyDirMounts` | Additional grafana server emptyDir volume mounts | `[]` |
+| `plugins` | Plugins to be loaded along with Grafana | `[]` |
+| `datasources` | Configure grafana datasources (passed through tpl) | `{}` |
+| `alerting` | Configure grafana alerting (passed through tpl) | `{}` |
+| `notifiers` | Configure grafana notifiers | `{}` |
+| `dashboardProviders` | Configure grafana dashboard providers | `{}` |
+| `dashboards` | Dashboards to import | `{}` |
+| `dashboardsConfigMaps` | ConfigMaps reference that contains dashboards | `{}` |
+| `grafana.ini` | Grafana's primary configuration | `{}` |
+| `global.imageRegistry` | Global image pull registry for all images. | `null` |
+| `global.imagePullSecrets` | Global image pull secrets (can be templated). Allows either an array of {name: pullSecret} maps (k8s-style), or an array of strings (more common helm-style). | `[]` |
+| `ldap.enabled` | Enable LDAP authentication | `false` |
+| `ldap.existingSecret` | The name of an existing secret containing the `ldap.toml` file, this must have the key `ldap-toml`. 
| `""` |
+| `ldap.config` | Grafana's LDAP configuration | `""` |
+| `annotations` | Deployment annotations | `{}` |
+| `labels` | Deployment labels | `{}` |
+| `podAnnotations` | Pod annotations | `{}` |
+| `podLabels` | Pod labels | `{}` |
+| `podPortName` | Name of the grafana port on the pod | `grafana` |
+| `lifecycleHooks` | Lifecycle hooks for postStart and preStop [Example](https://kubernetes.io/docs/tasks/configure-pod-container/attach-handler-lifecycle-event/#define-poststart-and-prestop-handlers) | `{}` |
+| `sidecar.image.registry` | Sidecar image registry | `quay.io` |
+| `sidecar.image.repository` | Sidecar image repository | `kiwigrid/k8s-sidecar` |
+| `sidecar.image.tag` | Sidecar image tag | `1.26.0` |
+| `sidecar.image.sha` | Sidecar image sha (optional) | `""` |
+| `sidecar.imagePullPolicy` | Sidecar image pull policy | `IfNotPresent` |
+| `sidecar.resources` | Sidecar resources | `{}` |
+| `sidecar.securityContext` | Sidecar securityContext | `{}` |
+| `sidecar.enableUniqueFilenames` | Sets the kiwigrid/k8s-sidecar UNIQUE_FILENAMES environment variable. If set to `true` the sidecar will create unique filenames where duplicate data keys exist between ConfigMaps and/or Secrets within the same or multiple Namespaces. | `false` |
+| `sidecar.alerts.enabled` | Enables the cluster wide search for alerts and adds/updates/deletes them in grafana | `false` |
+| `sidecar.alerts.label` | Label that config maps with alerts should have to be added | `grafana_alert` |
+| `sidecar.alerts.labelValue` | Label value that config maps with alerts should have to be added | `""` |
+| `sidecar.alerts.searchNamespace` | Namespaces list. If specified, the sidecar will search for alerts config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
+| `sidecar.alerts.watchMethod` | Method to use to detect ConfigMap changes. 
With WATCH the sidecar will do a WATCH request; with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | `WATCH` |
+| `sidecar.alerts.resource` | Whether the sidecar should look into secrets, configmaps or both. | `both` |
+| `sidecar.alerts.reloadURL` | Full url of the alert configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/alerting/reload"` |
+| `sidecar.alerts.skipReload` | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | `false` |
+| `sidecar.alerts.initAlerts` | Set to true to deploy the alerts sidecar as an initContainer. This is needed if skipReload is true, to load any alerts defined at startup time. | `false` |
+| `sidecar.alerts.extraMounts` | Additional alerts sidecar volume mounts. | `[]` |
+| `sidecar.dashboards.enabled` | Enables the cluster wide search for dashboards and adds/updates/deletes them in grafana | `false` |
+| `sidecar.dashboards.SCProvider` | Enables creation of sidecar provider | `true` |
+| `sidecar.dashboards.provider.name` | Unique name of the grafana provider | `sidecarProvider` |
+| `sidecar.dashboards.provider.orgid` | Id of the organisation, to which the dashboards should be added | `1` |
+| `sidecar.dashboards.provider.folder` | Logical folder in which grafana groups dashboards | `""` |
+| `sidecar.dashboards.provider.folderUid` | Allows you to specify the static UID for the logical folder above | `""` |
+| `sidecar.dashboards.provider.disableDelete` | Activate to avoid the deletion of imported dashboards | `false` |
+| `sidecar.dashboards.provider.allowUiUpdates` | Allow updating provisioned dashboards from the UI | `false` |
+| `sidecar.dashboards.provider.type` | Provider type | `file` |
+| `sidecar.dashboards.provider.foldersFromFilesStructure` | Allow Grafana to replicate dashboard structure from filesystem. | `false` |
+| `sidecar.dashboards.watchMethod` | Method to use to detect ConfigMap changes. 
With WATCH the sidecar will do a WATCH request; with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | `WATCH` |
+| `sidecar.skipTlsVerify` | Set to true to skip tls verification for kube api calls | `nil` |
+| `sidecar.dashboards.label` | Label that config maps with dashboards should have to be added | `grafana_dashboard` |
+| `sidecar.dashboards.labelValue` | Label value that config maps with dashboards should have to be added | `""` |
+| `sidecar.dashboards.folder` | Folder in the pod that should hold the collected dashboards (unless `sidecar.dashboards.defaultFolderName` is set). This path will be mounted. | `/tmp/dashboards` |
+| `sidecar.dashboards.folderAnnotation` | The annotation the sidecar will look for in configmaps to override the destination folder for files | `nil` |
+| `sidecar.dashboards.defaultFolderName` | The default folder name, it will create a subfolder under the `sidecar.dashboards.folder` and put dashboards in there instead | `nil` |
+| `sidecar.dashboards.searchNamespace` | Namespaces list. If specified, the sidecar will search for dashboards config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
+| `sidecar.dashboards.script` | Absolute path to shell script to execute after a configmap got reloaded. | `nil` |
+| `sidecar.dashboards.reloadURL` | Full url of dashboards configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/dashboards/reload"` |
+| `sidecar.dashboards.skipReload` | Enabling this omits defining the REQ_USERNAME, REQ_PASSWORD, REQ_URL and REQ_METHOD environment variables | `false` |
+| `sidecar.dashboards.resource` | Whether the sidecar should look into secrets, configmaps or both. | `both` |
+| `sidecar.dashboards.extraMounts` | Additional dashboard sidecar volume mounts. 
| `[]` |
+| `sidecar.datasources.enabled` | Enables the cluster wide search for datasources and adds/updates/deletes them in grafana | `false` |
+| `sidecar.datasources.label` | Label that config maps with datasources should have to be added | `grafana_datasource` |
+| `sidecar.datasources.labelValue` | Label value that config maps with datasources should have to be added | `""` |
+| `sidecar.datasources.searchNamespace` | Namespaces list. If specified, the sidecar will search for datasources config-maps inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
+| `sidecar.datasources.watchMethod` | Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH request; with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | `WATCH` |
+| `sidecar.datasources.resource` | Whether the sidecar should look into secrets, configmaps or both. | `both` |
+| `sidecar.datasources.reloadURL` | Full url of datasource configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/datasources/reload"` |
+| `sidecar.datasources.skipReload` | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | `false` |
+| `sidecar.datasources.initDatasources` | Set to true to deploy the datasource sidecar as an initContainer in addition to a container. This is needed if skipReload is true, to load any datasources defined at startup time. | `false` |
+| `sidecar.notifiers.enabled` | Enables the cluster wide search for notifiers and adds/updates/deletes them in grafana | `false` |
+| `sidecar.notifiers.label` | Label that config maps with notifiers should have to be added | `grafana_notifier` |
+| `sidecar.notifiers.labelValue` | Label value that config maps with notifiers should have to be added | `""` |
+| `sidecar.notifiers.searchNamespace` | Namespaces list. 
If specified, the sidecar will search for notifiers config-maps (or secrets) inside these namespaces. Otherwise the namespace in which the sidecar is running will be used. It's also possible to specify ALL to search in all namespaces. | `nil` |
+| `sidecar.notifiers.watchMethod` | Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH request; with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. | `WATCH` |
+| `sidecar.notifiers.resource` | Whether the sidecar should look into secrets, configmaps or both. | `both` |
+| `sidecar.notifiers.reloadURL` | Full url of notifier configuration reload API endpoint, to invoke after a config-map change | `"http://localhost:3000/api/admin/provisioning/notifications/reload"` |
+| `sidecar.notifiers.skipReload` | Enabling this omits defining the REQ_URL and REQ_METHOD environment variables | `false` |
+| `sidecar.notifiers.initNotifiers` | Set to true to deploy the notifier sidecar as an initContainer in addition to a container. This is needed if skipReload is true, to load any notifiers defined at startup time. | `false` |
+| `smtp.existingSecret` | The name of an existing secret containing the SMTP credentials. | `""` |
+| `smtp.userKey` | The key in the existing SMTP secret containing the username. | `"user"` |
+| `smtp.passwordKey` | The key in the existing SMTP secret containing the password. | `"password"` |
+| `admin.existingSecret` | The name of an existing secret containing the admin credentials (can be templated). | `""` |
+| `admin.userKey` | The key in the existing admin secret containing the username. | `"admin-user"` |
+| `admin.passwordKey` | The key in the existing admin secret containing the password. 
| `"admin-password"` |
+| `serviceAccount.automountServiceAccountToken` | Automount the service account token on all pods where this service account is used | `false` |
+| `serviceAccount.annotations` | ServiceAccount annotations | |
+| `serviceAccount.create` | Create service account | `true` |
+| `serviceAccount.labels` | ServiceAccount labels | `{}` |
+| `serviceAccount.name` | Service account name to use, when empty will be set to created account if `serviceAccount.create` is set else to `default` | `` |
+| `serviceAccount.nameTest` | Service account name to use for test, when empty will be set to created account if `serviceAccount.create` is set else to `default` | `nil` |
+| `rbac.create` | Create and use RBAC resources | `true` |
+| `rbac.namespaced` | Creates Role and RoleBinding instead of the default ClusterRole and ClusterRoleBinding for the grafana instance | `false` |
+| `rbac.useExistingRole` | Set to a rolename to use existing role - skipping role creation - but still doing serviceaccount and rolebinding to the rolename set here. | `nil` |
+| `rbac.pspEnabled` | Create PodSecurityPolicy (with `rbac.create`, grant roles permissions as well) | `false` |
+| `rbac.pspUseAppArmor` | Enforce AppArmor in created PodSecurityPolicy (requires `rbac.pspEnabled`) | `false` |
+| `rbac.extraRoleRules` | Additional rules to add to the Role | [] |
+| `rbac.extraClusterRoleRules` | Additional rules to add to the ClusterRole | [] |
+| `command` | Define command to be executed by grafana container at startup | `nil` |
+| `args` | Define additional args if command is used | `nil` |
+| `testFramework.enabled` | Whether to create test-related resources | `true` |
+| `testFramework.image.registry` | `test-framework` image registry. | `docker.io` |
+| `testFramework.image.repository` | `test-framework` image repository. | `bats/bats` |
+| `testFramework.image.tag` | `test-framework` image tag. 
| `v1.4.1` | +| `testFramework.imagePullPolicy` | `test-framework` image pull policy. | `IfNotPresent` | +| `testFramework.securityContext` | `test-framework` securityContext | `{}` | +| `downloadDashboards.env` | Environment variables to be passed to the `download-dashboards` container | `{}` | +| `downloadDashboards.envFromSecret` | Name of a Kubernetes secret (must be manually created in the same namespace) containing values to be added to the environment. Can be templated | `""` | +| `downloadDashboards.resources` | Resources of `download-dashboards` container | `{}` | +| `downloadDashboardsImage.registry` | Curl docker image registry | `docker.io` | +| `downloadDashboardsImage.repository` | Curl docker image repository | `curlimages/curl` | +| `downloadDashboardsImage.tag` | Curl docker image tag | `7.73.0` | +| `downloadDashboardsImage.sha` | Curl docker image sha (optional) | `""` | +| `downloadDashboardsImage.pullPolicy` | Curl docker image pull policy | `IfNotPresent` | +| `namespaceOverride` | Override the deployment namespace | `""` (`Release.Namespace`) | +| `serviceMonitor.enabled` | Use servicemonitor from prometheus operator | `false` | +| `serviceMonitor.namespace` | Namespace this servicemonitor is installed in | | +| `serviceMonitor.interval` | How frequently Prometheus should scrape | `1m` | +| `serviceMonitor.path` | Path to scrape | `/metrics` | +| `serviceMonitor.scheme` | Scheme to use for metrics scraping | `http` | +| `serviceMonitor.tlsConfig` | TLS configuration block for the endpoint | `{}` | +| `serviceMonitor.labels` | Labels for the servicemonitor passed to Prometheus Operator | `{}` | +| `serviceMonitor.scrapeTimeout` | Timeout after which the scrape is ended | `30s` | +| `serviceMonitor.relabelings` | RelabelConfigs to apply to samples before scraping. | `[]` | +| `serviceMonitor.metricRelabelings` | MetricRelabelConfigs to apply to samples before ingestion. 
| `[]` |
+| `revisionHistoryLimit` | Number of old ReplicaSets to retain | `10` |
+| `imageRenderer.enabled` | Enable the image-renderer deployment & service | `false` |
+| `imageRenderer.image.registry` | image-renderer Image registry | `docker.io` |
+| `imageRenderer.image.repository` | image-renderer Image repository | `grafana/grafana-image-renderer` |
+| `imageRenderer.image.tag` | image-renderer Image tag | `latest` |
+| `imageRenderer.image.sha` | image-renderer Image sha (optional) | `""` |
+| `imageRenderer.image.pullPolicy` | image-renderer ImagePullPolicy | `Always` |
+| `imageRenderer.env` | extra env-vars for image-renderer | `{}` |
+| `imageRenderer.envValueFrom` | Environment variables for image-renderer from alternate sources. See the API docs on [EnvVarSource](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#envvarsource-v1-core) for format details. Can be templated | `{}` |
+| `imageRenderer.extraConfigmapMounts` | Additional image-renderer configMap volume mounts (values are templated) | `[]` |
+| `imageRenderer.extraSecretMounts` | Additional image-renderer secret volume mounts | `[]` |
+| `imageRenderer.extraVolumeMounts` | Additional image-renderer volume mounts | `[]` |
+| `imageRenderer.extraVolumes` | Additional image-renderer volumes | `[]` |
+| `imageRenderer.serviceAccountName` | image-renderer deployment serviceAccountName | `""` |
+| `imageRenderer.securityContext` | image-renderer deployment securityContext | `{}` |
+| `imageRenderer.podAnnotations` | image-renderer pod annotations | `{}` |
+| `imageRenderer.hostAliases` | image-renderer deployment Host Aliases | `[]` |
+| `imageRenderer.priorityClassName` | image-renderer deployment priority class | `''` |
+| `imageRenderer.service.enabled` | Enable the image-renderer service | `true` |
+| `imageRenderer.service.portName` | image-renderer service port name | `http` |
+| `imageRenderer.service.port` | image-renderer port used by deployment | `8081` | 
+| `imageRenderer.service.targetPort` | image-renderer service port used by service | `8081` | +| `imageRenderer.appProtocol` | Adds the appProtocol field to the service | `` | +| `imageRenderer.grafanaSubPath` | Grafana sub path to use for image renderer callback url | `''` | +| `imageRenderer.serverURL` | Remote image renderer url | `''` | +| `imageRenderer.renderingCallbackURL` | Callback url for the Grafana image renderer | `''` | +| `imageRenderer.podPortName` | name of the image-renderer port on the pod | `http` | +| `imageRenderer.revisionHistoryLimit` | number of image-renderer replica sets to keep | `10` | +| `imageRenderer.networkPolicy.limitIngress` | Enable a NetworkPolicy to limit inbound traffic from only the created grafana pods | `true` | +| `imageRenderer.networkPolicy.limitEgress` | Enable a NetworkPolicy to limit outbound traffic to only the created grafana pods | `false` | +| `imageRenderer.resources` | Set resource limits for image-renderer pods | `{}` | +| `imageRenderer.nodeSelector` | Node labels for pod assignment | `{}` | +| `imageRenderer.tolerations` | Toleration labels for pod assignment | `[]` | +| `imageRenderer.affinity` | Affinity settings for pod assignment | `{}` | +| `networkPolicy.enabled` | Enable creation of NetworkPolicy resources. 
| `false` |
+| `networkPolicy.allowExternal` | Don't require client label for connections | `true` |
+| `networkPolicy.explicitNamespacesSelector` | A Kubernetes LabelSelector to explicitly select namespaces from which traffic could be allowed | `{}` |
+| `networkPolicy.ingress` | Enable the creation of an ingress network policy | `true` |
+| `networkPolicy.egress.enabled` | Enable the creation of an egress network policy | `false` |
+| `networkPolicy.egress.ports` | An array of ports to allow for the egress | `[]` |
+| `enableKubeBackwardCompatibility` | Enable backward compatibility of kubernetes where a pod's definition version below 1.13 doesn't have the enableServiceLinks option | `false` |
+
+### Example ingress with path
+
+With grafana 6.3 and above
+
+```yaml
+grafana.ini:
+  server:
+    domain: monitoring.example.com
+    root_url: "%(protocol)s://%(domain)s/grafana"
+    serve_from_sub_path: true
+ingress:
+  enabled: true
+  hosts:
+    - "monitoring.example.com"
+  path: "/grafana"
+```
+
+### Example of extraVolumeMounts and extraVolumes
+
+Configure additional volumes with `extraVolumes` and volume mounts with `extraVolumeMounts`.
+
+Example for `extraVolumeMounts` and corresponding `extraVolumes`:
+
+```yaml
+extraVolumeMounts:
+  - name: plugins
+    mountPath: /var/lib/grafana/plugins
+    subPath: configs/grafana/plugins
+    readOnly: false
+  - name: dashboards
+    mountPath: /var/lib/grafana/dashboards
+    hostPath: /usr/shared/grafana/dashboards
+    readOnly: false
+
+extraVolumes:
+  - name: plugins
+    existingClaim: existing-grafana-claim
+  - name: dashboards
+    hostPath: /usr/shared/grafana/dashboards
+```
+
+Volumes default to `emptyDir`. Set to `persistentVolumeClaim`,
+`hostPath`, `csi`, or `configMap` for other types. For a
+`persistentVolumeClaim`, specify an existing claim name with
+`existingClaim`.
+
+## Import dashboards
+
+There are a few methods to import dashboards to Grafana. 
Below are some examples and explanations as to how to use each method:
+
+```yaml
+dashboards:
+  default:
+    some-dashboard:
+      json: |
+        {
+          "annotations":
+
+          ...
+          # Complete json file here
+          ...
+
+          "title": "Some Dashboard",
+          "uid": "abcd1234",
+          "version": 1
+        }
+    custom-dashboard:
+      # This is a path to a file inside the dashboards directory inside the chart directory
+      file: dashboards/custom-dashboard.json
+    prometheus-stats:
+      # Ref: https://grafana.com/dashboards/2
+      gnetId: 2
+      revision: 2
+      datasource: Prometheus
+    loki-dashboard-quick-search:
+      gnetId: 12019
+      revision: 2
+      datasource:
+      - name: DS_PROMETHEUS
+        value: Prometheus
+      - name: DS_LOKI
+        value: Loki
+    local-dashboard:
+      url: https://raw.githubusercontent.com/user/repository/master/dashboards/dashboard.json
+```
+
+## BASE64 dashboards
+
+Dashboards may be stored on a server that does not return JSON directly but instead returns a Base64-encoded file (e.g. Gerrit).
+A new parameter has been added to the `url` use case: if you specify a `b64content` value equal to `true` after the `url` entry, Base64 decoding is applied before the file is saved to disk.
+If this entry is not set, or is equal to `false`, no decoding is applied to the file before saving it to disk.
+
+### Gerrit use case
+
+Gerrit API for download files has the following schema: where {project-name} and
+{file-id} usually have '/' in their values, so they MUST be replaced by %2F. For example, if project-name is user/repo, branch-id is master and file-id is dir1/dir2/dashboard,
+the url value is
+
+## Sidecar for dashboards
+
+If the parameter `sidecar.dashboards.enabled` is set, a sidecar container is deployed in the grafana
+pod. This container watches all configmaps (or secrets) in the cluster and filters out the ones with
+a label as defined in `sidecar.dashboards.label`. The files defined in those configmaps are written
+to a folder and accessed by grafana. 
Changes to the configmaps are monitored and the imported
+dashboards are deleted/updated.
+
+A recommendation is to use one configmap per dashboard, as removing one of several dashboards inside
+one configmap is currently not properly mirrored in grafana.
+
+Example dashboard config:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: sample-grafana-dashboard
+  labels:
+    grafana_dashboard: "1"
+data:
+  k8s-dashboard.json: |-
+    [...]
+```
+
+## Sidecar for datasources
+
+If the parameter `sidecar.datasources.enabled` is set, an init container is deployed in the grafana
+pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and
+filters out the ones with a label as defined in `sidecar.datasources.label`. The files defined in
+those secrets are written to a folder and accessed by grafana on startup. Using these yaml files,
+the data sources in grafana can be imported.
+
+Should you aim for reloading datasources in Grafana each time the config is changed, set `sidecar.datasources.skipReload: false` and adjust `sidecar.datasources.reloadURL` to `http://..svc.cluster.local/api/admin/provisioning/datasources/reload`.
+
+Secrets are recommended over configmaps for this use case because datasources usually contain private
+data like usernames and passwords. Secrets are the more appropriate cluster resource to manage those. 
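+
+For example, with a Grafana service named `grafana` in the `monitoring` namespace (hypothetical names, substitute your own service and namespace), the corresponding values could look like:
+
+```yaml
+sidecar:
+  datasources:
+    enabled: true
+    # Reload instead of restart: point the sidecar at the in-cluster
+    # DNS name of the Grafana service.
+    skipReload: false
+    reloadURL: "http://grafana.monitoring.svc.cluster.local/api/admin/provisioning/datasources/reload"
+```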
+
+Example values to add a postgres datasource as a kubernetes secret:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: grafana-datasources
+  labels:
+    grafana_datasource: 'true' # default value for: sidecar.datasources.label
+stringData:
+  pg-db.yaml: |-
+    apiVersion: 1
+    datasources:
+      - name: My pg db datasource
+        type: postgres
+        url: my-postgresql-db:5432
+        user: db-readonly-user
+        secureJsonData:
+          password: 'SUperSEcretPa$$word'
+        jsonData:
+          database: my_database
+          sslmode: 'disable' # disable/require/verify-ca/verify-full
+          maxOpenConns: 0 # Grafana v5.4+
+          maxIdleConns: 2 # Grafana v5.4+
+          connMaxLifetime: 14400 # Grafana v5.4+
+          postgresVersion: 1000 # 903=9.3, 904=9.4, 905=9.5, 906=9.6, 1000=10
+          timescaledb: false
+        # allow users to edit datasources from the UI.
+        editable: false
+```
+
+Example values to add a datasource adapted from [Grafana](http://docs.grafana.org/administration/provisioning/#example-datasource-config-file):
+
+```yaml
+datasources:
+  datasources.yaml:
+    apiVersion: 1
+    datasources:
+      # name of the datasource. Required
+      - name: Graphite
+        # datasource type. Required
+        type: graphite
+        # access mode. proxy or direct (Server or Browser in the UI). Required
+        access: proxy
+        # org id. will default to orgId 1 if not specified
+        orgId: 1
+        # url
+        url: http://localhost:8080
+        # database password, if used
+        password:
+        # database user, if used
+        user:
+        # database name, if used
+        database:
+        # enable/disable basic auth
+        basicAuth:
+        # basic auth username
+        basicAuthUser:
+        # basic auth password
+        basicAuthPassword:
+        # enable/disable with credentials headers
+        withCredentials:
+        # mark as default datasource. Max one per org
+        isDefault:
+        # fields that will be converted to json and stored in json_data
+        jsonData:
+          graphiteVersion: "1.1"
+          tlsAuth: true
+          tlsAuthWithCACert: true
+        # json object of data that will be encrypted.
+        secureJsonData:
+          tlsCACert: "..."
+          tlsClientCert: "..."
+          tlsClientKey: "..." 
+        version: 1
+        # allow users to edit datasources from the UI.
+        editable: false
+```
+
+## Sidecar for notifiers
+
+If the parameter `sidecar.notifiers.enabled` is set, an init container is deployed in the grafana
+pod. This container lists all secrets (or configmaps, though not recommended) in the cluster and
+filters out the ones with a label as defined in `sidecar.notifiers.label`. The files defined in
+those secrets are written to a folder and accessed by grafana on startup. Using these yaml files,
+the notification channels in grafana can be imported. The secrets must be created before
+`helm install` so that the notifiers init container can list the secrets.
+
+Secrets are recommended over configmaps for this use case because alert notification channels usually contain
+private data like SMTP usernames and passwords. Secrets are the more appropriate cluster resource to manage those.
+
+Example notifiers config adapted from [Grafana](https://grafana.com/docs/grafana/latest/administration/provisioning/#alert-notification-channels):
+
+```yaml
+notifiers:
+  - name: notification-channel-1
+    type: slack
+    uid: notifier1
+    # either
+    org_id: 2
+    # or
+    org_name: Main Org.
+    is_default: true
+    send_reminder: true
+    frequency: 1h
+    disable_resolve_message: false
+    # See `Supported Settings` section for settings supported for each
+    # alert notification type.
+    settings:
+      recipient: 'XXX'
+      token: 'xoxb'
+      uploadImage: true
+      url: https://slack.com
+
+delete_notifiers:
+  - name: notification-channel-1
+    uid: notifier1
+    org_id: 2
+  - name: notification-channel-2
+    # default org_id: 1
+```
+
+## Sidecar for alerting resources
+
+If the parameter `sidecar.alerts.enabled` is set, a sidecar container is deployed in the grafana
+pod. This container watches all configmaps (or secrets) in the cluster (namespace defined by `sidecar.alerts.searchNamespace`) and filters out the ones with
+a label as defined in `sidecar.alerts.label` (default is `grafana_alert`). 
The files defined in those configmaps are written
+to a folder and accessed by grafana. Changes to the configmaps are monitored and the imported alerting resources are updated; deletions, however, are a little more complicated (see below).
+
+This sidecar can be used to provision alert rules, contact points, notification policies, notification templates and mute timings as shown in [Grafana Documentation](https://grafana.com/docs/grafana/next/alerting/set-up/provision-alerting-resources/file-provisioning/).
+
+To fetch the alert config which will be provisioned, use the alert provisioning API ([Grafana Documentation](https://grafana.com/docs/grafana/next/developers/http_api/alerting_provisioning/)).
+You can use either JSON or YAML format.
+
+Example config for an alert rule:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: sample-grafana-alert
+  labels:
+    grafana_alert: "1"
+data:
+  k8s-alert.yml: |-
+    apiVersion: 1
+    groups:
+      - orgId: 1
+        name: k8s-alert
+        [...]
+```
+
+Deleting provisioned alert rules is a two-step process: first delete the configmap which defined the alert rule,
+then create a configuration which deletes the alert rule.
+
+Example deletion configuration:
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: delete-sample-grafana-alert
+  namespace: monitoring
+  labels:
+    grafana_alert: "1"
+data:
+  delete-k8s-alert.yml: |-
+    apiVersion: 1
+    deleteRules:
+      - orgId: 1
+        uid: 16624780-6564-45dc-825c-8bded4ad92d3
+```
+
+## Statically provision alerting resources
+
+If you don't need to change alerting resources (alert rules, contact points, notification policies and notification templates) regularly, you could use the `alerting` config option instead of the sidecar option above.
+This grabs the alerting config and applies it statically when the Helm chart is rendered.
+
+There are two methods to statically provision alerting configuration in Grafana. 
Below are examples of how to use each method:
+
+```yaml
+alerting:
+  team1-alert-rules.yaml:
+    file: alerting/team1/rules.yaml
+  team2-alert-rules.yaml:
+    file: alerting/team2/rules.yaml
+  team3-alert-rules.yaml:
+    file: alerting/team3/rules.yaml
+  notification-policies.yaml:
+    file: alerting/shared/notification-policies.yaml
+  notification-templates.yaml:
+    file: alerting/shared/notification-templates.yaml
+  contactpoints.yaml:
+    apiVersion: 1
+    contactPoints:
+      - orgId: 1
+        name: Slack channel
+        receivers:
+          - uid: default-receiver
+            type: slack
+            settings:
+              # Webhook URL to be filled in
+              url: ""
+              # We need to escape double curly braces for the tpl function.
+              text: '{{ `{{ template "default.message" . }}` }}'
+              title: '{{ `{{ template "default.title" . }}` }}'
+```
+
+The two possibilities for static alerting resource provisioning are:
+
+* Inlining the file contents as shown for contact points in the above example.
+* Importing a file using a relative path starting from the chart root directory as shown for the alert rules in the above example.
+
+### Important notes on file provisioning
+
+* The format of the files is defined in the [Grafana documentation](https://grafana.com/docs/grafana/next/alerting/set-up/provision-alerting-resources/file-provisioning/) on file provisioning.
+* The chart supports importing YAML and JSON files.
+* The filename must be unique, otherwise one volume mount will overwrite the other.
+* In case of inlining, double curly braces that arise from the Grafana configuration format and are not intended as templates for the chart must be escaped.
+* The number of total files under `alerting:` is not limited. Each file will end up as a volume mount in the corresponding provisioning folder of the deployed Grafana instance.
+* The file size for each import is limited by what the function `.Files.Get` can handle, which suffices for most cases.
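+
+For orientation, a rules file imported via a relative path (such as `alerting/team1/rules.yaml` in the example above) is a plain Grafana file-provisioning document. The following is a minimal, hypothetical sketch of such a file; the group, folder, rule title, `uid` values and the `datasourceUid` are placeholders, not values shipped with this chart:
+
+```yaml
+apiVersion: 1
+groups:
+  - orgId: 1
+    name: team1-rules
+    folder: Team1
+    interval: 1m
+    rules:
+      - uid: team1-instance-down
+        title: Instance down
+        condition: A
+        for: 5m
+        data:
+          - refId: A
+            datasourceUid: my-prometheus-uid
+            relativeTimeRange:
+              from: 600
+              to: 0
+            model:
+              expr: up == 0
+        noDataState: OK
+        execErrState: Error
+```
+
+Any such file ends up as a volume mount in Grafana's alerting provisioning folder, as described in the notes above.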
+
+## How to serve Grafana with a path prefix (/grafana)
+
+To serve Grafana with a path prefix such as `/grafana`, add the following to your values.yaml.
+
+```yaml
+ingress:
+  enabled: true
+  annotations:
+    kubernetes.io/ingress.class: "nginx"
+    nginx.ingress.kubernetes.io/rewrite-target: /$1
+    nginx.ingress.kubernetes.io/use-regex: "true"
+
+  path: /grafana/?(.*)
+  hosts:
+    - k8s.example.dev
+
+grafana.ini:
+  server:
+    root_url: http://localhost:3000/grafana # this host can be localhost
+```
+
+## How to securely reference secrets in grafana.ini
+
+This example uses Grafana [file providers](https://grafana.com/docs/grafana/latest/administration/configuration/#file-provider) for secret values and the `extraSecretMounts` configuration flag (Additional grafana server secret mounts) to mount the secrets.
+
+In `grafana.ini` (note that the chart's `grafana.ini` value is a YAML map, which the chart renders into INI sections):
+
+```yaml
+grafana.ini:
+  auth.generic_oauth:
+    enabled: true
+    client_id: $__file{/etc/secrets/auth_generic_oauth/client_id}
+    client_secret: $__file{/etc/secrets/auth_generic_oauth/client_secret}
+```
+
+Existing secret, or created along with helm:
+
+```yaml
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: auth-generic-oauth-secret
+type: Opaque
+stringData:
+  client_id:
+  client_secret:
+```
+
+Include in the `extraSecretMounts` configuration flag:
+
+```yaml
+extraSecretMounts:
+  - name: auth-generic-oauth-secret-mount
+    secretName: auth-generic-oauth-secret
+    defaultMode: 0440
+    mountPath: /etc/secrets/auth_generic_oauth
+    readOnly: true
+```
+
+### extraSecretMounts using a Container Storage Interface (CSI) provider
+
+This example uses a CSI driver, e.g. 
retrieving secrets using [Azure Key Vault Provider](https://github.com/Azure/secrets-store-csi-driver-provider-azure):
+
+```yaml
+extraSecretMounts:
+  - name: secrets-store-inline
+    mountPath: /run/secrets
+    readOnly: true
+    csi:
+      driver: secrets-store.csi.k8s.io
+      readOnly: true
+      volumeAttributes:
+        secretProviderClass: "my-provider"
+      nodePublishSecretRef:
+        name: akv-creds
+```
+
+## Image Renderer Plug-In
+
+This chart supports enabling [remote image rendering](https://github.com/grafana/grafana-image-renderer/blob/master/README.md#run-in-docker):
+
+```yaml
+imageRenderer:
+  enabled: true
+```
+
+### Image Renderer NetworkPolicy
+
+By default, the image-renderer pods have a network policy which only allows ingress traffic from the grafana instance created by this chart.
+
+### High Availability for unified alerting
+
+If you want to run Grafana in a high availability cluster, you need to enable
+the headless service by setting `headlessService: true` in your `values.yaml`
+file.
+
+As the next step, set up the `grafana.ini` in your `values.yaml` so that it
+makes use of the headless service to obtain all the IPs of the
+cluster. You should replace `{{ Name }}` with the name of your helm deployment.
+
+```yaml
+grafana.ini:
+  ...
+  unified_alerting:
+    enabled: true
+    ha_peers: {{ Name }}-headless:9094
+    ha_listen_address: ${POD_IP}:9094
+    ha_advertise_address: ${POD_IP}:9094
+
+  alerting:
+    enabled: false
+```
diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/ci/default-values.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/ci/default-values.yaml
new file mode 100644
index 0000000000..fc2ba605ad
--- /dev/null
+++ b/charts/kasten/k10/7.0.1101/charts/grafana/ci/default-values.yaml
@@ -0,0 +1 @@
+# Leave this file empty to ensure that CI runs builds against the default configuration in values.yaml. 
diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-affinity-values.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-affinity-values.yaml new file mode 100644 index 0000000000..f5b9b53e73 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-affinity-values.yaml @@ -0,0 +1,16 @@ +affinity: + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + labelSelector: + matchLabels: + app.kubernetes.io/instance: grafana-test + app.kubernetes.io/name: grafana + topologyKey: failure-domain.beta.kubernetes.io/zone + weight: 100 + requiredDuringSchedulingIgnoredDuringExecution: + - labelSelector: + matchLabels: + app.kubernetes.io/instance: grafana-test + app.kubernetes.io/name: grafana + topologyKey: kubernetes.io/hostname diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-dashboard-json-values.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-dashboard-json-values.yaml new file mode 100644 index 0000000000..e0c4e41687 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-dashboard-json-values.yaml @@ -0,0 +1,53 @@ +dashboards: + my-provider: + my-awesome-dashboard: + # An empty but valid dashboard + json: | + { + "__inputs": [], + "__requires": [ + { + "type": "grafana", + "id": "grafana", + "name": "Grafana", + "version": "6.3.5" + } + ], + "annotations": { + "list": [ + { + "builtIn": 1, + "datasource": "-- Grafana --", + "enable": true, + "hide": true, + "iconColor": "rgba(0, 211, 255, 1)", + "name": "Annotations & Alerts", + "type": "dashboard" + } + ] + }, + "editable": true, + "gnetId": null, + "graphTooltip": 0, + "id": null, + "links": [], + "panels": [], + "schemaVersion": 19, + "style": "dark", + "tags": [], + "templating": { + "list": [] + }, + "time": { + "from": "now-6h", + "to": "now" + }, + "timepicker": { + "refresh_intervals": ["5s"] + }, + "timezone": "", + "title": "Dummy Dashboard", + "uid": "IdcYQooWk", + "version": 1 + } + datasource: Prometheus 
diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-dashboard-values.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-dashboard-values.yaml new file mode 100644 index 0000000000..7b662c5fd4 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-dashboard-values.yaml @@ -0,0 +1,19 @@ +dashboards: + my-provider: + my-awesome-dashboard: + gnetId: 10000 + revision: 1 + datasource: Prometheus +dashboardProviders: + dashboardproviders.yaml: + apiVersion: 1 + providers: + - name: 'my-provider' + orgId: 1 + folder: '' + type: file + updateIntervalSeconds: 10 + disableDeletion: true + editable: true + options: + path: /var/lib/grafana/dashboards/my-provider diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-extraconfigmapmounts-values.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-extraconfigmapmounts-values.yaml new file mode 100644 index 0000000000..5cc44a056a --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-extraconfigmapmounts-values.yaml @@ -0,0 +1,7 @@ +extraConfigmapMounts: + - name: '{{ include "grafana.fullname" . }}' + configMap: '{{ include "grafana.fullname" . 
}}'
+    mountPath: /var/lib/grafana/dashboards/test-dashboard.json
+    # This is not a realistic test, but for this we only care about extraConfigmapMounts not being empty and pointing to an existing ConfigMap
+    subPath: grafana.ini
+    readOnly: true
diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-image-renderer-values.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-image-renderer-values.yaml
new file mode 100644
index 0000000000..06c0bda13e
--- /dev/null
+++ b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-image-renderer-values.yaml
@@ -0,0 +1,107 @@
+podLabels:
+  customLabelA: Aaaaa
+imageRenderer:
+  enabled: true
+  env:
+    RENDERING_ARGS: --disable-gpu,--window-size=1280x758
+    RENDERING_MODE: clustered
+  podLabels:
+    customLabelB: Bbbbb
+  networkPolicy:
+    limitIngress: true
+    limitEgress: true
+  resources:
+    limits:
+      cpu: 1000m
+      memory: 1000Mi
+    requests:
+      cpu: 500m
+      memory: 50Mi
+  extraVolumes:
+    - name: empty-renderer-volume
+      emptyDir: {}
+  extraVolumeMounts:
+    - mountPath: /tmp/renderer
+      name: empty-renderer-volume
+  extraConfigmapMounts:
+    - name: renderer-config
+      mountPath: /usr/src/app/config.json
+      subPath: renderer-config.json
+      configMap: image-renderer-config
+  extraSecretMounts:
+    - name: renderer-certificate
+      mountPath: /usr/src/app/certs/
+      secretName: image-renderer-certificate
+      readOnly: true
+
+extraObjects:
+  - apiVersion: v1
+    kind: ConfigMap
+    metadata:
+      name: image-renderer-config
+    data:
+      renderer-config.json: |
+        {
+          "service": {
+            "host": null,
+            "port": 8081,
+            "protocol": "http",
+            "certFile": "",
+            "certKey": "",
+
+            "metrics": {
+              "enabled": true,
+              "collectDefaultMetrics": true,
+              "requestDurationBuckets": [1, 5, 7, 9, 11, 13, 15, 20, 30]
+            },
+
+            "logging": {
+              "level": "info",
+              "console": {
+                "json": true,
+                "colorize": false
+              }
+            },
+
+            "security": {
+              "authToken": "-"
+            }
+          },
+          "rendering": {
+            "chromeBin": null,
+            "args": ["--no-sandbox", "--disable-gpu"],
+            "ignoresHttpsErrors": false,
+
+            
"timezone": null, + "acceptLanguage": null, + "width": 1000, + "height": 500, + "deviceScaleFactor": 1, + "maxWidth": 3080, + "maxHeight": 3000, + "maxDeviceScaleFactor": 4, + "pageZoomLevel": 1, + "headed": false, + + "mode": "default", + "emulateNetworkConditions": false, + "clustering": { + "monitor": false, + "mode": "browser", + "maxConcurrency": 5, + "timeout": 30 + }, + + "verboseLogging": false, + "dumpio": false, + "timingMetrics": false + } + } + - apiVersion: v1 + kind: Secret + metadata: + name: image-renderer-certificate + type: Opaque + data: + # Decodes to 'PLACEHOLDER CERTIFICATE' + not-a-real-certificate: UExBQ0VIT0xERVIgQ0VSVElGSUNBVEU= diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-nondefault-values.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-nondefault-values.yaml new file mode 100644 index 0000000000..fb5c179409 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-nondefault-values.yaml @@ -0,0 +1,6 @@ +global: + environment: prod +ingress: + enabled: true + hosts: + - monitoring-{{ .Values.global.environment }}.example.com diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-persistence.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-persistence.yaml new file mode 100644 index 0000000000..b92ca02c9e --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-persistence.yaml @@ -0,0 +1,3 @@ +persistence: + type: pvc + enabled: true diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-sidecars-envvaluefrom-values.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-sidecars-envvaluefrom-values.yaml new file mode 100644 index 0000000000..a6935e56d0 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/ci/with-sidecars-envvaluefrom-values.yaml @@ -0,0 +1,38 @@ +extraObjects: + - apiVersion: v1 + kind: ConfigMap + metadata: + name: '{{ include "grafana.fullname" . 
}}-test' + data: + var1: "value1" + - apiVersion: v1 + kind: Secret + metadata: + name: '{{ include "grafana.fullname" . }}-test' + type: Opaque + data: + var2: "dmFsdWUy" + +sidecar: + dashboards: + enabled: true + envValueFrom: + VAR1: + configMapKeyRef: + name: '{{ include "grafana.fullname" . }}-test' + key: var1 + VAR2: + secretKeyRef: + name: '{{ include "grafana.fullname" . }}-test' + key: var2 + datasources: + enabled: true + envValueFrom: + VAR1: + configMapKeyRef: + name: '{{ include "grafana.fullname" . }}-test' + key: var1 + VAR2: + secretKeyRef: + name: '{{ include "grafana.fullname" . }}-test' + key: var2 diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/dashboards/custom-dashboard.json b/charts/kasten/k10/7.0.1101/charts/grafana/dashboards/custom-dashboard.json new file mode 100644 index 0000000000..9e26dfeeb6 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/dashboards/custom-dashboard.json @@ -0,0 +1 @@ +{} \ No newline at end of file diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/NOTES.txt b/charts/kasten/k10/7.0.1101/charts/grafana/templates/NOTES.txt new file mode 100644 index 0000000000..a40f666a47 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/NOTES.txt @@ -0,0 +1,55 @@ +1. Get your '{{ .Values.adminUser }}' user password by running: + + kubectl get secret --namespace {{ include "grafana.namespace" . }} {{ .Values.admin.existingSecret | default (include "grafana.fullname" .) }} -o jsonpath="{.data.{{ .Values.admin.passwordKey | default "admin-password" }}}" | base64 --decode ; echo + + +2. The Grafana server can be accessed via port {{ .Values.service.port }} on the following DNS name from within your cluster: + + {{ include "grafana.fullname" . }}.{{ include "grafana.namespace" . 
}}.svc.cluster.local
+{{ if .Values.ingress.enabled }}
+   If you bind grafana to 80, please update values in values.yaml and reinstall:
+   ```
+   securityContext:
+     runAsUser: 0
+     runAsGroup: 0
+     fsGroup: 0
+
+   command:
+   - "setcap"
+   - "'cap_net_bind_service=+ep'"
+   - "/usr/sbin/grafana-server &&"
+   - "sh"
+   - "/run.sh"
+   ```
+   For details, refer to https://grafana.com/docs/installation/configuration/#http-port.
+   Otherwise grafana would always crash.
+
+   From outside the cluster, the server URL(s) are:
+   {{- range .Values.ingress.hosts }}
+     http://{{ . }}
+   {{- end }}
+{{- else }}
+   Get the Grafana URL to visit by running these commands in the same shell:
+   {{- if contains "NodePort" .Values.service.type }}
+     export NODE_PORT=$(kubectl get --namespace {{ include "grafana.namespace" . }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "grafana.fullname" . }})
+     export NODE_IP=$(kubectl get nodes --namespace {{ include "grafana.namespace" . }} -o jsonpath="{.items[0].status.addresses[0].address}")
+     echo http://$NODE_IP:$NODE_PORT
+   {{- else if contains "LoadBalancer" .Values.service.type }}
+     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+           You can watch the status of it by running 'kubectl get svc --namespace {{ include "grafana.namespace" . }} -w {{ include "grafana.fullname" . }}'
+     export SERVICE_IP=$(kubectl get svc --namespace {{ include "grafana.namespace" . }} {{ include "grafana.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
+     http://$SERVICE_IP:{{ .Values.service.port -}}
+   {{- else if contains "ClusterIP" .Values.service.type }}
+     export POD_NAME=$(kubectl get pods --namespace {{ include "grafana.namespace" . }} -l "app.kubernetes.io/name={{ include "grafana.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
+     kubectl --namespace {{ include "grafana.namespace" . }} port-forward $POD_NAME 3000
+   {{- end }}
+{{- end }}
+
+3. 
Login with the password from step 1 and the username: {{ .Values.adminUser }} + +{{- if and (not .Values.persistence.enabled) (not .Values.persistence.disableWarning) }} +################################################################################# +###### WARNING: Persistence is disabled!!! You will lose your data when ##### +###### the Grafana pod is terminated. ##### +################################################################################# +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/_config.tpl b/charts/kasten/k10/7.0.1101/charts/grafana/templates/_config.tpl new file mode 100644 index 0000000000..b866217f2e --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/_config.tpl @@ -0,0 +1,172 @@ +{{/* + Generate config map data + */}} +{{- define "grafana.configData" -}} +{{ include "grafana.assertNoLeakedSecrets" . }} +{{- $files := .Files }} +{{- $root := . -}} +{{- with .Values.plugins }} +plugins: {{ join "," . }} +{{- end }} +grafana.ini: | +{{- range $elem, $elemVal := index .Values "grafana.ini" }} + {{- if not (kindIs "map" $elemVal) }} + {{- if kindIs "invalid" $elemVal }} + {{ $elem }} = + {{- else if kindIs "string" $elemVal }} + {{ $elem }} = {{ tpl $elemVal $ }} + {{- else }} + {{ $elem }} = {{ $elemVal }} + {{- end }} + {{- end }} +{{- end }} +{{- range $key, $value := index .Values "grafana.ini" }} + {{- if kindIs "map" $value }} + [{{ $key }}] + {{- range $elem, $elemVal := $value }} + {{- if kindIs "invalid" $elemVal }} + {{ $elem }} = + {{- else if kindIs "string" $elemVal }} + {{ $elem }} = {{ tpl $elemVal $ }} + {{- else }} + {{ $elem }} = {{ $elemVal }} + {{- end }} + {{- end }} + {{- end }} +{{- end }} + +{{- range $key, $value := .Values.datasources }} +{{- if not (hasKey $value "secret") }} +{{ $key }}: | + {{- tpl (toYaml $value | nindent 2) $root }} +{{- end }} +{{- end }} + +{{- range $key, $value := .Values.notifiers }} +{{- if not (hasKey $value "secret") }} +{{ $key }}: | + 
{{- toYaml $value | nindent 2 }} +{{- end }} +{{- end }} + +{{- range $key, $value := .Values.alerting }} +{{- if (hasKey $value "file") }} +{{ $key }}: +{{- toYaml ( $files.Get $value.file ) | nindent 2 }} +{{- else if (or (hasKey $value "secret") (hasKey $value "secretFile"))}} +{{/* will be stored inside secret generated by "configSecret.yaml"*/}} +{{- else }} +{{ $key }}: | + {{- tpl (toYaml $value | nindent 2) $root }} +{{- end }} +{{- end }} + +{{- range $key, $value := .Values.dashboardProviders }} +{{ $key }}: | + {{- toYaml $value | nindent 2 }} +{{- end }} + +{{- if .Values.dashboards }} +download_dashboards.sh: | + #!/usr/bin/env sh + set -euf + {{- if .Values.dashboardProviders }} + {{- range $key, $value := .Values.dashboardProviders }} + {{- range $value.providers }} + mkdir -p {{ .options.path }} + {{- end }} + {{- end }} + {{- end }} +{{ $dashboardProviders := .Values.dashboardProviders }} +{{- range $provider, $dashboards := .Values.dashboards }} + {{- range $key, $value := $dashboards }} + {{- if (or (hasKey $value "gnetId") (hasKey $value "url")) }} + curl -skf \ + --connect-timeout 60 \ + --max-time 60 \ + {{- if not $value.b64content }} + {{- if not $value.acceptHeader }} + -H "Accept: application/json" \ + {{- else }} + -H "Accept: {{ $value.acceptHeader }}" \ + {{- end }} + {{- if $value.token }} + -H "Authorization: token {{ $value.token }}" \ + {{- end }} + {{- if $value.bearerToken }} + -H "Authorization: Bearer {{ $value.bearerToken }}" \ + {{- end }} + {{- if $value.basic }} + -H "Authorization: Basic {{ $value.basic }}" \ + {{- end }} + {{- if $value.gitlabToken }} + -H "PRIVATE-TOKEN: {{ $value.gitlabToken }}" \ + {{- end }} + -H "Content-Type: application/json;charset=UTF-8" \ + {{- end }} + {{- $dpPath := "" -}} + {{- range $kd := (index $dashboardProviders "dashboardproviders.yaml").providers }} + {{- if eq $kd.name $provider }} + {{- $dpPath = $kd.options.path }} + {{- end }} + {{- end }} + {{- if $value.url }} + "{{ $value.url }}" 
\ + {{- else }} + "https://grafana.com/api/dashboards/{{ $value.gnetId }}/revisions/{{- if $value.revision -}}{{ $value.revision }}{{- else -}}1{{- end -}}/download" \ + {{- end }} + {{- if $value.datasource }} + {{- if kindIs "string" $value.datasource }} + | sed '/-- .* --/! s/"datasource":.*,/"datasource": "{{ $value.datasource }}",/g' \ + {{- end }} + {{- if kindIs "slice" $value.datasource }} + {{- range $value.datasource }} + | sed '/-- .* --/! s/${{"{"}}{{ .name }}}/{{ .value }}/g' \ + {{- end }} + {{- end }} + {{- end }} + {{- if $value.b64content }} + | base64 -d \ + {{- end }} + > "{{- if $dpPath -}}{{ $dpPath }}{{- else -}}/var/lib/grafana/dashboards/{{ $provider }}{{- end -}}/{{ $key }}.json" + {{ end }} + {{- end }} +{{- end }} +{{- end }} +{{- end -}} + +{{/* + Generate dashboard json config map data + */}} +{{- define "grafana.configDashboardProviderData" -}} +provider.yaml: |- + apiVersion: 1 + providers: + - name: '{{ .Values.sidecar.dashboards.provider.name }}' + orgId: {{ .Values.sidecar.dashboards.provider.orgid }} + {{- if not .Values.sidecar.dashboards.provider.foldersFromFilesStructure }} + folder: '{{ .Values.sidecar.dashboards.provider.folder }}' + folderUid: '{{ .Values.sidecar.dashboards.provider.folderUid }}' + {{- end }} + type: {{ .Values.sidecar.dashboards.provider.type }} + disableDeletion: {{ .Values.sidecar.dashboards.provider.disableDelete }} + allowUiUpdates: {{ .Values.sidecar.dashboards.provider.allowUiUpdates }} + updateIntervalSeconds: {{ .Values.sidecar.dashboards.provider.updateIntervalSeconds | default 30 }} + options: + foldersFromFilesStructure: {{ .Values.sidecar.dashboards.provider.foldersFromFilesStructure }} + path: {{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . 
}}{{- end }} +{{- end -}} + +{{- define "grafana.secretsData" -}} +{{- if and (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) }} +admin-user: {{ .Values.adminUser | b64enc | quote }} +{{- if .Values.adminPassword }} +admin-password: {{ .Values.adminPassword | b64enc | quote }} +{{- else }} +admin-password: {{ include "grafana.password" . }} +{{- end }} +{{- end }} +{{- if not .Values.ldap.existingSecret }} +ldap-toml: {{ tpl .Values.ldap.config $ | b64enc | quote }} +{{- end }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/_helpers.tpl b/charts/kasten/k10/7.0.1101/charts/grafana/templates/_helpers.tpl new file mode 100644 index 0000000000..2a68cb6f85 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/_helpers.tpl @@ -0,0 +1,276 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "grafana.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "grafana.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. 
+*/}} +{{- define "grafana.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create the name of the service account +*/}} +{{- define "grafana.serviceAccountName" -}} +{{- if .Values.serviceAccount.create }} +{{- default (include "grafana.fullname" .) .Values.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.serviceAccount.name }} +{{- end }} +{{- end }} + +{{- define "grafana.serviceAccountNameTest" -}} +{{- if .Values.serviceAccount.create }} +{{- default (print (include "grafana.fullname" .) "-test") .Values.serviceAccount.nameTest }} +{{- else }} +{{- default "default" .Values.serviceAccount.nameTest }} +{{- end }} +{{- end }} + +{{/* +Allow the release namespace to be overridden for multi-namespace deployments in combined charts +*/}} +{{- define "grafana.namespace" -}} +{{- if .Values.namespaceOverride }} +{{- .Values.namespaceOverride }} +{{- else }} +{{- .Release.Namespace }} +{{- end }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "grafana.labels" -}} +helm.sh/chart: {{ include "grafana.chart" . }} +{{ include "grafana.selectorLabels" . }} +{{- if or .Chart.AppVersion .Values.image.tag }} +app.kubernetes.io/version: {{ mustRegexReplaceAllLiteral "@sha.*" .Values.image.tag "" | default .Chart.AppVersion | trunc 63 | trimSuffix "-" | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- with .Values.extraLabels }} +{{ toYaml . }} +{{- end }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "grafana.selectorLabels" -}} +app.kubernetes.io/name: {{ include "grafana.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "grafana.imageRenderer.labels" -}} +helm.sh/chart: {{ include "grafana.chart" . }} +{{ include "grafana.imageRenderer.selectorLabels" . 
}} +{{- if or .Chart.AppVersion .Values.image.tag }} +app.kubernetes.io/version: {{ mustRegexReplaceAllLiteral "@sha.*" .Values.image.tag "" | default .Chart.AppVersion | trunc 63 | trimSuffix "-" | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Selector labels ImageRenderer +*/}} +{{- define "grafana.imageRenderer.selectorLabels" -}} +app.kubernetes.io/name: {{ include "grafana.name" . }}-image-renderer +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Looks if there's an existing secret and reuse its password. If not it generates +new password and use it. +*/}} +{{- define "grafana.password" -}} +{{- $secret := (lookup "v1" "Secret" (include "grafana.namespace" .) (include "grafana.fullname" .) ) }} +{{- if $secret }} +{{- index $secret "data" "admin-password" }} +{{- else }} +{{- (randAlphaNum 40) | b64enc | quote }} +{{- end }} +{{- end }} + +{{/* +Return the appropriate apiVersion for rbac. +*/}} +{{- define "grafana.rbac.apiVersion" -}} +{{- if $.Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1" }} +{{- print "rbac.authorization.k8s.io/v1" }} +{{- else }} +{{- print "rbac.authorization.k8s.io/v1beta1" }} +{{- end }} +{{- end }} + +{{/* +Return the appropriate apiVersion for ingress. +*/}} +{{- define "grafana.ingress.apiVersion" -}} +{{- if and ($.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19-0" .Capabilities.KubeVersion.Version) }} +{{- print "networking.k8s.io/v1" }} +{{- else if $.Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" }} +{{- print "networking.k8s.io/v1beta1" }} +{{- else }} +{{- print "extensions/v1beta1" }} +{{- end }} +{{- end }} + +{{/* +Return the appropriate apiVersion for Horizontal Pod Autoscaler. 
+*/}} +{{- define "grafana.hpa.apiVersion" -}} +{{- if .Capabilities.APIVersions.Has "autoscaling/v2" }} +{{- print "autoscaling/v2" }} +{{- else }} +{{- print "autoscaling/v2beta2" }} +{{- end }} +{{- end }} + +{{/* +Return the appropriate apiVersion for podDisruptionBudget. +*/}} +{{- define "grafana.podDisruptionBudget.apiVersion" -}} +{{- if $.Values.podDisruptionBudget.apiVersion }} +{{- print $.Values.podDisruptionBudget.apiVersion }} +{{- else if $.Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" }} +{{- print "policy/v1" }} +{{- else }} +{{- print "policy/v1beta1" }} +{{- end }} +{{- end }} + +{{/* +Return if ingress is stable. +*/}} +{{- define "grafana.ingress.isStable" -}} +{{- eq (include "grafana.ingress.apiVersion" .) "networking.k8s.io/v1" }} +{{- end }} + +{{/* +Return if ingress supports ingressClassName. +*/}} +{{- define "grafana.ingress.supportsIngressClassName" -}} +{{- or (eq (include "grafana.ingress.isStable" .) "true") (and (eq (include "grafana.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) }} +{{- end }} + +{{/* +Return if ingress supports pathType. +*/}} +{{- define "grafana.ingress.supportsPathType" -}} +{{- or (eq (include "grafana.ingress.isStable" .) "true") (and (eq (include "grafana.ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18-0" .Capabilities.KubeVersion.Version)) }} +{{- end }} + +{{/* +Formats imagePullSecrets. Input is (dict "root" . "imagePullSecrets" .{specific imagePullSecrets}) +*/}} +{{- define "grafana.imagePullSecrets" -}} +{{- $root := .root }} +{{- range (concat .root.Values.global.imagePullSecrets .imagePullSecrets) }} +{{- if eq (typeOf .) "map[string]interface {}" }} +- {{ toYaml (dict "name" (tpl .name $root)) | trim }} +{{- else }} +- name: {{ tpl . 
$root }} +{{- end }} +{{- end }} +{{- end }} + + +{{/* + Checks whether or not the configSecret secret has to be created + */}} +{{- define "grafana.shouldCreateConfigSecret" -}} +{{- $secretFound := false -}} +{{- range $key, $value := .Values.datasources }} + {{- if hasKey $value "secret" }} + {{- $secretFound = true}} + {{- end }} +{{- end }} +{{- range $key, $value := .Values.notifiers }} + {{- if hasKey $value "secret" }} + {{- $secretFound = true}} + {{- end }} +{{- end }} +{{- range $key, $value := .Values.alerting }} + {{- if (or (hasKey $value "secret") (hasKey $value "secretFile")) }} + {{- $secretFound = true}} + {{- end }} +{{- end }} +{{- $secretFound}} +{{- end -}} + +{{/* + Checks whether the user is attempting to store secrets in plaintext + in the grafana.ini configmap +*/}} +{{/* grafana.assertNoLeakedSecrets checks for sensitive keys in values */}} +{{- define "grafana.assertNoLeakedSecrets" -}} + {{- $sensitiveKeysYaml := ` +sensitiveKeys: +- path: ["database", "password"] +- path: ["smtp", "password"] +- path: ["security", "secret_key"] +- path: ["security", "admin_password"] +- path: ["auth.basic", "password"] +- path: ["auth.ldap", "bind_password"] +- path: ["auth.google", "client_secret"] +- path: ["auth.github", "client_secret"] +- path: ["auth.gitlab", "client_secret"] +- path: ["auth.generic_oauth", "client_secret"] +- path: ["auth.okta", "client_secret"] +- path: ["auth.azuread", "client_secret"] +- path: ["auth.grafana_com", "client_secret"] +- path: ["auth.grafananet", "client_secret"] +- path: ["azure", "user_identity_client_secret"] +- path: ["unified_alerting", "ha_redis_password"] +- path: ["metrics", "basic_auth_password"] +- path: ["external_image_storage.s3", "secret_key"] +- path: ["external_image_storage.webdav", "password"] +- path: ["external_image_storage.azure_blob", "account_key"] +` | fromYaml -}} + {{- if $.Values.assertNoLeakedSecrets -}} + {{- $grafanaIni := index .Values "grafana.ini" -}} + {{- range $_, $secret := 
$sensitiveKeysYaml.sensitiveKeys -}} + {{- $currentMap := $grafanaIni -}} + {{- $shouldContinue := true -}} + {{- range $index, $elem := $secret.path -}} + {{- if and $shouldContinue (hasKey $currentMap $elem) -}} + {{- if eq (len $secret.path) (add1 $index) -}} + {{- if not (regexMatch "\\$(?:__(?:env|file|vault))?{[^}]+}" (index $currentMap $elem)) -}} + {{- fail (printf "Sensitive key '%s' should not be defined explicitly in values. Use variable expansion instead. You can disable this client-side validation by changing the value of assertNoLeakedSecrets." (join "." $secret.path)) -}} + {{- end -}} + {{- else -}} + {{- $currentMap = index $currentMap $elem -}} + {{- end -}} + {{- else -}} + {{- $shouldContinue = false -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/_pod.tpl b/charts/kasten/k10/7.0.1101/charts/grafana/templates/_pod.tpl new file mode 100644 index 0000000000..8b33db75ba --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/_pod.tpl @@ -0,0 +1,1320 @@ +{{- define "grafana.pod" -}} +{{- $sts := list "sts" "StatefulSet" "statefulset" -}} +{{- $root := . -}} +{{- with .Values.schedulerName }} +schedulerName: "{{ . }}" +{{- end }} +serviceAccountName: {{ include "grafana.serviceAccountName" . }} +automountServiceAccountToken: {{ .Values.automountServiceAccountToken }} +{{- with .Values.securityContext }} +securityContext: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- with .Values.hostAliases }} +hostAliases: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- if .Values.dnsPolicy }} +dnsPolicy: {{ .Values.dnsPolicy }} +{{- end }} +{{- with .Values.dnsConfig }} +dnsConfig: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- with .Values.priorityClassName }} +priorityClassName: {{ . 
}} +{{- end }} +{{- if ( or .Values.persistence.enabled .Values.dashboards .Values.extraInitContainers (and .Values.sidecar.alerts.enabled .Values.sidecar.alerts.initAlerts) (and .Values.sidecar.datasources.enabled .Values.sidecar.datasources.initDatasources) (and .Values.sidecar.notifiers.enabled .Values.sidecar.notifiers.initNotifiers)) }} +initContainers: +{{- end }} +{{- if ( and .Values.persistence.enabled .Values.initChownData.enabled ) }} + - name: init-chown-data + {{- $registry := .Values.global.imageRegistry | default .Values.initChownData.image.registry -}} + {{- if .Values.initChownData.image.sha }} + image: "{{ $registry }}/{{ .Values.initChownData.image.repository }}{{ if .Values.initChownData.image.tag }}:{{ .Values.initChownData.image.tag }}{{ end }}@sha256:{{ .Values.initChownData.image.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.initChownData.image.repository }}{{ if .Values.initChownData.image.tag }}:{{ .Values.initChownData.image.tag }}{{ end }}" + {{- end }} + imagePullPolicy: {{ .Values.initChownData.image.pullPolicy }} + {{- with .Values.initChownData.securityContext }} + securityContext: + {{- toYaml . | nindent 6 }} + {{- end }} + command: + - chown + - -R + - {{ .Values.securityContext.runAsUser }}:{{ .Values.securityContext.runAsGroup }} + - /var/lib/grafana + {{- with .Values.initChownData.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} + volumeMounts: + - name: storage + mountPath: "/var/lib/grafana" + {{- with .Values.persistence.subPath }} + subPath: {{ tpl . 
$root }} + {{- end }} +{{- end }} +{{- if .Values.dashboards }} + - name: download-dashboards + {{- $registry := .Values.global.imageRegistry | default .Values.downloadDashboardsImage.registry -}} + {{- if .Values.downloadDashboardsImage.sha }} + image: "{{ $registry }}/{{ .Values.downloadDashboardsImage.repository }}{{ if .Values.downloadDashboardsImage.tag }}:{{ .Values.downloadDashboardsImage.tag }}{{ end }}@sha256:{{ .Values.downloadDashboardsImage.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.downloadDashboardsImage.repository }}{{ if .Values.downloadDashboardsImage.tag }}:{{ .Values.downloadDashboardsImage.tag }}{{ end }}" + {{- end }} + imagePullPolicy: {{ .Values.downloadDashboardsImage.pullPolicy }} + command: ["/bin/sh"] + args: [ "-c", "mkdir -p /var/lib/grafana/dashboards/default && /bin/sh -x /etc/grafana/download_dashboards.sh" ] + {{- with .Values.downloadDashboards.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} + env: + {{- range $key, $value := .Values.downloadDashboards.env }} + - name: "{{ $key }}" + value: "{{ $value }}" + {{- end }} + {{- range $key, $value := .Values.downloadDashboards.envValueFrom }} + - name: {{ $key | quote }} + valueFrom: + {{- tpl (toYaml $value) $ | nindent 10 }} + {{- end }} + {{- with .Values.downloadDashboards.securityContext }} + securityContext: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.downloadDashboards.envFromSecret }} + envFrom: + - secretRef: + name: {{ tpl . $root }} + {{- end }} + volumeMounts: + - name: config + mountPath: "/etc/grafana/download_dashboards.sh" + subPath: download_dashboards.sh + - name: storage + mountPath: "/var/lib/grafana" + {{- with .Values.persistence.subPath }} + subPath: {{ tpl . 
$root }} + {{- end }} + {{- range .Values.extraSecretMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + readOnly: {{ .readOnly }} + {{- end }} +{{- end }} +{{- if and .Values.sidecar.alerts.enabled .Values.sidecar.alerts.initAlerts }} + - name: {{ include "grafana.name" . }}-init-sc-alerts + {{- $registry := .Values.global.imageRegistry | default .Values.sidecar.image.registry -}} + {{- if .Values.sidecar.image.sha }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}" + {{- end }} + imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }} + env: + {{- range $key, $value := .Values.sidecar.alerts.env }} + - name: "{{ $key }}" + value: "{{ $value }}" + {{- end }} + {{- if .Values.sidecar.alerts.ignoreAlreadyProcessed }} + - name: IGNORE_ALREADY_PROCESSED + value: "true" + {{- end }} + - name: METHOD + value: "LIST" + - name: LABEL + value: "{{ .Values.sidecar.alerts.label }}" + {{- with .Values.sidecar.alerts.labelValue }} + - name: LABEL_VALUE + value: {{ quote . }} + {{- end }} + {{- if or .Values.sidecar.logLevel .Values.sidecar.alerts.logLevel }} + - name: LOG_LEVEL + value: {{ default .Values.sidecar.logLevel .Values.sidecar.alerts.logLevel }} + {{- end }} + - name: FOLDER + value: "/etc/grafana/provisioning/alerting" + - name: RESOURCE + value: {{ quote .Values.sidecar.alerts.resource }} + {{- with .Values.sidecar.enableUniqueFilenames }} + - name: UNIQUE_FILENAMES + value: "{{ . }}" + {{- end }} + {{- with .Values.sidecar.alerts.searchNamespace }} + - name: NAMESPACE + value: {{ . | join "," | quote }} + {{- end }} + {{- with .Values.sidecar.alerts.skipTlsVerify }} + - name: SKIP_TLS_VERIFY + value: {{ quote . }} + {{- end }} + {{- with .Values.sidecar.alerts.script }} + - name: SCRIPT + value: {{ quote . 
}} + {{- end }} + {{- with .Values.sidecar.livenessProbe }} + livenessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.readinessProbe }} + readinessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.securityContext }} + securityContext: + {{- toYaml . | nindent 6 }} + {{- end }} + volumeMounts: + - name: sc-alerts-volume + mountPath: "/etc/grafana/provisioning/alerting" + {{- with .Values.sidecar.alerts.extraMounts }} + {{- toYaml . | trim | nindent 6 }} + {{- end }} +{{- end }} +{{- if and .Values.sidecar.datasources.enabled .Values.sidecar.datasources.initDatasources }} + - name: {{ include "grafana.name" . }}-init-sc-datasources + {{- $registry := .Values.global.imageRegistry | default .Values.sidecar.image.registry -}} + {{- if .Values.sidecar.image.sha }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}" + {{- end }} + imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }} + env: + {{- range $key, $value := .Values.sidecar.datasources.env }} + - name: "{{ $key }}" + value: "{{ $value }}" + {{- end }} + {{- range $key, $value := .Values.sidecar.datasources.envValueFrom }} + - name: {{ $key | quote }} + valueFrom: + {{- tpl (toYaml $value) $ | nindent 10 }} + {{- end }} + {{- if .Values.sidecar.datasources.ignoreAlreadyProcessed }} + - name: IGNORE_ALREADY_PROCESSED + value: "true" + {{- end }} + - name: METHOD + value: "LIST" + - name: LABEL + value: "{{ .Values.sidecar.datasources.label }}" + {{- with .Values.sidecar.datasources.labelValue }} + - name: LABEL_VALUE + value: {{ quote . 
}} + {{- end }} + {{- if or .Values.sidecar.logLevel .Values.sidecar.datasources.logLevel }} + - name: LOG_LEVEL + value: {{ default .Values.sidecar.logLevel .Values.sidecar.datasources.logLevel }} + {{- end }} + - name: FOLDER + value: "/etc/grafana/provisioning/datasources" + - name: RESOURCE + value: {{ quote .Values.sidecar.datasources.resource }} + {{- with .Values.sidecar.enableUniqueFilenames }} + - name: UNIQUE_FILENAMES + value: "{{ . }}" + {{- end }} + {{- if .Values.sidecar.datasources.searchNamespace }} + - name: NAMESPACE + value: "{{ tpl (.Values.sidecar.datasources.searchNamespace | join ",") . }}" + {{- end }} + {{- with .Values.sidecar.skipTlsVerify }} + - name: SKIP_TLS_VERIFY + value: "{{ . }}" + {{- end }} + {{- with .Values.sidecar.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.securityContext }} + securityContext: + {{- toYaml . | nindent 6 }} + {{- end }} + volumeMounts: + - name: sc-datasources-volume + mountPath: "/etc/grafana/provisioning/datasources" +{{- end }} +{{- if and .Values.sidecar.notifiers.enabled .Values.sidecar.notifiers.initNotifiers }} + - name: {{ include "grafana.name" . 
}}-init-sc-notifiers + {{- $registry := .Values.global.imageRegistry | default .Values.sidecar.image.registry -}} + {{- if .Values.sidecar.image.sha }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}" + {{- end }} + imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }} + env: + {{- range $key, $value := .Values.sidecar.notifiers.env }} + - name: "{{ $key }}" + value: "{{ $value }}" + {{- end }} + {{- if .Values.sidecar.notifiers.ignoreAlreadyProcessed }} + - name: IGNORE_ALREADY_PROCESSED + value: "true" + {{- end }} + - name: METHOD + value: LIST + - name: LABEL + value: "{{ .Values.sidecar.notifiers.label }}" + {{- with .Values.sidecar.notifiers.labelValue }} + - name: LABEL_VALUE + value: {{ quote . }} + {{- end }} + {{- if or .Values.sidecar.logLevel .Values.sidecar.notifiers.logLevel }} + - name: LOG_LEVEL + value: {{ default .Values.sidecar.logLevel .Values.sidecar.notifiers.logLevel }} + {{- end }} + - name: FOLDER + value: "/etc/grafana/provisioning/notifiers" + - name: RESOURCE + value: {{ quote .Values.sidecar.notifiers.resource }} + {{- with .Values.sidecar.enableUniqueFilenames }} + - name: UNIQUE_FILENAMES + value: "{{ . }}" + {{- end }} + {{- with .Values.sidecar.notifiers.searchNamespace }} + - name: NAMESPACE + value: "{{ tpl (. | join ",") $root }}" + {{- end }} + {{- with .Values.sidecar.skipTlsVerify }} + - name: SKIP_TLS_VERIFY + value: "{{ . }}" + {{- end }} + {{- with .Values.sidecar.livenessProbe }} + livenessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.readinessProbe }} + readinessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.securityContext }} + securityContext: + {{- toYaml . 
| nindent 6 }} + {{- end }} + volumeMounts: + - name: sc-notifiers-volume + mountPath: "/etc/grafana/provisioning/notifiers" +{{- end}} +{{- with .Values.extraInitContainers }} + {{- tpl (toYaml .) $root | nindent 2 }} +{{- end }} +{{- if or .Values.image.pullSecrets .Values.global.imagePullSecrets }} +imagePullSecrets: + {{- include "grafana.imagePullSecrets" (dict "root" $root "imagePullSecrets" .Values.image.pullSecrets) | nindent 2 }} +{{- end }} +{{- if not .Values.enableKubeBackwardCompatibility }} +enableServiceLinks: {{ .Values.enableServiceLinks }} +{{- end }} +containers: +{{- if and .Values.sidecar.alerts.enabled (not .Values.sidecar.alerts.initAlerts) }} + - name: {{ include "grafana.name" . }}-sc-alerts + {{- $registry := .Values.global.imageRegistry | default .Values.sidecar.image.registry -}} + {{- if .Values.sidecar.image.sha }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}" + {{- end }} + imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }} + env: + {{- range $key, $value := .Values.sidecar.alerts.env }} + - name: "{{ $key }}" + value: "{{ $value }}" + {{- end }} + {{- if .Values.sidecar.alerts.ignoreAlreadyProcessed }} + - name: IGNORE_ALREADY_PROCESSED + value: "true" + {{- end }} + - name: METHOD + value: {{ .Values.sidecar.alerts.watchMethod }} + - name: LABEL + value: "{{ .Values.sidecar.alerts.label }}" + {{- with .Values.sidecar.alerts.labelValue }} + - name: LABEL_VALUE + value: {{ quote . 
}} + {{- end }} + {{- if or .Values.sidecar.logLevel .Values.sidecar.alerts.logLevel }} + - name: LOG_LEVEL + value: {{ default .Values.sidecar.logLevel .Values.sidecar.alerts.logLevel }} + {{- end }} + - name: FOLDER + value: "/etc/grafana/provisioning/alerting" + - name: RESOURCE + value: {{ quote .Values.sidecar.alerts.resource }} + {{- with .Values.sidecar.enableUniqueFilenames }} + - name: UNIQUE_FILENAMES + value: "{{ . }}" + {{- end }} + {{- with .Values.sidecar.alerts.searchNamespace }} + - name: NAMESPACE + value: {{ . | join "," | quote }} + {{- end }} + {{- with .Values.sidecar.alerts.skipTlsVerify }} + - name: SKIP_TLS_VERIFY + value: {{ quote . }} + {{- end }} + {{- with .Values.sidecar.alerts.script }} + - name: SCRIPT + value: {{ quote . }} + {{- end }} + {{- if and (not .Values.env.GF_SECURITY_ADMIN_USER) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: REQ_USERNAME + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }} + key: {{ .Values.admin.userKey | default "admin-user" }} + {{- end }} + {{- if and (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: REQ_PASSWORD + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) 
}} + key: {{ .Values.admin.passwordKey | default "admin-password" }} + {{- end }} + {{- if not .Values.sidecar.alerts.skipReload }} + - name: REQ_URL + value: {{ .Values.sidecar.alerts.reloadURL }} + - name: REQ_METHOD + value: POST + {{- end }} + {{- if .Values.sidecar.alerts.watchServerTimeout }} + {{- if ne .Values.sidecar.alerts.watchMethod "WATCH" }} + {{- fail (printf "Cannot use .Values.sidecar.alerts.watchServerTimeout with .Values.sidecar.alerts.watchMethod %s" .Values.sidecar.alerts.watchMethod) }} + {{- end }} + - name: WATCH_SERVER_TIMEOUT + value: "{{ .Values.sidecar.alerts.watchServerTimeout }}" + {{- end }} + {{- if .Values.sidecar.alerts.watchClientTimeout }} + {{- if ne .Values.sidecar.alerts.watchMethod "WATCH" }} + {{- fail (printf "Cannot use .Values.sidecar.alerts.watchClientTimeout with .Values.sidecar.alerts.watchMethod %s" .Values.sidecar.alerts.watchMethod) }} + {{- end }} + - name: WATCH_CLIENT_TIMEOUT + value: "{{ .Values.sidecar.alerts.watchClientTimeout }}" + {{- end }} + {{- with .Values.sidecar.livenessProbe }} + livenessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.readinessProbe }} + readinessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.securityContext }} + securityContext: + {{- toYaml . | nindent 6 }} + {{- end }} + volumeMounts: + - name: sc-alerts-volume + mountPath: "/etc/grafana/provisioning/alerting" + {{- with .Values.sidecar.alerts.extraMounts }} + {{- toYaml . | trim | nindent 6 }} + {{- end }} +{{- end}} +{{- if .Values.sidecar.dashboards.enabled }} + - name: {{ include "grafana.name" . 
}}-sc-dashboard + {{- $registry := .Values.global.imageRegistry | default .Values.sidecar.image.registry -}} + {{- if .Values.sidecar.image.sha }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}" + {{- end }} + imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }} + env: + {{- range $key, $value := .Values.sidecar.dashboards.env }} + - name: "{{ $key }}" + value: "{{ $value }}" + {{- end }} + {{- range $key, $value := .Values.sidecar.dashboards.envValueFrom }} + - name: {{ $key | quote }} + valueFrom: + {{- tpl (toYaml $value) $ | nindent 10 }} + {{- end }} + {{- if .Values.sidecar.dashboards.ignoreAlreadyProcessed }} + - name: IGNORE_ALREADY_PROCESSED + value: "true" + {{- end }} + - name: METHOD + value: {{ .Values.sidecar.dashboards.watchMethod }} + - name: LABEL + value: "{{ .Values.sidecar.dashboards.label }}" + {{- with .Values.sidecar.dashboards.labelValue }} + - name: LABEL_VALUE + value: {{ quote . }} + {{- end }} + {{- if or .Values.sidecar.logLevel .Values.sidecar.dashboards.logLevel }} + - name: LOG_LEVEL + value: {{ default .Values.sidecar.logLevel .Values.sidecar.dashboards.logLevel }} + {{- end }} + - name: FOLDER + value: "{{ .Values.sidecar.dashboards.folder }}{{- with .Values.sidecar.dashboards.defaultFolderName }}/{{ . }}{{- end }}" + - name: RESOURCE + value: {{ quote .Values.sidecar.dashboards.resource }} + {{- with .Values.sidecar.enableUniqueFilenames }} + - name: UNIQUE_FILENAMES + value: "{{ . }}" + {{- end }} + {{- with .Values.sidecar.dashboards.searchNamespace }} + - name: NAMESPACE + value: "{{ tpl (. | join ",") $root }}" + {{- end }} + {{- with .Values.sidecar.skipTlsVerify }} + - name: SKIP_TLS_VERIFY + value: "{{ . }}" + {{- end }} + {{- with .Values.sidecar.dashboards.folderAnnotation }} + - name: FOLDER_ANNOTATION + value: "{{ . 
}}" + {{- end }} + {{- with .Values.sidecar.dashboards.script }} + - name: SCRIPT + value: "{{ . }}" + {{- end }} + {{- if not .Values.sidecar.dashboards.skipReload }} + {{- if and (not .Values.env.GF_SECURITY_ADMIN_USER) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: REQ_USERNAME + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }} + key: {{ .Values.admin.userKey | default "admin-user" }} + {{- end }} + {{- if and (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: REQ_PASSWORD + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }} + key: {{ .Values.admin.passwordKey | default "admin-password" }} + {{- end }} + - name: REQ_URL + value: {{ .Values.sidecar.dashboards.reloadURL }} + - name: REQ_METHOD + value: POST + {{- end }} + {{- if .Values.sidecar.dashboards.watchServerTimeout }} + {{- if ne .Values.sidecar.dashboards.watchMethod "WATCH" }} + {{- fail (printf "Cannot use .Values.sidecar.dashboards.watchServerTimeout with .Values.sidecar.dashboards.watchMethod %s" .Values.sidecar.dashboards.watchMethod) }} + {{- end }} + - name: WATCH_SERVER_TIMEOUT + value: "{{ .Values.sidecar.dashboards.watchServerTimeout }}" + {{- end }} + {{- if .Values.sidecar.dashboards.watchClientTimeout }} + {{- if ne .Values.sidecar.dashboards.watchMethod "WATCH" }} + {{- fail (printf "Cannot use .Values.sidecar.dashboards.watchClientTimeout with .Values.sidecar.dashboards.watchMethod %s" .Values.sidecar.dashboards.watchMethod) }} + {{- end }} + - name: WATCH_CLIENT_TIMEOUT + value: {{ .Values.sidecar.dashboards.watchClientTimeout | quote }} + {{- end }} + {{- with .Values.sidecar.livenessProbe }} + livenessProbe: + {{- toYaml . 
| nindent 6 }} + {{- end }} + {{- with .Values.sidecar.readinessProbe }} + readinessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.securityContext }} + securityContext: + {{- toYaml . | nindent 6 }} + {{- end }} + volumeMounts: + - name: sc-dashboard-volume + mountPath: {{ .Values.sidecar.dashboards.folder | quote }} + {{- with .Values.sidecar.dashboards.extraMounts }} + {{- toYaml . | trim | nindent 6 }} + {{- end }} +{{- end}} +{{- if and .Values.sidecar.datasources.enabled (not .Values.sidecar.datasources.initDatasources) }} + - name: {{ include "grafana.name" . }}-sc-datasources + {{- $registry := .Values.global.imageRegistry | default .Values.sidecar.image.registry -}} + {{- if .Values.sidecar.image.sha }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}" + {{- end }} + imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }} + env: + {{- range $key, $value := .Values.sidecar.datasources.env }} + - name: "{{ $key }}" + value: "{{ $value }}" + {{- end }} + {{- range $key, $value := .Values.sidecar.datasources.envValueFrom }} + - name: {{ $key | quote }} + valueFrom: + {{- tpl (toYaml $value) $ | nindent 10 }} + {{- end }} + {{- if .Values.sidecar.datasources.ignoreAlreadyProcessed }} + - name: IGNORE_ALREADY_PROCESSED + value: "true" + {{- end }} + - name: METHOD + value: {{ .Values.sidecar.datasources.watchMethod }} + - name: LABEL + value: "{{ .Values.sidecar.datasources.label }}" + {{- with .Values.sidecar.datasources.labelValue }} + - name: LABEL_VALUE + value: {{ quote . 
}} + {{- end }} + {{- if or .Values.sidecar.logLevel .Values.sidecar.datasources.logLevel }} + - name: LOG_LEVEL + value: {{ default .Values.sidecar.logLevel .Values.sidecar.datasources.logLevel }} + {{- end }} + - name: FOLDER + value: "/etc/grafana/provisioning/datasources" + - name: RESOURCE + value: {{ quote .Values.sidecar.datasources.resource }} + {{- with .Values.sidecar.enableUniqueFilenames }} + - name: UNIQUE_FILENAMES + value: "{{ . }}" + {{- end }} + {{- with .Values.sidecar.datasources.searchNamespace }} + - name: NAMESPACE + value: "{{ tpl (. | join ",") $root }}" + {{- end }} + {{- if .Values.sidecar.skipTlsVerify }} + - name: SKIP_TLS_VERIFY + value: "{{ .Values.sidecar.skipTlsVerify }}" + {{- end }} + {{- if .Values.sidecar.datasources.script }} + - name: SCRIPT + value: "{{ .Values.sidecar.datasources.script }}" + {{- end }} + {{- if and (not .Values.env.GF_SECURITY_ADMIN_USER) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: REQ_USERNAME + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }} + key: {{ .Values.admin.userKey | default "admin-user" }} + {{- end }} + {{- if and (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: REQ_PASSWORD + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) 
}} + key: {{ .Values.admin.passwordKey | default "admin-password" }} + {{- end }} + {{- if not .Values.sidecar.datasources.skipReload }} + - name: REQ_URL + value: {{ .Values.sidecar.datasources.reloadURL }} + - name: REQ_METHOD + value: POST + {{- end }} + {{- if .Values.sidecar.datasources.watchServerTimeout }} + {{- if ne .Values.sidecar.datasources.watchMethod "WATCH" }} + {{- fail (printf "Cannot use .Values.sidecar.datasources.watchServerTimeout with .Values.sidecar.datasources.watchMethod %s" .Values.sidecar.datasources.watchMethod) }} + {{- end }} + - name: WATCH_SERVER_TIMEOUT + value: "{{ .Values.sidecar.datasources.watchServerTimeout }}" + {{- end }} + {{- if .Values.sidecar.datasources.watchClientTimeout }} + {{- if ne .Values.sidecar.datasources.watchMethod "WATCH" }} + {{- fail (printf "Cannot use .Values.sidecar.datasources.watchClientTimeout with .Values.sidecar.datasources.watchMethod %s" .Values.sidecar.datasources.watchMethod) }} + {{- end }} + - name: WATCH_CLIENT_TIMEOUT + value: "{{ .Values.sidecar.datasources.watchClientTimeout }}" + {{- end }} + {{- with .Values.sidecar.livenessProbe }} + livenessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.readinessProbe }} + readinessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.securityContext }} + securityContext: + {{- toYaml . | nindent 6 }} + {{- end }} + volumeMounts: + - name: sc-datasources-volume + mountPath: "/etc/grafana/provisioning/datasources" +{{- end}} +{{- if .Values.sidecar.notifiers.enabled }} + - name: {{ include "grafana.name" . 
}}-sc-notifiers + {{- $registry := .Values.global.imageRegistry | default .Values.sidecar.image.registry -}} + {{- if .Values.sidecar.image.sha }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}" + {{- end }} + imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }} + env: + {{- range $key, $value := .Values.sidecar.notifiers.env }} + - name: "{{ $key }}" + value: "{{ $value }}" + {{- end }} + {{- if .Values.sidecar.notifiers.ignoreAlreadyProcessed }} + - name: IGNORE_ALREADY_PROCESSED + value: "true" + {{- end }} + - name: METHOD + value: {{ .Values.sidecar.notifiers.watchMethod }} + - name: LABEL + value: "{{ .Values.sidecar.notifiers.label }}" + {{- with .Values.sidecar.notifiers.labelValue }} + - name: LABEL_VALUE + value: {{ quote . }} + {{- end }} + {{- if or .Values.sidecar.logLevel .Values.sidecar.notifiers.logLevel }} + - name: LOG_LEVEL + value: {{ default .Values.sidecar.logLevel .Values.sidecar.notifiers.logLevel }} + {{- end }} + - name: FOLDER + value: "/etc/grafana/provisioning/notifiers" + - name: RESOURCE + value: {{ quote .Values.sidecar.notifiers.resource }} + {{- if .Values.sidecar.enableUniqueFilenames }} + - name: UNIQUE_FILENAMES + value: "{{ .Values.sidecar.enableUniqueFilenames }}" + {{- end }} + {{- with .Values.sidecar.notifiers.searchNamespace }} + - name: NAMESPACE + value: "{{ tpl (. | join ",") $root }}" + {{- end }} + {{- with .Values.sidecar.skipTlsVerify }} + - name: SKIP_TLS_VERIFY + value: "{{ . 
}}" + {{- end }} + {{- if .Values.sidecar.notifiers.script }} + - name: SCRIPT + value: "{{ .Values.sidecar.notifiers.script }}" + {{- end }} + {{- if and (not .Values.env.GF_SECURITY_ADMIN_USER) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: REQ_USERNAME + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }} + key: {{ .Values.admin.userKey | default "admin-user" }} + {{- end }} + {{- if and (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: REQ_PASSWORD + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }} + key: {{ .Values.admin.passwordKey | default "admin-password" }} + {{- end }} + {{- if not .Values.sidecar.notifiers.skipReload }} + - name: REQ_URL + value: {{ .Values.sidecar.notifiers.reloadURL }} + - name: REQ_METHOD + value: POST + {{- end }} + {{- if .Values.sidecar.notifiers.watchServerTimeout }} + {{- if ne .Values.sidecar.notifiers.watchMethod "WATCH" }} + {{- fail (printf "Cannot use .Values.sidecar.notifiers.watchServerTimeout with .Values.sidecar.notifiers.watchMethod %s" .Values.sidecar.notifiers.watchMethod) }} + {{- end }} + - name: WATCH_SERVER_TIMEOUT + value: "{{ .Values.sidecar.notifiers.watchServerTimeout }}" + {{- end }} + {{- if .Values.sidecar.notifiers.watchClientTimeout }} + {{- if ne .Values.sidecar.notifiers.watchMethod "WATCH" }} + {{- fail (printf "Cannot use .Values.sidecar.notifiers.watchClientTimeout with .Values.sidecar.notifiers.watchMethod %s" .Values.sidecar.notifiers.watchMethod) }} + {{- end }} + - name: WATCH_CLIENT_TIMEOUT + value: "{{ .Values.sidecar.notifiers.watchClientTimeout }}" + {{- end }} + {{- with .Values.sidecar.livenessProbe }} + livenessProbe: + {{- toYaml . 
| nindent 6 }} + {{- end }} + {{- with .Values.sidecar.readinessProbe }} + readinessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.securityContext }} + securityContext: + {{- toYaml . | nindent 6 }} + {{- end }} + volumeMounts: + - name: sc-notifiers-volume + mountPath: "/etc/grafana/provisioning/notifiers" +{{- end}} +{{- if .Values.sidecar.plugins.enabled }} + - name: {{ include "grafana.name" . }}-sc-plugins + {{- $registry := .Values.global.imageRegistry | default .Values.sidecar.image.registry -}} + {{- if .Values.sidecar.image.sha }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}@sha256:{{ .Values.sidecar.image.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.sidecar.image.repository }}:{{ .Values.sidecar.image.tag }}" + {{- end }} + imagePullPolicy: {{ .Values.sidecar.imagePullPolicy }} + env: + {{- range $key, $value := .Values.sidecar.plugins.env }} + - name: "{{ $key }}" + value: "{{ $value }}" + {{- end }} + {{- if .Values.sidecar.plugins.ignoreAlreadyProcessed }} + - name: IGNORE_ALREADY_PROCESSED + value: "true" + {{- end }} + - name: METHOD + value: {{ .Values.sidecar.plugins.watchMethod }} + - name: LABEL + value: "{{ .Values.sidecar.plugins.label }}" + {{- if .Values.sidecar.plugins.labelValue }} + - name: LABEL_VALUE + value: {{ quote .Values.sidecar.plugins.labelValue }} + {{- end }} + {{- if or .Values.sidecar.logLevel .Values.sidecar.plugins.logLevel }} + - name: LOG_LEVEL + value: {{ default .Values.sidecar.logLevel .Values.sidecar.plugins.logLevel }} + {{- end }} + - name: FOLDER + value: "/etc/grafana/provisioning/plugins" + - name: RESOURCE + value: {{ quote .Values.sidecar.plugins.resource }} + {{- with .Values.sidecar.enableUniqueFilenames }} + - name: UNIQUE_FILENAMES + value: "{{ . 
}}" + {{- end }} + {{- with .Values.sidecar.plugins.searchNamespace }} + - name: NAMESPACE + value: "{{ tpl (. | join ",") $root }}" + {{- end }} + {{- with .Values.sidecar.plugins.script }} + - name: SCRIPT + value: "{{ . }}" + {{- end }} + {{- with .Values.sidecar.skipTlsVerify }} + - name: SKIP_TLS_VERIFY + value: "{{ . }}" + {{- end }} + {{- if and (not .Values.env.GF_SECURITY_ADMIN_USER) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: REQ_USERNAME + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }} + key: {{ .Values.admin.userKey | default "admin-user" }} + {{- end }} + {{- if and (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: REQ_PASSWORD + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }} + key: {{ .Values.admin.passwordKey | default "admin-password" }} + {{- end }} + {{- if not .Values.sidecar.plugins.skipReload }} + - name: REQ_URL + value: {{ .Values.sidecar.plugins.reloadURL }} + - name: REQ_METHOD + value: POST + {{- end }} + {{- if .Values.sidecar.plugins.watchServerTimeout }} + {{- if ne .Values.sidecar.plugins.watchMethod "WATCH" }} + {{- fail (printf "Cannot use .Values.sidecar.plugins.watchServerTimeout with .Values.sidecar.plugins.watchMethod %s" .Values.sidecar.plugins.watchMethod) }} + {{- end }} + - name: WATCH_SERVER_TIMEOUT + value: "{{ .Values.sidecar.plugins.watchServerTimeout }}" + {{- end }} + {{- if .Values.sidecar.plugins.watchClientTimeout }} + {{- if ne .Values.sidecar.plugins.watchMethod "WATCH" }} + {{- fail (printf "Cannot use .Values.sidecar.plugins.watchClientTimeout with .Values.sidecar.plugins.watchMethod %s" .Values.sidecar.plugins.watchMethod) }} + {{- end }} + - name: WATCH_CLIENT_TIMEOUT + value: "{{ .Values.sidecar.plugins.watchClientTimeout 
}}" + {{- end }} + {{- with .Values.sidecar.livenessProbe }} + livenessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.readinessProbe }} + readinessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.sidecar.securityContext }} + securityContext: + {{- toYaml . | nindent 6 }} + {{- end }} + volumeMounts: + - name: sc-plugins-volume + mountPath: "/etc/grafana/provisioning/plugins" +{{- end}} + - name: {{ .Chart.Name }} + {{- $registry := .Values.global.imageRegistry | default .Values.image.registry -}} + {{- if .Values.image.sha }} + image: "{{ $registry }}/{{ .Values.image.repository }}{{ if .Values.image.tag }}:{{ .Values.image.tag | default .Chart.AppVersion }}{{ end }}@sha256:{{ .Values.image.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.image.repository }}{{ if .Values.image.tag }}:{{ .Values.image.tag | default .Chart.AppVersion }}{{ end }}" + {{- end }} + imagePullPolicy: {{ .Values.image.pullPolicy }} + {{- if .Values.command }} + command: + {{- range .Values.command }} + - {{ . | quote }} + {{- end }} + {{- end }} + {{- if .Values.args }} + args: + {{- range .Values.args }} + - {{ . | quote }} + {{- end }} + {{- end }} + {{- with .Values.containerSecurityContext }} + securityContext: + {{- toYaml . | nindent 6 }} + {{- end }} + volumeMounts: + - name: config + mountPath: "/etc/grafana/grafana.ini" + subPath: grafana.ini + {{- if .Values.ldap.enabled }} + - name: ldap + mountPath: "/etc/grafana/ldap.toml" + subPath: ldap.toml + {{- end }} + {{- range .Values.extraConfigmapMounts }} + - name: {{ tpl .name $root }} + mountPath: {{ tpl .mountPath $root }} + subPath: {{ tpl (.subPath | default "") $root }} + readOnly: {{ .readOnly }} + {{- end }} + - name: storage + mountPath: "/var/lib/grafana" + {{- with .Values.persistence.subPath }} + subPath: {{ tpl . 
$root }} + {{- end }} + {{- with .Values.dashboards }} + {{- range $provider, $dashboards := . }} + {{- range $key, $value := $dashboards }} + {{- if (or (hasKey $value "json") (hasKey $value "file")) }} + - name: dashboards-{{ $provider }} + mountPath: "/var/lib/grafana/dashboards/{{ $provider }}/{{ $key }}.json" + subPath: "{{ $key }}.json" + {{- end }} + {{- end }} + {{- end }} + {{- end }} + {{- with .Values.dashboardsConfigMaps }} + {{- range (keys . | sortAlpha) }} + - name: dashboards-{{ . }} + mountPath: "/var/lib/grafana/dashboards/{{ . }}" + {{- end }} + {{- end }} + {{- with .Values.datasources }} + {{- $datasources := . }} + {{- range (keys . | sortAlpha) }} + {{- if (or (hasKey (index $datasources .) "secret")) }} {{/*check if current datasource should be handled as secret */}} + - name: config-secret + mountPath: "/etc/grafana/provisioning/datasources/{{ . }}" + subPath: {{ . | quote }} + {{- else }} + - name: config + mountPath: "/etc/grafana/provisioning/datasources/{{ . }}" + subPath: {{ . | quote }} + {{- end }} + {{- end }} + {{- end }} + {{- with .Values.notifiers }} + {{- $notifiers := . }} + {{- range (keys . | sortAlpha) }} + {{- if (or (hasKey (index $notifiers .) "secret")) }} {{/*check if current notifier should be handled as secret */}} + - name: config-secret + mountPath: "/etc/grafana/provisioning/notifiers/{{ . }}" + subPath: {{ . | quote }} + {{- else }} + - name: config + mountPath: "/etc/grafana/provisioning/notifiers/{{ . }}" + subPath: {{ . | quote }} + {{- end }} + {{- end }} + {{- end }} + {{- with .Values.alerting }} + {{- $alertingmap := .}} + {{- range (keys . | sortAlpha) }} + {{- if (or (hasKey (index $.Values.alerting .) "secret") (hasKey (index $.Values.alerting .) "secretFile")) }} {{/*check if current alerting entry should be handled as secret */}} + - name: config-secret + mountPath: "/etc/grafana/provisioning/alerting/{{ . }}" + subPath: {{ . 
| quote }} + {{- else }} + - name: config + mountPath: "/etc/grafana/provisioning/alerting/{{ . }}" + subPath: {{ . | quote }} + {{- end }} + {{- end }} + {{- end }} + {{- with .Values.dashboardProviders }} + {{- range (keys . | sortAlpha) }} + - name: config + mountPath: "/etc/grafana/provisioning/dashboards/{{ . }}" + subPath: {{ . | quote }} + {{- end }} + {{- end }} + {{- with .Values.sidecar.alerts.enabled }} + - name: sc-alerts-volume + mountPath: "/etc/grafana/provisioning/alerting" + {{- end}} + {{- if .Values.sidecar.dashboards.enabled }} + - name: sc-dashboard-volume + mountPath: {{ .Values.sidecar.dashboards.folder | quote }} + {{- if .Values.sidecar.dashboards.SCProvider }} + - name: sc-dashboard-provider + mountPath: "/etc/grafana/provisioning/dashboards/sc-dashboardproviders.yaml" + subPath: provider.yaml + {{- end}} + {{- end}} + {{- if .Values.sidecar.datasources.enabled }} + - name: sc-datasources-volume + mountPath: "/etc/grafana/provisioning/datasources" + {{- end}} + {{- if .Values.sidecar.plugins.enabled }} + - name: sc-plugins-volume + mountPath: "/etc/grafana/provisioning/plugins" + {{- end}} + {{- if .Values.sidecar.notifiers.enabled }} + - name: sc-notifiers-volume + mountPath: "/etc/grafana/provisioning/notifiers" + {{- end}} + {{- range .Values.extraSecretMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + readOnly: {{ .readOnly }} + subPath: {{ .subPath | default "" }} + {{- end }} + {{- range .Values.extraVolumeMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath | default "" }} + readOnly: {{ .readOnly }} + {{- end }} + {{- range .Values.extraEmptyDirMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + {{- end }} + ports: + - name: {{ .Values.podPortName }} + containerPort: {{ .Values.service.targetPort }} + protocol: TCP + - name: {{ .Values.gossipPortName }}-tcp + containerPort: 9094 + protocol: TCP + - name: {{ .Values.gossipPortName }}-udp + containerPort: 9094 + protocol: UDP 
+ env: + - name: POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + {{- if and (not .Values.env.GF_SECURITY_ADMIN_USER) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: GF_SECURITY_ADMIN_USER + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }} + key: {{ .Values.admin.userKey | default "admin-user" }} + {{- end }} + {{- if and (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + - name: GF_SECURITY_ADMIN_PASSWORD + valueFrom: + secretKeyRef: + name: {{ (tpl .Values.admin.existingSecret .) | default (include "grafana.fullname" .) }} + key: {{ .Values.admin.passwordKey | default "admin-password" }} + {{- end }} + {{- if .Values.plugins }} + - name: GF_INSTALL_PLUGINS + valueFrom: + configMapKeyRef: + name: {{ include "grafana.fullname" . }} + key: plugins + {{- end }} + {{- if .Values.smtp.existingSecret }} + - name: GF_SMTP_USER + valueFrom: + secretKeyRef: + name: {{ .Values.smtp.existingSecret }} + key: {{ .Values.smtp.userKey | default "user" }} + - name: GF_SMTP_PASSWORD + valueFrom: + secretKeyRef: + name: {{ .Values.smtp.existingSecret }} + key: {{ .Values.smtp.passwordKey | default "password" }} + {{- end }} + {{- if .Values.imageRenderer.enabled }} + - name: GF_RENDERING_SERVER_URL + {{- if .Values.imageRenderer.serverURL }} + value: {{ .Values.imageRenderer.serverURL | quote }} + {{- else }} + value: http://{{ include "grafana.fullname" . }}-image-renderer.{{ include "grafana.namespace" . }}:{{ .Values.imageRenderer.service.port }}/render + {{- end }} + - name: GF_RENDERING_CALLBACK_URL + {{- if .Values.imageRenderer.renderingCallbackURL }} + value: {{ .Values.imageRenderer.renderingCallbackURL | quote }} + {{- else }} + value: {{ .Values.imageRenderer.grafanaProtocol }}://{{ include "grafana.fullname" . }}.{{ include "grafana.namespace" . 
}}:{{ .Values.service.port }}/{{ .Values.imageRenderer.grafanaSubPath }} + {{- end }} + {{- end }} + - name: GF_PATHS_DATA + value: {{ (get .Values "grafana.ini").paths.data }} + - name: GF_PATHS_LOGS + value: {{ (get .Values "grafana.ini").paths.logs }} + - name: GF_PATHS_PLUGINS + value: {{ (get .Values "grafana.ini").paths.plugins }} + - name: GF_PATHS_PROVISIONING + value: {{ (get .Values "grafana.ini").paths.provisioning }} + {{- range $key, $value := .Values.envValueFrom }} + - name: {{ $key | quote }} + valueFrom: + {{- tpl (toYaml $value) $ | nindent 10 }} + {{- end }} + {{- range $key, $value := .Values.env }} + - name: "{{ tpl $key $ }}" + value: "{{ tpl (print $value) $ }}" + {{- end }} + {{- if or .Values.envFromSecret (or .Values.envRenderSecret .Values.envFromSecrets) .Values.envFromConfigMaps }} + envFrom: + {{- if .Values.envFromSecret }} + - secretRef: + name: {{ tpl .Values.envFromSecret . }} + {{- end }} + {{- if .Values.envRenderSecret }} + - secretRef: + name: {{ include "grafana.fullname" . }}-env + {{- end }} + {{- range .Values.envFromSecrets }} + - secretRef: + name: {{ tpl .name $ }} + optional: {{ .optional | default false }} + {{- if .prefix }} + prefix: {{ tpl .prefix $ }} + {{- end }} + {{- end }} + {{- range .Values.envFromConfigMaps }} + - configMapRef: + name: {{ tpl .name $ }} + optional: {{ .optional | default false }} + {{- if .prefix }} + prefix: {{ tpl .prefix $ }} + {{- end }} + {{- end }} + {{- end }} + {{- with .Values.livenessProbe }} + livenessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.readinessProbe }} + readinessProbe: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.lifecycleHooks }} + lifecycle: + {{- tpl (toYaml .) $root | nindent 6 }} + {{- end }} + {{- with .Values.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} +{{- with .Values.extraContainers }} + {{- tpl . $ | nindent 2 }} +{{- end }} +{{- with .Values.nodeSelector }} +nodeSelector: + {{- toYaml . 
| nindent 2 }} +{{- end }} +{{- with .Values.affinity }} +affinity: + {{- tpl (toYaml .) $root | nindent 2 }} +{{- end }} +{{- with .Values.topologySpreadConstraints }} +topologySpreadConstraints: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- with .Values.tolerations }} +tolerations: + {{- toYaml . | nindent 2 }} +{{- end }} +volumes: + - name: config + configMap: + name: {{ include "grafana.fullname" . }} + {{- $createConfigSecret := eq (include "grafana.shouldCreateConfigSecret" .) "true" -}} + {{- if and .Values.createConfigmap $createConfigSecret }} + - name: config-secret + secret: + secretName: {{ include "grafana.fullname" . }}-config-secret + {{- end }} + {{- range .Values.extraConfigmapMounts }} + - name: {{ tpl .name $root }} + configMap: + name: {{ tpl .configMap $root }} + {{- with .optional }} + optional: {{ . }} + {{- end }} + {{- with .items }} + items: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- end }} + {{- if .Values.dashboards }} + {{- range (keys .Values.dashboards | sortAlpha) }} + - name: dashboards-{{ . }} + configMap: + name: {{ include "grafana.fullname" $ }}-dashboards-{{ . }} + {{- end }} + {{- end }} + {{- if .Values.dashboardsConfigMaps }} + {{- range $provider, $name := .Values.dashboardsConfigMaps }} + - name: dashboards-{{ $provider }} + configMap: + name: {{ tpl $name $root }} + {{- end }} + {{- end }} + {{- if .Values.ldap.enabled }} + - name: ldap + secret: + {{- if .Values.ldap.existingSecret }} + secretName: {{ .Values.ldap.existingSecret }} + {{- else }} + secretName: {{ include "grafana.fullname" . }} + {{- end }} + items: + - key: ldap-toml + path: ldap.toml + {{- end }} + {{- if and .Values.persistence.enabled (eq .Values.persistence.type "pvc") }} + - name: storage + persistentVolumeClaim: + claimName: {{ tpl (.Values.persistence.existingClaim | default (include "grafana.fullname" .)) . 
}} + {{- else if and .Values.persistence.enabled (has .Values.persistence.type $sts) }} + {{/* nothing */}} + {{- else }} + - name: storage + {{- if .Values.persistence.inMemory.enabled }} + emptyDir: + medium: Memory + {{- with .Values.persistence.inMemory.sizeLimit }} + sizeLimit: {{ . }} + {{- end }} + {{- else }} + emptyDir: {} + {{- end }} + {{- end }} + {{- if .Values.sidecar.alerts.enabled }} + - name: sc-alerts-volume + emptyDir: + {{- with .Values.sidecar.alerts.sizeLimit }} + sizeLimit: {{ . }} + {{- else }} + {} + {{- end }} + {{- end }} + {{- if .Values.sidecar.dashboards.enabled }} + - name: sc-dashboard-volume + emptyDir: + {{- with .Values.sidecar.dashboards.sizeLimit }} + sizeLimit: {{ . }} + {{- else }} + {} + {{- end }} + {{- if .Values.sidecar.dashboards.SCProvider }} + - name: sc-dashboard-provider + configMap: + name: {{ include "grafana.fullname" . }}-config-dashboards + {{- end }} + {{- end }} + {{- if .Values.sidecar.datasources.enabled }} + - name: sc-datasources-volume + emptyDir: + {{- with .Values.sidecar.datasources.sizeLimit }} + sizeLimit: {{ . }} + {{- else }} + {} + {{- end }} + {{- end }} + {{- if .Values.sidecar.plugins.enabled }} + - name: sc-plugins-volume + emptyDir: + {{- with .Values.sidecar.plugins.sizeLimit }} + sizeLimit: {{ . }} + {{- else }} + {} + {{- end }} + {{- end }} + {{- if .Values.sidecar.notifiers.enabled }} + - name: sc-notifiers-volume + emptyDir: + {{- with .Values.sidecar.notifiers.sizeLimit }} + sizeLimit: {{ . }} + {{- else }} + {} + {{- end }} + {{- end }} + {{- range .Values.extraSecretMounts }} + {{- if .secretName }} + - name: {{ .name }} + secret: + secretName: {{ .secretName }} + defaultMode: {{ .defaultMode }} + {{- with .optional }} + optional: {{ . }} + {{- end }} + {{- with .items }} + items: + {{- toYaml . 
| nindent 8 }} + {{- end }} + {{- else if .projected }} + - name: {{ .name }} + projected: + {{- toYaml .projected | nindent 6 }} + {{- else if .csi }} + - name: {{ .name }} + csi: + {{- toYaml .csi | nindent 6 }} + {{- end }} + {{- end }} + {{- range .Values.extraVolumes }} + - name: {{ .name }} + {{- if .existingClaim }} + persistentVolumeClaim: + claimName: {{ .existingClaim }} + {{- else if .hostPath }} + hostPath: + {{ toYaml .hostPath | nindent 6 }} + {{- else if .csi }} + csi: + {{- toYaml .csi | nindent 6 }} + {{- else if .configMap }} + configMap: + {{- toYaml .configMap | nindent 6 }} + {{- else if .emptyDir }} + emptyDir: + {{- toYaml .emptyDir | nindent 6 }} + {{- else }} + emptyDir: {} + {{- end }} + {{- end }} + {{- range .Values.extraEmptyDirMounts }} + - name: {{ .name }} + emptyDir: {} + {{- end }} + {{- with .Values.extraContainerVolumes }} + {{- tpl (toYaml .) $root | nindent 2 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/clusterrole.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/clusterrole.yaml new file mode 100644 index 0000000000..3af4b62b63 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/clusterrole.yaml @@ -0,0 +1,25 @@ +{{- if and .Values.rbac.create (or (not .Values.rbac.namespaced) .Values.rbac.extraClusterRoleRules) (not .Values.rbac.useExistingClusterRole) }} +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + name: {{ include "grafana.fullname" . 
}}-clusterrole +{{- if or .Values.sidecar.dashboards.enabled .Values.rbac.extraClusterRoleRules .Values.sidecar.datasources.enabled .Values.sidecar.plugins.enabled .Values.sidecar.alerts.enabled }} +rules: + {{- if or .Values.sidecar.dashboards.enabled .Values.sidecar.datasources.enabled .Values.sidecar.plugins.enabled .Values.sidecar.alerts.enabled }} + - apiGroups: [""] # "" indicates the core API group + resources: ["configmaps", "secrets"] + verbs: ["get", "watch", "list"] + {{- end}} + {{- with .Values.rbac.extraClusterRoleRules }} + {{- toYaml . | nindent 2 }} + {{- end}} +{{- else }} +rules: [] +{{- end}} +{{- end}} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/clusterrolebinding.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/clusterrolebinding.yaml new file mode 100644 index 0000000000..bda9431a2c --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/clusterrolebinding.yaml @@ -0,0 +1,24 @@ +{{- if and .Values.rbac.create (or (not .Values.rbac.namespaced) .Values.rbac.extraClusterRoleRules) }} +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: {{ include "grafana.fullname" . }}-clusterrolebinding + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +subjects: + - kind: ServiceAccount + name: {{ include "grafana.serviceAccountName" . }} + namespace: {{ include "grafana.namespace" . }} +roleRef: + kind: ClusterRole + {{- if .Values.rbac.useExistingClusterRole }} + name: {{ .Values.rbac.useExistingClusterRole }} + {{- else }} + name: {{ include "grafana.fullname" . 
}}-clusterrole + {{- end }} + apiGroup: rbac.authorization.k8s.io +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/configSecret.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/configSecret.yaml new file mode 100644 index 0000000000..55574b9bbc --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/configSecret.yaml @@ -0,0 +1,43 @@ +{{- $createConfigSecret := eq (include "grafana.shouldCreateConfigSecret" .) "true" -}} +{{- if and .Values.createConfigmap $createConfigSecret }} +{{- $files := .Files }} +{{- $root := . -}} +apiVersion: v1 +kind: Secret +metadata: + name: "{{ include "grafana.fullname" . }}-config-secret" + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +data: +{{- range $key, $value := .Values.alerting }} + {{- if (hasKey $value "secretFile") }} + {{- $key | nindent 2 }}: + {{- toYaml ( $files.Get $value.secretFile ) | b64enc | nindent 4}} + {{/* as of https://helm.sh/docs/chart_template_guide/accessing_files/ this will only work if you fork this chart and add files to it*/}} + {{- end }} +{{- end }} +stringData: +{{- range $key, $value := .Values.datasources }} +{{- if (hasKey $value "secret") }} +{{- $key | nindent 2 }}: | + {{- tpl (toYaml $value.secret | nindent 4) $root }} +{{- end }} +{{- end }} +{{- range $key, $value := .Values.notifiers }} +{{- if (hasKey $value "secret") }} +{{- $key | nindent 2 }}: | + {{- tpl (toYaml $value.secret | nindent 4) $root }} +{{- end }} +{{- end }} +{{- range $key, $value := .Values.alerting }} +{{ if (hasKey $value "secret") }} + {{- $key | nindent 2 }}: | + {{- tpl (toYaml $value.secret | nindent 4) $root }} + {{- end }} +{{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/configmap-dashboard-provider.yaml 
b/charts/kasten/k10/7.0.1101/charts/grafana/templates/configmap-dashboard-provider.yaml new file mode 100644 index 0000000000..b412c4d1f0 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/configmap-dashboard-provider.yaml @@ -0,0 +1,15 @@ +{{- if and .Values.sidecar.dashboards.enabled .Values.sidecar.dashboards.SCProvider }} +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + name: {{ include "grafana.fullname" . }}-config-dashboards + namespace: {{ include "grafana.namespace" . }} +data: + {{- include "grafana.configDashboardProviderData" . | nindent 2 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/configmap.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/configmap.yaml new file mode 100644 index 0000000000..0a2edf47e3 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/configmap.yaml @@ -0,0 +1,20 @@ +{{- if .Values.createConfigmap }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "grafana.fullname" . }} + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- if or .Values.configMapAnnotations .Values.annotations }} + annotations: + {{- with .Values.annotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.configMapAnnotations }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- end }} +data: + {{- include "grafana.configData" . 
| nindent 2 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/dashboards-json-configmap.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/dashboards-json-configmap.yaml new file mode 100644 index 0000000000..df0ed0d8c5 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/dashboards-json-configmap.yaml @@ -0,0 +1,35 @@ +{{- if .Values.dashboards }} +{{ $files := .Files }} +{{- range $provider, $dashboards := .Values.dashboards }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "grafana.fullname" $ }}-dashboards-{{ $provider }} + namespace: {{ include "grafana.namespace" $ }} + labels: + {{- include "grafana.labels" $ | nindent 4 }} + dashboard-provider: {{ $provider }} +{{- if $dashboards }} +data: +{{- $dashboardFound := false }} +{{- range $key, $value := $dashboards }} +{{- if (or (hasKey $value "json") (hasKey $value "file")) }} +{{- $dashboardFound = true }} + {{- print $key | nindent 2 }}.json: + {{- if hasKey $value "json" }} + |- + {{- $value.json | nindent 6 }} + {{- end }} + {{- if hasKey $value "file" }} + {{- toYaml ( $files.Get $value.file ) | nindent 4}} + {{- end }} +{{- end }} +{{- end }} +{{- if not $dashboardFound }} + {} +{{- end }} +{{- end }} +--- +{{- end }} + +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/deployment.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/deployment.yaml new file mode 100644 index 0000000000..c981362ad4 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/deployment.yaml @@ -0,0 +1,54 @@ +{{- if (and (not .Values.useStatefulSet) (or (not .Values.persistence.enabled) (eq .Values.persistence.type "pvc"))) }} +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ include "grafana.fullname" . }} + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.labels }} + {{- toYaml . 
| nindent 4 }} + {{- end }} + {{- with .Values.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + {{- if and (not .Values.autoscaling.enabled) (.Values.replicas) }} + replicas: {{ .Values.replicas }} + {{- end }} + revisionHistoryLimit: {{ .Values.revisionHistoryLimit }} + selector: + matchLabels: + {{- include "grafana.selectorLabels" . | nindent 6 }} + {{- with .Values.deploymentStrategy }} + strategy: + {{- toYaml . | trim | nindent 4 }} + {{- end }} + template: + metadata: + labels: + {{- include "grafana.selectorLabels" . | nindent 8 }} + {{- with .Values.podLabels }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- include "k10.azMarketPlace.billingIdentifier" . | nindent 8 }} + annotations: + checksum/config: {{ include "grafana.configData" . | sha256sum }} + {{- if .Values.dashboards }} + checksum/dashboards-json-config: {{ include (print $.Template.BasePath "/dashboards-json-configmap.yaml") . | sha256sum }} + {{- end }} + checksum/sc-dashboard-provider-config: {{ include "grafana.configDashboardProviderData" . | sha256sum }} + {{- if and (or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret))) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + checksum/secret: {{ include "grafana.secretsData" . | sha256sum }} + {{- end }} + {{- if .Values.envRenderSecret }} + checksum/secret-env: {{ tpl (toYaml .Values.envRenderSecret) . | sha256sum }} + {{- end }} + kubectl.kubernetes.io/default-container: {{ .Chart.Name }} + {{- with .Values.podAnnotations }} + {{- toYaml . | nindent 8 }} + {{- end }} + spec: + {{- include "grafana.pod" . 
| nindent 6 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/extra-manifests.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/extra-manifests.yaml new file mode 100644 index 0000000000..a9bb3b6ba8 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/extra-manifests.yaml @@ -0,0 +1,4 @@ +{{ range .Values.extraObjects }} +--- +{{ tpl (toYaml .) $ }} +{{ end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/headless-service.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/headless-service.yaml new file mode 100644 index 0000000000..3028589d32 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/headless-service.yaml @@ -0,0 +1,22 @@ +{{- $sts := list "sts" "StatefulSet" "statefulset" -}} +{{- if or .Values.headlessService (and .Values.persistence.enabled (not .Values.persistence.existingClaim) (has .Values.persistence.type $sts)) }} +apiVersion: v1 +kind: Service +metadata: + name: {{ include "grafana.fullname" . }}-headless + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + clusterIP: None + selector: + {{- include "grafana.selectorLabels" . | nindent 4 }} + type: ClusterIP + ports: + - name: {{ .Values.gossipPortName }}-tcp + port: 9094 +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/hpa.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/hpa.yaml new file mode 100644 index 0000000000..46bbcb49a2 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/hpa.yaml @@ -0,0 +1,52 @@ +{{- $sts := list "sts" "StatefulSet" "statefulset" -}} +{{- if .Values.autoscaling.enabled }} +apiVersion: {{ include "grafana.hpa.apiVersion" . }} +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "grafana.fullname" . }} + namespace: {{ include "grafana.namespace" . 
}} + labels: + app.kubernetes.io/name: {{ include "grafana.name" . }} + helm.sh/chart: {{ include "grafana.chart" . }} + app.kubernetes.io/managed-by: {{ .Release.Service }} + app.kubernetes.io/instance: {{ .Release.Name }} +spec: + scaleTargetRef: + apiVersion: apps/v1 + {{- if has .Values.persistence.type $sts }} + kind: StatefulSet + {{- else }} + kind: Deployment + {{- end }} + name: {{ include "grafana.fullname" . }} + minReplicas: {{ .Values.autoscaling.minReplicas }} + maxReplicas: {{ .Values.autoscaling.maxReplicas }} + metrics: + {{- if .Values.autoscaling.targetMemory }} + - type: Resource + resource: + name: memory + {{- if eq (include "grafana.hpa.apiVersion" .) "autoscaling/v2beta1" }} + targetAverageUtilization: {{ .Values.autoscaling.targetMemory }} + {{- else }} + target: + type: Utilization + averageUtilization: {{ .Values.autoscaling.targetMemory }} + {{- end }} + {{- end }} + {{- if .Values.autoscaling.targetCPU }} + - type: Resource + resource: + name: cpu + {{- if eq (include "grafana.hpa.apiVersion" .) "autoscaling/v2beta1" }} + targetAverageUtilization: {{ .Values.autoscaling.targetCPU }} + {{- else }} + target: + type: Utilization + averageUtilization: {{ .Values.autoscaling.targetCPU }} + {{- end }} + {{- end }} + {{- if .Values.autoscaling.behavior }} + behavior: {{ toYaml .Values.autoscaling.behavior | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-deployment.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-deployment.yaml new file mode 100644 index 0000000000..7722ede50d --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-deployment.yaml @@ -0,0 +1,200 @@ +{{ if .Values.imageRenderer.enabled }} +{{- $root := . -}} +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ include "grafana.fullname" . }}-image-renderer + namespace: {{ include "grafana.namespace" . 
}} + labels: + {{- include "grafana.imageRenderer.labels" . | nindent 4 }} + {{- with .Values.imageRenderer.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.imageRenderer.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + {{- if and (not .Values.imageRenderer.autoscaling.enabled) (.Values.imageRenderer.replicas) }} + replicas: {{ .Values.imageRenderer.replicas }} + {{- end }} + revisionHistoryLimit: {{ .Values.imageRenderer.revisionHistoryLimit }} + selector: + matchLabels: + {{- include "grafana.imageRenderer.selectorLabels" . | nindent 6 }} + + {{- with .Values.imageRenderer.deploymentStrategy }} + strategy: + {{- toYaml . | trim | nindent 4 }} + {{- end }} + template: + metadata: + labels: + {{- include "grafana.imageRenderer.selectorLabels" . | nindent 8 }} + {{- with .Values.imageRenderer.podLabels }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- include "k10.azMarketPlace.billingIdentifier" . | nindent 8 }} + annotations: + checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} + {{- with .Values.imageRenderer.podAnnotations }} + {{- toYaml . | nindent 8 }} + {{- end }} + spec: + {{- with .Values.imageRenderer.schedulerName }} + schedulerName: "{{ . }}" + {{- end }} + {{- with .Values.imageRenderer.serviceAccountName }} + serviceAccountName: "{{ . }}" + {{- end }} + {{- with .Values.imageRenderer.securityContext }} + securityContext: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.imageRenderer.hostAliases }} + hostAliases: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.imageRenderer.priorityClassName }} + priorityClassName: {{ . }} + {{- end }} + {{- with .Values.imageRenderer.image.pullSecrets }} + imagePullSecrets: + {{- range . }} + - name: {{ tpl . 
$root }} + {{- end}} + {{- end }} + containers: + - name: {{ .Chart.Name }}-image-renderer + {{- $registry := .Values.global.imageRegistry | default .Values.imageRenderer.image.registry -}} + {{- if .Values.imageRenderer.image.sha }} + image: "{{ $registry }}/{{ .Values.imageRenderer.image.repository }}:{{ .Values.imageRenderer.image.tag }}@sha256:{{ .Values.imageRenderer.image.sha }}" + {{- else }} + image: "{{ $registry }}/{{ .Values.imageRenderer.image.repository }}:{{ .Values.imageRenderer.image.tag }}" + {{- end }} + imagePullPolicy: {{ .Values.imageRenderer.image.pullPolicy }} + {{- if .Values.imageRenderer.command }} + command: + {{- range .Values.imageRenderer.command }} + - {{ . }} + {{- end }} + {{- end}} + ports: + - name: {{ .Values.imageRenderer.service.portName }} + containerPort: {{ .Values.imageRenderer.service.targetPort }} + protocol: TCP + livenessProbe: + httpGet: + path: / + port: {{ .Values.imageRenderer.service.portName }} + env: + - name: HTTP_PORT + value: {{ .Values.imageRenderer.service.targetPort | quote }} + {{- if .Values.imageRenderer.serviceMonitor.enabled }} + - name: ENABLE_METRICS + value: "true" + {{- end }} + {{- range $key, $value := .Values.imageRenderer.envValueFrom }} + - name: {{ $key | quote }} + valueFrom: + {{- tpl (toYaml $value) $ | nindent 16 }} + {{- end }} + {{- range $key, $value := .Values.imageRenderer.env }} + - name: {{ $key | quote }} + value: {{ $value | quote }} + {{- end }} + {{- with .Values.imageRenderer.containerSecurityContext }} + securityContext: + {{- toYaml . 
| nindent 12 }} + {{- end }} + volumeMounts: + - mountPath: /tmp + name: image-renderer-tmpfs + {{- range .Values.imageRenderer.extraConfigmapMounts }} + - name: {{ tpl .name $root }} + mountPath: {{ tpl .mountPath $root }} + subPath: {{ tpl (.subPath | default "") $root }} + readOnly: {{ .readOnly }} + {{- end }} + {{- range .Values.imageRenderer.extraSecretMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + readOnly: {{ .readOnly }} + subPath: {{ .subPath | default "" }} + {{- end }} + {{- range .Values.imageRenderer.extraVolumeMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath | default "" }} + readOnly: {{ .readOnly }} + {{- end }} + {{- with .Values.imageRenderer.resources }} + resources: + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.imageRenderer.nodeSelector }} + nodeSelector: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.imageRenderer.affinity }} + affinity: + {{- tpl (toYaml .) $root | nindent 8 }} + {{- end }} + {{- with .Values.imageRenderer.tolerations }} + tolerations: + {{- toYaml . | nindent 8 }} + {{- end }} + volumes: + - name: image-renderer-tmpfs + emptyDir: {} + {{- range .Values.imageRenderer.extraConfigmapMounts }} + - name: {{ tpl .name $root }} + configMap: + name: {{ tpl .configMap $root }} + {{- with .items }} + items: + {{- toYaml . | nindent 14 }} + {{- end }} + {{- end }} + {{- range .Values.imageRenderer.extraSecretMounts }} + {{- if .secretName }} + - name: {{ .name }} + secret: + secretName: {{ .secretName }} + defaultMode: {{ .defaultMode }} + {{- with .items }} + items: + {{- toYaml . 
| nindent 14 }} + {{- end }} + {{- else if .projected }} + - name: {{ .name }} + projected: + {{- toYaml .projected | nindent 12 }} + {{- else if .csi }} + - name: {{ .name }} + csi: + {{- toYaml .csi | nindent 12 }} + {{- end }} + {{- end }} + {{- range .Values.imageRenderer.extraVolumes }} + - name: {{ .name }} + {{- if .existingClaim }} + persistentVolumeClaim: + claimName: {{ .existingClaim }} + {{- else if .hostPath }} + hostPath: + {{ toYaml .hostPath | nindent 12 }} + {{- else if .csi }} + csi: + {{- toYaml .csi | nindent 12 }} + {{- else if .configMap }} + configMap: + {{- toYaml .configMap | nindent 12 }} + {{- else if .emptyDir }} + emptyDir: + {{- toYaml .emptyDir | nindent 12 }} + {{- else }} + emptyDir: {} + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-hpa.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-hpa.yaml new file mode 100644 index 0000000000..b0f0059b79 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-hpa.yaml @@ -0,0 +1,47 @@ +{{- if and .Values.imageRenderer.enabled .Values.imageRenderer.autoscaling.enabled }} +apiVersion: {{ include "grafana.hpa.apiVersion" . }} +kind: HorizontalPodAutoscaler +metadata: + name: {{ include "grafana.fullname" . }}-image-renderer + namespace: {{ include "grafana.namespace" . }} + labels: + app.kubernetes.io/name: {{ include "grafana.name" . }}-image-renderer + helm.sh/chart: {{ include "grafana.chart" . }} + app.kubernetes.io/managed-by: {{ .Release.Service }} + app.kubernetes.io/instance: {{ .Release.Name }} +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: {{ include "grafana.fullname" . 
}}-image-renderer + minReplicas: {{ .Values.imageRenderer.autoscaling.minReplicas }} + maxReplicas: {{ .Values.imageRenderer.autoscaling.maxReplicas }} + metrics: + {{- if .Values.imageRenderer.autoscaling.targetMemory }} + - type: Resource + resource: + name: memory + {{- if eq (include "grafana.hpa.apiVersion" .) "autoscaling/v2beta1" }} + targetAverageUtilization: {{ .Values.imageRenderer.autoscaling.targetMemory }} + {{- else }} + target: + type: Utilization + averageUtilization: {{ .Values.imageRenderer.autoscaling.targetMemory }} + {{- end }} + {{- end }} + {{- if .Values.imageRenderer.autoscaling.targetCPU }} + - type: Resource + resource: + name: cpu + {{- if eq (include "grafana.hpa.apiVersion" .) "autoscaling/v2beta1" }} + targetAverageUtilization: {{ .Values.imageRenderer.autoscaling.targetCPU }} + {{- else }} + target: + type: Utilization + averageUtilization: {{ .Values.imageRenderer.autoscaling.targetCPU }} + {{- end }} + {{- end }} + {{- if .Values.imageRenderer.autoscaling.behavior }} + behavior: {{ toYaml .Values.imageRenderer.autoscaling.behavior | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-network-policy.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-network-policy.yaml new file mode 100644 index 0000000000..bcbd24976c --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-network-policy.yaml @@ -0,0 +1,79 @@ +{{- if and .Values.imageRenderer.enabled .Values.imageRenderer.networkPolicy.limitIngress }} +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: {{ include "grafana.fullname" . }}-image-renderer-ingress + namespace: {{ include "grafana.namespace" . }} + annotations: + comment: Limit image-renderer ingress traffic from grafana +spec: + podSelector: + matchLabels: + {{- include "grafana.imageRenderer.selectorLabels" . 
| nindent 6 }} + {{- with .Values.imageRenderer.podLabels }} + {{- toYaml . | nindent 6 }} + {{- end }} + + policyTypes: + - Ingress + ingress: + - ports: + - port: {{ .Values.imageRenderer.service.targetPort }} + protocol: TCP + from: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: {{ include "grafana.namespace" . }} + podSelector: + matchLabels: + {{- include "grafana.selectorLabels" . | nindent 14 }} + {{- with .Values.podLabels }} + {{- toYaml . | nindent 14 }} + {{- end }} + {{- with .Values.imageRenderer.networkPolicy.extraIngressSelectors -}} + {{ toYaml . | nindent 8 }} + {{- end }} +{{- end }} + +{{- if and .Values.imageRenderer.enabled .Values.imageRenderer.networkPolicy.limitEgress }} +--- +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: {{ include "grafana.fullname" . }}-image-renderer-egress + namespace: {{ include "grafana.namespace" . }} + annotations: + comment: Limit image-renderer egress traffic to grafana +spec: + podSelector: + matchLabels: + {{- include "grafana.imageRenderer.selectorLabels" . | nindent 6 }} + {{- with .Values.imageRenderer.podLabels }} + {{- toYaml . | nindent 6 }} + {{- end }} + + policyTypes: + - Egress + egress: + # allow dns resolution + - ports: + - port: 53 + protocol: UDP + - port: 53 + protocol: TCP + # talk only to grafana + - ports: + - port: {{ .Values.service.targetPort }} + protocol: TCP + to: + - namespaceSelector: + matchLabels: + kubernetes.io/metadata.name: {{ include "grafana.namespace" . }} + podSelector: + matchLabels: + {{- include "grafana.selectorLabels" . | nindent 14 }} + {{- with .Values.podLabels }} + {{- toYaml . 
| nindent 14 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-service.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-service.yaml new file mode 100644 index 0000000000..f8da127cf8 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-service.yaml @@ -0,0 +1,31 @@ +{{- if and .Values.imageRenderer.enabled .Values.imageRenderer.service.enabled }} +apiVersion: v1 +kind: Service +metadata: + name: {{ include "grafana.fullname" . }}-image-renderer + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.imageRenderer.labels" . | nindent 4 }} + {{- with .Values.imageRenderer.service.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.imageRenderer.service.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + type: ClusterIP + {{- with .Values.imageRenderer.service.clusterIP }} + clusterIP: {{ . }} + {{- end }} + ports: + - name: {{ .Values.imageRenderer.service.portName }} + port: {{ .Values.imageRenderer.service.port }} + protocol: TCP + targetPort: {{ .Values.imageRenderer.service.targetPort }} + {{- with .Values.imageRenderer.appProtocol }} + appProtocol: {{ . }} + {{- end }} + selector: + {{- include "grafana.imageRenderer.selectorLabels" . | nindent 4 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-servicemonitor.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-servicemonitor.yaml new file mode 100644 index 0000000000..5d9f09d266 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/image-renderer-servicemonitor.yaml @@ -0,0 +1,48 @@ +{{- if .Values.imageRenderer.serviceMonitor.enabled }} +--- +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ include "grafana.fullname" . 
}}-image-renderer + {{- if .Values.imageRenderer.serviceMonitor.namespace }} + namespace: {{ tpl .Values.imageRenderer.serviceMonitor.namespace . }} + {{- else }} + namespace: {{ include "grafana.namespace" . }} + {{- end }} + labels: + {{- include "grafana.imageRenderer.labels" . | nindent 4 }} + {{- with .Values.imageRenderer.serviceMonitor.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + endpoints: + - port: {{ .Values.imageRenderer.service.portName }} + {{- with .Values.imageRenderer.serviceMonitor.interval }} + interval: {{ . }} + {{- end }} + {{- with .Values.imageRenderer.serviceMonitor.scrapeTimeout }} + scrapeTimeout: {{ . }} + {{- end }} + honorLabels: true + path: {{ .Values.imageRenderer.serviceMonitor.path }} + scheme: {{ .Values.imageRenderer.serviceMonitor.scheme }} + {{- with .Values.imageRenderer.serviceMonitor.tlsConfig }} + tlsConfig: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.imageRenderer.serviceMonitor.relabelings }} + relabelings: + {{- toYaml . | nindent 6 }} + {{- end }} + jobLabel: "{{ .Release.Name }}-image-renderer" + selector: + matchLabels: + {{- include "grafana.imageRenderer.selectorLabels" . | nindent 6 }} + namespaceSelector: + matchNames: + - {{ include "grafana.namespace" . }} + {{- with .Values.imageRenderer.serviceMonitor.targetLabels }} + targetLabels: + {{- toYaml . | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/ingress.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/ingress.yaml new file mode 100644 index 0000000000..b2ffd81095 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/ingress.yaml @@ -0,0 +1,78 @@ +{{- if .Values.ingress.enabled -}} +{{- $ingressApiIsStable := eq (include "grafana.ingress.isStable" .) "true" -}} +{{- $ingressSupportsIngressClassName := eq (include "grafana.ingress.supportsIngressClassName" .) 
"true" -}} +{{- $ingressSupportsPathType := eq (include "grafana.ingress.supportsPathType" .) "true" -}} +{{- $fullName := include "grafana.fullname" . -}} +{{- $servicePort := .Values.service.port -}} +{{- $ingressPath := .Values.ingress.path -}} +{{- $ingressPathType := .Values.ingress.pathType -}} +{{- $extraPaths := .Values.ingress.extraPaths -}} +apiVersion: {{ include "grafana.ingress.apiVersion" . }} +kind: Ingress +metadata: + name: {{ $fullName }} + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.ingress.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.ingress.annotations }} + annotations: + {{- range $key, $value := . }} + {{ $key }}: {{ tpl $value $ | quote }} + {{- end }} + {{- end }} +spec: + {{- if and $ingressSupportsIngressClassName .Values.ingress.ingressClassName }} + ingressClassName: {{ .Values.ingress.ingressClassName }} + {{- end -}} + {{- with .Values.ingress.tls }} + tls: + {{- tpl (toYaml .) $ | nindent 4 }} + {{- end }} + rules: + {{- if .Values.ingress.hosts }} + {{- range .Values.ingress.hosts }} + - host: {{ tpl . $ | quote }} + http: + paths: + {{- with $extraPaths }} + {{- toYaml . | nindent 10 }} + {{- end }} + - path: {{ $ingressPath }} + {{- if $ingressSupportsPathType }} + pathType: {{ $ingressPathType }} + {{- end }} + backend: + {{- if $ingressApiIsStable }} + service: + name: {{ $fullName }} + port: + number: {{ $servicePort }} + {{- else }} + serviceName: {{ $fullName }} + servicePort: {{ $servicePort }} + {{- end }} + {{- end }} + {{- else }} + - http: + paths: + - backend: + {{- if $ingressApiIsStable }} + service: + name: {{ $fullName }} + port: + number: {{ $servicePort }} + {{- else }} + serviceName: {{ $fullName }} + servicePort: {{ $servicePort }} + {{- end }} + {{- with $ingressPath }} + path: {{ . 
}} + {{- end }} + {{- if $ingressSupportsPathType }} + pathType: {{ $ingressPathType }} + {{- end }} + {{- end -}} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/networkpolicy.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/networkpolicy.yaml new file mode 100644 index 0000000000..4cd3ed6976 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/networkpolicy.yaml @@ -0,0 +1,61 @@ +{{- if .Values.networkPolicy.enabled }} +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: {{ include "grafana.fullname" . }} + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + policyTypes: + {{- if .Values.networkPolicy.ingress }} + - Ingress + {{- end }} + {{- if .Values.networkPolicy.egress.enabled }} + - Egress + {{- end }} + podSelector: + matchLabels: + {{- include "grafana.selectorLabels" . | nindent 6 }} + + {{- if .Values.networkPolicy.egress.enabled }} + egress: + {{- if not .Values.networkPolicy.egress.blockDNSResolution }} + - ports: + - port: 53 + protocol: UDP + {{- end }} + - ports: + {{ .Values.networkPolicy.egress.ports | toJson }} + {{- with .Values.networkPolicy.egress.to }} + to: + {{- toYaml . | nindent 12 }} + {{- end }} + {{- end }} + {{- if .Values.networkPolicy.ingress }} + ingress: + - ports: + - port: {{ .Values.service.targetPort }} + {{- if not .Values.networkPolicy.allowExternal }} + from: + - podSelector: + matchLabels: + {{ include "grafana.fullname" . }}-client: "true" + {{- with .Values.networkPolicy.explicitNamespacesSelector }} + - namespaceSelector: + {{- toYaml . | nindent 12 }} + {{- end }} + - podSelector: + matchLabels: + {{- include "grafana.labels" . 
| nindent 14 }} + role: read + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/poddisruptionbudget.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/poddisruptionbudget.yaml new file mode 100644 index 0000000000..05251214ac --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/poddisruptionbudget.yaml @@ -0,0 +1,22 @@ +{{- if .Values.podDisruptionBudget }} +apiVersion: {{ include "grafana.podDisruptionBudget.apiVersion" . }} +kind: PodDisruptionBudget +metadata: + name: {{ include "grafana.fullname" . }} + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + {{- with .Values.podDisruptionBudget.minAvailable }} + minAvailable: {{ . }} + {{- end }} + {{- with .Values.podDisruptionBudget.maxUnavailable }} + maxUnavailable: {{ . }} + {{- end }} + selector: + matchLabels: + {{- include "grafana.selectorLabels" . | nindent 6 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/podsecuritypolicy.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/podsecuritypolicy.yaml new file mode 100644 index 0000000000..eed7af95b5 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/podsecuritypolicy.yaml @@ -0,0 +1,49 @@ +{{- if and .Values.rbac.pspEnabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: {{ include "grafana.fullname" . }} + labels: + {{- include "grafana.labels" . 
| nindent 4 }} + annotations: + seccomp.security.alpha.kubernetes.io/allowedProfileNames: 'docker/default,runtime/default' + seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default' + {{- if .Values.rbac.pspUseAppArmor }} + apparmor.security.beta.kubernetes.io/allowedProfileNames: 'runtime/default' + apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default' + {{- end }} +spec: + privileged: false + allowPrivilegeEscalation: false + requiredDropCapabilities: + # Default set from Docker, with DAC_OVERRIDE and CHOWN + - ALL + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'csi' + - 'secret' + - 'downwardAPI' + - 'persistentVolumeClaim' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + rule: 'RunAsAny' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/pvc.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/pvc.yaml new file mode 100644 index 0000000000..d1c4b2de27 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/pvc.yaml @@ -0,0 +1,39 @@ +{{- if and (not .Values.useStatefulSet) .Values.persistence.enabled (not .Values.persistence.existingClaim) (eq .Values.persistence.type "pvc")}} +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + name: {{ include "grafana.fullname" . }} + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.persistence.extraPvcLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.persistence.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.persistence.finalizers }} + finalizers: + {{- toYaml . 
| nindent 4 }} + {{- end }} +spec: + accessModes: + {{- range .Values.persistence.accessModes }} + - {{ . | quote }} + {{- end }} + resources: + requests: + storage: {{ .Values.persistence.size | quote }} + {{- if and (.Values.persistence.lookupVolumeName) (lookup "v1" "PersistentVolumeClaim" (include "grafana.namespace" .) (include "grafana.fullname" .)) }} + volumeName: {{ (lookup "v1" "PersistentVolumeClaim" (include "grafana.namespace" .) (include "grafana.fullname" .)).spec.volumeName }} + {{- end }} + {{- with .Values.persistence.storageClassName }} + storageClassName: {{ . }} + {{- end }} + {{- with .Values.persistence.selectorLabels }} + selector: + matchLabels: + {{- toYaml . | nindent 6 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/role.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/role.yaml new file mode 100644 index 0000000000..4b5edd978c --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/role.yaml @@ -0,0 +1,32 @@ +{{- if and .Values.rbac.create (not .Values.rbac.useExistingRole) -}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: {{ include "grafana.fullname" . }} + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +{{- if or .Values.rbac.pspEnabled (and .Values.rbac.namespaced (or .Values.sidecar.dashboards.enabled .Values.sidecar.datasources.enabled .Values.sidecar.plugins.enabled .Values.rbac.extraRoleRules)) }} +rules: + {{- if and .Values.rbac.pspEnabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} + - apiGroups: ['extensions'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: [{{ include "grafana.fullname" . 
}}] + {{- end }} + {{- if and .Values.rbac.namespaced (or .Values.sidecar.dashboards.enabled .Values.sidecar.datasources.enabled .Values.sidecar.plugins.enabled) }} + - apiGroups: [""] # "" indicates the core API group + resources: ["configmaps", "secrets"] + verbs: ["get", "watch", "list"] + {{- end }} + {{- with .Values.rbac.extraRoleRules }} + {{- toYaml . | nindent 2 }} + {{- end}} +{{- else }} +rules: [] +{{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/rolebinding.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/rolebinding.yaml new file mode 100644 index 0000000000..58f77c6b0b --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/rolebinding.yaml @@ -0,0 +1,25 @@ +{{- if .Values.rbac.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: {{ include "grafana.fullname" . }} + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + {{- if .Values.rbac.useExistingRole }} + name: {{ .Values.rbac.useExistingRole }} + {{- else }} + name: {{ include "grafana.fullname" . }} + {{- end }} +subjects: +- kind: ServiceAccount + name: {{ include "grafana.serviceAccountName" . }} + namespace: {{ include "grafana.namespace" . }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/secret-env.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/secret-env.yaml new file mode 100644 index 0000000000..eb14aac707 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/secret-env.yaml @@ -0,0 +1,14 @@ +{{- if .Values.envRenderSecret }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "grafana.fullname" . }}-env + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . 
| nindent 4 }} +type: Opaque +data: +{{- range $key, $val := .Values.envRenderSecret }} + {{ $key }}: {{ tpl ($val | toString) $ | b64enc | quote }} +{{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/secret.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/secret.yaml new file mode 100644 index 0000000000..fd2ca50f4b --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/secret.yaml @@ -0,0 +1,16 @@ +{{- if or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret)) }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "grafana.fullname" . }} + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +type: Opaque +data: + {{- include "grafana.secretsData" . | nindent 2 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/service.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/service.yaml new file mode 100644 index 0000000000..022328c114 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/service.yaml @@ -0,0 +1,67 @@ +{{- if .Values.service.enabled }} +{{- $root := . }} +apiVersion: v1 +kind: Service +metadata: + name: {{ include "grafana.fullname" . }} + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.service.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.service.annotations }} + annotations: + {{- tpl (toYaml . 
| nindent 4) $root }} + {{- end }} +spec: + {{- if (or (eq .Values.service.type "ClusterIP") (empty .Values.service.type)) }} + type: ClusterIP + {{- with .Values.service.clusterIP }} + clusterIP: {{ . }} + {{- end }} + {{- else if eq .Values.service.type "LoadBalancer" }} + type: LoadBalancer + {{- with .Values.service.loadBalancerIP }} + loadBalancerIP: {{ . }} + {{- end }} + {{- with .Values.service.loadBalancerClass }} + loadBalancerClass: {{ . }} + {{- end }} + {{- with .Values.service.loadBalancerSourceRanges }} + loadBalancerSourceRanges: + {{- toYaml . | nindent 4 }} + {{- end }} + {{- else }} + type: {{ .Values.service.type }} + {{- end }} + {{- if .Values.service.ipFamilyPolicy }} + ipFamilyPolicy: {{ .Values.service.ipFamilyPolicy }} + {{- end }} + {{- if .Values.service.ipFamilies }} + ipFamilies: {{ .Values.service.ipFamilies | toYaml | nindent 2 }} + {{- end }} + {{- with .Values.service.externalIPs }} + externalIPs: + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.service.externalTrafficPolicy }} + externalTrafficPolicy: {{ . }} + {{- end }} + ports: + - name: {{ .Values.service.portName }} + port: {{ .Values.service.port }} + protocol: TCP + targetPort: {{ .Values.service.targetPort }} + {{- with .Values.service.appProtocol }} + appProtocol: {{ . }} + {{- end }} + {{- if (and (eq .Values.service.type "NodePort") (not (empty .Values.service.nodePort))) }} + nodePort: {{ .Values.service.nodePort }} + {{- end }} + {{- with .Values.extraExposePorts }} + {{- tpl (toYaml . | nindent 4) $root }} + {{- end }} + selector: + {{- include "grafana.selectorLabels" . 
| nindent 4 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/serviceaccount.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/serviceaccount.yaml new file mode 100644 index 0000000000..ffca0717ae --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/serviceaccount.yaml @@ -0,0 +1,17 @@ +{{- if .Values.serviceAccount.create }} +apiVersion: v1 +kind: ServiceAccount +automountServiceAccountToken: {{ .Values.serviceAccount.autoMount | default .Values.serviceAccount.automountServiceAccountToken }} +metadata: + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.serviceAccount.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.serviceAccount.annotations }} + annotations: + {{- tpl (toYaml . | nindent 4) $ }} + {{- end }} + name: {{ include "grafana.serviceAccountName" . }} + namespace: {{ include "grafana.namespace" . }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/servicemonitor.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/servicemonitor.yaml new file mode 100644 index 0000000000..0359013520 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/servicemonitor.yaml @@ -0,0 +1,52 @@ +{{- if .Values.serviceMonitor.enabled }} +--- +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ include "grafana.fullname" . }} + {{- if .Values.serviceMonitor.namespace }} + namespace: {{ tpl .Values.serviceMonitor.namespace . }} + {{- else }} + namespace: {{ include "grafana.namespace" . }} + {{- end }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.serviceMonitor.labels }} + {{- tpl (toYaml . | nindent 4) $ }} + {{- end }} +spec: + endpoints: + - port: {{ .Values.service.portName }} + {{- with .Values.serviceMonitor.interval }} + interval: {{ . }} + {{- end }} + {{- with .Values.serviceMonitor.scrapeTimeout }} + scrapeTimeout: {{ . 
}} + {{- end }} + honorLabels: true + path: {{ .Values.serviceMonitor.path }} + scheme: {{ .Values.serviceMonitor.scheme }} + {{- with .Values.serviceMonitor.tlsConfig }} + tlsConfig: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.serviceMonitor.relabelings }} + relabelings: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.serviceMonitor.metricRelabelings }} + metricRelabelings: + {{- toYaml . | nindent 6 }} + {{- end }} + jobLabel: "{{ .Release.Name }}" + selector: + matchLabels: + {{- include "grafana.selectorLabels" . | nindent 6 }} + namespaceSelector: + matchNames: + - {{ include "grafana.namespace" . }} + {{- with .Values.serviceMonitor.targetLabels }} + targetLabels: + {{- toYaml . | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/statefulset.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/statefulset.yaml new file mode 100644 index 0000000000..7546c18876 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/statefulset.yaml @@ -0,0 +1,58 @@ +{{- $sts := list "sts" "StatefulSet" "statefulset" -}} +{{- if (or (.Values.useStatefulSet) (and .Values.persistence.enabled (not .Values.persistence.existingClaim) (has .Values.persistence.type $sts)))}} +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: {{ include "grafana.fullname" . }} + namespace: {{ include "grafana.namespace" . }} + labels: + {{- include "grafana.labels" . | nindent 4 }} + {{- with .Values.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + replicas: {{ .Values.replicas }} + selector: + matchLabels: + {{- include "grafana.selectorLabels" . | nindent 6 }} + serviceName: {{ include "grafana.fullname" . }}-headless + template: + metadata: + labels: + {{- include "grafana.selectorLabels" . | nindent 8 }} + {{- with .Values.podLabels }} + {{- toYaml . 
| nindent 8 }} + {{- end }} + annotations: + checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} + checksum/dashboards-json-config: {{ include (print $.Template.BasePath "/dashboards-json-configmap.yaml") . | sha256sum }} + checksum/sc-dashboard-provider-config: {{ include (print $.Template.BasePath "/configmap-dashboard-provider.yaml") . | sha256sum }} + {{- if and (or (and (not .Values.admin.existingSecret) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD__FILE) (not .Values.env.GF_SECURITY_ADMIN_PASSWORD)) (and .Values.ldap.enabled (not .Values.ldap.existingSecret))) (not .Values.env.GF_SECURITY_DISABLE_INITIAL_ADMIN_CREATION) }} + checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }} + {{- end }} + kubectl.kubernetes.io/default-container: {{ .Chart.Name }} + {{- with .Values.podAnnotations }} + {{- toYaml . | nindent 8 }} + {{- end }} + spec: + {{- include "grafana.pod" . | nindent 6 }} + {{- if .Values.persistence.enabled}} + volumeClaimTemplates: + - apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: storage + spec: + accessModes: {{ .Values.persistence.accessModes }} + storageClassName: {{ .Values.persistence.storageClassName }} + resources: + requests: + storage: {{ .Values.persistence.size }} + {{- with .Values.persistence.selectorLabels }} + selector: + matchLabels: + {{- toYaml . | nindent 10 }} + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-configmap.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-configmap.yaml new file mode 100644 index 0000000000..1e81bee90b --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-configmap.yaml @@ -0,0 +1,20 @@ +{{- if .Values.testFramework.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "grafana.fullname" . }}-test + namespace: {{ include "grafana.namespace" . 
}} + annotations: + "helm.sh/hook": test + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded" + labels: + {{- include "grafana.labels" . | nindent 4 }} +data: + run.sh: |- + @test "Test Health" { + url="http://{{ include "grafana.fullname" . }}/api/health" + + code=$(wget --server-response --spider --timeout 90 --tries 10 ${url} 2>&1 | awk '/^ HTTP/{print $2}') + [ "$code" == "200" ] + } +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-podsecuritypolicy.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-podsecuritypolicy.yaml new file mode 100644 index 0000000000..c13a3bf668 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-podsecuritypolicy.yaml @@ -0,0 +1,32 @@ +{{- if and (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") .Values.testFramework.enabled .Values.rbac.pspEnabled }} +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: {{ include "grafana.fullname" . }}-test + annotations: + "helm.sh/hook": test + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded" + labels: + {{- include "grafana.labels" . 
| nindent 4 }} +spec: + allowPrivilegeEscalation: true + privileged: false + hostNetwork: false + hostIPC: false + hostPID: false + fsGroup: + rule: RunAsAny + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + runAsUser: + rule: RunAsAny + volumes: + - configMap + - downwardAPI + - emptyDir + - projected + - csi + - secret +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-role.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-role.yaml new file mode 100644 index 0000000000..75dddfdd3d --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-role.yaml @@ -0,0 +1,17 @@ +{{- if and (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") .Values.testFramework.enabled .Values.rbac.pspEnabled }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: {{ include "grafana.fullname" . }}-test + namespace: {{ include "grafana.namespace" . }} + annotations: + "helm.sh/hook": test + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded" + labels: + {{- include "grafana.labels" . | nindent 4 }} +rules: + - apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: [{{ include "grafana.fullname" . }}-test] +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-rolebinding.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-rolebinding.yaml new file mode 100644 index 0000000000..c0d2d39efa --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-rolebinding.yaml @@ -0,0 +1,20 @@ +{{- if and (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") .Values.testFramework.enabled .Values.rbac.pspEnabled }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: {{ include "grafana.fullname" . }}-test + namespace: {{ include "grafana.namespace" . 
}} + annotations: + "helm.sh/hook": test + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded" + labels: + {{- include "grafana.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: {{ include "grafana.fullname" . }}-test +subjects: + - kind: ServiceAccount + name: {{ include "grafana.serviceAccountNameTest" . }} + namespace: {{ include "grafana.namespace" . }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-serviceaccount.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-serviceaccount.yaml new file mode 100644 index 0000000000..7af8982723 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test-serviceaccount.yaml @@ -0,0 +1,12 @@ +{{- if and .Values.testFramework.enabled .Values.serviceAccount.create }} +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + {{- include "grafana.labels" . | nindent 4 }} + name: {{ include "grafana.serviceAccountNameTest" . }} + namespace: {{ include "grafana.namespace" . }} + annotations: + "helm.sh/hook": test + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded" +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test.yaml new file mode 100644 index 0000000000..2484a96da0 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/templates/tests/test.yaml @@ -0,0 +1,53 @@ +{{- if .Values.testFramework.enabled }} +{{- $root := . }} +apiVersion: v1 +kind: Pod +metadata: + name: {{ include "grafana.fullname" . }}-test + labels: + {{- include "grafana.labels" . | nindent 4 }} + annotations: + "helm.sh/hook": test + "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded" + namespace: {{ include "grafana.namespace" . }} +spec: + serviceAccountName: {{ include "grafana.serviceAccountNameTest" . 
}} + {{- with .Values.testFramework.securityContext }} + securityContext: + {{- toYaml . | nindent 4 }} + {{- end }} + {{- if or .Values.image.pullSecrets .Values.global.imagePullSecrets }} + imagePullSecrets: + {{- include "grafana.imagePullSecrets" (dict "root" $root "imagePullSecrets" .Values.image.pullSecrets) | nindent 4 }} + {{- end }} + {{- with .Values.nodeSelector }} + nodeSelector: + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.affinity }} + affinity: + {{- tpl (toYaml .) $root | nindent 4 }} + {{- end }} + {{- with .Values.tolerations }} + tolerations: + {{- toYaml . | nindent 4 }} + {{- end }} + containers: + - name: {{ .Release.Name }}-test + image: "{{ .Values.global.imageRegistry | default .Values.testFramework.image.registry }}/{{ .Values.testFramework.image.repository }}:{{ .Values.testFramework.image.tag }}" + imagePullPolicy: "{{ .Values.testFramework.imagePullPolicy}}" + command: ["/opt/bats/bin/bats", "-t", "/tests/run.sh"] + volumeMounts: + - mountPath: /tests + name: tests + readOnly: true + {{- with .Values.testFramework.resources }} + resources: + {{- toYaml . | nindent 8 }} + {{- end }} + volumes: + - name: tests + configMap: + name: {{ include "grafana.fullname" . }}-test + restartPolicy: Never +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/grafana/values.yaml b/charts/kasten/k10/7.0.1101/charts/grafana/values.yaml new file mode 100644 index 0000000000..51e94e01ff --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/grafana/values.yaml @@ -0,0 +1,1386 @@ +global: + # -- Overrides the Docker registry globally for all images + imageRegistry: null + + # To help compatibility with other charts which use global.imagePullSecrets. + # Allow either an array of {name: pullSecret} maps (k8s-style), or an array of strings (more common helm-style). + # Can be templated. 
+ # global: + # imagePullSecrets: + # - name: pullSecret1 + # - name: pullSecret2 + # or + # global: + # imagePullSecrets: + # - pullSecret1 + # - pullSecret2 + imagePullSecrets: [] + +rbac: + create: true + ## Use an existing ClusterRole/Role (depending on rbac.namespaced false/true) + # useExistingRole: name-of-some-role + # useExistingClusterRole: name-of-some-clusterRole + pspEnabled: false + pspUseAppArmor: false + namespaced: false + extraRoleRules: [] + # - apiGroups: [] + # resources: [] + # verbs: [] + extraClusterRoleRules: [] + # - apiGroups: [] + # resources: [] + # verbs: [] +serviceAccount: + create: true + name: + nameTest: + ## ServiceAccount labels. + labels: {} + ## Service account annotations. Can be templated. + # annotations: + # eks.amazonaws.com/role-arn: arn:aws:iam::123456789000:role/iam-role-name-here + + ## autoMount is deprecated in favor of automountServiceAccountToken + # autoMount: false + automountServiceAccountToken: false + +replicas: 1 + +## Create a headless service for the deployment +headlessService: false + +## Should the service account be auto mounted on the pod +automountServiceAccountToken: true + +## Create HorizontalPodAutoscaler object for deployment type +# +autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 5 + targetCPU: "60" + targetMemory: "" + behavior: {} + +## See `kubectl explain poddisruptionbudget.spec` for more +## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ +podDisruptionBudget: {} +# apiVersion: "" +# minAvailable: 1 +# maxUnavailable: 1 + +## See `kubectl explain deployment.spec.strategy` for more +## ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy +deploymentStrategy: + type: RollingUpdate + +readinessProbe: + httpGet: + path: /api/health + port: 3000 + +livenessProbe: + httpGet: + path: /api/health + port: 3000 + initialDelaySeconds: 60 + timeoutSeconds: 30 + failureThreshold: 10 + +## Use an alternate scheduler, e.g. "stork". 
+## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ +## +# schedulerName: "default-scheduler" + +image: + # -- The Docker registry + registry: docker.io + # -- Docker image repository + repository: grafana/grafana + # Overrides the Grafana image tag whose default is the chart appVersion + tag: "" + sha: "" + pullPolicy: IfNotPresent + + ## Optionally specify an array of imagePullSecrets. + ## Secrets must be manually created in the namespace. + ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + ## Can be templated. + ## + pullSecrets: [] + # - myRegistrKeySecretName + +testFramework: + enabled: true + image: + # -- The Docker registry + registry: docker.io + repository: bats/bats + tag: "v1.4.1" + imagePullPolicy: IfNotPresent + securityContext: {} + resources: {} + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + +# dns configuration for pod +dnsPolicy: ~ +dnsConfig: {} + # nameservers: + # - 8.8.8.8 + # options: + # - name: ndots + # value: "2" + # - name: edns0 + +securityContext: + runAsNonRoot: true + runAsUser: 472 + runAsGroup: 472 + fsGroup: 472 + +containerSecurityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + seccompProfile: + type: RuntimeDefault + +# Enable creating the grafana configmap +createConfigmap: true + +# Extra configmaps to mount in grafana pods +# Values are templated. +extraConfigmapMounts: [] + # - name: certs-configmap + # mountPath: /etc/grafana/ssl/ + # subPath: certificates.crt # (optional) + # configMap: certs-configmap + # readOnly: true + # optional: false + + +extraEmptyDirMounts: [] + # - name: provisioning-notifiers + # mountPath: /etc/grafana/provisioning/notifiers + + +# Apply extra labels to common labels. 
+extraLabels: {} + +## Assign a PriorityClassName to pods if set +# priorityClassName: + +downloadDashboardsImage: + # -- The Docker registry + registry: docker.io + repository: curlimages/curl + tag: 7.85.0 + sha: "" + pullPolicy: IfNotPresent + +downloadDashboards: + env: {} + envFromSecret: "" + resources: {} + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + seccompProfile: + type: RuntimeDefault + envValueFrom: {} + # ENV_NAME: + # configMapKeyRef: + # name: configmap-name + # key: value_key + +## Pod Annotations +# podAnnotations: {} + +## ConfigMap Annotations +# configMapAnnotations: {} + # argocd.argoproj.io/sync-options: Replace=true + +## Pod Labels +# podLabels: {} + +podPortName: grafana +gossipPortName: gossip +## Deployment annotations +# annotations: {} + +## Expose the grafana service to be accessed from outside the cluster (LoadBalancer service). +## or access it from within the cluster (ClusterIP service). Set the service type and the port to serve it. +## ref: http://kubernetes.io/docs/user-guide/services/ +## +service: + enabled: true + type: ClusterIP + # Set the ip family policy to configure dual-stack see [Configure dual-stack](https://kubernetes.io/docs/concepts/services-networking/dual-stack/#services) + ipFamilyPolicy: "" + # Sets the families that should be supported and the order in which they should be applied to ClusterIP as well. Can be IPv4 and/or IPv6. + ipFamilies: [] + loadBalancerIP: "" + loadBalancerClass: "" + loadBalancerSourceRanges: [] + port: 80 + targetPort: 3000 + # targetPort: 4181 To be used with a proxy extraContainer + ## Service annotations. Can be templated. + annotations: {} + labels: {} + portName: service + # Adds the appProtocol field to the service. This allows to work with istio protocol selection. 
Ex: "http" or "tcp" + appProtocol: "" + +serviceMonitor: + ## If true, a ServiceMonitor CR is created for a prometheus operator + ## https://github.com/coreos/prometheus-operator + ## + enabled: false + path: /metrics + # namespace: monitoring (defaults to use the namespace this chart is deployed to) + labels: {} + interval: 30s + scheme: http + tlsConfig: {} + scrapeTimeout: 30s + relabelings: [] + metricRelabelings: [] + targetLabels: [] + +extraExposePorts: [] + # - name: keycloak + # port: 8080 + # targetPort: 8080 + +# overrides pod.spec.hostAliases in the grafana deployment's pods +hostAliases: [] + # - ip: "1.2.3.4" + # hostnames: + # - "my.host.com" + +ingress: + enabled: false + # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName + # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress + # ingressClassName: nginx + # Values can be templated + annotations: {} + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: "true" + labels: {} + path: / + + # pathType is only for k8s >= 1.18 + pathType: Prefix + + hosts: + - chart-example.local + ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services.
+ extraPaths: [] + # - path: /* + # backend: + # serviceName: ssl-redirect + # servicePort: use-annotation + ## Or for k8s > 1.19 + # - path: /* + # pathType: Prefix + # backend: + # service: + # name: ssl-redirect + # port: + # name: use-annotation + + + tls: [] + # - secretName: chart-example-tls + # hosts: + # - chart-example.local + +resources: {} +# limits: +# cpu: 100m +# memory: 128Mi +# requests: +# cpu: 100m +# memory: 128Mi + +## Node labels for pod assignment +## ref: https://kubernetes.io/docs/user-guide/node-selection/ +# +nodeSelector: {} + +## Tolerations for pod assignment +## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ +## +tolerations: [] + +## Affinity for pod assignment (evaluated as template) +## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity +## +affinity: {} + +## Topology Spread Constraints +## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ +## +topologySpreadConstraints: [] + +## Additional init containers (evaluated as template) +## ref: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/ +## +extraInitContainers: [] + +## Enable and specify containers in extraContainers.
This is meant to allow adding an authentication proxy to a grafana pod +extraContainers: "" +# extraContainers: | +# - name: proxy +# image: quay.io/gambol99/keycloak-proxy:latest +# args: +# - -provider=github +# - -client-id= +# - -client-secret= +# - -github-org= +# - -email-domain=* +# - -cookie-secret= +# - -http-address=http://0.0.0.0:4181 +# - -upstream-url=http://127.0.0.1:3000 +# ports: +# - name: proxy-web +# containerPort: 4181 + +## Volumes that can be used in init containers that will not be mounted to deployment pods +extraContainerVolumes: [] +# - name: volume-from-secret +# secret: +# secretName: secret-to-mount +# - name: empty-dir-volume +# emptyDir: {} + +## Enable persistence using Persistent Volume Claims +## ref: https://kubernetes.io/docs/user-guide/persistent-volumes/ +## +persistence: + type: pvc + enabled: false + # storageClassName: default + accessModes: + - ReadWriteOnce + size: 10Gi + # annotations: {} + finalizers: + - kubernetes.io/pvc-protection + # selectorLabels: {} + ## Sub-directory of the PV to mount. Can be templated. + # subPath: "" + ## Name of an existing PVC. Can be templated. + # existingClaim: + ## Extra labels to apply to a PVC. + extraPvcLabels: {} + disableWarning: false + + ## If persistence is not enabled, this allows to mount the + ## local storage in-memory to improve performance + ## + inMemory: + enabled: false + ## The maximum usage on memory medium EmptyDir would be + ## the minimum value between the SizeLimit specified + ## here and the sum of memory limits of all containers in a pod + ## + # sizeLimit: 300Mi + + ## If 'lookupVolumeName' is set to true, Helm will attempt to retrieve + ## the current value of 'spec.volumeName' and incorporate it into the template. 
+ lookupVolumeName: true + +initChownData: + ## If false, data ownership will not be reset at startup + ## This allows the grafana-server to be run with an arbitrary user + ## + enabled: true + + ## initChownData container image + ## + image: + # -- The Docker registry + registry: docker.io + repository: library/busybox + tag: "1.31.1" + sha: "" + pullPolicy: IfNotPresent + + ## initChownData resource requests and limits + ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## + resources: {} + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + securityContext: + runAsNonRoot: false + runAsUser: 0 + seccompProfile: + type: RuntimeDefault + capabilities: + add: + - CHOWN + +# Administrator credentials when not using an existing secret (see below) +adminUser: admin +# adminPassword: strongpassword + +# Use an existing secret for the admin user. +admin: + ## Name of the secret. Can be templated. + existingSecret: "" + userKey: admin-user + passwordKey: admin-password + +## Define command to be executed at startup by grafana container +## Needed if using `vault-env` to manage secrets (ref: https://banzaicloud.com/blog/inject-secrets-into-pods-vault/) +## Default is "run.sh" as defined in grafana's Dockerfile +# command: +# - "sh" +# - "/run.sh" + +## Optionally define args if command is used +## Needed if using `hashicorp/envconsul` to manage secrets +## By default no arguments are set +# args: +# - "-secret" +# - "secret/grafana" +# - "./grafana" + +## Extra environment variables that will be pass onto deployment pods +## +## to provide grafana with access to CloudWatch on AWS EKS: +## 1. create an iam role of type "Web identity" with provider oidc.eks.* (note the provider for later) +## 2. 
edit the "Trust relationships" of the role, add a line inside the StringEquals clause using the +## same oidc eks provider as noted before (same as the existing line) +## also, replace NAMESPACE and prometheus-operator-grafana with the service account namespace and name +## +## "oidc.eks.us-east-1.amazonaws.com/id/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX:sub": "system:serviceaccount:NAMESPACE:prometheus-operator-grafana", +## +## 3. attach a policy to the role, you can use a built in policy called CloudWatchReadOnlyAccess +## 4. use the following env: (replace 123456789000 and iam-role-name-here with your aws account number and role name) +## +## env: +## AWS_ROLE_ARN: arn:aws:iam::123456789000:role/iam-role-name-here +## AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token +## AWS_REGION: us-east-1 +## +## 5. uncomment the EKS section in extraSecretMounts: below +## 6. uncomment the annotation section in the serviceAccount: above +## make sure to replace arn:aws:iam::123456789000:role/iam-role-name-here with your role arn + +env: {} + +## "valueFrom" environment variable references that will be added to deployment pods. Name is templated. +## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#envvarsource-v1-core +## Renders in container spec as: +## env: +## ... +## - name: +## valueFrom: +## +envValueFrom: {} + # ENV_NAME: + # configMapKeyRef: + # name: configmap-name + # key: value_key + +## The name of a secret in the same kubernetes namespace which contain values to be added to the environment +## This can be useful for auth tokens, etc. Value is templated. +envFromSecret: "" + +## Sensible environment variables that will be rendered as new secret object +## This can be useful for auth tokens, etc. 
+## If the secret values contain "{{", they'll need to be properly escaped so that they are not interpreted by Helm +## ref: https://helm.sh/docs/howto/charts_tips_and_tricks/#using-the-tpl-function +envRenderSecret: {} + +## The names of secrets in the same kubernetes namespace which contain values to be added to the environment +## Each entry should contain a name key, and can optionally specify whether the secret must be defined with an optional key. +## Name is templated. +envFromSecrets: [] +## - name: secret-name +## prefix: prefix +## optional: true + +## The names of configmaps in the same kubernetes namespace which contain values to be added to the environment +## Each entry should contain a name key, and can optionally specify whether the configmap must be defined with an optional key. +## Name is templated. +## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#configmapenvsource-v1-core +envFromConfigMaps: [] +## - name: configmap-name +## prefix: prefix +## optional: true + +# Inject Kubernetes services as environment variables. +# See https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#environment-variables +enableServiceLinks: true + +## Additional grafana server secret mounts +# Defines additional mounts with secrets. Secrets must be manually created in the namespace. +extraSecretMounts: [] + # - name: secret-files + # mountPath: /etc/secrets + # secretName: grafana-secret-files + # readOnly: true + # optional: false + # subPath: "" + # + # for AWS EKS (cloudwatch) use the following (see also instruction in env: above) + # - name: aws-iam-token + # mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount + # readOnly: true + # projected: + # defaultMode: 420 + # sources: + # - serviceAccountToken: + # audience: sts.amazonaws.com + # expirationSeconds: 86400 + # path: token + # + # for CSI e.g.
Azure Key Vault use the following + # - name: secrets-store-inline + # mountPath: /run/secrets + # readOnly: true + # csi: + # driver: secrets-store.csi.k8s.io + # readOnly: true + # volumeAttributes: + # secretProviderClass: "akv-grafana-spc" + # nodePublishSecretRef: # Only required when using service principal mode + # name: grafana-akv-creds # Only required when using service principal mode + +## Additional grafana server volume mounts +# Defines additional volume mounts. +extraVolumeMounts: [] + # - name: extra-volume-0 + # mountPath: /mnt/volume0 + # readOnly: true + # - name: extra-volume-1 + # mountPath: /mnt/volume1 + # readOnly: true + # - name: grafana-secrets + # mountPath: /mnt/volume2 + +## Additional Grafana server volumes +extraVolumes: [] + # - name: extra-volume-0 + # existingClaim: volume-claim + # - name: extra-volume-1 + # hostPath: + # path: /usr/shared/ + # type: "" + # - name: grafana-secrets + # csi: + # driver: secrets-store.csi.k8s.io + # readOnly: true + # volumeAttributes: + # secretProviderClass: "grafana-env-spc" + +## Container Lifecycle Hooks. Execute a specific bash command or make an HTTP request +lifecycleHooks: {} + # postStart: + # exec: + # command: [] + +## Pass the plugins you want installed as a list. +## +plugins: [] + # - digrich-bubblechart-panel + # - grafana-clock-panel + ## You can also use other plugin download URL, as long as they are valid zip files, + ## and specify the name of the plugin after the semicolon. 
Like this: + # - https://grafana.com/api/plugins/marcusolsson-json-datasource/versions/1.3.2/download;marcusolsson-json-datasource + +## Configure grafana datasources +## ref: http://docs.grafana.org/administration/provisioning/#datasources +## +datasources: {} +# datasources.yaml: +# apiVersion: 1 +# datasources: +# - name: Prometheus +# type: prometheus +# url: http://prometheus-prometheus-server +# access: proxy +# isDefault: true +# - name: CloudWatch +# type: cloudwatch +# access: proxy +# uid: cloudwatch +# editable: false +# jsonData: +# authType: default +# defaultRegion: us-east-1 +# deleteDatasources: [] +# - name: Prometheus + +## Configure grafana alerting (can be templated) +## ref: http://docs.grafana.org/administration/provisioning/#alerting +## +alerting: {} + # rules.yaml: + # apiVersion: 1 + # groups: + # - orgId: 1 + # name: '{{ .Chart.Name }}_my_rule_group' + # folder: my_first_folder + # interval: 60s + # rules: + # - uid: my_id_1 + # title: my_first_rule + # condition: A + # data: + # - refId: A + # datasourceUid: '-100' + # model: + # conditions: + # - evaluator: + # params: + # - 3 + # type: gt + # operator: + # type: and + # query: + # params: + # - A + # reducer: + # type: last + # type: query + # datasource: + # type: __expr__ + # uid: '-100' + # expression: 1==0 + # intervalMs: 1000 + # maxDataPoints: 43200 + # refId: A + # type: math + # dashboardUid: my_dashboard + # panelId: 123 + # noDataState: Alerting + # for: 60s + # annotations: + # some_key: some_value + # labels: + # team: sre_team_1 + # contactpoints.yaml: + # secret: + # apiVersion: 1 + # contactPoints: + # - orgId: 1 + # name: cp_1 + # receivers: + # - uid: first_uid + # type: pagerduty + # settings: + # integrationKey: XXX + # severity: critical + # class: ping failure + # component: Grafana + # group: app-stack + # summary: | + # {{ `{{ include "default.message" . 
}}` }} + +## Configure notifiers +## ref: http://docs.grafana.org/administration/provisioning/#alert-notification-channels +## +notifiers: {} +# notifiers.yaml: +# notifiers: +# - name: email-notifier +# type: email +# uid: email1 +# # either: +# org_id: 1 +# # or +# org_name: Main Org. +# is_default: true +# settings: +# addresses: an_email_address@example.com +# delete_notifiers: + +## Configure grafana dashboard providers +## ref: http://docs.grafana.org/administration/provisioning/#dashboards +## +## `path` must be /var/lib/grafana/dashboards/ +## +dashboardProviders: {} +# dashboardproviders.yaml: +# apiVersion: 1 +# providers: +# - name: 'default' +# orgId: 1 +# folder: '' +# type: file +# disableDeletion: false +# editable: true +# options: +# path: /var/lib/grafana/dashboards/default + +## Configure grafana dashboard to import +## NOTE: To use dashboards you must also enable/configure dashboardProviders +## ref: https://grafana.com/dashboards +## +## dashboards per provider, use provider name as key. +## +dashboards: {} + # default: + # some-dashboard: + # json: | + # $RAW_JSON + # custom-dashboard: + # file: dashboards/custom-dashboard.json + # prometheus-stats: + # gnetId: 2 + # revision: 2 + # datasource: Prometheus + # local-dashboard: + # url: https://example.com/repository/test.json + # token: '' + # local-dashboard-base64: + # url: https://example.com/repository/test-b64.json + # token: '' + # b64content: true + # local-dashboard-gitlab: + # url: https://example.com/repository/test-gitlab.json + # gitlabToken: '' + # local-dashboard-bitbucket: + # url: https://example.com/repository/test-bitbucket.json + # bearerToken: '' + # local-dashboard-azure: + # url: https://example.com/repository/test-azure.json + # basic: '' + # acceptHeader: '*/*' + +## Reference to external ConfigMap per provider. Use provider name as key and ConfigMap name as value. +## A provider dashboards must be defined either by external ConfigMaps or in values.yaml, not in both. 
+## ConfigMap data example: +## +## data: +## example-dashboard.json: | +## RAW_JSON +## +dashboardsConfigMaps: {} +# default: "" + +## Grafana's primary configuration +## NOTE: values in map will be converted to ini format +## ref: http://docs.grafana.org/installation/configuration/ +## +grafana.ini: + paths: + data: /var/lib/grafana/ + logs: /var/log/grafana + plugins: /var/lib/grafana/plugins + provisioning: /etc/grafana/provisioning + analytics: + check_for_updates: true + log: + mode: console + grafana_net: + url: https://grafana.net + server: + domain: "{{ if (and .Values.ingress.enabled .Values.ingress.hosts) }}{{ tpl (.Values.ingress.hosts | first) . }}{{ else }}''{{ end }}" +## grafana Authentication can be enabled with the following values on grafana.ini + # server: + # The full public facing url you use in browser, used for redirects and emails + # root_url: + # https://grafana.com/docs/grafana/latest/auth/github/#enable-github-in-grafana + # auth.github: + # enabled: false + # allow_sign_up: false + # scopes: user:email,read:org + # auth_url: https://github.com/login/oauth/authorize + # token_url: https://github.com/login/oauth/access_token + # api_url: https://api.github.com/user + # team_ids: + # allowed_organizations: + # client_id: + # client_secret: +## LDAP Authentication can be enabled with the following values on grafana.ini +## NOTE: Grafana will fail to start if the value for ldap.toml is invalid + # auth.ldap: + # enabled: true + # allow_sign_up: true + # config_file: /etc/grafana/ldap.toml + +## Grafana's LDAP configuration +## Templated by the template in _helpers.tpl +## NOTE: To enable the grafana.ini must be configured with auth.ldap.enabled +## ref: http://docs.grafana.org/installation/configuration/#auth-ldap +## ref: http://docs.grafana.org/installation/ldap/#configuration +ldap: + enabled: false + # `existingSecret` is a reference to an existing secret containing the ldap configuration + # for Grafana in a key `ldap-toml`. 
+ existingSecret: "" + # `config` is the content of `ldap.toml` that will be stored in the created secret + config: "" + # config: |- + # verbose_logging = true + + # [[servers]] + # host = "my-ldap-server" + # port = 636 + # use_ssl = true + # start_tls = false + # ssl_skip_verify = false + # bind_dn = "uid=%s,ou=users,dc=myorg,dc=com" + +## Grafana's SMTP configuration +## NOTE: To enable, grafana.ini must be configured with smtp.enabled +## ref: http://docs.grafana.org/installation/configuration/#smtp +smtp: + # `existingSecret` is a reference to an existing secret containing the smtp configuration + # for Grafana. + existingSecret: "" + userKey: "user" + passwordKey: "password" + +## Sidecars that collect the configmaps with specified label and store the included files into the respective folders +## Requires at least Grafana 5 to work and can't be used together with parameters dashboardProviders, datasources and dashboards +sidecar: + image: + # -- The Docker registry + registry: quay.io + repository: kiwigrid/k8s-sidecar + tag: 1.27.4 + sha: "" + imagePullPolicy: IfNotPresent + resources: {} +# limits: +# cpu: 100m +# memory: 100Mi +# requests: +# cpu: 50m +# memory: 50Mi + securityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + seccompProfile: + type: RuntimeDefault + # skipTlsVerify: Set to true to skip tls verification for kube api calls + # skipTlsVerify: true + enableUniqueFilenames: false + readinessProbe: {} + livenessProbe: {} + # Log level default for all sidecars. Can be one of: DEBUG, INFO, WARN, ERROR, CRITICAL. Defaults to INFO + # logLevel: INFO + alerts: + enabled: false + # Additional environment variables for the alerts sidecar + env: {} + # Do not reprocess already processed unchanged resources on k8s API reconnect.
+ # ignoreAlreadyProcessed: true + # label that the configmaps with alert are marked with + label: grafana_alert + # value of label that the configmaps with alert are set to + labelValue: "" + # Log level. Can be one of: DEBUG, INFO, WARN, ERROR, CRITICAL. + # logLevel: INFO + # If specified, the sidecar will search for alert config-maps inside this namespace. + # Otherwise the namespace in which the sidecar is running will be used. + # It's also possible to specify ALL to search in all namespaces + searchNamespace: null + # Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH request, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. + watchMethod: WATCH + # search in configmap, secret or both + resource: both + # watchServerTimeout: request to the server, asking it to cleanly close the connection after that. + # defaults to 60sec; much higher values like 3600 seconds (1h) are feasible for non-Azure K8S + # watchServerTimeout: 3600 + # + # watchClientTimeout: is a client-side timeout, configuring your local socket. + # If you have a network outage dropping all packets with no RST/FIN, + # this is how long your client waits before realizing & dropping the connection. + # defaults to 66sec (sic!) + # watchClientTimeout: 60 + # + # Endpoint to send request to reload alerts + reloadURL: "http://localhost:3000/api/admin/provisioning/alerting/reload" + # Absolute path to shell script to execute after an alert got reloaded + script: null + skipReload: false + # This is needed if skipReload is true, to load any alerts defined at startup time. + # Deploy the alert sidecar as an initContainer.
+ initAlerts: false + # Additional alert sidecar volume mounts + extraMounts: [] + # Sets the size limit of the alert sidecar emptyDir volume + sizeLimit: {} + dashboards: + enabled: false + # Additional environment variables for the dashboards sidecar + env: {} + ## "valueFrom" environment variable references that will be added to deployment pods. Name is templated. + ## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#envvarsource-v1-core + ## Renders in container spec as: + ## env: + ## ... + ## - name: + ## valueFrom: + ## + envValueFrom: {} + # ENV_NAME: + # configMapKeyRef: + # name: configmap-name + # key: value_key + # Do not reprocess already processed unchanged resources on k8s API reconnect. + # ignoreAlreadyProcessed: true + SCProvider: true + # label that the configmaps with dashboards are marked with + label: grafana_dashboard + # value of label that the configmaps with dashboards are set to + labelValue: "" + # Log level. Can be one of: DEBUG, INFO, WARN, ERROR, CRITICAL. + # logLevel: INFO + # folder in the pod that should hold the collected dashboards (unless `defaultFolderName` is set) + folder: /tmp/dashboards + # The default folder name, it will create a subfolder under the `folder` and put dashboards in there instead + defaultFolderName: null + # Namespaces list. If specified, the sidecar will search for config-maps/secrets inside these namespaces. + # Otherwise the namespace in which the sidecar is running will be used. + # It's also possible to specify ALL to search in all namespaces. + searchNamespace: null + # Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH request, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. + watchMethod: WATCH + # search in configmap, secret or both + resource: both + # If specified, the sidecar will look for an annotation with this name to create a folder and put the dashboards there.
+ # You can use this parameter together with `provider.foldersFromFilesStructure` to annotate configmaps and create folder structure. + folderAnnotation: null + # Endpoint to send request to reload dashboards + reloadURL: "http://localhost:3000/api/admin/provisioning/dashboards/reload" + # Absolute path to shell script to execute after a configmap got reloaded + script: null + skipReload: false + # watchServerTimeout: request to the server, asking it to cleanly close the connection after that. + # defaults to 60sec; much higher values like 3600 seconds (1h) are feasible for non-Azure K8S + # watchServerTimeout: 3600 + # + # watchClientTimeout: is a client-side timeout, configuring your local socket. + # If you have a network outage dropping all packets with no RST/FIN, + # this is how long your client waits before realizing & dropping the connection. + # defaults to 66sec (sic!) + # watchClientTimeout: 60 + # + # provider configuration that lets grafana manage the dashboards + provider: + # name of the provider, should be unique + name: sidecarProvider + # orgid as configured in grafana + orgid: 1 + # folder in which the dashboards should be imported in grafana + folder: '' + # folder UID. will be automatically generated if not specified + folderUid: '' + # type of the provider + type: file + # disableDelete to activate an import-only behaviour + disableDelete: false + # allow updating provisioned dashboards from the UI + allowUiUpdates: false + # allow Grafana to replicate dashboard structure from filesystem + foldersFromFilesStructure: false + # Additional dashboard sidecar volume mounts + extraMounts: [] + # Sets the size limit of the dashboard sidecar emptyDir volume + sizeLimit: {} + datasources: + enabled: false + # Additional environment variables for the datasources sidecar + env: {} + ## "valueFrom" environment variable references that will be added to deployment pods. Name is templated.
+ ## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#envvarsource-v1-core + ## Renders in container spec as: + ## env: + ## ... + ## - name: + ## valueFrom: + ## + envValueFrom: {} + # ENV_NAME: + # configMapKeyRef: + # name: configmap-name + # key: value_key + # Do not reprocess already processed unchanged resources on k8s API reconnect. + # ignoreAlreadyProcessed: true + # label that the configmaps with datasources are marked with + label: grafana_datasource + # value of label that the configmaps with datasources are set to + labelValue: "" + # Log level. Can be one of: DEBUG, INFO, WARN, ERROR, CRITICAL. + # logLevel: INFO + # If specified, the sidecar will search for datasource config-maps inside this namespace. + # Otherwise the namespace in which the sidecar is running will be used. + # It's also possible to specify ALL to search in all namespaces + searchNamespace: null + # Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH request, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. + watchMethod: WATCH + # search in configmap, secret or both + resource: both + # watchServerTimeout: request to the server, asking it to cleanly close the connection after that. + # defaults to 60sec; much higher values like 3600 seconds (1h) are feasible for non-Azure K8S + # watchServerTimeout: 3600 + # + # watchClientTimeout: is a client-side timeout, configuring your local socket. + # If you have a network outage dropping all packets with no RST/FIN, + # this is how long your client waits before realizing & dropping the connection. + # defaults to 66sec (sic!)
+ # watchClientTimeout: 60 + # + # Endpoint to send request to reload datasources + reloadURL: "http://localhost:3000/api/admin/provisioning/datasources/reload" + # Absolute path to shell script to execute after a datasource got reloaded + script: null + skipReload: false + # This is needed if skipReload is true, to load any datasources defined at startup time. + # Deploy the datasources sidecar as an initContainer. + initDatasources: false + # Sets the size limit of the datasource sidecar emptyDir volume + sizeLimit: {} + plugins: + enabled: false + # Additional environment variables for the plugins sidecar + env: {} + # Do not reprocess already processed unchanged resources on k8s API reconnect. + # ignoreAlreadyProcessed: true + # label that the configmaps with plugins are marked with + label: grafana_plugin + # value of label that the configmaps with plugins are set to + labelValue: "" + # Log level. Can be one of: DEBUG, INFO, WARN, ERROR, CRITICAL. + # logLevel: INFO + # If specified, the sidecar will search for plugin config-maps inside this namespace. + # Otherwise the namespace in which the sidecar is running will be used. + # It's also possible to specify ALL to search in all namespaces + searchNamespace: null + # Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. + watchMethod: WATCH + # search in configmap, secret or both + resource: both + # watchServerTimeout: request to the server, asking it to cleanly close the connection after that. + # defaults to 60sec; much higher values like 3600 seconds (1h) are feasible for non-Azure K8S + # watchServerTimeout: 3600 + # + # watchClientTimeout: is a client-side timeout, configuring your local socket. + # If you have a network outage dropping all packets with no RST/FIN, + # this is how long your client waits before realizing & dropping the connection. + # defaults to 66sec (sic!) 
+ # watchClientTimeout: 60 + # + # Endpoint to send request to reload plugins + reloadURL: "http://localhost:3000/api/admin/provisioning/plugins/reload" + # Absolute path to shell script to execute after a plugin got reloaded + script: null + skipReload: false + # Deploy the datasource sidecar as an initContainer in addition to a container. + # This is needed if skipReload is true, to load any plugins defined at startup time. + initPlugins: false + # Sets the size limit of the plugin sidecar emptyDir volume + sizeLimit: {} + notifiers: + enabled: false + # Additional environment variables for the notifierssidecar + env: {} + # Do not reprocess already processed unchanged resources on k8s API reconnect. + # ignoreAlreadyProcessed: true + # label that the configmaps with notifiers are marked with + label: grafana_notifier + # value of label that the configmaps with notifiers are set to + labelValue: "" + # Log level. Can be one of: DEBUG, INFO, WARN, ERROR, CRITICAL. + # logLevel: INFO + # If specified, the sidecar will search for notifier config-maps inside this namespace. + # Otherwise the namespace in which the sidecar is running will be used. + # It's also possible to specify ALL to search in all namespaces + searchNamespace: null + # Method to use to detect ConfigMap changes. With WATCH the sidecar will do a WATCH requests, with SLEEP it will list all ConfigMaps, then sleep for 60 seconds. + watchMethod: WATCH + # search in configmap, secret or both + resource: both + # watchServerTimeout: request to the server, asking it to cleanly close the connection after that. + # defaults to 60sec; much higher values like 3600 seconds (1h) are feasible for non-Azure K8S + # watchServerTimeout: 3600 + # + # watchClientTimeout: is a client-side timeout, configuring your local socket. + # If you have a network outage dropping all packets with no RST/FIN, + # this is how long your client waits before realizing & dropping the connection. + # defaults to 66sec (sic!) 
+ # watchClientTimeout: 60 + # + # Endpoint to send request to reload notifiers + reloadURL: "http://localhost:3000/api/admin/provisioning/notifications/reload" + # Absolute path to shell script to execute after a notifier got reloaded + script: null + skipReload: false + # Deploy the notifier sidecar as an initContainer in addition to a container. + # This is needed if skipReload is true, to load any notifiers defined at startup time. + initNotifiers: false + # Sets the size limit of the notifier sidecar emptyDir volume + sizeLimit: {} + +## Override the deployment namespace +## +namespaceOverride: "" + +## Number of old ReplicaSets to retain +## +revisionHistoryLimit: 10 + +## Add a seperate remote image renderer deployment/service +imageRenderer: + deploymentStrategy: {} + # Enable the image-renderer deployment & service + enabled: false + replicas: 1 + autoscaling: + enabled: false + minReplicas: 1 + maxReplicas: 5 + targetCPU: "60" + targetMemory: "" + behavior: {} + # The url of remote image renderer if it is not in the same namespace with the grafana instance + serverURL: "" + # The callback url of grafana instances if it is not in the same namespace with the remote image renderer + renderingCallbackURL: "" + image: + # -- The Docker registry + registry: docker.io + # image-renderer Image repository + repository: grafana/grafana-image-renderer + # image-renderer Image tag + tag: latest + # image-renderer Image sha (optional) + sha: "" + # image-renderer ImagePullPolicy + pullPolicy: Always + # extra environment variables + env: + HTTP_HOST: "0.0.0.0" + # RENDERING_ARGS: --no-sandbox,--disable-gpu,--window-size=1280x758 + # RENDERING_MODE: clustered + # IGNORE_HTTPS_ERRORS: true + + ## "valueFrom" environment variable references that will be added to deployment pods. Name is templated. + ## ref: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#envvarsource-v1-core + ## Renders in container spec as: + ## env: + ## ... 
+ ## - name: + ## valueFrom: + ## + envValueFrom: {} + # ENV_NAME: + # configMapKeyRef: + # name: configmap-name + # key: value_key + + # image-renderer deployment serviceAccount + serviceAccountName: "" + # image-renderer deployment securityContext + securityContext: {} + # image-renderer deployment container securityContext + containerSecurityContext: + seccompProfile: + type: RuntimeDefault + capabilities: + drop: ['ALL'] + allowPrivilegeEscalation: false + readOnlyRootFilesystem: true + ## image-renderer pod annotation + podAnnotations: {} + # image-renderer deployment Host Aliases + hostAliases: [] + # image-renderer deployment priority class + priorityClassName: '' + service: + # Enable the image-renderer service + enabled: true + # image-renderer service port name + portName: 'http' + # image-renderer service port used by both service and deployment + port: 8081 + targetPort: 8081 + # Adds the appProtocol field to the image-renderer service. This allows to work with istio protocol selection. 
Ex: "http" or "tcp" + appProtocol: "" + serviceMonitor: + ## If true, a ServiceMonitor CRD is created for a prometheus operator + ## https://github.com/coreos/prometheus-operator + ## + enabled: false + path: /metrics + # namespace: monitoring (defaults to use the namespace this chart is deployed to) + labels: {} + interval: 1m + scheme: http + tlsConfig: {} + scrapeTimeout: 30s + relabelings: [] + # See: https://doc.crds.dev/github.com/prometheus-operator/kube-prometheus/monitoring.coreos.com/ServiceMonitor/v1@v0.11.0#spec-targetLabels + targetLabels: [] + # - targetLabel1 + # - targetLabel2 + # If https is enabled in Grafana, this needs to be set as 'https' to correctly configure the callback used in Grafana + grafanaProtocol: http + # In case a sub_path is used this needs to be added to the image renderer callback + grafanaSubPath: "" + # name of the image-renderer port on the pod + podPortName: http + # number of image-renderer replica sets to keep + revisionHistoryLimit: 10 + networkPolicy: + # Enable a NetworkPolicy to limit inbound traffic to only the created grafana pods + limitIngress: true + # Enable a NetworkPolicy to limit outbound traffic to only the created grafana pods + limitEgress: false + # Allow additional services to access image-renderer (eg. Prometheus operator when ServiceMonitor is enabled) + extraIngressSelectors: [] + resources: {} +# limits: +# cpu: 100m +# memory: 100Mi +# requests: +# cpu: 50m +# memory: 50Mi + ## Node labels for pod assignment + ## ref: https://kubernetes.io/docs/user-guide/node-selection/ + # + nodeSelector: {} + + ## Tolerations for pod assignment + ## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ + ## + tolerations: [] + + ## Affinity for pod assignment (evaluated as template) + ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity + ## + affinity: {} + + ## Use an alternate scheduler, e.g. "stork". 
+ ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ + ## + # schedulerName: "default-scheduler" + + # Extra configmaps to mount in image-renderer pods + extraConfigmapMounts: [] + + # Extra secrets to mount in image-renderer pods + extraSecretMounts: [] + + # Extra volumes to mount in image-renderer pods + extraVolumeMounts: [] + + # Extra volumes for image-renderer pods + extraVolumes: [] + +networkPolicy: + ## @param networkPolicy.enabled Enable creation of NetworkPolicy resources. Only Ingress traffic is filtered for now. + ## + enabled: false + ## @param networkPolicy.allowExternal Don't require client label for connections + ## The Policy model to apply. When set to false, only pods with the correct + ## client label will have network access to grafana port defined. + ## When true, grafana will accept connections from any source + ## (with the correct destination port). + ## + ingress: true + ## @param networkPolicy.ingress When true enables the creation + ## an ingress network policy + ## + allowExternal: true + ## @param networkPolicy.explicitNamespacesSelector A Kubernetes LabelSelector to explicitly select namespaces from which traffic could be allowed + ## If explicitNamespacesSelector is missing or set to {}, only client Pods that are in the networkPolicy's namespace + ## and that match other criteria, the ones that have the good label, can reach the grafana. + ## But sometimes, we want the grafana to be accessible to clients from other namespaces, in this case, we can use this + ## LabelSelector to select these namespaces, note that the networkPolicy's namespace should also be explicitly added. 
+ ## + ## Example: + ## explicitNamespacesSelector: + ## matchLabels: + ## role: frontend + ## matchExpressions: + ## - {key: role, operator: In, values: [frontend]} + ## + explicitNamespacesSelector: {} + ## + ## + ## + ## + ## + ## + egress: + ## @param networkPolicy.egress.enabled When enabled, an egress network policy will be + ## created allowing grafana to connect to external data sources from kubernetes cluster. + enabled: false + ## + ## @param networkPolicy.egress.blockDNSResolution When enabled, DNS resolution will be blocked + ## for all pods in the grafana namespace. + blockDNSResolution: false + ## + ## @param networkPolicy.egress.ports Add individual ports to be allowed by the egress + ports: [] + ## Add ports to the egress by specifying - port: + ## E.X. + ## - port: 80 + ## - port: 443 + ## + ## @param networkPolicy.egress.to Allow egress traffic to specific destinations + to: [] + ## Add destinations to the egress by specifying - ipBlock: + ## E.X. + ## to: + ## - namespaceSelector: + ## matchExpressions: + ## - {key: role, operator: In, values: [grafana]} + ## + ## + ## + ## + ## + +# Enable backward compatibility of kubernetes where version below 1.13 doesn't have the enableServiceLinks option +enableKubeBackwardCompatibility: false +useStatefulSet: false +# Create a dynamic manifests via values: +extraObjects: [] + # - apiVersion: "kubernetes-client.io/v1" + # kind: ExternalSecret + # metadata: + # name: grafana-secrets + # spec: + # backendType: gcpSecretsManager + # data: + # - key: grafana-admin-password + # name: adminPassword + +# assertNoLeakedSecrets is a helper function defined in _helpers.tpl that checks if secret +# values are not exposed in the rendered grafana.ini configmap. It is enabled by default. 
+# +# To pass values into grafana.ini without exposing them in a configmap, use variable expansion: +# https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/#variable-expansion +# +# Alternatively, if you wish to allow secret values to be exposed in the rendered grafana.ini configmap, +# you can disable this check by setting assertNoLeakedSecrets to false. +assertNoLeakedSecrets: true diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/.helmignore b/charts/kasten/k10/7.0.1101/charts/prometheus/.helmignore new file mode 100644 index 0000000000..825c007791 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj + +OWNERS diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/Chart.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/Chart.yaml new file mode 100644 index 0000000000..ca07363f90 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/Chart.yaml @@ -0,0 +1,53 @@ +annotations: + artifacthub.io/license: Apache-2.0 + artifacthub.io/links: | + - name: Chart Source + url: https://github.com/prometheus-community/helm-charts + - name: Upstream Project + url: https://github.com/prometheus/prometheus +apiVersion: v2 +appVersion: v2.53.1 +dependencies: +- condition: alertmanager.enabled + name: alertmanager + repository: https://prometheus-community.github.io/helm-charts + version: 1.11.* +- condition: kube-state-metrics.enabled + name: kube-state-metrics + repository: https://prometheus-community.github.io/helm-charts + version: 5.21.* +- condition: prometheus-node-exporter.enabled + name: prometheus-node-exporter + repository: 
https://prometheus-community.github.io/helm-charts + version: 4.37.* +- condition: prometheus-pushgateway.enabled + name: prometheus-pushgateway + repository: https://prometheus-community.github.io/helm-charts + version: 2.14.* +description: Prometheus is a monitoring system and time series database. +home: https://prometheus.io/ +icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png +keywords: +- monitoring +- prometheus +kubeVersion: '>=1.19.0-0' +maintainers: +- email: gianrubio@gmail.com + name: gianrubio +- email: zanhsieh@gmail.com + name: zanhsieh +- email: miroslav.hadzhiev@gmail.com + name: Xtigyro +- email: naseem@transit.app + name: naseemkullah +- email: rootsandtrees@posteo.de + name: zeritti +name: prometheus +sources: +- https://github.com/prometheus/alertmanager +- https://github.com/prometheus/prometheus +- https://github.com/prometheus/pushgateway +- https://github.com/prometheus/node_exporter +- https://github.com/kubernetes/kube-state-metrics +type: application +version: 25.24.1 diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/OWNERS b/charts/kasten/k10/7.0.1101/charts/prometheus/OWNERS new file mode 100644 index 0000000000..0cfd95021c --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/OWNERS @@ -0,0 +1,6 @@ +approvers: +- mgoodness +- gianrubio +reviewers: +- mgoodness +- gianrubio diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/README.md b/charts/kasten/k10/7.0.1101/charts/prometheus/README.md new file mode 100644 index 0000000000..2cb744ce8f --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/README.md @@ -0,0 +1,382 @@ +# Prometheus + +[Prometheus](https://prometheus.io/), a [Cloud Native Computing Foundation](https://cncf.io/) project, is a systems and service monitoring system. 
It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.
+
+This chart bootstraps a [Prometheus](https://prometheus.io/) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager.
+
+## Prerequisites
+
+- Kubernetes 1.19+
+- Helm 3.7+
+
+## Get Repository Info
+
+```console
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm repo update
+```
+
+_See [helm repository](https://helm.sh/docs/helm/helm_repo/) for command documentation._
+
+## Install Chart
+
+Starting with version 16.0, the Prometheus chart requires Helm 3.7+ in order to install successfully. Please check your `helm` version before installation.
+
+```console
+helm install [RELEASE_NAME] prometheus-community/prometheus
+```
+
+_See [configuration](#configuration) below._
+
+_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._
+
+## Dependencies
+
+By default this chart installs additional, dependent charts:
+
+- [alertmanager](https://github.com/prometheus-community/helm-charts/tree/main/charts/alertmanager)
+- [kube-state-metrics](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics)
+- [prometheus-node-exporter](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-node-exporter)
+- [prometheus-pushgateway](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-pushgateway)
+
+To disable a dependency during installation, set `alertmanager.enabled`, `kube-state-metrics.enabled`, `prometheus-node-exporter.enabled` or `prometheus-pushgateway.enabled` to `false`.
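+
+For example, all of the bundled dependency charts can be switched off from a single values override (the file name below is only illustrative):
+
+```yaml
+# values-no-deps.yaml -- install only the Prometheus server,
+# with every dependent chart disabled
+alertmanager:
+  enabled: false
+kube-state-metrics:
+  enabled: false
+prometheus-node-exporter:
+  enabled: false
+prometheus-pushgateway:
+  enabled: false
+```
+
+```console
+helm install [RELEASE_NAME] prometheus-community/prometheus -f values-no-deps.yaml
+```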
+
+_See [helm dependency](https://helm.sh/docs/helm/helm_dependency/) for command documentation._
+
+## Uninstall Chart
+
+```console
+helm uninstall [RELEASE_NAME]
+```
+
+This removes all the Kubernetes components associated with the chart and deletes the release.
+
+_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._
+
+## Updating values.schema.json
+
+A [`values.schema.json`](https://helm.sh/docs/topics/charts/#schema-files) file has been added to validate chart values. When the `values.yaml` file has a structure change (i.e. a new field is added, a value type changes, etc.), modify the `values.schema.json` file manually or run `helm schema-gen values.yaml > values.schema.json` to ensure the schema is aligned with the latest values. Refer to [helm plugin `helm-schema-gen`](https://github.com/karuppiah7890/helm-schema-gen) for plugin installation instructions.
+
+## Upgrading Chart
+
+```console
+helm upgrade [RELEASE_NAME] prometheus-community/prometheus --install
+```
+
+_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._
+
+### To 25.0
+
+The `server.remoteRead[].url` and `server.remoteWrite[].url` fields now support templating, allowing for `url` values such as `https://{{ .Release.Name }}.example.com`.
+
+Any entries in these which previously included `{{` or `}}` must be escaped with `{{ "{{" }}` and `{{ "}}" }}` respectively. Entries which did not previously include the template-like syntax will not be affected.
+
+### To 24.0
+
+Requires Kubernetes 1.19+
+
+Release 1.0.0 of the _alertmanager_ replaced [configmap-reload](https://github.com/jimmidyson/configmap-reload) with [prometheus-config-reloader](https://github.com/prometheus-operator/prometheus-operator/tree/main/cmd/prometheus-config-reloader).
+Extra command-line arguments specified via `configmapReload.prometheus.extraArgs` are not compatible and will break with the new prometheus-config-reloader.
Please refer to the [sources](https://github.com/prometheus-operator/prometheus-operator/blob/main/cmd/prometheus-config-reloader/main.go) in order to make the appropriate adjustments to the extra command-line arguments.
+
+### To 23.0
+
+Release 5.0.0 of the _kube-state-metrics_ chart introduced a separation of the `image.repository` value into two distinct values:
+
+```console
+  image:
+    registry: registry.k8s.io
+    repository: kube-state-metrics/kube-state-metrics
+```
+
+If a custom values file or CLI flags set `kube-state-metrics.image.repository`, please set the new values accordingly.
+
+If you are upgrading _prometheus-pushgateway_ with the chart and _prometheus-pushgateway_ has been deployed as a statefulset with a persistent volume, the statefulset must be deleted before upgrading the chart, e.g.:
+
+```bash
+kubectl delete sts -l app.kubernetes.io/name=prometheus-pushgateway -n monitoring --cascade=orphan
+```
+
+Users are advised to review changes in the corresponding chart releases before upgrading.
+
+### To 22.0
+
+The `app.kubernetes.io/version` label has been removed from the pod selector.
+
+Therefore, you must delete the previous StatefulSet or Deployment before upgrading. Performing this operation will cause **Prometheus to stop functioning** until the upgrade is complete.
+
+```console
+kubectl delete deploy,sts -l app.kubernetes.io/name=prometheus
+```
+
+### To 21.0
+
+The Kubernetes labels have been updated to follow [Helm 3 label and annotation best practices](https://helm.sh/docs/chart_best_practices/labels/).
+Specifically, the label mapping is listed below:
+
+| OLD | NEW |
+|--------------------|------------------------------|
+|heritage | app.kubernetes.io/managed-by |
+|chart | helm.sh/chart |
+|[container version] | app.kubernetes.io/version |
+|app | app.kubernetes.io/name |
+|release | app.kubernetes.io/instance |
+
+Therefore, depending on how you've configured the chart, the previous StatefulSet or Deployment needs to be deleted before the upgrade.
+
+If `runAsStatefulSet: false` (this is the default):
+
+```console
+kubectl delete deploy -l app=prometheus
+```
+
+If `runAsStatefulSet: true`:
+
+```console
+kubectl delete sts -l app=prometheus
+```
+
+After that, do the actual upgrade:
+
+```console
+helm upgrade -i prometheus prometheus-community/prometheus
+```
+
+### To 20.0
+
+The [configmap-reload](https://github.com/jimmidyson/configmap-reload) container was replaced by the [prometheus-config-reloader](https://github.com/prometheus-operator/prometheus-operator/tree/main/cmd/prometheus-config-reloader).
+Extra command-line arguments specified via `configmapReload.prometheus.extraArgs` are not compatible and will break with the new prometheus-config-reloader; refer to the [sources](https://github.com/prometheus-operator/prometheus-operator/blob/main/cmd/prometheus-config-reloader/main.go) in order to make the appropriate adjustments to the extra command-line arguments.
+
+### To 19.0
+
+Prometheus has been updated to version v2.40.5.
+
+Prometheus-pushgateway was updated to version 2.0.0, which adopted [Helm label and annotation best practices](https://helm.sh/docs/chart_best_practices/labels/).
+See the [upgrade docs of the prometheus-pushgateway chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-pushgateway#to-200) to see what to do before you upgrade Prometheus!
+
+The condition in Chart.yaml to disable kube-state-metrics has been changed from `kubeStateMetrics.enabled` to `kube-state-metrics.enabled`.
+
+The Docker image tag is taken from the appVersion field in Chart.yaml by default.
+
+Unused subchart configs have been removed, and the subchart config is now at the bottom of the config file.
+
+If Prometheus is used as a Deployment, the update strategy has been changed to "Recreate" by default, so Helm upgrades work out of the box.
+
+`.Values.server.extraTemplates` & `.Values.server.extraObjects` have been removed in favour of `.Values.extraManifests`, which can do the same.
+
+`.Values.server.enabled` has been removed as it's no longer needed now that all components are created by subcharts.
+
+All files in the `templates/server` directory have been moved to the `templates` directory.
+
+```bash
+helm upgrade [RELEASE_NAME] prometheus-community/prometheus --version 19.0.0
+```
+
+### To 18.0
+
+Version 18.0.0 uses the alertmanager service from the [alertmanager chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/alertmanager). If you've made some config changes, please check the old `alertmanager` and the new `alertmanager` configuration sections in values.yaml for differences.
+
+Note that the `configmapReload` section for `alertmanager` was moved out of its dedicated section (`configmapReload.alertmanager`) and embedded into alertmanager (`alertmanager.configmapReload`).
+
+Before you update, please scale down the `prometheus-server` deployment to `0`, then perform the upgrade:
+
+```bash
+# In 17.x
+kubectl scale deploy prometheus-server --replicas=0
+# Upgrade
+helm upgrade [RELEASE_NAME] prometheus-community/prometheus --version 18.0.0
+```
+
+### To 17.0
+
+Version 17.0.0 uses the pushgateway service from the [prometheus-pushgateway chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-pushgateway).
If you've made some config changes, please check the old `pushgateway` and the new `prometheus-pushgateway` configuration sections in values.yaml for differences.
+
+Before you update, please scale down the `prometheus-server` deployment to `0`, then perform the upgrade:
+
+```bash
+# In 16.x
+kubectl scale deploy prometheus-server --replicas=0
+# Upgrade
+helm upgrade [RELEASE_NAME] prometheus-community/prometheus --version 17.0.0
+```
+
+### To 16.0
+
+Starting from version 16.0, embedded services (like alertmanager, node-exporter etc.) are moved out of the Prometheus chart and the respective charts from this repository are used as dependencies. Version 16.0.0 moves the node-exporter service to the [prometheus-node-exporter chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-node-exporter). If you've made some config changes, please check the old `nodeExporter` and the new `prometheus-node-exporter` configuration sections in values.yaml for differences.
+
+Before you update, please scale down the `prometheus-server` deployment to `0`, then perform the upgrade:
+
+```bash
+# In 15.x
+kubectl scale deploy prometheus-server --replicas=0
+# Upgrade
+helm upgrade [RELEASE_NAME] prometheus-community/prometheus --version 16.0.0
+```
+
+### To 15.0
+
+Version 15.0.0 changes the relabeling config, aligning it with the [Prometheus community conventions](https://github.com/prometheus/prometheus/pull/9832). If you've made manual changes to the relabeling config, you have to adapt your changes.
+
+Before you update, please execute the following command to be able to update kube-state-metrics:
+
+```bash
+kubectl delete deployments.apps -l app.kubernetes.io/instance=prometheus,app.kubernetes.io/name=kube-state-metrics --cascade=orphan
+```
+
+### To 9.0
+
+Version 9.0 adds a new option to enable or disable the Prometheus Server.
This supports the use case of running a Prometheus server in one k8s cluster and scraping exporters in another cluster while using the same chart for each deployment. To install the server `server.enabled` must be set to `true`. + +### To 5.0 + +As of version 5.0, this chart uses Prometheus 2.x. This version of prometheus introduces a new data format and is not compatible with prometheus 1.x. It is recommended to install this as a new release, as updating existing releases will not work. See the [prometheus docs](https://prometheus.io/docs/prometheus/latest/migration/#storage) for instructions on retaining your old data. + +Prometheus version 2.x has made changes to alertmanager, storage and recording rules. Check out the migration guide [here](https://prometheus.io/docs/prometheus/2.0/migration/). + +Users of this chart will need to update their alerting rules to the new format before they can upgrade. + +### Example Migration + +Assuming you have an existing release of the prometheus chart, named `prometheus-old`. In order to update to prometheus 2.x while keeping your old data do the following: + +1. Update the `prometheus-old` release. Disable scraping on every component besides the prometheus server, similar to the configuration below: + + ```yaml + alertmanager: + enabled: false + alertmanagerFiles: + alertmanager.yml: "" + kubeStateMetrics: + enabled: false + nodeExporter: + enabled: false + pushgateway: + enabled: false + server: + extraArgs: + storage.local.retention: 720h + serverFiles: + alerts: "" + prometheus.yml: "" + rules: "" + ``` + +1. Deploy a new release of the chart with version 5.0+ using prometheus 2.x. In the values.yaml set the scrape config as usual, and also add the `prometheus-old` instance as a remote-read target. + + ```yaml + prometheus.yml: + ... + remote_read: + - url: http://prometheus-old/api/v1/read + ... + ``` + + Old data will be available when you query the new prometheus instance. 
+ +## Configuration + +See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). To see all configurable options with detailed comments, visit the chart's [values.yaml](./values.yaml), or run these configuration commands: + +```console +helm show values prometheus-community/prometheus +``` + +You may similarly use the above configuration commands on each chart [dependency](#dependencies) to see its configurations. + +### Scraping Pod Metrics via Annotations + +This chart uses a default configuration that causes prometheus to scrape a variety of kubernetes resource types, provided they have the correct annotations. In this section we describe how to configure pods to be scraped; for information on how other resource types can be scraped you can do a `helm template` to get the kubernetes resource definitions, and then reference the prometheus configuration in the ConfigMap against the prometheus documentation for [relabel_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) and [kubernetes_sd_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config). + +In order to get prometheus to scrape pods, you must add annotations to the pods as below: + +```yaml +metadata: + annotations: + prometheus.io/scrape: "true" + prometheus.io/path: /metrics + prometheus.io/port: "8080" +``` + +You should adjust `prometheus.io/path` based on the URL that your pod serves metrics from. `prometheus.io/port` should be set to the port that your pod serves metrics from. Note that the values for `prometheus.io/scrape` and `prometheus.io/port` must be enclosed in double quotes. + +### Sharing Alerts Between Services + +Note that when [installing](#install-chart) or [upgrading](#upgrading-chart) you may use multiple values override files. This is particularly useful when you have alerts belonging to multiple services in the cluster. 
For example,
+
+```yaml
+# values.yaml
+# ...
+
+# service1-alert.yaml
+serverFiles:
+  alerts:
+    service1:
+      - alert: anAlert
+      # ...
+
+# service2-alert.yaml
+serverFiles:
+  alerts:
+    service2:
+      - alert: anAlert
+      # ...
+```
+
+```console
+helm install [RELEASE_NAME] prometheus-community/prometheus -f values.yaml -f service1-alert.yaml -f service2-alert.yaml
+```
+
+### RBAC Configuration
+
+Role and RoleBinding resources will be created automatically for the `server` service.
+
+To manually set up RBAC, you need to set the parameter `rbac.create=false` and specify the service account to be used for each service by setting the parameters `serviceAccounts.{{ component }}.create` to `false` and `serviceAccounts.{{ component }}.name` to the name of a pre-existing service account.
+
+> **Tip**: You can refer to the default `*-clusterrole.yaml` and `*-clusterrolebinding.yaml` files in [templates](templates/) to customize your own.
+
+### ConfigMap Files
+
+AlertManager is configured through [alertmanager.yml](https://prometheus.io/docs/alerting/configuration/). This file (and any others listed in `alertmanagerFiles`) will be mounted into the `alertmanager` pod.
+
+Prometheus is configured through [prometheus.yml](https://prometheus.io/docs/operating/configuration/). This file (and any others listed in `serverFiles`) will be mounted into the `server` pod.
+
+### Ingress TLS
+
+If your cluster allows automatic creation/retrieval of TLS certificates (e.g. [cert-manager](https://github.com/jetstack/cert-manager)), please refer to the documentation for that mechanism.
+
+To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect.
Then create a TLS secret in the namespace:

```console
kubectl create secret tls prometheus-server-tls --cert=path/to/tls.cert --key=path/to/tls.key
```

Include the secret's name, along with the desired hostnames, in the alertmanager/server Ingress TLS section of your custom `values.yaml` file:

```yaml
server:
  ingress:
    ## If true, Prometheus server Ingress will be created
    ##
    enabled: true

    ## Prometheus server Ingress hostnames
    ## Must be provided if Ingress is enabled
    ##
    hosts:
      - prometheus.domain.com

    ## Prometheus server Ingress TLS configuration
    ## Secrets must be manually created in the namespace
    ##
    tls:
      - secretName: prometheus-server-tls
        hosts:
          - prometheus.domain.com
```

### NetworkPolicy

Enabling NetworkPolicy for Prometheus will secure connections to Alertmanager and kube-state-metrics by accepting connections only from the Prometheus server. All inbound connections to the Prometheus server are still allowed.

To enable NetworkPolicy for Prometheus, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set `networkPolicy.enabled` to `true`.

If NetworkPolicy is enabled for Prometheus' scrape targets, you may also need to manually create a NetworkPolicy that allows scraping. diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/.helmignore b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/.helmignore new file mode 100644 index 0000000000..7653e97e66 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/.helmignore @@ -0,0 +1,25 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line.
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ + +unittests/ diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/Chart.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/Chart.yaml new file mode 100644 index 0000000000..dbdb2ae477 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/Chart.yaml @@ -0,0 +1,24 @@ +annotations: + artifacthub.io/license: Apache-2.0 + artifacthub.io/links: | + - name: Chart Source + url: https://github.com/prometheus-community/helm-charts +apiVersion: v2 +appVersion: v0.27.0 +description: The Alertmanager handles alerts sent by client applications such as the + Prometheus server. +home: https://prometheus.io/ +icon: https://raw.githubusercontent.com/prometheus/prometheus.github.io/master/assets/prometheus_logo-cb55bb5c346.png +keywords: +- monitoring +kubeVersion: '>=1.19.0-0' +maintainers: +- email: monotek23@gmail.com + name: monotek +- email: naseem@transit.app + name: naseemkullah +name: alertmanager +sources: +- https://github.com/prometheus/alertmanager +type: application +version: 1.11.0 diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/README.md b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/README.md new file mode 100644 index 0000000000..d3f4df73a2 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/README.md @@ -0,0 +1,62 @@ +# Alertmanager + +As per [prometheus.io documentation](https://prometheus.io/docs/alerting/latest/alertmanager/): +> The Alertmanager handles alerts sent by client applications such as the +> Prometheus server. It takes care of deduplicating, grouping, and routing them +> to the correct receiver integration such as email, PagerDuty, or OpsGenie. 
It
+> also takes care of silencing and inhibition of alerts.
+
+## Prerequisites
+
+Kubernetes 1.14+
+
+## Get Repository Info
+
+```console
+helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
+helm repo update
+```
+
+_See [`helm repo`](https://helm.sh/docs/helm/helm_repo/) for command documentation._
+
+## Install Chart
+
+```console
+helm install [RELEASE_NAME] prometheus-community/alertmanager
+```
+
+_See [configuration](#configuration) below._
+
+_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._
+
+## Uninstall Chart
+
+```console
+helm uninstall [RELEASE_NAME]
+```
+
+This removes all the Kubernetes components associated with the chart and deletes the release.
+
+_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._
+
+## Upgrading Chart
+
+```console
+helm upgrade [RELEASE_NAME] [CHART] --install
+```
+
+_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._
+
+### To 1.0
+
+The [configmap-reload](https://github.com/jimmidyson/configmap-reload) container was replaced by the [prometheus-config-reloader](https://github.com/prometheus-operator/prometheus-operator/tree/main/cmd/prometheus-config-reloader).
+Extra command-line arguments specified via `configmapReload.prometheus.extraArgs` are not compatible with the new prometheus-config-reloader and will break it; refer to the [sources](https://github.com/prometheus-operator/prometheus-operator/blob/main/cmd/prometheus-config-reloader/main.go) to make the appropriate adjustments to the extra command-line arguments.
+The `networking.k8s.io/v1beta1` API is no longer supported. Use [`networking.k8s.io/v1`](https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingressclass-v122).
+
+## Configuration
+
+See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing).
To see all configurable options with detailed comments, visit the chart's [values.yaml](./values.yaml), or run this command:
+
+```console
+helm show values prometheus-community/alertmanager
+```
diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/ci/config-reload-values.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/ci/config-reload-values.yaml
new file mode 100644
index 0000000000..cba5de8e29
--- /dev/null
+++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/ci/config-reload-values.yaml
@@ -0,0 +1,2 @@
+configmapReload:
+  enabled: true
diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/NOTES.txt b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/NOTES.txt
new file mode 100644
index 0000000000..46ea5bee59
--- /dev/null
+++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/NOTES.txt
@@ -0,0 +1,21 @@
+1. Get the application URL by running these commands:
+{{- if .Values.ingress.enabled }}
+{{- range $host := .Values.ingress.hosts }}
+  {{- range .paths }}
+  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ .path }}
+  {{- end }}
+{{- end }}
+{{- else if contains "NodePort" .Values.service.type }}
+  export NODE_PORT=$(kubectl get --namespace {{ include "alertmanager.namespace" . }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "alertmanager.fullname" . }})
+  export NODE_IP=$(kubectl get nodes --namespace {{ include "alertmanager.namespace" . }} -o jsonpath="{.items[0].status.addresses[0].address}")
+  echo http://$NODE_IP:$NODE_PORT
+{{- else if contains "LoadBalancer" .Values.service.type }}
+     NOTE: It may take a few minutes for the LoadBalancer IP to be available.
+           You can watch its status by running 'kubectl get --namespace {{ include "alertmanager.namespace" . }} svc -w {{ include "alertmanager.fullname" .
}}' + export SERVICE_IP=$(kubectl get svc --namespace {{ include "alertmanager.namespace" . }} {{ include "alertmanager.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}") + echo http://$SERVICE_IP:{{ .Values.service.port }} +{{- else if contains "ClusterIP" .Values.service.type }} + export POD_NAME=$(kubectl get pods --namespace {{ include "alertmanager.namespace" . }} -l "app.kubernetes.io/name={{ include "alertmanager.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") + echo "Visit http://127.0.0.1:{{ .Values.service.port }} to use your application" + kubectl --namespace {{ include "alertmanager.namespace" . }} port-forward $POD_NAME {{ .Values.service.port }}:80 +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/_helpers.tpl b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/_helpers.tpl new file mode 100644 index 0000000000..827b6ee9f7 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/_helpers.tpl @@ -0,0 +1,92 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "alertmanager.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. 
+*/}} +{{- define "alertmanager.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "alertmanager.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "alertmanager.labels" -}} +helm.sh/chart: {{ include "alertmanager.chart" . }} +{{ include "alertmanager.selectorLabels" . }} +{{- with .Chart.AppVersion }} +app.kubernetes.io/version: {{ . | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "alertmanager.selectorLabels" -}} +app.kubernetes.io/name: {{ include "alertmanager.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Create the name of the service account to use +*/}} +{{- define "alertmanager.serviceAccountName" -}} +{{- if .Values.serviceAccount.create }} +{{- default (include "alertmanager.fullname" .) 
.Values.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.serviceAccount.name }} +{{- end }} +{{- end }} + +{{/* +Define Ingress apiVersion +*/}} +{{- define "alertmanager.ingress.apiVersion" -}} +{{- printf "networking.k8s.io/v1" }} +{{- end }} + +{{/* +Define Pdb apiVersion +*/}} +{{- define "alertmanager.pdb.apiVersion" -}} +{{- if $.Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" }} +{{- printf "policy/v1" }} +{{- else }} +{{- printf "policy/v1beta1" }} +{{- end }} +{{- end }} + +{{/* +Allow overriding alertmanager namespace +*/}} +{{- define "alertmanager.namespace" -}} +{{- if .Values.namespaceOverride -}} +{{- .Values.namespaceOverride -}} +{{- else -}} +{{- .Release.Namespace -}} +{{- end -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/configmap.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/configmap.yaml new file mode 100644 index 0000000000..9e5882dc86 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/configmap.yaml @@ -0,0 +1,21 @@ +{{- if .Values.config.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ include "alertmanager.fullname" . }} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.configAnnotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . 
}} +data: + alertmanager.yml: | + {{- $config := omit .Values.config "enabled" }} + {{- toYaml $config | default "{}" | nindent 4 }} + {{- range $key, $value := .Values.templates }} + {{ $key }}: |- + {{- $value | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/ingress.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/ingress.yaml new file mode 100644 index 0000000000..e729a8ad3b --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/ingress.yaml @@ -0,0 +1,44 @@ +{{- if .Values.ingress.enabled }} +{{- $fullName := include "alertmanager.fullname" . }} +{{- $svcPort := .Values.service.port }} +apiVersion: {{ include "alertmanager.ingress.apiVersion" . }} +kind: Ingress +metadata: + name: {{ $fullName }} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.ingress.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +spec: + {{- if .Values.ingress.className }} + ingressClassName: {{ .Values.ingress.className }} + {{- end }} + {{- if .Values.ingress.tls }} + tls: + {{- range .Values.ingress.tls }} + - hosts: + {{- range .hosts }} + - {{ . 
| quote }} + {{- end }} + secretName: {{ .secretName }} + {{- end }} + {{- end }} + rules: + {{- range .Values.ingress.hosts }} + - host: {{ .host | quote }} + http: + paths: + {{- range .paths }} + - path: {{ .path }} + pathType: {{ .pathType }} + backend: + service: + name: {{ $fullName }} + port: + number: {{ $svcPort }} + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/ingressperreplica.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/ingressperreplica.yaml new file mode 100644 index 0000000000..6f5a023500 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/ingressperreplica.yaml @@ -0,0 +1,56 @@ +{{- if and .Values.servicePerReplica.enabled .Values.ingressPerReplica.enabled }} +{{- $pathType := .Values.ingressPerReplica.pathType }} +{{- $count := .Values.replicaCount | int -}} +{{- $servicePort := .Values.service.port -}} +{{- $ingressValues := .Values.ingressPerReplica -}} +{{- $fullName := include "alertmanager.fullname" . }} +apiVersion: v1 +kind: List +metadata: + name: {{ $fullName }}-ingressperreplica + namespace: {{ include "alertmanager.namespace" . 
}} +items: +{{- range $i, $e := until $count }} + - kind: Ingress + apiVersion: {{ include "alertmanager.ingress.apiVersion" $ }} + metadata: + name: {{ $fullName }}-{{ $i }} + namespace: {{ include "alertmanager.namespace" $ }} + labels: + {{- include "alertmanager.labels" $ | nindent 8 }} + {{- if $ingressValues.labels }} +{{ toYaml $ingressValues.labels | indent 8 }} + {{- end }} + {{- if $ingressValues.annotations }} + annotations: +{{ toYaml $ingressValues.annotations | indent 8 }} + {{- end }} + spec: + {{- if $ingressValues.className }} + ingressClassName: {{ $ingressValues.className }} + {{- end }} + rules: + - host: {{ $ingressValues.hostPrefix }}-{{ $i }}.{{ $ingressValues.hostDomain }} + http: + paths: + {{- range $p := $ingressValues.paths }} + - path: {{ tpl $p $ }} + pathType: {{ $pathType }} + backend: + service: + name: {{ $fullName }}-{{ $i }} + port: + name: http + {{- end -}} + {{- if or $ingressValues.tlsSecretName $ingressValues.tlsSecretPerReplica.enabled }} + tls: + - hosts: + - {{ $ingressValues.hostPrefix }}-{{ $i }}.{{ $ingressValues.hostDomain }} + {{- if $ingressValues.tlsSecretPerReplica.enabled }} + secretName: {{ $ingressValues.tlsSecretPerReplica.prefix }}-{{ $i }} + {{- else }} + secretName: {{ $ingressValues.tlsSecretName }} + {{- end }} + {{- end }} +{{- end -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/pdb.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/pdb.yaml new file mode 100644 index 0000000000..103e9ecde1 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/pdb.yaml @@ -0,0 +1,14 @@ +{{- if .Values.podDisruptionBudget }} +apiVersion: {{ include "alertmanager.pdb.apiVersion" . }} +kind: PodDisruptionBudget +metadata: + name: {{ include "alertmanager.fullname" . }} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + namespace: {{ include "alertmanager.namespace" . 
}} +spec: + selector: + matchLabels: + {{- include "alertmanager.selectorLabels" . | nindent 6 }} + {{- toYaml .Values.podDisruptionBudget | nindent 2 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/serviceaccount.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/serviceaccount.yaml new file mode 100644 index 0000000000..bc9ccaaff9 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/serviceaccount.yaml @@ -0,0 +1,14 @@ +{{- if .Values.serviceAccount.create }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "alertmanager.serviceAccountName" . }} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.serviceAccount.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +automountServiceAccountToken: {{ .Values.automountServiceAccountToken }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/serviceperreplica.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/serviceperreplica.yaml new file mode 100644 index 0000000000..faa75b3ba2 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/serviceperreplica.yaml @@ -0,0 +1,44 @@ +{{- if and .Values.servicePerReplica.enabled }} +{{- $count := .Values.replicaCount | int -}} +{{- $serviceValues := .Values.servicePerReplica -}} +apiVersion: v1 +kind: List +metadata: + name: {{ include "alertmanager.fullname" . }}-serviceperreplica + namespace: {{ include "alertmanager.namespace" . 
}} +items: +{{- range $i, $e := until $count }} + - apiVersion: v1 + kind: Service + metadata: + name: {{ include "alertmanager.fullname" $ }}-{{ $i }} + namespace: {{ include "alertmanager.namespace" $ }} + labels: + {{- include "alertmanager.labels" $ | nindent 8 }} + {{- if $serviceValues.annotations }} + annotations: +{{ toYaml $serviceValues.annotations | indent 8 }} + {{- end }} + spec: + {{- if $serviceValues.clusterIP }} + clusterIP: {{ $serviceValues.clusterIP }} + {{- end }} + {{- if $serviceValues.loadBalancerSourceRanges }} + loadBalancerSourceRanges: + {{- range $cidr := $serviceValues.loadBalancerSourceRanges }} + - {{ $cidr }} + {{- end }} + {{- end }} + {{- if ne $serviceValues.type "ClusterIP" }} + externalTrafficPolicy: {{ $serviceValues.externalTrafficPolicy }} + {{- end }} + ports: + - name: http + port: {{ $.Values.service.port }} + targetPort: http + selector: + {{- include "alertmanager.selectorLabels" $ | nindent 8 }} + statefulset.kubernetes.io/pod-name: {{ include "alertmanager.fullname" $ }}-{{ $i }} + type: "{{ $serviceValues.type }}" +{{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/services.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/services.yaml new file mode 100644 index 0000000000..eefb9ce161 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/services.yaml @@ -0,0 +1,75 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ include "alertmanager.fullname" . }} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.service.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.service.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . 
}} +spec: + {{- if .Values.service.ipDualStack.enabled }} + ipFamilies: {{ toYaml .Values.service.ipDualStack.ipFamilies | nindent 4 }} + ipFamilyPolicy: {{ .Values.service.ipDualStack.ipFamilyPolicy }} + {{- end }} + type: {{ .Values.service.type }} + {{- with .Values.service.loadBalancerIP }} + loadBalancerIP: {{ . }} + {{- end }} + {{- with .Values.service.loadBalancerSourceRanges }} + loadBalancerSourceRanges: + {{- range $cidr := . }} + - {{ $cidr }} + {{- end }} + {{- end }} + ports: + - port: {{ .Values.service.port }} + targetPort: http + protocol: TCP + name: http + {{- if (and (eq .Values.service.type "NodePort") .Values.service.nodePort) }} + nodePort: {{ .Values.service.nodePort }} + {{- end }} + {{- with .Values.service.extraPorts }} + {{- toYaml . | nindent 4 }} + {{- end }} + selector: + {{- include "alertmanager.selectorLabels" . | nindent 4 }} +--- +apiVersion: v1 +kind: Service +metadata: + name: {{ include "alertmanager.fullname" . }}-headless + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.service.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +spec: + clusterIP: None + ports: + - port: {{ .Values.service.port }} + targetPort: http + protocol: TCP + name: http + {{- if or (gt (int .Values.replicaCount) 1) (.Values.additionalPeers) }} + - port: {{ .Values.service.clusterPort }} + targetPort: clusterpeer-tcp + protocol: TCP + name: cluster-tcp + - port: {{ .Values.service.clusterPort }} + targetPort: clusterpeer-udp + protocol: UDP + name: cluster-udp + {{- end }} + {{- with .Values.service.extraPorts }} + {{- toYaml . | nindent 4 }} + {{- end }} + selector: + {{- include "alertmanager.selectorLabels" . 
| nindent 4 }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/statefulset.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/statefulset.yaml new file mode 100644 index 0000000000..2bdafc86db --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/statefulset.yaml @@ -0,0 +1,251 @@ +{{- $svcClusterPort := .Values.service.clusterPort }} +apiVersion: apps/v1 +kind: StatefulSet +metadata: + name: {{ include "alertmanager.fullname" . }} + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.statefulSet.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +spec: + replicas: {{ .Values.replicaCount }} + minReadySeconds: {{ .Values.minReadySeconds }} + revisionHistoryLimit: {{ .Values.revisionHistoryLimit }} + selector: + matchLabels: + {{- include "alertmanager.selectorLabels" . | nindent 6 }} + serviceName: {{ include "alertmanager.fullname" . }}-headless + template: + metadata: + labels: + {{- include "alertmanager.selectorLabels" . | nindent 8 }} + {{- with .Values.podLabels }} + {{- toYaml . | nindent 8 }} + {{- end }} + annotations: + {{- if not .Values.configmapReload.enabled }} + checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} + {{- end }} + {{- with .Values.podAnnotations }} + {{- toYaml . | nindent 8 }} + {{- end }} + spec: + automountServiceAccountToken: {{ .Values.automountServiceAccountToken }} + {{- with .Values.imagePullSecrets }} + imagePullSecrets: + {{- toYaml . | nindent 8 }} + {{- end }} + serviceAccountName: {{ include "alertmanager.serviceAccountName" . }} + {{- with .Values.dnsConfig }} + dnsConfig: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.hostAliases }} + hostAliases: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.nodeSelector }} + nodeSelector: + {{- toYaml . 
| nindent 8 }} + {{- end }} + {{- with .Values.schedulerName }} + schedulerName: {{ . }} + {{- end }} + {{- if or .Values.podAntiAffinity .Values.affinity }} + affinity: + {{- end }} + {{- with .Values.affinity }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- if eq .Values.podAntiAffinity "hard" }} + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - topologyKey: {{ .Values.podAntiAffinityTopologyKey }} + labelSelector: + matchExpressions: + - {key: app.kubernetes.io/name, operator: In, values: [{{ include "alertmanager.name" . }}]} + {{- else if eq .Values.podAntiAffinity "soft" }} + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + topologyKey: {{ .Values.podAntiAffinityTopologyKey }} + labelSelector: + matchExpressions: + - {key: app.kubernetes.io/name, operator: In, values: [{{ include "alertmanager.name" . }}]} + {{- end }} + {{- with .Values.priorityClassName }} + priorityClassName: {{ . }} + {{- end }} + {{- with .Values.topologySpreadConstraints }} + topologySpreadConstraints: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.tolerations }} + tolerations: + {{- toYaml . | nindent 8 }} + {{- end }} + securityContext: + {{- toYaml .Values.podSecurityContext | nindent 8 }} + {{- with .Values.extraInitContainers }} + initContainers: + {{- toYaml . | nindent 8 }} + {{- end }} + containers: + {{- if .Values.configmapReload.enabled }} + - name: {{ .Chart.Name }}-{{ .Values.configmapReload.name }} + image: "{{ .Values.configmapReload.image.repository }}:{{ .Values.configmapReload.image.tag }}" + imagePullPolicy: "{{ .Values.configmapReload.image.pullPolicy }}" + {{- with .Values.configmapReload.extraEnv }} + env: + {{- toYaml . 
| nindent 12 }} + {{- end }} + args: + {{- if and (hasKey .Values.configmapReload.extraArgs "config-file" | not) (hasKey .Values.configmapReload.extraArgs "watched-dir" | not) }} + - --watched-dir=/etc/alertmanager + {{- end }} + {{- if not (hasKey .Values.configmapReload.extraArgs "reload-url") }} + - --reload-url=http://127.0.0.1:9093/-/reload + {{- end }} + {{- range $key, $value := .Values.configmapReload.extraArgs }} + - --{{ $key }}={{ $value }} + {{- end }} + resources: + {{- toYaml .Values.configmapReload.resources | nindent 12 }} + {{- with .Values.configmapReload.containerPort }} + ports: + - containerPort: {{ . }} + {{- end }} + {{- with .Values.configmapReload.securityContext }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: config + mountPath: /etc/alertmanager + {{- if .Values.configmapReload.extraVolumeMounts }} + {{- toYaml .Values.configmapReload.extraVolumeMounts | nindent 12 }} + {{- end }} + {{- end }} + - name: {{ .Chart.Name }} + securityContext: + {{- toYaml .Values.securityContext | nindent 12 }} + image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" + imagePullPolicy: {{ .Values.image.pullPolicy }} + env: + - name: POD_IP + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: status.podIP + {{- if .Values.extraEnv }} + {{- toYaml .Values.extraEnv | nindent 12 }} + {{- end }} + {{- with .Values.command }} + command: + {{- toYaml . | nindent 12 }} + {{- end }} + args: + - --storage.path=/alertmanager + {{- if not (hasKey .Values.extraArgs "config.file") }} + - --config.file=/etc/alertmanager/alertmanager.yml + {{- end }} + {{- if or (gt (int .Values.replicaCount) 1) (.Values.additionalPeers) }} + - --cluster.advertise-address=[$(POD_IP)]:{{ $svcClusterPort }} + - --cluster.listen-address=0.0.0.0:{{ $svcClusterPort }} + {{- end }} + {{- if gt (int .Values.replicaCount) 1}} + {{- $fullName := include "alertmanager.fullname" . 
}} + {{- range $i := until (int .Values.replicaCount) }} + - --cluster.peer={{ $fullName }}-{{ $i }}.{{ $fullName }}-headless:{{ $svcClusterPort }} + {{- end }} + {{- end }} + {{- if .Values.additionalPeers }} + {{- range $item := .Values.additionalPeers }} + - --cluster.peer={{ $item }} + {{- end }} + {{- end }} + {{- range $key, $value := .Values.extraArgs }} + - --{{ $key }}={{ $value }} + {{- end }} + {{- if .Values.baseURL }} + - --web.external-url={{ .Values.baseURL }} + {{- end }} + ports: + - name: http + containerPort: 9093 + protocol: TCP + {{- if or (gt (int .Values.replicaCount) 1) (.Values.additionalPeers) }} + - name: clusterpeer-tcp + containerPort: {{ $svcClusterPort }} + protocol: TCP + - name: clusterpeer-udp + containerPort: {{ $svcClusterPort }} + protocol: UDP + {{- end }} + livenessProbe: + {{- toYaml .Values.livenessProbe | nindent 12 }} + readinessProbe: + {{- toYaml .Values.readinessProbe | nindent 12 }} + resources: + {{- toYaml .Values.resources | nindent 12 }} + volumeMounts: + {{- if .Values.config.enabled }} + - name: config + mountPath: /etc/alertmanager + {{- end }} + {{- range .Values.extraSecretMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath }} + readOnly: {{ .readOnly }} + {{- end }} + - name: storage + mountPath: /alertmanager + {{- if .Values.extraVolumeMounts }} + {{- toYaml .Values.extraVolumeMounts | nindent 12 }} + {{- end }} + {{- with .Values.extraContainers }} + {{- toYaml . | nindent 8 }} + {{- end }} + volumes: + {{- if .Values.config.enabled }} + - name: config + configMap: + name: {{ include "alertmanager.fullname" . }} + {{- end }} + {{- range .Values.extraSecretMounts }} + - name: {{ .name }} + secret: + secretName: {{ .secretName }} + {{- with .optional }} + optional: {{ . 
}} + {{- end }} + {{- end }} + {{- if .Values.extraVolumes }} + {{- toYaml .Values.extraVolumes | nindent 8 }} + {{- end }} + {{- if .Values.persistence.enabled }} + volumeClaimTemplates: + - metadata: + name: storage + spec: + accessModes: + {{- toYaml .Values.persistence.accessModes | nindent 10 }} + resources: + requests: + storage: {{ .Values.persistence.size }} + {{- if .Values.persistence.storageClass }} + {{- if (eq "-" .Values.persistence.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: {{ .Values.persistence.storageClass }} + {{- end }} + {{- end }} + {{- else }} + - name: storage + emptyDir: {} + {{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/tests/test-connection.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/tests/test-connection.yaml new file mode 100644 index 0000000000..410eba5bd8 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/templates/tests/test-connection.yaml @@ -0,0 +1,20 @@ +{{- if .Values.testFramework.enabled }} +apiVersion: v1 +kind: Pod +metadata: + name: "{{ include "alertmanager.fullname" . }}-test-connection" + labels: + {{- include "alertmanager.labels" . | nindent 4 }} + {{- with .Values.testFramework.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + namespace: {{ include "alertmanager.namespace" . }} +spec: + containers: + - name: wget + image: busybox + command: ['wget'] + args: ['{{ include "alertmanager.fullname" . 
}}:{{ .Values.service.port }}'] + restartPolicy: Never +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/values.schema.json b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/values.schema.json new file mode 100644 index 0000000000..48c6e9a631 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/values.schema.json @@ -0,0 +1,923 @@ +{ + "$schema": "http://json-schema.org/draft-07/schema", + "title": "alertmanager", + "description": "The Alertmanager handles alerts sent by client applications such as the Prometheus server.", + "type": "object", + "required": [ + "replicaCount", + "image", + "serviceAccount", + "service", + "persistence", + "config" + ], + "definitions": { + "image": { + "description": "Container image parameters.", + "type": "object", + "required": ["repository"], + "additionalProperties": false, + "properties": { + "repository": { + "description": "Image repository. Path to the image with registry(quay.io) or without(prometheus/alertmanager) for docker.io.", + "type": "string" + }, + "pullPolicy": { + "description": "Image pull policy. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. 
Cannot be updated.", + "type": "string", + "enum": [ + "Never", + "IfNotPresent", + "Always" + ], + "default": "IfNotPresent" + }, + "tag": { + "description": "Use chart appVersion by default.", + "type": "string", + "default": "" + } + } + }, + "resources": { + "description": "Resource limits and requests for the Container.", + "type": "object", + "properties": { + "limits": { + "description": "Resource limits for the Container.", + "type": "object", + "properties": { + "cpu": { + "description": "CPU request for the Container.", + "type": "string" + }, + "memory": { + "description": "Memory request for the Container.", + "type": "string" + } + } + }, + "requests": { + "description": "Resource requests for the Container.", + "type": "object", + "properties": { + "cpu": { + "description": "CPU request for the Container.", + "type": "string" + }, + "memory": { + "description": "Memory request for the Container.", + "type": "string" + } + } + } + } + }, + "securityContext": { + "description": "Security context for the container.", + "type": "object", + "properties": { + "capabilities": { + "description": "Specifies the capabilities to be dropped by the container.", + "type": "object", + "properties": { + "drop": { + "description": "List of capabilities to be dropped.", + "type": "array", + "items": { + "type": "string" + } + } + } + }, + "readOnlyRootFilesystem": { + "description": "Specifies whether the root file system should be mounted as read-only.", + "type": "boolean" + }, + "runAsUser": { + "description": "Specifies the UID (User ID) to run the container as.", + "type": "integer" + }, + "runAsNonRoot": { + "description": "Specifies whether to run the container as a non-root user.", + "type": "boolean" + }, + "runAsGroup": { + "description": "Specifies the GID (Group ID) to run the container as.", + "type": "integer" + } + } + }, + "volumeMounts": { + "description": "List of volume mounts for the Container.", + "type": "array", + "items": { + "description": 
"Volume mounts for the Container.", + "type": "object", + "required": ["name", "mountPath"], + "properties": { + "name": { + "description": "The name of the volume to mount.", + "type": "string" + }, + "mountPath": { + "description": "The mount path for the volume.", + "type": "string" + }, + "readOnly": { + "description": "Specifies if the volume should be mounted in read-only mode.", + "type": "boolean" + } + } + } + }, + "env": { + "description": "List of environment variables for the Container.", + "type": "array", + "items": { + "description": "Environment variables for the Container.", + "type": "object", + "required": ["name"], + "properties": { + "name": { + "description": "The name of the environment variable.", + "type": "string" + }, + "value": { + "description": "The value of the environment variable.", + "type": "string" + } + } + } + }, + "config": { + "description": "https://prometheus.io/docs/alerting/latest/configuration/", + "duration": { + "type": "string", + "pattern": "^((([0-9]+)y)?(([0-9]+)w)?(([0-9]+)d)?(([0-9]+)h)?(([0-9]+)m)?(([0-9]+)s)?(([0-9]+)ms)?|0)$" + }, + "labelname": { + "type": "string", + "pattern": "^[a-zA-Z_][a-zA-Z0-9_]*$|^...$" + }, + "route": { + "description": "Alert routing configuration.", + "type": "object", + "properties": { + "receiver": { + "description": "The default receiver to send alerts to.", + "type": "string" + }, + "group_by": { + "description": "The labels by which incoming alerts are grouped together.", + "type": "array", + "items": { + "type": "string", + "$ref": "#/definitions/config/labelname" + } + }, + "continue": { + "description": "Whether an alert should continue matching subsequent sibling nodes.", + "type": "boolean", + "default": false + }, + "matchers": { + "description": "A list of matchers that an alert has to fulfill to match the node.", + "type": "array", + "items": { + "type": "string" + } + }, + "group_wait": { + "description": "How long to initially wait to send a notification for a group 
of alerts.", + "$ref": "#/definitions/config/duration" + }, + "group_interval": { + "description": "How long to wait before sending a notification about new alerts that are added to a group of alerts for which an initial notification has already been sent.", + "$ref": "#/definitions/config/duration" + }, + "repeat_interval": { + "description": "How long to wait before sending a notification again if it has already been sent successfully for an alert.", + "$ref": "#/definitions/config/duration" + }, + "mute_time_intervals": { + "description": "Times when the route should be muted.", + "type": "array", + "items": { + "type": "string" + } + }, + "active_time_intervals": { + "description": "Times when the route should be active.", + "type": "array", + "items": { + "type": "string" + } + }, + "routes": { + "description": "Zero or more child routes.", + "type": "array", + "items": { + "type": "object", + "$ref": "#/definitions/config/route" + } + } + } + } + } + }, + "properties": { + "replicaCount": { + "description": "Number of desired pods.", + "type": "integer", + "default": 1, + "minimum": 0 + }, + "image": { + "description": "Container image parameters.", + "$ref": "#/definitions/image" + }, + "baseURL": { + "description": "External URL where alertmanager is reachable.", + "type": "string", + "default": "", + "examples": [ + "https://alertmanager.example.com" + ] + }, + "extraArgs": { + "description": "Additional alertmanager container arguments. 
Use args without '--', only 'key: value' syntax.", + "type": "object", + "default": {} + }, + "extraSecretMounts": { + "description": "Additional Alertmanager Secret mounts.", + "type": "array", + "default": [], + "items": { + "type": "object", + "required": ["name", "mountPath", "secretName"], + "properties": { + "name": { + "type": "string" + }, + "mountPath": { + "type": "string" + }, + "subPath": { + "type": "string", + "default": "" + }, + "secretName": { + "type": "string" + }, + "readOnly": { + "type": "boolean", + "default": false + } + } + } + }, + "imagePullSecrets": { + "description": "The property allows you to configure multiple image pull secrets.", + "type": "array", + "default": [], + "items": { + "type": "object", + "required": ["name"], + "properties": { + "name": { + "description": "Specifies the Secret name of the image pull secret.", + "type": "string" + } + } + } + }, + "nameOverride": { + "description": "Override value for the name of the Helm chart.", + "type": "string", + "default": "" + }, + "fullnameOverride": { + "description": "Override value for the fully qualified app name.", + "type": "string", + "default": "" + }, + "namespaceOverride": { + "description": "Override deployment namespace.", + "type": "string", + "default": "" + }, + "automountServiceAccountToken": { + "description": "Specifies whether to automatically mount the ServiceAccount token into the Pod's filesystem.", + "type": "boolean", + "default": true + }, + "serviceAccount": { + "description": "Contains properties related to the service account configuration.", + "type": "object", + "required": ["create"], + "properties": { + "create": { + "description": "Specifies whether a service account should be created.", + "type": "boolean", + "default": true + }, + "annotations": { + "description": "Annotations to add to the service account.", + "type": "object", + "default": {} + }, + "name": { + "description": "The name of the service account to use. 
If not set and create is true, a name is generated using the fullname template.", + "type": "string", + "default": "" + } + } + }, + "schedulerName": { + "description": "Sets the schedulerName in the alertmanager pod.", + "type": "string", + "default": "" + }, + "priorityClassName": { + "description": "Sets the priorityClassName in the alertmanager pod.", + "type": "string", + "default": "" + }, + "podSecurityContext": { + "description": "Pod security context configuration.", + "type": "object", + "properties": { + "fsGroup": { + "description": "The fsGroup value for the pod's security context.", + "type": "integer", + "default": 65534 + }, + "runAsUser": { + "description": "The UID to run the pod's containers as.", + "type": "integer" + }, + "runAsGroup": { + "description": "The GID to run the pod's containers as.", + "type": "integer" + } + } + }, + "dnsConfig": { + "description": "DNS configuration for the pod.", + "type": "object", + "properties": { + "nameservers": { + "description": "List of DNS server IP addresses.", + "type": "array", + "items": { + "type": "string" + } + }, + "searches": { + "description": "List of DNS search domains.", + "type": "array", + "items": { + "type": "string" + } + }, + "options": { + "description": "List of DNS options.", + "type": "array", + "items": { + "description": "DNS options.", + "type": "object", + "required": ["name"], + "properties": { + "name": { + "description": "The name of the DNS option.", + "type": "string" + }, + "value": { + "description": "The value of the DNS option.", + "type": "string" + } + } + } + } + } + }, + "hostAliases": { + "description": "List of host aliases.", + "type": "array", + "items": { + "description": "Host aliases configuration.", + "type": "object", + "required": ["ip", "hostnames"], + "properties": { + "ip": { + "description": "IP address associated with the host alias.", + "type": "string" + }, + "hostnames": { + "description": "List of hostnames associated with the IP address.", + 
"type": "array", + "items": { + "type": "string" + } + } + } + } + }, + "securityContext": { + "description": "Security context for the container.", + "$ref": "#/definitions/securityContext" + }, + "additionalPeers": { + "description": "Additional peers for a alertmanager.", + "type": "array", + "items": { + "type": "string" + } + }, + "extraInitContainers": { + "description": "Additional InitContainers to initialize the pod.", + "type": "array", + "default": [], + "items": { + "required": ["name", "image"], + "properties": { + "name": { + "description": "The name of the InitContainer.", + "type": "string" + }, + "image": { + "description": "The container image to use for the InitContainer.", + "type": "string" + }, + "pullPolicy": { + "description": "Image pull policy. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.", + "type": "string", + "enum": [ + "Never", + "IfNotPresent", + "Always" + ], + "default": "IfNotPresent" + }, + "command": { + "description": "The command to run in the InitContainer.", + "type": "array", + "items": { + "type": "string" + } + }, + "args": { + "description": "Additional command arguments for the InitContainer.", + "type": "array", + "items": { + "type": "string" + } + }, + "ports": { + "description": "List of ports to expose from the container.", + "type": "array", + "items": { + "type": "object" + } + }, + "env": { + "description": "List of environment variables for the InitContainer.", + "$ref": "#/definitions/env" + }, + "envFrom": { + "description": "List of sources to populate environment variables in the container.", + "type": "array", + "items": { + "type": "object" + } + }, + "volumeMounts": { + "description": "List of volume mounts for the InitContainer.", + "$ref": "#/definitions/volumeMounts" + }, + "resources": { + "description": "Resource requirements for the InitContainer.", + "$ref": "#/definitions/resources" + }, + "securityContext": { + "$ref": 
"#/definitions/securityContext", + "description": "The security context for the InitContainer." + } + } + } + }, + "extraContainers": { + "description": "Additional containers to add to the stateful set.", + "type": "array", + "default": [], + "items": { + "required": ["name", "image"], + "properties": { + "name": { + "description": "The name of the InitContainer.", + "type": "string" + }, + "image": { + "description": "The container image to use for the InitContainer.", + "type": "string" + }, + "pullPolicy": { + "description": "Image pull policy. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.", + "type": "string", + "enum": [ + "Never", + "IfNotPresent", + "Always" + ], + "default": "IfNotPresent" + }, + "command": { + "description": "The command to run in the InitContainer.", + "type": "array", + "items": { + "type": "string" + } + }, + "args": { + "description": "Additional command arguments for the InitContainer.", + "type": "array", + "items": { + "type": "string" + } + }, + "ports": { + "description": "List of ports to expose from the container.", + "type": "array", + "items": { + "type": "object" + } + }, + "env": { + "description": "List of environment variables for the InitContainer.", + "$ref": "#/definitions/env" + }, + "envFrom": { + "description": "List of sources to populate environment variables in the container.", + "type": "array", + "items": { + "type": "object" + } + }, + "volumeMounts": { + "description": "List of volume mounts for the InitContainer.", + "$ref": "#/definitions/volumeMounts" + }, + "resources": { + "description": "Resource requirements for the InitContainer.", + "$ref": "#/definitions/resources" + }, + "securityContext": { + "$ref": "#/definitions/securityContext", + "description": "The security context for the InitContainer." 
+ } + } + } + }, + "resources": { + "description": "Resource limits and requests for the pod.", + "$ref": "#/definitions/resources" + }, + "livenessProbe": { + "description": "Liveness probe configuration.", + "type": "object" + }, + "readinessProbe": { + "description": "Readiness probe configuration.", + "type": "object" + }, + "service": { + "description": "Service configuration.", + "type": "object", + "required": ["type", "port"], + "properties": { + "annotations": { + "description": "Annotations to add to the service.", + "type": "object" + }, + "type": { + "description": "Service type.", + "type": "string" + }, + "port": { + "description": "Port number for the service.", + "type": "integer" + }, + "clusterPort": { + "description": "Port number for the cluster.", + "type": "integer" + }, + "loadBalancerIP": { + "description": "External IP to assign when the service type is LoadBalancer.", + "type": "string" + }, + "loadBalancerSourceRanges": { + "description": "IP ranges to allow access to the loadBalancerIP.", + "type": "array", + "items": { + "type": "string" + } + }, + "nodePort": { + "description": "Specific nodePort to force when service type is NodePort.", + "type": "integer" + } + } + }, + "ingress": { + "description": "Ingress configuration.", + "type": "object", + "properties": { + "enabled": { + "description": "Indicates if Ingress is enabled.", + "type": "boolean" + }, + "className": { + "description": "Ingress class name.", + "type": "string" + }, + "annotations": { + "description": "Annotations to add to the Ingress.", + "type": "object" + }, + "hosts": { + "description": "Host and path configuration for the Ingress.", + "type": "array", + "items": { + "type": "object", + "properties": { + "host": { + "description": "Host name for the Ingress.", + "type": "string" + }, + "paths": { + "description": "Path configuration for the Ingress.", + "type": "array", + "items": { + "type": "object", + "properties": { + "path": { + "description": "Path for the 
Ingress.", + "type": "string" + }, + "pathType": { + "description": "Path type for the Ingress.", + "type": "string" + } + } + } + } + } + } + }, + "tls": { + "description": "TLS configuration for the Ingress.", + "type": "array", + "items": { + "type": "object", + "properties": { + "secretName": { + "description": "Name of the secret for TLS.", + "type": "string" + }, + "hosts": { + "description": "Host names for the TLS configuration.", + "type": "array", + "items": { + "type": "string" + } + } + } + } + } + } + }, + "nodeSelector": { + "description": "Node selector for pod assignment.", + "type": "object" + }, + "tolerations": { + "description": "Tolerations for pod assignment.", + "type": "array" + }, + "affinity": { + "description": "Affinity rules for pod assignment.", + "type": "object" + }, + "podAntiAffinity": { + "description": "Pod anti-affinity configuration.", + "type": "string", + "enum": ["", "soft", "hard"], + "default": "" + }, + "podAntiAffinityTopologyKey": { + "description": "Topology key to use for pod anti-affinity.", + "type": "string" + }, + "topologySpreadConstraints": { + "description": "Topology spread constraints for pod assignment.", + "type": "array", + "items": { + "type": "object", + "required": ["maxSkew", "topologyKey", "whenUnsatisfiable", "labelSelector"], + "properties": { + "maxSkew": { + "type": "integer" + }, + "topologyKey": { + "type": "string" + }, + "whenUnsatisfiable": { + "type": "string", + "enum": ["DoNotSchedule", "ScheduleAnyway"] + }, + "labelSelector": { + "type": "object", + "required": ["matchLabels"], + "properties": { + "matchLabels": { + "type": "object" + } + } + } + } + } + }, + "statefulSet": { + "description": "StatefulSet configuration for managing pods.", + "type": "object", + "properties": { + "annotations": { + "type": "object" + } + } + }, + "podAnnotations": { + "description": "Annotations to add to the pods.", + "type": "object" + }, + "podLabels": { + "description": "Labels to add to the pods.", + 
"type": "object" + }, + "podDisruptionBudget": { + "description": "Pod disruption budget configuration.", + "type": "object", + "properties": { + "maxUnavailable": { + "type": "integer" + }, + "minAvailable": { + "type": "integer" + } + } + }, + "command": { + "description": "The command to be executed in the container.", + "type": "array", + "items": { + "type": "string" + } + }, + "persistence": { + "description": "Persistence configuration for storing data.", + "type": "object", + "required": ["enabled", "size"], + "properties": { + "enabled": { + "type": "boolean" + }, + "storageClass": { + "type": "string" + }, + "accessModes": { + "type": "array", + "items": { + "type": "string" + } + }, + "size": { + "type": "string" + } + } + }, + "configAnnotations": { + "description": "Annotations to be added to the Alertmanager configuration.", + "type": "object" + }, + "config": { + "description": "Alertmanager configuration.", + "type": "object", + "properties": { + "enabled": { + "description": "Whether to create alermanager configmap or not.", + "type": "boolean" + }, + "global": { + "description": "Global configuration options.", + "type": "object" + }, + "templates": { + "description": "Alertmanager template files.", + "type": "array", + "items": { + "type": "string" + } + }, + "receivers": { + "description": "Alert receivers configuration.", + "type": "array", + "items": { + "type": "object", + "required": ["name"], + "properties": { + "name": { + "description": "The unique name of the receiver.", + "type": "string" + } + } + } + }, + "route": { + "description": "Alert routing configuration.", + "type": "object", + "$ref": "#/definitions/config/route" + } + } + }, + "configmapReload": { + "description": "Monitors ConfigMap changes and POSTs to a URL.", + "type": "object", + "properties": { + "enabled": { + "description": "Specifies whether the configmap-reload container should be deployed.", + "type": "boolean", + "default": false + }, + "name": { + "description": 
"The name of the configmap-reload container.", + "type": "string" + }, + "image": { + "description": "The container image for the configmap-reload container.", + "$ref": "#/definitions/image" + }, + "containerPort": { + "description": "Port number for the configmap-reload container.", + "type": "integer" + }, + "resources": { + "description": "Resource requests and limits for the configmap-reload container.", + "$ref": "#/definitions/resources" + } + } + }, + "templates": { + "description": "Custom templates used by Alertmanager.", + "type": "object" + }, + "extraVolumeMounts": { + "description": "List of volume mounts for the Container.", + "$ref": "#/definitions/volumeMounts" + }, + "extraVolumes": { + "description": "Additional volumes to be mounted in the Alertmanager pod.", + "type": "array", + "default": [], + "items": { + "type": "object", + "required": ["name"], + "properties": { + "name": { + "type": "string" + } + } + } + }, + "extraEnv": { + "description": "List of environment variables for the Container.", + "$ref": "#/definitions/env" + }, + "testFramework": { + "description": "Configuration for the test Pod.", + "type": "object", + "properties": { + "enabled": { + "description": "Specifies whether the test Pod is enabled.", + "type": "boolean", + "default": false + }, + "annotations": { + "description": "Annotations to be added to the test Pod.", + "type": "object" + } + } + } + } +} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/values.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/values.yaml new file mode 100644 index 0000000000..f8a9d243b3 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/alertmanager/values.yaml @@ -0,0 +1,379 @@ +# yaml-language-server: $schema=values.schema.json +# Default values for alertmanager. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. 
+ +replicaCount: 1 + +# Number of old revisions to retain to allow rollback +# Default Kubernetes value is set to 10 +revisionHistoryLimit: 10 + +image: + repository: quay.io/prometheus/alertmanager + pullPolicy: IfNotPresent + # Overrides the image tag whose default is the chart appVersion. + tag: "" + +# Full external URL where alertmanager is reachable, used for backlinks. +baseURL: "" + +extraArgs: {} + +## Additional Alertmanager Secret mounts +# Defines additional mounts with secrets. Secrets must be manually created in the namespace. +extraSecretMounts: [] + # - name: secret-files + # mountPath: /etc/secrets + # subPath: "" + # secretName: alertmanager-secret-files + # readOnly: true + +imagePullSecrets: [] +nameOverride: "" +fullnameOverride: "" +## namespaceOverride overrides the namespace in which the resources will be deployed +namespaceOverride: "" + +automountServiceAccountToken: true + +serviceAccount: + # Specifies whether a service account should be created + create: true + # Annotations to add to the service account + annotations: {} + # The name of the service account to use. 
+ # If not set and create is true, a name is generated using the fullname template + name: "" + +# Sets priorityClassName in alertmanager pod +priorityClassName: "" + +# Sets schedulerName in alertmanager pod +schedulerName: "" + +podSecurityContext: + fsGroup: 65534 +dnsConfig: {} + # nameservers: + # - 1.2.3.4 + # searches: + # - ns1.svc.cluster-domain.example + # - my.dns.search.suffix + # options: + # - name: ndots + # value: "2" + # - name: edns0 +hostAliases: [] + # - ip: "127.0.0.1" + # hostnames: + # - "foo.local" + # - "bar.local" + # - ip: "10.1.2.3" + # hostnames: + # - "foo.remote" + # - "bar.remote" +securityContext: + # capabilities: + # drop: + # - ALL + # readOnlyRootFilesystem: true + runAsUser: 65534 + runAsNonRoot: true + runAsGroup: 65534 + +additionalPeers: [] + +## Additional InitContainers to initialize the pod +## +extraInitContainers: [] + +## Additional containers to add to the stateful set. This allows setting up sidecar containers, such as a proxy to integrate +## alertmanager with an external tool (for example, Teams) that has no direct integration. +## +extraContainers: [] + +livenessProbe: + httpGet: + path: / + port: http + +readinessProbe: + httpGet: + path: / + port: http + +service: + annotations: {} + labels: {} + type: ClusterIP + port: 9093 + clusterPort: 9094 + loadBalancerIP: "" # Assign ext IP when Service type is LoadBalancer + loadBalancerSourceRanges: [] # Only allow access to loadBalancerIP from these IPs + # if you want to force a specific nodePort. 
Must be used with service.type=NodePort + # nodePort: + + # Optionally specify extra list of additional ports exposed on both services + extraPorts: [] + + # ip dual stack + ipDualStack: + enabled: false + ipFamilies: ["IPv6", "IPv4"] + ipFamilyPolicy: "PreferDualStack" + +# Configuration for creating a separate Service for each statefulset Alertmanager replica +# +servicePerReplica: + enabled: false + annotations: {} + + # LoadBalancer source IP ranges + # Only used if servicePerReplica.type is "LoadBalancer" + loadBalancerSourceRanges: [] + + # Denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints + # + externalTrafficPolicy: Cluster + + # Service type + # + type: ClusterIP + +ingress: + enabled: false + className: "" + annotations: {} + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: "true" + hosts: + - host: alertmanager.domain.com + paths: + - path: / + pathType: ImplementationSpecific + tls: [] + # - secretName: chart-example-tls + # hosts: + # - alertmanager.domain.com + +# Configuration for creating an Ingress that will map to each Alertmanager replica service +# alertmanager.servicePerReplica must be enabled +# +ingressPerReplica: + enabled: false + + # className for the ingresses + # + className: "" + + annotations: {} + labels: {} + + # Final form of the hostname for each per replica ingress is + # {{ ingressPerReplica.hostPrefix }}-{{ $replicaNumber }}.{{ ingressPerReplica.hostDomain }} + # + # Prefix for the per replica ingress that will have `-$replicaNumber` + # appended to the end + hostPrefix: "alertmanager" + # Domain that will be used for the per replica ingress + hostDomain: "domain.com" + + # Paths to use for ingress rules + # + paths: + - / + + # PathType for ingress rules + # + pathType: ImplementationSpecific + + # Secret name containing the TLS certificate for alertmanager per replica ingress + # Secret must be manually created in the namespace + tlsSecretName: "" + + # Separate 
secret for each per replica Ingress. Can be used together with cert-manager + # + tlsSecretPerReplica: + enabled: false + # Final form of the secret for each per replica ingress is + # {{ tlsSecretPerReplica.prefix }}-{{ $replicaNumber }} + # + prefix: "alertmanager" + +resources: {} + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 10m + # memory: 32Mi + +nodeSelector: {} + +tolerations: [] + +affinity: {} + +## Pod anti-affinity can prevent the scheduler from placing Alertmanager replicas on the same node. +## The default value "soft" means that the scheduler should *prefer* to not schedule two replica pods onto the same node but no guarantee is provided. +## The value "hard" means that the scheduler is *required* to not schedule two replica pods onto the same node. +## The value "" will disable pod anti-affinity so that no anti-affinity rules will be configured. +## +podAntiAffinity: "" + +## If anti-affinity is enabled, sets the topologyKey to use for anti-affinity. +## This can be changed to, for example, failure-domain.beta.kubernetes.io/zone +## +podAntiAffinityTopologyKey: kubernetes.io/hostname + +## Topology spread constraints rely on node labels to identify the topology domain(s) that each Node is in. 
+## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ +topologySpreadConstraints: [] + # - maxSkew: 1 + # topologyKey: failure-domain.beta.kubernetes.io/zone + # whenUnsatisfiable: DoNotSchedule + # labelSelector: + # matchLabels: + # app.kubernetes.io/instance: alertmanager + +statefulSet: + annotations: {} + +## Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to +## be considered available. Defaults to 0 (pod will be considered available as soon as it is ready). +## This is an alpha field from kubernetes 1.22 until 1.24 which requires enabling the StatefulSetMinReadySeconds +## feature gate. +## Ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#minimum-ready-seconds +minReadySeconds: 0 + +podAnnotations: {} +podLabels: {} + +# Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ +podDisruptionBudget: {} + # maxUnavailable: 1 + # minAvailable: 1 + +command: [] + +persistence: + enabled: true + ## Persistent Volume Storage Class + ## If defined, storageClassName: + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. 
+ ## + # storageClass: "-" + accessModes: + - ReadWriteOnce + size: 50Mi + +configAnnotations: {} + ## For example if you want to provide private data from a secret vault + ## https://github.com/banzaicloud/bank-vaults/tree/main/charts/vault-secrets-webhook + ## P.s.: Add option `configMapMutation: true` for vault-secrets-webhook + # vault.security.banzaicloud.io/vault-role: "admin" + # vault.security.banzaicloud.io/vault-addr: "https://vault.vault.svc.cluster.local:8200" + # vault.security.banzaicloud.io/vault-skip-verify: "true" + # vault.security.banzaicloud.io/vault-path: "kubernetes" + ## Example for inject secret + # slack_api_url: '${vault:secret/data/slack-hook-alerts#URL}' + +config: + enabled: true + global: {} + # slack_api_url: '' + + templates: + - '/etc/alertmanager/*.tmpl' + + receivers: + - name: default-receiver + # slack_configs: + # - channel: '@you' + # send_resolved: true + + route: + group_wait: 10s + group_interval: 5m + receiver: default-receiver + repeat_interval: 3h + +## Monitors ConfigMap changes and POSTs to a URL +## Ref: https://github.com/prometheus-operator/prometheus-operator/tree/main/cmd/prometheus-config-reloader +## +configmapReload: + ## If false, the configmap-reload container will not be deployed + ## + enabled: false + + ## configmap-reload container name + ## + name: configmap-reload + + ## configmap-reload container image + ## + image: + repository: quay.io/prometheus-operator/prometheus-config-reloader + tag: v0.66.0 + pullPolicy: IfNotPresent + + # containerPort: 9533 + + ## configmap-reload resource requests and limits + ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## + resources: {} + + extraArgs: {} + + ## Optionally specify extra list of additional volumeMounts + extraVolumeMounts: [] + # - name: extras + # mountPath: /usr/share/extras + # readOnly: true + + ## Optionally specify extra environment variables to add to alertmanager container + extraEnv: [] + # - name: FOO + # value: BAR + + 
securityContext: {} + # capabilities: + # drop: + # - ALL + # readOnlyRootFilesystem: true + # runAsUser: 65534 + # runAsNonRoot: true + # runAsGroup: 65534 + +templates: {} +# alertmanager.tmpl: |- + +## Optionally specify extra list of additional volumeMounts +extraVolumeMounts: [] + # - name: extras + # mountPath: /usr/share/extras + # readOnly: true + +## Optionally specify extra list of additional volumes +extraVolumes: [] + # - name: extras + # emptyDir: {} + +## Optionally specify extra environment variables to add to alertmanager container +extraEnv: [] + # - name: FOO + # value: BAR + +testFramework: + enabled: false + annotations: + "helm.sh/hook": test-success + # "helm.sh/hook-delete-policy": "before-hook-creation,hook-succeeded" diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/.helmignore b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/.helmignore new file mode 100644 index 0000000000..f0c1319444 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/Chart.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/Chart.yaml new file mode 100644 index 0000000000..9dea659498 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/Chart.yaml @@ -0,0 +1,26 @@ +annotations: + artifacthub.io/license: Apache-2.0 + artifacthub.io/links: | + - name: Chart Source + url: https://github.com/prometheus-community/helm-charts +apiVersion: v2 +appVersion: 2.12.0 +description: Install kube-state-metrics to generate and expose cluster-level metrics +home: https://github.com/kubernetes/kube-state-metrics/ +keywords: +- metric +- monitoring +- prometheus +- kubernetes +maintainers: +- email: tariq.ibrahim@mulesoft.com + name: tariq1890 +- email: manuel@rueg.eu + name: mrueg +- email: david@0xdc.me + name: dotdc +name: kube-state-metrics +sources: +- https://github.com/kubernetes/kube-state-metrics/ +type: application +version: 5.21.0 diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/README.md b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/README.md new file mode 100644 index 0000000000..843be89e69 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/README.md @@ -0,0 +1,85 @@ +# kube-state-metrics Helm Chart + +Installs the [kube-state-metrics agent](https://github.com/kubernetes/kube-state-metrics). 
+ +## Get Repository Info + +```console +helm repo add prometheus-community https://prometheus-community.github.io/helm-charts +helm repo update +``` + +_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._ + + +## Install Chart + +```console +helm install [RELEASE_NAME] prometheus-community/kube-state-metrics [flags] +``` + +_See [configuration](#configuration) below._ + +_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._ + +## Uninstall Chart + +```console +helm uninstall [RELEASE_NAME] +``` + +This removes all the Kubernetes components associated with the chart and deletes the release. + +_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._ + +## Upgrading Chart + +```console +helm upgrade [RELEASE_NAME] prometheus-community/kube-state-metrics [flags] +``` + +_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._ + +### Migrating from stable/kube-state-metrics and kubernetes/kube-state-metrics + +You can upgrade in-place: + +1. [get repository info](#get-repository-info) +1. [upgrade](#upgrading-chart) your existing release name using the new chart repository + +## Upgrading to v3.0.0 + +v3.0.0 includes kube-state-metrics v2.0, see the [changelog](https://github.com/kubernetes/kube-state-metrics/blob/release-2.0/CHANGELOG.md) for major changes on the application side. + +The upgraded chart now has the following changes: + +* Dropped support for helm v2 (helm v3 or later is required) +* `collectors` key was renamed to `resources` +* `namespace` key was renamed to `namespaces` + +## Configuration + +See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). 
To see all configurable options with detailed comments: + +```console +helm show values prometheus-community/kube-state-metrics +``` + +### kube-rbac-proxy + +You can enable `kube-state-metrics` endpoint protection using `kube-rbac-proxy`. By setting `kubeRBACProxy.enabled: true`, this chart will deploy one RBAC proxy container per endpoint (metrics & telemetry). +To authorize access, authenticate your requests (via a `ServiceAccount` for example) with a `ClusterRole` attached such as: + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: kube-state-metrics-read +rules: + - apiGroups: [ "" ] + resources: ["services/kube-state-metrics"] + verbs: + - get +``` + +See [kube-rbac-proxy examples](https://github.com/brancz/kube-rbac-proxy/tree/master/examples/resource-attributes) for more details. diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/NOTES.txt b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/NOTES.txt new file mode 100644 index 0000000000..3589c24ec3 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/NOTES.txt @@ -0,0 +1,23 @@ +kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. +The exposed metrics can be found here: +https://github.com/kubernetes/kube-state-metrics/blob/master/docs/README.md#exposed-metrics + +The metrics are exported on the HTTP endpoint /metrics on the listening port. +In your case, {{ template "kube-state-metrics.fullname" . }}.{{ template "kube-state-metrics.namespace" . }}.svc.cluster.local:{{ .Values.service.port }}/metrics + +They are served either as plaintext or protobuf depending on the Accept header. +They are designed to be consumed either by Prometheus itself or by a scraper that is compatible with scraping a Prometheus client endpoint. 
+ +{{- if .Values.kubeRBACProxy.enabled}} + +kube-rbac-proxy endpoint protection is enabled: +- Metrics endpoints are now HTTPS +- Ensure that the client authenticates the requests (e.g. via a service account) with the following role permissions: +``` +rules: +  - apiGroups: [ "" ] +    resources: ["services/{{ template "kube-state-metrics.fullname" . }}"] +    verbs: +      - get +``` +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/_helpers.tpl b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/_helpers.tpl new file mode 100644 index 0000000000..a4358c87a1 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/_helpers.tpl @@ -0,0 +1,156 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "kube-state-metrics.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "kube-state-metrics.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create the name of the service account to use +*/}} +{{- define "kube-state-metrics.serviceAccountName" -}} +{{- if .Values.serviceAccount.create -}} + {{ default (include "kube-state-metrics.fullname" .) 
.Values.serviceAccount.name }} +{{- else -}} + {{ default "default" .Values.serviceAccount.name }} +{{- end -}} +{{- end -}} + +{{/* +Allow the release namespace to be overridden for multi-namespace deployments in combined charts +*/}} +{{- define "kube-state-metrics.namespace" -}} + {{- if .Values.namespaceOverride -}} + {{- .Values.namespaceOverride -}} + {{- else -}} + {{- .Release.Namespace -}} + {{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "kube-state-metrics.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Generate basic labels +*/}} +{{- define "kube-state-metrics.labels" }} +helm.sh/chart: {{ template "kube-state-metrics.chart" . }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +app.kubernetes.io/component: metrics +app.kubernetes.io/part-of: {{ template "kube-state-metrics.name" . }} +{{- include "kube-state-metrics.selectorLabels" . }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +{{- if .Values.customLabels }} +{{ toYaml .Values.customLabels }} +{{- end }} +{{- if .Values.releaseLabel }} +release: {{ .Release.Name }} +{{- end }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "kube-state-metrics.selectorLabels" }} +{{- if .Values.selectorOverride }} +{{ toYaml .Values.selectorOverride }} +{{- else }} +app.kubernetes.io/name: {{ include "kube-state-metrics.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} +{{- end }} + +{{/* Sets default scrape limits for servicemonitor */}} +{{- define "servicemonitor.scrapeLimits" -}} +{{- with .sampleLimit }} +sampleLimit: {{ . }} +{{- end }} +{{- with .targetLimit }} +targetLimit: {{ . }} +{{- end }} +{{- with .labelLimit }} +labelLimit: {{ . }} +{{- end }} +{{- with .labelNameLengthLimit }} +labelNameLengthLimit: {{ . 
}} +{{- end }} +{{- with .labelValueLengthLimit }} +labelValueLengthLimit: {{ . }} +{{- end }} +{{- end -}} + +{{/* +Formats imagePullSecrets. Input is (dict "Values" .Values "imagePullSecrets" .{specific imagePullSecrets}) +*/}} +{{- define "kube-state-metrics.imagePullSecrets" -}} +{{- range (concat .Values.global.imagePullSecrets .imagePullSecrets) }} + {{- if eq (typeOf .) "map[string]interface {}" }} +- {{ toYaml . | trim }} + {{- else }} +- name: {{ . }} + {{- end }} +{{- end }} +{{- end -}} + +{{/* +The image to use for kube-state-metrics +*/}} +{{- define "kube-state-metrics.image" -}} +{{- if .Values.image.sha }} +{{- if .Values.global.imageRegistry }} +{{- printf "%s/%s:%s@%s" .Values.global.imageRegistry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) .Values.image.sha }} +{{- else }} +{{- printf "%s/%s:%s@%s" .Values.image.registry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) .Values.image.sha }} +{{- end }} +{{- else }} +{{- if .Values.global.imageRegistry }} +{{- printf "%s/%s:%s" .Values.global.imageRegistry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) }} +{{- else }} +{{- printf "%s/%s:%s" .Values.image.registry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +The image to use for kubeRBACProxy +*/}} +{{- define "kubeRBACProxy.image" -}} +{{- if .Values.kubeRBACProxy.image.sha }} +{{- if .Values.global.imageRegistry }} +{{- printf "%s/%s:%s@%s" .Values.global.imageRegistry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) .Values.kubeRBACProxy.image.sha }} +{{- else }} +{{- printf "%s/%s:%s@%s" .Values.kubeRBACProxy.image.registry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) .Values.kubeRBACProxy.image.sha }} +{{- end 
}} +{{- else }} +{{- if .Values.global.imageRegistry }} +{{- printf "%s/%s:%s" .Values.global.imageRegistry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) }} +{{- else }} +{{- printf "%s/%s:%s" .Values.kubeRBACProxy.image.registry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/ciliumnetworkpolicy.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/ciliumnetworkpolicy.yaml new file mode 100644 index 0000000000..025cd47a88 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/ciliumnetworkpolicy.yaml @@ -0,0 +1,33 @@ +{{- if and .Values.networkPolicy.enabled (eq .Values.networkPolicy.flavor "cilium") }} +apiVersion: cilium.io/v2 +kind: CiliumNetworkPolicy +metadata: + {{- if .Values.annotations }} + annotations: + {{ toYaml .Values.annotations | nindent 4 }} + {{- end }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} +spec: + endpointSelector: + matchLabels: + {{- include "kube-state-metrics.selectorLabels" . 
| indent 6 }} + egress: + {{- if and .Values.networkPolicy.cilium .Values.networkPolicy.cilium.kubeApiServerSelector }} + {{ toYaml .Values.networkPolicy.cilium.kubeApiServerSelector | nindent 6 }} + {{- else }} + - toEntities: + - kube-apiserver + {{- end }} + ingress: + - toPorts: + - ports: + - port: {{ .Values.service.port | quote }} + protocol: TCP + {{- if .Values.selfMonitor.enabled }} + - port: {{ .Values.selfMonitor.telemetryPort | default 8081 | quote }} + protocol: TCP + {{ end }} +{{ end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/clusterrolebinding.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/clusterrolebinding.yaml new file mode 100644 index 0000000000..cf9f628d04 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/clusterrolebinding.yaml @@ -0,0 +1,20 @@ +{{- if and .Values.rbac.create .Values.rbac.useClusterRole -}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: {{ template "kube-state-metrics.fullname" . }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole +{{- if .Values.rbac.useExistingRole }} + name: {{ .Values.rbac.useExistingRole }} +{{- else }} + name: {{ template "kube-state-metrics.fullname" . }} +{{- end }} +subjects: +- kind: ServiceAccount + name: {{ template "kube-state-metrics.serviceAccountName" . }} + namespace: {{ template "kube-state-metrics.namespace" . 
}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/crs-configmap.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/crs-configmap.yaml new file mode 100644 index 0000000000..d38a75a51d --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/crs-configmap.yaml @@ -0,0 +1,16 @@ +{{- if .Values.customResourceState.enabled}} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "kube-state-metrics.fullname" . }}-customresourcestate-config + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + {{- if .Values.annotations }} + annotations: + {{ toYaml .Values.annotations | nindent 4 }} + {{- end }} +data: + config.yaml: | + {{- toYaml .Values.customResourceState.config | nindent 4 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/deployment.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/deployment.yaml new file mode 100644 index 0000000000..5d32f82fa7 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/deployment.yaml @@ -0,0 +1,313 @@ +apiVersion: apps/v1 +{{- if .Values.autosharding.enabled }} +kind: StatefulSet +{{- else }} +kind: Deployment +{{- end }} +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + {{- if .Values.annotations }} + annotations: +{{ toYaml .Values.annotations | indent 4 }} + {{- end }} +spec: + selector: + matchLabels: + {{- include "kube-state-metrics.selectorLabels" . 
| indent 6 }} + replicas: {{ .Values.replicas }} + {{- if not .Values.autosharding.enabled }} + strategy: + type: {{ .Values.updateStrategy | default "RollingUpdate" }} + {{- end }} + revisionHistoryLimit: {{ .Values.revisionHistoryLimit }} + {{- if .Values.autosharding.enabled }} + serviceName: {{ template "kube-state-metrics.fullname" . }} + volumeClaimTemplates: [] + {{- end }} + template: + metadata: + labels: + {{- include "kube-state-metrics.labels" . | indent 8 }} + {{- if .Values.podAnnotations }} + annotations: +{{ toYaml .Values.podAnnotations | indent 8 }} + {{- end }} + spec: + automountServiceAccountToken: {{ .Values.automountServiceAccountToken }} + hostNetwork: {{ .Values.hostNetwork }} + serviceAccountName: {{ template "kube-state-metrics.serviceAccountName" . }} + {{- if .Values.securityContext.enabled }} + securityContext: {{- omit .Values.securityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.priorityClassName }} + priorityClassName: {{ .Values.priorityClassName }} + {{- end }} + {{- with .Values.initContainers }} + initContainers: + {{- toYaml . | nindent 6 }} + {{- end }} + containers: + {{- $servicePort := ternary 9090 (.Values.service.port | default 8080) .Values.kubeRBACProxy.enabled}} + {{- $telemetryPort := ternary 9091 (.Values.selfMonitor.telemetryPort | default 8081) .Values.kubeRBACProxy.enabled}} + - name: {{ template "kube-state-metrics.name" . 
}} + {{- if .Values.autosharding.enabled }} + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + {{- end }} + args: + {{- if .Values.extraArgs }} + {{- .Values.extraArgs | toYaml | nindent 8 }} + {{- end }} + - --port={{ $servicePort }} + {{- if .Values.collectors }} + - --resources={{ .Values.collectors | join "," }} + {{- end }} + {{- if .Values.metricLabelsAllowlist }} + - --metric-labels-allowlist={{ .Values.metricLabelsAllowlist | join "," }} + {{- end }} + {{- if .Values.metricAnnotationsAllowList }} + - --metric-annotations-allowlist={{ .Values.metricAnnotationsAllowList | join "," }} + {{- end }} + {{- if .Values.metricAllowlist }} + - --metric-allowlist={{ .Values.metricAllowlist | join "," }} + {{- end }} + {{- if .Values.metricDenylist }} + - --metric-denylist={{ .Values.metricDenylist | join "," }} + {{- end }} + {{- $namespaces := list }} + {{- if .Values.namespaces }} + {{- range $ns := join "," .Values.namespaces | split "," }} + {{- $namespaces = append $namespaces (tpl $ns $) }} + {{- end }} + {{- end }} + {{- if .Values.releaseNamespace }} + {{- $namespaces = append $namespaces ( include "kube-state-metrics.namespace" . 
) }} + {{- end }} + {{- if $namespaces }} + - --namespaces={{ $namespaces | mustUniq | join "," }} + {{- end }} + {{- if .Values.namespacesDenylist }} + - --namespaces-denylist={{ tpl (.Values.namespacesDenylist | join ",") $ }} + {{- end }} + {{- if .Values.autosharding.enabled }} + - --pod=$(POD_NAME) + - --pod-namespace=$(POD_NAMESPACE) + {{- end }} + {{- if .Values.kubeconfig.enabled }} + - --kubeconfig=/opt/k8s/.kube/config + {{- end }} + {{- if .Values.kubeRBACProxy.enabled }} + - --telemetry-host=127.0.0.1 + - --telemetry-port={{ $telemetryPort }} + {{- else }} + {{- if .Values.selfMonitor.telemetryHost }} + - --telemetry-host={{ .Values.selfMonitor.telemetryHost }} + {{- end }} + {{- if .Values.selfMonitor.telemetryPort }} + - --telemetry-port={{ $telemetryPort }} + {{- end }} + {{- end }} + {{- if .Values.customResourceState.enabled }} + - --custom-resource-state-config-file=/etc/customresourcestate/config.yaml + {{- end }} + {{- if or (.Values.kubeconfig.enabled) (.Values.customResourceState.enabled) (.Values.volumeMounts) }} + volumeMounts: + {{- if .Values.kubeconfig.enabled }} + - name: kubeconfig + mountPath: /opt/k8s/.kube/ + readOnly: true + {{- end }} + {{- if .Values.customResourceState.enabled }} + - name: customresourcestate-config + mountPath: /etc/customresourcestate + readOnly: true + {{- end }} + {{- if .Values.volumeMounts }} +{{ toYaml .Values.volumeMounts | indent 8 }} + {{- end }} + {{- end }} + imagePullPolicy: {{ .Values.image.pullPolicy }} + image: {{ include "kube-state-metrics.image" . 
}} + {{- if eq .Values.kubeRBACProxy.enabled false }} + ports: + - containerPort: {{ .Values.service.port | default 8080}} + name: "http" + {{- if .Values.selfMonitor.enabled }} + - containerPort: {{ $telemetryPort }} + name: "metrics" + {{- end }} + {{- end }} + livenessProbe: + failureThreshold: {{ .Values.livenessProbe.failureThreshold }} + httpGet: + {{- if .Values.hostNetwork }} + host: 127.0.0.1 + {{- end }} + httpHeaders: + {{- range $_, $header := .Values.livenessProbe.httpGet.httpHeaders }} + - name: {{ $header.name }} + value: {{ $header.value }} + {{- end }} + path: /healthz + port: {{ $servicePort }} + scheme: {{ upper .Values.livenessProbe.httpGet.scheme }} + initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.livenessProbe.periodSeconds }} + successThreshold: {{ .Values.livenessProbe.successThreshold }} + timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} + readinessProbe: + failureThreshold: {{ .Values.readinessProbe.failureThreshold }} + httpGet: + {{- if .Values.hostNetwork }} + host: 127.0.0.1 + {{- end }} + httpHeaders: + {{- range $_, $header := .Values.readinessProbe.httpGet.httpHeaders }} + - name: {{ $header.name }} + value: {{ $header.value }} + {{- end }} + path: / + port: {{ $servicePort }} + scheme: {{ upper .Values.readinessProbe.httpGet.scheme }} + initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.readinessProbe.periodSeconds }} + successThreshold: {{ .Values.readinessProbe.successThreshold }} + timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} + resources: +{{ toYaml .Values.resources | indent 10 }} +{{- if .Values.containerSecurityContext }} + securityContext: +{{ toYaml .Values.containerSecurityContext | indent 10 }} +{{- end }} + {{- if .Values.kubeRBACProxy.enabled }} + - name: kube-rbac-proxy-http + args: + {{- if .Values.kubeRBACProxy.extraArgs }} + {{- .Values.kubeRBACProxy.extraArgs | toYaml | nindent 8 }} + {{- end }} 
+ - --secure-listen-address=:{{ .Values.service.port | default 8080}} + - --upstream=http://127.0.0.1:{{ $servicePort }}/ + - --proxy-endpoints-port=8888 + - --config-file=/etc/kube-rbac-proxy-config/config-file.yaml + volumeMounts: + - name: kube-rbac-proxy-config + mountPath: /etc/kube-rbac-proxy-config + {{- with .Values.kubeRBACProxy.volumeMounts }} + {{- toYaml . | nindent 10 }} + {{- end }} + imagePullPolicy: {{ .Values.kubeRBACProxy.image.pullPolicy }} + image: {{ include "kubeRBACProxy.image" . }} + ports: + - containerPort: {{ .Values.service.port | default 8080}} + name: "http" + - containerPort: 8888 + name: "http-healthz" + readinessProbe: + httpGet: + scheme: HTTPS + port: 8888 + path: healthz + initialDelaySeconds: 5 + timeoutSeconds: 5 + {{- if .Values.kubeRBACProxy.resources }} + resources: +{{ toYaml .Values.kubeRBACProxy.resources | indent 10 }} +{{- end }} +{{- if .Values.kubeRBACProxy.containerSecurityContext }} + securityContext: +{{ toYaml .Values.kubeRBACProxy.containerSecurityContext | indent 10 }} +{{- end }} + {{- if .Values.selfMonitor.enabled }} + - name: kube-rbac-proxy-telemetry + args: + {{- if .Values.kubeRBACProxy.extraArgs }} + {{- .Values.kubeRBACProxy.extraArgs | toYaml | nindent 8 }} + {{- end }} + - --secure-listen-address=:{{ .Values.selfMonitor.telemetryPort | default 8081 }} + - --upstream=http://127.0.0.1:{{ $telemetryPort }}/ + - --proxy-endpoints-port=8889 + - --config-file=/etc/kube-rbac-proxy-config/config-file.yaml + volumeMounts: + - name: kube-rbac-proxy-config + mountPath: /etc/kube-rbac-proxy-config + {{- with .Values.kubeRBACProxy.volumeMounts }} + {{- toYaml . | nindent 10 }} + {{- end }} + imagePullPolicy: {{ .Values.kubeRBACProxy.image.pullPolicy }} + image: {{ include "kubeRBACProxy.image" . 
}} + ports: + - containerPort: {{ .Values.selfMonitor.telemetryPort | default 8081 }} + name: "metrics" + - containerPort: 8889 + name: "metrics-healthz" + readinessProbe: + httpGet: + scheme: HTTPS + port: 8889 + path: healthz + initialDelaySeconds: 5 + timeoutSeconds: 5 + {{- if .Values.kubeRBACProxy.resources }} + resources: +{{ toYaml .Values.kubeRBACProxy.resources | indent 10 }} +{{- end }} +{{- if .Values.kubeRBACProxy.containerSecurityContext }} + securityContext: +{{ toYaml .Values.kubeRBACProxy.containerSecurityContext | indent 10 }} +{{- end }} + {{- end }} + {{- end }} + {{- with .Values.containers }} + {{- toYaml . | nindent 6 }} + {{- end }} +{{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }} + imagePullSecrets: + {{- include "kube-state-metrics.imagePullSecrets" (dict "Values" .Values "imagePullSecrets" .Values.imagePullSecrets) | indent 8 }} + {{- end }} + {{- if .Values.affinity }} + affinity: +{{ toYaml .Values.affinity | indent 8 }} + {{- end }} + {{- if .Values.nodeSelector }} + nodeSelector: +{{ toYaml .Values.nodeSelector | indent 8 }} + {{- end }} + {{- if .Values.tolerations }} + tolerations: +{{ toYaml .Values.tolerations | indent 8 }} + {{- end }} + {{- if .Values.topologySpreadConstraints }} + topologySpreadConstraints: +{{ toYaml .Values.topologySpreadConstraints | indent 8 }} + {{- end }} + {{- if or (.Values.kubeconfig.enabled) (.Values.customResourceState.enabled) (.Values.volumes) (.Values.kubeRBACProxy.enabled) }} + volumes: + {{- if .Values.kubeconfig.enabled}} + - name: kubeconfig + secret: + secretName: {{ template "kube-state-metrics.fullname" . }}-kubeconfig + {{- end }} + {{- if .Values.kubeRBACProxy.enabled}} + - name: kube-rbac-proxy-config + configMap: + name: {{ template "kube-state-metrics.fullname" . }}-rbac-config + {{- end }} + {{- if .Values.customResourceState.enabled}} + - name: customresourcestate-config + configMap: + name: {{ template "kube-state-metrics.fullname" . 
}}-customresourcestate-config + {{- end }} + {{- if .Values.volumes }} +{{ toYaml .Values.volumes | indent 8 }} + {{- end }} + {{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/extra-manifests.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/extra-manifests.yaml new file mode 100644 index 0000000000..567f7bf329 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/extra-manifests.yaml @@ -0,0 +1,4 @@ +{{ range .Values.extraManifests }} +--- +{{ tpl (toYaml .) $ }} +{{ end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/kubeconfig-secret.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/kubeconfig-secret.yaml new file mode 100644 index 0000000000..6af0084502 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/kubeconfig-secret.yaml @@ -0,0 +1,12 @@ +{{- if .Values.kubeconfig.enabled -}} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "kube-state-metrics.fullname" . }}-kubeconfig + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . 
| indent 4 }} +type: Opaque +data: + config: '{{ .Values.kubeconfig.secret }}' +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/networkpolicy.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/networkpolicy.yaml new file mode 100644 index 0000000000..309b38ec54 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/networkpolicy.yaml @@ -0,0 +1,43 @@ +{{- if and .Values.networkPolicy.enabled (eq .Values.networkPolicy.flavor "kubernetes") }} +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + {{- if .Values.annotations }} + annotations: + {{ toYaml .Values.annotations | nindent 4 }} + {{- end }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} +spec: + {{- if .Values.networkPolicy.egress }} + ## Deny all egress by default + egress: + {{- toYaml .Values.networkPolicy.egress | nindent 4 }} + {{- end }} + ingress: + {{- if .Values.networkPolicy.ingress }} + {{- toYaml .Values.networkPolicy.ingress | nindent 4 }} + {{- else }} + ## Allow ingress on default ports by default + - ports: + - port: {{ .Values.service.port | default 8080 }} + protocol: TCP + {{- if .Values.selfMonitor.enabled }} + {{- $telemetryPort := ternary 9091 (.Values.selfMonitor.telemetryPort | default 8081) .Values.kubeRBACProxy.enabled}} + - port: {{ $telemetryPort }} + protocol: TCP + {{- end }} + {{- end }} + podSelector: + {{- if .Values.networkPolicy.podSelector }} + {{- toYaml .Values.networkPolicy.podSelector | nindent 4 }} + {{- else }} + matchLabels: + {{- include "kube-state-metrics.selectorLabels" . 
| indent 6 }} + {{- end }} + policyTypes: + - Ingress + - Egress +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/pdb.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/pdb.yaml new file mode 100644 index 0000000000..3771b511de --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/pdb.yaml @@ -0,0 +1,18 @@ +{{- if .Values.podDisruptionBudget -}} +{{ if $.Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" -}} +apiVersion: policy/v1 +{{- else -}} +apiVersion: policy/v1beta1 +{{- end }} +kind: PodDisruptionBudget +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} +spec: + selector: + matchLabels: + app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }} +{{ toYaml .Values.podDisruptionBudget | indent 2 }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/podsecuritypolicy.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/podsecuritypolicy.yaml new file mode 100644 index 0000000000..8905e113e8 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/podsecuritypolicy.yaml @@ -0,0 +1,39 @@ +{{- if and .Values.podSecurityPolicy.enabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + labels: + {{- include "kube-state-metrics.labels" . 
| indent 4 }} +{{- if .Values.podSecurityPolicy.annotations }} + annotations: +{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }} +{{- end }} +spec: + privileged: false + volumes: + - 'secret' +{{- if .Values.podSecurityPolicy.additionalVolumes }} +{{ toYaml .Values.podSecurityPolicy.additionalVolumes | indent 4 }} +{{- end }} + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + rule: 'MustRunAsNonRoot' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/psp-clusterrole.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/psp-clusterrole.yaml new file mode 100644 index 0000000000..654e4a3d57 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/psp-clusterrole.yaml @@ -0,0 +1,19 @@ +{{- if and .Values.podSecurityPolicy.enabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: psp-{{ template "kube-state-metrics.fullname" . }} +rules: +{{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }} +{{- if semverCompare "> 1.15.0-0" $kubeTargetVersion }} +- apiGroups: ['policy'] +{{- else }} +- apiGroups: ['extensions'] +{{- end }} + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - {{ template "kube-state-metrics.fullname" . 
}} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/psp-clusterrolebinding.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/psp-clusterrolebinding.yaml new file mode 100644 index 0000000000..5b62a18bdf --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/psp-clusterrolebinding.yaml @@ -0,0 +1,16 @@ +{{- if and .Values.podSecurityPolicy.enabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: psp-{{ template "kube-state-metrics.fullname" . }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp-{{ template "kube-state-metrics.fullname" . }} +subjects: + - kind: ServiceAccount + name: {{ template "kube-state-metrics.serviceAccountName" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/rbac-configmap.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/rbac-configmap.yaml new file mode 100644 index 0000000000..671dc9d660 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/rbac-configmap.yaml @@ -0,0 +1,22 @@ +{{- if .Values.kubeRBACProxy.enabled}} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "kube-state-metrics.fullname" . }}-rbac-config + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + {{- if .Values.annotations }} + annotations: + {{ toYaml .Values.annotations | nindent 4 }} + {{- end }} +data: + config-file.yaml: |+ + authorization: + resourceAttributes: + namespace: {{ template "kube-state-metrics.namespace" . 
}} + apiVersion: v1 + resource: services + subresource: {{ template "kube-state-metrics.fullname" . }} + name: {{ template "kube-state-metrics.fullname" . }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/role.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/role.yaml new file mode 100644 index 0000000000..d33687f2d1 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/role.yaml @@ -0,0 +1,212 @@ +{{- if and (eq .Values.rbac.create true) (not .Values.rbac.useExistingRole) -}} +{{- range (ternary (join "," .Values.namespaces | split "," ) (list "") (eq $.Values.rbac.useClusterRole false)) }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +{{- if eq $.Values.rbac.useClusterRole false }} +kind: Role +{{- else }} +kind: ClusterRole +{{- end }} +metadata: + labels: + {{- include "kube-state-metrics.labels" $ | indent 4 }} + name: {{ template "kube-state-metrics.fullname" $ }} +{{- if eq $.Values.rbac.useClusterRole false }} + namespace: {{ . 
}} +{{- end }} +rules: +{{ if has "certificatesigningrequests" $.Values.collectors }} +- apiGroups: ["certificates.k8s.io"] + resources: + - certificatesigningrequests + verbs: ["list", "watch"] +{{ end -}} +{{ if has "configmaps" $.Values.collectors }} +- apiGroups: [""] + resources: + - configmaps + verbs: ["list", "watch"] +{{ end -}} +{{ if has "cronjobs" $.Values.collectors }} +- apiGroups: ["batch"] + resources: + - cronjobs + verbs: ["list", "watch"] +{{ end -}} +{{ if has "daemonsets" $.Values.collectors }} +- apiGroups: ["extensions", "apps"] + resources: + - daemonsets + verbs: ["list", "watch"] +{{ end -}} +{{ if has "deployments" $.Values.collectors }} +- apiGroups: ["extensions", "apps"] + resources: + - deployments + verbs: ["list", "watch"] +{{ end -}} +{{ if has "endpoints" $.Values.collectors }} +- apiGroups: [""] + resources: + - endpoints + verbs: ["list", "watch"] +{{ end -}} +{{ if has "endpointslices" $.Values.collectors }} +- apiGroups: ["discovery.k8s.io"] + resources: + - endpointslices + verbs: ["list", "watch"] +{{ end -}} +{{ if has "horizontalpodautoscalers" $.Values.collectors }} +- apiGroups: ["autoscaling"] + resources: + - horizontalpodautoscalers + verbs: ["list", "watch"] +{{ end -}} +{{ if has "ingresses" $.Values.collectors }} +- apiGroups: ["extensions", "networking.k8s.io"] + resources: + - ingresses + verbs: ["list", "watch"] +{{ end -}} +{{ if has "jobs" $.Values.collectors }} +- apiGroups: ["batch"] + resources: + - jobs + verbs: ["list", "watch"] +{{ end -}} +{{ if has "leases" $.Values.collectors }} +- apiGroups: ["coordination.k8s.io"] + resources: + - leases + verbs: ["list", "watch"] +{{ end -}} +{{ if has "limitranges" $.Values.collectors }} +- apiGroups: [""] + resources: + - limitranges + verbs: ["list", "watch"] +{{ end -}} +{{ if has "mutatingwebhookconfigurations" $.Values.collectors }} +- apiGroups: ["admissionregistration.k8s.io"] + resources: + - mutatingwebhookconfigurations + verbs: ["list", "watch"] +{{ end 
-}} +{{ if has "namespaces" $.Values.collectors }} +- apiGroups: [""] + resources: + - namespaces + verbs: ["list", "watch"] +{{ end -}} +{{ if has "networkpolicies" $.Values.collectors }} +- apiGroups: ["networking.k8s.io"] + resources: + - networkpolicies + verbs: ["list", "watch"] +{{ end -}} +{{ if has "nodes" $.Values.collectors }} +- apiGroups: [""] + resources: + - nodes + verbs: ["list", "watch"] +{{ end -}} +{{ if has "persistentvolumeclaims" $.Values.collectors }} +- apiGroups: [""] + resources: + - persistentvolumeclaims + verbs: ["list", "watch"] +{{ end -}} +{{ if has "persistentvolumes" $.Values.collectors }} +- apiGroups: [""] + resources: + - persistentvolumes + verbs: ["list", "watch"] +{{ end -}} +{{ if has "poddisruptionbudgets" $.Values.collectors }} +- apiGroups: ["policy"] + resources: + - poddisruptionbudgets + verbs: ["list", "watch"] +{{ end -}} +{{ if has "pods" $.Values.collectors }} +- apiGroups: [""] + resources: + - pods + verbs: ["list", "watch"] +{{ end -}} +{{ if has "replicasets" $.Values.collectors }} +- apiGroups: ["extensions", "apps"] + resources: + - replicasets + verbs: ["list", "watch"] +{{ end -}} +{{ if has "replicationcontrollers" $.Values.collectors }} +- apiGroups: [""] + resources: + - replicationcontrollers + verbs: ["list", "watch"] +{{ end -}} +{{ if has "resourcequotas" $.Values.collectors }} +- apiGroups: [""] + resources: + - resourcequotas + verbs: ["list", "watch"] +{{ end -}} +{{ if has "secrets" $.Values.collectors }} +- apiGroups: [""] + resources: + - secrets + verbs: ["list", "watch"] +{{ end -}} +{{ if has "services" $.Values.collectors }} +- apiGroups: [""] + resources: + - services + verbs: ["list", "watch"] +{{ end -}} +{{ if has "statefulsets" $.Values.collectors }} +- apiGroups: ["apps"] + resources: + - statefulsets + verbs: ["list", "watch"] +{{ end -}} +{{ if has "storageclasses" $.Values.collectors }} +- apiGroups: ["storage.k8s.io"] + resources: + - storageclasses + verbs: ["list", "watch"] +{{ 
end -}} +{{ if has "validatingwebhookconfigurations" $.Values.collectors }} +- apiGroups: ["admissionregistration.k8s.io"] + resources: + - validatingwebhookconfigurations + verbs: ["list", "watch"] +{{ end -}} +{{ if has "volumeattachments" $.Values.collectors }} +- apiGroups: ["storage.k8s.io"] + resources: + - volumeattachments + verbs: ["list", "watch"] +{{ end -}} +{{- if $.Values.kubeRBACProxy.enabled }} +- apiGroups: ["authentication.k8s.io"] + resources: + - tokenreviews + verbs: ["create"] +- apiGroups: ["authorization.k8s.io"] + resources: + - subjectaccessreviews + verbs: ["create"] +{{- end }} +{{- if $.Values.customResourceState.enabled }} +- apiGroups: ["apiextensions.k8s.io"] + resources: + - customresourcedefinitions + verbs: ["list", "watch"] +{{- end }} +{{ if $.Values.rbac.extraRules }} +{{ toYaml $.Values.rbac.extraRules }} +{{ end }} +{{- end -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/rolebinding.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/rolebinding.yaml new file mode 100644 index 0000000000..330651b73f --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/rolebinding.yaml @@ -0,0 +1,24 @@ +{{- if and (eq .Values.rbac.create true) (eq .Values.rbac.useClusterRole false) -}} +{{- range (join "," $.Values.namespaces) | split "," }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + labels: + {{- include "kube-state-metrics.labels" $ | indent 4 }} + name: {{ template "kube-state-metrics.fullname" $ }} + namespace: {{ . 
}} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role +{{- if (not $.Values.rbac.useExistingRole) }} + name: {{ template "kube-state-metrics.fullname" $ }} +{{- else }} + name: {{ $.Values.rbac.useExistingRole }} +{{- end }} +subjects: +- kind: ServiceAccount + name: {{ template "kube-state-metrics.serviceAccountName" $ }} + namespace: {{ template "kube-state-metrics.namespace" $ }} +{{- end -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/service.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/service.yaml new file mode 100644 index 0000000000..90c235148f --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/service.yaml @@ -0,0 +1,53 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . 
| indent 4 }} + annotations: + {{- if .Values.prometheusScrape }} + prometheus.io/scrape: '{{ .Values.prometheusScrape }}' + {{- end }} + {{- if .Values.service.annotations }} + {{- toYaml .Values.service.annotations | nindent 4 }} + {{- end }} +spec: + type: "{{ .Values.service.type }}" + {{- if .Values.service.ipDualStack.enabled }} + ipFamilies: {{ toYaml .Values.service.ipDualStack.ipFamilies | nindent 4 }} + ipFamilyPolicy: {{ .Values.service.ipDualStack.ipFamilyPolicy }} + {{- end }} + ports: + - name: "http" + protocol: TCP + port: {{ .Values.service.port | default 8080}} + {{- if .Values.service.nodePort }} + nodePort: {{ .Values.service.nodePort }} + {{- end }} + targetPort: {{ .Values.service.port | default 8080}} + {{ if .Values.selfMonitor.enabled }} + - name: "metrics" + protocol: TCP + port: {{ .Values.selfMonitor.telemetryPort | default 8081 }} + targetPort: {{ .Values.selfMonitor.telemetryPort | default 8081 }} + {{- if .Values.selfMonitor.telemetryNodePort }} + nodePort: {{ .Values.selfMonitor.telemetryNodePort }} + {{- end }} + {{ end }} +{{- if .Values.service.loadBalancerIP }} + loadBalancerIP: "{{ .Values.service.loadBalancerIP }}" +{{- end }} +{{- if .Values.service.loadBalancerSourceRanges }} + loadBalancerSourceRanges: + {{- range $cidr := .Values.service.loadBalancerSourceRanges }} + - {{ $cidr }} + {{- end }} +{{- end }} +{{- if .Values.autosharding.enabled }} + clusterIP: None +{{- else if .Values.service.clusterIP }} + clusterIP: "{{ .Values.service.clusterIP }}" +{{- end }} + selector: + {{- include "kube-state-metrics.selectorLabels" . 
| indent 4 }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/serviceaccount.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/serviceaccount.yaml new file mode 100644 index 0000000000..c302bc7ca0 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/serviceaccount.yaml @@ -0,0 +1,18 @@ +{{- if .Values.serviceAccount.create -}} +apiVersion: v1 +kind: ServiceAccount +automountServiceAccountToken: {{ .Values.serviceAccount.automountServiceAccountToken }} +metadata: + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: {{ template "kube-state-metrics.serviceAccountName" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} +{{- if .Values.serviceAccount.annotations }} + annotations: +{{ toYaml .Values.serviceAccount.annotations | indent 4 }} +{{- end }} +{{- if or .Values.serviceAccount.imagePullSecrets .Values.global.imagePullSecrets }} +imagePullSecrets: + {{- include "kube-state-metrics.imagePullSecrets" (dict "Values" .Values "imagePullSecrets" .Values.serviceAccount.imagePullSecrets) | indent 2 }} +{{- end }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/servicemonitor.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/servicemonitor.yaml new file mode 100644 index 0000000000..99d7fa9246 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/servicemonitor.yaml @@ -0,0 +1,120 @@ +{{- if .Values.prometheus.monitor.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + {{- with .Values.prometheus.monitor.additionalLabels }} + {{- tpl (toYaml . 
| nindent 4) $ }} + {{- end }} + {{- with .Values.prometheus.monitor.annotations }} + annotations: + {{- tpl (toYaml . | nindent 4) $ }} + {{- end }} +spec: + jobLabel: {{ default "app.kubernetes.io/name" .Values.prometheus.monitor.jobLabel }} + {{- with .Values.prometheus.monitor.targetLabels }} + targetLabels: + {{- toYaml . | trim | nindent 4 }} + {{- end }} + {{- with .Values.prometheus.monitor.podTargetLabels }} + podTargetLabels: + {{- toYaml . | trim | nindent 4 }} + {{- end }} + {{- include "servicemonitor.scrapeLimits" .Values.prometheus.monitor | indent 2 }} + {{- if .Values.prometheus.monitor.namespaceSelector }} + namespaceSelector: + matchNames: + {{- with .Values.prometheus.monitor.namespaceSelector }} + {{- toYaml . | nindent 6 }} + {{- end }} + {{- end }} + selector: + matchLabels: + {{- with .Values.prometheus.monitor.selectorOverride }} + {{- toYaml . | nindent 6 }} + {{- else }} + {{- include "kube-state-metrics.selectorLabels" . | indent 6 }} + {{- end }} + endpoints: + - port: http + {{- if or .Values.prometheus.monitor.http.interval .Values.prometheus.monitor.interval }} + interval: {{ .Values.prometheus.monitor.http.interval | default .Values.prometheus.monitor.interval }} + {{- end }} + {{- if or .Values.prometheus.monitor.http.scrapeTimeout .Values.prometheus.monitor.scrapeTimeout }} + scrapeTimeout: {{ .Values.prometheus.monitor.http.scrapeTimeout | default .Values.prometheus.monitor.scrapeTimeout }} + {{- end }} + {{- if or .Values.prometheus.monitor.http.proxyUrl .Values.prometheus.monitor.proxyUrl }} + proxyUrl: {{ .Values.prometheus.monitor.http.proxyUrl | default .Values.prometheus.monitor.proxyUrl }} + {{- end }} + {{- if or .Values.prometheus.monitor.http.enableHttp2 .Values.prometheus.monitor.enableHttp2 }} + enableHttp2: {{ .Values.prometheus.monitor.http.enableHttp2 | default .Values.prometheus.monitor.enableHttp2 }} + {{- end }} + {{- if or .Values.prometheus.monitor.http.honorLabels .Values.prometheus.monitor.honorLabels }} + 
honorLabels: true + {{- end }} + {{- if or .Values.prometheus.monitor.http.metricRelabelings .Values.prometheus.monitor.metricRelabelings }} + metricRelabelings: + {{- toYaml (.Values.prometheus.monitor.http.metricRelabelings | default .Values.prometheus.monitor.metricRelabelings) | nindent 8 }} + {{- end }} + {{- if or .Values.prometheus.monitor.http.relabelings .Values.prometheus.monitor.relabelings }} + relabelings: + {{- toYaml (.Values.prometheus.monitor.http.relabelings | default .Values.prometheus.monitor.relabelings) | nindent 8 }} + {{- end }} + {{- if or .Values.prometheus.monitor.http.scheme .Values.prometheus.monitor.scheme }} + scheme: {{ .Values.prometheus.monitor.http.scheme | default .Values.prometheus.monitor.scheme }} + {{- end }} + {{- if or .Values.prometheus.monitor.http.tlsConfig .Values.prometheus.monitor.tlsConfig }} + tlsConfig: + {{- toYaml (.Values.prometheus.monitor.http.tlsConfig | default .Values.prometheus.monitor.tlsConfig) | nindent 8 }} + {{- end }} + {{- if or .Values.prometheus.monitor.http.bearerTokenFile .Values.prometheus.monitor.bearerTokenFile }} + bearerTokenFile: {{ .Values.prometheus.monitor.http.bearerTokenFile | default .Values.prometheus.monitor.bearerTokenFile }} + {{- end }} + {{- with (.Values.prometheus.monitor.http.bearerTokenSecret | default .Values.prometheus.monitor.bearerTokenSecret) }} + bearerTokenSecret: + {{- toYaml . 
| nindent 8 }} + {{- end }} + {{- if .Values.selfMonitor.enabled }} + - port: metrics + {{- if or .Values.prometheus.monitor.metrics.interval .Values.prometheus.monitor.interval }} + interval: {{ .Values.prometheus.monitor.metrics.interval | default .Values.prometheus.monitor.interval }} + {{- end }} + {{- if or .Values.prometheus.monitor.metrics.scrapeTimeout .Values.prometheus.monitor.scrapeTimeout }} + scrapeTimeout: {{ .Values.prometheus.monitor.metrics.scrapeTimeout | default .Values.prometheus.monitor.scrapeTimeout }} + {{- end }} + {{- if or .Values.prometheus.monitor.metrics.proxyUrl .Values.prometheus.monitor.proxyUrl }} + proxyUrl: {{ .Values.prometheus.monitor.metrics.proxyUrl | default .Values.prometheus.monitor.proxyUrl }} + {{- end }} + {{- if or .Values.prometheus.monitor.metrics.enableHttp2 .Values.prometheus.monitor.enableHttp2 }} + enableHttp2: {{ .Values.prometheus.monitor.metrics.enableHttp2 | default .Values.prometheus.monitor.enableHttp2 }} + {{- end }} + {{- if or .Values.prometheus.monitor.metrics.honorLabels .Values.prometheus.monitor.honorLabels }} + honorLabels: true + {{- end }} + {{- if or .Values.prometheus.monitor.metrics.metricRelabelings .Values.prometheus.monitor.metricRelabelings }} + metricRelabelings: + {{- toYaml (.Values.prometheus.monitor.metrics.metricRelabelings | default .Values.prometheus.monitor.metricRelabelings) | nindent 8 }} + {{- end }} + {{- if or .Values.prometheus.monitor.metrics.relabelings .Values.prometheus.monitor.relabelings }} + relabelings: + {{- toYaml (.Values.prometheus.monitor.metrics.relabelings | default .Values.prometheus.monitor.relabelings) | nindent 8 }} + {{- end }} + {{- if or .Values.prometheus.monitor.metrics.scheme .Values.prometheus.monitor.scheme }} + scheme: {{ .Values.prometheus.monitor.metrics.scheme | default .Values.prometheus.monitor.scheme }} + {{- end }} + {{- if or .Values.prometheus.monitor.metrics.tlsConfig .Values.prometheus.monitor.tlsConfig }} + tlsConfig: + {{- toYaml 
(.Values.prometheus.monitor.metrics.tlsConfig | default .Values.prometheus.monitor.tlsConfig) | nindent 8 }} + {{- end }} + {{- if or .Values.prometheus.monitor.metrics.bearerTokenFile .Values.prometheus.monitor.bearerTokenFile }} + bearerTokenFile: {{ .Values.prometheus.monitor.metrics.bearerTokenFile | default .Values.prometheus.monitor.bearerTokenFile }} + {{- end }} + {{- with (.Values.prometheus.monitor.metrics.bearerTokenSecret | default .Values.prometheus.monitor.bearerTokenSecret) }} + bearerTokenSecret: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/stsdiscovery-role.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/stsdiscovery-role.yaml new file mode 100644 index 0000000000..489de147c1 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/stsdiscovery-role.yaml @@ -0,0 +1,26 @@ +{{- if and .Values.autosharding.enabled .Values.rbac.create -}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: stsdiscovery-{{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} +rules: +- apiGroups: + - "" + resources: + - pods + verbs: + - get +- apiGroups: + - apps + resourceNames: + - {{ template "kube-state-metrics.fullname" . 
}} + resources: + - statefulsets + verbs: + - get + - list + - watch +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/stsdiscovery-rolebinding.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/stsdiscovery-rolebinding.yaml new file mode 100644 index 0000000000..73b37a4f64 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/stsdiscovery-rolebinding.yaml @@ -0,0 +1,17 @@ +{{- if and .Values.autosharding.enabled .Values.rbac.create -}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: stsdiscovery-{{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: stsdiscovery-{{ template "kube-state-metrics.fullname" . }} +subjects: + - kind: ServiceAccount + name: {{ template "kube-state-metrics.serviceAccountName" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/verticalpodautoscaler.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/verticalpodautoscaler.yaml new file mode 100644 index 0000000000..f46305b517 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/templates/verticalpodautoscaler.yaml @@ -0,0 +1,44 @@ +{{- if and (.Capabilities.APIVersions.Has "autoscaling.k8s.io/v1") (.Values.verticalPodAutoscaler.enabled) }} +apiVersion: autoscaling.k8s.io/v1 +kind: VerticalPodAutoscaler +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . 
| indent 4 }} +spec: + {{- with .Values.verticalPodAutoscaler.recommenders }} + recommenders: + {{- toYaml . | nindent 4 }} + {{- end }} + resourcePolicy: + containerPolicies: + - containerName: {{ template "kube-state-metrics.name" . }} + {{- with .Values.verticalPodAutoscaler.controlledResources }} + controlledResources: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- if .Values.verticalPodAutoscaler.controlledValues }} + controlledValues: {{ .Values.verticalPodAutoscaler.controlledValues }} + {{- end }} + {{- if .Values.verticalPodAutoscaler.maxAllowed }} + maxAllowed: + {{ toYaml .Values.verticalPodAutoscaler.maxAllowed | nindent 8 }} + {{- end }} + {{- if .Values.verticalPodAutoscaler.minAllowed }} + minAllowed: + {{ toYaml .Values.verticalPodAutoscaler.minAllowed | nindent 8 }} + {{- end }} + targetRef: + apiVersion: apps/v1 + {{- if .Values.autosharding.enabled }} + kind: StatefulSet + {{- else }} + kind: Deployment + {{- end }} + name: {{ template "kube-state-metrics.fullname" . }} + {{- with .Values.verticalPodAutoscaler.updatePolicy }} + updatePolicy: + {{- toYaml . | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/values.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/values.yaml new file mode 100644 index 0000000000..2a9781138a --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/kube-state-metrics/values.yaml @@ -0,0 +1,522 @@ +# Default values for kube-state-metrics. +prometheusScrape: true +image: + registry: registry.k8s.io + repository: kube-state-metrics/kube-state-metrics + # If unset use v + .Charts.appVersion + tag: "" + sha: "" + pullPolicy: IfNotPresent + +imagePullSecrets: [] +# - name: "image-pull-secret" + +global: + # To help compatibility with other charts which use global.imagePullSecrets. + # Allow either an array of {name: pullSecret} maps (k8s-style), or an array of strings (more common helm-style). 
+ # global: + # imagePullSecrets: + # - name: pullSecret1 + # - name: pullSecret2 + # or + # global: + # imagePullSecrets: + # - pullSecret1 + # - pullSecret2 + imagePullSecrets: [] + # + # Allow parent charts to override registry hostname + imageRegistry: "" + +# If set to true, this will deploy kube-state-metrics as a StatefulSet and the data +# will be automatically sharded across <.Values.replicas> pods using the built-in +# autodiscovery feature: https://github.com/kubernetes/kube-state-metrics#automated-sharding +# This is an experimental feature and there are no stability guarantees. +autosharding: + enabled: false + +replicas: 1 + +# Change the deployment strategy when autosharding is disabled. +# ref: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#strategy +# The default is "RollingUpdate" as per Kubernetes defaults. +# During a release, 'RollingUpdate' can lead to two running instances for a short period of time while 'Recreate' can create a small gap in data. +# updateStrategy: Recreate + +# Number of old history to retain to allow rollback +# Default Kubernetes value is set to 10 +revisionHistoryLimit: 10 + +# List of additional cli arguments to configure kube-state-metrics +# for example: --enable-gzip-encoding, --log-file, etc. +# all the possible args can be found here: https://github.com/kubernetes/kube-state-metrics/blob/master/docs/cli-arguments.md +extraArgs: [] + +# If false then the user will opt out of automounting API credentials. 
+automountServiceAccountToken: true + +service: + port: 8080 + # Default to clusterIP for backward compatibility + type: ClusterIP + ipDualStack: + enabled: false + ipFamilies: ["IPv6", "IPv4"] + ipFamilyPolicy: "PreferDualStack" + nodePort: 0 + loadBalancerIP: "" + # Only allow access to the loadBalancerIP from these IPs + loadBalancerSourceRanges: [] + clusterIP: "" + annotations: {} + +## Additional labels to add to all resources +customLabels: {} + # app: kube-state-metrics + +## Override selector labels +selectorOverride: {} + +## set to true to add the release label so scraping of the servicemonitor with kube-prometheus-stack works out of the box +releaseLabel: false + +hostNetwork: false + +rbac: + # If true, create & use RBAC resources + create: true + + # Set to a rolename to use existing role - skipping role creation - but still doing serviceaccount and rolebinding to it, rolename set here. + # useExistingRole: your-existing-role + + # If set to false - Run without cluster-admin privileges - ONLY works if namespace is also set (if useExistingRole is set this name is used as ClusterRole or Role to bind to) + useClusterRole: true + + # Add permissions for CustomResources' apiGroups in Role/ClusterRole. Should be used in conjunction with Custom Resource State Metrics configuration + # Example: + # - apiGroups: ["monitoring.coreos.com"] + # resources: ["prometheuses"] + # verbs: ["list", "watch"] + extraRules: [] + +# Configure kube-rbac-proxy. When enabled, creates one kube-rbac-proxy container per exposed HTTP endpoint (metrics and telemetry if enabled). +# The requests are served through the same service but requests are then HTTPS. +kubeRBACProxy: + enabled: false + image: + registry: quay.io + repository: brancz/kube-rbac-proxy + tag: v0.18.0 + sha: "" + pullPolicy: IfNotPresent + + # List of additional cli arguments to configure kube-rbac-proxy + # for example: --tls-cipher-suites, --log-file, etc.
+ # all the possible args can be found here: https://github.com/brancz/kube-rbac-proxy#usage + extraArgs: [] + + ## Specify security settings for a Container + ## Allows overrides and additional options compared to (Pod) securityContext + ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container + containerSecurityContext: + readOnlyRootFilesystem: true + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + + resources: {} + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 64Mi + # requests: + # cpu: 10m + # memory: 32Mi + + ## volumeMounts enables mounting custom volumes in rbac-proxy containers + ## Useful for TLS certificates and keys + volumeMounts: [] + # - mountPath: /etc/tls + # name: kube-rbac-proxy-tls + # readOnly: true + +serviceAccount: + # Specifies whether a ServiceAccount should be created; requires rbac true + create: true + # The name of the ServiceAccount to use. + # If not set and create is true, a name is generated using the fullname template + name: + # Reference to one or more secrets to be used when pulling images + # ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + imagePullSecrets: [] + # ServiceAccount annotations. + # Use case: AWS EKS IAM roles for service accounts + # ref: https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html + annotations: {} + # If false then the user will opt out of automounting API credentials.
+ automountServiceAccountToken: true + +prometheus: + monitor: + enabled: false + annotations: {} + additionalLabels: {} + namespace: "" + namespaceSelector: [] + jobLabel: "" + targetLabels: [] + podTargetLabels: [] + ## SampleLimit defines per-scrape limit on number of scraped samples that will be accepted. + ## + sampleLimit: 0 + + ## TargetLimit defines a limit on the number of scraped targets that will be accepted. + ## + targetLimit: 0 + + ## Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. + ## + labelLimit: 0 + + ## Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. + ## + labelNameLengthLimit: 0 + + ## Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. + ## + labelValueLengthLimit: 0 + selectorOverride: {} + + ## kube-state-metrics endpoint + http: + interval: "" + scrapeTimeout: "" + proxyUrl: "" + ## Whether to enable HTTP2 for servicemonitor + enableHttp2: false + honorLabels: false + metricRelabelings: [] + relabelings: [] + scheme: "" + ## File to read bearer token for scraping targets + bearerTokenFile: "" + ## Secret to mount to read bearer token for scraping targets. The secret needs + ## to be in the same namespace as the service monitor and accessible by the + ## Prometheus Operator + bearerTokenSecret: {} + # name: secret-name + # key: key-name + tlsConfig: {} + + ## selfMonitor endpoint + metrics: + interval: "" + scrapeTimeout: "" + proxyUrl: "" + ## Whether to enable HTTP2 for servicemonitor + enableHttp2: false + honorLabels: false + metricRelabelings: [] + relabelings: [] + scheme: "" + ## File to read bearer token for scraping targets + bearerTokenFile: "" + ## Secret to mount to read bearer token for scraping targets. 
The secret needs + ## to be in the same namespace as the service monitor and accessible by the + ## Prometheus Operator + bearerTokenSecret: {} + # name: secret-name + # key: key-name + tlsConfig: {} + +## Specify if a Pod Security Policy for kube-state-metrics must be created +## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ +## +podSecurityPolicy: + enabled: false + annotations: {} + ## Specify pod annotations + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl + ## + # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' + # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default' + # apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default' + + additionalVolumes: [] + +## Configure network policy for kube-state-metrics +networkPolicy: + enabled: false + # networkPolicy.flavor -- Flavor of the network policy to use. 
+ # Can be: + # * kubernetes for networking.k8s.io/v1/NetworkPolicy + # * cilium for cilium.io/v2/CiliumNetworkPolicy + flavor: kubernetes + + ## Configure the cilium network policy kube-apiserver selector + # cilium: + # kubeApiServerSelector: + # - toEntities: + # - kube-apiserver + + # egress: + # - {} + # ingress: + # - {} + # podSelector: + # matchLabels: + # app.kubernetes.io/name: kube-state-metrics + +securityContext: + enabled: true + runAsGroup: 65534 + runAsUser: 65534 + fsGroup: 65534 + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + +## Specify security settings for a Container +## Allows overrides and additional options compared to (Pod) securityContext +## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container +containerSecurityContext: + readOnlyRootFilesystem: true + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + +## Node labels for pod assignment +## Ref: https://kubernetes.io/docs/user-guide/node-selection/ +nodeSelector: {} + +## Affinity settings for pod assignment +## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ +affinity: {} + +## Tolerations for pod assignment +## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ +tolerations: [] + +## Topology spread constraints for pod assignment +## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ +topologySpreadConstraints: [] + +# Annotations to be added to the deployment/statefulset +annotations: {} + +# Annotations to be added to the pod +podAnnotations: {} + +## Assign a PriorityClassName to pods if set +# priorityClassName: "" + +# Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ +podDisruptionBudget: {} + +# Comma-separated list of metrics to be exposed. +# This list comprises of exact metric names and/or regex patterns. +# The allowlist and denylist are mutually exclusive. 
+metricAllowlist: [] + +# Comma-separated list of metrics not to be enabled. +# This list comprises exact metric names and/or regex patterns. +# The allowlist and denylist are mutually exclusive. +metricDenylist: [] + +# Comma-separated list of additional Kubernetes label keys that will be used in the resource's +# labels metric. By default the metric contains only name and namespace labels. +# To include additional labels, provide a list of resource names in their plural form and Kubernetes +# label keys you would like to allow for them (Example: '=namespaces=[k8s-label-1,k8s-label-n,...],pods=[app],...)'. +# A single '*' can be provided per resource instead to allow any labels, but that has +# severe performance implications (Example: '=pods=[*]'). +metricLabelsAllowlist: [] + # - namespaces=[k8s-label-1,k8s-label-n] + +# Comma-separated list of Kubernetes annotation keys that will be used in the resource's +# labels metric. By default the metric contains only name and namespace labels. +# To include additional annotations, provide a list of resource names in their plural form and Kubernetes +# annotation keys you would like to allow for them (Example: '=namespaces=[kubernetes.io/team,...],pods=[kubernetes.io/team],...)'. +# A single '*' can be provided per resource instead to allow any annotations, but that has +# severe performance implications (Example: '=pods=[*]'). +metricAnnotationsAllowList: [] + # - pods=[k8s-annotation-1,k8s-annotation-n] + +# Available collectors for kube-state-metrics. +# By default, all available resources are enabled; comment out to disable. 
+collectors: + - certificatesigningrequests + - configmaps + - cronjobs + - daemonsets + - deployments + - endpoints + - horizontalpodautoscalers + - ingresses + - jobs + - leases + - limitranges + - mutatingwebhookconfigurations + - namespaces + - networkpolicies + - nodes + - persistentvolumeclaims + - persistentvolumes + - poddisruptionbudgets + - pods + - replicasets + - replicationcontrollers + - resourcequotas + - secrets + - services + - statefulsets + - storageclasses + - validatingwebhookconfigurations + - volumeattachments + +# Enabling kubeconfig will pass the --kubeconfig argument to the container +kubeconfig: + enabled: false + # base64 encoded kube-config file + secret: + +# Enabling support for customResourceState, will create a configMap including your config that will be read from kube-state-metrics +customResourceState: + enabled: false + # Add (Cluster)Role permissions to list/watch the customResources defined in the config to rbac.extraRules + config: {} + +# Enable only the release namespace for collecting resources. By default all namespaces are collected. +# If releaseNamespace and namespaces are both set a merged list will be collected. +releaseNamespace: false + +# Comma-separated list(string) or yaml list of namespaces to be enabled for collecting resources. By default all namespaces are collected. +namespaces: "" + +# Comma-separated list of namespaces not to be enabled. If namespaces and namespaces-denylist are both set, +# only namespaces that are excluded in namespaces-denylist will be used. +namespacesDenylist: "" + +## Override the deployment namespace +## +namespaceOverride: "" + +resources: {} + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. 
If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 64Mi + # requests: + # cpu: 10m + # memory: 32Mi + +## Provide a k8s version to define apiGroups for podSecurityPolicy Cluster Role. +## For example: kubeTargetVersionOverride: 1.14.9 +## +kubeTargetVersionOverride: "" + +# Enable self metrics configuration for service and Service Monitor +# Default values for telemetry configuration can be overridden +# If you set telemetryNodePort, you must also set service.type to NodePort +selfMonitor: + enabled: false + # telemetryHost: 0.0.0.0 + # telemetryPort: 8081 + # telemetryNodePort: 0 + +# Enable vertical pod autoscaler support for kube-state-metrics +verticalPodAutoscaler: + enabled: false + + # Recommender responsible for generating recommendation for the object. + # List should be empty (then the default recommender will generate the recommendation) + # or contain exactly one recommender. + # recommenders: [] + # - name: custom-recommender-performance + + # List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory + controlledResources: [] + # Specifies which resource values should be controlled: RequestsOnly or RequestsAndLimits. + # controlledValues: RequestsAndLimits + + # Define the max allowed resources for the pod + maxAllowed: {} + # cpu: 200m + # memory: 100Mi + # Define the min allowed resources for the pod + minAllowed: {} + # cpu: 200m + # memory: 100Mi + + # updatePolicy: + # Specifies minimal number of replicas which need to be alive for VPA Updater to attempt pod eviction + # minReplicas: 1 + # Specifies whether recommended updates are applied when a Pod is started and whether recommended updates + # are applied during the life of a Pod. Possible values are "Off", "Initial", "Recreate", and "Auto". + # updateMode: Auto + +# volumeMounts are used to add custom volume mounts to deployment. 
+# See example below +volumeMounts: [] +# - mountPath: /etc/config +# name: config-volume + +# volumes are used to add custom volumes to deployment +# See example below +volumes: [] +# - configMap: +# name: cm-for-volume +# name: config-volume + +# Extra manifests to deploy as an array +extraManifests: [] + # - apiVersion: v1 + # kind: ConfigMap + # metadata: + # labels: + # name: prometheus-extra + # data: + # extra-data: "value" + +## Containers allows injecting additional containers. +containers: [] + # - name: crd-init + # image: kiwigrid/k8s-sidecar:latest + +## InitContainers allows injecting additional initContainers. +initContainers: [] + # - name: crd-sidecar + # image: kiwigrid/k8s-sidecar:latest + +## Liveness probe +## +livenessProbe: + failureThreshold: 3 + httpGet: + httpHeaders: [] + scheme: http + initialDelaySeconds: 5 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 + +## Readiness probe +## +readinessProbe: + failureThreshold: 3 + httpGet: + httpHeaders: [] + scheme: http + initialDelaySeconds: 5 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 5 diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/.helmignore b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/.helmignore new file mode 100644 index 0000000000..f0c1319444 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/Chart.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/Chart.yaml new file mode 100644 index 0000000000..5af1f445e8 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/Chart.yaml @@ -0,0 +1,25 @@ +annotations: + artifacthub.io/license: Apache-2.0 + artifacthub.io/links: | + - name: Chart Source + url: https://github.com/prometheus-community/helm-charts +apiVersion: v2 +appVersion: 1.8.1 +description: A Helm chart for prometheus node-exporter +home: https://github.com/prometheus/node_exporter/ +keywords: +- node-exporter +- prometheus +- exporter +maintainers: +- email: gianrubio@gmail.com + name: gianrubio +- email: zanhsieh@gmail.com + name: zanhsieh +- email: rootsandtrees@posteo.de + name: zeritti +name: prometheus-node-exporter +sources: +- https://github.com/prometheus/node_exporter/ +type: application +version: 4.37.0 diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/README.md b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/README.md new file mode 100644 index 0000000000..ef83844102 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/README.md @@ -0,0 +1,96 @@ +# Prometheus Node Exporter + +Prometheus exporter for hardware and OS metrics exposed by *NIX kernels, written in Go with pluggable metric collectors. + +This chart bootstraps a Prometheus [Node Exporter](http://github.com/prometheus/node_exporter) daemonset on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager. 
+ +## Get Repository Info + +```console +helm repo add prometheus-community https://prometheus-community.github.io/helm-charts +helm repo update +``` + +_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._ + +## Install Chart + +```console +helm install [RELEASE_NAME] prometheus-community/prometheus-node-exporter +``` + +_See [configuration](#configuring) below._ + +_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._ + +## Uninstall Chart + +```console +helm uninstall [RELEASE_NAME] +``` + +This removes all the Kubernetes components associated with the chart and deletes the release. + +_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._ + +## Upgrading Chart + +```console +helm upgrade [RELEASE_NAME] prometheus-community/prometheus-node-exporter --install +``` + +_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._ + +### 3.x to 4.x + +Starting from version 4.0.0, the `node-exporter` chart uses the [Kubernetes recommended labels](https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/). Therefore you have to delete the daemonset before you upgrade. + +```console +kubectl delete daemonset -l app=prometheus-node-exporter +helm upgrade -i prometheus-node-exporter prometheus-community/prometheus-node-exporter +``` + +If you use your own custom [ServiceMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#servicemonitor) or [PodMonitor](https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#podmonitor), be sure to update their `selector` fields to match the new labels. 
+ +### From 2.x to 3.x + +Change the following: + +```yaml +hostRootFsMount: true +``` + +to: + +```yaml +hostRootFsMount: + enabled: true + mountPropagation: HostToContainer +``` + +## Configuring + +See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). To see all configurable options with detailed comments, visit the chart's [values.yaml](./values.yaml), or run these configuration commands: + +```console +helm show values prometheus-community/prometheus-node-exporter +``` + +### kube-rbac-proxy + +You can enable `prometheus-node-exporter` endpoint protection using `kube-rbac-proxy`. By setting `kubeRBACProxy.enabled: true`, this chart will deploy a RBAC proxy container protecting the node-exporter endpoint. +To authorize access, authenticate your requests (via a `ServiceAccount` for example) with a `ClusterRole` attached such as: + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: prometheus-node-exporter-read +rules: + - apiGroups: [ "" ] + resources: ["services/node-exporter-prometheus-node-exporter"] + verbs: + - get +``` + +See [kube-rbac-proxy examples](https://github.com/brancz/kube-rbac-proxy/tree/master/examples/resource-attributes) for more details. 
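To complete the picture, the `ClusterRole` above must also be bound to whatever identity your scraper authenticates as; a minimal sketch of such a binding (the ServiceAccount name and namespace below are placeholders, not defined by this chart):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-node-exporter-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-node-exporter-read
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s      # placeholder: the ServiceAccount your scraper runs as
    namespace: monitoring     # placeholder: the namespace that ServiceAccount lives in
```

With this binding in place, requests carrying that ServiceAccount's bearer token pass the kube-rbac-proxy `SubjectAccessReview` check for the protected metrics endpoint.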
diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/ci/port-values.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/ci/port-values.yaml new file mode 100644 index 0000000000..dbfb4b67ff --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/ci/port-values.yaml @@ -0,0 +1,3 @@ +service: + targetPort: 9102 + port: 9102 diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/NOTES.txt b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/NOTES.txt new file mode 100644 index 0000000000..db8584def8 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/NOTES.txt @@ -0,0 +1,29 @@ +1. Get the application URL by running these commands: +{{- if contains "NodePort" .Values.service.type }} + export NODE_PORT=$(kubectl get --namespace {{ template "prometheus-node-exporter.namespace" . }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "prometheus-node-exporter.fullname" . }}) + export NODE_IP=$(kubectl get nodes --namespace {{ template "prometheus-node-exporter.namespace" . }} -o jsonpath="{.items[0].status.addresses[0].address}") + echo http://$NODE_IP:$NODE_PORT +{{- else if contains "LoadBalancer" .Values.service.type }} + NOTE: It may take a few minutes for the LoadBalancer IP to be available. + You can watch its status by running 'kubectl get svc -w {{ template "prometheus-node-exporter.fullname" . }}' + export SERVICE_IP=$(kubectl get svc --namespace {{ template "prometheus-node-exporter.namespace" . }} {{ template "prometheus-node-exporter.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + echo http://$SERVICE_IP:{{ .Values.service.port }} +{{- else if contains "ClusterIP" .Values.service.type }} + export POD_NAME=$(kubectl get pods --namespace {{ template "prometheus-node-exporter.namespace" . 
}} -l "app.kubernetes.io/name={{ template "prometheus-node-exporter.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") + echo "Visit http://127.0.0.1:9100 to use your application" + kubectl port-forward --namespace {{ template "prometheus-node-exporter.namespace" . }} $POD_NAME 9100 +{{- end }} + +{{- if .Values.kubeRBACProxy.enabled}} + +kube-rbac-proxy endpoint protections is enabled: +- Metrics endpoints is now HTTPS +- Ensure that the client authenticates the requests (e.g. via service account) with the following role permissions: +``` +rules: + - apiGroups: [ "" ] + resources: ["services/{{ template "prometheus-node-exporter.fullname" . }}"] + verbs: + - get +``` +{{- end }} \ No newline at end of file diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/_helpers.tpl b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/_helpers.tpl new file mode 100644 index 0000000000..8e84832cbf --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/_helpers.tpl @@ -0,0 +1,202 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "prometheus-node-exporter.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. 
+*/}} +{{- define "prometheus-node-exporter.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "prometheus-node-exporter.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "prometheus-node-exporter.labels" -}} +helm.sh/chart: {{ include "prometheus-node-exporter.chart" . }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +app.kubernetes.io/component: metrics +app.kubernetes.io/part-of: {{ include "prometheus-node-exporter.name" . }} +{{ include "prometheus-node-exporter.selectorLabels" . }} +{{- with .Chart.AppVersion }} +app.kubernetes.io/version: {{ . | quote }} +{{- end }} +{{- with .Values.podLabels }} +{{ toYaml . }} +{{- end }} +{{- if .Values.releaseLabel }} +release: {{ .Release.Name }} +{{- end }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "prometheus-node-exporter.selectorLabels" -}} +app.kubernetes.io/name: {{ include "prometheus-node-exporter.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + + +{{/* +Create the name of the service account to use +*/}} +{{- define "prometheus-node-exporter.serviceAccountName" -}} +{{- if .Values.serviceAccount.create }} +{{- default (include "prometheus-node-exporter.fullname" .) .Values.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.serviceAccount.name }} +{{- end }} +{{- end }} + +{{/* +The image to use +*/}} +{{- define "prometheus-node-exporter.image" -}} +{{- if .Values.image.sha }} +{{- fail "image.sha forbidden. 
Use image.digest instead" }} +{{- else if .Values.image.digest }} +{{- if .Values.global.imageRegistry }} +{{- printf "%s/%s:%s@%s" .Values.global.imageRegistry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) .Values.image.digest }} +{{- else }} +{{- printf "%s/%s:%s@%s" .Values.image.registry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) .Values.image.digest }} +{{- end }} +{{- else }} +{{- if .Values.global.imageRegistry }} +{{- printf "%s/%s:%s" .Values.global.imageRegistry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) }} +{{- else }} +{{- printf "%s/%s:%s" .Values.image.registry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Allow the release namespace to be overridden for multi-namespace deployments in combined charts +*/}} +{{- define "prometheus-node-exporter.namespace" -}} +{{- if .Values.namespaceOverride }} +{{- .Values.namespaceOverride }} +{{- else }} +{{- .Release.Namespace }} +{{- end }} +{{- end }} + +{{/* +Create the namespace name of the service monitor +*/}} +{{- define "prometheus-node-exporter.monitor-namespace" -}} +{{- if .Values.namespaceOverride }} +{{- .Values.namespaceOverride }} +{{- else }} +{{- if .Values.prometheus.monitor.namespace }} +{{- .Values.prometheus.monitor.namespace }} +{{- else }} +{{- .Release.Namespace }} +{{- end }} +{{- end }} +{{- end }} + +{{/* Sets default scrape limits for servicemonitor */}} +{{- define "servicemonitor.scrapeLimits" -}} +{{- with .sampleLimit }} +sampleLimit: {{ . }} +{{- end }} +{{- with .targetLimit }} +targetLimit: {{ . }} +{{- end }} +{{- with .labelLimit }} +labelLimit: {{ . }} +{{- end }} +{{- with .labelNameLengthLimit }} +labelNameLengthLimit: {{ . }} +{{- end }} +{{- with .labelValueLengthLimit }} +labelValueLengthLimit: {{ . }} +{{- end }} +{{- end }} + +{{/* +Formats imagePullSecrets. 
Input is (dict "Values" .Values "imagePullSecrets" .{specific imagePullSecrets}) +*/}} +{{- define "prometheus-node-exporter.imagePullSecrets" -}} +{{- range (concat .Values.global.imagePullSecrets .imagePullSecrets) }} + {{- if eq (typeOf .) "map[string]interface {}" }} +- {{ toYaml . | trim }} + {{- else }} +- name: {{ . }} + {{- end }} +{{- end }} +{{- end -}} + +{{/* +Create the namespace name of the pod monitor +*/}} +{{- define "prometheus-node-exporter.podmonitor-namespace" -}} +{{- if .Values.namespaceOverride }} +{{- .Values.namespaceOverride }} +{{- else }} +{{- if .Values.prometheus.podMonitor.namespace }} +{{- .Values.prometheus.podMonitor.namespace }} +{{- else }} +{{- .Release.Namespace }} +{{- end }} +{{- end }} +{{- end }} + +{{/* Sets default scrape limits for podmonitor */}} +{{- define "podmonitor.scrapeLimits" -}} +{{- with .sampleLimit }} +sampleLimit: {{ . }} +{{- end }} +{{- with .targetLimit }} +targetLimit: {{ . }} +{{- end }} +{{- with .labelLimit }} +labelLimit: {{ . }} +{{- end }} +{{- with .labelNameLengthLimit }} +labelNameLengthLimit: {{ . }} +{{- end }} +{{- with .labelValueLengthLimit }} +labelValueLengthLimit: {{ . 
}} +{{- end }} +{{- end }} + +{{/* Sets sidecar volumeMounts */}} +{{- define "prometheus-node-exporter.sidecarVolumeMounts" -}} +{{- range $_, $mount := $.Values.sidecarVolumeMount }} +- name: {{ $mount.name }} + mountPath: {{ $mount.mountPath }} + readOnly: {{ $mount.readOnly }} +{{- end }} +{{- range $_, $mount := $.Values.sidecarHostVolumeMounts }} +- name: {{ $mount.name }} + mountPath: {{ $mount.mountPath }} + readOnly: {{ $mount.readOnly }} +{{- if $mount.mountPropagation }} + mountPropagation: {{ $mount.mountPropagation }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/clusterrole.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/clusterrole.yaml new file mode 100644 index 0000000000..c256dba73d --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/clusterrole.yaml @@ -0,0 +1,19 @@ +{{- if and (eq .Values.rbac.create true) (eq .Values.kubeRBACProxy.enabled true) -}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "prometheus-node-exporter.fullname" . }} + labels: + {{- include "prometheus-node-exporter.labels" . 
| nindent 4 }} +rules: + {{- if $.Values.kubeRBACProxy.enabled }} + - apiGroups: [ "authentication.k8s.io" ] + resources: + - tokenreviews + verbs: [ "create" ] + - apiGroups: [ "authorization.k8s.io" ] + resources: + - subjectaccessreviews + verbs: [ "create" ] + {{- end }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/clusterrolebinding.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/clusterrolebinding.yaml new file mode 100644 index 0000000000..653305ad9e --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/clusterrolebinding.yaml @@ -0,0 +1,20 @@ +{{- if and (eq .Values.rbac.create true) (eq .Values.kubeRBACProxy.enabled true) -}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + {{- include "prometheus-node-exporter.labels" . | nindent 4 }} + name: {{ template "prometheus-node-exporter.fullname" . }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole +{{- if .Values.rbac.useExistingRole }} + name: {{ .Values.rbac.useExistingRole }} +{{- else }} + name: {{ template "prometheus-node-exporter.fullname" . }} +{{- end }} +subjects: +- kind: ServiceAccount + name: {{ template "prometheus-node-exporter.serviceAccountName" . }} + namespace: {{ template "prometheus-node-exporter.namespace" . }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/daemonset.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/daemonset.yaml new file mode 100644 index 0000000000..23896a230a --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/daemonset.yaml @@ -0,0 +1,311 @@ +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: {{ include "prometheus-node-exporter.fullname" . 
}} + namespace: {{ include "prometheus-node-exporter.namespace" . }} + labels: + {{- include "prometheus-node-exporter.labels" . | nindent 4 }} + {{- with .Values.daemonsetAnnotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + selector: + matchLabels: + {{- include "prometheus-node-exporter.selectorLabels" . | nindent 6 }} + revisionHistoryLimit: {{ .Values.revisionHistoryLimit }} + {{- with .Values.updateStrategy }} + updateStrategy: + {{- toYaml . | nindent 4 }} + {{- end }} + template: + metadata: + {{- with .Values.podAnnotations }} + annotations: + {{- toYaml . | nindent 8 }} + {{- end }} + labels: + {{- include "prometheus-node-exporter.labels" . | nindent 8 }} + spec: + automountServiceAccountToken: {{ ternary true false (or .Values.serviceAccount.automountServiceAccountToken .Values.kubeRBACProxy.enabled) }} + {{- with .Values.securityContext }} + securityContext: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.priorityClassName }} + priorityClassName: {{ . }} + {{- end }} + {{- with .Values.extraInitContainers }} + initContainers: + {{- toYaml . | nindent 8 }} + {{- end }} + serviceAccountName: {{ include "prometheus-node-exporter.serviceAccountName" . }} + {{- with .Values.terminationGracePeriodSeconds }} + terminationGracePeriodSeconds: {{ . }} + {{- end }} + containers: + {{- $servicePort := ternary .Values.kubeRBACProxy.port .Values.service.port .Values.kubeRBACProxy.enabled }} + - name: node-exporter + image: {{ include "prometheus-node-exporter.image" . 
}} + imagePullPolicy: {{ .Values.image.pullPolicy }} + args: + - --path.procfs=/host/proc + - --path.sysfs=/host/sys + {{- if .Values.hostRootFsMount.enabled }} + - --path.rootfs=/host/root + {{- if semverCompare ">=1.4.0-0" (coalesce .Values.version .Values.image.tag .Chart.AppVersion) }} + - --path.udev.data=/host/root/run/udev/data + {{- end }} + {{- end }} + - --web.listen-address=[$(HOST_IP)]:{{ $servicePort }} + {{- with .Values.extraArgs }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.containerSecurityContext }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + env: + - name: HOST_IP + {{- if .Values.kubeRBACProxy.enabled }} + value: 127.0.0.1 + {{- else if .Values.service.listenOnAllInterfaces }} + value: 0.0.0.0 + {{- else }} + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: status.hostIP + {{- end }} + {{- range $key, $value := .Values.env }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + {{- if eq .Values.kubeRBACProxy.enabled false }} + ports: + - name: {{ .Values.service.portName }} + containerPort: {{ .Values.service.port }} + protocol: TCP + {{- end }} + livenessProbe: + failureThreshold: {{ .Values.livenessProbe.failureThreshold }} + httpGet: + {{- if .Values.kubeRBACProxy.enabled }} + host: 127.0.0.1 + {{- end }} + httpHeaders: + {{- range $_, $header := .Values.livenessProbe.httpGet.httpHeaders }} + - name: {{ $header.name }} + value: {{ $header.value }} + {{- end }} + path: / + port: {{ $servicePort }} + scheme: {{ upper .Values.livenessProbe.httpGet.scheme }} + initialDelaySeconds: {{ .Values.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.livenessProbe.periodSeconds }} + successThreshold: {{ .Values.livenessProbe.successThreshold }} + timeoutSeconds: {{ .Values.livenessProbe.timeoutSeconds }} + readinessProbe: + failureThreshold: {{ .Values.readinessProbe.failureThreshold }} + httpGet: + {{- if .Values.kubeRBACProxy.enabled }} + host: 127.0.0.1 + {{- end }} + httpHeaders: 
+ {{- range $_, $header := .Values.readinessProbe.httpGet.httpHeaders }} + - name: {{ $header.name }} + value: {{ $header.value }} + {{- end }} + path: / + port: {{ $servicePort }} + scheme: {{ upper .Values.readinessProbe.httpGet.scheme }} + initialDelaySeconds: {{ .Values.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.readinessProbe.periodSeconds }} + successThreshold: {{ .Values.readinessProbe.successThreshold }} + timeoutSeconds: {{ .Values.readinessProbe.timeoutSeconds }} + {{- with .Values.resources }} + resources: + {{- toYaml . | nindent 12 }} + {{- end }} + {{- if .Values.terminationMessageParams.enabled }} + {{- with .Values.terminationMessageParams }} + terminationMessagePath: {{ .terminationMessagePath }} + terminationMessagePolicy: {{ .terminationMessagePolicy }} + {{- end }} + {{- end }} + volumeMounts: + - name: proc + mountPath: /host/proc + {{- with .Values.hostProcFsMount.mountPropagation }} + mountPropagation: {{ . }} + {{- end }} + readOnly: true + - name: sys + mountPath: /host/sys + {{- with .Values.hostSysFsMount.mountPropagation }} + mountPropagation: {{ . }} + {{- end }} + readOnly: true + {{- if .Values.hostRootFsMount.enabled }} + - name: root + mountPath: /host/root + {{- with .Values.hostRootFsMount.mountPropagation }} + mountPropagation: {{ . }} + {{- end }} + readOnly: true + {{- end }} + {{- range $_, $mount := .Values.extraHostVolumeMounts }} + - name: {{ $mount.name }} + mountPath: {{ $mount.mountPath }} + readOnly: {{ $mount.readOnly }} + {{- with $mount.mountPropagation }} + mountPropagation: {{ . 
}} + {{- end }} + {{- end }} + {{- range $_, $mount := .Values.sidecarVolumeMount }} + - name: {{ $mount.name }} + mountPath: {{ $mount.mountPath }} + readOnly: true + {{- end }} + {{- range $_, $mount := .Values.configmaps }} + - name: {{ $mount.name }} + mountPath: {{ $mount.mountPath }} + {{- end }} + {{- range $_, $mount := .Values.secrets }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + {{- end }} + {{- range .Values.sidecars }} + {{- $overwrites := dict "volumeMounts" (concat (include "prometheus-node-exporter.sidecarVolumeMounts" $ | fromYamlArray) (.volumeMounts | default list) | default list) }} + {{- $defaults := dict "image" (include "prometheus-node-exporter.image" $) "securityContext" $.Values.containerSecurityContext "imagePullPolicy" $.Values.image.pullPolicy }} + - {{- toYaml (merge $overwrites . $defaults) | nindent 10 }} + {{- end }} + {{- if .Values.kubeRBACProxy.enabled }} + - name: kube-rbac-proxy + args: + {{- if .Values.kubeRBACProxy.extraArgs }} + {{- .Values.kubeRBACProxy.extraArgs | toYaml | nindent 12 }} + {{- end }} + - --secure-listen-address=:{{ .Values.service.port}} + - --upstream=http://127.0.0.1:{{ $servicePort }}/ + - --proxy-endpoints-port={{ .Values.kubeRBACProxy.proxyEndpointsPort }} + - --config-file=/etc/kube-rbac-proxy-config/config-file.yaml + volumeMounts: + - name: kube-rbac-proxy-config + mountPath: /etc/kube-rbac-proxy-config + imagePullPolicy: {{ .Values.kubeRBACProxy.image.pullPolicy }} + {{- if .Values.kubeRBACProxy.image.sha }} + image: "{{ .Values.global.imageRegistry | default .Values.kubeRBACProxy.image.registry}}/{{ .Values.kubeRBACProxy.image.repository }}:{{ .Values.kubeRBACProxy.image.tag }}@sha256:{{ .Values.kubeRBACProxy.image.sha }}" + {{- else }} + image: "{{ .Values.global.imageRegistry | default .Values.kubeRBACProxy.image.registry}}/{{ .Values.kubeRBACProxy.image.repository }}:{{ .Values.kubeRBACProxy.image.tag }}" + {{- end }} + ports: + - containerPort: {{ .Values.service.port}} + name: {{ 
.Values.kubeRBACProxy.portName }} + {{- if .Values.kubeRBACProxy.enableHostPort }} + hostPort: {{ .Values.service.port }} + {{- end }} + - containerPort: {{ .Values.kubeRBACProxy.proxyEndpointsPort }} + {{- if .Values.kubeRBACProxy.enableProxyEndpointsHostPort }} + hostPort: {{ .Values.kubeRBACProxy.proxyEndpointsPort }} + {{- end }} + name: "http-healthz" + readinessProbe: + httpGet: + scheme: HTTPS + port: {{ .Values.kubeRBACProxy.proxyEndpointsPort }} + path: healthz + initialDelaySeconds: 5 + timeoutSeconds: 5 + {{- if .Values.kubeRBACProxy.resources }} + resources: + {{- toYaml .Values.kubeRBACProxy.resources | nindent 12 }} + {{- end }} + {{- if .Values.terminationMessageParams.enabled }} + {{- with .Values.terminationMessageParams }} + terminationMessagePath: {{ .terminationMessagePath }} + terminationMessagePolicy: {{ .terminationMessagePolicy }} + {{- end }} + {{- end }} + {{- with .Values.kubeRBACProxy.env }} + env: + {{- range $key, $value := $.Values.kubeRBACProxy.env }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + {{- end }} + {{- if .Values.kubeRBACProxy.containerSecurityContext }} + securityContext: + {{ toYaml .Values.kubeRBACProxy.containerSecurityContext | nindent 12 }} + {{- end }} + {{- end }} + {{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }} + imagePullSecrets: + {{- include "prometheus-node-exporter.imagePullSecrets" (dict "Values" .Values "imagePullSecrets" .Values.imagePullSecrets) | indent 8 }} + {{- end }} + hostNetwork: {{ .Values.hostNetwork }} + hostPID: {{ .Values.hostPID }} + {{- with .Values.affinity }} + affinity: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.dnsConfig }} + dnsConfig: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.nodeSelector }} + nodeSelector: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.restartPolicy }} + restartPolicy: {{ . }} + {{- end }} + {{- with .Values.tolerations }} + tolerations: + {{- toYaml . 
| nindent 8 }} + {{- end }} + volumes: + - name: proc + hostPath: + path: /proc + - name: sys + hostPath: + path: /sys + {{- if .Values.hostRootFsMount.enabled }} + - name: root + hostPath: + path: / + {{- end }} + {{- range $_, $mount := .Values.extraHostVolumeMounts }} + - name: {{ $mount.name }} + hostPath: + path: {{ $mount.hostPath }} + {{- with $mount.type }} + type: {{ . }} + {{- end }} + {{- end }} + {{- range $_, $mount := .Values.sidecarVolumeMount }} + - name: {{ $mount.name }} + emptyDir: + medium: Memory + {{- end }} + {{- range $_, $mount := .Values.sidecarHostVolumeMounts }} + - name: {{ $mount.name }} + hostPath: + path: {{ $mount.hostPath }} + {{- end }} + {{- range $_, $mount := .Values.configmaps }} + - name: {{ $mount.name }} + configMap: + name: {{ $mount.name }} + {{- end }} + {{- range $_, $mount := .Values.secrets }} + - name: {{ $mount.name }} + secret: + secretName: {{ $mount.name }} + {{- end }} + {{- if .Values.kubeRBACProxy.enabled }} + - name: kube-rbac-proxy-config + configMap: + name: {{ template "prometheus-node-exporter.fullname" . }}-rbac-config + {{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/endpoints.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/endpoints.yaml new file mode 100644 index 0000000000..45eeb8d966 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/endpoints.yaml @@ -0,0 +1,18 @@ +{{- if .Values.endpoints }} +apiVersion: v1 +kind: Endpoints +metadata: + name: {{ include "prometheus-node-exporter.fullname" . }} + namespace: {{ include "prometheus-node-exporter.namespace" . }} + labels: + {{- include "prometheus-node-exporter.labels" . | nindent 4 }} +subsets: + - addresses: + {{- range .Values.endpoints }} + - ip: {{ . 
}} + {{- end }} + ports: + - name: {{ .Values.service.portName }} + port: 9100 + protocol: TCP +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/extra-manifests.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/extra-manifests.yaml new file mode 100644 index 0000000000..2b21b71062 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/extra-manifests.yaml @@ -0,0 +1,4 @@ +{{ range .Values.extraManifests }} +--- +{{ tpl . $ }} +{{ end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/networkpolicy.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/networkpolicy.yaml new file mode 100644 index 0000000000..825722729d --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/networkpolicy.yaml @@ -0,0 +1,23 @@ +{{- if .Values.networkPolicy.enabled }} +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: {{ include "prometheus-node-exporter.fullname" . }} + namespace: {{ include "prometheus-node-exporter.namespace" . }} + labels: + {{- include "prometheus-node-exporter.labels" $ | nindent 4 }} + {{- with .Values.service.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + ingress: + - ports: + - port: {{ .Values.service.port }} + policyTypes: + - Egress + - Ingress + podSelector: + matchLabels: + {{- include "prometheus-node-exporter.selectorLabels" . 
| nindent 6 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/podmonitor.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/podmonitor.yaml new file mode 100644 index 0000000000..f88da6a34e --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/podmonitor.yaml @@ -0,0 +1,91 @@ +{{- if .Values.prometheus.podMonitor.enabled }} +apiVersion: {{ .Values.prometheus.podMonitor.apiVersion | default "monitoring.coreos.com/v1" }} +kind: PodMonitor +metadata: + name: {{ include "prometheus-node-exporter.fullname" . }} + namespace: {{ include "prometheus-node-exporter.podmonitor-namespace" . }} + labels: + {{- include "prometheus-node-exporter.labels" . | nindent 4 }} + {{- with .Values.prometheus.podMonitor.additionalLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + jobLabel: {{ default "app.kubernetes.io/name" .Values.prometheus.podMonitor.jobLabel }} + {{- include "podmonitor.scrapeLimits" .Values.prometheus.podMonitor | nindent 2 }} + selector: + matchLabels: + {{- with .Values.prometheus.podMonitor.selectorOverride }} + {{- toYaml . | nindent 6 }} + {{- else }} + {{- include "prometheus-node-exporter.selectorLabels" . | nindent 6 }} + {{- end }} + namespaceSelector: + matchNames: + - {{ include "prometheus-node-exporter.namespace" . }} + {{- with .Values.prometheus.podMonitor.attachMetadata }} + attachMetadata: + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.prometheus.podMonitor.podTargetLabels }} + podTargetLabels: + {{- toYaml . | nindent 4 }} + {{- end }} + podMetricsEndpoints: + - port: {{ .Values.service.portName }} + {{- with .Values.prometheus.podMonitor.scheme }} + scheme: {{ . }} + {{- end }} + {{- with .Values.prometheus.podMonitor.path }} + path: {{ . }} + {{- end }} + {{- with .Values.prometheus.podMonitor.basicAuth }} + basicAuth: + {{- toYaml . 
| nindent 8 }} + {{- end }} + {{- with .Values.prometheus.podMonitor.bearerTokenSecret }} + bearerTokenSecret: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.prometheus.podMonitor.tlsConfig }} + tlsConfig: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.prometheus.podMonitor.authorization }} + authorization: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.prometheus.podMonitor.oauth2 }} + oauth2: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.prometheus.podMonitor.proxyUrl }} + proxyUrl: {{ . }} + {{- end }} + {{- with .Values.prometheus.podMonitor.interval }} + interval: {{ . }} + {{- end }} + {{- with .Values.prometheus.podMonitor.honorTimestamps }} + honorTimestamps: {{ . }} + {{- end }} + {{- with .Values.prometheus.podMonitor.honorLabels }} + honorLabels: {{ . }} + {{- end }} + {{- with .Values.prometheus.podMonitor.scrapeTimeout }} + scrapeTimeout: {{ . }} + {{- end }} + {{- with .Values.prometheus.podMonitor.relabelings }} + relabelings: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.prometheus.podMonitor.metricRelabelings }} + metricRelabelings: + {{- toYaml . | nindent 8 }} + {{- end }} + enableHttp2: {{ default false .Values.prometheus.podMonitor.enableHttp2 }} + filterRunning: {{ default true .Values.prometheus.podMonitor.filterRunning }} + followRedirects: {{ default false .Values.prometheus.podMonitor.followRedirects }} + {{- with .Values.prometheus.podMonitor.params }} + params: + {{- toYaml . 
| nindent 8 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/psp-clusterrole.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/psp-clusterrole.yaml new file mode 100644 index 0000000000..8957317243 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/psp-clusterrole.yaml @@ -0,0 +1,14 @@ +{{- if and .Values.rbac.create .Values.rbac.pspEnabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: psp-{{ include "prometheus-node-exporter.fullname" . }} + labels: + {{- include "prometheus-node-exporter.labels" . | nindent 4 }} +rules: +- apiGroups: ['extensions'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - {{ include "prometheus-node-exporter.fullname" . }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/psp-clusterrolebinding.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/psp-clusterrolebinding.yaml new file mode 100644 index 0000000000..333370173b --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/psp-clusterrolebinding.yaml @@ -0,0 +1,16 @@ +{{- if and .Values.rbac.create .Values.rbac.pspEnabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: psp-{{ include "prometheus-node-exporter.fullname" . }} + labels: + {{- include "prometheus-node-exporter.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp-{{ include "prometheus-node-exporter.fullname" . }} +subjects: + - kind: ServiceAccount + name: {{ include "prometheus-node-exporter.fullname" . 
}} + namespace: {{ include "prometheus-node-exporter.namespace" . }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/psp.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/psp.yaml new file mode 100644 index 0000000000..4896c84daa --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/psp.yaml @@ -0,0 +1,49 @@ +{{- if and .Values.rbac.create .Values.rbac.pspEnabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: {{ include "prometheus-node-exporter.fullname" . }} + namespace: {{ include "prometheus-node-exporter.namespace" . }} + labels: + {{- include "prometheus-node-exporter.labels" . | nindent 4 }} + {{- with .Values.rbac.pspAnnotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + privileged: false + # Allow core volume types. + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + - 'persistentVolumeClaim' + - 'hostPath' + hostNetwork: true + hostIPC: false + hostPID: true + hostPorts: + - min: 0 + max: 65535 + runAsUser: + # Permits the container to run with root privileges as well. + rule: 'RunAsAny' + seLinux: + # This policy assumes the nodes are using AppArmor rather than SELinux. + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Allow adding the root group. + - min: 0 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Allow adding the root group. 
+ - min: 0 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/rbac-configmap.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/rbac-configmap.yaml new file mode 100644 index 0000000000..814e110337 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/rbac-configmap.yaml @@ -0,0 +1,16 @@ +{{- if .Values.kubeRBACProxy.enabled}} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "prometheus-node-exporter.fullname" . }}-rbac-config + namespace: {{ include "prometheus-node-exporter.namespace" . }} +data: + config-file.yaml: |+ + authorization: + resourceAttributes: + namespace: {{ template "prometheus-node-exporter.namespace" . }} + apiVersion: v1 + resource: services + subresource: {{ template "prometheus-node-exporter.fullname" . }} + name: {{ template "prometheus-node-exporter.fullname" . }} +{{- end }} \ No newline at end of file diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/service.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/service.yaml new file mode 100644 index 0000000000..8308b7b2b3 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/service.yaml @@ -0,0 +1,35 @@ +{{- if .Values.service.enabled }} +apiVersion: v1 +kind: Service +metadata: + name: {{ include "prometheus-node-exporter.fullname" . }} + namespace: {{ include "prometheus-node-exporter.namespace" . }} + labels: + {{- include "prometheus-node-exporter.labels" $ | nindent 4 }} + {{- with .Values.service.annotations }} + annotations: + {{- toYaml . 
| nindent 4 }} + {{- end }} +spec: +{{- if .Values.service.ipDualStack.enabled }} + ipFamilies: {{ toYaml .Values.service.ipDualStack.ipFamilies | nindent 4 }} + ipFamilyPolicy: {{ .Values.service.ipDualStack.ipFamilyPolicy }} +{{- end }} +{{- if .Values.service.externalTrafficPolicy }} + externalTrafficPolicy: {{ .Values.service.externalTrafficPolicy }} +{{- end }} + type: {{ .Values.service.type }} +{{- if and (eq .Values.service.type "ClusterIP") .Values.service.clusterIP }} + clusterIP: "{{ .Values.service.clusterIP }}" +{{- end }} + ports: + - port: {{ .Values.service.port }} + {{- if ( and (eq .Values.service.type "NodePort" ) (not (empty .Values.service.nodePort)) ) }} + nodePort: {{ .Values.service.nodePort }} + {{- end }} + targetPort: {{ .Values.service.targetPort }} + protocol: TCP + name: {{ .Values.service.portName }} + selector: + {{- include "prometheus-node-exporter.selectorLabels" . | nindent 4 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/serviceaccount.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/serviceaccount.yaml new file mode 100644 index 0000000000..462b0cda4b --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/serviceaccount.yaml @@ -0,0 +1,18 @@ +{{- if and .Values.rbac.create .Values.serviceAccount.create -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "prometheus-node-exporter.serviceAccountName" . }} + namespace: {{ include "prometheus-node-exporter.namespace" . }} + labels: + {{- include "prometheus-node-exporter.labels" . | nindent 4 }} + {{- with .Values.serviceAccount.annotations }} + annotations: + {{- toYaml . 
| nindent 4 }} + {{- end }} +automountServiceAccountToken: {{ .Values.serviceAccount.automountServiceAccountToken }} +{{- if or .Values.serviceAccount.imagePullSecrets .Values.global.imagePullSecrets }} +imagePullSecrets: + {{- include "prometheus-node-exporter.imagePullSecrets" (dict "Values" .Values "imagePullSecrets" .Values.serviceAccount.imagePullSecrets) | indent 2 }} +{{- end }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/servicemonitor.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/servicemonitor.yaml new file mode 100644 index 0000000000..0d7a42eaec --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/servicemonitor.yaml @@ -0,0 +1,61 @@ +{{- if .Values.prometheus.monitor.enabled }} +apiVersion: {{ .Values.prometheus.monitor.apiVersion | default "monitoring.coreos.com/v1" }} +kind: ServiceMonitor +metadata: + name: {{ include "prometheus-node-exporter.fullname" . }} + namespace: {{ include "prometheus-node-exporter.monitor-namespace" . }} + labels: + {{- include "prometheus-node-exporter.labels" . | nindent 4 }} + {{- with .Values.prometheus.monitor.additionalLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + jobLabel: {{ default "app.kubernetes.io/name" .Values.prometheus.monitor.jobLabel }} + {{- include "servicemonitor.scrapeLimits" .Values.prometheus.monitor | nindent 2 }} + {{- with .Values.prometheus.monitor.podTargetLabels }} + podTargetLabels: + {{- toYaml . | nindent 4 }} + {{- end }} + selector: + matchLabels: + {{- with .Values.prometheus.monitor.selectorOverride }} + {{- toYaml . | nindent 6 }} + {{- else }} + {{- include "prometheus-node-exporter.selectorLabels" . | nindent 6 }} + {{- end }} + {{- with .Values.prometheus.monitor.attachMetadata }} + attachMetadata: + {{- toYaml . 
| nindent 4 }} + {{- end }} + endpoints: + - port: {{ .Values.service.portName }} + scheme: {{ .Values.prometheus.monitor.scheme }} + {{- with .Values.prometheus.monitor.basicAuth }} + basicAuth: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.prometheus.monitor.bearerTokenFile }} + bearerTokenFile: {{ . }} + {{- end }} + {{- with .Values.prometheus.monitor.tlsConfig }} + tlsConfig: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.prometheus.monitor.proxyUrl }} + proxyUrl: {{ . }} + {{- end }} + {{- with .Values.prometheus.monitor.interval }} + interval: {{ . }} + {{- end }} + {{- with .Values.prometheus.monitor.scrapeTimeout }} + scrapeTimeout: {{ . }} + {{- end }} + {{- with .Values.prometheus.monitor.relabelings }} + relabelings: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.prometheus.monitor.metricRelabelings }} + metricRelabelings: + {{- toYaml . | nindent 8 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/verticalpodautoscaler.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/verticalpodautoscaler.yaml new file mode 100644 index 0000000000..2c2705f872 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/templates/verticalpodautoscaler.yaml @@ -0,0 +1,40 @@ +{{- if and (.Capabilities.APIVersions.Has "autoscaling.k8s.io/v1") (.Values.verticalPodAutoscaler.enabled) }} +apiVersion: autoscaling.k8s.io/v1 +kind: VerticalPodAutoscaler +metadata: + name: {{ include "prometheus-node-exporter.fullname" . }} + namespace: {{ include "prometheus-node-exporter.namespace" . }} + labels: + {{- include "prometheus-node-exporter.labels" . | nindent 4 }} +spec: + {{- with .Values.verticalPodAutoscaler.recommenders }} + recommenders: + {{- toYaml . 
| nindent 4 }} + {{- end }} + resourcePolicy: + containerPolicies: + - containerName: node-exporter + {{- with .Values.verticalPodAutoscaler.controlledResources }} + controlledResources: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.verticalPodAutoscaler.controlledValues }} + controlledValues: {{ . }} + {{- end }} + {{- with .Values.verticalPodAutoscaler.maxAllowed }} + maxAllowed: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.verticalPodAutoscaler.minAllowed }} + minAllowed: + {{- toYaml . | nindent 8 }} + {{- end }} + targetRef: + apiVersion: apps/v1 + kind: DaemonSet + name: {{ include "prometheus-node-exporter.fullname" . }} + {{- with .Values.verticalPodAutoscaler.updatePolicy }} + updatePolicy: + {{- toYaml . | nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/values.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/values.yaml new file mode 100644 index 0000000000..1323567359 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-node-exporter/values.yaml @@ -0,0 +1,533 @@ +# Default values for prometheus-node-exporter. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. +image: + registry: quay.io + repository: prometheus/node-exporter + # Overrides the image tag whose default is {{ printf "v%s" .Chart.AppVersion }} + tag: "" + pullPolicy: IfNotPresent + digest: "" + +imagePullSecrets: [] +# - name: "image-pull-secret" +nameOverride: "" +fullnameOverride: "" + +# Number of old history to retain to allow rollback +# Default Kubernetes value is set to 10 +revisionHistoryLimit: 10 + +global: + # To help compatibility with other charts which use global.imagePullSecrets. + # Allow either an array of {name: pullSecret} maps (k8s-style), or an array of strings (more common helm-style). 
+ # global: + # imagePullSecrets: + # - name: pullSecret1 + # - name: pullSecret2 + # or + # global: + # imagePullSecrets: + # - pullSecret1 + # - pullSecret2 + imagePullSecrets: [] + # + # Allow parent charts to override registry hostname + imageRegistry: "" + +# Configure kube-rbac-proxy. When enabled, creates a kube-rbac-proxy to protect the node-exporter http endpoint. +# The requests are served through the same service but requests are HTTPS. +kubeRBACProxy: + enabled: false + ## Set environment variables as name/value pairs + env: {} + # VARIABLE: value + image: + registry: quay.io + repository: brancz/kube-rbac-proxy + tag: v0.18.0 + sha: "" + pullPolicy: IfNotPresent + + # List of additional cli arguments to configure kube-rbac-proxy + # for example: --tls-cipher-suites, --log-file, etc. + # all the possible args can be found here: https://github.com/brancz/kube-rbac-proxy#usage + extraArgs: [] + + ## Specify security settings for a Container + ## Allows overrides and additional options compared to (Pod) securityContext + ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container + containerSecurityContext: {} + + # Specify the port used for the Node exporter container (upstream port) + port: 8100 + # Specify the name of the container port + portName: http + # Configure a hostPort. If true, hostPort will be enabled in the container and set to service.port. + enableHostPort: false + + # Configure Proxy Endpoints Port + # This is the port being probed for readiness + proxyEndpointsPort: 8888 + # Configure a hostPort. If true, hostPort will be enabled in the container and set to proxyEndpointsPort. + enableProxyEndpointsHostPort: false + + resources: {} + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. 
If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 64Mi + # requests: + # cpu: 10m + # memory: 32Mi + +service: + enabled: true + type: ClusterIP + clusterIP: "" + port: 9100 + targetPort: 9100 + nodePort: + portName: metrics + listenOnAllInterfaces: true + annotations: + prometheus.io/scrape: "true" + ipDualStack: + enabled: false + ipFamilies: ["IPv6", "IPv4"] + ipFamilyPolicy: "PreferDualStack" + externalTrafficPolicy: "" + +# Set a NetworkPolicy with: +# ingress only on service.port +# no egress permitted +networkPolicy: + enabled: false + +# Additional environment variables that will be passed to the daemonset +env: {} +## env: +## VARIABLE: value + +prometheus: + monitor: + enabled: false + additionalLabels: {} + namespace: "" + + jobLabel: "" + + # List of pod labels to add to node exporter metrics + # https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#servicemonitor + podTargetLabels: [] + + scheme: http + basicAuth: {} + bearerTokenFile: + tlsConfig: {} + + ## proxyUrl: URL of a proxy that should be used for scraping. + ## + proxyUrl: "" + + ## Override serviceMonitor selector + ## + selectorOverride: {} + + ## Attach node metadata to discovered targets. Requires Prometheus v2.35.0 and above. + ## + attachMetadata: + node: false + + relabelings: [] + metricRelabelings: [] + interval: "" + scrapeTimeout: 10s + ## prometheus.monitor.apiVersion ApiVersion for the serviceMonitor Resource(defaults to "monitoring.coreos.com/v1") + apiVersion: "" + + ## SampleLimit defines per-scrape limit on number of scraped samples that will be accepted. + ## + sampleLimit: 0 + + ## TargetLimit defines a limit on the number of scraped targets that will be accepted. + ## + targetLimit: 0 + + ## Per-scrape limit on number of labels that will be accepted for a sample. 
Only valid in Prometheus versions 2.27.0 and newer. + ## + labelLimit: 0 + + ## Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. + ## + labelNameLengthLimit: 0 + + ## Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. + ## + labelValueLengthLimit: 0 + + # PodMonitor defines monitoring for a set of pods. + # ref. https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.PodMonitor + # Using a PodMonitor may be preferred in some environments where there is very large number + # of Node Exporter endpoints (1000+) behind a single service. + # The PodMonitor is disabled by default. When switching from ServiceMonitor to PodMonitor, + # the time series resulting from the configuration through PodMonitor may have different labels. + # For instance, there will not be the service label any longer which might + # affect PromQL queries selecting that label. + podMonitor: + enabled: false + # Namespace in which to deploy the pod monitor. Defaults to the release namespace. + namespace: "" + # Additional labels, e.g. setting a label for pod monitor selector as set in prometheus + additionalLabels: {} + # release: kube-prometheus-stack + # PodTargetLabels transfers labels of the Kubernetes Pod onto the target. + podTargetLabels: [] + # apiVersion defaults to monitoring.coreos.com/v1. + apiVersion: "" + # Override pod selector to select pod objects. + selectorOverride: {} + # Attach node metadata to discovered targets. Requires Prometheus v2.35.0 and above. + attachMetadata: + node: false + # The label to use to retrieve the job name from. Defaults to label app.kubernetes.io/name. + jobLabel: "" + + # Scheme/protocol to use for scraping. + scheme: "http" + # Path to scrape metrics at. 
+ path: "/metrics" + + # BasicAuth allow an endpoint to authenticate over basic authentication. + # More info: https://prometheus.io/docs/operating/configuration/#endpoint + basicAuth: {} + # Secret to mount to read bearer token for scraping targets. + # The secret needs to be in the same namespace as the pod monitor and accessible by the Prometheus Operator. + # https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#secretkeyselector-v1-core + bearerTokenSecret: {} + # TLS configuration to use when scraping the endpoint. + tlsConfig: {} + # Authorization section for this endpoint. + # https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.SafeAuthorization + authorization: {} + # OAuth2 for the URL. Only valid in Prometheus versions 2.27.0 and newer. + # https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#monitoring.coreos.com/v1.OAuth2 + oauth2: {} + + # ProxyURL eg http://proxyserver:2195. Directs scrapes through proxy to this endpoint. + proxyUrl: "" + # Interval at which endpoints should be scraped. If not specified Prometheus’ global scrape interval is used. + interval: "" + # Timeout after which the scrape is ended. If not specified, the Prometheus global scrape interval is used. + scrapeTimeout: "" + # HonorTimestamps controls whether Prometheus respects the timestamps present in scraped data. + honorTimestamps: true + # HonorLabels chooses the metric’s labels on collisions with target labels. + honorLabels: true + # Whether to enable HTTP2. Default false. + enableHttp2: "" + # Drop pods that are not running. (Failed, Succeeded). + # Enabled by default. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase + filterRunning: "" + # FollowRedirects configures whether scrape requests follow HTTP 3xx redirects. Default false. 
+ followRedirects: "" + # Optional HTTP URL parameters + params: {} + + # RelabelConfigs to apply to samples before scraping. Prometheus Operator automatically adds + # relabelings for a few standard Kubernetes fields. The original scrape job’s name + # is available via the __tmp_prometheus_job_name label. + # More info: https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config + relabelings: [] + # MetricRelabelConfigs to apply to samples before ingestion. + metricRelabelings: [] + + # SampleLimit defines per-scrape limit on number of scraped samples that will be accepted. + sampleLimit: 0 + # TargetLimit defines a limit on the number of scraped targets that will be accepted. + targetLimit: 0 + # Per-scrape limit on number of labels that will be accepted for a sample. + # Only valid in Prometheus versions 2.27.0 and newer. + labelLimit: 0 + # Per-scrape limit on length of labels name that will be accepted for a sample. + # Only valid in Prometheus versions 2.27.0 and newer. + labelNameLengthLimit: 0 + # Per-scrape limit on length of labels value that will be accepted for a sample. + # Only valid in Prometheus versions 2.27.0 and newer. + labelValueLengthLimit: 0 + +## Customize the updateStrategy if set +updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + +resources: {} + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
+ # limits: + # cpu: 200m + # memory: 50Mi + # requests: + # cpu: 100m + # memory: 30Mi + +# Specify the container restart policy passed to the Node Export container +# Possible Values: Always (default)|OnFailure|Never +restartPolicy: null + +serviceAccount: + # Specifies whether a ServiceAccount should be created + create: true + # The name of the ServiceAccount to use. + # If not set and create is true, a name is generated using the fullname template + name: + annotations: {} + imagePullSecrets: [] + automountServiceAccountToken: false + +securityContext: + fsGroup: 65534 + runAsGroup: 65534 + runAsNonRoot: true + runAsUser: 65534 + +containerSecurityContext: + readOnlyRootFilesystem: true + # capabilities: + # add: + # - SYS_TIME + +rbac: + ## If true, create & use RBAC resources + ## + create: true + ## If true, create & use Pod Security Policy resources + ## https://kubernetes.io/docs/concepts/policy/pod-security-policy/ + pspEnabled: true + pspAnnotations: {} + +# for deployments that have node_exporter deployed outside of the cluster, list +# their addresses here +endpoints: [] + +# Expose the service to the host network +hostNetwork: true + +# Share the host process ID namespace +hostPID: true + +# Mount the node's root file system (/) at /host/root in the container +hostRootFsMount: + enabled: true + # Defines how new mounts in existing mounts on the node or in the container + # are propagated to the container or node, respectively. Possible values are + # None, HostToContainer, and Bidirectional. If this field is omitted, then + # None is used. 
More information on: + # https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation + mountPropagation: HostToContainer + +# Mount the node's proc file system (/proc) at /host/proc in the container +hostProcFsMount: + # Possible values are None, HostToContainer, and Bidirectional + mountPropagation: "" + +# Mount the node's sys file system (/sys) at /host/sys in the container +hostSysFsMount: + # Possible values are None, HostToContainer, and Bidirectional + mountPropagation: "" + +## Assign a group of affinity scheduling rules +## +affinity: {} +# nodeAffinity: +# requiredDuringSchedulingIgnoredDuringExecution: +# nodeSelectorTerms: +# - matchFields: +# - key: metadata.name +# operator: In +# values: +# - target-host-name + +# Annotations to be added to node exporter pods +podAnnotations: + # Fix for very slow GKE cluster upgrades + cluster-autoscaler.kubernetes.io/safe-to-evict: "true" + +# Extra labels to be added to node exporter pods +podLabels: {} + +# Annotations to be added to node exporter daemonset +daemonsetAnnotations: {} + +## set to true to add the release label so scraping of the servicemonitor with kube-prometheus-stack works out of the box +releaseLabel: false + +# Custom DNS configuration to be added to prometheus-node-exporter pods +dnsConfig: {} +# nameservers: +# - 1.2.3.4 +# searches: +# - ns1.svc.cluster-domain.example +# - my.dns.search.suffix +# options: +# - name: ndots +# value: "2" +# - name: edns0 + +## Assign a nodeSelector if operating a hybrid cluster +## +nodeSelector: + kubernetes.io/os: linux + # kubernetes.io/arch: amd64 + +# Specify grace period for graceful termination of pods. 
Defaults to 30 if null or not specified +terminationGracePeriodSeconds: null + +tolerations: + - effect: NoSchedule + operator: Exists + +# Enable or disable container termination message settings +# https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/ +terminationMessageParams: + enabled: false + # If enabled, specify the path for termination messages + terminationMessagePath: /dev/termination-log + # If enabled, specify the policy for termination messages + terminationMessagePolicy: File + + +## Assign a PriorityClassName to pods if set +# priorityClassName: "" + +## Additional container arguments +## +extraArgs: [] +# - --collector.diskstats.ignored-devices=^(ram|loop|fd|(h|s|v)d[a-z]|nvme\\d+n\\d+p)\\d+$ +# - --collector.textfile.directory=/run/prometheus + +## Additional mounts from the host to node-exporter container +## +extraHostVolumeMounts: [] +# - name: +# hostPath: +# https://kubernetes.io/docs/concepts/storage/volumes/#hostpath-volume-types +# type: "" (Default)|DirectoryOrCreate|Directory|FileOrCreate|File|Socket|CharDevice|BlockDevice +# mountPath: +# readOnly: true|false +# mountPropagation: None|HostToContainer|Bidirectional + +## Additional configmaps to be mounted. 
+## +configmaps: [] +# - name: +# mountPath: +secrets: [] +# - name: +# mountPath: +## Override the deployment namespace +## +namespaceOverride: "" + +## Additional containers for export metrics to text file; fields image,imagePullPolicy,securityContext take default value from main container +## +sidecars: [] +# - name: nvidia-dcgm-exporter +# image: nvidia/dcgm-exporter:1.4.3 +# volumeMounts: +# - name: tmp +# mountPath: /tmp + +## Volume for sidecar containers +## +sidecarVolumeMount: [] +# - name: collector-textfiles +# mountPath: /run/prometheus +# readOnly: false + +## Additional mounts from the host to sidecar containers +## +sidecarHostVolumeMounts: [] +# - name: +# hostPath: +# mountPath: +# readOnly: true|false +# mountPropagation: None|HostToContainer|Bidirectional + +## Additional InitContainers to initialize the pod +## +extraInitContainers: [] + +## Liveness probe +## +livenessProbe: + failureThreshold: 3 + httpGet: + httpHeaders: [] + scheme: http + initialDelaySeconds: 0 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + +## Readiness probe +## +readinessProbe: + failureThreshold: 3 + httpGet: + httpHeaders: [] + scheme: http + initialDelaySeconds: 0 + periodSeconds: 10 + successThreshold: 1 + timeoutSeconds: 1 + +# Enable vertical pod autoscaler support for prometheus-node-exporter +verticalPodAutoscaler: + enabled: false + + # Recommender responsible for generating recommendation for the object. + # List should be empty (then the default recommender will generate the recommendation) + # or contain exactly one recommender. + # recommenders: + # - name: custom-recommender-performance + + # List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory + controlledResources: [] + # Specifies which resource values should be controlled: RequestsOnly or RequestsAndLimits. 
+ # controlledValues: RequestsAndLimits + + # Define the max allowed resources for the pod + maxAllowed: {} + # cpu: 200m + # memory: 100Mi + # Define the min allowed resources for the pod + minAllowed: {} + # cpu: 200m + # memory: 100Mi + + # updatePolicy: + # Specifies minimal number of replicas which need to be alive for VPA Updater to attempt pod eviction + # minReplicas: 1 + # Specifies whether recommended updates are applied when a Pod is started and whether recommended updates + # are applied during the life of a Pod. Possible values are "Off", "Initial", "Recreate", and "Auto". + # updateMode: Auto + +# Extra manifests to deploy as an array +extraManifests: [] + # - | + # apiVersion: v1 + # kind: ConfigMap + # metadata: + # name: prometheus-extra + # data: + # extra-data: "value" + +# Override version of app, required if image.tag is defined and does not follow semver +version: "" diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/.helmignore b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/.helmignore new file mode 100644 index 0000000000..e90c9f6d23 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/.helmignore @@ -0,0 +1,24 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj + +# OWNERS file for Kubernetes +OWNERS \ No newline at end of file diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/Chart.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/Chart.yaml new file mode 100644 index 0000000000..3d4e84621a --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/Chart.yaml @@ -0,0 +1,24 @@ +annotations: + artifacthub.io/license: Apache-2.0 + artifacthub.io/links: | + - name: Chart Source + url: https://github.com/prometheus-community/helm-charts +apiVersion: v2 +appVersion: v1.9.0 +description: A Helm chart for prometheus pushgateway +home: https://github.com/prometheus/pushgateway +keywords: +- pushgateway +- prometheus +maintainers: +- email: gianrubio@gmail.com + name: gianrubio +- email: christian.staude@staffbase.com + name: cstaud +- email: rootsandtrees@posteo.de + name: zeritti +name: prometheus-pushgateway +sources: +- https://github.com/prometheus/pushgateway +type: application +version: 2.14.0 diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/README.md b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/README.md new file mode 100644 index 0000000000..cc6645fdf9 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/README.md @@ -0,0 +1,88 @@ +# Prometheus Pushgateway + +This chart bootstraps a Prometheus [Pushgateway](http://github.com/prometheus/pushgateway) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager. + +An optional prometheus `ServiceMonitor` can be enabled, should you wish to use this gateway with [Prometheus Operator](https://github.com/coreos/prometheus-operator). 
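
For example, a minimal values override enabling the `ServiceMonitor` might look like the following. The key names (`serviceMonitor.enabled`, `serviceMonitor.namespace`, `serviceMonitor.interval`, `serviceMonitor.additionalLabels`) are the ones this chart's `servicemonitor.yaml` template reads; the concrete values (the `monitoring` namespace, the `30s` interval, and the `release` label) are illustrative placeholders to adjust for your cluster:

```yaml
# Illustrative values override enabling the optional ServiceMonitor.
# Key names come from this chart's servicemonitor.yaml template;
# the concrete values below are placeholders, not defaults.
serviceMonitor:
  enabled: true
  # Namespace to deploy the ServiceMonitor into
  # (falls back to the release namespace when unset)
  namespace: monitoring
  interval: 30s
  additionalLabels:
    release: kube-prometheus-stack
```

Assuming this is saved as `values-servicemonitor.yaml`, it could be applied with `helm install [RELEASE_NAME] prometheus-community/prometheus-pushgateway -f values-servicemonitor.yaml`.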
+ +## Get Repository Info + +```console +helm repo add prometheus-community https://prometheus-community.github.io/helm-charts +helm repo update +``` + +_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._ + +## Install Chart + +```console +helm install [RELEASE_NAME] prometheus-community/prometheus-pushgateway +``` + +_See [configuration](#configuration) below._ + +_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._ + +## Uninstall Chart + +```console +helm uninstall [RELEASE_NAME] +``` + +This removes all the Kubernetes components associated with the chart and deletes the release. + +_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._ + +## Upgrading Chart + +```console +helm upgrade [RELEASE_NAME] prometheus-community/prometheus-pushgateway --install +``` + +_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._ + +### To 2.0.0 + +Chart API version has been upgraded to v2 so Helm 3 is needed from now on. + +Docker image tag is used from Chart.yaml appVersion field by default now. + +Version 2.0.0 also adapted [Helm label and annotation best practices](https://helm.sh/docs/chart_best_practices/labels/). Specifically, labels mapping is listed below: + +```console +OLD => NEW +---------------------------------------- +heritage => app.kubernetes.io/managed-by +chart => helm.sh/chart +[container version] => app.kubernetes.io/version +app => app.kubernetes.io/name +release => app.kubernetes.io/instance +``` + +Therefore, depending on the way you've configured the chart, the previous StatefulSet or Deployment need to be deleted before upgrade. 
+ +If `runAsStatefulSet: false` (this is the default): + +```console +kubectl delete deploy -l app=prometheus-pushgateway +``` + +If `runAsStatefulSet: true`: + +```console +kubectl delete sts -l app=prometheus-pushgateway +``` + +After that do the actual upgrade: + +```console +helm upgrade -i prometheus-pushgateway prometheus-community/prometheus-pushgateway +``` + +## Configuration + +See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). To see all configurable options with detailed comments, visit the chart's [values.yaml](./values.yaml), or run these configuration commands: + +```console +helm show values prometheus-community/prometheus-pushgateway +``` diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/NOTES.txt b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/NOTES.txt new file mode 100644 index 0000000000..263b1d8d49 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/NOTES.txt @@ -0,0 +1,19 @@ +1. Get the application URL by running these commands: +{{- if .Values.ingress.enabled }} +{{- range .Values.ingress.hosts }} + http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }} +{{- end }} +{{- else if contains "NodePort" .Values.service.type }} + export NODE_PORT=$(kubectl get --namespace {{ template "prometheus-pushgateway.namespace" . }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "prometheus-pushgateway.fullname" . }}) + export NODE_IP=$(kubectl get nodes --namespace {{ template "prometheus-pushgateway.namespace" . }} -o jsonpath="{.items[0].status.addresses[0].address}") + echo http://$NODE_IP:$NODE_PORT +{{- else if contains "LoadBalancer" .Values.service.type }} + NOTE: It may take a few minutes for the LoadBalancer IP to be available. 
+ You can watch the status by running 'kubectl get svc -w {{ template "prometheus-pushgateway.fullname" . }}' + export SERVICE_IP=$(kubectl get svc --namespace {{ template "prometheus-pushgateway.namespace" . }} {{ template "prometheus-pushgateway.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + echo http://$SERVICE_IP:{{ .Values.service.port }} +{{- else if contains "ClusterIP" .Values.service.type }} + export POD_NAME=$(kubectl get pods --namespace {{ template "prometheus-pushgateway.namespace" . }} -l "app.kubernetes.io/name={{ template "prometheus-pushgateway.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") + kubectl port-forward $POD_NAME 9091 + echo "Visit http://127.0.0.1:9091 to use your application" +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/_helpers.tpl b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/_helpers.tpl new file mode 100644 index 0000000000..dcd42ff36b --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/_helpers.tpl @@ -0,0 +1,304 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "prometheus-pushgateway.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Namespace to set on the resources +*/}} +{{- define "prometheus-pushgateway.namespace" -}} + {{- if .Values.namespaceOverride -}} + {{- .Values.namespaceOverride -}} + {{- else -}} + {{- .Release.Namespace -}} + {{- end -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. 
+*/}} +{{- define "prometheus-pushgateway.fullname" -}} +{{- if .Values.fullnameOverride }} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- $name := default .Chart.Name .Values.nameOverride }} +{{- if contains $name .Release.Name }} +{{- .Release.Name | trunc 63 | trimSuffix "-" }} +{{- else }} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "prometheus-pushgateway.chart" -}} +{{ printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create the name of the service account to use +*/}} +{{- define "prometheus-pushgateway.serviceAccountName" -}} +{{- if .Values.serviceAccount.create }} +{{- default (include "prometheus-pushgateway.fullname" .) .Values.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.serviceAccount.name }} +{{- end }} +{{- end }} + +{{/* +Create default labels +*/}} +{{- define "prometheus-pushgateway.defaultLabels" -}} +helm.sh/chart: {{ include "prometheus-pushgateway.chart" . }} +{{ include "prometheus-pushgateway.selectorLabels" . }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- with .Values.podLabels }} +{{ toYaml . }} +{{- end }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "prometheus-pushgateway.selectorLabels" -}} +app.kubernetes.io/name: {{ include "prometheus-pushgateway.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Return the appropriate apiVersion for networkpolicy. 
+*/}} +{{- define "prometheus-pushgateway.networkPolicy.apiVersion" -}} +{{- if semverCompare ">=1.4-0, <1.7-0" .Capabilities.KubeVersion.GitVersion }} +{{- print "extensions/v1beta1" }} +{{- else if semverCompare "^1.7-0" .Capabilities.KubeVersion.GitVersion }} +{{- print "networking.k8s.io/v1" }} +{{- end }} +{{- end }} + +{{/* +Define PDB apiVersion +*/}} +{{- define "prometheus-pushgateway.pdb.apiVersion" -}} +{{- if $.Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" }} +{{- print "policy/v1" }} +{{- else }} +{{- print "policy/v1beta1" }} +{{- end }} +{{- end }} + +{{/* +Define Ingress apiVersion +*/}} +{{- define "prometheus-pushgateway.ingress.apiVersion" -}} +{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion }} +{{- print "networking.k8s.io/v1" }} +{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion }} +{{- print "networking.k8s.io/v1beta1" }} +{{- else }} +{{- print "extensions/v1beta1" }} +{{- end }} +{{- end }} + +{{/* +Define webConfiguration +*/}} +{{- define "prometheus-pushgateway.webConfiguration" -}} +basic_auth_users: +{{- range $k, $v := .Values.webConfiguration.basicAuthUsers }} + {{ $k }}: {{ htpasswd "" $v | trimPrefix ":"}} +{{- end }} +{{- end }} + +{{/* +Define Authorization +*/}} +{{- define "prometheus-pushgateway.Authorization" -}} +{{- $users := keys .Values.webConfiguration.basicAuthUsers }} +{{- $user := first $users }} +{{- $password := index .Values.webConfiguration.basicAuthUsers $user }} +{{- $user }}:{{ $password }} +{{- end }} + +{{/* +Returns pod spec +*/}} +{{- define "prometheus-pushgateway.podSpec" -}} +serviceAccountName: {{ include "prometheus-pushgateway.serviceAccountName" . }} +automountServiceAccountToken: {{ .Values.automountServiceAccountToken }} +{{- with .Values.priorityClassName }} +priorityClassName: {{ . | quote }} +{{- end }} +{{- with .Values.hostAliases }} +hostAliases: +{{- toYaml . 
| nindent 2 }} +{{- end }} +{{- with .Values.imagePullSecrets }} +imagePullSecrets: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- with .Values.extraInitContainers }} +initContainers: + {{- toYaml . | nindent 2 }} +{{- end }} +containers: + {{- with .Values.extraContainers }} + {{- toYaml . | nindent 2 }} + {{- end }} + - name: pushgateway + image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" + imagePullPolicy: {{ .Values.image.pullPolicy }} + {{- with .Values.extraVars }} + env: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- if or .Values.extraArgs .Values.webConfiguration }} + args: + {{- with .Values.extraArgs }} + {{- toYaml . | nindent 6 }} + {{- end }} + {{- if .Values.webConfiguration }} + - --web.config.file=/etc/config/web-config.yaml + {{- end }} + {{- end }} + ports: + - name: metrics + containerPort: 9091 + protocol: TCP + {{- if .Values.liveness.enabled }} + {{- $livenessCommon := omit .Values.liveness.probe "httpGet" }} + livenessProbe: + {{- with .Values.liveness.probe }} + httpGet: + path: {{ .httpGet.path }} + port: {{ .httpGet.port }} + {{- if or .httpGet.httpHeaders $.Values.webConfiguration.basicAuthUsers }} + httpHeaders: + {{- if $.Values.webConfiguration.basicAuthUsers }} + - name: Authorization + value: Basic {{ include "prometheus-pushgateway.Authorization" $ | b64enc }} + {{- end }} + {{- with .httpGet.httpHeaders }} + {{- toYaml . 
| nindent 10 }} + {{- end }} + {{- end }} + {{- toYaml $livenessCommon | nindent 6 }} + {{- end }} + {{- end }} + {{- if .Values.readiness.enabled }} + {{- $readinessCommon := omit .Values.readiness.probe "httpGet" }} + readinessProbe: + {{- with .Values.readiness.probe }} + httpGet: + path: {{ .httpGet.path }} + port: {{ .httpGet.port }} + {{- if or .httpGet.httpHeaders $.Values.webConfiguration.basicAuthUsers }} + httpHeaders: + {{- if $.Values.webConfiguration.basicAuthUsers }} + - name: Authorization + value: Basic {{ include "prometheus-pushgateway.Authorization" $ | b64enc }} + {{- end }} + {{- with .httpGet.httpHeaders }} + {{- toYaml . | nindent 10 }} + {{- end }} + {{- end }} + {{- toYaml $readinessCommon | nindent 6 }} + {{- end }} + {{- end }} + {{- with .Values.resources }} + resources: + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with .Values.containerSecurityContext }} + securityContext: + {{- toYaml . | nindent 6 }} + {{- end }} + volumeMounts: + - name: storage-volume + mountPath: "{{ .Values.persistentVolume.mountPath }}" + subPath: "{{ .Values.persistentVolume.subPath }}" + {{- if .Values.webConfiguration }} + - name: web-config + mountPath: "/etc/config" + {{- end }} + {{- with .Values.extraVolumeMounts }} + {{- toYaml . | nindent 6 }} + {{- end }} +{{- with .Values.nodeSelector }} +nodeSelector: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- with .Values.tolerations }} +tolerations: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- if or .Values.podAntiAffinity .Values.affinity }} +affinity: +{{- end }} + {{- with .Values.affinity }} + {{- toYaml . | nindent 2 }} + {{- end }} + {{- if eq .Values.podAntiAffinity "hard" }} + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - topologyKey: {{ .Values.podAntiAffinityTopologyKey }} + labelSelector: + matchExpressions: + - {key: app.kubernetes.io/name, operator: In, values: [{{ include "prometheus-pushgateway.name" . 
}}]} + {{- else if eq .Values.podAntiAffinity "soft" }} + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + topologyKey: {{ .Values.podAntiAffinityTopologyKey }} + labelSelector: + matchExpressions: + - {key: app.kubernetes.io/name, operator: In, values: [{{ include "prometheus-pushgateway.name" . }}]} + {{- end }} +{{- with .Values.topologySpreadConstraints }} +topologySpreadConstraints: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- with .Values.securityContext }} +securityContext: + {{- toYaml . | nindent 2 }} +{{- end }} +volumes: + {{- $storageVolumeAsPVCTemplate := and .Values.runAsStatefulSet .Values.persistentVolume.enabled -}} + {{- if not $storageVolumeAsPVCTemplate }} + - name: storage-volume + {{- if .Values.persistentVolume.enabled }} + persistentVolumeClaim: + claimName: {{ if .Values.persistentVolume.existingClaim }}{{ .Values.persistentVolume.existingClaim }}{{- else }}{{ include "prometheus-pushgateway.fullname" . }}{{- end }} + {{- else }} + emptyDir: {} + {{- end }} + {{- if .Values.webConfiguration }} + - name: web-config + secret: + secretName: {{ include "prometheus-pushgateway.fullname" . }} + {{- end }} + {{- end }} + {{- if .Values.extraVolumes }} + {{- toYaml .Values.extraVolumes | nindent 2 }} + {{- else if $storageVolumeAsPVCTemplate }} + {{- if .Values.webConfiguration }} + - name: web-config + secret: + secretName: {{ include "prometheus-pushgateway.fullname" . 
}} + {{- else }} + [] + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/deployment.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/deployment.yaml new file mode 100644 index 0000000000..5d3fafde7d --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/deployment.yaml @@ -0,0 +1,29 @@ +{{- if not .Values.runAsStatefulSet }} +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + {{- include "prometheus-pushgateway.defaultLabels" . | nindent 4 }} + name: {{ include "prometheus-pushgateway.fullname" . }} + namespace: {{ template "prometheus-pushgateway.namespace" . }} +spec: + replicas: {{ .Values.replicaCount }} + {{- with .Values.strategy }} + strategy: + {{- toYaml . | nindent 4 }} + {{- end }} + selector: + matchLabels: + {{- include "prometheus-pushgateway.selectorLabels" . | nindent 6 }} + template: + metadata: + {{- with .Values.podAnnotations }} + annotations: + {{- toYaml . | nindent 8 }} + {{- end }} + labels: + {{- include "prometheus-pushgateway.defaultLabels" . | nindent 8 }} + {{- include "k10.azMarketPlace.billingIdentifier" . | nindent 8 }} + spec: + {{- include "prometheus-pushgateway.podSpec" . | nindent 6 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/extra-manifests.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/extra-manifests.yaml new file mode 100644 index 0000000000..bafee95185 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/extra-manifests.yaml @@ -0,0 +1,8 @@ +{{- range .Values.extraManifests }} +--- + {{- if typeIs "string" . }} + {{- tpl . $ }} + {{- else }} + {{- tpl (. 
| toYaml | nindent 0) $ }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/ingress.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/ingress.yaml new file mode 100644 index 0000000000..237ac4a121 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/ingress.yaml @@ -0,0 +1,50 @@ +{{- if .Values.ingress.enabled }} +{{- $serviceName := include "prometheus-pushgateway.fullname" . }} +{{- $servicePort := .Values.service.port }} +{{- $ingressPath := .Values.ingress.path }} +{{- $ingressClassName := .Values.ingress.className }} +{{- $ingressPathType := .Values.ingress.pathType }} +{{- $extraPaths := .Values.ingress.extraPaths }} +apiVersion: {{ include "prometheus-pushgateway.ingress.apiVersion" . }} +kind: Ingress +metadata: + {{- with .Values.ingress.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + labels: + {{- include "prometheus-pushgateway.defaultLabels" . | nindent 4 }} + name: {{ include "prometheus-pushgateway.fullname" . }} + namespace: {{ template "prometheus-pushgateway.namespace" . }} +spec: + {{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion }} + ingressClassName: {{ $ingressClassName }} + {{- end }} + rules: + {{- range $host := .Values.ingress.hosts }} + - host: {{ $host }} + http: + paths: + {{- with $extraPaths }} + {{- toYaml . | nindent 10 }} + {{- end }} + - path: {{ $ingressPath }} + {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }} + pathType: {{ $ingressPathType }} + {{- end }} + backend: + {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }} + service: + name: {{ $serviceName }} + port: + number: {{ $servicePort }} + {{- else }} + serviceName: {{ $serviceName }} + servicePort: {{ $servicePort }} + {{- end }} + {{- end -}} + {{- with .Values.ingress.tls }} + tls: + {{- toYaml . 
| nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/networkpolicy.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/networkpolicy.yaml new file mode 100644 index 0000000000..d3b8019e3f --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/networkpolicy.yaml @@ -0,0 +1,26 @@ +{{- if .Values.networkPolicy }} +apiVersion: {{ include "prometheus-pushgateway.networkPolicy.apiVersion" . }} +kind: NetworkPolicy +metadata: + labels: + {{- include "prometheus-pushgateway.defaultLabels" . | nindent 4 }} + {{- if .Values.networkPolicy.customSelectors }} + name: ingress-allow-customselector-{{ template "prometheus-pushgateway.name" . }} + {{- else if .Values.networkPolicy.allowAll }} + name: ingress-allow-all-{{ template "prometheus-pushgateway.name" . }} + {{- else -}} + {{- fail "One of `allowAll` or `customSelectors` must be specified." }} + {{- end }} + namespace: {{ template "prometheus-pushgateway.namespace" . }} +spec: + podSelector: + matchLabels: + {{- include "prometheus-pushgateway.selectorLabels" . | nindent 6 }} + ingress: + - ports: + - port: {{ .Values.service.targetPort }} + {{- with .Values.networkPolicy.customSelectors }} + from: + {{- toYaml . | nindent 8 }} + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/pdb.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/pdb.yaml new file mode 100644 index 0000000000..6051133c68 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/pdb.yaml @@ -0,0 +1,14 @@ +{{- if .Values.podDisruptionBudget }} +apiVersion: {{ include "prometheus-pushgateway.pdb.apiVersion" . }} +kind: PodDisruptionBudget +metadata: + labels: + {{- include "prometheus-pushgateway.defaultLabels" . 
| nindent 4 }} + name: {{ include "prometheus-pushgateway.fullname" . }} + namespace: {{ template "prometheus-pushgateway.namespace" . }} +spec: + selector: + matchLabels: + {{- include "prometheus-pushgateway.selectorLabels" . | nindent 6 }} + {{- toYaml .Values.podDisruptionBudget | nindent 2 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/pushgateway-pvc.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/pushgateway-pvc.yaml new file mode 100644 index 0000000000..d2a85f4242 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/pushgateway-pvc.yaml @@ -0,0 +1,29 @@ +{{- if and (not .Values.runAsStatefulSet) .Values.persistentVolume.enabled (not .Values.persistentVolume.existingClaim) }} +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + {{- with .Values.persistentVolume.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + labels: + {{- include "prometheus-pushgateway.defaultLabels" . | nindent 4 }} + {{- with .Values.persistentVolumeLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + name: {{ include "prometheus-pushgateway.fullname" . }} + namespace: {{ template "prometheus-pushgateway.namespace" . 
}} +spec: + accessModes: + {{- toYaml .Values.persistentVolume.accessModes | nindent 4 }} + {{- if .Values.persistentVolume.storageClass }} + {{- if (eq "-" .Values.persistentVolume.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: "{{ .Values.persistentVolume.storageClass }}" + {{- end }} + {{- end }} + resources: + requests: + storage: "{{ .Values.persistentVolume.size }}" +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/secret.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/secret.yaml new file mode 100644 index 0000000000..a8142d1389 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/secret.yaml @@ -0,0 +1,10 @@ +{{- if .Values.webConfiguration }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "prometheus-pushgateway.fullname" . }} + labels: + {{- include "prometheus-pushgateway.defaultLabels" . | nindent 4 }} +data: + web-config.yaml: {{ include "prometheus-pushgateway.webConfiguration" . | b64enc}} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/service.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/service.yaml new file mode 100644 index 0000000000..15029f7e30 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/service.yaml @@ -0,0 +1,45 @@ +{{- $stsNoHeadlessSvcTypes := list "LoadBalancer" "NodePort" -}} +apiVersion: v1 +kind: Service +metadata: + {{- with .Values.serviceAnnotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + labels: + {{- include "prometheus-pushgateway.defaultLabels" . | nindent 4 }} + {{- with .Values.serviceLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + name: {{ include "prometheus-pushgateway.fullname" . 
}} + namespace: {{ template "prometheus-pushgateway.namespace" . }} +spec: + {{- if .Values.service.clusterIP }} + clusterIP: {{ .Values.service.clusterIP }} + {{ else if and .Values.runAsStatefulSet (not (has .Values.service.type $stsNoHeadlessSvcTypes)) }} + clusterIP: None # Headless service + {{- end }} + {{- if .Values.service.ipDualStack.enabled }} + ipFamilies: {{ toYaml .Values.service.ipDualStack.ipFamilies | nindent 4 }} + ipFamilyPolicy: {{ .Values.service.ipDualStack.ipFamilyPolicy }} + {{- end }} + type: {{ .Values.service.type }} + {{- with .Values.service.loadBalancerIP }} + loadBalancerIP: {{ . }} + {{- end }} + {{- if .Values.service.loadBalancerSourceRanges }} + loadBalancerSourceRanges: + {{- range $cidr := .Values.service.loadBalancerSourceRanges }} + - {{ $cidr }} + {{- end }} + {{- end }} + ports: + - port: {{ .Values.service.port }} + targetPort: {{ .Values.service.targetPort }} + {{- if and (eq .Values.service.type "NodePort") .Values.service.nodePort }} + nodePort: {{ .Values.service.nodePort }} + {{- end }} + protocol: TCP + name: {{ .Values.service.portName }} + selector: + {{- include "prometheus-pushgateway.selectorLabels" . | nindent 4 }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/serviceaccount.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/serviceaccount.yaml new file mode 100644 index 0000000000..88f1470480 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/serviceaccount.yaml @@ -0,0 +1,17 @@ +{{- if .Values.serviceAccount.create }} +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- with .Values.serviceAccount.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + labels: + {{- include "prometheus-pushgateway.defaultLabels" . | nindent 4 }} + {{- with .Values.serviceAccountLabels }} + {{- toYaml . 
| nindent 4 }} + {{- end }} + name: {{ include "prometheus-pushgateway.serviceAccountName" . }} + namespace: {{ template "prometheus-pushgateway.namespace" . }} +automountServiceAccountToken: {{ .Values.automountServiceAccountToken }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/servicemonitor.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/servicemonitor.yaml new file mode 100644 index 0000000000..ae173199b3 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/servicemonitor.yaml @@ -0,0 +1,51 @@ +{{- if .Values.serviceMonitor.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + labels: + {{- include "prometheus-pushgateway.defaultLabels" . | nindent 4 }} + {{- if .Values.serviceMonitor.additionalLabels }} + {{- toYaml .Values.serviceMonitor.additionalLabels | nindent 4 }} + {{- end }} + name: {{ include "prometheus-pushgateway.fullname" . }} + {{- if .Values.serviceMonitor.namespace }} + namespace: {{ .Values.serviceMonitor.namespace }} + {{- else }} + namespace: {{ template "prometheus-pushgateway.namespace" . }} + {{- end }} +spec: + endpoints: + - port: {{ .Values.service.portName }} + {{- with .Values.serviceMonitor.interval }} + interval: {{ . }} + {{- end }} + {{- with .Values.serviceMonitor.scheme }} + scheme: {{ . }} + {{- end }} + {{- with .Values.serviceMonitor.bearerTokenFile }} + bearerTokenFile: {{ . }} + {{- end }} + {{- with .Values.serviceMonitor.tlsConfig }} + tlsConfig: + {{- toYaml .| nindent 6 }} + {{- end }} + {{- with .Values.serviceMonitor.scrapeTimeout }} + scrapeTimeout: {{ . }} + {{- end }} + path: {{ .Values.serviceMonitor.telemetryPath }} + honorLabels: {{ .Values.serviceMonitor.honorLabels }} + {{- with .Values.serviceMonitor.metricRelabelings }} + metricRelabelings: + {{- tpl (toYaml . 
| nindent 6) $ }} + {{- end }} + {{- with .Values.serviceMonitor.relabelings }} + relabelings: + {{- toYaml . | nindent 6 }} + {{- end }} + namespaceSelector: + matchNames: + - {{ template "prometheus-pushgateway.namespace" . }} + selector: + matchLabels: + {{- include "prometheus-pushgateway.selectorLabels" . | nindent 6 }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/statefulset.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/statefulset.yaml new file mode 100644 index 0000000000..8d486a306f --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/templates/statefulset.yaml @@ -0,0 +1,49 @@ +{{- if .Values.runAsStatefulSet }} +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + {{- include "prometheus-pushgateway.defaultLabels" . | nindent 4 }} + name: {{ include "prometheus-pushgateway.fullname" . }} + namespace: {{ template "prometheus-pushgateway.namespace" . }} +spec: + replicas: {{ .Values.replicaCount }} + serviceName: {{ include "prometheus-pushgateway.fullname" . }} + selector: + matchLabels: + {{- include "prometheus-pushgateway.selectorLabels" . | nindent 6 }} + template: + metadata: + {{- with .Values.podAnnotations }} + annotations: + {{- toYaml . | nindent 8 }} + {{- end }} + labels: + {{- include "prometheus-pushgateway.defaultLabels" . | nindent 8 }} + spec: + {{- include "prometheus-pushgateway.podSpec" . | nindent 6 }} + {{- if .Values.persistentVolume.enabled }} + volumeClaimTemplates: + - metadata: + {{- with .Values.persistentVolume.annotations }} + annotations: + {{- toYaml . | nindent 10 }} + {{- end }} + labels: + {{- include "prometheus-pushgateway.defaultLabels" . 
| nindent 10 }} + name: storage-volume + spec: + accessModes: + {{ toYaml .Values.persistentVolume.accessModes }} + {{- if .Values.persistentVolume.storageClass }} + {{- if (eq "-" .Values.persistentVolume.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: "{{ .Values.persistentVolume.storageClass }}" + {{- end }} + {{- end }} + resources: + requests: + storage: "{{ .Values.persistentVolume.size }}" + {{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/values.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/values.yaml new file mode 100644 index 0000000000..85f267fdb8 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/charts/prometheus-pushgateway/values.yaml @@ -0,0 +1,371 @@ +# Default values for prometheus-pushgateway. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. + +# Provide a name in place of prometheus-pushgateway for `app:` labels +nameOverride: "" + +# Provide a name to substitute for the full names of resources +fullnameOverride: "" + +# Provide a namespace to substitute for the namespace on resources +namespaceOverride: "" + +image: + repository: quay.io/prometheus/pushgateway + # if not set appVersion field from Chart.yaml is used + tag: "" + pullPolicy: IfNotPresent + +# Optional pod imagePullSecrets +imagePullSecrets: [] + +service: + type: ClusterIP + port: 9091 + targetPort: 9091 + # nodePort: 32100 + portName: http + + # Optional - Can be used for headless if value is "None" + clusterIP: "" + + ipDualStack: + enabled: false + ipFamilies: ["IPv6", "IPv4"] + ipFamilyPolicy: "PreferDualStack" + + loadBalancerIP: "" + loadBalancerSourceRanges: [] + +# Whether to automatically mount a service account token into the pod +automountServiceAccountToken: true + +# Optional pod annotations +podAnnotations: {} + +# Optional pod labels +podLabels: {} + +# Optional service annotations
+serviceAnnotations: {} + +# Optional service labels +serviceLabels: {} + +# Optional serviceAccount labels +serviceAccountLabels: {} + +# Optional persistentVolume labels +persistentVolumeLabels: {} + +# Optional additional environment variables +extraVars: [] + +## Additional pushgateway container arguments +## +## example: +## extraArgs: +## - --persistence.file=/data/pushgateway.data +## - --persistence.interval=5m +extraArgs: [] + +## Additional InitContainers to initialize the pod +## +extraInitContainers: [] + +# Optional additional containers (sidecar) +extraContainers: [] + # - name: oAuth2-proxy + # args: + # - -https-address=:9092 + # - -upstream=http://localhost:9091 + # - -skip-auth-regex=^/metrics + # - -openshift-delegate-urls={"/":{"group":"monitoring.coreos.com","resource":"prometheuses","verb":"get"}} + # image: openshift/oauth-proxy:v1.1.0 + # ports: + # - containerPort: 9092 + # name: proxy + # resources: + # limits: + # memory: 16Mi + # requests: + # memory: 4Mi + # cpu: 20m + # volumeMounts: + # - mountPath: /etc/prometheus/secrets/pushgateway-tls + # name: secret-pushgateway-tls + +resources: {} + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. 
+ # limits: + # cpu: 200m + # memory: 50Mi + # requests: + # cpu: 100m + # memory: 30Mi + +# -- Sets web configuration +# To enable basic authentication, provide basicAuthUsers as a map +webConfiguration: {} + # basicAuthUsers: + # username: password + +liveness: + enabled: true + probe: + httpGet: + path: /-/healthy + port: 9091 + initialDelaySeconds: 10 + timeoutSeconds: 10 + +readiness: + enabled: true + probe: + httpGet: + path: /-/ready + port: 9091 + initialDelaySeconds: 10 + timeoutSeconds: 10 + +serviceAccount: + # Specifies whether a ServiceAccount should be created + create: true + # The name of the ServiceAccount to use. + # If not set and create is true, a name is generated using the fullname template + name: + +## Configure ingress resource that allow you to access the +## pushgateway installation. Set up the URL +## ref: http://kubernetes.io/docs/user-guide/ingress/ +## +ingress: + ## Enable Ingress. + ## + enabled: false + # AWS ALB requires path of /* + className: "" + path: / + pathType: ImplementationSpecific + + ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services. + extraPaths: [] + # - path: /* + # backend: + # serviceName: ssl-redirect + # servicePort: use-annotation + + ## Annotations. + ## + # annotations: + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: 'true' + + ## Hostnames. + ## Must be provided if Ingress is enabled. + ## + # hosts: + # - pushgateway.domain.com + + ## TLS configuration. + ## Secrets must be manually created in the namespace. 
+ ## + # tls: + # - secretName: pushgateway-tls + # hosts: + # - pushgateway.domain.com + +tolerations: [] + # - effect: NoSchedule + # operator: Exists + +## Node labels for pushgateway pod assignment +## Ref: https://kubernetes.io/docs/user-guide/node-selection/ +## +nodeSelector: {} + +replicaCount: 1 + +hostAliases: [] + # - ip: "127.0.0.1" + # hostnames: + # - "foo.local" + # - "bar.local" + # - ip: "10.1.2.3" + # hostnames: + # - "foo.remote" + # - "bar.remote" + +## When running more than one replica alongside with persistence, different volumes are needed +## per replica, since sharing a `persistence.file` across replicas does not keep metrics synced. +## For this purpose, you can enable the `runAsStatefulSet` to deploy the pushgateway as a +## StatefulSet instead of as a Deployment. +runAsStatefulSet: false + +## Security context to be added to push-gateway pods +## +securityContext: + fsGroup: 65534 + runAsUser: 65534 + runAsNonRoot: true + +## Security context to be added to push-gateway containers +## Having a separate variable as securityContext differs for pods and containers. +containerSecurityContext: {} +# allowPrivilegeEscalation: false +# readOnlyRootFilesystem: true +# runAsUser: 65534 +# runAsNonRoot: true + +## Affinity for pod assignment +## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity +affinity: {} + +## Pod anti-affinity can prevent the scheduler from placing pushgateway replicas on the same node. +## The value "soft" means that the scheduler should *prefer* to not schedule two replica pods onto the same node but no guarantee is provided. +## The value "hard" means that the scheduler is *required* to not schedule two replica pods onto the same node. +## The default value "" will disable pod anti-affinity so that no anti-affinity rules will be configured (unless set in `affinity`). +## +podAntiAffinity: "" + +## If anti-affinity is enabled sets the topologyKey to use for anti-affinity. 
+## This can be changed to, for example, failure-domain.beta.kubernetes.io/zone +## +podAntiAffinityTopologyKey: kubernetes.io/hostname + +## Topology spread constraints for pods +## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ +topologySpreadConstraints: [] + +# Enable this if you're using https://github.com/coreos/prometheus-operator +serviceMonitor: + enabled: false + namespace: monitoring + + # telemetryPath: HTTP resource path from which to fetch metrics. + # Telemetry path, default /metrics, has to be prefixed accordingly if pushgateway sets a route prefix at start-up. + # + telemetryPath: "/metrics" + + # Fallback to the prometheus default unless specified + # interval: 10s + + ## scheme: HTTP scheme to use for scraping. Can be used with `tlsConfig` for example if using istio mTLS. + # scheme: "" + + ## tlsConfig: TLS configuration to use when scraping the endpoint. For example if using istio mTLS. + ## Of type: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#tlsconfig + # tlsConfig: {} + + # bearerTokenFile: + # Fallback to the prometheus default unless specified + # scrapeTimeout: 30s + + ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with + ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec + additionalLabels: {} + + # Retain the job and instance labels of the metrics pushed to the Pushgateway + # [Scraping Pushgateway](https://github.com/prometheus/pushgateway#configure-the-pushgateway-as-a-target-to-scrape) + honorLabels: true + + ## Metric relabel configs to apply to samples before ingestion. 
+ ## [Metric Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs) + metricRelabelings: [] + # - action: keep + # regex: 'kube_(daemonset|deployment|pod|namespace|node|statefulset).+' + # sourceLabels: [__name__] + + ## Relabel configs to apply to samples before ingestion. + ## [Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) + relabelings: [] + # - sourceLabels: [__meta_kubernetes_pod_node_name] + # separator: ; + # regex: ^(.*)$ + # targetLabel: nodename + # replacement: $1 + # action: replace + +# The values to set in the PodDisruptionBudget spec (minAvailable/maxUnavailable) +# If not set then a PodDisruptionBudget will not be created +podDisruptionBudget: {} + +priorityClassName: + +# Deployment Strategy type +strategy: + type: Recreate + +persistentVolume: + ## If true, pushgateway will create/use a Persistent Volume Claim + ## If false, use emptyDir + ## + enabled: false + + ## pushgateway data Persistent Volume access modes + ## Must match those of existing PV or dynamic provisioner + ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ + ## + accessModes: + - ReadWriteOnce + + ## pushgateway data Persistent Volume Claim annotations + ## + annotations: {} + + ## pushgateway data Persistent Volume existing claim name + ## Requires pushgateway.persistentVolume.enabled: true + ## If defined, PVC must be created manually before volume will be bound + existingClaim: "" + + ## pushgateway data Persistent Volume mount root path + ## + mountPath: /data + + ## pushgateway data Persistent Volume size + ## + size: 2Gi + + ## pushgateway data Persistent Volume Storage Class + ## If defined, storageClassName: + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. 
(gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + # storageClass: "-" + + ## Subdirectory of pushgateway data Persistent Volume to mount + ## Useful if the volume's root directory is not empty + ## + subPath: "" + +extraVolumes: [] + # - name: extra + # emptyDir: {} +extraVolumeMounts: [] + # - name: extra + # mountPath: /usr/share/extras + # readOnly: true + +# Configuration for clusters with restrictive network policies in place: +# - allowAll allows access to the PushGateway from any namespace +# - customSelector is a list of pod/namespaceSelectors to allow access from +# These options are mutually exclusive and the latter will take precedence. +networkPolicy: {} + # allowAll: true + # customSelectors: + # - namespaceSelector: + # matchLabels: + # type: admin + # - podSelector: + # matchLabels: + # app: myapp + +# Array of extra K8s objects to deploy (evaluated as a template) +# The value can hold an array of strings as well as objects +extraManifests: [] diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/NOTES.txt b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/NOTES.txt new file mode 100644 index 0000000000..fc03c2a5b6 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/NOTES.txt @@ -0,0 +1,113 @@ +The Prometheus server can be accessed via port {{ .Values.server.service.servicePort }} on the following DNS name from within your cluster: +{{ template "prometheus.server.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local + +{{ if .Values.server.ingress.enabled -}} +From outside the cluster, the server URL(s) are: +{{- range .Values.server.ingress.hosts }} +http://{{ . }} +{{- end }} +{{- else }} +Get the Prometheus server URL by running these commands in the same shell: +{{- if contains "NodePort" .Values.server.service.type }} + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "prometheus.server.fullname" . 
}}) + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + echo http://$NODE_IP:$NODE_PORT +{{- else if contains "LoadBalancer" .Values.server.service.type }} + NOTE: It may take a few minutes for the LoadBalancer IP to be available. + You can watch its status by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "prometheus.server.fullname" . }}' + + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "prometheus.server.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + echo http://$SERVICE_IP:{{ .Values.server.service.servicePort }} +{{- else if contains "ClusterIP" .Values.server.service.type }} + export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "prometheus.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") + kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9090 +{{- end }} + + +{{- if .Values.server.persistentVolume.enabled }} +{{- else }} +################################################################################# +######   WARNING: Persistence is disabled!!! You will lose your data when   ##### +######            the Server pod is terminated.                             ##### +################################################################################# +{{- end }} +{{- end }} + +{{ if .Values.alertmanager.enabled }} +The Prometheus alertmanager can be accessed via port {{ .Values.alertmanager.service.port }} on the following DNS name from within your cluster: +{{ template "prometheus.alertmanager.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local + +{{ if .Values.alertmanager.ingress.enabled -}} +From outside the cluster, the alertmanager URL(s) are: +{{- range .Values.alertmanager.ingress.hosts }} +http://{{ .
}} +{{- end }} +{{- else }} +Get the Alertmanager URL by running these commands in the same shell: +{{- if contains "NodePort" .Values.alertmanager.service.type }} + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "prometheus.alertmanager.fullname" . }}) + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + echo http://$NODE_IP:$NODE_PORT +{{- else if contains "LoadBalancer" .Values.alertmanager.service.type }} + NOTE: It may take a few minutes for the LoadBalancer IP to be available. + You can watch its status by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ template "prometheus.alertmanager.fullname" . }}' + + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "prometheus.alertmanager.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + echo http://$SERVICE_IP:{{ .Values.alertmanager.service.servicePort }} +{{- else if contains "ClusterIP" .Values.alertmanager.service.type }} + export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "alertmanager.name" .Subcharts.alertmanager }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}") + kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9093 +{{- end }} +{{- end }} + +{{- if .Values.alertmanager.persistence.enabled }} +{{- else }} +################################################################################# +######   WARNING: Persistence is disabled!!! You will lose your data when   ##### +######            the AlertManager pod is terminated.
##### +################################################################################# +{{- end }} +{{- end }} + +{{- if (index .Values "prometheus-node-exporter" "enabled") }} +################################################################################# +######   WARNING: Pod Security Policy has been disabled by default since    ##### +######            it was deprecated in k8s 1.25+. Use                       ##### +######            (index .Values "prometheus-node-exporter" "rbac"          ##### +######            . "pspEnabled") with (index .Values                       ##### +######            "prometheus-node-exporter" "rbac" "pspAnnotations")       ##### +######            in case you still need it.                                ##### +################################################################################# +{{- end }} + +{{ if (index .Values "prometheus-pushgateway" "enabled") }} +The Prometheus PushGateway can be accessed via port {{ index .Values "prometheus-pushgateway" "service" "port" }} on the following DNS name from within your cluster: +{{ include "prometheus-pushgateway.fullname" (index .Subcharts "prometheus-pushgateway") }}.{{ .Release.Namespace }}.svc.cluster.local + +{{ if (index .Values "prometheus-pushgateway" "ingress" "enabled") -}} +From outside the cluster, the pushgateway URL(s) are: +{{- range (index .Values "prometheus-pushgateway" "ingress" "hosts") }} +http://{{ .
}} +{{- end }} +{{- else }} +Get the PushGateway URL by running these commands in the same shell: +{{- $pushgateway_svc_type := index .Values "prometheus-pushgateway" "service" "type" -}} +{{- if contains "NodePort" $pushgateway_svc_type }} + export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "prometheus-pushgateway.fullname" (index .Subcharts "prometheus-pushgateway") }}) + export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}") + echo http://$NODE_IP:$NODE_PORT +{{- else if contains "LoadBalancer" $pushgateway_svc_type }} + NOTE: It may take a few minutes for the LoadBalancer IP to be available. + You can watch its status by running 'kubectl get svc --namespace {{ .Release.Namespace }} -w {{ include "prometheus-pushgateway.fullname" (index .Subcharts "prometheus-pushgateway") }}' + + export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "prometheus-pushgateway.fullname" (index .Subcharts "prometheus-pushgateway") }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}') + echo http://$SERVICE_IP:{{ index .Values "prometheus-pushgateway" "service" "port" }} +{{- else if contains "ClusterIP" $pushgateway_svc_type }} + export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ include "prometheus.name" (index .Subcharts "prometheus-pushgateway") }},component=pushgateway" -o jsonpath="{.items[0].metadata.name}") + kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 9091 +{{- end }} +{{- end }} +{{- end }} + +For more information on running Prometheus, visit: +https://prometheus.io/ diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/_helpers.tpl b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/_helpers.tpl new file mode 100644 index 0000000000..3d8078f022 --- /dev/null +++
b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/_helpers.tpl @@ -0,0 +1,238 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "prometheus.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "prometheus.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create labels for prometheus +*/}} +{{- define "prometheus.common.matchLabels" -}} +app.kubernetes.io/name: {{ include "prometheus.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end -}} + +{{/* +Create unified labels for prometheus components +*/}} +{{- define "prometheus.common.metaLabels" -}} +app.kubernetes.io/version: {{ .Chart.AppVersion }} +helm.sh/chart: {{ include "prometheus.chart" . }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +app.kubernetes.io/part-of: {{ include "prometheus.name" . }} +{{- with .Values.commonMetaLabels}} +{{ toYaml . }} +{{- end }} +{{- end -}} + +{{- define "prometheus.server.labels" -}} +{{ include "prometheus.server.matchLabels" . }} +{{ include "prometheus.common.metaLabels" . }} +{{- end -}} + +{{- define "prometheus.server.matchLabels" -}} +app.kubernetes.io/component: {{ .Values.server.name }} +{{ include "prometheus.common.matchLabels" . }} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 
+*/}} +{{- define "prometheus.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create a fully qualified ClusterRole name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "prometheus.clusterRoleName" -}} +{{- if .Values.server.clusterRoleNameOverride -}} +{{ .Values.server.clusterRoleNameOverride | trunc 63 | trimSuffix "-" }} +{{- else -}} +{{ include "prometheus.server.fullname" . }} +{{- end -}} +{{- end -}} + +{{/* +Create a fully qualified alertmanager name for communicating and check to ensure that `alertmanager` exists before trying to use it with the user via NOTES.txt +*/}} +{{- define "prometheus.alertmanager.fullname" -}} +{{- if .Subcharts.alertmanager -}} +{{- template "alertmanager.fullname" .Subcharts.alertmanager -}} +{{- else -}} +{{- "alertmanager not found" -}} +{{- end -}} +{{- end -}} + +{{/* +Create a fully qualified Prometheus server name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "prometheus.server.fullname" -}} +{{- if .Values.server.fullnameOverride -}} +{{- .Values.server.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- printf "%s-%s" .Release.Name .Values.server.name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s-%s" .Release.Name $name .Values.server.name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Get KubeVersion removing pre-release information. 
+*/}} +{{- define "prometheus.kubeVersion" -}} + {{- default .Capabilities.KubeVersion.Version (regexFind "v[0-9]+\\.[0-9]+\\.[0-9]+" .Capabilities.KubeVersion.Version) -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for deployment. +*/}} +{{- define "prometheus.deployment.apiVersion" -}} +{{- print "apps/v1" -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for networkpolicy. +*/}} +{{- define "prometheus.networkPolicy.apiVersion" -}} +{{- print "networking.k8s.io/v1" -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for poddisruptionbudget. +*/}} +{{- define "prometheus.podDisruptionBudget.apiVersion" -}} +{{- if .Capabilities.APIVersions.Has "policy/v1" }} +{{- print "policy/v1" -}} +{{- else -}} +{{- print "policy/v1beta1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for rbac. +*/}} +{{- define "rbac.apiVersion" -}} +{{- if .Capabilities.APIVersions.Has "rbac.authorization.k8s.io/v1" }} +{{- print "rbac.authorization.k8s.io/v1" -}} +{{- else -}} +{{- print "rbac.authorization.k8s.io/v1beta1" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for ingress. +*/}} +{{- define "ingress.apiVersion" -}} + {{- if and (.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19.x" (include "prometheus.kubeVersion" .)) -}} + {{- print "networking.k8s.io/v1" -}} + {{- else if .Capabilities.APIVersions.Has "networking.k8s.io/v1beta1" -}} + {{- print "networking.k8s.io/v1beta1" -}} + {{- else -}} + {{- print "extensions/v1beta1" -}} + {{- end -}} +{{- end -}} + +{{/* +Return if ingress is stable. +*/}} +{{- define "ingress.isStable" -}} + {{- eq (include "ingress.apiVersion" .) "networking.k8s.io/v1" -}} +{{- end -}} + +{{/* +Return if ingress supports ingressClassName. +*/}} +{{- define "ingress.supportsIngressClassName" -}} + {{- or (eq (include "ingress.isStable" .) "true") (and (eq (include "ingress.apiVersion" .) 
"networking.k8s.io/v1beta1") (semverCompare ">= 1.18.x" (include "prometheus.kubeVersion" .))) -}} +{{- end -}} + +{{/* +Return if ingress supports pathType. +*/}} +{{- define "ingress.supportsPathType" -}} + {{- or (eq (include "ingress.isStable" .) "true") (and (eq (include "ingress.apiVersion" .) "networking.k8s.io/v1beta1") (semverCompare ">= 1.18.x" (include "prometheus.kubeVersion" .))) -}} +{{- end -}} + +{{/* +Create the name of the service account to use for the server component +*/}} +{{- define "prometheus.serviceAccountName.server" -}} +{{- if .Values.serviceAccounts.server.create -}} + {{ default (include "prometheus.server.fullname" .) .Values.serviceAccounts.server.name }} +{{- else -}} + {{ default "default" .Values.serviceAccounts.server.name }} +{{- end -}} +{{- end -}} + +{{/* +Define the prometheus.namespace template if set with forceNamespace or .Release.Namespace is set +*/}} +{{- define "prometheus.namespace" -}} + {{- default .Release.Namespace .Values.forceNamespace -}} +{{- end }} + +{{/* +Define template prometheus.namespaces producing a list of namespaces to monitor +*/}} +{{- define "prometheus.namespaces" -}} +{{- $namespaces := list }} +{{- if and .Values.rbac.create .Values.server.useExistingClusterRoleName }} + {{- if .Values.server.namespaces -}} + {{- range $ns := join "," .Values.server.namespaces | split "," }} + {{- $namespaces = append $namespaces (tpl $ns $) }} + {{- end -}} + {{- end -}} + {{- if .Values.server.releaseNamespace -}} + {{- $namespaces = append $namespaces (include "prometheus.namespace" .) 
}} + {{- end -}} +{{- end -}} +{{ mustToJson $namespaces }} +{{- end -}} + +{{/* +Define prometheus.server.remoteWrite producing a list of remoteWrite configurations with URL templating +*/}} +{{- define "prometheus.server.remoteWrite" -}} +{{- $remoteWrites := list }} +{{- range $remoteWrite := .Values.server.remoteWrite }} + {{- $remoteWrites = tpl $remoteWrite.url $ | set $remoteWrite "url" | append $remoteWrites }} +{{- end -}} +{{ toYaml $remoteWrites }} +{{- end -}} + +{{/* +Define prometheus.server.remoteRead producing a list of remoteRead configurations with URL templating +*/}} +{{- define "prometheus.server.remoteRead" -}} +{{- $remoteReads := list }} +{{- range $remoteRead := .Values.server.remoteRead }} + {{- $remoteReads = tpl $remoteRead.url $ | set $remoteRead "url" | append $remoteReads }} +{{- end -}} +{{ toYaml $remoteReads }} +{{- end -}} + diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/clusterrole.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/clusterrole.yaml new file mode 100644 index 0000000000..25e3cec45d --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/clusterrole.yaml @@ -0,0 +1,56 @@ +{{- if and .Values.rbac.create (empty .Values.server.useExistingClusterRoleName) -}} +apiVersion: {{ template "rbac.apiVersion" . }} +kind: ClusterRole +metadata: + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} + name: {{ include "prometheus.clusterRoleName" . }} +rules: +{{- if and .Values.podSecurityPolicy.enabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} + - apiGroups: + - extensions + resources: + - podsecuritypolicies + verbs: + - use + resourceNames: + - {{ template "prometheus.server.fullname" . 
}} +{{- end }} + - apiGroups: + - "" + resources: + - nodes + - nodes/proxy + - nodes/metrics + - services + - endpoints + - pods + - ingresses + - configmaps + verbs: + - get + - list + - watch + - apiGroups: + - "extensions" + - "networking.k8s.io" + resources: + - ingresses/status + - ingresses + verbs: + - get + - list + - watch + - apiGroups: + - "discovery.k8s.io" + resources: + - endpointslices + verbs: + - get + - list + - watch + - nonResourceURLs: + - "/metrics" + verbs: + - get +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/clusterrolebinding.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/clusterrolebinding.yaml new file mode 100644 index 0000000000..28f4bda776 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/clusterrolebinding.yaml @@ -0,0 +1,16 @@ +{{- if and .Values.rbac.create (empty .Values.server.namespaces) (empty .Values.server.useExistingClusterRoleName) -}} +apiVersion: {{ template "rbac.apiVersion" . }} +kind: ClusterRoleBinding +metadata: + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} + name: {{ include "prometheus.clusterRoleName" . }} +subjects: + - kind: ServiceAccount + name: {{ template "prometheus.serviceAccountName.server" . }} + namespace: {{ include "prometheus.namespace" . }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "prometheus.clusterRoleName" . }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/cm.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/cm.yaml new file mode 100644 index 0000000000..8713bd1ea2 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/cm.yaml @@ -0,0 +1,103 @@ +{{- if (empty .Values.server.configMapOverrideName) -}} +apiVersion: v1 +kind: ConfigMap +metadata: +{{- with .Values.server.configMapAnnotations }} + annotations: + {{- toYaml . 
| nindent 4 }} +{{- end }} + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} + {{- with .Values.server.extraConfigmapLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + name: {{ template "prometheus.server.fullname" . }} + namespace: {{ include "prometheus.namespace" . }} +data: + allow-snippet-annotations: "false" +{{- $root := . -}} +{{- range $key, $value := .Values.ruleFiles }} + {{ $key }}: {{- toYaml $value | indent 2 }} +{{- end }} +{{- range $key, $value := .Values.serverFiles }} + {{ $key }}: | +{{- if eq $key "prometheus.yml" }} + global: +{{ $root.Values.server.global | toYaml | trimSuffix "\n" | indent 6 }} +{{- if $root.Values.server.remoteWrite }} + remote_write: +{{- include "prometheus.server.remoteWrite" $root | nindent 4 }} +{{- end }} +{{- if $root.Values.server.remoteRead }} + remote_read: +{{- include "prometheus.server.remoteRead" $root | nindent 4 }} +{{- end }} +{{- if or $root.Values.server.tsdb $root.Values.server.exemplars }} + storage: +{{- if $root.Values.server.tsdb }} + tsdb: +{{ $root.Values.server.tsdb | toYaml | indent 8 }} +{{- end }} +{{- if $root.Values.server.exemplars }} + exemplars: +{{ $root.Values.server.exemplars | toYaml | indent 8 }} +{{- end }} +{{- end }} +{{- if $root.Values.scrapeConfigFiles }} + scrape_config_files: +{{ toYaml $root.Values.scrapeConfigFiles | indent 4 }} +{{- end }} +{{- end }} +{{- if eq $key "alerts" }} +{{- if and (not (empty $value)) (empty $value.groups) }} + groups: +{{- range $ruleKey, $ruleValue := $value }} + - name: {{ $ruleKey -}}.rules + rules: +{{ $ruleValue | toYaml | trimSuffix "\n" | indent 6 }} +{{- end }} +{{- else }} +{{ toYaml $value | indent 4 }} +{{- end }} +{{- else }} +{{ toYaml $value | default "{}" | indent 4 }} +{{- end }} +{{- if eq $key "prometheus.yml" -}} +{{- if $root.Values.extraScrapeConfigs }} +{{ tpl $root.Values.extraScrapeConfigs $root | indent 4 }} +{{- end -}} +{{- if or ($root.Values.alertmanager.enabled) 
($root.Values.server.alertmanagers) }} + alerting: +{{- if $root.Values.alertRelabelConfigs }} +{{ $root.Values.alertRelabelConfigs | toYaml | trimSuffix "\n" | indent 6 }} +{{- end }} + alertmanagers: +{{- if $root.Values.server.alertmanagers }} +{{ toYaml $root.Values.server.alertmanagers | indent 8 }} +{{- else }} + - kubernetes_sd_configs: + - role: pod + tls_config: + ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + {{- if $root.Values.alertmanager.prefixURL }} + path_prefix: {{ $root.Values.alertmanager.prefixURL }} + {{- end }} + relabel_configs: + - source_labels: [__meta_kubernetes_namespace] + regex: {{ $root.Release.Namespace }} + action: keep + - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_instance] + regex: {{ $root.Release.Name }} + action: keep + - source_labels: [__meta_kubernetes_pod_label_app_kubernetes_io_name] + regex: {{ default "alertmanager" $root.Values.alertmanager.nameOverride | trunc 63 | trimSuffix "-" }} + action: keep + - source_labels: [__meta_kubernetes_pod_container_port_number] + regex: "9093" + action: keep +{{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/deploy.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/deploy.yaml new file mode 100644 index 0000000000..9c786ee425 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/deploy.yaml @@ -0,0 +1,410 @@ +{{- if not .Values.server.statefulSet.enabled -}} +apiVersion: {{ template "prometheus.deployment.apiVersion" . }} +kind: Deployment +metadata: +{{- if .Values.server.deploymentAnnotations }} + annotations: + {{ toYaml .Values.server.deploymentAnnotations | nindent 4 }} +{{- end }} + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} + name: {{ template "prometheus.server.fullname" . }} + namespace: {{ include "prometheus.namespace" . 
}} +spec: + selector: + matchLabels: + {{- include "prometheus.server.matchLabels" . | nindent 6 }} + replicas: {{ .Values.server.replicaCount }} + revisionHistoryLimit: {{ .Values.server.revisionHistoryLimit }} + {{- if .Values.server.strategy }} + strategy: +{{ toYaml .Values.server.strategy | trim | indent 4 }} + {{ if eq .Values.server.strategy.type "Recreate" }}rollingUpdate: null{{ end }} +{{- end }} + template: + metadata: + {{- if .Values.server.podAnnotations }} + annotations: + {{ toYaml .Values.server.podAnnotations | nindent 8 }} + {{- end }} + labels: + {{- include "prometheus.server.labels" . | nindent 8 }} + {{- if .Values.server.podLabels}} + {{ toYaml .Values.server.podLabels | nindent 8 }} + {{- end}} + {{- include "k10.azMarketPlace.billingIdentifier" . | nindent 8 }} + spec: +{{- if .Values.server.priorityClassName }} + priorityClassName: "{{ .Values.server.priorityClassName }}" +{{- end }} +{{- if .Values.server.schedulerName }} + schedulerName: "{{ .Values.server.schedulerName }}" +{{- end }} +{{- if semverCompare ">=1.13-0" .Capabilities.KubeVersion.GitVersion }} + {{- if or (.Values.server.enableServiceLinks) (eq (.Values.server.enableServiceLinks | toString) "") }} + enableServiceLinks: true + {{- else }} + enableServiceLinks: false + {{- end }} +{{- end }} + serviceAccountName: {{ template "prometheus.serviceAccountName.server" . }} +{{- if kindIs "bool" .Values.server.automountServiceAccountToken }} + automountServiceAccountToken: {{ .Values.server.automountServiceAccountToken }} +{{- end }} + {{- if .Values.server.extraInitContainers }} + initContainers: +{{ toYaml .Values.server.extraInitContainers | indent 8 }} + {{- end }} + containers: + {{- if .Values.configmapReload.prometheus.enabled }} + - name: {{ template "prometheus.name" . 
}}-{{ .Values.server.name }}-{{ .Values.configmapReload.prometheus.name }} + {{- if .Values.configmapReload.prometheus.image.digest }} + image: "{{ .Values.configmapReload.prometheus.image.repository }}@{{ .Values.configmapReload.prometheus.image.digest }}" + {{- else }} + image: "{{ .Values.configmapReload.prometheus.image.repository }}:{{ .Values.configmapReload.prometheus.image.tag }}" + {{- end }} + imagePullPolicy: "{{ .Values.configmapReload.prometheus.image.pullPolicy }}" + {{- with .Values.configmapReload.prometheus.containerSecurityContext }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + args: + - --watched-dir=/etc/config + {{- $default_url := "http://127.0.0.1:9090/-/reload" }} + {{- with .Values.server.prefixURL }} + {{- $default_url = printf "http://127.0.0.1:9090%s/-/reload" . }} + {{- end }} + {{- if .Values.configmapReload.prometheus.containerPort }} + - --listen-address=0.0.0.0:{{ .Values.configmapReload.prometheus.containerPort }} + {{- end }} + - --reload-url={{ default $default_url .Values.configmapReload.reloadUrl }} + {{- range $key, $value := .Values.configmapReload.prometheus.extraArgs }} + {{- if $value }} + - --{{ $key }}={{ $value }} + {{- else }} + - --{{ $key }} + {{- end }} + {{- end }} + {{- range .Values.configmapReload.prometheus.extraVolumeDirs }} + - --watched-dir={{ . }} + {{- end }} + {{- with .Values.configmapReload.env }} + env: + {{- toYaml . | nindent 12 }} + {{- end }} + {{- if .Values.configmapReload.prometheus.containerPort }} + ports: + - containerPort: {{ .Values.configmapReload.prometheus.containerPort }} + {{- if .Values.configmapReload.prometheus.containerPortName }} + name: {{ .Values.configmapReload.prometheus.containerPortName }} + {{- end }} + {{- end }} + {{- with .Values.configmapReload.prometheus.livenessProbe }} + livenessProbe: + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.configmapReload.prometheus.readinessProbe }} + readinessProbe: + {{- toYaml . 
| nindent 12 }} + {{- end }} + {{- if .Values.configmapReload.prometheus.startupProbe.enabled }} + {{- $startupProbe := omit .Values.configmapReload.prometheus.startupProbe "enabled" }} + startupProbe: + {{- toYaml $startupProbe | nindent 12 }} + {{- end }} + {{- with .Values.configmapReload.prometheus.resources }} + resources: + {{- toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: config-volume + mountPath: /etc/config + readOnly: true + {{- range .Values.configmapReload.prometheus.extraConfigmapMounts }} + - name: {{ $.Values.configmapReload.prometheus.name }}-{{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath }} + readOnly: {{ .readOnly }} + {{- end }} + {{- with .Values.configmapReload.prometheus.extraVolumeMounts }} + {{ toYaml . | nindent 12 }} + {{- end }} + {{- end }} + + - name: {{ template "prometheus.name" . }}-{{ .Values.server.name }} + {{- if .Values.server.image.digest }} + image: "{{ .Values.server.image.repository }}@{{ .Values.server.image.digest }}" + {{- else }} + image: "{{ .Values.server.image.repository }}:{{ .Values.server.image.tag | default .Chart.AppVersion}}" + {{- end }} + imagePullPolicy: "{{ .Values.server.image.pullPolicy }}" + {{- with .Values.server.command }} + command: + {{- toYaml . 
| nindent 12 }} + {{- end }} + {{- if .Values.server.env }} + env: +{{ toYaml .Values.server.env | indent 12}} + {{- end }} + args: + {{- if .Values.server.defaultFlagsOverride }} + {{ toYaml .Values.server.defaultFlagsOverride | nindent 12}} + {{- else }} + {{- if .Values.server.retention }} + - --storage.tsdb.retention.time={{ .Values.server.retention }} + {{- end }} + {{- if .Values.server.retentionSize }} + - --storage.tsdb.retention.size={{ .Values.server.retentionSize }} + {{- end }} + - --config.file={{ .Values.server.configPath }} + {{- if .Values.server.storagePath }} + - --storage.tsdb.path={{ .Values.server.storagePath }} + {{- else }} + - --storage.tsdb.path={{ .Values.server.persistentVolume.mountPath }} + {{- end }} + - --web.console.libraries=/etc/prometheus/console_libraries + - --web.console.templates=/etc/prometheus/consoles + {{- range .Values.server.extraFlags }} + - --{{ . }} + {{- end }} + {{- range $key, $value := .Values.server.extraArgs }} + {{- if $value }} + - --{{ $key }}={{ $value }} + {{- else }} + - --{{ $key }} + {{- end }} + {{- end }} + {{- if .Values.server.prefixURL }} + - --web.route-prefix={{ .Values.server.prefixURL }} + {{- end }} + {{- if .Values.server.baseURL }} + - --web.external-url={{ .Values.server.baseURL }} + {{- end }} + {{- end }} + ports: + - containerPort: 9090 + {{- if .Values.server.portName }} + name: {{ .Values.server.portName }} + {{- end }} + {{- if .Values.server.hostPort }} + hostPort: {{ .Values.server.hostPort }} + {{- end }} + readinessProbe: + {{- if not .Values.server.tcpSocketProbeEnabled }} + httpGet: + path: {{ .Values.server.prefixURL }}/-/ready + port: 9090 + scheme: {{ .Values.server.probeScheme }} + {{- with .Values.server.probeHeaders }} + httpHeaders: +{{- toYaml . 
| nindent 14 }} + {{- end }} + {{- else }} + tcpSocket: + port: 9090 + {{- end }} + initialDelaySeconds: {{ .Values.server.readinessProbeInitialDelay }} + periodSeconds: {{ .Values.server.readinessProbePeriodSeconds }} + timeoutSeconds: {{ .Values.server.readinessProbeTimeout }} + failureThreshold: {{ .Values.server.readinessProbeFailureThreshold }} + successThreshold: {{ .Values.server.readinessProbeSuccessThreshold }} + livenessProbe: + {{- if not .Values.server.tcpSocketProbeEnabled }} + httpGet: + path: {{ .Values.server.prefixURL }}/-/healthy + port: 9090 + scheme: {{ .Values.server.probeScheme }} + {{- with .Values.server.probeHeaders }} + httpHeaders: +{{- toYaml . | nindent 14 }} + {{- end }} + {{- else }} + tcpSocket: + port: 9090 + {{- end }} + initialDelaySeconds: {{ .Values.server.livenessProbeInitialDelay }} + periodSeconds: {{ .Values.server.livenessProbePeriodSeconds }} + timeoutSeconds: {{ .Values.server.livenessProbeTimeout }} + failureThreshold: {{ .Values.server.livenessProbeFailureThreshold }} + successThreshold: {{ .Values.server.livenessProbeSuccessThreshold }} + {{- if .Values.server.startupProbe.enabled }} + startupProbe: + {{- if not .Values.server.tcpSocketProbeEnabled }} + httpGet: + path: {{ .Values.server.prefixURL }}/-/healthy + port: 9090 + scheme: {{ .Values.server.probeScheme }} + {{- if .Values.server.probeHeaders }} + httpHeaders: + {{- range .Values.server.probeHeaders}} + - name: {{ .name }} + value: {{ .value }} + {{- end }} + {{- end }} + {{- else }} + tcpSocket: + port: 9090 + {{- end }} + failureThreshold: {{ .Values.server.startupProbe.failureThreshold }} + periodSeconds: {{ .Values.server.startupProbe.periodSeconds }} + timeoutSeconds: {{ .Values.server.startupProbe.timeoutSeconds }} + {{- end }} + {{- with .Values.server.resources }} + resources: + {{- toYaml . 
| nindent 12 }} + {{- end }} + volumeMounts: + - name: config-volume + mountPath: /etc/config + - name: storage-volume + mountPath: {{ .Values.server.persistentVolume.mountPath }} + subPath: "{{ .Values.server.persistentVolume.subPath }}" + {{- range .Values.server.extraHostPathMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath }} + readOnly: {{ .readOnly }} + {{- end }} + {{- range .Values.server.extraConfigmapMounts }} + - name: {{ $.Values.server.name }}-{{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath }} + readOnly: {{ .readOnly }} + {{- end }} + {{- range .Values.server.extraSecretMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath }} + readOnly: {{ .readOnly }} + {{- end }} + {{- if .Values.server.extraVolumeMounts }} + {{ toYaml .Values.server.extraVolumeMounts | nindent 12 }} + {{- end }} + {{- with .Values.server.containerSecurityContext }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + {{- if .Values.server.sidecarContainers }} + {{- range $name, $spec := .Values.server.sidecarContainers }} + - name: {{ $name }} + {{- if kindIs "string" $spec }} + {{- tpl $spec $ | nindent 10 }} + {{- else }} + {{- toYaml $spec | nindent 10 }} + {{- end }} + {{- end }} + {{- end }} + {{- if .Values.server.hostNetwork }} + hostNetwork: true + dnsPolicy: ClusterFirstWithHostNet + {{- else }} + dnsPolicy: {{ .Values.server.dnsPolicy }} + {{- end }} + {{- if .Values.imagePullSecrets }} + imagePullSecrets: +{{ toYaml .Values.imagePullSecrets | indent 8 }} + {{- end }} + {{- if .Values.server.nodeSelector }} + nodeSelector: +{{ toYaml .Values.server.nodeSelector | indent 8 }} + {{- end }} + {{- if .Values.server.hostAliases }} + hostAliases: +{{ toYaml .Values.server.hostAliases | indent 8 }} + {{- end }} + {{- if .Values.server.dnsConfig }} + dnsConfig: +{{ toYaml .Values.server.dnsConfig | indent 8 }} + {{- end }} + {{- with .Values.server.securityContext }} + securityContext: 
+ {{- toYaml . | nindent 8 }} + {{- end }} + {{- if .Values.server.tolerations }} + tolerations: +{{ toYaml .Values.server.tolerations | indent 8 }} + {{- end }} + {{- if or .Values.server.affinity .Values.server.podAntiAffinity }} + affinity: + {{- end }} + {{- with .Values.server.affinity }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- if eq .Values.server.podAntiAffinity "hard" }} + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - topologyKey: {{ .Values.server.podAntiAffinityTopologyKey }} + labelSelector: + matchExpressions: + - {key: app.kubernetes.io/name, operator: In, values: [{{ template "prometheus.name" . }}]} + {{- else if eq .Values.server.podAntiAffinity "soft" }} + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + topologyKey: {{ .Values.server.podAntiAffinityTopologyKey }} + labelSelector: + matchExpressions: + - {key: app.kubernetes.io/name, operator: In, values: [{{ template "prometheus.name" . }}]} + {{- end }} + {{- with .Values.server.topologySpreadConstraints }} + topologySpreadConstraints: + {{- toYaml . | nindent 8 }} + {{- end }} + terminationGracePeriodSeconds: {{ .Values.server.terminationGracePeriodSeconds }} + volumes: + - name: config-volume + {{- if empty .Values.server.configFromSecret }} + configMap: + name: {{ if .Values.server.configMapOverrideName }}{{ .Release.Name }}-{{ .Values.server.configMapOverrideName }}{{- else }}{{ template "prometheus.server.fullname" . 
}}{{- end }} + {{- else }} + secret: + secretName: {{ .Values.server.configFromSecret }} + {{- end }} + {{- range .Values.server.extraHostPathMounts }} + - name: {{ .name }} + hostPath: + path: {{ .hostPath }} + {{- end }} + {{- range .Values.configmapReload.prometheus.extraConfigmapMounts }} + - name: {{ $.Values.configmapReload.prometheus.name }}-{{ .name }} + configMap: + name: {{ .configMap }} + {{- end }} + {{- range .Values.server.extraConfigmapMounts }} + - name: {{ $.Values.server.name }}-{{ .name }} + configMap: + name: {{ .configMap }} + {{- end }} + {{- range .Values.server.extraSecretMounts }} + - name: {{ .name }} + secret: + secretName: {{ .secretName }} + {{- with .optional }} + optional: {{ . }} + {{- end }} + {{- end }} + {{- range .Values.configmapReload.prometheus.extraConfigmapMounts }} + - name: {{ .name }} + configMap: + name: {{ .configMap }} + {{- with .optional }} + optional: {{ . }} + {{- end }} + {{- end }} +{{- if .Values.server.extraVolumes }} +{{ toYaml .Values.server.extraVolumes | indent 8}} +{{- end }} + - name: storage-volume + {{- if .Values.server.persistentVolume.enabled }} + persistentVolumeClaim: + claimName: {{ if .Values.server.persistentVolume.existingClaim }}{{ .Values.server.persistentVolume.existingClaim }}{{- else }}{{ template "prometheus.server.fullname" . }}{{- end }} + {{- else }} + emptyDir: + {{- if .Values.server.emptyDir.sizeLimit }} + sizeLimit: {{ .Values.server.emptyDir.sizeLimit }} + {{- else }} + {} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/extra-manifests.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/extra-manifests.yaml new file mode 100644 index 0000000000..2b21b71062 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/extra-manifests.yaml @@ -0,0 +1,4 @@ +{{ range .Values.extraManifests }} +--- +{{ tpl . 
$ }} +{{ end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/headless-svc.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/headless-svc.yaml new file mode 100644 index 0000000000..df9db99147 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/headless-svc.yaml @@ -0,0 +1,35 @@ +{{- if .Values.server.statefulSet.enabled -}} +apiVersion: v1 +kind: Service +metadata: +{{- if .Values.server.statefulSet.headless.annotations }} + annotations: +{{ toYaml .Values.server.statefulSet.headless.annotations | indent 4 }} +{{- end }} + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} +{{- if .Values.server.statefulSet.headless.labels }} +{{ toYaml .Values.server.statefulSet.headless.labels | indent 4 }} +{{- end }} + name: {{ template "prometheus.server.fullname" . }}-headless + namespace: {{ include "prometheus.namespace" . }} +spec: + clusterIP: None + ports: + - name: http + port: {{ .Values.server.statefulSet.headless.servicePort }} + protocol: TCP + targetPort: 9090 + {{- if .Values.server.statefulSet.headless.gRPC.enabled }} + - name: grpc + port: {{ .Values.server.statefulSet.headless.gRPC.servicePort }} + protocol: TCP + targetPort: 10901 + {{- if .Values.server.statefulSet.headless.gRPC.nodePort }} + nodePort: {{ .Values.server.statefulSet.headless.gRPC.nodePort }} + {{- end }} + {{- end }} + + selector: + {{- include "prometheus.server.matchLabels" . | nindent 4 }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/ingress.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/ingress.yaml new file mode 100644 index 0000000000..84341a9c2c --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/ingress.yaml @@ -0,0 +1,57 @@ +{{- if .Values.server.ingress.enabled -}} +{{- $ingressApiIsStable := eq (include "ingress.isStable" .) "true" -}} +{{- $ingressSupportsIngressClassName := eq (include "ingress.supportsIngressClassName" .) 
"true" -}} +{{- $ingressSupportsPathType := eq (include "ingress.supportsPathType" .) "true" -}} +{{- $releaseName := .Release.Name -}} +{{- $serviceName := include "prometheus.server.fullname" . }} +{{- $servicePort := .Values.server.ingress.servicePort | default .Values.server.service.servicePort -}} +{{- $ingressPath := .Values.server.ingress.path -}} +{{- $ingressPathType := .Values.server.ingress.pathType -}} +{{- $extraPaths := .Values.server.ingress.extraPaths -}} +apiVersion: {{ template "ingress.apiVersion" . }} +kind: Ingress +metadata: +{{- if .Values.server.ingress.annotations }} + annotations: +{{ toYaml .Values.server.ingress.annotations | indent 4 }} +{{- end }} + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} +{{- range $key, $value := .Values.server.ingress.extraLabels }} + {{ $key }}: {{ $value }} +{{- end }} + name: {{ template "prometheus.server.fullname" . }} + namespace: {{ include "prometheus.namespace" . }} +spec: + {{- if and $ingressSupportsIngressClassName .Values.server.ingress.ingressClassName }} + ingressClassName: {{ .Values.server.ingress.ingressClassName }} + {{- end }} + rules: + {{- range .Values.server.ingress.hosts }} + {{- $url := splitList "/" . 
}} + - host: {{ first $url }} + http: + paths: +{{ if $extraPaths }} +{{ toYaml $extraPaths | indent 10 }} +{{- end }} + - path: {{ $ingressPath }} + {{- if $ingressSupportsPathType }} + pathType: {{ $ingressPathType }} + {{- end }} + backend: + {{- if $ingressApiIsStable }} + service: + name: {{ $serviceName }} + port: + number: {{ $servicePort }} + {{- else }} + serviceName: {{ $serviceName }} + servicePort: {{ $servicePort }} + {{- end }} + {{- end -}} +{{- if .Values.server.ingress.tls }} + tls: +{{ toYaml .Values.server.ingress.tls | indent 4 }} + {{- end -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/network-policy.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/network-policy.yaml new file mode 100644 index 0000000000..3254ffc04f --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/network-policy.yaml @@ -0,0 +1,16 @@ +{{- if .Values.networkPolicy.enabled }} +apiVersion: {{ template "prometheus.networkPolicy.apiVersion" . }} +kind: NetworkPolicy +metadata: + name: {{ template "prometheus.server.fullname" . }} + namespace: {{ include "prometheus.namespace" . }} + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} +spec: + podSelector: + matchLabels: + {{- include "prometheus.server.matchLabels" . | nindent 6 }} + ingress: + - ports: + - port: 9090 +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/pdb.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/pdb.yaml new file mode 100644 index 0000000000..7ffe673071 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/pdb.yaml @@ -0,0 +1,15 @@ +{{- if .Values.server.podDisruptionBudget.enabled }} +{{- $pdbSpec := omit .Values.server.podDisruptionBudget "enabled" }} +apiVersion: {{ template "prometheus.podDisruptionBudget.apiVersion" . }} +kind: PodDisruptionBudget +metadata: + name: {{ template "prometheus.server.fullname" . 
}} + namespace: {{ include "prometheus.namespace" . }} + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} +spec: + selector: + matchLabels: + {{- include "prometheus.server.matchLabels" . | nindent 6 }} + {{- toYaml $pdbSpec | nindent 2 }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/psp.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/psp.yaml new file mode 100644 index 0000000000..5776e25410 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/psp.yaml @@ -0,0 +1,53 @@ +{{- if and .Values.rbac.create .Values.podSecurityPolicy.enabled }} +{{- if .Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy" }} +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: {{ template "prometheus.server.fullname" . }} + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} + {{- with .Values.server.podSecurityPolicy.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + privileged: false + allowPrivilegeEscalation: false + allowedCapabilities: + - 'CHOWN' + volumes: + - 'configMap' + - 'persistentVolumeClaim' + - 'emptyDir' + - 'secret' + - 'hostPath' + allowedHostPaths: + - pathPrefix: /etc + readOnly: true + - pathPrefix: {{ .Values.server.persistentVolume.mountPath }} + {{- range .Values.server.extraHostPathMounts }} + - pathPrefix: {{ .hostPath }} + readOnly: {{ .readOnly }} + {{- end }} + hostNetwork: false + hostPID: false + hostIPC: false + runAsUser: + rule: 'RunAsAny' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. 
+ - min: 1 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/pvc.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/pvc.yaml new file mode 100644 index 0000000000..a9dc4fce08 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/pvc.yaml @@ -0,0 +1,43 @@ +{{- if not .Values.server.statefulSet.enabled -}} +{{- if .Values.server.persistentVolume.enabled -}} +{{- if not .Values.server.persistentVolume.existingClaim -}} +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + {{- if .Values.server.persistentVolume.annotations }} + annotations: +{{ toYaml .Values.server.persistentVolume.annotations | indent 4 }} + {{- end }} + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} + {{- with .Values.server.persistentVolume.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} + name: {{ template "prometheus.server.fullname" . }} + namespace: {{ include "prometheus.namespace" . 
}} +spec: + accessModes: +{{ toYaml .Values.server.persistentVolume.accessModes | indent 4 }} +{{- if .Values.server.persistentVolume.storageClass }} +{{- if (eq "-" .Values.server.persistentVolume.storageClass) }} + storageClassName: "" +{{- else }} + storageClassName: "{{ .Values.server.persistentVolume.storageClass }}" +{{- end }} +{{- end }} +{{- if .Values.server.persistentVolume.volumeBindingMode }} + volumeBindingMode: "{{ .Values.server.persistentVolume.volumeBindingMode }}" +{{- end }} + resources: + requests: + storage: "{{ .Values.server.persistentVolume.size }}" +{{- if .Values.server.persistentVolume.selector }} + selector: + {{- toYaml .Values.server.persistentVolume.selector | nindent 4 }} +{{- end -}} +{{- if .Values.server.persistentVolume.volumeName }} + volumeName: "{{ .Values.server.persistentVolume.volumeName }}" +{{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/rolebinding.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/rolebinding.yaml new file mode 100644 index 0000000000..721b38816a --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/rolebinding.yaml @@ -0,0 +1,18 @@ +{{- range include "prometheus.namespaces" . | fromJsonArray }} +--- +apiVersion: {{ template "rbac.apiVersion" $ }} +kind: RoleBinding +metadata: + labels: + {{- include "prometheus.server.labels" $ | nindent 4 }} + name: {{ template "prometheus.server.fullname" $ }} + namespace: {{ . 
}} +subjects: + - kind: ServiceAccount + name: {{ template "prometheus.serviceAccountName.server" $ }} + namespace: {{ include "prometheus.namespace" $ }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ $.Values.server.useExistingClusterRoleName }} +{{ end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/service.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/service.yaml new file mode 100644 index 0000000000..069f3270d8 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/service.yaml @@ -0,0 +1,63 @@ +{{- if .Values.server.service.enabled -}} +apiVersion: v1 +kind: Service +metadata: +{{- if .Values.server.service.annotations }} + annotations: +{{ toYaml .Values.server.service.annotations | indent 4 }} +{{- end }} + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} +{{- if .Values.server.service.labels }} +{{ toYaml .Values.server.service.labels | indent 4 }} +{{- end }} + name: {{ template "prometheus.server.fullname" . }} + namespace: {{ include "prometheus.namespace" . 
}} +spec: +{{- if .Values.server.service.clusterIP }} + clusterIP: {{ .Values.server.service.clusterIP }} +{{- end }} +{{- if .Values.server.service.externalIPs }} + externalIPs: +{{ toYaml .Values.server.service.externalIPs | indent 4 }} +{{- end }} +{{- if .Values.server.service.loadBalancerIP }} + loadBalancerIP: {{ .Values.server.service.loadBalancerIP }} +{{- end }} +{{- if .Values.server.service.loadBalancerSourceRanges }} + loadBalancerSourceRanges: + {{- range $cidr := .Values.server.service.loadBalancerSourceRanges }} + - {{ $cidr }} + {{- end }} +{{- end }} + ports: + - name: http + port: {{ .Values.server.service.servicePort }} + protocol: TCP + targetPort: 9090 + {{- if .Values.server.service.nodePort }} + nodePort: {{ .Values.server.service.nodePort }} + {{- end }} + {{- if .Values.server.service.gRPC.enabled }} + - name: grpc + port: {{ .Values.server.service.gRPC.servicePort }} + protocol: TCP + targetPort: 10901 + {{- if .Values.server.service.gRPC.nodePort }} + nodePort: {{ .Values.server.service.gRPC.nodePort }} + {{- end }} + {{- end }} +{{- if .Values.server.service.additionalPorts }} +{{ toYaml .Values.server.service.additionalPorts | indent 4 }} +{{- end }} + selector: + {{- if and .Values.server.statefulSet.enabled .Values.server.service.statefulsetReplica.enabled }} + statefulset.kubernetes.io/pod-name: {{ template "prometheus.server.fullname" . }}-{{ .Values.server.service.statefulsetReplica.replica }} + {{- else -}} + {{- include "prometheus.server.matchLabels" . 
| nindent 4 }} +{{- if .Values.server.service.sessionAffinity }} + sessionAffinity: {{ .Values.server.service.sessionAffinity }} +{{- end }} + {{- end }} + type: "{{ .Values.server.service.type }}" +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/serviceaccount.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/serviceaccount.yaml new file mode 100644 index 0000000000..6d5ab0c7d9 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/serviceaccount.yaml @@ -0,0 +1,16 @@ +{{- if .Values.serviceAccounts.server.create }} +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} + name: {{ template "prometheus.serviceAccountName.server" . }} + namespace: {{ include "prometheus.namespace" . }} + annotations: +{{ toYaml .Values.serviceAccounts.server.annotations | indent 4 }} +{{- if kindIs "bool" .Values.server.automountServiceAccountToken }} +automountServiceAccountToken: {{ .Values.server.automountServiceAccountToken }} +{{- else if kindIs "bool" .Values.serviceAccounts.server.automountServiceAccountToken }} +automountServiceAccountToken: {{ .Values.serviceAccounts.server.automountServiceAccountToken }} +{{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/sts.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/sts.yaml new file mode 100644 index 0000000000..6200555df1 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/sts.yaml @@ -0,0 +1,436 @@ +{{- if .Values.server.statefulSet.enabled -}} +apiVersion: apps/v1 +kind: StatefulSet +metadata: +{{- if .Values.server.statefulSet.annotations }} + annotations: + {{ toYaml .Values.server.statefulSet.annotations | nindent 4 }} +{{- end }} + labels: + {{- include "prometheus.server.labels" . 
| nindent 4 }} + {{- if .Values.server.statefulSet.labels}} + {{ toYaml .Values.server.statefulSet.labels | nindent 4 }} + {{- end}} + name: {{ template "prometheus.server.fullname" . }} + namespace: {{ include "prometheus.namespace" . }} +spec: + {{- if semverCompare ">= 1.27.x" (include "prometheus.kubeVersion" .) }} + persistentVolumeClaimRetentionPolicy: + whenDeleted: {{ ternary "Delete" "Retain" .Values.server.statefulSet.pvcDeleteOnStsDelete }} + whenScaled: {{ ternary "Delete" "Retain" .Values.server.statefulSet.pvcDeleteOnStsScale }} + {{- end }} + serviceName: {{ template "prometheus.server.fullname" . }}-headless + selector: + matchLabels: + {{- include "prometheus.server.matchLabels" . | nindent 6 }} + replicas: {{ .Values.server.replicaCount }} + revisionHistoryLimit: {{ .Values.server.revisionHistoryLimit }} + podManagementPolicy: {{ .Values.server.statefulSet.podManagementPolicy }} + template: + metadata: + {{- if .Values.server.podAnnotations }} + annotations: + {{ toYaml .Values.server.podAnnotations | nindent 8 }} + {{- end }} + labels: + {{- include "prometheus.server.labels" . | nindent 8 }} + {{- if .Values.server.podLabels}} + {{ toYaml .Values.server.podLabels | nindent 8 }} + {{- end}} + spec: +{{- if .Values.server.priorityClassName }} + priorityClassName: "{{ .Values.server.priorityClassName }}" +{{- end }} +{{- if .Values.server.schedulerName }} + schedulerName: "{{ .Values.server.schedulerName }}" +{{- end }} +{{- if semverCompare ">=1.13-0" .Capabilities.KubeVersion.GitVersion }} + {{- if or (.Values.server.enableServiceLinks) (eq (.Values.server.enableServiceLinks | toString) "") }} + enableServiceLinks: true + {{- else }} + enableServiceLinks: false + {{- end }} +{{- end }} + serviceAccountName: {{ template "prometheus.serviceAccountName.server" . 
}} +{{- if kindIs "bool" .Values.server.automountServiceAccountToken }} + automountServiceAccountToken: {{ .Values.server.automountServiceAccountToken }} +{{- end }} + {{- if .Values.server.extraInitContainers }} + initContainers: +{{ toYaml .Values.server.extraInitContainers | indent 8 }} + {{- end }} + containers: + {{- if .Values.configmapReload.prometheus.enabled }} + - name: {{ template "prometheus.name" . }}-{{ .Values.server.name }}-{{ .Values.configmapReload.prometheus.name }} + {{- if .Values.configmapReload.prometheus.image.digest }} + image: "{{ .Values.configmapReload.prometheus.image.repository }}@{{ .Values.configmapReload.prometheus.image.digest }}" + {{- else }} + image: "{{ .Values.configmapReload.prometheus.image.repository }}:{{ .Values.configmapReload.prometheus.image.tag }}" + {{- end }} + imagePullPolicy: "{{ .Values.configmapReload.prometheus.image.pullPolicy }}" + {{- with .Values.configmapReload.prometheus.containerSecurityContext }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + args: + - --watched-dir=/etc/config + {{- $default_url := "http://127.0.0.1:9090/-/reload" }} + {{- with .Values.server.prefixURL }} + {{- $default_url = printf "http://127.0.0.1:9090%s/-/reload" . }} + {{- end }} + {{- if .Values.configmapReload.prometheus.containerPort }} + - --listen-address=0.0.0.0:{{ .Values.configmapReload.prometheus.containerPort }} + {{- end }} + - --reload-url={{ default $default_url .Values.configmapReload.reloadUrl }} + {{- range $key, $value := .Values.configmapReload.prometheus.extraArgs }} + {{- if $value }} + - --{{ $key }}={{ $value }} + {{- else }} + - --{{ $key }} + {{- end }} + {{- end }} + {{- range .Values.configmapReload.prometheus.extraVolumeDirs }} + - --watched-dir={{ . }} + {{- end }} + {{- with .Values.configmapReload.env }} + env: + {{- toYaml . 
| nindent 12 }} + {{- end }} + {{- if .Values.configmapReload.prometheus.containerPort }} + ports: + - containerPort: {{ .Values.configmapReload.prometheus.containerPort }} + {{- if .Values.configmapReload.prometheus.containerPortName }} + name: {{ .Values.configmapReload.prometheus.containerPortName }} + {{- end }} + {{- end }} + {{- with .Values.configmapReload.prometheus.livenessProbe }} + livenessProbe: + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.configmapReload.prometheus.readinessProbe }} + readinessProbe: + {{- toYaml . | nindent 12 }} + {{- end }} + {{- if .Values.configmapReload.prometheus.startupProbe }} + {{- $startupProbe := omit .Values.configmapReload.prometheus.startupProbe "enabled" }} + startupProbe: + {{- toYaml $startupProbe | nindent 12 }} + {{- end }} + {{- with .Values.configmapReload.prometheus.resources }} + resources: + {{- toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: config-volume + mountPath: /etc/config + readOnly: true + {{- with .Values.configmapReload.prometheus.extraVolumeMounts }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- range .Values.configmapReload.prometheus.extraConfigmapMounts }} + - name: {{ $.Values.configmapReload.prometheus.name }}-{{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath }} + readOnly: {{ .readOnly }} + {{- end }} + {{- end }} + + - name: {{ template "prometheus.name" . }}-{{ .Values.server.name }} + {{- if .Values.server.image.digest }} + image: "{{ .Values.server.image.repository }}@{{ .Values.server.image.digest }}" + {{- else }} + image: "{{ .Values.server.image.repository }}:{{ .Values.server.image.tag | default .Chart.AppVersion }}" + {{- end }} + imagePullPolicy: "{{ .Values.server.image.pullPolicy }}" + {{- with .Values.server.command }} + command: + {{- toYaml . 
| nindent 12 }} + {{- end }} + {{- if .Values.server.env }} + env: +{{ toYaml .Values.server.env | indent 12}} + {{- end }} + args: + {{- if .Values.server.defaultFlagsOverride }} + {{ toYaml .Values.server.defaultFlagsOverride | nindent 12}} + {{- else }} + {{- if .Values.server.prefixURL }} + - --web.route-prefix={{ .Values.server.prefixURL }} + {{- end }} + {{- if .Values.server.retention }} + - --storage.tsdb.retention.time={{ .Values.server.retention }} + {{- end }} + {{- if .Values.server.retentionSize }} + - --storage.tsdb.retention.size={{ .Values.server.retentionSize }} + {{- end }} + - --config.file={{ .Values.server.configPath }} + {{- if .Values.server.storagePath }} + - --storage.tsdb.path={{ .Values.server.storagePath }} + {{- else }} + - --storage.tsdb.path={{ .Values.server.persistentVolume.mountPath }} + {{- end }} + - --web.console.libraries=/etc/prometheus/console_libraries + - --web.console.templates=/etc/prometheus/consoles + {{- range .Values.server.extraFlags }} + - --{{ . }} + {{- end }} + {{- range $key, $value := .Values.server.extraArgs }} + {{- if $value }} + - --{{ $key }}={{ $value }} + {{- else }} + - --{{ $key }} + {{- end }} + {{- end }} + {{- if .Values.server.baseURL }} + - --web.external-url={{ .Values.server.baseURL }} + {{- end }} + {{- end }} + ports: + - containerPort: 9090 + {{- if .Values.server.portName }} + name: {{ .Values.server.portName }} + {{- end }} + {{- if .Values.server.hostPort }} + hostPort: {{ .Values.server.hostPort }} + {{- end }} + readinessProbe: + {{- if not .Values.server.tcpSocketProbeEnabled }} + httpGet: + path: {{ .Values.server.prefixURL }}/-/ready + port: 9090 + scheme: {{ .Values.server.probeScheme }} + {{- with .Values.server.probeHeaders }} + httpHeaders: +{{- toYaml . 
| nindent 14 }} + {{- end }} + {{- else }} + tcpSocket: + port: 9090 + {{- end }} + initialDelaySeconds: {{ .Values.server.readinessProbeInitialDelay }} + periodSeconds: {{ .Values.server.readinessProbePeriodSeconds }} + timeoutSeconds: {{ .Values.server.readinessProbeTimeout }} + failureThreshold: {{ .Values.server.readinessProbeFailureThreshold }} + successThreshold: {{ .Values.server.readinessProbeSuccessThreshold }} + livenessProbe: + {{- if not .Values.server.tcpSocketProbeEnabled }} + httpGet: + path: {{ .Values.server.prefixURL }}/-/healthy + port: 9090 + scheme: {{ .Values.server.probeScheme }} + {{- with .Values.server.probeHeaders }} + httpHeaders: +{{- toYaml . | nindent 14 }} + {{- end }} + {{- else }} + tcpSocket: + port: 9090 + {{- end }} + initialDelaySeconds: {{ .Values.server.livenessProbeInitialDelay }} + periodSeconds: {{ .Values.server.livenessProbePeriodSeconds }} + timeoutSeconds: {{ .Values.server.livenessProbeTimeout }} + failureThreshold: {{ .Values.server.livenessProbeFailureThreshold }} + successThreshold: {{ .Values.server.livenessProbeSuccessThreshold }} + {{- if .Values.server.startupProbe.enabled }} + startupProbe: + {{- if not .Values.server.tcpSocketProbeEnabled }} + httpGet: + path: {{ .Values.server.prefixURL }}/-/healthy + port: 9090 + scheme: {{ .Values.server.probeScheme }} + {{- if .Values.server.probeHeaders }} + httpHeaders: + {{- range .Values.server.probeHeaders}} + - name: {{ .name }} + value: {{ .value }} + {{- end }} + {{- end }} + {{- else }} + tcpSocket: + port: 9090 + {{- end }} + failureThreshold: {{ .Values.server.startupProbe.failureThreshold }} + periodSeconds: {{ .Values.server.startupProbe.periodSeconds }} + timeoutSeconds: {{ .Values.server.startupProbe.timeoutSeconds }} + {{- end }} + {{- with .Values.server.resources }} + resources: + {{- toYaml . 
| nindent 12 }} + {{- end }} + volumeMounts: + - name: config-volume + mountPath: /etc/config + - name: {{ ternary .Values.server.persistentVolume.statefulSetNameOverride "storage-volume" (and .Values.server.persistentVolume.enabled (not (empty .Values.server.persistentVolume.statefulSetNameOverride))) }} + mountPath: {{ .Values.server.persistentVolume.mountPath }} + subPath: "{{ .Values.server.persistentVolume.subPath }}" + {{- range .Values.server.extraHostPathMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath }} + readOnly: {{ .readOnly }} + {{- end }} + {{- range .Values.server.extraConfigmapMounts }} + - name: {{ $.Values.server.name }}-{{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath }} + readOnly: {{ .readOnly }} + {{- end }} + {{- range .Values.server.extraSecretMounts }} + - name: {{ .name }} + mountPath: {{ .mountPath }} + subPath: {{ .subPath }} + readOnly: {{ .readOnly }} + {{- end }} + {{- if .Values.server.extraVolumeMounts }} + {{ toYaml .Values.server.extraVolumeMounts | nindent 12 }} + {{- end }} + {{- with .Values.server.containerSecurityContext }} + securityContext: + {{- toYaml . 
| nindent 12 }} + {{- end }} + {{- if .Values.server.sidecarContainers }} + {{- range $name, $spec := .Values.server.sidecarContainers }} + - name: {{ $name }} + {{- if kindIs "string" $spec }} + {{- tpl $spec $ | nindent 10 }} + {{- else }} + {{- toYaml $spec | nindent 10 }} + {{- end }} + {{- end }} + {{- end }} + hostNetwork: {{ .Values.server.hostNetwork }} + {{- if .Values.server.dnsPolicy }} + dnsPolicy: {{ .Values.server.dnsPolicy }} + {{- end }} + {{- if .Values.imagePullSecrets }} + imagePullSecrets: +{{ toYaml .Values.imagePullSecrets | indent 8 }} + {{- end }} + {{- if .Values.server.nodeSelector }} + nodeSelector: +{{ toYaml .Values.server.nodeSelector | indent 8 }} + {{- end }} + {{- if .Values.server.hostAliases }} + hostAliases: +{{ toYaml .Values.server.hostAliases | indent 8 }} + {{- end }} + {{- if .Values.server.dnsConfig }} + dnsConfig: +{{ toYaml .Values.server.dnsConfig | indent 8 }} + {{- end }} + {{- with .Values.server.securityContext }} + securityContext: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- if .Values.server.tolerations }} + tolerations: +{{ toYaml .Values.server.tolerations | indent 8 }} + {{- end }} + {{- if or .Values.server.affinity .Values.server.podAntiAffinity }} + affinity: + {{- end }} + {{- with .Values.server.affinity }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- if eq .Values.server.podAntiAffinity "hard" }} + podAntiAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + - topologyKey: {{ .Values.server.podAntiAffinityTopologyKey }} + labelSelector: + matchExpressions: + - {key: app.kubernetes.io/name, operator: In, values: [{{ template "prometheus.name" . 
}}]} + {{- else if eq .Values.server.podAntiAffinity "soft" }} + podAntiAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + podAffinityTerm: + topologyKey: {{ .Values.server.podAntiAffinityTopologyKey }} + labelSelector: + matchExpressions: + - {key: app.kubernetes.io/name, operator: In, values: [{{ template "prometheus.name" . }}]} + {{- end }} + {{- with .Values.server.topologySpreadConstraints }} + topologySpreadConstraints: + {{- toYaml . | nindent 8 }} + {{- end }} + terminationGracePeriodSeconds: {{ .Values.server.terminationGracePeriodSeconds }} + volumes: + - name: config-volume + {{- if empty .Values.server.configFromSecret }} + configMap: + name: {{ if .Values.server.configMapOverrideName }}{{ .Release.Name }}-{{ .Values.server.configMapOverrideName }}{{- else }}{{ template "prometheus.server.fullname" . }}{{- end }} + {{- else }} + secret: + secretName: {{ .Values.server.configFromSecret }} + {{- end }} + {{- range .Values.server.extraHostPathMounts }} + - name: {{ .name }} + hostPath: + path: {{ .hostPath }} + {{- end }} + {{- range .Values.configmapReload.prometheus.extraConfigmapMounts }} + - name: {{ $.Values.configmapReload.prometheus.name }}-{{ .name }} + configMap: + name: {{ .configMap }} + {{- end }} + {{- range .Values.server.extraConfigmapMounts }} + - name: {{ $.Values.server.name }}-{{ .name }} + configMap: + name: {{ .configMap }} + {{- end }} + {{- range .Values.server.extraSecretMounts }} + - name: {{ .name }} + secret: + secretName: {{ .secretName }} + {{- with .optional }} + optional: {{ . }} + {{- end }} + {{- end }} + {{- range .Values.configmapReload.prometheus.extraConfigmapMounts }} + - name: {{ .name }} + configMap: + name: {{ .configMap }} + {{- with .optional }} + optional: {{ . 
}} + {{- end }} + {{- end }} +{{- if .Values.server.extraVolumes }} +{{ toYaml .Values.server.extraVolumes | indent 8}} +{{- end }} +{{- if .Values.server.persistentVolume.enabled }} + volumeClaimTemplates: + - apiVersion: v1 + kind: PersistentVolumeClaim + metadata: + name: {{ .Values.server.persistentVolume.statefulSetNameOverride | default "storage-volume" }} + {{- if .Values.server.persistentVolume.annotations }} + annotations: +{{ toYaml .Values.server.persistentVolume.annotations | indent 10 }} + {{- end }} + {{- if .Values.server.persistentVolume.labels }} + labels: +{{ toYaml .Values.server.persistentVolume.labels | indent 10 }} + {{- end }} + spec: + accessModes: +{{ toYaml .Values.server.persistentVolume.accessModes | indent 10 }} + resources: + requests: + storage: "{{ .Values.server.persistentVolume.size }}" + {{- if .Values.server.persistentVolume.storageClass }} + {{- if (eq "-" .Values.server.persistentVolume.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: "{{ .Values.server.persistentVolume.storageClass }}" + {{- end }} + {{- end }} +{{- else }} + - name: storage-volume + emptyDir: + {{- if .Values.server.emptyDir.sizeLimit }} + sizeLimit: {{ .Values.server.emptyDir.sizeLimit }} + {{- else }} + {} + {{- end -}} +{{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/templates/vpa.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/vpa.yaml new file mode 100644 index 0000000000..cd07ad8b7d --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/templates/vpa.yaml @@ -0,0 +1,26 @@ +{{- if .Values.server.verticalAutoscaler.enabled -}} +{{- if .Capabilities.APIVersions.Has "autoscaling.k8s.io/v1/VerticalPodAutoscaler" }} +apiVersion: autoscaling.k8s.io/v1 +{{- else }} +apiVersion: autoscaling.k8s.io/v1beta2 +{{- end }} +kind: VerticalPodAutoscaler +metadata: + name: {{ template "prometheus.server.fullname" . }}-vpa + namespace: {{ include "prometheus.namespace" . 
}} + labels: + {{- include "prometheus.server.labels" . | nindent 4 }} +spec: + targetRef: + apiVersion: "apps/v1" +{{- if .Values.server.statefulSet.enabled }} + kind: StatefulSet +{{- else }} + kind: Deployment +{{- end }} + name: {{ template "prometheus.server.fullname" . }} + updatePolicy: + updateMode: {{ .Values.server.verticalAutoscaler.updateMode | default "Off" | quote }} + resourcePolicy: + containerPolicies: {{ .Values.server.verticalAutoscaler.containerPolicies | default list | toYaml | trim | nindent 4 }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/values.schema.json b/charts/kasten/k10/7.0.1101/charts/prometheus/values.schema.json new file mode 100644 index 0000000000..b2d8af26c3 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/values.schema.json @@ -0,0 +1,752 @@ +{ + "$schema": "http://json-schema.org/schema#", + "type": "object", + "properties": { + "alertRelabelConfigs": { + "type": "object" + }, + "alertmanager": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean" + }, + "persistence": { + "type": "object", + "properties": { + "size": { + "type": "string" + } + } + }, + "podSecurityContext": { + "type": "object", + "properties": { + "fsGroup": { + "type": "integer" + }, + "runAsGroup": { + "type": "integer" + }, + "runAsNonRoot": { + "type": "boolean" + }, + "runAsUser": { + "type": "integer" + } + } + } + } + }, + "configmapReload": { + "type": "object", + "properties": { + "env": { + "type": "array" + }, + "prometheus": { + "type": "object", + "properties": { + "containerSecurityContext": { + "type": "object" + }, + "enabled": { + "type": "boolean" + }, + "extraArgs": { + "type": "object" + }, + "extraConfigmapMounts": { + "type": "array" + }, + "extraVolumeDirs": { + "type": "array" + }, + "extraVolumeMounts": { + "type": "array" + }, + "image": { + "type": "object", + "properties": { + "digest": { + "type": "string" + }, + "pullPolicy": { + "type": "string" + }, + "repository": 
{ + "type": "string" + }, + "tag": { + "type": "string" + } + } + }, + "name": { + "type": "string" + }, + "resources": { + "type": "object" + } + } + }, + "reloadUrl": { + "type": "string" + } + } + }, + "extraManifests": { + "type": "array" + }, + "extraScrapeConfigs": { + "type": "string" + }, + "forceNamespace": { + "type": "string" + }, + "imagePullSecrets": { + "type": "array" + }, + "kube-state-metrics": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean" + } + } + }, + "networkPolicy": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean" + } + } + }, + "podSecurityPolicy": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean" + } + } + }, + "prometheus-node-exporter": { + "type": "object", + "properties": { + "containerSecurityContext": { + "type": "object", + "properties": { + "allowPrivilegeEscalation": { + "type": "boolean" + } + } + }, + "enabled": { + "type": "boolean" + }, + "rbac": { + "type": "object", + "properties": { + "pspEnabled": { + "type": "boolean" + } + } + } + } + }, + "prometheus-pushgateway": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean" + }, + "serviceAnnotations": { + "type": "object", + "properties": { + "prometheus.io/probe": { + "type": "string" + } + } + } + } + }, + "rbac": { + "type": "object", + "properties": { + "create": { + "type": "boolean" + } + } + }, + "ruleFiles": { + "type": "object" + }, + "server": { + "type": "object", + "properties": { + "affinity": { + "type": "object" + }, + "alertmanagers": { + "type": "array" + }, + "baseURL": { + "type": "string" + }, + "clusterRoleNameOverride": { + "type": "string" + }, + "command": { + "type": "array" + }, + "configMapAnnotations": { + "type": "object" + }, + "configMapOverrideName": { + "type": "string" + }, + "configPath": { + "type": "string" + }, + "containerSecurityContext": { + "type": "object" + }, + "defaultFlagsOverride": { + "type": "array" + }, + 
"deploymentAnnotations": { + "type": "object" + }, + "dnsConfig": { + "type": "object" + }, + "dnsPolicy": { + "type": "string" + }, + "emptyDir": { + "type": "object", + "properties": { + "sizeLimit": { + "type": "string" + } + } + }, + "enableServiceLinks": { + "type": "boolean" + }, + "env": { + "type": "array" + }, + "exemplars": { + "type": "object" + }, + "extraArgs": { + "type": "object" + }, + "extraConfigmapLabels": { + "type": "object" + }, + "extraConfigmapMounts": { + "type": "array" + }, + "extraFlags": { + "type": "array", + "items": { + "type": "string" + } + }, + "extraHostPathMounts": { + "type": "array" + }, + "extraInitContainers": { + "type": "array" + }, + "extraSecretMounts": { + "type": "array" + }, + "extraVolumeMounts": { + "type": "array" + }, + "extraVolumes": { + "type": "array" + }, + "fullnameOverride": { + "type": "string" + }, + "global": { + "type": "object", + "properties": { + "evaluation_interval": { + "type": "string" + }, + "scrape_interval": { + "type": "string" + }, + "scrape_timeout": { + "type": "string" + } + } + }, + "hostAliases": { + "type": "array" + }, + "hostNetwork": { + "type": "boolean" + }, + "image": { + "type": "object", + "properties": { + "digest": { + "type": "string" + }, + "pullPolicy": { + "type": "string" + }, + "repository": { + "type": "string" + }, + "tag": { + "type": "string" + } + } + }, + "ingress": { + "type": "object", + "properties": { + "annotations": { + "type": "object" + }, + "enabled": { + "type": "boolean" + }, + "extraLabels": { + "type": "object" + }, + "extraPaths": { + "type": "array" + }, + "hosts": { + "type": "array" + }, + "path": { + "type": "string" + }, + "pathType": { + "type": "string" + }, + "tls": { + "type": "array" + } + } + }, + "livenessProbeFailureThreshold": { + "type": "integer" + }, + "livenessProbeInitialDelay": { + "type": "integer" + }, + "livenessProbePeriodSeconds": { + "type": "integer" + }, + "livenessProbeSuccessThreshold": { + "type": "integer" + }, + 
"livenessProbeTimeout": { + "type": "integer" + }, + "name": { + "type": "string" + }, + "nodeSelector": { + "type": "object" + }, + "persistentVolume": { + "type": "object", + "properties": { + "accessModes": { + "type": "array", + "items": { + "type": "string" + } + }, + "annotations": { + "type": "object" + }, + "enabled": { + "type": "boolean" + }, + "existingClaim": { + "type": "string" + }, + "labels": { + "type": "object" + }, + "mountPath": { + "type": "string" + }, + "size": { + "type": "string" + }, + "statefulSetNameOverride": { + "type": "string" + }, + "subPath": { + "type": "string" + } + } + }, + "podAnnotations": { + "type": "object" + }, + "podAntiAffinity": { + "type": "string", + "enum": ["", "soft", "hard"], + "default": "" + }, + "podAntiAffinityTopologyKey": { + "type": "string" + }, + "podDisruptionBudget": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean" + }, + "maxUnavailable": { + "type": [ + "string", + "integer" + ] + } + } + }, + "podLabels": { + "type": "object" + }, + "podSecurityPolicy": { + "type": "object", + "properties": { + "annotations": { + "type": "object" + } + } + }, + "portName": { + "type": "string" + }, + "prefixURL": { + "type": "string" + }, + "priorityClassName": { + "type": "string" + }, + "probeHeaders": { + "type": "array" + }, + "probeScheme": { + "type": "string" + }, + "readinessProbeFailureThreshold": { + "type": "integer" + }, + "readinessProbeInitialDelay": { + "type": "integer" + }, + "readinessProbePeriodSeconds": { + "type": "integer" + }, + "readinessProbeSuccessThreshold": { + "type": "integer" + }, + "readinessProbeTimeout": { + "type": "integer" + }, + "releaseNamespace": { + "type": "boolean" + }, + "remoteRead": { + "type": "array" + }, + "remoteWrite": { + "type": "array" + }, + "replicaCount": { + "type": "integer" + }, + "resources": { + "type": "object" + }, + "retention": { + "type": "string" + }, + "retentionSize": { + "type": "string" + }, + "revisionHistoryLimit": { 
+ "type": "integer" + }, + "securityContext": { + "type": "object", + "properties": { + "fsGroup": { + "type": "integer" + }, + "runAsGroup": { + "type": "integer" + }, + "runAsNonRoot": { + "type": "boolean" + }, + "runAsUser": { + "type": "integer" + } + } + }, + "service": { + "type": "object", + "properties": { + "additionalPorts": { + "type": "array" + }, + "annotations": { + "type": "object" + }, + "clusterIP": { + "type": "string" + }, + "enabled": { + "type": "boolean" + }, + "externalIPs": { + "type": "array" + }, + "gRPC": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean" + }, + "servicePort": { + "type": "integer" + } + } + }, + "labels": { + "type": "object" + }, + "loadBalancerIP": { + "type": "string" + }, + "loadBalancerSourceRanges": { + "type": "array" + }, + "servicePort": { + "type": "integer" + }, + "sessionAffinity": { + "type": "string" + }, + "statefulsetReplica": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean" + }, + "replica": { + "type": "integer" + } + } + }, + "type": { + "type": "string" + } + } + }, + "sidecarContainers": { + "type": "object" + }, + "sidecarTemplateValues": { + "type": "object" + }, + "startupProbe": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean" + }, + "failureThreshold": { + "type": "integer" + }, + "periodSeconds": { + "type": "integer" + }, + "timeoutSeconds": { + "type": "integer" + } + } + }, + "statefulSet": { + "type": "object", + "properties": { + "annotations": { + "type": "object" + }, + "enabled": { + "type": "boolean" + }, + "headless": { + "type": "object", + "properties": { + "annotations": { + "type": "object" + }, + "gRPC": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean" + }, + "servicePort": { + "type": "integer" + } + } + }, + "labels": { + "type": "object" + }, + "servicePort": { + "type": "integer" + } + } + }, + "labels": { + "type": "object" + }, + "podManagementPolicy": { + "type": "string" 
+ }, + "pvcDeleteOnStsDelete": { + "type": "boolean" + }, + "pvcDeleteOnStsScale": { + "type": "boolean" + } + } + }, + "storagePath": { + "type": "string" + }, + "strategy": { + "type": "object", + "properties": { + "type": { + "type": "string" + } + } + }, + "tcpSocketProbeEnabled": { + "type": "boolean" + }, + "terminationGracePeriodSeconds": { + "type": "integer" + }, + "tolerations": { + "type": "array" + }, + "topologySpreadConstraints": { + "type": "array" + }, + "tsdb": { + "type": "object" + }, + "verticalAutoscaler": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean" + } + } + } + } + }, + "scrapeConfigFiles": { + "type": "array" + }, + "serverFiles": { + "type": "object", + "properties": { + "alerting_rules.yml": { + "type": "object" + }, + "alerts": { + "type": "object" + }, + "prometheus.yml": { + "type": "object", + "properties": { + "rule_files": { + "type": "array", + "items": { + "type": "string" + } + }, + "scrape_configs": { + "type": "array", + "items": { + "type": "object", + "properties": { + "job_name": { + "type": "string" + }, + "static_configs": { + "type": "array", + "items": { + "type": "object", + "properties": { + "targets": { + "type": "array", + "items": { + "type": "string" + } + } + } + } + } + } + } + } + } + }, + "recording_rules.yml": { + "type": "object" + }, + "rules": { + "type": "object" + } + } + }, + "serviceAccounts": { + "type": "object", + "properties": { + "server": { + "type": "object", + "properties": { + "annotations": { + "type": "object" + }, + "create": { + "type": "boolean" + }, + "name": { + "type": "string" + }, + "automountServiceAccountToken": { + "type": "boolean" + } + } + } + } + } + } +} diff --git a/charts/kasten/k10/7.0.1101/charts/prometheus/values.yaml b/charts/kasten/k10/7.0.1101/charts/prometheus/values.yaml new file mode 100644 index 0000000000..d940566662 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/charts/prometheus/values.yaml @@ -0,0 +1,1309 @@ +# 
yaml-language-server: $schema=values.schema.json +# Default values for prometheus. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. + +rbac: + create: true + +podSecurityPolicy: + enabled: false + +imagePullSecrets: [] +# - name: "image-pull-secret" + +## Define serviceAccount names for components. Defaults to component's fully qualified name. +## +serviceAccounts: + server: + create: true + name: "" + annotations: {} + + ## Opt out of automounting Kubernetes API credentials. + ## It will be overridden by the server.automountServiceAccountToken value, if set. + # automountServiceAccountToken: false + +## Additional labels to attach to all resources +commonMetaLabels: {} + +## Monitors ConfigMap changes and POSTs to a URL +## Ref: https://github.com/prometheus-operator/prometheus-operator/tree/main/cmd/prometheus-config-reloader +## +configmapReload: + ## URL for configmap-reload to use for reloads + ## + reloadUrl: "" + + ## env sets environment variables to pass to the container. Can be set as name/value pairs, + ## read from secrets or configmaps. + env: [] + # - name: SOMEVAR + # value: somevalue + # - name: PASSWORD + # valueFrom: + # secretKeyRef: + # name: mysecret + # key: password + # optional: false + + prometheus: + ## If false, the configmap-reload container will not be deployed + ## + enabled: true + + ## configmap-reload container name + ## + name: configmap-reload + + ## configmap-reload container image + ## + image: + repository: quay.io/prometheus-operator/prometheus-config-reloader + tag: v0.74.0 + # When digest is set to a non-empty value, images will be pulled by digest (regardless of tag value).
+ digest: "" + pullPolicy: IfNotPresent + + ## config-reloader's container port and port name for probes and metrics + containerPort: 8080 + containerPortName: metrics + + ## Additional configmap-reload container arguments + ## Set to null for argumentless flags + ## + extraArgs: {} + + ## Additional configmap-reload volume directories + ## + extraVolumeDirs: [] + + ## Additional configmap-reload volume mounts + ## + extraVolumeMounts: [] + + ## Additional configmap-reload mounts + ## + extraConfigmapMounts: [] + # - name: prometheus-alerts + # mountPath: /etc/alerts.d + # subPath: "" + # configMap: prometheus-alerts + # readOnly: true + + ## Security context to be added to configmap-reload container + containerSecurityContext: {} + + ## Settings for Prometheus reloader's readiness, liveness and startup probes + ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ + ## + + livenessProbe: + httpGet: + path: /healthz + port: metrics + scheme: HTTP + periodSeconds: 10 + initialDelaySeconds: 2 + + readinessProbe: + httpGet: + path: /healthz + port: metrics + scheme: HTTP + periodSeconds: 10 + + startupProbe: + enabled: false + httpGet: + path: /healthz + port: metrics + scheme: HTTP + periodSeconds: 10 + + ## configmap-reload resource requests and limits + ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## + resources: {} + +server: + ## Prometheus server container name + ## + name: server + + ## Opt out of automounting Kubernetes API credentials. + ## If set it will override serviceAccounts.server.automountServiceAccountToken value for ServiceAccount. 
+ # automountServiceAccountToken: false + + ## Use a ClusterRole (and ClusterRoleBinding) + ## - If set to false - we define a RoleBinding in the defined namespaces ONLY + ## + ## NB: because we need a Role with nonResourceURL's ("/metrics") - you must get someone with Cluster-admin privileges to define this role for you, before running with this setting enabled. + ## This makes prometheus work - for users who do not have ClusterAdmin privs, but want prometheus to operate on their own namespaces, instead of clusterwide. + ## + ## You MUST also set namespaces to the ones you have access to and want monitored by Prometheus. + ## + # useExistingClusterRoleName: nameofclusterrole + + ## If set it will override prometheus.server.fullname value for ClusterRole and ClusterRoleBinding + ## + clusterRoleNameOverride: "" + + # Enable only the release namespace for monitoring. By default all namespaces are monitored. + # If releaseNamespace and namespaces are both set, a merged list will be monitored. + releaseNamespace: false + + ## namespaces to monitor (instead of monitoring all - clusterwide). Needed if you want to run without Cluster-admin privileges. + # namespaces: + # - yournamespace + + # sidecarContainers - add more containers to prometheus server + # Key/Value where Key is the sidecar `- name: ` + # Example: + # sidecarContainers: + # webserver: + # image: nginx + # OR for adding OAuth authentication to Prometheus + # sidecarContainers: + # oauth-proxy: + # image: quay.io/oauth2-proxy/oauth2-proxy:v7.1.2 + # args: + # - --upstream=http://127.0.0.1:9090 + # - --http-address=0.0.0.0:8081 + # - ... + # ports: + # - containerPort: 8081 + # name: oauth-proxy + # protocol: TCP + # resources: {} + sidecarContainers: {} + + # sidecarTemplateValues - context to be used in template for sidecarContainers + # Example: + # sidecarTemplateValues: *your-custom-globals + # sidecarContainers: + # webserver: |- + # {{ include "webserver-container-template" .
}} + # Template for `webserver-container-template` might look like this: + # image: "{{ .Values.server.sidecarTemplateValues.repository }}:{{ .Values.server.sidecarTemplateValues.tag }}" + # ... + # + sidecarTemplateValues: {} + + ## Prometheus server container image + ## + image: + repository: quay.io/prometheus/prometheus + # if not set appVersion field from Chart.yaml is used + tag: "" + # When digest is set to a non-empty value, images will be pulled by digest (regardless of tag value). + digest: "" + pullPolicy: IfNotPresent + + ## Prometheus server command + ## + command: [] + + ## prometheus server priorityClassName + ## + priorityClassName: "" + + ## EnableServiceLinks indicates whether information about services should be injected + ## into pod's environment variables, matching the syntax of Docker links. + ## WARNING: the field is unsupported and will be skipped in K8s prior to v1.13.0. + ## + enableServiceLinks: true + + ## The URL prefix at which the container can be accessed. Useful in the case the '-web.external-url' includes a slug + ## so that the various internal URLs are still able to access as they are in the default case. + ## (Optional) + prefixURL: "" + + ## External URL which can access prometheus + ## May be the same as the Ingress host name + baseURL: "" + + ## Additional server container environment variables + ## + ## You specify this manually like you would a raw deployment manifest. + ## This means you can bind in environment variables from secrets. + ## + ## e.g. static environment variable: + ## - name: DEMO_GREETING + ## value: "Hello from the environment" + ## + ## e.g.
secret environment variable: + ## - name: USERNAME + ## valueFrom: + ## secretKeyRef: + ## name: mysecret + ## key: username + env: [] + + # List of flags to override default parameters, e.g: + # - --enable-feature=agent + # - --storage.agent.retention.max-time=30m + # - --config.file=/etc/config/prometheus.yml + defaultFlagsOverride: [] + + extraFlags: + - web.enable-lifecycle + ## web.enable-admin-api flag controls access to the administrative HTTP API which includes functionality such as + ## deleting time series. This is disabled by default. + # - web.enable-admin-api + ## + ## storage.tsdb.no-lockfile flag controls DB locking + # - storage.tsdb.no-lockfile + ## + ## storage.tsdb.wal-compression flag enables compression of the write-ahead log (WAL) + # - storage.tsdb.wal-compression + + ## Path to a configuration file on prometheus server container FS + configPath: /etc/config/prometheus.yml + + ### The data directory used by prometheus to set --storage.tsdb.path + ### When empty server.persistentVolume.mountPath is used instead + storagePath: "" + + global: + ## How frequently to scrape targets by default + ## + scrape_interval: 1m + ## How long until a scrape request times out + ## + scrape_timeout: 10s + ## How frequently to evaluate rules + ## + evaluation_interval: 1m + ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write + ## + remoteWrite: [] + ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_read + ## + remoteRead: [] + + ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#tsdb + ## + tsdb: {} + # out_of_order_time_window: 0s + + ## https://prometheus.io/docs/prometheus/latest/configuration/configuration/#exemplars + ## Must be enabled via --enable-feature=exemplar-storage + ## + exemplars: {} + # max_exemplars: 100000 + + ## Custom HTTP headers for Liveness/Readiness/Startup Probe + ## + ## Useful for providing HTTP Basic Auth to healthchecks +
probeHeaders: [] + # - name: "Authorization" + # value: "Bearer ABCDEabcde12345" + + ## Additional Prometheus server container arguments + ## Set to null for argumentless flags + ## + extraArgs: {} + # web.enable-remote-write-receiver: null + + ## Additional InitContainers to initialize the pod + ## + extraInitContainers: [] + + ## Additional Prometheus server Volume mounts + ## + extraVolumeMounts: [] + + ## Additional Prometheus server Volumes + ## + extraVolumes: [] + + ## Additional Prometheus server hostPath mounts + ## + extraHostPathMounts: [] + # - name: certs-dir + # mountPath: /etc/kubernetes/certs + # subPath: "" + # hostPath: /etc/kubernetes/certs + # readOnly: true + + extraConfigmapMounts: [] + # - name: certs-configmap + # mountPath: /prometheus + # subPath: "" + # configMap: certs-configmap + # readOnly: true + + ## Additional Prometheus server Secret mounts + # Defines additional mounts with secrets. Secrets must be manually created in the namespace. + extraSecretMounts: [] + # - name: secret-files + # mountPath: /etc/secrets + # subPath: "" + # secretName: prom-secret-files + # readOnly: true + + ## ConfigMap override where fullname is {{.Release.Name}}-{{.Values.server.configMapOverrideName}} + ## Defining configMapOverrideName will cause templates/server-configmap.yaml + ## to NOT generate a ConfigMap resource + ## + configMapOverrideName: "" + + ## Extra labels for Prometheus server ConfigMap (ConfigMap that holds serverFiles) + extraConfigmapLabels: {} + + ## Override the prometheus.server.fullname for all objects related to the Prometheus server + fullnameOverride: "" + + ingress: + ## If true, Prometheus server Ingress will be created + ## + enabled: false + + # For Kubernetes >= 1.18 you should specify the ingress-controller via the field ingressClassName + # See https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/#specifying-the-class-of-an-ingress + # ingressClassName: nginx + + ## Prometheus server 
Ingress annotations + ## + annotations: {} + # kubernetes.io/ingress.class: nginx + # kubernetes.io/tls-acme: 'true' + + ## Prometheus server Ingress additional labels + ## + extraLabels: {} + + ## Redirect ingress to an additional defined port on the service + # servicePort: 8081 + + ## Prometheus server Ingress hostnames with optional path + ## Must be provided if Ingress is enabled + ## + hosts: [] + # - prometheus.domain.com + # - domain.com/prometheus + + path: / + + # pathType is only for k8s >= 1.18 + pathType: Prefix + + ## Extra paths to prepend to every host configuration. This is useful when working with annotation based services. + extraPaths: [] + # - path: /* + # backend: + # serviceName: ssl-redirect + # servicePort: use-annotation + + ## Prometheus server Ingress TLS configuration + ## Secrets must be manually created in the namespace + ## + tls: [] + # - secretName: prometheus-server-tls + # hosts: + # - prometheus.domain.com + + ## Server Deployment Strategy type + strategy: + type: Recreate + + ## hostAliases allows adding entries to /etc/hosts inside the containers + hostAliases: [] + # - ip: "127.0.0.1" + # hostnames: + # - "example.com" + + ## Node tolerations for server scheduling to nodes with taints + ## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ + ## + tolerations: [] + # - key: "key" + # operator: "Equal|Exists" + # value: "value" + # effect: "NoSchedule|PreferNoSchedule|NoExecute(1.6 only)" + + ## Node labels for Prometheus server pod assignment + ## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/ + ## + nodeSelector: {} + + ## Pod affinity + ## + affinity: {} + + ## Pod anti-affinity can prevent the scheduler from placing Prometheus server replicas on the same node. + ## The value "soft" means that the scheduler should *prefer* to not schedule two replica pods onto the same node but no guarantee is provided. 
+ ## The value "hard" means that the scheduler is *required* to not schedule two replica pods onto the same node. + ## The default value "" will disable pod anti-affinity so that no anti-affinity rules will be configured (unless set in `server.affinity`). + ## + podAntiAffinity: "" + + ## If anti-affinity is enabled sets the topologyKey to use for anti-affinity. + ## This can be changed to, for example, failure-domain.beta.kubernetes.io/zone + ## + podAntiAffinityTopologyKey: kubernetes.io/hostname + + ## Pod topology spread constraints + ## ref. https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/ + topologySpreadConstraints: [] + + ## PodDisruptionBudget settings + ## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/ + ## + podDisruptionBudget: + enabled: false + maxUnavailable: 1 + # minAvailable: 1 + ## unhealthyPodEvictionPolicy is available since 1.27.0 (beta) + ## https://kubernetes.io/docs/tasks/run-application/configure-pdb/#unhealthy-pod-eviction-policy + # unhealthyPodEvictionPolicy: IfHealthyBudget + + ## Use an alternate scheduler, e.g. "stork". + ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ + ## + # schedulerName: + + persistentVolume: + ## If true, Prometheus server will create/use a Persistent Volume Claim + ## If false, use emptyDir + ## + enabled: true + + ## If set it will override the name of the created persistent volume claim + ## generated by the stateful set. 
+ ## + statefulSetNameOverride: "" + + ## Prometheus server data Persistent Volume access modes + ## Must match those of existing PV or dynamic provisioner + ## Ref: http://kubernetes.io/docs/user-guide/persistent-volumes/ + ## + accessModes: + - ReadWriteOnce + + ## Prometheus server data Persistent Volume labels + ## + labels: {} + + ## Prometheus server data Persistent Volume annotations + ## + annotations: {} + + ## Prometheus server data Persistent Volume existing claim name + ## Requires server.persistentVolume.enabled: true + ## If defined, PVC must be created manually before volume will be bound + existingClaim: "" + + ## Prometheus server data Persistent Volume mount root path + ## + mountPath: /data + + ## Prometheus server data Persistent Volume size + ## + size: 8Gi + + ## Prometheus server data Persistent Volume Storage Class + ## If defined, storageClassName: + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + # storageClass: "-" + + ## Prometheus server data Persistent Volume Binding Mode + ## If defined, volumeBindingMode: + ## If undefined (the default) or set to null, no volumeBindingMode spec is + ## set, choosing the default mode. 
+ ## + # volumeBindingMode: "" + + ## Subdirectory of Prometheus server data Persistent Volume to mount + ## Useful if the volume's root directory is not empty + ## + subPath: "" + + ## Persistent Volume Claim Selector + ## Useful if Persistent Volumes have been provisioned in advance + ## Ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#selector + ## + # selector: + # matchLabels: + # release: "stable" + # matchExpressions: + # - { key: environment, operator: In, values: [ dev ] } + + ## Persistent Volume Name + ## Useful if Persistent Volumes have been provisioned in advance and you want to use a specific one + ## + # volumeName: "" + + emptyDir: + ## Prometheus server emptyDir volume size limit + ## + sizeLimit: "" + + ## Annotations to be added to Prometheus server pods + ## + podAnnotations: {} + # iam.amazonaws.com/role: prometheus + + ## Labels to be added to Prometheus server pods + ## + podLabels: {} + + ## Prometheus AlertManager configuration + ## + alertmanagers: [] + + ## Specify if a Pod Security Policy for node-exporter must be created + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ + ## + podSecurityPolicy: + annotations: {} + ## Specify pod annotations + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl + ## + # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' + # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default' + # apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default' + + ## Use a StatefulSet if replicaCount needs to be greater than 1 (see below) + ## + replicaCount: 1 + + ## Number of old history to retain to allow rollback + ## Default Kubernetes value is set to 10 + ## + revisionHistoryLimit: 10 + + ## Annotations to be added to ConfigMap + ## + 
configMapAnnotations: {} + + ## Annotations to be added to deployment + ## + deploymentAnnotations: {} + + statefulSet: + ## If true, use a statefulset instead of a deployment for pod management. + ## This allows scaling replicas to more than 1 pod + ## + enabled: false + + annotations: {} + labels: {} + podManagementPolicy: OrderedReady + + ## Prometheus server headless service to use for the statefulset + ## + headless: + annotations: {} + labels: {} + servicePort: 80 + ## Enable gRPC port on service to allow auto discovery with thanos-querier + gRPC: + enabled: false + servicePort: 10901 + # nodePort: 10901 + + ## Statefulset's persistent volume claim retention policy + ## pvcDeleteOnStsDelete and pvcDeleteOnStsScale determine whether + ## statefulset's PVCs are deleted (true) or retained (false) on scaling down + ## and deleting statefulset, respectively. Requires 1.27.0+. + ## Ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention + ## + pvcDeleteOnStsDelete: false + pvcDeleteOnStsScale: false + + ## Prometheus server readiness and liveness probe initial delay and timeout + ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/ + ## + tcpSocketProbeEnabled: false + probeScheme: HTTP + readinessProbeInitialDelay: 30 + readinessProbePeriodSeconds: 5 + readinessProbeTimeout: 4 + readinessProbeFailureThreshold: 3 + readinessProbeSuccessThreshold: 1 + livenessProbeInitialDelay: 30 + livenessProbePeriodSeconds: 15 + livenessProbeTimeout: 10 + livenessProbeFailureThreshold: 3 + livenessProbeSuccessThreshold: 1 + startupProbe: + enabled: false + periodSeconds: 5 + failureThreshold: 30 + timeoutSeconds: 10 + + ## Prometheus server resource requests and limits + ## Ref: http://kubernetes.io/docs/user-guide/compute-resources/ + ## + resources: {} + # limits: + # cpu: 500m + # memory: 512Mi + # requests: + # cpu: 500m + # memory: 512Mi + + # Required for use in managed
kubernetes clusters (such as AWS EKS) with custom CNI (such as calico), + # because control-plane managed by AWS cannot communicate with pods' IP CIDR and admission webhooks are not working + ## + hostNetwork: false + + # When hostNetwork is enabled, this will be set to ClusterFirstWithHostNet automatically + dnsPolicy: ClusterFirst + + # Use hostPort + # hostPort: 9090 + + # Use portName + portName: "" + + ## Vertical Pod Autoscaler config + ## Ref: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler + verticalAutoscaler: + ## If true, a VPA object will be created for the controller (either StatefulSet or Deployment, based on above configs) + enabled: false + # updateMode: "Auto" + # containerPolicies: + # - containerName: 'prometheus-server' + + # Custom DNS configuration to be added to prometheus server pods + dnsConfig: {} + # nameservers: + # - 1.2.3.4 + # searches: + # - ns1.svc.cluster-domain.example + # - my.dns.search.suffix + # options: + # - name: ndots + # value: "2" + # - name: edns0 + + ## Security context to be added to server pods + ## + securityContext: + runAsUser: 65534 + runAsNonRoot: true + runAsGroup: 65534 + fsGroup: 65534 + + ## Security context to be added to server container + ## + containerSecurityContext: {} + + service: + ## If false, no Service will be created for the Prometheus server + ## + enabled: true + + annotations: {} + labels: {} + clusterIP: "" + + ## List of IP addresses at which the Prometheus server service is available + ## Ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips + ## + externalIPs: [] + + loadBalancerIP: "" + loadBalancerSourceRanges: [] + servicePort: 80 + sessionAffinity: None + type: ClusterIP + + ## Enable gRPC port on service to allow auto discovery with thanos-querier + gRPC: + enabled: false + servicePort: 10901 + # nodePort: 10901 + + ## If using a statefulSet (statefulSet.enabled=true), configure the + ## service to connect to a specific replica to
have a consistent view + ## of the data. + statefulsetReplica: + enabled: false + replica: 0 + + ## Additional port to define in the Service + additionalPorts: [] + # additionalPorts: + # - name: authenticated + # port: 8081 + # targetPort: 8081 + + ## Prometheus server pod termination grace period + ## + terminationGracePeriodSeconds: 300 + + ## Prometheus data retention period (default if not specified is 15 days) + ## + retention: "15d" + + ## Prometheus' data retention size. Supported units: B, KB, MB, GB, TB, PB, EB. + ## + retentionSize: "" + +## Prometheus server ConfigMap entries for rule files (allow prometheus labels interpolation) +ruleFiles: {} + +## Prometheus server ConfigMap entries for scrape_config_files +## (allows scrape configs defined in additional files) +## +scrapeConfigFiles: [] + +## Prometheus server ConfigMap entries +## +serverFiles: + ## Alerts configuration + ## Ref: https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/ + alerting_rules.yml: {} + # groups: + # - name: Instances + # rules: + # - alert: InstanceDown + # expr: up == 0 + # for: 5m + # labels: + # severity: page + # annotations: + # description: '{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes.' 
+ # summary: 'Instance {{ $labels.instance }} down' + ## DEPRECATED DEFAULT VALUE, unless explicitly naming your files, please use alerting_rules.yml + alerts: {} + + ## Records configuration + ## Ref: https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/ + recording_rules.yml: {} + ## DEPRECATED DEFAULT VALUE, unless explicitly naming your files, please use recording_rules.yml + rules: {} + + prometheus.yml: + rule_files: + - /etc/config/recording_rules.yml + - /etc/config/alerting_rules.yml + ## Below two files are DEPRECATED will be removed from this default values file + - /etc/config/rules + - /etc/config/alerts + + scrape_configs: + - job_name: prometheus + static_configs: + - targets: + - localhost:9090 + + # A scrape configuration for running Prometheus on a Kubernetes cluster. + # This uses separate scrape configs for cluster components (i.e. API server, node) + # and services to allow each to use different authentication configs. + # + # Kubernetes labels will be added as Prometheus labels on metrics via the + # `labelmap` relabeling action. + + # Scrape config for API servers. + # + # Kubernetes exposes API servers as endpoints to the default/kubernetes + # service so this uses `endpoints` role and uses relabelling to only keep + # the endpoints associated with the default/kubernetes service using the + # default named port `https`. This works for single API server deployments as + # well as HA API server deployments. + - job_name: 'kubernetes-apiservers' + + kubernetes_sd_configs: + - role: endpoints + + # Default to scraping over https. If required, just disable this or change to + # `http`. + scheme: https + + # This TLS & bearer token file config is used to connect to the actual scrape + # endpoints for cluster components. This is separate to discovery auth + # configuration because discovery & scraping are two separate concerns in + # Prometheus. The discovery auth config is automatic if Prometheus runs inside + # the cluster. 
Otherwise, more config options have to be provided within the + # <kubernetes_sd_config>. + tls_config: + ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt + # If your node certificates are self-signed or use a different CA to the + # master CA, then disable certificate verification below. Note that + # certificate verification is an integral part of a secure infrastructure + # so this should only be disabled in a controlled environment. You can + # disable certificate verification by uncommenting the line below. + # + insecure_skip_verify: true + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + + # Keep only the default/kubernetes service endpoints for the https port. This + # will add targets for each API server which Kubernetes adds an endpoint to + # the default/kubernetes service. + relabel_configs: + - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name] + action: keep + regex: default;kubernetes;https + + - job_name: 'kubernetes-nodes' + + # Default to scraping over https. If required, just disable this or change to + # `http`. + scheme: https + + # This TLS & bearer token file config is used to connect to the actual scrape + # endpoints for cluster components. This is separate to discovery auth + # configuration because discovery & scraping are two separate concerns in + # Prometheus. The discovery auth config is automatic if Prometheus runs inside + # the cluster. Otherwise, more config options have to be provided within the + # <kubernetes_sd_config>. + tls_config: + ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt + # If your node certificates are self-signed or use a different CA to the + # master CA, then disable certificate verification below. Note that + # certificate verification is an integral part of a secure infrastructure + # so this should only be disabled in a controlled environment. You can + # disable certificate verification by uncommenting the line below.
+ # + insecure_skip_verify: true + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + + kubernetes_sd_configs: + - role: node + + relabel_configs: + - action: labelmap + regex: __meta_kubernetes_node_label_(.+) + - target_label: __address__ + replacement: kubernetes.default.svc:443 + - source_labels: [__meta_kubernetes_node_name] + regex: (.+) + target_label: __metrics_path__ + replacement: /api/v1/nodes/$1/proxy/metrics + + + - job_name: 'kubernetes-nodes-cadvisor' + + # Default to scraping over https. If required, just disable this or change to + # `http`. + scheme: https + + # This TLS & bearer token file config is used to connect to the actual scrape + # endpoints for cluster components. This is separate to discovery auth + # configuration because discovery & scraping are two separate concerns in + # Prometheus. The discovery auth config is automatic if Prometheus runs inside + # the cluster. Otherwise, more config options have to be provided within the + # <kubernetes_sd_config>. + tls_config: + ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt + # If your node certificates are self-signed or use a different CA to the + # master CA, then disable certificate verification below. Note that + # certificate verification is an integral part of a secure infrastructure + # so this should only be disabled in a controlled environment. You can + # disable certificate verification by uncommenting the line below.
+ # + insecure_skip_verify: true + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + + kubernetes_sd_configs: + - role: node + + # This configuration will work only on kubelet 1.7.3+ + # As the scrape endpoints for cAdvisor have changed + # if you are using an older version you need to change the replacement to + # replacement: /api/v1/nodes/$1:4194/proxy/metrics + # more info here https://github.com/coreos/prometheus-operator/issues/633 + relabel_configs: + - action: labelmap + regex: __meta_kubernetes_node_label_(.+) + - target_label: __address__ + replacement: kubernetes.default.svc:443 + - source_labels: [__meta_kubernetes_node_name] + regex: (.+) + target_label: __metrics_path__ + replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor + + # Metric relabel configs to apply to samples before ingestion. + # [Metric Relabeling](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#metric_relabel_configs) + # metric_relabel_configs: + # - action: labeldrop + # regex: (kubernetes_io_hostname|failure_domain_beta_kubernetes_io_region|beta_kubernetes_io_os|beta_kubernetes_io_arch|beta_kubernetes_io_instance_type|failure_domain_beta_kubernetes_io_zone) + + # Scrape config for service endpoints. + # + # The relabeling allows the actual service scrape endpoint to be configured + # via the following annotations: + # + # * `prometheus.io/scrape`: Only scrape services that have a value of + # `true`, except if `prometheus.io/scrape-slow` is set to `true` as well. + # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need + # to set this to `https` & most likely set the `tls_config` of the scrape config. + # * `prometheus.io/path`: If the metrics path is not `/metrics` override this. + # * `prometheus.io/port`: If the metrics are exposed on a different port to the + # service then set this appropriately.
+ # * `prometheus.io/param_`: If the metrics endpoint uses parameters + # then you can set any parameter + - job_name: 'kubernetes-service-endpoints' + honor_labels: true + + kubernetes_sd_configs: + - role: endpoints + + relabel_configs: + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape] + action: keep + regex: true + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape_slow] + action: drop + regex: true + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: replace + target_label: __scheme__ + regex: (https?) + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: (.+?)(?::\d+)?;(\d+) + replacement: $1:$2 + - action: labelmap + regex: __meta_kubernetes_service_annotation_prometheus_io_param_(.+) + replacement: __param_$1 + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) + - source_labels: [__meta_kubernetes_namespace] + action: replace + target_label: namespace + - source_labels: [__meta_kubernetes_service_name] + action: replace + target_label: service + - source_labels: [__meta_kubernetes_pod_node_name] + action: replace + target_label: node + + # Scrape config for slow service endpoints; same as above, but with a larger + # timeout and a larger interval + # + # The relabeling allows the actual service scrape endpoint to be configured + # via the following annotations: + # + # * `prometheus.io/scrape-slow`: Only scrape services that have a value of `true` + # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need + # to set this to `https` & most likely set the `tls_config` of the scrape config. + # * `prometheus.io/path`: If the metrics path is not `/metrics` override this. 
+ # * `prometheus.io/port`: If the metrics are exposed on a different port to the + # service then set this appropriately. + # * `prometheus.io/param_`: If the metrics endpoint uses parameters + # then you can set any parameter + - job_name: 'kubernetes-service-endpoints-slow' + honor_labels: true + + scrape_interval: 5m + scrape_timeout: 30s + + kubernetes_sd_configs: + - role: endpoints + + relabel_configs: + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape_slow] + action: keep + regex: true + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme] + action: replace + target_label: __scheme__ + regex: (https?) + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port] + action: replace + target_label: __address__ + regex: (.+?)(?::\d+)?;(\d+) + replacement: $1:$2 + - action: labelmap + regex: __meta_kubernetes_service_annotation_prometheus_io_param_(.+) + replacement: __param_$1 + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) + - source_labels: [__meta_kubernetes_namespace] + action: replace + target_label: namespace + - source_labels: [__meta_kubernetes_service_name] + action: replace + target_label: service + - source_labels: [__meta_kubernetes_pod_node_name] + action: replace + target_label: node + + - job_name: 'prometheus-pushgateway' + honor_labels: true + + kubernetes_sd_configs: + - role: service + + relabel_configs: + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe] + action: keep + regex: pushgateway + + # Example scrape config for probing services via the Blackbox Exporter. 
+ # + # The relabeling allows the actual service scrape endpoint to be configured + # via the following annotations: + # + # * `prometheus.io/probe`: Only probe services that have a value of `true` + - job_name: 'kubernetes-services' + honor_labels: true + + metrics_path: /probe + params: + module: [http_2xx] + + kubernetes_sd_configs: + - role: service + + relabel_configs: + - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe] + action: keep + regex: true + - source_labels: [__address__] + target_label: __param_target + - target_label: __address__ + replacement: blackbox + - source_labels: [__param_target] + target_label: instance + - action: labelmap + regex: __meta_kubernetes_service_label_(.+) + - source_labels: [__meta_kubernetes_namespace] + target_label: namespace + - source_labels: [__meta_kubernetes_service_name] + target_label: service + + # Example scrape config for pods + # + # The relabeling allows the actual pod scrape endpoint to be configured via the + # following annotations: + # + # * `prometheus.io/scrape`: Only scrape pods that have a value of `true`, + # except if `prometheus.io/scrape-slow` is set to `true` as well. + # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need + # to set this to `https` & most likely set the `tls_config` of the scrape config. + # * `prometheus.io/path`: If the metrics path is not `/metrics` override this. + # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`. + - job_name: 'kubernetes-pods' + honor_labels: true + + kubernetes_sd_configs: + - role: pod + + relabel_configs: + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape] + action: keep + regex: true + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape_slow] + action: drop + regex: true + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme] + action: replace + regex: (https?) 
+ target_label: __scheme__ + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] + action: replace + regex: (\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4}) + replacement: '[$2]:$1' + target_label: __address__ + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] + action: replace + regex: (\d+);((([0-9]+?)(\.|$)){4}) + replacement: $2:$1 + target_label: __address__ + - action: labelmap + regex: __meta_kubernetes_pod_annotation_prometheus_io_param_(.+) + replacement: __param_$1 + - action: labelmap + regex: __meta_kubernetes_pod_label_(.+) + - source_labels: [__meta_kubernetes_namespace] + action: replace + target_label: namespace + - source_labels: [__meta_kubernetes_pod_name] + action: replace + target_label: pod + - source_labels: [__meta_kubernetes_pod_phase] + regex: Pending|Succeeded|Failed|Completed + action: drop + - source_labels: [__meta_kubernetes_pod_node_name] + action: replace + target_label: node + + # Example scrape config for pods which should be scraped slower. A useful example + # would be stackdriver-exporter which queries an API on every scrape of the pod + # + # The relabeling allows the actual pod scrape endpoint to be configured via the + # following annotations: + # + # * `prometheus.io/scrape-slow`: Only scrape pods that have a value of `true` + # * `prometheus.io/scheme`: If the metrics endpoint is secured then you will need + # to set this to `https` & most likely set the `tls_config` of the scrape config. + # * `prometheus.io/path`: If the metrics path is not `/metrics` override this. + # * `prometheus.io/port`: Scrape the pod on the indicated port instead of the default of `9102`.
+ - job_name: 'kubernetes-pods-slow' + honor_labels: true + + scrape_interval: 5m + scrape_timeout: 30s + + kubernetes_sd_configs: + - role: pod + + relabel_configs: + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape_slow] + action: keep + regex: true + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme] + action: replace + regex: (https?) + target_label: __scheme__ + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path] + action: replace + target_label: __metrics_path__ + regex: (.+) + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] + action: replace + regex: (\d+);(([A-Fa-f0-9]{1,4}::?){1,7}[A-Fa-f0-9]{1,4}) + replacement: '[$2]:$1' + target_label: __address__ + - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_port, __meta_kubernetes_pod_ip] + action: replace + regex: (\d+);((([0-9]+?)(\.|$)){4}) + replacement: $2:$1 + target_label: __address__ + - action: labelmap + regex: __meta_kubernetes_pod_annotation_prometheus_io_param_(.+) + replacement: __param_$1 + - action: labelmap + regex: __meta_kubernetes_pod_label_(.+) + - source_labels: [__meta_kubernetes_namespace] + action: replace + target_label: namespace + - source_labels: [__meta_kubernetes_pod_name] + action: replace + target_label: pod + - source_labels: [__meta_kubernetes_pod_phase] + regex: Pending|Succeeded|Failed|Completed + action: drop + - source_labels: [__meta_kubernetes_pod_node_name] + action: replace + target_label: node + +# adds additional scrape configs to prometheus.yml +# must be a string so you have to add a | after extraScrapeConfigs: +# example adds prometheus-blackbox-exporter scrape config +extraScrapeConfigs: "" + # - job_name: 'prometheus-blackbox-exporter' + # metrics_path: /probe + # params: + # module: [http_2xx] + # static_configs: + # - targets: + # - https://example.com + # relabel_configs: + # - source_labels: [__address__] + # target_label: 
__param_target + # - source_labels: [__param_target] + # target_label: instance + # - target_label: __address__ + # replacement: prometheus-blackbox-exporter:9115 + +# Adds option to add alert_relabel_configs to avoid duplicate alerts in alertmanager +# useful in H/A prometheus with different external labels but the same alerts +alertRelabelConfigs: {} + # alert_relabel_configs: + # - source_labels: [dc] + # regex: (.+)\d+ + # target_label: dc + +networkPolicy: + ## Enable creation of NetworkPolicy resources. + ## + ## Customized for K10 + enabled: true + +# Force namespace of namespaced resources +forceNamespace: "" + +# Extra manifests to deploy as an array +extraManifests: [] + # - | + # apiVersion: v1 + # kind: ConfigMap + # metadata: + # labels: + # name: prometheus-extra + # data: + # extra-data: "value" + +# Configuration of subcharts defined in Chart.yaml + +## alertmanager sub-chart configurable values +## Please see https://github.com/prometheus-community/helm-charts/tree/main/charts/alertmanager +## +alertmanager: + ## If false, alertmanager will not be installed + ## + ## Customized for K10 + enabled: false + + persistence: + size: 2Gi + + podSecurityContext: + runAsUser: 65534 + runAsNonRoot: true + runAsGroup: 65534 + fsGroup: 65534 + +## kube-state-metrics sub-chart configurable values +## Please see https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics +## +kube-state-metrics: + ## If false, kube-state-metrics sub-chart will not be installed + ## + ## Customized for K10 + enabled: false + +## prometheus-node-exporter sub-chart configurable values +## Please see https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-node-exporter +## +prometheus-node-exporter: + ## If false, node-exporter will not be installed + ## + ## Customized for K10 + enabled: false + + rbac: + pspEnabled: false + + containerSecurityContext: + allowPrivilegeEscalation: false + +## prometheus-pushgateway sub-chart 
configurable values +## Please see https://github.com/prometheus-community/helm-charts/tree/main/charts/prometheus-pushgateway +## +prometheus-pushgateway: + ## If false, pushgateway will not be installed + ## + ## Customized for K10 + enabled: false + + # Optional service annotations + serviceAnnotations: + prometheus.io/probe: pushgateway diff --git a/charts/kasten/k10/7.0.1101/config.json b/charts/kasten/k10/7.0.1101/config.json new file mode 100644 index 0000000000..e69de29bb2 diff --git a/charts/kasten/k10/7.0.1101/eula.txt b/charts/kasten/k10/7.0.1101/eula.txt new file mode 100644 index 0000000000..19f9fc0763 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/eula.txt @@ -0,0 +1,459 @@ +KASTEN END USER LICENSE AGREEMENT + +This End User License Agreement is a binding agreement between Kasten, Inc., a +Delaware Corporation ("Kasten"), and you ("Licensee"), and establishes the terms +under which Licensee may use the Software and Documentation (as defined below), +including without limitation terms and conditions relating to license grant, +intellectual property rights, disclaimers /exclusions / limitations of warranty, +indemnity and liability, governing law and limitation periods. All components +collectively are referred to herein as the "Agreement." + +LICENSEE ACKNOWLEDGES IT HAS HAD THE OPPORTUNITY TO REVIEW THE AGREEMENT, PRIOR +TO ACCEPTANCE OF THIS AGREEMENT. LICENSEE'S ACCEPTANCE OF THIS AGREEMENT IS +EVIDENCED BY LICENSEE'S DOWNLOADING, COPYING, INSTALLING OR USING THE KASTEN +SOFTWARE. IF YOU ARE ACTING ON BEHALF OF A COMPANY, YOU REPRESENT THAT YOU ARE +AUTHORIZED TO BIND THE COMPANY. IF YOU DO NOT AGREE TO ALL TERMS OF THIS +AGREEMENT, DO NOT DOWNLOAD, COPY, INSTALL, OR USE THE SOFTWARE, AND PERMANENTLY +DELETE THE SOFTWARE. + +1. 
DEFINITIONS + +1.1 "Authorized Persons" means trained technical employees and contractors of +Licensee who are subject to a written agreement with Licensee that includes use +and confidentiality restrictions that are at least as protective as those set +forth in this Agreement. + +1.2 "Authorized Reseller" means a distributor or reseller, including cloud +computing platform providers, authorized by Kasten to resell licenses to the +Software through the channel through or in the territory in which Licensee is +purchasing. + +1.3 "Confidential Information" means all non-public information disclosed in +written, oral or visual form by either party to the other. Confidential +Information may include, but is not limited to, services, pricing information, +computer programs, source code, names and expertise of employees and +consultants, know-how, and other technical, business, financial and product +development information. "Confidential Information" does not include any +information that the receiving party can demonstrate by its written records (1) +was rightfully known to it without obligation of confidentiality prior to its +disclosure hereunder by the disclosing party; (2) is or becomes publicly known +through no wrongful act of the receiving party; (3) has been rightfully received +without obligation of confidentiality from a third party authorized to make such +a disclosure; or (4) is independently developed by the receiving party without +reference to confidential information disclosed hereunder. + +1.4 "Documentation" means any administration guides, installation and user +guides, and release notes that are provided by Kasten to Licensee with the +Software. + +1.5 "Intellectual Property Rights" means patents, design patents, copyrights, +trademarks, Confidential Information, know-how, trade secrets, moral rights, and +any other intellectual property rights recognized in any country or jurisdiction +in the world. 
+ +1.6 "Node" means a single physical or virtual computing machine recognizable by +the Software as a unique device. Nodes must be owned or leased by Licensee or an +entity controlled by, controlling or under common control with Licensee. + +1.7 "Edition" means a unique identifier for each distinct product that is made +available by Kasten and that can be licensed, including summary information +regarding any associated functionality, features, or restrictions specific to +the Edition. + +1.8 "Open Source Software" means software delivered to Licensee hereunder that +is subject to the provisions of any open source license agreement. + +1.9 "Purchase Agreement" means a separate commercial agreement, if applicable, +between Kasten and the Licensee that contains the terms for the licensing of a +specific Edition of the Software. + +1.10 "Software" means any and all software product Editions licensed to Licensee +under this Agreement, all as developed by Kasten and delivered to Licensee +hereunder. Software also includes any Updates provided by Kasten to Licensee. +For the avoidance of doubt, the definition of Software shall exclude any +Third-Party Software and Open Source Software. + +1.11 "Third-Party Software" means certain software Kasten licenses from third +parties and provides to Licensee with the Software, which may include Open +Source Software. + +1.12 "Update" means a revision of the Software that Kasten makes available to +customers at no additional cost. The Update includes, if and when applicable and +available, bug fix patches, maintenance release, minor release, or new major +releases. Updates are limited only to the Software licensed by Licensee, and +specifically exclude new product offerings, features, options or functionality +of the Software that Kasten may choose to license separately, or for an +additional fee. 
+ +1.13 "Use" means to install activate the processing capabilities of the +Software, load, execute, access, employ the Software, or display information +resulting from such capabilities. + + +2. LICENSE GRANT AND RESTRICTIONS + +2.1 Enterprise License. Subject to Licensee"s compliance with the terms and +conditions of this Agreement (including any additional restrictions on +Licensee"s use of the Software set forth in the Purchase Agreement, if one +exists, between Licensee and Kasten), Kasten grants to Licensee a non-exclusive, +non-transferable (except in connection with a permitted assignment of this +Agreement under Section 14.10 (Assignment), non-sublicensable, limited term +license to install and use the Software, in object code form only, solely for +Licensee"s use, unless terminated in accordance with Section 4 (Term and +Termination). + +2.2 Starter License. This section shall only apply when the Licensee licenses +Starter Edition of the Software. The license granted herein is for a maximum of +5 Nodes and for a period of 12 months from the date of the Software release that +embeds the specific license instance. Updating to a newer Software (minor or +major) release will always extend the validity of the license by 12 months. If +the Licensee wishes to upgrade to an Enterprise License instead, the Licensee +will have to enter into a Purchase Agreement with Kasten which will supersede +this Agreement. The Licensee is required to provide accurate email and company +information, if representing a company, when accepting this Agreement. Under no +circumstances will a Starter License be construed to mean that the Licensee is +authorized to distribute the Software to any third party for any reason +whatsoever. + +2.3 Evaluation License. This section shall only apply when the Licensee has +licensed the Software for an initial evaluation period. 
The license granted +herein is valid only one time for 30 days, starting from the date of installation, +unless otherwise explicitly designated by Kasten ("Evaluation Period"). Under +this license the Software can only be used for evaluation purposes. Under no +circumstances will an Evaluation License be construed to mean that the Licensee +is authorized to distribute the Software to any third party for any reason +whatsoever. If the Licensee wishes to upgrade to an Enterprise License instead, +the Licensee will have to enter into a Purchase Agreement with Kasten which will +supersede this Agreement. If the Licensee does not wish to upgrade to an +Enterprise License at the end of the Evaluation Period, the Licensee's rights +under the Agreement shall terminate, and the Licensee shall delete all Kasten +Software. + +2.4 License Restrictions. Except to the extent permitted under this Agreement, +Licensee will not, nor will Licensee allow any third party to: (i) copy, modify, +adapt, translate or otherwise create derivative works of the Software or the +Documentation; (ii) reverse engineer, decompile, disassemble or otherwise +attempt to discover the source code of the Software; (iii) rent, lease, sell, +assign or otherwise transfer rights in or to the Software or Documentation; (iv) +remove any proprietary notices or labels from the Software or Documentation; (v) +publicly disseminate performance information or analysis (including, without +limitation, benchmarks) relating to the Software. Licensee will comply with all +applicable laws and regulations in Licensee's use of and access to the Software +and Documentation. + +2.5 Responsibility for Use. The Software and Documentation may be used only by +Authorized Persons and in conformance with this Agreement.
Licensee shall be +responsible for the proper use and protection of the Software and Documentation +and is responsible for: (i) installing, managing, operating, and physically +controlling the Software and the results obtained from using the Software; (ii) +using the Software within the operating environment specified in the +Documentation; and (iii) establishing and maintaining such recovery and data +protection and security procedures as necessary for Licensee's service and +operation and/or as may be specified by Kasten from time to time. + +2.6 United States Government Users. The Software licensed under this Agreement +is "commercial computer software" as that term is described in DFAR +252.227-7014(a)(1). If acquired by or on behalf of a civilian agency, the U.S. +Government acquires this commercial computer software and/or commercial computer +software documentation subject to the terms of this Agreement as specified in +48 C.F.R. 12.212 (Computer Software) and 12.211 (Technical Data) of the Federal +Acquisition Regulations ("FAR") and its successors. If acquired by or on behalf +of any agency within the Department of Defense ("DOD"), the U.S. Government +acquires this commercial computer software and/or commercial computer software +documentation subject to the terms of this Agreement as specified in 48 C.F.R. +227.7202 of the DOD FAR Supplement and its successors. + + +3. SUPPORT + +3.1 During the Term (as defined below) and subject to Licensee’s compliance +with the terms and conditions of this Agreement, Licensee may submit queries and +requests for support for Enterprise Licenses by submitting Service Requests via Veeam +Support Portal (https://my.veeam.com). Support is not provided for Starter and Evaluation +Licenses.
Licensee shall be entitled to the support service-level agreement specified +in the relevant order form or purchase order (“Order Form”) between Licensee and the +Reseller and as set forth in Kasten’s Support Policy, a copy of which can be found +at https://www.kasten.io/support-services-policy. Licensee shall also be permitted to +download and install all Updates released by Kasten during the Term and made generally +available to users of the Software. Software versions with all updates and upgrades +installed are supported for six months from the date of release of that version. + +3.2 Starter Edition Support. If the Licensee has licensed Starter Edition of +the Software, the Licensee will have access to the Kasten K10 Support Community +(https://community.veeam.com/groups/kasten-k10-support-92), but Kasten cannot guarantee +a service level of any sort. Should a higher level of support be needed, Licensee has +the option to consider entering into a Purchase Agreement with Kasten for licensing a +different Edition of the Software. + + + +4. TERM AND TERMINATION + +4.1 Term. The term of this Agreement, except for Starter and Evaluation +Licenses, shall commence on the Effective Date and shall, unless terminated +earlier in accordance with the provisions of Section 4.2 below, remain in force +for the Subscription Period as set forth in the applicable Order Form(s) (the +"Term"). The parties may extend the Term of this Agreement beyond the +Subscription Period by executing additional Order Form(s) and Licensee's payment +of additional licensing fees. The term of this Agreement for the Starter and +Evaluation Licenses will coincide with the term for Starter Edition (as stated +in section 2.2) and the term for Evaluation Period (as stated in section 2.3), +respectively. + +4.2 Termination.
Either party may immediately terminate this +Agreement and the licenses granted hereunder if the other party (1) becomes +insolvent and becomes unwilling or unable to meet its obligations under this +Agreement, (2) files a petition in bankruptcy, (3) is subject to the filing of +an involuntary petition for bankruptcy which is not rescinded within a period of +forty-five (45) days, (4) fails to cure a material breach of any material term +or condition of this Agreement within thirty (30) days of receipt of written +notice specifying such breach, or (5) materially breaches its obligations of +confidentiality hereunder. + +4.3 Effects of Termination. Upon expiration or +termination of this Agreement for any reason, (i) any amounts owed to Kasten +under this Agreement will be immediately due and payable; (ii) all licensed +rights granted in this Agreement will immediately cease; and (iii) Licensee will +promptly discontinue all use of the Software and Documentation and return to +Kasten any Kasten Confidential Information in Licensee's possession or control. + +4.4 Survival. The following Sections of this Agreement will remain in effect +following the expiration or termination of these General Terms for any reason: +4.3 (Effects of Termination), 4.4 (Survival), 5 (Third Party Software), 6 +(Confidentiality), 9 (Ownership), 10.2 (Third-Party Software), 10.3 (Warranty +Disclaimer), 11 (Limitations of Liability), 12.2 (Exceptions to Kasten +Obligation), 13 (Export) and 14 (General). + + +5. THIRD PARTY AND OPEN SOURCE SOFTWARE Certain Third-Party Software or Open +Source Software (Kasten can provide a list upon request) that may be provided +with the Software may be subject to various other terms and conditions imposed +by the licensors of such Third-Party Software or Open Source Software.
The +terms of Licensee's use of the Third-Party Software or Open Source Software are +subject to and governed by the respective Third-Party Software and Open Source +licenses, except that this Section 5 (Third-Party Software), Section 10.2 (Third +Party Software), 10.3 (Warranty Disclaimer), Section 11 (Limitations of +Liability), and Section 14 (General) of this Agreement also govern Licensee's +use of the Third-Party Software. To the extent applicable to Licensee's use of +such Third-Party Software and Open Source, Licensee agrees to comply with the +terms and conditions contained in all such Third-Party Software and Open Source +licenses. + + +6. CONFIDENTIALITY Neither party will use any Confidential Information of the +other party except as expressly permitted by this Agreement or as expressly +authorized in writing by the disclosing party. The receiving party shall use +the same degree of care to protect the disclosing party's Confidential +Information as it uses to protect its own Confidential Information of like +nature, but in no circumstances less than a commercially reasonable standard of +care. The receiving party may not disclose the disclosing party's Confidential +Information to any person or entity other than to (i) (a) Authorized Persons in +the case the receiving party is Licensee, and (b) Kasten's employees and +contractors in the case the receiving party is Kasten, and (ii) who need access +to such Confidential Information solely for the purpose of fulfilling that +party's obligations or exercising that party's rights hereunder.
The foregoing +obligations will not restrict the receiving party from disclosing Confidential +Information of the disclosing party: (1) pursuant to the order or requirement of +a court, administrative agency, or other governmental body, provided that the +receiving party required to make such a disclosure gives reasonable notice to +the disclosing party prior to such disclosure; and (2) on a confidential basis +to its legal and financial advisors. Kasten may identify Licensee in its +customer lists in online and print marketing materials. + + +7. FEES Fees for Enterprise License shall be set forth in separate Order Form(s) +attached to a Purchase Agreement, between the Licensee and Kasten. + +If Licensee has obtained the Software through an Authorized Reseller, fees for +licensing shall be invoiced directly by the Authorized Reseller. + +If no Purchase Agreement exists, during the term of this Agreement, Kasten +shall license the Starter Edition only and no other Edition of the Software +"at no charge" to Licensee. + + +8. USAGE DATA Kasten may collect, accumulate, and aggregate certain usage +statistics in order to analyze usage of the Software, make improvements, and +potentially develop new products. Kasten may use aggregated anonymized data for +any purpose that Kasten, at its own discretion, may consider appropriate. + + +9. OWNERSHIP As between Kasten and Licensee, all right, title and interest in +the Software, Documentation and any other Kasten materials furnished or made +available hereunder, all modifications and enhancements thereof, and all +suggestions, ideas and feedback proposed by Licensee regarding the Software and +Documentation, including all copyright rights, patent rights and other +Intellectual Property Rights in each of the foregoing, belong to and are +retained solely by Kasten or Kasten's licensors and providers, as applicable.
+ +Licensee hereby does and will irrevocably assign to Kasten all evaluations, +ideas, feedback and suggestions made by Licensee to Kasten regarding the +Software and Documentation (collectively, "Feedback") and all Intellectual +Property Rights in and to the Feedback. Except as expressly provided herein, no +licenses of any kind are granted hereunder, whether by implication, estoppel, or +otherwise. + + +10. LIMITED WARRANTY AND DISCLAIMERS + +10.1 Limited Warranty. Kasten warrants for a period of thirty (30) days from +the Effective Date that the Software will materially conform to Kasten's +then-current Documentation (the "Warranty Period") when properly installed on a +computer for which a license is granted hereunder. Licensee's exclusive remedy +for a breach of this Section 10.1 is that Kasten shall, at its option, use +commercially reasonable efforts to correct or replace the Software, or refund +all or a portion of the fees paid by Licensee pursuant to the Purchase +Agreement. Kasten, in its sole discretion, may revise this limited warranty from +time to time. + +10.2 Third-Party Software. Except as expressly set forth in this Agreement, +Third-Party Software (including any Open Source Software) is provided on an +"as-is" basis at the sole risk of Licensee. Notwithstanding any language to the +contrary in this Agreement, Kasten makes no express or implied warranties of any +kind with respect to Third-Party Software provided to Licensee and shall not be +liable for any damages regarding the use or operation of the Third-Party +Software furnished under this Agreement. Any and all express or implied +warranties, if any, arising from the license of Third-Party Software shall be +those warranties running from the third party manufacturer or licensor to +Licensee. + +10.3 Warranty Disclaimer.
EXCEPT FOR THE LIMITED WARRANTY PROVIDED ABOVE, +KASTEN AND ITS SUPPLIERS MAKE NO WARRANTY OF ANY KIND, WHETHER EXPRESS, IMPLIED, +STATUTORY OR OTHERWISE, RELATING TO THE SOFTWARE OR TO KASTEN'S MAINTENANCE, +PROFESSIONAL OR OTHER SERVICES. KASTEN SPECIFICALLY DISCLAIMS ALL IMPLIED +WARRANTIES OF DESIGN, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE +AND NON-INFRINGEMENT. KASTEN AND ITS SUPPLIERS AND LICENSORS DO NOT WARRANT OR +REPRESENT THAT THE SOFTWARE WILL BE FREE FROM BUGS OR THAT ITS USE WILL BE +UNINTERRUPTED OR ERROR-FREE. THIS DISCLAIMER SHALL APPLY NOTWITHSTANDING THE +FAILURE OF THE ESSENTIAL PURPOSE OF ANY LIMITED REMEDY PROVIDED HEREIN. EXCEPT +AS STATED ABOVE, KASTEN AND ITS SUPPLIERS PROVIDE THE SOFTWARE ON AN "AS IS" +BASIS. KASTEN PROVIDES NO WARRANTIES WITH RESPECT TO THIRD PARTY SOFTWARE AND +OPEN SOURCE SOFTWARE. + + +11. LIMITATIONS OF LIABILITY + +11.1 EXCLUSION OF CERTAIN DAMAGES. EXCEPT FOR BREACHES OF SECTION 6 +(CONFIDENTIALITY) OR SECTION 9 (OWNERSHIP), IN NO EVENT WILL EITHER PARTY BE +LIABLE FOR ANY INDIRECT, CONSEQUENTIAL, EXEMPLARY, SPECIAL, INCIDENTAL OR +RELIANCE DAMAGES, INCLUDING ANY LOST DATA, LOSS OF USE AND LOST PROFITS, ARISING +FROM OR RELATING TO THIS AGREEMENT, THE SOFTWARE OR DOCUMENTATION, EVEN IF SUCH +PARTY KNEW OR SHOULD HAVE KNOWN OF THE POSSIBILITY OF, OR COULD REASONABLY HAVE +PREVENTED, SUCH DAMAGES. + +11.2 LIMITATION OF DAMAGES. EXCEPT FOR THE BREACHES OF SECTION 6 +(CONFIDENTIALITY) OR SECTION 9 (OWNERSHIP), EACH PARTY'S TOTAL CUMULATIVE +LIABILITY ARISING FROM OR RELATED TO THIS AGREEMENT OR THE SOFTWARE, +DOCUMENTATION, OR SERVICES PROVIDED BY KASTEN, WILL NOT EXCEED THE AMOUNT OF +FEES PAID OR PAYABLE BY LICENSEE FOR THE SOFTWARE, DOCUMENTATION OR SERVICES +GIVING RISE TO THE CLAIM IN THE TWELVE (12) MONTHS FOLLOWING THE EFFECTIVE DATE. +LICENSEE AGREES THAT KASTEN'S SUPPLIERS AND LICENSORS WILL HAVE NO LIABILITY OF +ANY KIND UNDER OR AS A RESULT OF THIS AGREEMENT.
IN THE CASE OF KASTEN'S +INDEMNIFICATION OBLIGATIONS, KASTEN'S CUMULATIVE LIABILITY UNDER THIS AGREEMENT +SHALL BE LIMITED TO THE SUM OF THE LICENSE FEES PAID OR PAYABLE BY LICENSEE FOR +THE SOFTWARE, DOCUMENTATION OR SERVICES GIVING RISE TO THE CLAIM IN THE TWELVE +(12) MONTHS FOLLOWING THE EFFECTIVE DATE. + +11.3 THIRD PARTY SOFTWARE. NOTWITHSTANDING ANY LANGUAGE TO THE CONTRARY IN THIS +AGREEMENT, KASTEN SHALL NOT BE LIABLE FOR ANY DAMAGES REGARDING THE USE OR +OPERATION OF ANY THIRD-PARTY SOFTWARE FURNISHED UNDER THIS AGREEMENT. + +11.4 LIMITATION OF ACTIONS. IN NO EVENT MAY LICENSEE BRING ANY CAUSE OF ACTION +RELATED TO THIS AGREEMENT MORE THAN ONE (1) YEAR AFTER THE OCCURRENCE OF THE +EVENT GIVING RISE TO THE LIABILITY. + + +12. EXPORT +The Software, Documentation and related technical data may be subject +to U.S. export control laws, including without limitation the U.S. Export +Administration Act and its associated regulations, and may be subject to export +or import regulations in other countries. Licensee shall comply with all such +regulations and agrees to obtain all necessary licenses to export, re-export, or +import the Software, Documentation and related technical data. + + +13. GENERAL + +13.1 No Agency. Kasten and Licensee each acknowledge and agree that the +relationship established by this Agreement is that of independent contractors, +and nothing contained in this Agreement shall be construed to: (1) give either +party the power to direct or control the day-to-day activities of the other; (2) +deem the parties to be acting as partners, joint venturers, co-owners or +otherwise as participants in a joint undertaking; or (3) permit either party or +any of either party's officers, directors, employees, agents or representatives +to create or assume any obligation on behalf of or for the account of the other +party for any purpose whatsoever. + +13.2 Compliance with Laws.
Each party agrees to comply with all applicable +laws, regulations, and ordinances relating to their performance hereunder. +Without limiting the foregoing, Licensee warrants and covenants that it will +comply with all then current laws and regulations of the United States and other +jurisdictions relating or applicable to Licensee's use of the Software and +Documentation including, without limitation, those concerning Intellectual +Property Rights, invasion of privacy, defamation, and the import and export of +Software and Documentation. + +13.3 Force Majeure. Except for the duty to pay money, neither party shall be +liable hereunder by reason of any failure or delay in the performance of its +obligations hereunder on account of strikes, riots, fires, flood, storm, +explosions, acts of God, war, governmental action, earthquakes, or any other +cause which is beyond the reasonable control of such party. + +13.4 Governing Law; Venue and Jurisdiction. This Agreement shall be interpreted +according to the laws of the State of California without regard to or +application of choice-of-law rules or principles. The parties expressly agree +that the United Nations Convention on Contracts for the International Sale of +Goods and the Uniform Computer Information Transactions Act will not apply. Any +legal action or proceeding arising under this Agreement will be brought +exclusively in the federal or state courts located in Santa Clara County, +California and the parties hereby consent to the personal jurisdiction and venue +therein. + +13.5 Injunctive Relief. The parties agree that monetary damages would not be an +adequate remedy for the breach of certain provisions of this Agreement, +including, without limitation, all provisions concerning infringement, +confidentiality and nondisclosure, or limitation on permitted use of the +Software or Documentation.
The parties further agree that, in the event of such +breach, injunctive relief would be necessary to prevent irreparable injury. +Accordingly, either party shall have the right to seek injunctive relief or +similar equitable remedies to enforce such party's rights under the pertinent +provisions of this Agreement, without limiting its right to pursue any other +legal remedies available to it. + +13.6 Entire Agreement and Waiver. This Agreement and any exhibits hereto +constitute the entire agreement and contain all terms and conditions between +Kasten and Licensee with respect to the subject matter hereof and all prior +agreements, representations, and statements with respect to such subject matter +are superseded hereby. This Agreement may be changed only by written agreement +signed by both Kasten and Licensee. No failure of either party to exercise or +enforce any of its rights under this Agreement shall act as a waiver of +subsequent breaches; and the waiver of any breach shall not act as a waiver of +subsequent breaches. + +13.7 Severability. In the event any provision of this Agreement is held by a +court or other tribunal of competent jurisdiction to be unenforceable, that +provision will be enforced to the maximum extent permissible under applicable +law and the other provisions of this Agreement will remain in full force and +effect. The parties further agree that in the event such provision is an +essential part of this Agreement, they will begin negotiations for a suitable +replacement provision. + +13.8 Counterparts. This Agreement may be executed in any number of +counterparts, each of which, when so executed and delivered (including by +facsimile), shall be deemed an original, and all of which shall constitute one +and the same agreement. + +13.9 Binding Effect. This Agreement shall be binding upon and shall inure to +the benefit of the respective parties hereto, their respective successors and +permitted assigns. + +13.10 Assignment.
Neither party may, without the prior written consent of the +other party (which shall not be unreasonably withheld), assign this Agreement, +in whole or in part, either voluntarily or by operation of law, and any attempt +to do so shall be a material default of this Agreement and shall be void. +Notwithstanding the foregoing, Kasten may assign its rights and benefits and +delegate its duties and obligations under this Agreement without the consent of +Licensee in connection with a merger, reorganization or sale of all or +substantially all relevant assets of the assigning party; in each case provided +that such successor assumes the assigning party's obligations under this +Agreement. + diff --git a/charts/kasten/k10/7.0.1101/files/favicon.png b/charts/kasten/k10/7.0.1101/files/favicon.png new file mode 100644 index 0000000000..fb617ce12c Binary files /dev/null and b/charts/kasten/k10/7.0.1101/files/favicon.png differ diff --git a/charts/kasten/k10/7.0.1101/files/kasten-logo.svg b/charts/kasten/k10/7.0.1101/files/kasten-logo.svg new file mode 100644 index 0000000000..0d0ef14eee --- /dev/null +++ b/charts/kasten/k10/7.0.1101/files/kasten-logo.svg @@ -0,0 +1,24 @@ + + + + + + diff --git a/charts/kasten/k10/7.0.1101/files/styles.css b/charts/kasten/k10/7.0.1101/files/styles.css new file mode 100644 index 0000000000..2d92057119 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/files/styles.css @@ -0,0 +1,113 @@ +.theme-body { + background-color: #efefef; + color: #333; + font-family: 'Source Sans Pro', Helvetica, sans-serif; +} + +.theme-navbar { + background-color: #fff; + box-shadow: 0 2px 2px rgba(0, 0, 0, 0.2); + color: #333; + font-size: 13px; + font-weight: 100; + height: 46px; + overflow: hidden; + padding: 0 10px; +} + +.theme-navbar__logo-wrap { + display: inline-block; + height: 100%; + overflow: hidden; + padding: 10px 15px; + width: 300px; +} + +.theme-navbar__logo { + height: 100%; + max-height: 25px; +} + +.theme-heading { + font-size: 20px; + font-weight:
500; + margin-bottom: 10px; + margin-top: 0; +} + +.theme-panel { + background-color: #fff; + box-shadow: 0 5px 15px rgba(0, 0, 0, 0.5); + padding: 30px; +} + +.theme-btn-provider { + background-color: #fff; + color: #333; + min-width: 250px; +} + +.theme-btn-provider:hover { + color: #999; +} + +.theme-btn--primary { + background-color: #333; + border: none; + color: #fff; + min-width: 200px; + padding: 6px 12px; +} + +.theme-btn--primary:hover { + background-color: #666; + color: #fff; +} + +.theme-btn--success { + background-color: #2FC98E; + color: #fff; + width: 250px; +} + +.theme-btn--success:hover { + background-color: #49E3A8; +} + +.theme-form-row { + display: block; + margin: 20px auto; +} + +.theme-form-input { + border-radius: 4px; + border: 1px solid #CCC; + box-shadow: inset 0 1px 1px rgba(0, 0, 0, 0.075); + color: #666; + display: block; + font-size: 14px; + height: 36px; + line-height: 1.42857143; + margin: auto; + padding: 6px 12px; + width: 250px; +} + +.theme-form-input:focus, +.theme-form-input:active { + border-color: #66AFE9; + outline: none; +} + +.theme-form-label { + font-size: 13px; + font-weight: 600; + margin: 4px auto; + position: relative; + text-align: left; + width: 250px; +} + +.theme-link-back { + margin-top: 4px; +} diff --git a/charts/kasten/k10/7.0.1101/grafana/dashboards/default/default.json b/charts/kasten/k10/7.0.1101/grafana/dashboards/default/default.json new file mode 100644 index 0000000000..163461a0ff --- /dev/null +++ b/charts/kasten/k10/7.0.1101/grafana/dashboards/default/default.json @@ -0,0 +1,5855 @@ +{ + "annotations": { + "list": [ + { + "builtIn": 1, + "datasource": "-- Grafana --", + "enable": true, + "hide": true, + "iconColor": "rgba(0, 211, 255, 1)", + "name": "Annotations & Alerts", + "target": { + "limit": 100, + "matchAny": false, + "tags": [], + "type": "dashboard" + }, + "type": "dashboard" + } + ] + }, + "editable": true, + "fiscalYearStartMonth": 0, + "graphTooltip": 0, + "id": 12, + "links": [], + 
"liveNow": false, + "panels": [ + { + "collapsed": false, + "datasource": "Prometheus", + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 0 + }, + "id": 53, + "panels": [], + "targets": [ + { + "datasource": "Prometheus", + "refId": "A" + } + ], + "title": "K10 System Resource Usage", + "type": "row" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 1 + }, + "id": 55, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "builder", + "expr": "sum(rate(process_cpu_seconds_total[5m]))", + "legendFormat": "Total CPU seconds", + "range": true, + "refId": "A" + } + ], + "title": "K10 CPU total seconds ", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + 
"gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "decbytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 1 + }, + "id": 57, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "builder", + "expr": "sum(process_resident_memory_bytes)", + "hide": false, + "legendFormat": "Total memory consumption", + "range": true, + "refId": "C" + } + ], + "title": "K10 total memory consumption", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + } + }, + "overrides": [] + }, + "gridPos": 
{ + "h": 8, + "w": 12, + "x": 0, + "y": 9 + }, + "id": 81, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "builder", + "expr": "rate(process_cpu_seconds_total{job=\"httpServiceDiscovery\"}[5m])", + "legendFormat": "{{service}}", + "range": true, + "refId": "A" + }, + { + "datasource": "Prometheus", + "editorMode": "builder", + "expr": "sum(rate(process_cpu_seconds_total{job=\"k10-pods\"}[5m]))", + "hide": false, + "legendFormat": "executor", + "range": true, + "refId": "B" + }, + { + "datasource": "Prometheus", + "editorMode": "builder", + "expr": "sum(rate(process_cpu_seconds_total{job=\"pushAggregator\"}[5m]))", + "hide": false, + "legendFormat": "ephemeral pods", + "range": true, + "refId": "C" + }, + { + "datasource": "Prometheus", + "editorMode": "builder", + "expr": "sum(rate(process_cpu_seconds_total{job=\"prometheus\"}[5m]))", + "hide": false, + "legendFormat": "prometheus", + "range": true, + "refId": "D" + } + ], + "title": "CPU total seconds per service", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + 
"color": "red", + "value": 80 + } + ] + }, + "unit": "decbytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 9 + }, + "id": 82, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "builder", + "expr": "process_resident_memory_bytes{job=\"pushAggregator\"}", + "hide": false, + "legendFormat": "ephemeral pods", + "range": true, + "refId": "C" + }, + { + "datasource": "Prometheus", + "editorMode": "builder", + "expr": "process_resident_memory_bytes{job=\"httpServiceDiscovery\"}", + "hide": false, + "legendFormat": "{{service}}", + "range": true, + "refId": "A" + }, + { + "datasource": "Prometheus", + "editorMode": "builder", + "expr": "sum(process_resident_memory_bytes{job=\"k10-pods\"})", + "hide": false, + "legendFormat": "executor", + "range": true, + "refId": "B" + }, + { + "datasource": "Prometheus", + "editorMode": "builder", + "expr": "sum(process_resident_memory_bytes{job=\"prometheus\"})", + "hide": false, + "legendFormat": "prometheus", + "range": true, + "refId": "D" + } + ], + "title": "Memory consumption by service", + "type": "timeseries" + }, + { + "collapsed": false, + "datasource": "Prometheus", + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 17 + }, + "id": 18, + "panels": [], + "targets": [ + { + "datasource": "Prometheus", + "refId": "A" + } + ], + "title": "Applications", + "type": "row" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "0", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "yellow", + "value": null + }, + { + "color": "green", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 5, + "x": 0, + "y": 18 + }, + "id": 24, + "interval": "1m", + "options": { + 
"colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_backup_ended_overall{cluster=\"$cluster\", state=\"succeeded\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Backups Completed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "red", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 3, + "x": 5, + "y": 18 + }, + "id": 33, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_backup_ended_overall{cluster=\"$cluster\", state=~\"failed|cancelled\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Backups Failed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + 
{ + "color": "#EAB839", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 3, + "x": 8, + "y": 18 + }, + "id": 34, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_backup_skipped_overall{cluster=\"$cluster\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Backups Skipped", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "green", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 5, + "x": 13, + "y": 18 + }, + "id": 35, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_restore_ended_overall{cluster=\"$cluster\", state=\"succeeded\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Restores Completed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + 
"text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "red", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 3, + "x": 18, + "y": 18 + }, + "id": 36, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_restore_ended_overall{cluster=\"$cluster\", state=~\"failed|cancelled\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Restores Failed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "#EAB839", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 3, + "x": 21, + "y": 18 + }, + "id": 23, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_restore_skipped_overall{cluster=\"$cluster\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Restores Skipped", + "type": "stat" + }, 
+ { + "collapsed": false, + "datasource": "Prometheus", + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 25 + }, + "id": 16, + "panels": [], + "targets": [ + { + "datasource": "Prometheus", + "refId": "A" + } + ], + "title": "Cluster", + "type": "row" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "0", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "yellow", + "value": null + }, + { + "color": "green", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 5, + "x": 0, + "y": 26 + }, + "id": 10, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_backup_cluster_ended_overall{cluster=\"$cluster\", state=\"succeeded\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Cluster Backups Completed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "red", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 3, + "x": 5, + "y": 26 + }, + "id": 19, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + 
"textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_backup_cluster_ended_overall{cluster=\"$cluster\", state=~\"failed|cancelled\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Cluster Backups Failed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "#EAB839", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 3, + "x": 8, + "y": 26 + }, + "id": 28, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_backup_cluster_skipped_overall{cluster=\"$cluster\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Cluster Backups Skipped", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "green", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 5, + "x": 13, + "y": 26 + }, + "id": 21, + "interval": "1m", + "options": { 
+ "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_restore_cluster_ended_overall{cluster=\"$cluster\", state=\"succeeded\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Cluster Restores Completed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "red", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 3, + "x": 18, + "y": 26 + }, + "id": 22, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_restore_cluster_ended_overall{cluster=\"$cluster\", state=~\"failed|cancelled\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Cluster Restores Failed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + 
"color": "text", + "value": null + }, + { + "color": "#EAB839", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 3, + "x": 21, + "y": 26 + }, + "id": 25, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_restore_cluster_skipped_overall{cluster=\"$cluster\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Cluster Restores Skipped", + "type": "stat" + }, + { + "collapsed": false, + "datasource": "Prometheus", + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 33 + }, + "id": 31, + "panels": [], + "targets": [ + { + "datasource": "Prometheus", + "refId": "A" + } + ], + "title": "Backup Exports", + "type": "row" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "0", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "green", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 5, + "x": 0, + "y": 34 + }, + "id": 38, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_export_ended_overall{cluster=\"$cluster\", state=\"succeeded\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" 
+ } + ], + "title": "Exports Completed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "red", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 5, + "y": 34 + }, + "id": 29, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_export_ended_overall{cluster=\"$cluster\", state=~\"failed|cancelled\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Exports Failed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "#EAB839", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 8, + "y": 34 + }, + "id": 20, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + 
"expr": "sum(round(increase(action_export_skipped_overall{cluster=\"$cluster\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Exports Skipped", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "0", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "green", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 5, + "x": 13, + "y": 34 + }, + "id": 27, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_import_ended_overall{cluster=\"$cluster\", state=\"succeeded\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Imports Completed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "red", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 18, + "y": 34 + }, + "id": 39, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + 
"targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_import_ended_overall{cluster=\"$cluster\", state=~\"failed|cancelled\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Imports Failed", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "#EAB839", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 21, + "y": 34 + }, + "id": 37, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_import_skipped_overall{cluster=\"$cluster\"}[$__range])))", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Imports Skipped", + "type": "stat" + }, + { + "collapsed": false, + "datasource": "Prometheus", + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 40 + }, + "id": 14, + "panels": [], + "targets": [ + { + "datasource": "Prometheus", + "refId": "A" + } + ], + "title": "System", + "type": "row" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + 
"color": "green", + "value": 1 + } + ] + }, + "unit": "runs" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 0, + "y": 41 + }, + "id": 12, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_run_ended_overall{cluster=\"$cluster\", state=\"succeeded\"}[$__range])))", + "format": "time_series", + "interval": "", + "legendFormat": "", + "queryType": "randomWalk", + "refId": "A" + } + ], + "title": "Policy Runs", + "type": "stat" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [ + { + "options": { + "0": { + "index": 0, + "text": "-" + } + }, + "type": "value" + } + ], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "yellow", + "value": 1 + } + ] + }, + "unit": "runs" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 3, + "y": 41 + }, + "id": 40, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "sum(round(increase(action_run_skipped_overall{cluster=\"$cluster\"}[$__range])))", + "format": "time_series", + "interval": "", + "legendFormat": "", + "queryType": "randomWalk", + "refId": "A" + } + ], + "title": "Policy Runs Skipped", + "type": "stat" + }, + { + "datasource": "Prometheus", + "description": 
"", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "#ccccdc", + "value": null + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 6, + "y": 41 + }, + "id": 6, + "options": { + "colorMode": "value", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": true, + "expr": "catalog_persistent_volume_disk_space_used_bytes{cluster=\"$cluster\"}", + "interval": "", + "legendFormat": "", + "queryType": "randomWalk", + "refId": "A" + } + ], + "title": "Catalog Volume Used", + "type": "stat" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "max": 100, + "min": 0, + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "yellow", + "value": 70 + }, + { + "color": "orange", + "value": 80 + }, + { + "color": "red", + "value": 90 + } + ] + }, + "unit": "percent" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 9, + "y": 41 + }, + "id": 2, + "options": { + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showThresholdLabels": false, + "showThresholdMarkers": true, + "text": {} + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": true, + "expr": "100-catalog_persistent_volume_free_space_percent{cluster=\"$cluster\"}", + "interval": "", + "legendFormat": "", + "queryType": "randomWalk", + "refId": "A" + } + ], + "title": "Catalog Volume Used Space", + 
"type": "gauge" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "#ccccdc", + "value": null + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 12, + "y": 41 + }, + "id": 8, + "options": { + "colorMode": "value", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": true, + "expr": "jobs_persistent_volume_disk_space_used_bytes{cluster=\"$cluster\"}", + "interval": "", + "legendFormat": "", + "queryType": "randomWalk", + "refId": "A" + } + ], + "title": "Jobs Volume Used", + "type": "stat" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "max": 100, + "min": 0, + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "yellow", + "value": 70 + }, + { + "color": "orange", + "value": 80 + }, + { + "color": "red", + "value": 90 + } + ] + }, + "unit": "percent" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 15, + "y": 41 + }, + "id": 4, + "options": { + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showThresholdLabels": false, + "showThresholdMarkers": true, + "text": {} + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": true, + "expr": "100-jobs_persistent_volume_free_space_percent{cluster=\"$cluster\"}", + "interval": "", + "legendFormat": "", + "queryType": "randomWalk", + 
"refId": "A" + } + ], + "title": "Jobs Volume Used Space", + "type": "gauge" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "#ccccdc", + "value": null + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 18, + "y": 41 + }, + "id": 7, + "options": { + "colorMode": "value", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": true, + "expr": "logging_persistent_volume_disk_space_used_bytes{cluster=\"$cluster\"}", + "interval": "", + "legendFormat": "", + "queryType": "randomWalk", + "refId": "A" + } + ], + "title": "Logging Volume Used", + "type": "stat" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "max": 100, + "min": 0, + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "yellow", + "value": 70 + }, + { + "color": "orange", + "value": 80 + }, + { + "color": "red", + "value": 90 + } + ] + }, + "unit": "percent" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 21, + "y": 41 + }, + "id": 3, + "options": { + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "showThresholdLabels": false, + "showThresholdMarkers": true, + "text": {} + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": true, + "expr": "100-logging_persistent_volume_free_space_percent{cluster=\"$cluster\"}", + 
"interval": "", + "legendFormat": "", + "queryType": "randomWalk", + "refId": "A" + } + ], + "title": "Logging Volume Used Space", + "type": "gauge" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "0", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "text", + "value": null + }, + { + "color": "green", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 0, + "y": 47 + }, + "id": 41, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "compliance_count{state=\"Compliant\"}", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Compliant Applications", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "0", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 3, + "y": 47 + }, + "id": 42, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "compliance_count{state=\"NotCompliant\"}", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": 
"Non-Compliant Applications", + "type": "stat" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "0", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 1 + } + ] + } + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 6, + "y": 47 + }, + "id": 43, + "interval": "1m", + "options": { + "colorMode": "value", + "graphMode": "none", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "last" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": false, + "expr": "compliance_count{state=\"Unmanaged\"}", + "hide": false, + "interval": "", + "legendFormat": "", + "refId": "B" + } + ], + "title": "Unmanaged Applications", + "type": "stat" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "#ccccdc", + "value": null + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 12, + "y": 47 + }, + "id": 44, + "options": { + "colorMode": "value", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": true, + "expr": "snapshot_storage_size_bytes{cluster=\"$cluster\", type=\"physical\"}", + "interval": "", + "legendFormat": "", + "queryType": "randomWalk", + "refId": "A" + } + ], + "title": "Snapshot Size (Physical)", + "type": "stat" + }, + { + "datasource": "Prometheus", + 
"description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "#ccccdc", + "value": null + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 15, + "y": 47 + }, + "id": 45, + "options": { + "colorMode": "value", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": true, + "expr": "snapshot_storage_size_bytes{cluster=\"$cluster\", type=\"logical\"}", + "interval": "", + "legendFormat": "", + "queryType": "randomWalk", + "refId": "A" + } + ], + "title": "Snapshot Size (Logical)", + "type": "stat" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "#ccccdc", + "value": null + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 18, + "y": 47 + }, + "id": 46, + "options": { + "colorMode": "value", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": true, + "expr": "export_storage_size_bytes{cluster=\"$cluster\", type=\"physical\"}", + "interval": "", + "legendFormat": "", + "queryType": "randomWalk", + "refId": "A" + } + ], + "title": "Export Size (Physical)", + "type": "stat" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + 
"color": { + "mode": "thresholds" + }, + "mappings": [], + "noValue": "-", + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "#ccccdc", + "value": null + } + ] + }, + "unit": "bytes" + }, + "overrides": [] + }, + "gridPos": { + "h": 6, + "w": 3, + "x": 21, + "y": 47 + }, + "id": 47, + "options": { + "colorMode": "value", + "graphMode": "area", + "justifyMode": "auto", + "orientation": "auto", + "reduceOptions": { + "calcs": [ + "lastNotNull" + ], + "fields": "", + "values": false + }, + "text": {}, + "textMode": "auto" + }, + "pluginVersion": "9.1.5", + "targets": [ + { + "datasource": "Prometheus", + "exemplar": true, + "expr": "export_storage_size_bytes{cluster=\"$cluster\", type=\"logical\"}", + "interval": "", + "legendFormat": "", + "queryType": "randomWalk", + "refId": "A" + } + ], + "title": "Export Size (Logical)", + "type": "stat" + }, + { + "collapsed": true, + "datasource": "Prometheus", + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 53 + }, + "id": 49, + "panels": [ + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "red", + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + } + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "Worker Count" + }, + "properties": [ + { + "id": "color", + 
"value": { + "fixedColor": "dark-red", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 7, + "w": 12, + "x": 0, + "y": 54 + }, + "id": 57, + "interval": "5s", + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(exec_executor_worker_count)", + "legendFormat": "Worker Count", + "range": true, + "refId": "A" + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(exec_active_job_count) OR on() vector(0)", + "hide": false, + "legendFormat": "Worker Load", + "range": true, + "refId": "B" + } + ], + "title": "Executor Worker Load", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineStyle": { + "fill": "solid" + }, + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "s" + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 12, + "x": 12, + "y": 54 + }, + "id": 68, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, 
+ "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(rate(action_backup_duration_seconds_sum_overall[5m])) / sum(rate(action_backup_ended_overall[5m]))", + "hide": false, + "legendFormat": "Backup", + "range": true, + "refId": "A" + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(rate(action_backup_cluster_duration_seconds_overall_sum[5m])) / sum(rate(action_backup_cluster_ended_overall[5m]))", + "hide": false, + "legendFormat": "Backup Cluster", + "range": true, + "refId": "B" + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(rate(action_export_duration_seconds_sum_overall[5m])) / sum(rate(action_export_ended_overall[5m]))", + "hide": false, + "legendFormat": "Export", + "range": true, + "refId": "C" + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(rate(action_import_duration_seconds_sum_overall[5m])) / sum(rate(action_import_ended_overall[5m]))", + "hide": false, + "legendFormat": "Import", + "range": true, + "refId": "D" + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(rate(action_report_duration_seconds_sum_overall[5m])) / sum(rate(action_report_ended_overall[5m]))", + "hide": false, + "legendFormat": "Report", + "range": true, + "refId": "E" + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(rate(action_retire_duration_seconds_sum_overall[5m])) / sum(rate(action_retire_ended_overall[5m]))", + "hide": false, + "legendFormat": "Retire", + "range": true, + "refId": "F" + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(rate(action_restore_duration_seconds_sum_overall[5m])) / sum(rate(action_restore_ended_overall[5m]))", + "hide": false, + "legendFormat": "Restore", + "range": true, + "refId": "G" + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(rate(action_restore_cluster_duration_seconds_sum_overall[5m])) / 
sum(rate(action_restore_cluster_ended_overall[5m]))", + "hide": false, + "legendFormat": "Restore Cluster", + "range": true, + "refId": "H" + } + ], + "title": "Average Action Duration", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "axisSoftMax": 0, + "axisSoftMin": 0, + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "decimals": 0, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "succeeded" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-green", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "failed" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-red", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "cancelled" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-orange", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "skipped" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-blue", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 7, + "w": 6, + "x": 0, + "y": 61 + }, + "id": 74, + 
"options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(round(increase(action_backup_ended_overall[1m:10s]))) by (state)", + "hide": false, + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Finished Backups", + "transformations": [], + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "axisSoftMax": 0, + "axisSoftMin": 0, + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "decimals": 0, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "succeeded" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-green", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "failed" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-red", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "cancelled" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-orange", + "mode": "fixed" + } + 
} + ] + }, + { + "matcher": { + "id": "byName", + "options": "skipped" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-blue", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 7, + "w": 6, + "x": 6, + "y": 61 + }, + "id": 69, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(round(increase(action_backup_cluster_ended_overall[1m:10s]))) by (state)", + "hide": false, + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Finished Cluster Backups", + "transformations": [], + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "axisSoftMax": 0, + "axisSoftMin": 0, + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "decimals": 0, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "succeeded" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-green", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "failed" + }, + 
"properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-red", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "cancelled" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-orange", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "skipped" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-blue", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 7, + "w": 6, + "x": 12, + "y": 61 + }, + "id": 75, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(round(increase(action_export_ended_overall[1m:10s]))) by (state)", + "hide": false, + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Finished Exports", + "transformations": [], + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "axisSoftMax": 0, + "axisSoftMin": 0, + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "decimals": 0, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": 
"none" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "succeeded" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-green", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "failed" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-red", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "cancelled" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-orange", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "skipped" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-blue", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 7, + "w": 6, + "x": 18, + "y": 61 + }, + "id": 76, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(round(increase(action_import_ended_overall[1m:10s]))) by (state)", + "hide": false, + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Finished Imports", + "transformations": [], + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "axisSoftMax": 0, + "axisSoftMin": 0, + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": 
{ + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "decimals": 0, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "succeeded" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-green", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "failed" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-red", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "cancelled" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-orange", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "skipped" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-blue", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 7, + "w": 6, + "x": 0, + "y": 68 + }, + "id": 77, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(round(increase(action_report_ended_overall[1m:10s]))) by (state)", + "hide": false, + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Finished Reports", + "transformations": [], + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "axisSoftMax": 0, + "axisSoftMin": 0, + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, 
+ "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "decimals": 0, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "succeeded" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-green", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "failed" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-red", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "cancelled" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-orange", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "skipped" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-blue", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 7, + "w": 6, + "x": 6, + "y": 68 + }, + "id": 79, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(round(increase(action_retire_ended_overall[1m:10s]))) by (state)", + "hide": false, + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Finished Retires", + "transformations": [], + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + 
"defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "axisSoftMax": 0, + "axisSoftMin": 0, + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "decimals": 0, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "succeeded" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-green", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "failed" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-red", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "cancelled" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-orange", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "skipped" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-blue", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 7, + "w": 6, + "x": 12, + "y": 68 + }, + "id": 80, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": 
"sum(round(increase(action_restore_ended_overall[1m:10s]))) by (state)", + "hide": false, + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Finished Restores", + "transformations": [], + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "axisSoftMax": 0, + "axisSoftMin": 0, + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "decimals": 0, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "none" + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "succeeded" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-green", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "failed" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-red", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "cancelled" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-orange", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "skipped" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "semi-dark-blue", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 7, + "w": 6, + "x": 18, + "y": 
68 + }, + "id": 78, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(round(increase(action_restore_cluster_ended_overall[1m:10s]))) by (state)", + "hide": false, + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Finished Cluster Restores", + "transformations": [], + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "s" + }, + "overrides": [] + }, + "gridPos": { + "h": 7, + "w": 24, + "x": 0, + "y": 75 + }, + "id": 63, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum(rate(limiter_request_seconds_sum{stage=\"hold\"}[5m])) by (operation) / sum(rate(limiter_request_seconds_count{stage=\"hold\"}[5m])) by (operation) ", + "legendFormat": "__auto", + "range": true, + "refId": "A" + } + ], + "title": "Rate Limiter 
- avg operation duration", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "fixedColor": "red", + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + } + }, + "overrides": [ + { + "matcher": { + "id": "byName", + "options": "Limit" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "dark-red", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "inflight" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "green", + "mode": "fixed" + } + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "pending" + }, + "properties": [ + { + "id": "color", + "value": { + "fixedColor": "yellow", + "mode": "fixed" + } + } + ] + } + ] + }, + "gridPos": { + "h": 7, + "w": 4.8, + "x": 0, + "y": 82 + }, + "id": 51, + "maxPerRow": 6, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "repeat": "operation", + "repeatDirection": "h", + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "limiter_inflight_count{operation=\"$operation\"}", + "legendFormat": "Inflight", + "range": true, + "refId": "A" + }, + { + 
"datasource": "Prometheus", + "editorMode": "code", + "expr": "limiter_pending_count{operation=\"$operation\"}", + "hide": false, + "legendFormat": "Pending", + "range": true, + "refId": "B" + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "limiter_inflight_limit_value{operation=\"$operation\"}", + "hide": false, + "legendFormat": "Limit", + "range": true, + "refId": "C" + } + ], + "title": "Rate Limiter - $operation", + "type": "timeseries" + } + ], + "targets": [ + { + "datasource": "Prometheus", + "refId": "A" + } + ], + "title": "Execution Control", + "type": "row" + }, + { + "collapsed": true, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 54 + }, + "id": 84, + "panels": [ + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": true, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "percentunit" + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 55 + }, + "id": 86, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": 
"sum(increase(action_export_transferred_bytes[5m:30s]))/sum((increase(action_export_processed_bytes[5m:30s])>0))", + "legendFormat": "Transferred/Processed across all actions", + "range": true, + "refId": "A" + } + ], + "title": "Transferred/Processed Ratio", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": true, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "percentunit" + }, + "overrides": [] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 55 + }, + "id": 88, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "(increase(action_export_transferred_bytes[5m:30s])/(increase(action_export_processed_bytes[5m:30s])>0))", + "legendFormat": "{{policy}}:{{app}}", + "range": true, + "refId": "A" + } + ], + "title": "Transferred/Processed Ratio per policy:app", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": 
"auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": true, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [ ], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "bytes" + }, + "overrides": [ ] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 63 + }, + "id": 89, + "options": { + "legend": { + "calcs": [ ], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "increase(action_export_transferred_bytes[5m:30s]) > 0", + "legendFormat": "{{policy}}:{{app}}", + "range": true, + "refId": "A" + } + ], + "title": "Transferred bytes per policy:app", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "auto", + "barAlignment": 0, + "drawStyle": "line", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "lineInterpolation": "linear", + "lineWidth": 1, + "pointSize": 5, + "scaleDistribution": { + "type": "linear" + }, + "showPoints": "auto", + "spanNulls": true, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [ ], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": 
"red", + "value": 80 + } + ] + }, + "unit": "bytes" + }, + "overrides": [ ] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 63 + }, + "id": 90, + "options": { + "legend": { + "calcs": [ ], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "increase(action_export_processed_bytes[5m:30s]) > 0", + "legendFormat": "{{policy}}:{{app}}", + "range": true, + "refId": "A" + } + ], + "title": "Processed bytes per policy:app", + "type": "timeseries" + } + ], + "title": "Data reduction", + "type": "row" + }, + { + "collapsed": true, + "gridPos": { + "h": 1, + "w": 24, + "x": 0, + "y": 55 + }, + "id": 1013, + "panels": [ + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisPlacement": "left", + "barAlignment": 0, + "drawStyle": "points", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "stepAfter", + "lineWidth": 1, + "pointSize": 4, + "scaleDistribution": { + "log": 2, + "type": "log" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "min": 0, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "s", + "unitScale": true + }, + "overrides": [ + { + "matcher": { + "id": "byRegexp", + "options": "/#.*/" + }, + "properties": [ + { + "id": "unit", + "value": "none" + }, + { + "id": "custom.axisPlacement", + "value": "right" + }, + { + "id": "decimals", + "value": 0 + }, + { + "id": 
"custom.scaleDistribution", + "value": { + "type": "linear" + } + }, + { + "id": "custom.drawStyle", + "value": "line" + }, + { + "id": "custom.lineInterpolation", + "value": "stepAfter" + }, + { + "id": "custom.showPoints", + "value": "never" + }, + { + "id": "custom.axisSoftMin", + "value": 0 + }, + { + "id": "custom.axisLabel", + "value": "# volumes" + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "#Volumes" + }, + "properties": [ + { + "id": "displayName", + "value": "# Volumes Under Transfer" + }, + { + "id": "custom.lineStyle", + "value": { + "fill": "solid" + } + }, + { + "id": "custom.lineWidth", + "value": 0.4 + }, + { + "id": "custom.lineInterpolation", + "value": "stepAfter" + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "#UploadSessionVolumes" + }, + "properties": [ + { + "id": "displayName", + "value": "# VBR Session Volumes" + }, + { + "id": "custom.lineWidth", + "value": 0 + }, + { + "id": "custom.fillOpacity", + "value": 25 + }, + { + "id": "color", + "value": { + "fixedColor": "dark-blue", + "mode": "shades" + } + } + ] + } + ] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 8 + }, + "id": 1006, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "disableTextWrap": false, + "editorMode": "code", + "expr": "sum (max_over_time(data_operation_volume_count{}[2m]))", + "fullMetaSearch": false, + "includeNullMetadata": true, + "instant": false, + "legendFormat": "#Volumes", + "range": true, + "refId": "VOLUME_COUNT", + "useBackend": false + }, + { + "datasource": "Prometheus", + "disableTextWrap": false, + "editorMode": "code", + "expr": "sum by (repo_type) (max_over_time(data_upload_session_volume_count{repo_type=\"VBR\"}[2m]))", + "fullMetaSearch": false, + "hide": false, + "includeNullMetadata": true, + "instant": false, + 
"legendFormat": "#UploadSessionVolumes", + "range": true, + "refId": "VBR_SESSION_COUNT", + "useBackend": false + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum by (data_format,operation,storage_class,repo_name) (rate(data_operation_normalized_duration_sum{}[2m])) / sum by (data_format,operation,storage_class,repo_name) (rate(data_operation_normalized_duration_count{}[2m]))", + "hide": false, + "instant": false, + "legendFormat": "{{operation}} {{storage_class}}/{{repo_name}} ({{data_format}})", + "range": true, + "refId": "NORMALIZED_DURATION_BY_STORAGE_CLASS_LOC" + } + ], + "title": "Normalized operation duration by storage class, location and data format (time/MiB)", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "left", + "barAlignment": 0, + "drawStyle": "points", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "stepAfter", + "lineWidth": 1, + "pointSize": 4, + "scaleDistribution": { + "log": 2, + "type": "log" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "min": 0, + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "s", + "unitScale": true + }, + "overrides": [ + { + "matcher": { + "id": "byRegexp", + "options": "/#.*/" + }, + "properties": [ + { + "id": "unit", + "value": "none" + }, + { + "id": "custom.axisPlacement", + "value": "right" + }, + { + "id": "decimals", + "value": 0 + }, + { + "id": "custom.scaleDistribution", + "value": { 
+ "type": "linear" + } + }, + { + "id": "custom.drawStyle", + "value": "line" + }, + { + "id": "custom.lineInterpolation", + "value": "stepAfter" + }, + { + "id": "custom.showPoints", + "value": "never" + }, + { + "id": "custom.axisSoftMin", + "value": 0 + }, + { + "id": "custom.axisLabel", + "value": "# volumes" + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "#Volumes" + }, + "properties": [ + { + "id": "displayName", + "value": "# Volumes Under Transfer" + }, + { + "id": "custom.lineStyle", + "value": { + "fill": "solid" + } + }, + { + "id": "custom.lineWidth", + "value": 0.4 + }, + { + "id": "custom.lineInterpolation", + "value": "stepAfter" + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "#UploadSessionVolumes" + }, + "properties": [ + { + "id": "displayName", + "value": "# VBR Session Volumes" + }, + { + "id": "custom.lineWidth", + "value": 0 + }, + { + "id": "custom.fillOpacity", + "value": 25 + }, + { + "id": "color", + "value": { + "fixedColor": "dark-blue", + "mode": "shades" + } + } + ] + } + ] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 8 + }, + "id": 1012, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "disableTextWrap": false, + "editorMode": "code", + "expr": "sum (max_over_time(data_operation_volume_count{}[2m]))", + "fullMetaSearch": false, + "includeNullMetadata": true, + "instant": false, + "legendFormat": "#Volumes", + "range": true, + "refId": "VOLUME_COUNT", + "useBackend": false + }, + { + "datasource": "Prometheus", + "disableTextWrap": false, + "editorMode": "code", + "expr": "sum by (repo_type) (max_over_time(data_upload_session_volume_count{repo_type=\"VBR\"}[2m]))", + "fullMetaSearch": false, + "hide": false, + "includeNullMetadata": true, + "instant": false, + "legendFormat": "#UploadSessionVolumes", + "range": 
true, + "refId": "VBR_SESSION_COUNT", + "useBackend": false + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "sum by (data_format,operation,namespace,pvc_name) (rate(data_operation_duration_sum{}[2m])) / sum by (data_format,operation,namespace,pvc_name) (rate(data_operation_duration_count{}[2m]))", + "hide": false, + "instant": false, + "legendFormat": "{{operation}} {{namespace}}/{{pvc_name}} ({{data_format}})", + "range": true, + "refId": "DURATION_BY_PVC" + } + ], + "title": "Operation duration by pvc and data format", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "left", + "barAlignment": 0, + "drawStyle": "points", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "stepAfter", + "lineWidth": 1, + "pointSize": 4, + "scaleDistribution": { + "log": 2, + "type": "log" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "binBps", + "unitScale": true + }, + "overrides": [ + { + "matcher": { + "id": "byRegexp", + "options": "/#.*/" + }, + "properties": [ + { + "id": "unit", + "value": "none" + }, + { + "id": "custom.axisPlacement", + "value": "right" + }, + { + "id": "decimals", + "value": 0 + }, + { + "id": "custom.scaleDistribution", + "value": { + "type": "linear" + } + }, + { + "id": "custom.drawStyle", + "value": "line" + }, + { + "id": "custom.lineInterpolation", + "value": "stepAfter" + }, + { + "id": 
"custom.showPoints", + "value": "never" + }, + { + "id": "custom.axisSoftMin", + "value": 0 + }, + { + "id": "custom.axisLabel", + "value": "# volumes" + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "#Volumes" + }, + "properties": [ + { + "id": "displayName", + "value": "# Volumes Under Transfer" + }, + { + "id": "custom.lineStyle", + "value": { + "fill": "solid" + } + }, + { + "id": "custom.lineWidth", + "value": 0.4 + }, + { + "id": "custom.lineInterpolation", + "value": "stepAfter" + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "#UploadSessionVolumes" + }, + "properties": [ + { + "id": "displayName", + "value": "# VBR Session Volumes" + }, + { + "id": "custom.lineWidth", + "value": 0 + }, + { + "id": "custom.fillOpacity", + "value": 25 + }, + { + "id": "color", + "value": { + "fixedColor": "dark-blue", + "mode": "shades" + } + } + ] + } + ] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 0, + "y": 16 + }, + "id": 1011, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "disableTextWrap": false, + "editorMode": "code", + "expr": "sum (max_over_time(data_operation_volume_count{}[2m]))", + "fullMetaSearch": false, + "hide": false, + "includeNullMetadata": true, + "instant": false, + "legendFormat": "#Volumes", + "range": true, + "refId": "VOLUME_COUNT", + "useBackend": false + }, + { + "datasource": "Prometheus", + "disableTextWrap": false, + "editorMode": "code", + "expr": "sum by (repo_type) (max_over_time(data_upload_session_volume_count{repo_type=\"VBR\"}[2m]))", + "fullMetaSearch": false, + "hide": false, + "includeNullMetadata": true, + "instant": false, + "legendFormat": "#UploadSessionVolumes", + "range": true, + "refId": "VBR_SESSION_COUNT", + "useBackend": false + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "avg by 
(data_format, operation, storage_class, repo_name) (rate(data_operation_bytes{}[$__rate_interval]))", + "hide": false, + "instant": false, + "legendFormat": "{{operation}} {{storage_class}}/{{repo_name}} ({{data_format}})", + "range": true, + "refId": "RATE_BY_STORAGE_CLASS" + } + ], + "title": "Operation transfer rate by storage class, location and data format", + "type": "timeseries" + }, + { + "datasource": "Prometheus", + "description": "", + "fieldConfig": { + "defaults": { + "color": { + "mode": "palette-classic" + }, + "custom": { + "axisBorderShow": false, + "axisCenteredZero": false, + "axisColorMode": "text", + "axisLabel": "", + "axisPlacement": "left", + "barAlignment": 0, + "drawStyle": "points", + "fillOpacity": 0, + "gradientMode": "none", + "hideFrom": { + "legend": false, + "tooltip": false, + "viz": false + }, + "insertNulls": false, + "lineInterpolation": "stepAfter", + "lineWidth": 1, + "pointSize": 4, + "scaleDistribution": { + "log": 2, + "type": "log" + }, + "showPoints": "auto", + "spanNulls": false, + "stacking": { + "group": "A", + "mode": "none" + }, + "thresholdsStyle": { + "mode": "off" + } + }, + "mappings": [], + "thresholds": { + "mode": "absolute", + "steps": [ + { + "color": "green", + "value": null + }, + { + "color": "red", + "value": 80 + } + ] + }, + "unit": "binBps", + "unitScale": true + }, + "overrides": [ + { + "matcher": { + "id": "byRegexp", + "options": "/#.*/" + }, + "properties": [ + { + "id": "unit", + "value": "none" + }, + { + "id": "custom.axisPlacement", + "value": "right" + }, + { + "id": "decimals", + "value": 0 + }, + { + "id": "custom.scaleDistribution", + "value": { + "type": "linear" + } + }, + { + "id": "custom.drawStyle", + "value": "line" + }, + { + "id": "custom.lineInterpolation", + "value": "stepAfter" + }, + { + "id": "custom.showPoints", + "value": "never" + }, + { + "id": "custom.axisSoftMin", + "value": 0 + }, + { + "id": "custom.axisLabel", + "value": "# volumes" + } + ] + }, + { + "matcher": { + 
"id": "byName", + "options": "#Volumes" + }, + "properties": [ + { + "id": "displayName", + "value": "# Volumes Under Transfer" + }, + { + "id": "custom.lineStyle", + "value": { + "fill": "solid" + } + }, + { + "id": "custom.lineWidth", + "value": 0.4 + }, + { + "id": "custom.lineInterpolation", + "value": "stepAfter" + } + ] + }, + { + "matcher": { + "id": "byName", + "options": "#UploadSessionVolumes" + }, + "properties": [ + { + "id": "displayName", + "value": "# VBR Session Volumes" + }, + { + "id": "custom.lineWidth", + "value": 0 + }, + { + "id": "custom.fillOpacity", + "value": 25 + }, + { + "id": "color", + "value": { + "fixedColor": "dark-blue", + "mode": "shades" + } + } + ] + } + ] + }, + "gridPos": { + "h": 8, + "w": 12, + "x": 12, + "y": 16 + }, + "id": 1004, + "options": { + "legend": { + "calcs": [], + "displayMode": "list", + "placement": "bottom", + "showLegend": true + }, + "tooltip": { + "mode": "single", + "sort": "none" + } + }, + "targets": [ + { + "datasource": "Prometheus", + "disableTextWrap": false, + "editorMode": "code", + "expr": "sum (max_over_time(data_operation_volume_count{}[2m]))", + "fullMetaSearch": false, + "hide": false, + "includeNullMetadata": true, + "instant": false, + "legendFormat": "#Volumes", + "range": true, + "refId": "VOLUME_COUNT", + "useBackend": false + }, + { + "datasource": "Prometheus", + "disableTextWrap": false, + "editorMode": "code", + "expr": "sum by (repo_type) (max_over_time(data_upload_session_volume_count{repo_type=\"VBR\"}[2m]))", + "fullMetaSearch": false, + "hide": false, + "includeNullMetadata": true, + "instant": false, + "legendFormat": "#UploadSessionVolumes", + "range": true, + "refId": "VBR_SESSION_COUNT", + "useBackend": false + }, + { + "datasource": "Prometheus", + "editorMode": "code", + "expr": "avg by (data_format, operation, namespace, pvc_name) (rate(data_operation_bytes{}[$__rate_interval]))", + "hide": false, + "instant": false, + "legendFormat": "{{operation}} 
{{namespace}}/{{pvc_name}} ({{data_format}})", + "range": true, + "refId": "RATE_BY_PVC" + } + ], + "title": "Operation transfer rate by pvc and data format", + "type": "timeseries" + } + ], + "title": "Data transfer operations", + "type": "row" + } + ], + "schemaVersion": 39, + "style": "dark", + "tags": [], + "templating": { + "list": [ + { + "hide": 2, + "label": "Cluster", + "name": "cluster", + "query": "", + "skipUrlSync": false, + "type": "constant" + }, + { + "current": { + "selected": false, + "text": "All", + "value": "$__all" + }, + "datasource": "Prometheus", + "definition": "limiter_pending_count", + "description": "", + "hide": 2, + "includeAll": true, + "label": "operation", + "multi": false, + "name": "operation", + "options": [], + "query": { + "query": "limiter_pending_count", + "refId": "StandardVariableQuery" + }, + "refresh": 1, + "regex": "/operation=\\\"([\\w]+)\\\"/", + "skipUrlSync": false, + "sort": 0, + "type": "query" + } + ] + }, + "time": { + "from": "now-24h", + "to": "now" + }, + "timepicker": {}, + "timezone": "", + "title": "K10 Dashboard", + "uid": "8Ebb3xS7k", + "version": 2 +} \ No newline at end of file diff --git a/charts/kasten/k10/7.0.1101/license b/charts/kasten/k10/7.0.1101/license new file mode 100644 index 0000000000..fb23dbb826 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/license @@ -0,0 +1 @@ 
+Y3VzdG9tZXJOYW1lOiBzdGFydGVyLWxpY2Vuc2UKZGF0ZUVuZDogJzIxMDAtMDEtMDFUMDA6MDA6MDAuMDAwWicKZGF0ZVN0YXJ0OiAnMjAyMC0wMS0wMVQwMDowMDowMC4wMDBaJwpmZWF0dXJlczogbnVsbAppZDogc3RhcnRlci00ZjE4NDJjMC0wNzQ1LTQxYTUtYWFhNy1hMDFkNzQ4YjFjMzAKcHJvZHVjdDogSzEwCnJlc3RyaWN0aW9uczoKICBub2RlczogJzEwJwpzZXJ2aWNlQWNjb3VudEtleTogbnVsbAp2ZXJzaW9uOiB2MS4wLjAKc2lnbmF0dXJlOiBqT1N5NDNQZG5ZMFVCZitValhOdU1oUEFSb1J2ZkpzWElQWnhBWFNCaGpKbUwxNlNodi8vVzgyV2NMeGZJM25NZTA0TThtRU03eThPcnArQks1ekxpeFd3clpncmZSbTBEaWlELyttRjR5U3l1Rko0QW1neHV6NDhQTmdnU1VyWUM3S1FVcFYxSEJZV1ZaNm9udEJDeE1rVWtkaDVqdzZJdWMzN3lDaktIYy92bWZaenBzTVhybmxUdGhha2RjVVk0azNyVHJDa3VDcnFUMkpjM1o1amFGalZSZW1Zd1NBVXpkRldNazdQdkp3eHVFdE5rNitPV0pCVERQbnNYdldKdjdNc3NneDBJTmdtNUlJWDRVeEVhQWI4QXpTNkMyQ21XQzlhWURFTDg1aEFpeWhONXUwU0tQczA3ZXB0R1VHYmc3cWtPUVN0d0NhcDFKUURvbDVDT0E9PQo= diff --git a/charts/kasten/k10/7.0.1101/questions.yaml b/charts/kasten/k10/7.0.1101/questions.yaml new file mode 100644 index 0000000000..713fcb1162 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/questions.yaml @@ -0,0 +1,295 @@ +questions: +# ======================== +# SECRETS And Configuration +# ======================== + +### AWS Configuration + +- variable: secrets.awsAccessKeyId + description: "AWS access key ID (required for AWS deployment)" + type: password + label: AWS Access Key ID + required: false + group: "AWS Configuration" + +- variable: secrets.awsSecretAccessKey + description: "AWS access key secret (required for AWS deployment)" + type: password + label: AWS Secret Access Key + required: false + group: "AWS Configuration" + +- variable: secrets.awsIamRole + description: "ARN of the AWS IAM role assumed by K10 to perform any AWS operation." 
+ type: string + label: ARN of the AWS IAM role + required: false + group: "AWS Configuration" + +- variable: awsConfig.assumeRoleDuration + description: "Duration of a session token generated by AWS for an IAM role" + type: string + label: Role Duration + required: false + default: "" + group: "AWS Configuration" + +- variable: awsConfig.efsBackupVaultName + description: "Specifies the AWS EFS backup vault name" + type: string + label: EFS Backup Vault Name + required: false + default: "k10vault" + group: "AWS Configuration" + +### Google Cloud Configuration + +- variable: secrets.googleApiKey + description: "Required if the cluster is deployed on Google Cloud" + type: multiline + label: Non-default base64 encoded GCP Service Account key file + required: false + group: "GoogleApi Configuration" + +### Azure Configuration + +- variable: secrets.azureTenantId + description: "Azure tenant ID (required for Azure deployment)" + type: string + label: Tenant ID + required: false + group: "Azure Configuration" + +- variable: secrets.azureClientId + description: "Azure Service App ID" + type: password + label: Service App ID + required: false + group: "Azure Configuration" + +- variable: secrets.azureClientSecret + description: "Azure Service App secret" + type: password + label: Service App secret + required: false + group: "Azure Configuration" + +- variable: secrets.azureResourceGroup + description: "Resource Group name that was created for the Kubernetes cluster" + type: string + label: Resource Group + required: false + group: "Azure Configuration" + +- variable: secrets.azureSubscriptionID + description: "Subscription ID in your Azure tenant" + type: string + label: Subscription ID + required: false + group: "Azure Configuration" + +- variable: secrets.azureResourceMgrEndpoint + description: "Resource management endpoint for the Azure Stack instance" + type: string + label: Resource management endpoint + required: false + group: "Azure Configuration" + +- variable: 
secrets.azureADEndpoint + description: "Azure Active Directory login endpoint" + type: string + label: Active Directory login endpoint + required: false + group: "Azure Configuration" + +- variable: secrets.azureADResourceID + description: "Azure Active Directory resource ID to obtain AD tokens" + type: string + label: Active Directory resource ID + required: false + group: "Azure Configuration" + +# ======================== +# Authentication +# ======================== + +- variable: auth.basicAuth.enabled + description: "Configures basic authentication for the K10 dashboard" + type: boolean + label: Enable Basic Authentication + required: false + group: "Authentication" + show_subquestion_if: true + subquestions: + - variable: auth.basicAuth.htpasswd + description: "A username and password pair separated by a colon character" + type: password + label: Authentication Details (htpasswd) + - variable: auth.basicAuth.secretName + description: "Name of an existing Secret that contains a file generated with htpasswd" + type: string + label: Secret Name + +- variable: auth.tokenAuth.enabled + description: "Configures token based authentication for the K10 dashboard" + type: boolean + label: Enable Token Based Authentication + required: false + group: "Authentication" + +- variable: auth.oidcAuth.enabled + description: "Configures Open ID Connect based authentication for the K10 dashboard" + type: boolean + label: Enable OpenID Connect Based Authentication + required: false + group: "Authentication" + show_subquestion_if: true + subquestions: + - variable: auth.oidcAuth.providerURL + description: "URL for the OIDC Provider" + type: string + label: OIDC Provider URL + - variable: auth.oidcAuth.redirectURL + description: "URL for the K10 gateway Provider" + type: string + label: OIDC Redirect URL + - variable: auth.oidcAuth.scopes + description: "Space separated OIDC scopes required for userinfo. 
Example: `profile email`" + type: string + label: OIDC scopes + - variable: auth.oidcAuth.prompt + description: "The type of prompt to be used during authentication (none, consent, login, or select_account)" + type: enum + options: + - none + - consent + - login + - select_account + default: none + label: The type of prompt to be used during authentication (none, consent, login, or select_account) + - variable: auth.oidcAuth.clientID + description: "Client ID given by the OIDC provider for K10" + type: password + label: OIDC Client ID + - variable: auth.oidcAuth.clientSecret + description: "Client secret given by the OIDC provider for K10" + type: password + label: OIDC Client Secret + - variable: auth.oidcAuth.usernameClaim + description: "The claim to be used as the username" + type: string + label: OIDC UserName Claim + - variable: auth.oidcAuth.usernamePrefix + description: "Prefix that has to be used with the username obtained from the username claim" + type: string + label: OIDC UserName Prefix + - variable: auth.oidcAuth.groupClaim + description: "Name of a custom OpenID Connect claim for specifying user groups" + type: string + label: OIDC group Claim + - variable: auth.oidcAuth.groupPrefix + description: "All groups will be prefixed with this value to prevent conflicts" + type: string + label: OIDC group Prefix + +# ======================== +# External Gateway +# ======================== + +- variable: externalGateway.create + description: "Configures an external gateway for K10 API services" + type: boolean + label: Create External Gateway + required: false + group: "External Gateway" + show_subquestion_if: true + subquestions: + - variable: externalGateway.annotations + description: "Standard annotations for the services" + type: multiline + default: "" + label: Annotation + - variable: externalGateway.fqdn.name + description: "Domain name for the K10 API services" + type: string + label: Domain Name + - variable: externalGateway.fqdn.type + description: 
"Supported gateway type: `route53-mapper` or `external-dns`" + type: string + label: Gateway Type route53-mapper or external-dns + - variable: externalGateway.awsSSLCertARN + description: "ARN for the AWS ACM SSL certificate used in the K10 API server" + type: multiline + label: ARN for the AWS ACM SSL certificate + +# ======================== +# Storage Management +# ======================== + +- variable: global.persistence.storageClass + label: StorageClass Name + description: "Specifies StorageClass Name to be used for PVCs" + type: string + required: false + default: "" + group: "Storage Management" + +- variable: prometheus.server.persistentVolume.storageClass + type: string + label: StorageClass Name for Prometheus PVC + description: "StorageClassName used to create Prometheus PVC. Setting this option overwrites global StorageClass value" + default: "" + required: false + group: "Storage Management" + +- variable: prometheus.server.persistentVolume.enabled + type: boolean + label: Enable PVC for Prometheus server + description: "If true, K10 Prometheus server will create a Persistent Volume Claim" + default: true + required: false + group: "Storage Management" + +- variable: global.persistence.enabled + type: boolean + label: Storage Enabled + description: "If true, K10 will use Persistent Volume Claim" + default: true + required: false + group: "Storage Management" + +# ======================== +# Service Account +# ======================== + +- variable: serviceAccount.name + description: "Name of a service account in the target namespace that has cluster-admin permissions. This is needed for the K10 to be able to protect cluster resources." 
+ type: string + label: Service Account Name + required: false + group: "Service Account" + +# ======================== +# License +# ======================== + +- variable: license + description: "License string obtained from Kasten" + type: multiline + label: License String + group: "License" +- variable: eula.accept + description: "Whether to accept the EULA before installation" + type: boolean + label: Accept EULA before installation + group: "License" + show_subquestion_if: true + subquestions: + - variable: eula.company + description: "Company name. Required field if EULA is accepted" + type: string + label: Company Name + - variable: eula.email + description: "Contact email. Required field if EULA is accepted" + type: string + label: Contact Email diff --git a/charts/kasten/k10/7.0.1101/templates/NOTES.txt b/charts/kasten/k10/7.0.1101/templates/NOTES.txt new file mode 100644 index 0000000000..7c2354ce02 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/NOTES.txt @@ -0,0 +1,106 @@ +Thank you for installing Kasten’s K10 Data Management Platform {{ .Chart.Version }}! +{{- if .Values.fips.enabled }} + +You are operating in FIPS mode. +{{- end }} + +Documentation can be found at https://docs.kasten.io/. + +How to access the K10 Dashboard: + +{{- if .Values.ingress.create }} + +You are using the system's default ingress controller. Please ask your +administrator for instructions on how to access the cluster.
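As an aside (not part of the chart), the `default` pipelines the NOTES template uses to build the WebUI location can be sketched in Python. The `host` and `url_path` parameters below are hypothetical stand-ins for `.Values.ingress.host` and `.Values.ingress.urlPath`; when either is unset, the template falls back to a placeholder host and the release name, respectively:

```python
def dashboard_url(release_name, host=None, url_path=None, tls=True):
    # Mirror Helm's `default` function: use the fallback when the value is empty.
    scheme = "https" if tls else "http"
    host = host or "Your ingress endpoint"
    path = url_path or release_name
    return f"{scheme}://{host}/{path}"

print(dashboard_url("k10", host="k10.example.com"))  # https://k10.example.com/k10
```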
+ +WebUI location: https://{{ default "Your ingress endpoint" .Values.ingress.host }}/{{ default .Release.Name .Values.ingress.urlPath }} + +In addition, +{{- end }} + +{{- if .Values.route.enabled }} +WebUI location: https://{{ default "k10-route endpoint" .Values.route.host}}/{{ default .Release.Name .Values.route.path }}/ + +In addition, +{{- end }} + +{{- if .Values.externalGateway.create }} +{{- if .Values.externalGateway.fqdn.name }} + +The K10 Dashboard is accessible via {{ if or .Values.secrets.tlsSecret (and .Values.secrets.apiTlsCrt .Values.secrets.apiTlsKey) .Values.externalGateway.awsSSLCertARN }}https{{ else }}http{{ end }}://{{ .Values.externalGateway.fqdn.name }}/{{ .Release.Name }}/#/ + +In addition, +{{- else }} + +The K10 Dashboard is accessible via a LoadBalancer. Find the service's EXTERNAL IP using: + `kubectl get svc gateway-ext --namespace {{ .Release.Namespace }} -o wide` +and use it in the following URL: + `http://SERVICE_EXTERNAL_IP/{{ .Release.Name }}/#/` + +In addition, +{{- end }} +{{- end }} + +To establish a connection to it use the following `kubectl` command: + +`kubectl --namespace {{ .Release.Namespace }} port-forward service/gateway 8080:{{ .Values.gateway.service.externalPort }}` + +The Kasten dashboard will be available at: `http{{ if or .Values.secrets.tlsSecret (and .Values.secrets.apiTlsCrt .Values.secrets.apiTlsKey) .Values.externalGateway.awsSSLCertARN }}s{{ end }}://127.0.0.1:8080/{{ .Release.Name }}/#/` +{{ if and ( .Values.metering.awsManagedLicense ) ( not .Values.metering.licenseConfigSecretName ) }} + +The IAM Role created during installation needs to have permissions that allow K10 to +perform operations on EBS and, if needed, EFS and S3. Please create a policy +with required permissions, and use the commands below to attach the policy to +the service account.
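For clarity, the `awk -F '/' '{ print $(NF) }'` step in the commands that follow simply takes the last `/`-separated segment of the role ARN annotation. A hedged Python equivalent (the ARN value is hypothetical, for illustration only):

```python
def role_name_from_arn(arn: str) -> str:
    # Equivalent of `awk -F '/' '{ print $(NF) }'`: return the last '/'-separated field.
    return arn.rsplit("/", 1)[-1]

# Hypothetical role ARN, not taken from any real account.
print(role_name_from_arn("arn:aws:iam::123456789012:role/k10-eks-role"))  # k10-eks-role
```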
+ +`ROLE_NAME=$(kubectl get serviceaccount {{ .Values.serviceAccount.name }} -n {{ .Release.Namespace }} -ojsonpath="{.metadata.annotations['eks\.amazonaws\.com/role-arn']}" | awk -F '/' '{ print $(NF) }')` +`aws iam attach-role-policy --role-name "${ROLE_NAME}" --policy-arn ` + +Refer to `https://docs.kasten.io/latest/install/aws-containers-anywhere/aws-containers-anywhere.html#attaching-permissions-for-eks-installations` +for more information. + +{{ end }} + +{{- if .Values.restore }} +{{- if or (empty .Values.restore.copyImagePullSecrets) (.Values.restore.copyImagePullSecrets) }} +-------------------- +Removal warning: The helm field `restore.copyImagePullSecrets` has been removed in version 6.0.12. K10 no longer copies the `imagePullSecret` to the application namespace. +-------------------- +{{- end }} +{{- end }} + +{{- if or (not (empty .Values.garbagecollector.importRunActions)) (not (empty .Values.garbagecollector.backupRunActions)) (not (empty .Values.garbagecollector.retireActions)) }} +Deprecation warning: The `garbagecollector.importRunActions`, `garbagecollector.backupRunActions`, `garbagecollector.retireActions` +blocks within the helm chart values have been replaced with `garbagecollector.actions`. +{{- end }} + +{{- if .Values.secrets.azureADEndpoint }} +-------------------- +Deprecation warning: The helm field `secrets.azureADEndpoint` is deprecated and will be removed in an upcoming release; please use the corresponding field `secrets.microsoftEntraIDEndpoint` instead.
+-------------------- +{{- end }} + + +{{- if .Values.secrets.azureADResourceID }} +-------------------- +Deprecation warning: The helm field `secrets.azureADResourceID` is deprecated and will be removed in an upcoming release; please use the corresponding field `secrets.microsoftEntraIDResourceID` instead. +-------------------- +{{- end }} + +{{- if .Values.grafana.enabled }} +-------------------- +Deprecation warning: Starting with release 7.5.0, Grafana will no longer be included in the Veeam Kasten installation process. Upon upgrading to version 7.5.0, the integrated version of Grafana will be removed. It is important to install Grafana separately and follow the procedure described in our knowledge base article (https://www.veeam.com/kb4635) to configure the Kasten dashboards and alerts before upgrading Kasten to version 7.5.0. +-------------------- +{{- end }} + +{{- if or .Values.kanisterPodCustomLabels .Values.kanisterPodCustomAnnotations }} +-------------------- +Deprecation warning: The Helm values `kanisterPodCustomLabels` and `kanisterPodCustomAnnotations` are deprecated and will be removed in an upcoming release. Please use `global.podLabels` and `global.podAnnotations` to set labels and annotations on all the Kasten pods globally. +-------------------- +{{- end }} + +{{- if or .Values.secrets.apiTlsCrt .Values.secrets.apiTlsKey }} +-------------------- +Deprecation warning: The Helm values `secrets.apiTlsCrt` and `secrets.apiTlsKey` are deprecated and will be removed in an upcoming release. Please use `secrets.tlsSecret` to specify the name of a secret of type `kubernetes.io/tls`. This reduces the security risk of caching the certificates and keys in the shell history.
+-------------------- +{{- end }} \ No newline at end of file diff --git a/charts/kasten/k10/7.0.1101/templates/_definitions.tpl b/charts/kasten/k10/7.0.1101/templates/_definitions.tpl new file mode 100644 index 0000000000..4e26d6df4f --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/_definitions.tpl @@ -0,0 +1,238 @@ +{{/* Code generated automatically. DO NOT EDIT. */}} +{{/* K10 services can be disabled by customers via helm value based feature flags. +Therefore, fetching of a list or yaml with service names should be done with the get.enabled* helper functions. +For example, the k10.restServices list can be fetched with get.enabledRestServices */}} +{{- define "k10.additionalServices" -}}frontend kanister{{- end -}} +{{- define "k10.restServices" -}}admin auth bloblifecyclemanager catalog controllermanager crypto dashboardbff events executor garbagecollector jobs logging metering repositories state vbrintegrationapi{{- end -}} +{{- define "k10.services" -}}aggregatedapis gateway{{- end -}} +{{- define "k10.exposedServices" -}}auth dashboardbff vbrintegrationapi{{- end -}} +{{- define "k10.statelessServices" -}}admin aggregatedapis auth bloblifecyclemanager controllermanager crypto dashboardbff events executor garbagecollector repositories gateway state vbrintegrationapi{{- end -}} +{{- define "k10.colocatedServices" -}} +admin: + port: 8001 + primary: state +bloblifecyclemanager: + port: 8001 + primary: crypto +events: + port: 8002 + primary: state +garbagecollector: + port: 8002 + primary: crypto +repositories: + port: 8003 + primary: crypto +vbrintegrationapi: + port: 8001 + primary: dashboardbff +{{- end -}} +{{- define "k10.colocatedServiceLookup" -}} +crypto: +- garbagecollector +- repositories +- bloblifecyclemanager +dashboardbff: +- vbrintegrationapi +state: +- admin +- events +{{- end -}} +{{- define "k10.aggregatedAPIs" -}}actions apps repositories vault{{- end -}} +{{- define "k10.configAPIs" -}}config{{- end -}} +{{- define "k10.profiles" 
-}}profiles{{- end -}} +{{- define "k10.policies" -}}policies{{- end -}} +{{- define "k10.policypresets" -}}policypresets{{- end -}} +{{- define "k10.transformsets" -}}transformsets{{- end -}} +{{- define "k10.blueprintbindings" -}}blueprintbindings{{- end -}} +{{- define "k10.auditconfigs" -}}auditconfigs{{- end -}} +{{- define "k10.storagesecuritycontexts" -}}storagesecuritycontexts{{- end -}} +{{- define "k10.storagesecuritycontextbindings" -}}storagesecuritycontextbindings{{- end -}} +{{- define "k10.reportingAPIs" -}}reporting{{- end -}} +{{- define "k10.distAPIs" -}}dist{{- end -}} +{{- define "k10.actionsAPIs" -}}actions{{- end -}} +{{- define "k10.backupActions" -}}backupactions{{- end -}} +{{- define "k10.backupActionsDetails" -}}backupactions/details{{- end -}} +{{- define "k10.reportActions" -}}reportactions{{- end -}} +{{- define "k10.reportActionsDetails" -}}reportactions/details{{- end -}} +{{- define "k10.storageRepositories" -}}storagerepositories{{- end -}} +{{- define "k10.restoreActions" -}}restoreactions{{- end -}} +{{- define "k10.restoreActionsDetails" -}}restoreactions/details{{- end -}} +{{- define "k10.importActions" -}}importactions{{- end -}} +{{- define "k10.exportActions" -}}exportactions{{- end -}} +{{- define "k10.exportActionsDetails" -}}exportactions/details{{- end -}} +{{- define "k10.retireActions" -}}retireactions{{- end -}} +{{- define "k10.runActions" -}}runactions{{- end -}} +{{- define "k10.runActionsDetails" -}}runactions/details{{- end -}} +{{- define "k10.backupClusterActions" -}}backupclusteractions{{- end -}} +{{- define "k10.backupClusterActionsDetails" -}}backupclusteractions/details{{- end -}} +{{- define "k10.restoreClusterActions" -}}restoreclusteractions{{- end -}} +{{- define "k10.restoreClusterActionsDetails" -}}restoreclusteractions/details{{- end -}} +{{- define "k10.cancelActions" -}}cancelactions{{- end -}} +{{- define "k10.upgradeActions" -}}upgradeactions{{- end -}} +{{- define "k10.appsAPIs" -}}apps{{- end 
-}} +{{- define "k10.restorePoints" -}}restorepoints{{- end -}} +{{- define "k10.restorePointsDetails" -}}restorepoints/details{{- end -}} +{{- define "k10.clusterRestorePoints" -}}clusterrestorepoints{{- end -}} +{{- define "k10.clusterRestorePointsDetails" -}}clusterrestorepoints/details{{- end -}} +{{- define "k10.applications" -}}applications{{- end -}} +{{- define "k10.applicationsDetails" -}}applications/details{{- end -}} +{{- define "k10.vaultAPIs" -}}vault{{- end -}} +{{- define "k10.passkey" -}}passkeys{{- end -}} +{{- define "k10.authAPIs" -}}auth{{- end -}} +{{- define "k10.defaultConcurrentSnapshotConversions" -}}3{{- end -}} +{{- define "k10.defaultConcurrentWorkloadSnapshots" -}}5{{- end -}} +{{- define "k10.defaultK10DataStoreParallelUpload" -}}8{{- end -}} +{{- define "k10.defaultK10DataStoreGeneralContentCacheSizeMB" -}}0{{- end -}} +{{- define "k10.defaultK10DataStoreGeneralMetadataCacheSizeMB" -}}500{{- end -}} +{{- define "k10.defaultK10DataStoreRestoreContentCacheSizeMB" -}}500{{- end -}} +{{- define "k10.defaultK10DataStoreRestoreMetadataCacheSizeMB" -}}500{{- end -}} +{{- define "k10.defaultK10BackupBufferFileHeadroomFactor" -}}1.1{{- end -}} +{{- define "k10.defaultK10LimiterGenericVolumeSnapshots" -}}10{{- end -}} +{{- define "k10.defaultK10LimiterGenericVolumeCopies" -}}10{{- end -}} +{{- define "k10.defaultK10LimiterGenericVolumeRestores" -}}10{{- end -}} +{{- define "k10.defaultK10LimiterCsiSnapshots" -}}10{{- end -}} +{{- define "k10.defaultK10LimiterImageCopies" -}}10{{- end -}} +{{- define "k10.defaultK10LimiterProviderSnapshots" -}}10{{- end -}} +{{- define "k10.defaultK10GCDaemonPeriod" -}}21600{{- end -}} +{{- define "k10.defaultK10GCKeepMaxActions" -}}1000{{- end -}} +{{- define "k10.defaultK10GCActionsEnabled" -}}false{{- end -}} +{{- define "k10.defaultK10ExecutorWorkerCount" -}}8{{- end -}} +{{- define "k10.defaultK10ExecutorMaxConcurrentRestoreCsiSnapshots" -}}3{{- end -}} +{{- define 
"k10.defaultK10ExecutorMaxConcurrentRestoreGenericVolumeSnapshots" -}}3{{- end -}} +{{- define "k10.defaultK10ExecutorMaxConcurrentRestoreWorkloads" -}}3{{- end -}} +{{- define "k10.defaultAssumeRoleDuration" -}}60m{{- end -}} +{{- define "k10.defaultKanisterBackupTimeout" -}}45{{- end -}} +{{- define "k10.defaultKanisterRestoreTimeout" -}}600{{- end -}} +{{- define "k10.defaultKanisterDeleteTimeout" -}}45{{- end -}} +{{- define "k10.defaultKanisterHookTimeout" -}}20{{- end -}} +{{- define "k10.defaultKanisterCheckRepoTimeout" -}}20{{- end -}} +{{- define "k10.defaultKanisterStatsTimeout" -}}20{{- end -}} +{{- define "k10.defaultKanisterEFSPostRestoreTimeout" -}}45{{- end -}} +{{- define "k10.cloudProviders" -}}aws google azure{{- end -}} +{{- define "k10.serviceResources" -}} +aggregatedapis-svc: + aggregatedapis-svc: + requests: + cpu: 90m + memory: 180Mi +auth-svc: + auth-svc: + requests: + cpu: 2m + memory: 30Mi +catalog-svc: + catalog-svc: + requests: + cpu: 200m + memory: 780Mi + kanister-sidecar: + limits: + cpu: 1200m + memory: 800Mi + requests: + cpu: 100m + memory: 800Mi +controllermanager-svc: + controllermanager-svc: + requests: + cpu: 5m + memory: 30Mi +crypto-svc: + bloblifecyclemanager-svc: + requests: + cpu: 10m + memory: 40Mi + crypto-svc: + requests: + cpu: 1m + memory: 30Mi + events-svc: + requests: + cpu: 3m + memory: 500Mi + garbagecollector-svc: + requests: + cpu: 3m + memory: 100Mi +dashboardbff-svc: + dashboardbff-svc: + requests: + cpu: 8m + memory: 40Mi + repositories-svc: + requests: + cpu: 10m + memory: 40Mi +executor-svc: + executor-svc: + requests: + cpu: 3m + memory: 50Mi + tools: + requests: + cpu: 1m + memory: 2Mi +frontend-svc: + frontend-svc: + requests: + cpu: 1m + memory: 40Mi +jobs-svc: + jobs-svc: + requests: + cpu: 30m + memory: 380Mi +kanister-svc: + kanister-svc: + requests: + cpu: 1m + memory: 30Mi +logging-svc: + logging-svc: + requests: + cpu: 2m + memory: 40Mi +metering-svc: + metering-svc: + requests: + cpu: 2m + 
memory: 30Mi +state-svc: + admin-svc: + requests: + cpu: 2m + memory: 160Mi + state-svc: + requests: + cpu: 2m + memory: 30Mi +{{- end -}} +{{- define "k10.multiClusterVersion" -}}2.5{{- end -}} +{{- define "k10.mcExternalPort" -}}18000{{- end -}} +{{- define "k10.defaultKubeVirtVMsUnfreezeTimeout" -}}5m{{- end -}} +{{- define "k10.aggAuditPolicyFile" -}}agg-audit-policy.yaml{{- end -}} +{{- define "k10.siemAuditLogFilePath" -}}-{{- end -}} +{{- define "k10.siemAuditLogFileSize" -}}100{{- end -}} +{{- define "k10.kanisterToolsImageTag" -}}0.111.0{{- end -}} +{{- define "k10.disabledServicesEnvVar" -}}K10_DISABLED_SERVICES{{- end -}} +{{- define "k10.openShiftClientSecretEnvVar" -}}K10_OPENSHIFT_CLIENT_SECRET{{- end -}} +{{- define "k10.defaultK10DefaultPriorityClassName" -}}{{- end -}} +{{- define "k10.dexServiceAccountName" -}}k10-dex-k10-sa{{- end -}} +{{- define "k10.defaultCACertConfigMapName" -}}custom-ca-bundle-store{{- end -}} +{{- define "k10.gatewayPrefixVarName" -}}PREFIX_PATH{{- end -}} +{{- define "k10.gatewayGrafanaSvcVarName" -}}GRAFANA_SVC_NAME{{- end -}} +{{- define "k10.gatewayRequestHeadersVarName" -}}EXTAUTH_REQUEST_HEADERS{{- end -}} +{{- define "k10.gatewayAuthHeadersVarName" -}}EXTAUTH_AUTH_HEADERS{{- end -}} +{{- define "k10.gatewayPortVarName" -}}PORT{{- end -}} +{{- define "k10.gatewayEnableDex" -}}ENABLE_DEX{{- end -}} +{{- define "k10.gatewayTLSCertFile" -}}TLS_CRT_FILE{{- end -}} +{{- define "k10.gatewayTLSKeyFile" -}}TLS_KEY_FILE{{- end -}} +{{- define "k10.azureClientIDEnvVar" -}}AZURE_CLIENT_ID{{- end -}} +{{- define "k10.azureTenantIDEnvVar" -}}AZURE_TENANT_ID{{- end -}} +{{- define "k10.azureClientSecretEnvVar" -}}AZURE_CLIENT_SECRET{{- end -}} +{{- define "k10.oidcSecretName" -}}k10-oidc-auth{{- end -}} +{{- define "k10.oidcCustomerSecretName" -}}k10-oidc-auth-creds{{- end -}} +{{- define "k10.secretsDir" -}}/var/run/secrets/kasten.io{{- end -}} +{{- define "k10.sccNameEnvVar" -}}K10_SCC_NAME{{- end -}} +{{- define 
"k10.fluentbitEndpointEnvVar" -}}FLUENTBIT_ENDPOINT{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/_grafana.tpl b/charts/kasten/k10/7.0.1101/templates/_grafana.tpl new file mode 100644 index 0000000000..53ade64a0c --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/_grafana.tpl @@ -0,0 +1,18 @@ +{{/*** SELECTOR LABELS *** + NOTE: The selector labels here (`app` and `release`) are divergent from + the selector labels set by the upstream chart. This is intentional since a + Deployment's `spec.selector` is immutable and K10 has already been shipped + with these values. + + A change to these selector labels will mean that all customers must manually + delete the Grafana Deployment before upgrading, which is a situation we don't + want for our customers. + + Instead, the `app.kubernetes.io/name` and `app.kubernetes.io/instance` labels + are included in the `grafana.extraLabels` in: + `templates/{values}/grafana/values/grafana_values.tpl`. +*/}} +{{- define "grafana.selectorLabels" -}} +app: {{ include "grafana.name" . 
}} +release: {{ .Release.Name }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/_helpers.tpl b/charts/kasten/k10/7.0.1101/templates/_helpers.tpl new file mode 100644 index 0000000000..5be5dd5f70 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/_helpers.tpl @@ -0,0 +1,1524 @@ +{{/* Returns a string of the disabled K10 services */}} +{{- define "get.disabledServices" -}} + {{/* Append services to this list based on helm values */}} + {{- $disabledServices := list -}} + + {{- if .Values.reporting -}} + {{- if eq .Values.reporting.pdfReports false -}} + {{- $disabledServices = append $disabledServices "admin" -}} + {{- end -}} + {{- end -}} + + {{- if eq .Values.logging.internal false -}} + {{- $disabledServices = append $disabledServices "logging" -}} + {{- end -}} + + {{- $disabledServices | join " " -}} +{{- end -}} + +{{/* Removes disabled service names from the provided string of service names */}} +{{- define "removeDisabledServicesFromList" -}} + {{- $disabledServices := include "get.disabledServices" .main | splitList " " -}} + {{- $services := .list | splitList " " -}} + + {{- range $disabledServices -}} + {{- $services = without $services . -}} + {{- end -}} + + {{- $services | join " " -}} +{{- end -}} + +{{/* Removes keys with disabled service names from the provided YAML string */}} +{{- define "removeDisabledServicesFromYaml" -}} + {{- $disabledServices := include "get.disabledServices" .main | splitList " " -}} + {{- $services := .yaml | fromYaml -}} + + {{- range $disabledServices -}} + {{- $services = unset $services . -}} + {{- end -}} + + {{- if gt (len $services) 0 -}} + {{- $services | toYaml | trim | nindent 0}} + {{- else -}} + {{- print "" -}} + {{- end -}} +{{- end -}} + +{{/* Returns k10.additionalServices string with disabled services removed */}} +{{- define "get.enabledAdditionalServices" -}} + {{- $list := include "k10.additionalServices" . -}} + {{- dict "main" . 
"list" $list | include "removeDisabledServicesFromList" -}} +{{- end -}} + +{{/* Returns k10.restServices string with disabled services removed */}} +{{- define "get.enabledRestServices" -}} + {{- $list := include "k10.restServices" . -}} + {{- dict "main" . "list" $list | include "removeDisabledServicesFromList" -}} +{{- end -}} + +{{/* Returns k10.services string with disabled services removed */}} +{{- define "get.enabledServices" -}} + {{- $list := include "k10.services" . -}} + {{- dict "main" . "list" $list | include "removeDisabledServicesFromList" -}} +{{- end -}} + +{{/* Returns k10.exposedServices string with disabled services removed */}} +{{- define "get.enabledExposedServices" -}} + {{- $list := include "k10.exposedServices" . -}} + {{- dict "main" . "list" $list | include "removeDisabledServicesFromList" -}} +{{- end -}} + +{{/* Returns k10.statelessServices string with disabled services removed */}} +{{- define "get.enabledStatelessServices" -}} + {{- $list := include "k10.statelessServices" . -}} + {{- dict "main" . "list" $list | include "removeDisabledServicesFromList" -}} +{{- end -}} + +{{/* Returns k10.colocatedServices string with disabled services removed */}} +{{- define "get.enabledColocatedServices" -}} + {{- $yaml := include "k10.colocatedServices" . -}} + {{- dict "main" . "yaml" $yaml | include "removeDisabledServicesFromYaml" -}} +{{- end -}} + +{{/* Returns YAML of primary services mapped to their secondary services */}} +{{/* The content will only have services which are not disabled */}} +{{- define "get.enabledColocatedServiceLookup" -}} + {{- $colocatedServicesLookup := include "k10.colocatedServiceLookup" . | fromYaml -}} + {{- $disabledServices := include "get.disabledServices" . 
| splitList " " -}} + {{- $filteredLookup := dict -}} + + {{/* construct filtered lookup */}} + {{- range $primaryService, $secondaryServices := $colocatedServicesLookup -}} + {{/* proceed only if primary service is enabled */}} + {{- if not (has $primaryService $disabledServices) -}} + {{/* filter out secondary services */}} + {{- range $disabledServices -}} + {{- $secondaryServices = without $secondaryServices . -}} + {{- end -}} + {{/* add entry for primary service only if secondary services exist */}} + {{- if gt (len $secondaryServices) 0 -}} + {{- $filteredLookup = set $filteredLookup $primaryService $secondaryServices -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{/* return filtered lookup */}} + {{- if gt (len $filteredLookup) 0 -}} + {{- $filteredLookup | toYaml | trim | nindent 0 -}} + {{- else -}} + {{- print "" -}} + {{- end -}} +{{- end -}} + +{{- define "k10.capabilities" -}} + {{- /* Internal capabilities enabled by other Helm values are added here */ -}} + {{- $internal_capabilities := list "gateway" -}} + + {{- /* Multi-cluster */ -}} + {{- if eq .Values.multicluster.enabled true -}} + {{- $internal_capabilities = append $internal_capabilities "mc" -}} + {{- end -}} + + {{- /* FIPS */ -}} + {{- if .Values.fips.enabled -}} + {{- $internal_capabilities = append $internal_capabilities "fips.strict" -}} + {{- $internal_capabilities = append $internal_capabilities "crypto.k10.v2" -}} + {{- $internal_capabilities = append $internal_capabilities "crypto.storagerepository.v2" -}} + {{- $internal_capabilities = append $internal_capabilities "crypto.vbr.v2" -}} + {{- $internal_capabilities = append $internal_capabilities "gateway" -}} + {{- end -}} + + {{- concat $internal_capabilities (.Values.capabilities | default list) | join " " -}} +{{- end -}} + +{{- define "k10.capabilities_mask" -}} + {{- /* Internal capabilities masked by other Helm values are added here */ -}} + {{- $internal_capabilities_mask := list -}} + + {{- /* Multi-cluster */ -}} + {{- 
if eq .Values.multicluster.enabled false -}} + {{- $internal_capabilities_mask = append $internal_capabilities_mask "mc" -}} + {{- end -}} + + {{- concat $internal_capabilities_mask (.Values.capabilitiesMask | default list) | join " " -}} +{{- end -}} + +{{/* + k10.capability checks whether a given capability is enabled + + For example: + + include "k10.capability" (. | merge (dict "capability" "SOME.CAPABILITY")) +*/}} +{{- define "k10.capability" -}} + {{- $capabilities := dict -}} + {{- range $capability := include "k10.capabilities" . | splitList " " -}} + {{- $_ := set $capabilities $capability "enabled" -}} + {{- end -}} + {{- range $capability := include "k10.capabilities_mask" . | splitList " " -}} + {{- $_ := unset $capabilities $capability -}} + {{- end -}} + + {{- index $capabilities .capability | default "" -}} +{{- end -}} + +{{/* + k10.capability.gateway checks whether the "gateway" capability is enabled +*/}} +{{- define "k10.capability.gateway" -}} + {{- include "k10.capability" (. | merge (dict "capability" "gateway")) -}} +{{- end -}} + +{{/* Check if basic auth is needed */}} +{{- define "basicauth.check" -}} + {{- if .Values.auth.basicAuth.enabled }} + {{- print true }} + {{- end -}} {{/* End of check for auth.basicAuth.enabled */}} +{{- end -}} + +{{/* +Check if trusted root CA certificate related configmap settings +have been configured +*/}} +{{- define "check.cacertconfigmap" -}} +{{- if .Values.cacertconfigmap.name -}} +{{- print true -}} +{{- else -}} +{{- print false -}} +{{- end -}} +{{- end -}} + +{{/* +Check if OCP CA certificates automatic extraction is enabled +*/}} +{{- define "k10.ocpcacertsautoextraction" -}} + {{- if and .Values.auth.openshift.enabled .Values.auth.openshift.caCertsAutoExtraction -}} + {{- true -}} + {{- end -}} +{{- end -}} + +{{/* +Get the name of the CA certificate related configmap +*/}} +{{- define "k10.cacertconfigmapname" -}} + {{- if eq (include "check.cacertconfigmap" .) 
"true" -}} + {{- .Values.cacertconfigmap.name -}} + {{- else if (include "k10.ocpcacertsautoextraction" .) -}} + {{- include "k10.defaultCACertConfigMapName" . -}} + {{- end -}} +{{- end -}} + +{{/* Custom pod labels applied globally to all pods */}} +{{- define "k10.globalPodLabels" -}} + {{ include "k10.validateGlobalAndKanisterLabelsAnnotations" . }} + {{- with .Values.global.podLabels -}} + {{- toYaml . -}} + {{- end -}} +{{- end -}} + +{{/* Custom pod labels applied globally to all pods in a json format */}} +{{- define "k10.globalPodLabelsJson" -}} + {{- if .Values.global.podLabels -}} + {{- toJson .Values.global.podLabels -}} + {{- end -}} +{{- end -}} + +{{/* Custom pod annotations applied globally to all pods */}} +{{- define "k10.globalPodAnnotations" -}} + {{ include "k10.validateGlobalAndKanisterLabelsAnnotations" . }} + {{- with .Values.global.podAnnotations -}} + {{- toYaml . -}} + {{- end -}} +{{- end -}} + +{{/* Custom pod annotations applied globally to all pods in a json format */}} +{{- define "k10.globalPodAnnotationsJson" -}} + {{- if .Values.global.podAnnotations -}} + {{- toJson .Values.global.podAnnotations -}} + {{- end -}} +{{- end -}} + +{{/* +Validate and fail if the labels/annotations are configured at global level (global.podLabels) +as well as kanister helm field level (kanisterPodCustomLabels) +*/}} +{{- define "k10.validateGlobalAndKanisterLabelsAnnotations" -}} + {{- if and .Values.global.podAnnotations .Values.kanisterPodCustomAnnotations -}} + {{- fail "The `kanisterPodCustomAnnotations` field has been deprecated and cannot be used simultaneously with `global.podAnnotations`. Please use `global.podAnnotations` to set annotations to all the Kasten pods globally." }} + {{- end -}} + {{- if and .Values.global.podLabels .Values.kanisterPodCustomLabels -}} + {{- fail "The `kanisterPodCustomLabels` field has been deprecated and cannot be used simultaneously with `global.podLabels`. 
Please use `global.podLabels` to set labels to all the Kasten pods globally." }} + {{- end -}} +{{- end -}} + +{{/* +Check if the auth options are implemented using Dex +*/}} +{{- define "check.dexAuth" -}} +{{- if or .Values.auth.openshift.enabled .Values.auth.ldap.enabled -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* Check the only 1 auth is specified */}} +{{- define "singleAuth.check" -}} +{{- $count := dict "count" (int 0) -}} +{{- $authList := list .Values.auth.basicAuth.enabled .Values.auth.tokenAuth.enabled .Values.auth.oidcAuth.enabled .Values.auth.openshift.enabled .Values.auth.ldap.enabled -}} +{{- range $i, $val := $authList }} +{{ if $val }} +{{ $c := add1 $count.count | set $count "count" }} +{{ if gt $count.count 1 }} +{{- fail "Multiple auth types were selected. Only one type can be enabled." }} +{{ end }} +{{ end }} +{{- end }} +{{- end -}}{{/* Check the only 1 auth is specified */}} + +{{/* Check if Auth is enabled */}} +{{- define "authEnabled.check" -}} +{{- $count := dict "count" (int 0) -}} +{{- $authList := list .Values.auth.basicAuth.enabled .Values.auth.tokenAuth.enabled .Values.auth.oidcAuth.enabled .Values.auth.openshift.enabled .Values.auth.ldap.enabled -}} +{{- range $i, $val := $authList }} +{{ if $val }} +{{ $c := add1 $count.count | set $count "count" }} +{{ end }} +{{- end }} +{{- if eq $count.count 0}} + {{- fail "Auth is required to expose access to K10." 
}} +{{- end }} +{{- end -}}{{/*end of check */}} + +{{/* Return ingress class name annotation */}} +{{- define "ingressClassAnnotation" -}} +{{- if .Values.ingress.class -}} +kubernetes.io/ingress.class: {{ .Values.ingress.class | quote }} +{{- end -}} +{{- end -}} + +{{/* Return ingress class name in spec */}} +{{- define "specIngressClassName" -}} +{{- if and .Values.ingress.class (semverCompare ">= 1.27-0" .Capabilities.KubeVersion.Version) -}} +ingressClassName: {{ .Values.ingress.class }} +{{- end -}} +{{- end -}} + +{{/* Helm required labels */}} +{{- define "helm.labels" -}} +heritage: {{ .Release.Service }} +helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }} +app.kubernetes.io/name: {{ .Chart.Name }} +app.kubernetes.io/instance: {{ .Release.Name }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{ include "k10.common.matchLabels" . }} +{{- end -}} + +{{- define "k10.common.matchLabels" -}} +app: {{ .Chart.Name }} +release: {{ .Release.Name }} +{{- end -}} + +{{- define "k10.defaultRBACLabels" -}} +k10.kasten.io/default-rbac-object: "true" +{{- end -}} + +{{/* Expand the name of the chart. */}} +{{- define "name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create the name of the service account to use +*/}} +{{- define "serviceAccountName" -}} +{{- if and .Values.metering.awsMarketplace ( not .Values.serviceAccount.name ) -}} + {{ print "k10-metering" }} +{{- else if .Values.serviceAccount.create -}} + {{ default (include "fullname" .) 
.Values.serviceAccount.name }} +{{- else -}} + {{ default "default" .Values.serviceAccount.name }} +{{- end -}} +{{- end -}} + +{{/* +Create the name of the metering service account to use +*/}} +{{- define "meteringServiceAccountName" -}} +{{- if and .Values.metering.awsManagedLicense ( not .Values.serviceAccount.name ) ( not .Values.metering.serviceAccount.name ) ( not .Values.metering.licenseConfigSecretName ) -}} + {{ print "k10-metering" }} +{{- else -}} + {{ default (include "serviceAccountName" .) .Values.metering.serviceAccount.name }} +{{- end -}} +{{- end -}} + +{{/* +Prints annotations based on .Values.fqdn.type +*/}} +{{- define "dnsAnnotations" -}} +{{- if .Values.externalGateway.fqdn.name -}} +{{- if eq "route53-mapper" ( default "" .Values.externalGateway.fqdn.type) }} +domainName: {{ .Values.externalGateway.fqdn.name | quote }} +{{- end }} +{{- if eq "external-dns" (default "" .Values.externalGateway.fqdn.type) }} +external-dns.alpha.kubernetes.io/hostname: {{ .Values.externalGateway.fqdn.name | quote }} +{{- end }} +{{- end -}} +{{- end -}} + +{{/* +Prometheus scrape config template for k10 services +*/}} +{{- define "k10.prometheusScrape" -}} +{{- $cluster_domain := "" -}} +{{- with .main.Values.cluster.domainName -}} + {{- $cluster_domain = printf ".%s" . 
-}} +{{- end -}} +{{- $admin_port := default 8877 .main.Values.service.gatewayAdminPort -}} +- job_name: {{ .k10service }} + metrics_path: /metrics + {{- if eq "aggregatedapis" .k10service }} + scheme: https + tls_config: + insecure_skip_verify: true + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + {{- else }} + scheme: http + {{- end }} + static_configs: + - targets: + {{- if eq "gateway" .k10service }} + - {{ .k10service }}-admin.{{ .main.Release.Namespace }}.svc{{ $cluster_domain }}:{{ $admin_port }} + {{- else if eq "aggregatedapis" .k10service }} + - {{ .k10service }}-svc.{{ .main.Release.Namespace }}.svc{{ $cluster_domain }}:443 + {{- else }} + {{- $service := default .k10service (index (include "get.enabledColocatedServices" . | fromYaml) .k10service).primary }} + {{- $port := default .main.Values.service.externalPort (index (include "get.enabledColocatedServices" . | fromYaml) .k10service).port }} + - {{ $service }}-svc.{{ .main.Release.Namespace }}.svc{{ $cluster_domain }}:{{ $port }} + {{- end }} + labels: + application: {{ .main.Release.Name }} + service: {{ .k10service }} +{{- end -}} + +{{/* +Prometheus target config template for k10 services +*/}} +{{- define "k10.prometheusTargetConfig" -}} +{{- $cluster_domain := "" -}} +{{- with .main.Values.cluster.domainName -}} + {{- $cluster_domain = printf ".%s" . 
-}} +{{- end -}} +{{- $admin_port := default 8877 .main.Values.service.gatewayAdminPort | toString -}} +- service: {{ .k10service }} + metricsPath: /metrics + {{- if eq "aggregatedapis" .k10service }} + scheme: https + tls_config: + insecure_skip_verify: true + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + {{- else }} + scheme: http + {{- end }} + {{- $serviceFqdn := "" }} + {{- $servicePort := "" }} + {{- if eq "gateway" .k10service -}} + {{- $serviceFqdn = printf "%s-admin.%s.svc%s" .k10service .main.Release.Namespace $cluster_domain -}} + {{- $servicePort = $admin_port -}} + {{- else if eq "aggregatedapis" .k10service -}} + {{- $serviceFqdn = printf "%s-svc.%s.svc%s" .k10service .main.Release.Namespace $cluster_domain -}} + {{- $servicePort = "443" -}} + {{- else -}} + {{- $service := default .k10service (index (include "get.enabledColocatedServices" .main | fromYaml) .k10service).primary -}} + {{- $port := default .main.Values.service.externalPort (index (include "get.enabledColocatedServices" .main | fromYaml) .k10service).port | toString -}} + {{- $serviceFqdn = printf "%s-svc.%s.svc%s" $service .main.Release.Namespace $cluster_domain -}} + {{- $servicePort = $port -}} + {{- end }} + fqdn: {{ $serviceFqdn }} + port: {{ $servicePort }} + application: {{ .main.Release.Name }} +{{- end -}} + +{{/* +Expands the name of the Prometheus chart. It is equivalent to what the +"prometheus.name" template does. It is needed because the referenced values in a +template are relative to where/when the template is called from, and not where +the template is defined. This means that the value of .Chart.Name and +.Values.nameOverride are different depending on whether the template is called +from within the Prometheus chart or the K10 chart. 
+*/}} +{{- define "k10.prometheus.name" -}} +{{- default "prometheus" .Values.prometheus.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Expands the name of the Prometheus service created to expose the prometheus server. +*/}} +{{- define "k10.prometheus.service.name" -}} +{{- default (printf "%s-%s-%s" .Release.Name "prometheus" .Values.prometheus.server.name) .Values.prometheus.server.fullnameOverride }} +{{- end -}} + +{{/* +Checks if EULA is accepted via cmd +Enforces eula.company and eula.email as required fields +returns configMap fields +*/}} +{{- define "k10.eula.fields" -}} +{{- if .Values.eula.accept -}} +accepted: "true" +company: {{ required "eula.company is required field if eula is accepted" .Values.eula.company }} +email: {{ required "eula.email is required field if eula is accepted" .Values.eula.email }} +{{- else -}} +accepted: "" +company: "" +email: "" +{{- end }} +{{- end -}} + +{{/* +Helper to determine the API Domain +*/}} +{{- define "apiDomain" -}} +{{- if .Values.useNamespacedAPI -}} +kio.{{- replace "-" "." .Release.Namespace -}} +{{- else -}} +kio.kasten.io +{{- end -}} +{{- end -}} + +{{/* +Get dex image, if user wants to +install certified version of upstream +images or not +*/}} + +{{- define "get.dexImage" }} + {{- (get .Values.global.images (include "dex.dexImageName" .)) | default (include "dex.dexImage" .) }} +{{- end }} + +{{- define "dex.dexImage" -}} + {{- printf "%s:%s" (include "dex.dexImageRepo" .) (include "dex.dexImageTag" .) }} +{{- end -}} + +{{- define "dex.dexImageRepo" -}} + {{- if .Values.global.airgapped.repository }} + {{- printf "%s/%s" .Values.global.airgapped.repository (include "dex.dexImageName" .) }} + {{- else if .Values.global.azMarketPlace }} + {{- printf "%s/%s" .Values.global.azure.images.dex.registry .Values.global.azure.images.dex.image }} + {{- else }} + {{- printf "%s/%s" .Values.global.image.registry (include "dex.dexImageName" .) 
}} + {{- end }} +{{- end -}} + +{{- define "dex.dexImageName" -}} + {{- printf "dex" }} +{{- end -}} + +{{- define "dex.dexImageTag" -}} + {{- if .Values.global.azMarketPlace }} + {{- print .Values.global.azure.images.dex.tag }} + {{- else }} + {{- .Values.global.image.tag | default .Chart.AppVersion }} + {{- end -}} +{{- end -}} + +{{/* + Get dex frontend directory (in the dex image) +*/}} +{{- define "k10.dexFrontendDir" -}} + {{- $dexImageDict := default $.Values.dexImage dict }} + {{- index $dexImageDict "frontendDir" | default "/srv/dex/web" }} +{{- end -}} + +{{/* +Get the k10tools image. +*/}} +{{- define "k10.k10ToolsImage" -}} + {{- (get .Values.global.images (include "k10.k10ToolsImageName" .)) | default (include "k10.k10ToolsDefaultImage" .) -}} +{{- end -}} + +{{- define "k10.k10ToolsDefaultImage" -}} + {{- printf "%s:%s" (include "k10.k10ToolsImageRepo" .) (include "k10.k10ToolsImageTag" .) -}} +{{- end -}} + +{{- define "k10.k10ToolsImageRepo" -}} + {{- if .Values.global.airgapped.repository -}} + {{- printf "%s/%s" .Values.global.airgapped.repository (include "k10.k10ToolsImageName" .) -}} + {{- else if .Values.global.azMarketPlace -}} + {{- printf "%s/%s" .Values.global.azure.images.k10tools.registry .Values.global.azure.images.k10tools.image -}} + {{- else -}} + {{- printf "%s/%s" .Values.global.image.registry (include "k10.k10ToolsImageName" .) -}} + {{- end -}} +{{- end -}} + +{{- define "k10.k10ToolsImageName" -}} + {{- print "k10tools" -}} +{{- end -}} + +{{- define "k10.k10ToolsImageTag" -}} + {{- if .Values.global.azMarketPlace -}} + {{- print .Values.global.azure.images.k10tools.tag -}} + {{- else -}} + {{- include "get.k10ImageTag" . -}} + {{- end -}} +{{- end -}} + +{{/* +Get the emissary image. +*/}} +{{- define "get.emissaryImage" }} + {{- (get .Values.global.images (include "k10.emissaryImageName" .)) | default (include "k10.emissaryImage" .) 
}} +{{- end }} + +{{- define "k10.emissaryImage" -}} + {{- printf "%s:%s" (include "k10.emissaryImageRepo" .) (include "k10.emissaryImageTag" .) }} +{{- end -}} + +{{- define "k10.emissaryImageRepo" -}} + {{- if .Values.global.airgapped.repository }} + {{- printf "%s/%s" .Values.global.airgapped.repository (include "k10.emissaryImageName" .) }} + {{- else if .Values.global.azMarketPlace }} + {{- printf "%s/%s" .Values.global.azure.images.emissary.registry .Values.global.azure.images.emissary.image }} + {{- else }} + {{- printf "%s/%s" .Values.global.image.registry (include "k10.emissaryImageName" .) }} + {{- end }} +{{- end -}} + +{{- define "k10.emissaryImageName" -}} + {{- printf "emissary" }} +{{- end -}} + +{{- define "k10.emissaryImageTag" -}} + {{- if .Values.global.azMarketPlace }} + {{- print .Values.global.azure.images.emissary.tag }} + {{- else }} + {{- include "get.k10ImageTag" . }} + {{- end }} +{{- end -}} + +{{/* +Get the datamover image. +*/}} +{{- define "get.datamoverImage" }} + {{- (get .Values.global.images (include "k10.datamoverImageName" .)) | default (include "k10.datamoverImage" .) }} +{{- end }} + +{{- define "k10.datamoverImage" -}} + {{- printf "%s:%s" (include "k10.datamoverImageRepo" .) (include "k10.datamoverImageTag" .) }} +{{- end -}} + +{{- define "k10.datamoverImageRepo" -}} + {{- if .Values.global.airgapped.repository }} + {{- printf "%s/%s" .Values.global.airgapped.repository (include "k10.datamoverImageName" .) }} + {{- else if .Values.global.azMarketPlace }} + {{- printf "%s/%s" .Values.global.azure.images.datamover.registry .Values.global.azure.images.datamover.image }} + {{- else }} + {{- printf "%s/%s" .Values.global.image.registry (include "k10.datamoverImageName" .) 
}} + {{- end }} +{{- end -}} + +{{- define "k10.datamoverImageName" -}} + {{- printf "datamover" }} +{{- end -}} + +{{- define "k10.datamoverImageTag" -}} + {{- if .Values.global.azMarketPlace }} + {{- print .Values.global.azure.images.datamover.tag }} + {{- else }} + {{- include "get.k10ImageTag" . }} + {{- end }} +{{- end -}} + +{{/* +Get the metric-sidecar image. +*/}} +{{- define "get.metricSidecarImage" }} + {{- (get .Values.global.images (include "k10.metricSidecarImageName" .)) | default (include "k10.metricSidecarImage" .) }} +{{- end }} + +{{- define "k10.metricSidecarImage" -}} + {{- printf "%s:%s" (include "k10.metricSidecarImageRepo" .) (include "k10.metricSidecarImageTag" .) }} +{{- end -}} + +{{- define "k10.metricSidecarImageRepo" -}} + {{- if .Values.global.airgapped.repository }} + {{- printf "%s/%s" .Values.global.airgapped.repository (include "k10.metricSidecarImageName" .) }} + {{- else if .Values.global.azMarketPlace }} + {{- printf "%s/%s" (.Values.global.azure.images.metricsidecar.registry) (.Values.global.azure.images.metricsidecar.image) }} + {{- else }} + {{- printf "%s/%s" .Values.global.image.registry (include "k10.metricSidecarImageName" .) }} + {{- end }} +{{- end -}} + +{{- define "k10.metricSidecarImageName" -}} + {{- printf "metric-sidecar" }} +{{- end -}} + +{{- define "k10.metricSidecarImageTag" -}} + {{- if .Values.global.azMarketPlace }} + {{- print .Values.global.azure.images.metricsidecar.tag }} + {{- else }} + {{- include "get.k10ImageTag" . 
}} + {{- end }} +{{- end -}} + +{{/* +Check if AWS creds are specified +*/}} +{{- define "check.awscreds" -}} +{{- if or .Values.secrets.awsAccessKeyId .Values.secrets.awsSecretAccessKey -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{- define "check.awsSecretName" -}} +{{- if .Values.secrets.awsClientSecretName -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Check if Azure MSI with Default ID is specified +*/}} +{{- define "check.azureMSIWithDefaultID" -}} +{{- if .Values.azure.useDefaultMSI -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Check if Azure MSI with a specific Client ID is specified +*/}} +{{- define "check.azureMSIWithClientID" -}} +{{- if and (not (or .Values.secrets.azureClientSecret .Values.secrets.azureTenantId)) .Values.secrets.azureClientId -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Check if Azure ClientSecret creds are specified +*/}} +{{- define "check.azureClientSecretCreds" -}} +{{- if and (and .Values.secrets.azureTenantId .Values.secrets.azureClientId) .Values.secrets.azureClientSecret -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Checks and enforces only 1 set of azure creds is specified +*/}} +{{- define "enforce.singleazurecreds" -}} +{{ if and (eq (include "check.azureMSIWithClientID" .) "true") (eq (include "check.azureMSIWithDefaultID" .) "true") }} +{{- fail "useDefaultMSI is set to true, but an additional ClientID is also provided. Please choose one." }} +{{- end -}} +{{ if and ( or (eq (include "check.azureClientSecretCreds" .) "true") (eq (include "check.azuresecret" .) "true" )) (or (eq (include "check.azureMSIWithClientID" .) "true") (eq (include "check.azureMSIWithDefaultID" .) "true")) }} +{{- fail "Both Azure ClientSecret and Managed Identity creds are available, but only one is allowed. Please choose one." }} +{{- end -}} +{{- end -}} + +{{/* +Get the kanister-tools image. 
+*/}} +{{- define "get.kanisterToolsImage" -}} + {{- (get .Values.global.images (include "kan.kanisterToolsImageName" .)) | default (include "kan.kanisterToolsImage" .) }} +{{- end }} + +{{- define "kan.kanisterToolsImage" -}} + {{- printf "%s:%s" (include "kan.kanisterToolsImageRepo" .) (include "kan.kanisterToolsImageTag" .) }} +{{- end -}} + +{{- define "kan.kanisterToolsImageRepo" -}} + {{- if .Values.global.airgapped.repository }} + {{- printf "%s/%s" .Values.global.airgapped.repository (include "kan.kanisterToolsImageName" .) }} + {{- else if .Values.global.azMarketPlace }} + {{- printf "%s/%s" .Values.global.azure.images.kanistertools.registry .Values.global.azure.images.kanistertools.image }} + {{- else }} + {{- printf "%s/%s" .Values.global.image.registry (include "kan.kanisterToolsImageName" .) }} + {{- end }} +{{- end -}} + +{{- define "kan.kanisterToolsImageName" -}} + {{- printf "kanister-tools" }} +{{- end -}} + +{{- define "kan.kanisterToolsImageTag" -}} + {{- if .Values.global.azMarketPlace }} + {{- print .Values.global.azure.images.kanistertools.tag }} + {{- else }} + {{- include "get.k10ImageTag" . }} + {{- end }} +{{- end -}} + +{{/* +Check if Google Workload Identity Federation is enabled +*/}} +{{- define "check.gwifenabled" -}} +{{- if .Values.google.workloadIdentityFederation.enabled -}} +{{- print true -}} +{{- end -}} +{{- end -}} + + +{{/* +Check if Google Workload Identity Federation Identity Provider is set +*/}} +{{- define "check.gwifidptype" -}} +{{- if .Values.google.workloadIdentityFederation.idp.type -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Fail if Google Workload Identity Federation is enabled but no Identity Provider is set +*/}} +{{- define "validate.gwif.idp.type" -}} +{{- if and (eq (include "check.gwifenabled" .) "true") (ne (include "check.gwifidptype" .) "true") -}} + {{- fail "Google Workload Federation is enabled but helm flag for idp type is missing. 
Please set helm value google.workloadIdentityFederation.idp.type" -}} +{{- end -}} +{{- end -}} + +{{/* +Check if K8S Bound Service Account Token (aka Projected Service Account Token) is needed, +which is when GWIF is enabled and the IdP is kubernetes +*/}} +{{- define "check.projectSAToken" -}} +{{- if and (eq (include "check.gwifenabled" .) "true") (eq .Values.google.workloadIdentityFederation.idp.type "kubernetes") -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Check if the audience that the bound service account token is intended for is set +*/}} +{{- define "check.gwifidpaud" -}} +{{- if .Values.google.workloadIdentityFederation.idp.aud -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Fail if Service Account token projection is expected but no intended Audience is set +*/}} +{{- define "validate.gwif.idp.aud" -}} +{{- if and (eq (include "check.projectSAToken" .) "true") (ne (include "check.gwifidpaud" .) "true") -}} + {{- fail "Kubernetes is set as the Identity Provider but an intended Audience is missing. Please set helm value google.workloadIdentityFederation.idp.aud" -}} +{{- end -}} +{{- end -}} + + +{{/* +Check if Google creds are specified +*/}} +{{- define "check.googlecreds" -}} +{{- if .Values.secrets.googleApiKey -}} + {{- if eq (include "check.isBase64" .Values.secrets.googleApiKey) "false" -}} + {{- fail "secrets.googleApiKey must be base64 encoded" -}} + {{- end -}} + {{- print true -}} +{{- end -}} +{{- end -}} + +{{- define "check.googleCredsSecret" -}} +{{- if .Values.secrets.googleClientSecretName -}} + {{- print true -}} +{{- end -}} +{{- end -}} + +{{- define "check.googleCredsOrSecret" -}} +{{- if or (eq (include "check.googlecreds" .) "true") (eq (include "check.googleCredsSecret" .) 
"true")}} + {{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Check if Google Project ID is not set without Google API Key +*/}} +{{- define "check.googleproject" -}} +{{- if .Values.secrets.googleProjectId -}} + {{- if not .Values.secrets.googleApiKey -}} + {{- print false -}} + {{- else -}} + {{- print true -}} + {{- end -}} +{{- else -}} + {{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Check if Azure creds are specified +*/}} +{{- define "check.azurecreds" -}} +{{- if or (eq (include "check.azureClientSecretCreds" .) "true") ( or (eq (include "check.azureMSIWithClientID" .) "true") (eq (include "check.azureMSIWithDefaultID" .) "true")) -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{- define "check.azuresecret" -}} +{{- if .Values.secrets.azureClientSecretName }} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Check if Vsphere creds are specified +*/}} +{{- define "check.vspherecreds" -}} +{{- if or (or .Values.secrets.vsphereEndpoint .Values.secrets.vsphereUsername) .Values.secrets.vspherePassword -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{- define "check.vsphereClientSecret" -}} +{{- if .Values.secrets.vsphereClientSecretName -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Check if Vault token secret creds are specified +*/}} +{{- define "check.vaulttokenauth" -}} +{{- if .Values.vault.secretName -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Check if K8s role is specified +*/}} +{{- define "check.vaultk8sauth" -}} +{{- if .Values.vault.role -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{/* +Check if Vault creds for token or k8s auth are specified +*/}} +{{- define "check.vaultcreds" -}} +{{- if or (eq (include "check.vaulttokenauth" .) "true") (eq (include "check.vaultk8sauth" .) 
"true") -}} +{{- print true -}} +{{- end -}} +{{- end -}} + + +{{/* +Checks and enforces only 1 set of cloud creds is specified +*/}} +{{- define "enforce.singlecloudcreds" -}} +{{- $count := dict "count" (int 0) -}} +{{- $main := . -}} +{{- range $ind, $cloud_provider := include "k10.cloudProviders" . | splitList " " }} +{{ if eq (include (printf "check.%screds" $cloud_provider) $main) "true" }} +{{ $c := add1 $count.count | set $count "count" }} +{{ if gt $count.count 1 }} +{{- fail "Credentials for different cloud providers were provided but only one is allowed. Please verify your .secrets.* values." }} +{{ end }} +{{ end }} +{{- end }} +{{- end -}} + +{{/* +Converts .Values.features into k10-features: map[string]: "value" +*/}} +{{- define "k10.features" -}} +{{ range $n, $v := .Values.features }} +{{ $n }}: {{ $v | quote -}} +{{ end }} +{{- end -}} + +{{/* +Checks if string is base64 encoded +*/}} +{{- define "check.isBase64" -}} +{{- not (. | b64dec | contains "illegal base64 data") -}} +{{- end -}} + +{{/* +Returns a license base64 either from file or from values +or prints it for awsmarketplace or awsManagedLicense +*/}} +{{- define "k10.getlicense" -}} +{{- if .Values.metering.awsMarketplace -}} + {{- print 
"Y3VzdG9tZXJOYW1lOiBhd3MtbWFya2V0cGxhY2UKZGF0ZUVuZDogJzIxMDAtMDEtMDFUMDA6MDA6MDAuMDAwWicKZGF0ZVN0YXJ0OiAnMjAxOC0wOC0wMVQwMDowMDowMC4wMDBaJwpmZWF0dXJlczoKICBjbG91ZE1ldGVyaW5nOiBhd3MKaWQ6IGF3cy1ta3QtNWMxMDlmZDUtYWI0Yy00YTE0LWJiY2QtNTg3MGU2Yzk0MzRiCnByb2R1Y3Q6IEsxMApyZXN0cmljdGlvbnM6IG51bGwKdmVyc2lvbjogdjEuMC4wCnNpZ25hdHVyZTogY3ZEdTNTWHljaTJoSmFpazR3THMwTk9mcTNFekYxQ1pqLzRJMUZVZlBXS0JETHpuZmh2eXFFOGUvMDZxNG9PNkRoVHFSQlY3VFNJMzVkQzJ4alllaGp3cWwxNHNKT3ZyVERKZXNFWVdyMVFxZGVGVjVDd21HczhHR0VzNGNTVk5JQXVseGNTUG9oZ2x2UlRJRm0wVWpUOEtKTzlSTHVyUGxyRjlGMnpnK0RvM2UyTmVnamZ6eTVuMUZtd24xWUNlbUd4anhFaks0djB3L2lqSGlwTGQzWVBVZUh5Vm9mZHRodGV0YmhSUGJBVnVTalkrQllnRklnSW9wUlhpYnpTaEMvbCs0eTFEYzcyTDZXNWM0eUxMWFB1SVFQU3FjUWRiYnlwQ1dYYjFOT3B3aWtKMkpsR0thMldScFE4ZUFJNU9WQktqZXpuZ3FPa0lRUC91RFBtSXFBPT0K" -}} +{{- else if or ( .Values.metering.awsManagedLicense ) ( .Values.metering.licenseConfigSecretName ) -}} + {{- print "Y3VzdG9tZXJOYW1lOiBhd3MtdG90ZW0KZGF0ZUVuZDogJzIxMDAtMDEtMDFUMDA6MDA6MDAuMDAwWicKZGF0ZVN0YXJ0OiAnMjAyMS0wOS0wMVQwMDowMDowMC4wMDBaJwpmZWF0dXJlczoKICBleHRlcm5hbExpY2Vuc2U6IGF3cwogIHByb2R1Y3RTS1U6IGI4YzgyMWQ5LWJmNDAtNDE4ZC1iYTBiLTgxMjBiZjc3ZThmOQogIGtleUZpbmdlcnByaW50OiBhd3M6Mjk0NDA2ODkxMzExOkFXUy9NYXJrZXRwbGFjZTppc3N1ZXItZmluZ2VycHJpbnQKaWQ6IGF3cy1leHQtMWUxMTVlZjMtM2YyMC00MTJlLTgzODItMmE1NWUxMTc1OTFlCnByb2R1Y3Q6IEsxMApyZXN0cmljdGlvbnM6CiAgbm9kZXM6ICczJwp2ZXJzaW9uOiB2MS4wLjAKc2lnbmF0dXJlOiBkeEtLN3pPUXdzZFBOY2I1NExzV2hvUXNWeWZSVDNHVHZ0VkRuR1Vvb2VxSGlwYStTY25HTjZSNmdmdmtWdTRQNHh4RmV1TFZQU3k2VnJYeExOTE1RZmh2NFpBSHVrYmFNd3E5UXhGNkpGSmVXbTdzQmdtTUVpWVJ2SnFZVFcyMlNoakZEU1RWejY5c2JBTXNFMUd0VTdXKytITGk0dnhybjVhYkd6RkRHZW5iRE5tcXJQT3dSa3JIdTlHTFQ1WmZTNDFUL0hBMjNZZnlsTU54MGFlK2t5TGZvZXNuK3FKQzdld2NPWjh4eE94bFRJR3RuWDZ4UU5DTk5iYjhSMm5XbmljNVd0OElEc2VDR3lLMEVVRW9YL09jNFhsWVVra3FGQ0xPdVhuWDMxeFZNZ1NFQnVEWExFd3Y3K2RlSmcvb0pMaW9EVHEvWUNuM0lnem9VR2NTMGc9PQo=" -}} +{{- else -}} + {{- $license := .Values.license -}} + {{- if eq (include "check.isBase64" $license) "false" -}} + {{- $license = 
$license | b64enc -}} + {{- end -}} + {{- print (default (.Files.Get "license") $license) -}} +{{- end -}} +{{- end -}} + +{{/* +Returns resource usage given a pod name and container name +*/}} +{{- define "k10.resource.request" -}} +{{- $resourceDefaultList := (include "k10.serviceResources" .main | fromYaml) }} +{{- $podName := .k10_service_pod_name }} +{{- $containerName := .k10_service_container_name }} +{{- $resourceValue := "" }} +{{- if (hasKey $resourceDefaultList $podName) }} + {{- $resourceValue = index (index $resourceDefaultList $podName) $containerName }} +{{- end }} +{{- if (hasKey .main.Values.resources $podName) }} + {{- if (hasKey (index .main.Values.resources $podName) $containerName) }} + {{- $resourceValue = index (index .main.Values.resources $podName) $containerName }} + {{- end }} +{{- end }} +{{- /* If no resource usage value was provided, do not include the resources section */}} +{{- /* This allows users to set unlimited resources by providing a service key that is empty (e.g. `--set resources.=`) */}} +{{- if $resourceValue }} +resources: +{{- $resourceValue | toYaml | trim | nindent 2 }} +{{- else if eq .main.Release.Namespace "default" }} +resources: + requests: + cpu: "0.01" +{{- end }} +{{- end -}} + +{{/* +Adds priorityClassName field according to helm values. 
+*/}} +{{- define "k10.priorityClassName" }} +{{- $deploymentName := .k10_deployment_name }} +{{- $defaultPriorityClassName := default "" .main.Values.defaultPriorityClassName }} +{{- $priorityClassName := $defaultPriorityClassName }} + +{{- if and (hasKey .main.Values "priorityClassName") (hasKey .main.Values.priorityClassName $deploymentName) }} + {{- $priorityClassName = index .main.Values.priorityClassName $deploymentName }} +{{- end -}} + +{{- if $priorityClassName }} +priorityClassName: {{ $priorityClassName }} +{{- end }} + +{{- end }}{{/* define "k10.priorityClassName" */}} + +{{- define "kanisterToolsResources" }} +{{- if .Values.genericVolumeSnapshot.resources.requests.memory }} +KanisterToolsMemoryRequests: {{ .Values.genericVolumeSnapshot.resources.requests.memory | quote }} +{{- end }} +{{- if .Values.genericVolumeSnapshot.resources.requests.cpu }} +KanisterToolsCPURequests: {{ .Values.genericVolumeSnapshot.resources.requests.cpu | quote }} +{{- end }} +{{- if .Values.genericVolumeSnapshot.resources.limits.memory }} +KanisterToolsMemoryLimits: {{ .Values.genericVolumeSnapshot.resources.limits.memory | quote }} +{{- end }} +{{- if .Values.genericVolumeSnapshot.resources.limits.cpu }} +KanisterToolsCPULimits: {{ .Values.genericVolumeSnapshot.resources.limits.cpu | quote }} +{{- end }} +{{- end }} + +{{- define "kanisterPodMetricSidecarResources" }} +{{- if .Values.kanisterPodMetricSidecar.resources.requests.memory }} +KanisterPodMetricSidecarMemoryRequest: {{ .Values.kanisterPodMetricSidecar.resources.requests.memory | quote }} +{{- end }} +{{- if .Values.kanisterPodMetricSidecar.resources.requests.cpu }} +KanisterPodMetricSidecarCPURequest: {{ .Values.kanisterPodMetricSidecar.resources.requests.cpu | quote }} +{{- end }} +{{- if .Values.kanisterPodMetricSidecar.resources.limits.memory }} +KanisterPodMetricSidecarMemoryLimit: {{ .Values.kanisterPodMetricSidecar.resources.limits.memory | quote }} +{{- end }} +{{- if 
.Values.kanisterPodMetricSidecar.resources.limits.cpu }} +KanisterPodMetricSidecarCPULimit: {{ .Values.kanisterPodMetricSidecar.resources.limits.cpu | quote }} +{{- end }} +{{- end }} + +{{- define "workerPodResourcesCRD" }} +{{- if .Values.workerPodCRDs.resourcesRequests.maxMemory }} +workerPodMaxMemoryRequest: {{ .Values.workerPodCRDs.resourcesRequests.maxMemory | quote }} +{{- end }} +{{- if .Values.workerPodCRDs.resourcesRequests.maxCPU }} +workerPodMaxCPURequest: {{ .Values.workerPodCRDs.resourcesRequests.maxCPU | quote }} +{{- end }} +{{- if .Values.workerPodCRDs.defaultActionPodSpec }} +workerPodDefaultAPSName: {{ .Values.workerPodCRDs.defaultActionPodSpec | quote }} +{{- end }} +{{- end }} + +{{- define "get.kanisterPodCustomLabels" -}} +{{- if .Values.kanisterPodCustomLabels }} +KanisterPodCustomLabels: {{ .Values.kanisterPodCustomLabels | quote }} +{{- end }} +{{- end }} + +{{- define "get.gvsActivationToken" }} +{{- if .Values.genericStorageBackup.token }} +GVSActivationToken: {{ .Values.genericStorageBackup.token | quote }} +{{- end }} +{{- end }} + +{{- define "get.kanisterPodCustomAnnotations" -}} +{{- if .Values.kanisterPodCustomAnnotations }} +KanisterPodCustomAnnotations: {{ .Values.kanisterPodCustomAnnotations | quote }} +{{- end }} +{{- end }} + +{{/* +Lookup and return only enabled colocated services +*/}} +{{- define "get.enabledColocatedSvcList" -}} +{{- $enabledColocatedSvcList := dict }} +{{- $colocatedList := include "get.enabledColocatedServiceLookup" . 
| fromYaml }} +{{- range $primary, $secondaryList := $colocatedList }} + {{- $enabledSecondarySvcList := list }} + {{- range $skip, $secondary := $secondaryList }} + {{- if or (not (hasKey $.Values.optionalColocatedServices $secondary)) ((index $.Values.optionalColocatedServices $secondary).enabled) }} + {{- $enabledSecondarySvcList = append $enabledSecondarySvcList $secondary }} + {{- end }} + {{- end }} + {{- if gt (len $enabledSecondarySvcList) 0 }} + {{- $enabledColocatedSvcList = set $enabledColocatedSvcList $primary $enabledSecondarySvcList }} + {{- end }} +{{- end }} +{{- $enabledColocatedSvcList | toYaml | trim | nindent 0}} +{{- end -}} + +{{- define "get.serviceContainersInPod" -}} +{{- $podService := .k10_service_pod }} +{{- $colocatedList := include "get.enabledColocatedServices" .main | fromYaml }} +{{- $colocatedLookupByPod := include "get.enabledColocatedSvcList" .main | fromYaml }} +{{- $containerList := list $podService }} +{{- if hasKey $colocatedLookupByPod $podService }} + {{- $containerList = concat $containerList (index $colocatedLookupByPod $podService)}} +{{- end }} +{{- $containerList | join " " }} +{{- end -}} + +{{- define "get.statefulRestServicesInPod" -}} +{{- $statefulRestSvcsInPod := list }} +{{- $podService := .k10_service_pod }} +{{- $containerList := (dict "main" .main "k10_service_pod" $podService | include "get.serviceContainersInPod" | splitList " ") }} +{{- if .main.Values.global.persistence.enabled }} + {{- range $skip, $containerInPod := $containerList }} + {{- $isRestService := has $containerInPod (include "get.enabledRestServices" $.main | splitList " ") }} + {{- $isStatelessService := has $containerInPod (include "get.enabledStatelessServices" $.main | splitList " ") }} + {{- if and $isRestService (not $isStatelessService) }} + {{- $statefulRestSvcsInPod = append $statefulRestSvcsInPod $containerInPod }} + {{- end }} + {{- end }} +{{- end }} +{{- $statefulRestSvcsInPod | join " " }} +{{- end -}} + +{{- define 
"k10.prefixPath" -}} + {{- if .Values.route.enabled -}} + /{{ .Values.route.path | default .Release.Name | trimPrefix "/" | trimSuffix "/" }} + {{- else if .Values.ingress.create -}} + /{{ .Values.ingress.urlPath | default .Release.Name | trimPrefix "/" | trimSuffix "/" }} + {{- else -}} + /{{ .Release.Name }} + {{- end -}} +{{- end -}} + +{{/* +Check if encryption keys are specified +*/}} +{{- define "check.primaryKey" -}} +{{- if (or .Values.encryption.primaryKey.awsCmkKeyId .Values.encryption.primaryKey.vaultTransitKeyName) -}} +{{- print true -}} +{{- end -}} +{{- end -}} + +{{- define "check.validateImagePullSecrets" -}} + {{/* Validate image pull secrets if a custom Docker config is provided */}} + {{- if (or .Values.secrets.dockerConfig .Values.secrets.dockerConfigPath ) -}} + {{- if (and .Values.grafana.enabled (not .Values.global.imagePullSecret) (not .Values.grafana.image.pullSecrets)) -}} + {{ fail "A custom Docker config was provided, but Grafana is not configured to use it. Please check that global.imagePullSecret is set correctly." }} + {{- end -}} + {{- if (and .Values.prometheus.server.enabled (not .Values.global.imagePullSecret) (not .Values.prometheus.imagePullSecrets)) -}} + {{ fail "A custom Docker config was provided, but Prometheus is not configured to use it. Please check that global.imagePullSecret is set correctly." }} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- define "k10.imagePullSecrets" }} +{{- $imagePullSecrets := list .Values.global.imagePullSecret }}{{/* May be empty, but the compact below will handle that */}} +{{- if (or .Values.secrets.dockerConfig .Values.secrets.dockerConfigPath) }} + {{- $imagePullSecrets = concat $imagePullSecrets (list "k10-ecr") }} +{{- end }} +{{- $imagePullSecrets = $imagePullSecrets | compact | uniq }} + +{{- if $imagePullSecrets }} +imagePullSecrets: + {{- range $imagePullSecrets }} + {{/* Check if the name is not empty string */}} + - name: {{ . 
}} + {{- end }} +{{- end }} +{{- end }} + +{{/* +k10.imagePullSecretNames gets us just the secret names that are going to be used +as imagePullSecrets in the k10 services. +*/}} +{{- define "k10.imagePullSecretNames" }} +{{- $pullSecretsSpec := (include "k10.imagePullSecrets" . ) | fromYaml }} +{{- if $pullSecretsSpec }} + {{- range $pullSecretsSpec.imagePullSecrets }} + {{- $secretName := . }} + {{- printf "%s " ( $secretName.name) }} + {{- end}} +{{- end}} +{{- end }} + +{{/* +The helper template functions below are referenced from the chart +https://github.com/prometheus-community/helm-charts/blob/main/charts/prometheus/templates/_helpers.tpl +*/}} + +{{/* +Return kubernetes version +*/}} +{{- define "k10.kubeVersion" -}} + {{- default .Capabilities.KubeVersion.Version (regexFind "v[0-9]+\\.[0-9]+\\.[0-9]+" .Capabilities.KubeVersion.Version) -}} +{{- end -}} + +{{/* +Return the appropriate apiVersion for ingress. +*/}} +{{- define "ingress.apiVersion" -}} + {{- if and (.Capabilities.APIVersions.Has "networking.k8s.io/v1") (semverCompare ">= 1.19.x" (include "k10.kubeVersion" .)) -}} + {{- print "networking.k8s.io/v1" -}} + {{- else if .Capabilities.APIVersions.Has "extensions/v1beta1" -}} + {{- print "extensions/v1beta1" -}} + {{- else -}} + {{- print "networking.k8s.io/v1beta1" -}} + {{- end -}} +{{- end -}} + +{{/* +Is ingress part of stable APIVersion. +*/}} +{{- define "ingress.isStable" -}} + {{- eq (include "ingress.apiVersion" .) "networking.k8s.io/v1" -}} +{{- end -}} + +{{/* +Check if `ingress.defaultBackend` is properly formatted when specified. +*/}} +{{- define "check.ingress.defaultBackend" -}} + {{- if .Values.ingress.defaultBackend -}} + {{- if and .Values.ingress.defaultBackend.service.enabled .Values.ingress.defaultBackend.resource.enabled -}} + {{- fail "Both `service` and `resource` cannot be enabled in the `ingress.defaultBackend`. Provide only one."
-}} + {{- end -}} + {{- if .Values.ingress.defaultBackend.service.enabled -}} + {{- if and (not .Values.ingress.defaultBackend.service.port.name) (not .Values.ingress.defaultBackend.service.port.number) -}} + {{- fail "Provide either `name` or `number` in the `ingress.defaultBackend.service.port`." -}} + {{- end -}} + {{- if and .Values.ingress.defaultBackend.service.port.name .Values.ingress.defaultBackend.service.port.number -}} + {{- fail "Both `name` and `number` cannot be specified in the `ingress.defaultBackend.service.port`. Provide only one." -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- define "check.validatePrometheusConfig" -}} + {{if and ( and .Values.global.prometheus.external.host .Values.global.prometheus.external.port) .Values.prometheus.server.enabled}} + {{ fail "Internal and external Prometheus configurations cannot both be enabled at the same time"}} + {{- end -}} +{{- end -}} + +{{/* +Defines a unique ID to be assigned to all the K10 ambassador resources. +This ensures that K10's ambassador does not conflict with any other ambassador instances +running in the same cluster. +*/}} +{{- define "k10.ambassadorId" -}} +"kasten.io/k10" +{{- end -}} + +{{/* Check that image.values are not set.
*/}} +{{- define "image.values.check" -}} + {{- if not (empty .main.Values.image) }} + + {{- $registry := .main.Values.image.registry }} + {{- $repository := .main.Values.image.repository }} + {{- if or $registry $repository }} + {{- $registry = coalesce $registry "gcr.io" }} + {{- $repository = coalesce $repository "kasten-images" }} + + {{- $oldCombinedRegistry := "" }} + {{- if hasPrefix $registry $repository }} + {{- $oldCombinedRegistry = $repository }} + {{- else }} + {{- $oldCombinedRegistry = printf "%s/%s" $registry $repository }} + {{- end }} + + {{- if ne $oldCombinedRegistry .main.Values.global.image.registry }} + {{- fail "Setting image.registry and image.repository is no longer supported; use global.image.registry instead" }} + {{- end }} + {{- end }} + + {{- $tag := .main.Values.image.tag }} + {{- if $tag }} + {{- if ne $tag .main.Values.global.image.tag }} + {{- fail "Setting image.tag is no longer supported; use global.image.tag instead" }} + {{- end }} + {{- end }} + + {{- $pullPolicy := .main.Values.image.pullPolicy }} + {{- if $pullPolicy }} + {{- if ne $pullPolicy .main.Values.global.image.pullPolicy }} + {{- fail "Setting image.pullPolicy is no longer supported; use global.image.pullPolicy instead" }} + {{- end }} + {{- end }} + + {{- end }} +{{- end -}} + +{{/* Used to verify if Ironbank is enabled */}} +{{- define "ironbank.enabled" -}} + {{- if (.Values.global.ironbank | default dict).enabled -}} + {{- print true -}} + {{- end -}} +{{- end -}} + +{{/* Get the K10 image tag. Fails if not set correctly */}} +{{- define "get.k10ImageTag" -}} + {{- $imageTag := coalesce .Values.global.image.tag (include "k10.imageTag" .) }} + {{- if not $imageTag }} + {{- fail "global.image.tag must be set because the helm chart does not include a default tag." }} + {{- else }} + {{- $imageTag }} + {{- end }} +{{- end -}} + +{{- define "get.initImage" -}} + {{- (get .Values.global.images (include "init.ImageName" .)) | default (include "init.Image" .)
}} +{{- end -}} + +{{- define "init.Image" -}} + {{- printf "%s:%s" (include "init.ImageRepo" .) (include "get.k10ImageTag" .) }} +{{- end -}} + +{{- define "init.ImageRepo" -}} + {{- if .Values.global.airgapped.repository }} + {{- printf "%s/%s" .Values.global.airgapped.repository (include "init.ImageName" .) }} + {{- else if .Values.global.azMarketPlace }} + {{- printf "%s/%s" .Values.global.azure.images.init.registry .Values.global.azure.images.init.image }} + {{- else }} + {{- printf "%s/%s" .Values.global.image.registry (include "init.ImageName" .) }} + {{- end }} +{{- end -}} + +{{- define "init.ImageName" -}} + {{- printf "init" }} +{{- end -}} + +{{- define "k10.splitImage" -}} + {{- $split_repo_tag_and_hash := .image | splitList "@" -}} + {{- $split_repo_and_tag := $split_repo_tag_and_hash | first | splitList ":" -}} + {{- $repo := $split_repo_and_tag | first -}} + + {{- /* Error if there are extra pieces we don't understand in the image */ -}} + {{- $split_repo_tag_and_hash_len := $split_repo_tag_and_hash | len -}} + {{- $split_repo_and_tag_len := $split_repo_and_tag | len -}} + {{- if or (gt $split_repo_tag_and_hash_len 2) (gt $split_repo_and_tag_len 2) -}} + {{- fail (printf "Unsupported image format: %q (%s)" .image .path) -}} + {{- end -}} + + {{- $digest := $split_repo_tag_and_hash | rest | first -}} + {{- $tag := $split_repo_and_tag | rest | first -}} + + {{- $sha := "" -}} + {{- if $digest -}} + {{- if not ($digest | hasPrefix "sha256:") -}} + {{- fail (printf "Unsupported image ...@hash type: %q (%s)" .image .path) -}} + {{- end -}} + {{- $sha = $digest | trimPrefix "sha256:" }} + {{- end -}} + + {{- /* Split out the registry if the first component of the repo contains a "." */ -}} + {{- $registry := "" }} + {{- $split_repo := $repo | splitList "/" -}} + {{- if first $split_repo | contains "."
-}} + {{- $registry = first $split_repo -}} + {{- $split_repo = rest $split_repo -}} + {{- end -}} + {{- $repo = $split_repo | join "/" -}} + + {{- + (dict + "registry" $registry + "repository" $repo + "tag" ($tag | default "") + "digest" ($digest | default "") + "sha" ($sha | default "") + ) | toJson + -}} +{{- end -}} + +{{/* Fail if Ironbank is enabled and the admin image is turned on */}} +{{- define "k10.fail.ironbankPdfReports" -}} + {{- if and (include "ironbank.enabled" .) (.Values.reporting.pdfReports) -}} + {{- fail "global.ironbank.enabled and reporting.pdfReports cannot both be enabled at the same time" -}} + {{- end -}} +{{- end -}} + +{{/* Fail if Ironbank is enabled and images we don't support are turned on */}} +{{- define "k10.fail.ironbankRHMarketplace" -}} + {{- if and (include "ironbank.enabled" .) (.Values.global.rhMarketPlace) -}} + {{- fail "global.ironbank.enabled and global.rhMarketPlace cannot both be enabled at the same time" -}} + {{- end -}} +{{- end -}} + +{{/* Fail if Ironbank is enabled and images we don't support are turned on */}} +{{- define "k10.fail.ironbankGrafana" -}} + {{- if (include "ironbank.enabled" .) -}} + {{- range $key, $value := .Values.grafana.sidecar -}} + {{/* + https://go.dev/doc/go1.18: prior to Go 1.18, "and" evaluated all conditions and did not terminate early + when a predicate was met, so the checks below must be kept as separate conditionals for any customers + using a Go version < 1.18. + */}} + {{- if kindIs "map" $value -}} + {{- if hasKey $value "enabled" -}} + {{- if $value.enabled -}} + {{- fail (printf "Ironbank deployment does not support grafana sidecar %s" $key) -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{/* Fail if Ironbank is enabled and images we don't support are turned on */}} +{{- define "k10.fail.ironbankPrometheus" -}} + {{- if (include "ironbank.enabled" .)
-}} + {{- $prometheusDict := pick .Values.prometheus "alertmanager" "kube-state-metrics" "prometheus-node-exporter" "prometheus-pushgateway" -}} + {{- range $key, $value := $prometheusDict -}} + {{/* + https://go.dev/doc/go1.18: prior to Go 1.18, "and" evaluated all conditions and did not terminate early + when a predicate was met, so the checks below must be kept as separate conditionals for any customers + using a Go version < 1.18. + */}} + {{- if kindIs "map" $value -}} + {{- if hasKey $value "enabled" -}} + {{- if $value.enabled -}} + {{- fail (printf "Ironbank deployment does not support prometheus %s" $key) -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{/* Fail if FIPS is enabled and Grafana is turned on */}} +{{- define "k10.fail.fipsGrafana" -}} + {{- if and (.Values.fips.enabled) (.Values.grafana.enabled) -}} + {{- fail "fips.enabled and grafana.enabled cannot both be enabled at the same time" -}} + {{- end -}} +{{- end -}} + +{{/* Fail if FIPS is enabled and Prometheus is turned on */}} +{{- define "k10.fail.fipsPrometheus" -}} + {{- if and (.Values.fips.enabled) (.Values.prometheus.server.enabled) -}} + {{- fail "fips.enabled and prometheus.server.enabled cannot both be enabled at the same time" -}} + {{- end -}} +{{- end -}} + +{{/* Fail if FIPS is enabled and PDF reporting is turned on */}} +{{- define "k10.fail.fipsPDFReports" -}} + {{- if and (.Values.fips.enabled) (.Values.reporting.pdfReports) -}} + {{- fail "fips.enabled and reporting.pdfReports cannot both be enabled at the same time" -}} + {{- end -}} +{{- end -}} + +{{/* Check to see whether SIEM logging is enabled */}} +{{- define "k10.siemEnabled" -}} + {{- if or .Values.siem.logging.cluster.enabled .Values.siem.logging.cloud.awsS3.enabled -}} + {{- true -}} + {{- end -}} +{{- end -}} + +{{/* Determine if logging should go to a file path instead of stdout */}} +{{- define "k10.siemLoggingClusterFile" -}} + {{- if .Values.siem.logging.cluster.enabled -}} + {{- if
(.Values.siem.logging.cluster.file | default dict).enabled -}} + {{- .Values.siem.logging.cluster.file.path | default "" -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{/* Determine if a max file size should be used */}} +{{- define "k10.siemLoggingClusterFileSize" -}} + {{- if .Values.siem.logging.cluster.enabled -}} + {{- if (.Values.siem.logging.cluster.file | default dict).enabled -}} + {{- .Values.siem.logging.cluster.file.size | default "" -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{/* Returns a generated name for the OpenShift Service Account secret */}} +{{- define "get.openshiftServiceAccountSecretName" -}} + {{ printf "%s-k10-secret" (include "get.openshiftServiceAccountName" .) | quote }} +{{- end -}} + +{{/* +Returns a generated name for the OpenShift Service Account if a service account name +is not configured by the user using the helm value auth.openshift.serviceAccount +*/}} +{{- define "get.openshiftServiceAccountName" -}} + {{ default (include "k10.dexServiceAccountName" .) .Values.auth.openshift.serviceAccount}} +{{- end -}} + +{{/* +Returns the required environment variables to enforce FIPS mode using +the Microsoft Go toolchain and Red Hat's OpenSSL.
+*/}} +{{- define "k10.enforceFIPSEnvironmentVariables" }} +- name: GOFIPS + value: "1" +- name: OPENSSL_FORCE_FIPS_MODE + value: "1" +{{- if .Values.fips.disable_ems }} +- name: KASTEN_CRYPTO_POLICY + value: disable_ems +{{- end }} +{{- end }} + +{{/* +Returns a billing identifier label to be added to workloads for the azure marketplace offer +*/}} +{{- define "k10.azMarketPlace.billingIdentifier" -}} + {{- if .Values.global.azMarketPlace -}} + azure-extensions-usage-release-identifier: {{.Release.Name}} + {{- end -}} +{{- end -}} + +{{/* +Returns the Grafana URL based on the fields grafana.enabled and grafana.external.url: the internal +Grafana path is returned when internal Grafana is enabled, otherwise the external Grafana URL is returned +*/}} +{{- define "k10.grafanaUrl" -}} + {{- if and (.Values.grafana.enabled) (.Values.grafana.external.url) }} + {{- fail "K10's Grafana is enabled and an external Grafana URL is also provided. The URL must only be provided if grafana.enabled is set to false." }} + {{- end }} + {{- if .Values.grafana.enabled }} + {{- include "k10.prefixPath" . }}/grafana/ + {{- else -}} + {{ .Values.grafana.external.url }} + {{- end }} +{{- end }} + +{{/* Fail if internal logging is enabled and a fluentbit endpoint is specified; otherwise return the fluentbit endpoint even if it's empty */}} +{{- define "k10.fluentbitEndpoint" -}} + {{- if and (.Values.logging.fluentbit_endpoint) (.Values.logging.internal) -}} + {{- fail "logging.fluentbit_endpoint cannot be set if logging.internal is true" -}} + {{- end -}} + {{ .Values.logging.fluentbit_endpoint }} +{{- end -}} + diff --git a/charts/kasten/k10/7.0.1101/templates/_k10_container.tpl b/charts/kasten/k10/7.0.1101/templates/_k10_container.tpl new file mode 100644 index 0000000000..6e2434912c --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/_k10_container.tpl @@ -0,0 +1,1121 @@ +{{- define "k10-containers" }} +{{- $pod := .k10_pod }} +{{- with .main }} +{{- $main_context := .
}} +{{- $colocatedList := include "get.enabledColocatedServices" . | fromYaml }} +{{- $containerList := (dict "main" $main_context "k10_service_pod" $pod | include "get.serviceContainersInPod" | splitList " ") }} + containers: +{{- range $skip, $container := $containerList }} + {{- $port := default $main_context.Values.service.externalPort (index $colocatedList $container).port }} + {{- $serviceStateful := has $container (dict "main" $main_context "k10_service_pod" $pod | include "get.statefulRestServicesInPod" | splitList " ") }} + {{- dict "main" $main_context "k10_pod" $pod "k10_container" $container "externalPort" $port "stateful" $serviceStateful | include "k10-container" }} +{{- end }} +{{- end }}{{/* with .main */}} +{{- end }}{{/* define "k10-containers" */}} + +{{- define "k10-container" }} +{{- $pod := .k10_pod }} +{{- $service := .k10_container }} +{{- $externalPort := .externalPort }} +{{- with .main }} + - name: {{ $service }}-svc + {{- dict "main" . "k10_service" $service | include "serviceImage" | indent 8 }} + imagePullPolicy: {{ .Values.global.image.pullPolicy }} +{{- if eq $service "aggregatedapis" }} + args: + - "--secure-port={{ .Values.service.aggregatedApiPort }}" + - "--cert-dir=/tmp/apiserver.local.config/certificates/" + {{- if include "k10.siemEnabled" . }} + - "--audit-policy-file=/etc/kubernetes/{{ include "k10.aggAuditPolicyFile" .}}" + {{/* SIEM cloud logging */}} + - "--enable-k10-audit-cloud-aws-s3={{ .Values.siem.logging.cloud.awsS3.enabled }}" + - "--audit-cloud-path={{ .Values.siem.logging.cloud.path }}" + {{/* SIEM cluster logging */}} + - "--enable-k10-audit-cluster={{ .Values.siem.logging.cluster.enabled }}" + - "--audit-log-file={{ include "k10.siemLoggingClusterFile" . | default (include "k10.siemAuditLogFilePath" .) }}" + - "--audit-log-file-size={{ include "k10.siemLoggingClusterFileSize" . | default (include "k10.siemAuditLogFileSize" .) 
}}" + {{- end }} +{{- if .Values.useNamespacedAPI }} + - "--k10-api-domain={{ template "apiDomain" . }}" +{{- end }}{{/* .Values.useNamespacedAPI */}} +{{/* +We need this explicit conversion because installation using operator hub was failing +stating that types are not same for the equality check +*/}} +{{- else if not (eq (int .Values.service.externalPort) (int $externalPort) ) }} + args: + - "--port={{ $externalPort }}" + - "--host=0.0.0.0" +{{- end }}{{/* eq $service "aggregatedapis" */}} +{{- $podName := (printf "%s-svc" $pod) }} +{{- $containerName := (printf "%s-svc" $service) }} +{{- dict "main" . "k10_service_pod_name" $podName "k10_service_container_name" $containerName | include "k10.resource.request" | indent 8}} + ports: +{{- if eq $service "aggregatedapis" }} + - containerPort: {{ .Values.service.aggregatedApiPort }} +{{- else }} + - containerPort: {{ $externalPort }} + {{- if eq $service "controllermanager" }} + - containerPort: {{ include "k10.mcExternalPort" nil }} + {{- end }} +{{- end }} +{{- if eq $service "logging" }} + - containerPort: 24224 + protocol: TCP + - containerPort: 24225 + protocol: TCP +{{- end }} + securityContext: + allowPrivilegeEscalation: false + readOnlyRootFilesystem: false + capabilities: + drop: ["ALL"] + livenessProbe: +{{- if eq $service "aggregatedapis" }} + tcpSocket: + port: {{ .Values.service.aggregatedApiPort }} + timeoutSeconds: 5 +{{- else }} + httpGet: + path: /v0/healthz + port: {{ $externalPort }} + timeoutSeconds: 1 +{{- end }} + initialDelaySeconds: 300 +{{- if ne $service "aggregatedapis" }} + readinessProbe: + httpGet: + path: /v0/healthz + port: {{ $externalPort }} + initialDelaySeconds: 3 +{{- end }} + env: +{{- if eq $service "dashboardbff" }} + - name: {{ include "k10.disabledServicesEnvVar" . }} + value: {{ include "get.disabledServices" . 
| quote }} +{{- end -}} +{{- if list "dashboardbff" "executor" "garbagecollector" "controllermanager" "kanister" | has $service}} +{{- if not (eq (include "check.googleproject" . ) "true") -}} + {{- fail "secrets.googleApiKey field is required when using secrets.googleProjectId" -}} +{{- end -}} +{{- $gkeSecret := default "google-secret" .Values.secrets.googleClientSecretName }} +{{- $gkeProjectId := "kasten-gke-project" }} +{{- $gkeApiKey := "/var/run/secrets/kasten.io/kasten-gke-sa.json"}} +{{- if eq (include "check.googleCredsSecret" .) "true" }} + {{- $gkeProjectId = "google-project-id" }} + {{- $gkeApiKey = "/var/run/secrets/kasten.io/google-api-key" }} +{{- end }} +{{- if eq (include "check.googleCredsOrSecret" .) "true" }} + - name: GOOGLE_APPLICATION_CREDENTIALS + value: {{ $gkeApiKey }} +{{- end }} +{{- if eq (include "check.googleCredsOrSecret" .) "true" }} + - name: projectID + valueFrom: + secretKeyRef: + name: {{ $gkeSecret }} + key: {{ $gkeProjectId }} + optional: true +{{- end }} +{{- end }} +{{- if list "dashboardbff" "executor" "garbagecollector" "controllermanager" "kanister" | has $service}} +{{- if or (eq (include "check.azuresecret" .) "true") (eq (include "check.azurecreds" .) "true" ) }} +{{- if eq (include "check.azuresecret" .) "true" }} + - name: {{ include "k10.azureClientIDEnvVar" . }} + valueFrom: + secretKeyRef: + name: {{ .Values.secrets.azureClientSecretName }} + key: azure_client_id + - name: {{ include "k10.azureTenantIDEnvVar" . }} + valueFrom: + secretKeyRef: + name: {{ .Values.secrets.azureClientSecretName }} + key: azure_tenant_id + - name: {{ include "k10.azureClientSecretEnvVar" . }} + valueFrom: + secretKeyRef: + name: {{ .Values.secrets.azureClientSecretName }} + key: azure_client_secret +{{- else }} +{{- if or (eq (include "check.azureMSIWithClientID" .) "true") (eq (include "check.azureClientSecretCreds" .) "true") }} + - name: {{ include "k10.azureClientIDEnvVar" . 
}} + valueFrom: + secretKeyRef: + name: azure-creds + key: azure_client_id +{{- end }} +{{- if eq (include "check.azureClientSecretCreds" .) "true" }} + - name: {{ include "k10.azureTenantIDEnvVar" . }} + valueFrom: + secretKeyRef: + name: azure-creds + key: azure_tenant_id + - name: {{ include "k10.azureClientSecretEnvVar" . }} + valueFrom: + secretKeyRef: + name: azure-creds + key: azure_client_secret +{{- end }} +{{- end }} +{{- if .Values.secrets.azureResourceGroup }} + - name: AZURE_RESOURCE_GROUP + valueFrom: + secretKeyRef: + name: azure-creds + key: azure_resource_group +{{- end }} +{{- if .Values.secrets.azureSubscriptionID }} + - name: AZURE_SUBSCRIPTION_ID + valueFrom: + secretKeyRef: + name: azure-creds + key: azure_subscription_id +{{- end }} +{{- if .Values.secrets.azureResourceMgrEndpoint }} + - name: AZURE_RESOURCE_MANAGER_ENDPOINT + valueFrom: + secretKeyRef: + name: azure-creds + key: azure_resource_manager_endpoint +{{- end }} +{{- if or .Values.secrets.azureADEndpoint .Values.secrets.microsoftEntraIDEndpoint }} + - name: AZURE_AD_ENDPOINT + valueFrom: + secretKeyRef: + name: azure-creds + key: entra_id_endpoint +{{- end }} +{{- if or .Values.secrets.azureADResourceID .Values.secrets.microsoftEntraIDResourceID }} + - name: AZURE_AD_RESOURCE + valueFrom: + secretKeyRef: + name: azure-creds + key: entra_id_resource_id +{{- end }} +{{- if .Values.secrets.azureCloudEnvID }} + - name: AZURE_CLOUD_ENV_ID + valueFrom: + secretKeyRef: + name: azure-creds + key: azure_cloud_env_id +{{- end }} +{{- if eq (include "check.azureMSIWithDefaultID" .) "true" }} + - name: USE_AZURE_DEFAULT_MSI + value: "{{ .Values.azure.useDefaultMSI }}" +{{- end }} +{{- end }} +{{- end }} + +{{- /* +There are 3 valid states of the secret provided by customer: +1. Only role set +2. Both aws_access_key_id and aws_secret_access_key are set +3. All of role, aws_access_key_id and aws_secret_access_key are set. +*/}} +{{- if eq (include "check.awsSecretName" .) 
"true" }} + {{- $customerSecret := (lookup "v1" "Secret" .Release.Namespace .Values.secrets.awsClientSecretName )}} + {{- if $customerSecret }} + {{- if and (not $customerSecret.data.role) (not $customerSecret.data.aws_access_key_id) (not $customerSecret.data.aws_secret_access_key) }} + {{ fail "Provided secret must contain at least AWS IAM Role or AWS access key ID together with AWS secret access key"}} + {{- end }} + {{- if not (or (and $customerSecret.data.aws_access_key_id $customerSecret.data.aws_secret_access_key) (and (not $customerSecret.data.aws_access_key_id) (not $customerSecret.data.aws_secret_access_key))) }} + {{ fail "Provided secret lacks aws_access_key_id or aws_secret_access_key" }} + {{- end }} + {{- end }} +{{- end }} +{{- if list "dashboardbff" "executor" "garbagecollector" "controllermanager" "metering" "kanister" | has $service}} +{{- $awsSecretName := default "aws-creds" .Values.secrets.awsClientSecretName }} + - name: AWS_ACCESS_KEY_ID + valueFrom: + secretKeyRef: + name: {{ $awsSecretName }} + key: aws_access_key_id + optional: true + - name: AWS_SECRET_ACCESS_KEY + valueFrom: + secretKeyRef: + name: {{ $awsSecretName }} + key: aws_secret_access_key + optional: true + - name: K10_AWS_IAM_ROLE + valueFrom: + secretKeyRef: + name: {{ $awsSecretName }} + key: role + optional: true +{{- end }} +{{- if list "controllermanager" "executor" "catalog" | has $service}} +{{- if eq (include "check.gwifenabled" .) "true"}} + - name: GOOGLE_WORKLOAD_IDENTITY_FEDERATION_ENABLED + value: "true" +{{- if eq (include "check.gwifidptype" .) "true"}} + - name: GOOGLE_WORKLOAD_IDENTITY_FEDERATION_IDP + value: {{ .Values.google.workloadIdentityFederation.idp.type }} +{{- end }} +{{- if eq (include "check.gwifidpaud" .) "true"}} + - name: GOOGLE_WORKLOAD_IDENTITY_FEDERATION_AUD + value: {{ .Values.google.workloadIdentityFederation.idp.aud }} +{{- end }} +{{- end }} {{/* if eq (include "check.gwifenabled" .) 
"true" */}} +{{- end }} {{/* list "controllermanager" "executor" "catalog" | has $service */}} +{{- if or (eq $service "crypto") (eq $service "executor") (eq $service "dashboardbff") (eq $service "repositories") }} +{{- if eq (include "check.vaultcreds" .) "true" }} + - name: VAULT_ADDR + value: {{ .Values.vault.address }} +{{- if eq (include "check.vaultk8sauth" .) "true" }} + - name: VAULT_AUTH_ROLE + value: {{ .Values.vault.role }} + - name: VAULT_K8S_SERVICE_ACCOUNT_TOKEN_PATH + value: {{ .Values.vault.serviceAccountTokenPath }} +{{- end }} +{{- if (eq (include "check.vaulttokenauth" .) "true") }} + - name: VAULT_TOKEN + valueFrom: + secretKeyRef: + name: {{.Values.vault.secretName }} + key: vault_token +{{- end }} +{{- end }} +{{- end }} +{{- if list "dashboardbff" "executor" "garbagecollector" "controllermanager" | has $service}} +{{- if or (eq (include "check.vspherecreds" .) "true") (eq (include "check.vsphereClientSecret" .) "true") }} +{{- $vsphereSecretName := default "vsphere-creds" .Values.secrets.vsphereClientSecretName }} + - name: VSPHERE_ENDPOINT + valueFrom: + secretKeyRef: + name: {{ $vsphereSecretName }} + key: vsphere_endpoint + - name: VSPHERE_USERNAME + valueFrom: + secretKeyRef: + name: {{ $vsphereSecretName }} + key: vsphere_username + - name: VSPHERE_PASSWORD + valueFrom: + secretKeyRef: + name: {{ $vsphereSecretName }} + key: vsphere_password +{{- end }} +{{- end }} + - name: VERSION + valueFrom: + configMapKeyRef: + name: k10-config + key: version + - name: {{ include "k10.fluentbitEndpointEnvVar" . }} + valueFrom: + configMapKeyRef: + name: k10-config + key: fluentbitEndpoint + optional: true +{{- if .Values.clusterName }} + - name: CLUSTER_NAME + valueFrom: + configMapKeyRef: + name: k10-config + key: clustername +{{- end }} +{{- if .Values.fips.enabled }} + {{- include "k10.enforceFIPSEnvironmentVariables" . | indent 10 }} +{{- end }} + {{- with $capabilities := include "k10.capabilities" . 
}} + - name: K10_CAPABILITIES + value: {{ $capabilities | quote }} + {{- end }} + {{- with $capabilities_mask := include "k10.capabilities_mask" . }} + - name: K10_CAPABILITIES_MASK + value: {{ $capabilities_mask | quote }} + {{- end }} + - name: K10_HOST_SVC + value: {{ $pod }} +{{- if eq $service "controllermanager" }} + - name: K10_STATEFUL + value: "{{ .Values.global.persistence.enabled }}" +{{- if .Values.workerPodCRDs.resourcesRequests.maxMemory }} + - name: EPHEMERAL_POD_MAX_MEMORY_REQUESTS + valueFrom: + configMapKeyRef: + name: k10-config + key: workerPodMaxMemoryRequest +{{- end }} +{{- if .Values.workerPodCRDs.resourcesRequests.maxCPU }} + - name: EPHEMERAL_POD_MAX_CPU_REQUESTS + valueFrom: + configMapKeyRef: + name: k10-config + key: workerPodMaxCPURequest +{{- end }} +{{- end }} +{{- if eq $service "executor" }} + - name: KUBEVIRT_VM_UNFREEZE_TIMEOUT + valueFrom: + configMapKeyRef: + name: k10-config + key: kubeVirtVMsUnFreezeTimeout +{{- end }} +{{- if eq $service "executor" }} + - name: QUICK_DISASTER_RECOVERY_ENABLED + valueFrom: + configMapKeyRef: + name: k10-config + key: quickDisasterRecoveryEnabled +{{- end }} +{{- if or (eq $service "executor") (eq $service "controllermanager") }} +{{- if or .Values.global.imagePullSecret (or .Values.secrets.dockerConfig .Values.secrets.dockerConfigPath) }} + - name: IMAGE_PULL_SECRET_NAMES + value: {{ (trimSuffix " " (include "k10.imagePullSecretNames" .)) | toJson }} +{{- end }} +{{- end }} + - name: MODEL_STORE_DIR +{{- if or (eq $service "state") (not .Values.global.persistence.enabled) }} + value: "/tmp/k10store" +{{- else }} + valueFrom: + configMapKeyRef: + name: k10-config + key: modelstoredirname +{{- end }} +{{- if or (eq $service "kanister") (eq $service "executor")}} + - name: DATA_MOVER_IMAGE + value: {{ include "get.datamoverImage" . 
}} +{{- if .Values.workerPodCRDs.enabled }} + - name: K10_EPHEMERAL_POD_SPECS_CR_ENABLED + valueFrom: + configMapKeyRef: + name: k10-config + key: workerPodResourcesCRDEnabled +{{- if .Values.workerPodCRDs.defaultActionPodSpec }} + - name: K10_DEFAULT_ACTION_POD_SPEC_NAME + valueFrom: + configMapKeyRef: + name: k10-config + key: workerPodDefaultAPSName +{{- end }} +{{- end }} +{{- end }} +{{- if eq $service "executor"}} + - name: DATA_STORE_LOG_LEVEL + valueFrom: + configMapKeyRef: + name: k10-config + key: DataStoreLogLevel + - name: DATA_STORE_FILE_LOG_LEVEL + valueFrom: + configMapKeyRef: + name: k10-config + key: DataStoreFileLogLevel +{{- end }} + - name: LOG_LEVEL + valueFrom: + configMapKeyRef: + name: k10-config + key: loglevel +{{- if .Values.kanisterPodCustomLabels }} + - name: KANISTER_POD_CUSTOM_LABELS + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodCustomLabels +{{- end }} +{{- if .Values.kanisterPodCustomAnnotations }} + - name: KANISTER_POD_CUSTOM_ANNOTATIONS + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodCustomAnnotations +{{- end }} + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: CONCURRENT_SNAP_CONVERSIONS + valueFrom: + configMapKeyRef: + name: k10-config + key: concurrentSnapConversions + - name: CONCURRENT_WORKLOAD_SNAPSHOTS + valueFrom: + configMapKeyRef: + name: k10-config + key: concurrentWorkloadSnapshots + - name: K10_DATA_STORE_PARALLEL_UPLOAD + valueFrom: + configMapKeyRef: + name: k10-config + key: k10DataStoreParallelUpload + - name: K10_DATA_STORE_PARALLEL_DOWNLOAD + valueFrom: + configMapKeyRef: + name: k10-config + key: k10DataStoreParallelDownload + - name: K10_DATA_STORE_GENERAL_CONTENT_CACHE_SIZE_MB + valueFrom: + configMapKeyRef: + name: k10-config + key: k10DataStoreGeneralContentCacheSizeMB + - name: K10_DATA_STORE_GENERAL_METADATA_CACHE_SIZE_MB + valueFrom: + configMapKeyRef: + name: k10-config + key: k10DataStoreGeneralMetadataCacheSizeMB + 
- name: K10_DATA_STORE_RESTORE_CONTENT_CACHE_SIZE_MB + valueFrom: + configMapKeyRef: + name: k10-config + key: k10DataStoreRestoreContentCacheSizeMB + - name: K10_DATA_STORE_RESTORE_METADATA_CACHE_SIZE_MB + valueFrom: + configMapKeyRef: + name: k10-config + key: k10DataStoreRestoreMetadataCacheSizeMB + - name: K10_LIMITER_GENERIC_VOLUME_SNAPSHOTS + valueFrom: + configMapKeyRef: + name: k10-config + key: K10LimiterGenericVolumeSnapshots + - name: K10_LIMITER_GENERIC_VOLUME_COPIES + valueFrom: + configMapKeyRef: + name: k10-config + key: K10LimiterGenericVolumeCopies + - name: K10_LIMITER_GENERIC_VOLUME_RESTORES + valueFrom: + configMapKeyRef: + name: k10-config + key: K10LimiterGenericVolumeRestores + - name: K10_LIMITER_CSI_SNAPSHOTS + valueFrom: + configMapKeyRef: + name: k10-config + key: K10LimiterCsiSnapshots + - name: K10_LIMITER_PROVIDER_SNAPSHOTS + valueFrom: + configMapKeyRef: + name: k10-config + key: K10LimiterProviderSnapshots + - name: K10_LIMITER_IMAGE_COPIES + valueFrom: + configMapKeyRef: + name: k10-config + key: K10LimiterImageCopies + - name: K10_EPHEMERAL_PVC_OVERHEAD + valueFrom: + configMapKeyRef: + name: k10-config + key: K10EphemeralPVCOverhead + - name: K10_PERSISTENCE_STORAGE_CLASS + valueFrom: + configMapKeyRef: + name: k10-config + key: K10PersistenceStorageClass + - name: AWS_ASSUME_ROLE_DURATION + valueFrom: + configMapKeyRef: + name: k10-config + key: AWSAssumeRoleDuration +{{- if (list "kanister" "executor" "repositories" | has $service) }} + - name: K10_DATA_STORE_DISABLE_COMPRESSION + valueFrom: + configMapKeyRef: + name: k10-config + key: k10DataStoreDisableCompression + + - name: K10_KANISTER_POD_METRICS_IMAGE + value: {{ include "get.metricSidecarImage" . 
}} + + - name: KANISTER_POD_READY_WAIT_TIMEOUT + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodReadyWaitTimeout + + - name: K10_KANISTER_POD_METRICS_ENABLED + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodMetricSidecarEnabled + - name: PUSHGATEWAY_METRICS_INTERVAL + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodPushgatewayMetricsInterval + {{- if .Values.kanisterPodMetricSidecar.resources.requests.memory }} + - name: K10_KANISTER_POD_METRIC_SIDECAR_MEMORY_REQUEST + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodMetricSidecarMemoryRequest + {{- end }} + {{- if .Values.kanisterPodMetricSidecar.resources.requests.cpu }} + - name: K10_KANISTER_POD_METRIC_SIDECAR_CPU_REQUEST + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodMetricSidecarCPURequest + {{- end }} + {{- if .Values.kanisterPodMetricSidecar.resources.limits.memory }} + - name: K10_KANISTER_POD_METRIC_SIDECAR_MEMORY_LIMIT + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodMetricSidecarMemoryLimit + {{- end }} + {{- if .Values.kanisterPodMetricSidecar.resources.limits.cpu }} + - name: K10_KANISTER_POD_METRIC_SIDECAR_CPU_LIMIT + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodMetricSidecarCPULimit + {{- end }} + {{- if .Values.scc.create }} + - name: {{ include "k10.sccNameEnvVar" . 
}} + value: {{ .Release.Name }}-scc + {{- end }} +{{- end }} +{{- if (list "kanister" "executor" "repositories" "crypto" "dashboardbff" "aggregatedapis" | has $service) }} + {{- if .Values.global.podLabels }} + - name: K10_CUSTOM_POD_LABELS + valueFrom: + configMapKeyRef: + name: k10-config + key: K10CustomPodLabels + {{- end }} + {{- if .Values.global.podAnnotations }} + - name: K10_CUSTOM_POD_ANNOTATIONS + valueFrom: + configMapKeyRef: + name: k10-config + key: K10CustomPodAnnotations + {{- end }} +{{- end }} +{{- if (list "dashboardbff" "catalog" "executor" "crypto" | has $service) }} + {{- if .Values.metering.mode }} + - name: K10REPORTMODE + value: {{ .Values.metering.mode }} + {{- end }} +{{- end }} +{{- if eq $service "garbagecollector" }} + - name: K10_GC_DAEMON_PERIOD + valueFrom: + configMapKeyRef: + name: k10-config + key: K10GCDaemonPeriod + - name: K10_GC_KEEP_MAX_ACTIONS + valueFrom: + configMapKeyRef: + name: k10-config + key: K10GCKeepMaxActions + - name: K10_GC_ACTIONS_ENABLED + valueFrom: + configMapKeyRef: + name: k10-config + key: K10GCActionsEnabled +{{- end }} +{{- if (eq $service "executor") }} + - name: K10_EXECUTOR_WORKER_COUNT + valueFrom: + configMapKeyRef: + name: k10-config + key: K10ExecutorWorkerCount + - name: K10_EXECUTOR_MAX_CONCURRENT_RESTORE_CSI_SNAPSHOTS + valueFrom: + configMapKeyRef: + name: k10-config + key: K10ExecutorMaxConcurrentRestoreCsiSnapshots + - name: K10_EXECUTOR_MAX_CONCURRENT_RESTORE_GENERIC_VOLUME_SNAPSHOTS + valueFrom: + configMapKeyRef: + name: k10-config + key: K10ExecutorMaxConcurrentRestoreGenericVolumeSnapshots + - name: K10_EXECUTOR_MAX_CONCURRENT_RESTORE_WORKLOADS + valueFrom: + configMapKeyRef: + name: k10-config + key: K10ExecutorMaxConcurrentRestoreWorkloads + - name: KANISTER_BACKUP_TIMEOUT + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterBackupTimeout + - name: KANISTER_RESTORE_TIMEOUT + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterRestoreTimeout + - name: 
KANISTER_DELETE_TIMEOUT + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterDeleteTimeout + - name: KANISTER_HOOK_TIMEOUT + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterHookTimeout + - name: KANISTER_CHECKREPO_TIMEOUT + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterCheckRepoTimeout + - name: KANISTER_STATS_TIMEOUT + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterStatsTimeout + - name: KANISTER_EFSPOSTRESTORE_TIMEOUT + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterEFSPostRestoreTimeout + - name: KANISTER_MANAGED_DATA_SERVICES_BLUEPRINTS_ENABLED + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterManagedDataServicesBlueprintsEnabled +{{- if .Values.maxJobWaitDuration }} + - name: K10_JOB_MAX_WAIT_DURATION + valueFrom: + configMapKeyRef: + name: k10-config + key: k10JobMaxWaitDuration +{{- end }} + - name: K10_FORCE_ROOT_IN_KANISTER_HOOKS + valueFrom: + configMapKeyRef: + name: k10-config + key: k10ForceRootInKanisterHooks +{{- end }} +{{- if and (eq $service "executor") (.Values.awsConfig.efsBackupVaultName) }} + - name: EFS_BACKUP_VAULT_NAME + valueFrom: + configMapKeyRef: + name: k10-config + key: efsBackupVaultName +{{- end }} +{{- if and (eq $service "executor") (.Values.genericStorageBackup.token) }} + - name: K10_GVS_ACTIVATION_TOKEN + valueFrom: + configMapKeyRef: + name: k10-config + key: GVSActivationToken +{{- end }} +{{- if and (eq $service "executor") (.Values.genericStorageBackup.overridepubkey) }} + - name: OVERRIDE_GVS_TOKEN_VERIFICATION_KEY + valueFrom: + configMapKeyRef: + name: k10-config + key: overridePublicKeyForGVS +{{- end }} +{{- if and (eq $service "executor") (.Values.vmWare.taskTimeoutMin) }} + - name: VMWARE_GOM_TIMEOUT_MIN + valueFrom: + configMapKeyRef: + name: k10-config + key: vmWareTaskTimeoutMin +{{- end }} +{{- if .Values.useNamespacedAPI }} + - name: K10_API_DOMAIN + valueFrom: + configMapKeyRef: + name: k10-config + key: apiDomain 
+{{- end }} +{{- if .Values.jaeger.enabled }} + - name: JAEGER_AGENT_HOST + value: {{ .Values.jaeger.agentDNS }} +{{- end }} +{{- if .Values.auth.tokenAuth.enabled }} + - name: TOKEN_AUTH + valueFrom: + secretKeyRef: + name: k10-token-auth + key: auth +{{- end }} + - name: KANISTER_TOOLS + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterToolsImage +{{- with (include "k10.cacertconfigmapname" .) }} + - name: CACERT_CONFIGMAP_NAME + value: {{ . }} +{{- end }} + - name: K10_RELEASE_NAME + value: {{ .Release.Name }} + - name: KANISTER_FUNCTION_VERSION + valueFrom: + configMapKeyRef: + name: k10-config + key: kanisterFunctionVersion +{{- if and (eq $service "controllermanager") (.Values.injectKanisterSidecar.enabled) }} + - name: K10_MUTATING_WEBHOOK_ENABLED + value: "true" + - name: K10_MUTATING_WEBHOOK_TLS_CERT_DIR + valueFrom: + configMapKeyRef: + name: k10-config + key: K10MutatingWebhookTLSCertDir + - name: K10_MUTATING_WEBHOOK_PORT + value: {{ .Values.injectKanisterSidecar.webhookServer.port | quote }} +{{- end }} +{{- if (list "controllermanager" "kanister" "executor" "dashboardbff" "repositories" | has $service) }} + - name: K10_DEFAULT_PRIORITY_CLASS_NAME + valueFrom: + configMapKeyRef: + name: k10-config + key: K10DefaultPriorityClassName +{{- if .Values.genericVolumeSnapshot.resources.requests.memory }} + - name: KANISTER_TOOLS_MEMORY_REQUESTS + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterToolsMemoryRequests +{{- end }} +{{- if .Values.genericVolumeSnapshot.resources.requests.cpu }} + - name: KANISTER_TOOLS_CPU_REQUESTS + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterToolsCPURequests +{{- end }} +{{- if .Values.genericVolumeSnapshot.resources.limits.memory }} + - name: KANISTER_TOOLS_MEMORY_LIMITS + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterToolsMemoryLimits +{{- end }} +{{- if .Values.genericVolumeSnapshot.resources.limits.cpu }} + - name: KANISTER_TOOLS_CPU_LIMITS + valueFrom: + 
configMapKeyRef: + name: k10-config + key: KanisterToolsCPULimits +{{- end }} +{{- end }} +{{- if (list "dashboardbff" "controllermanager" "executor" | has $service) }} + {{- if .Values.prometheus.server.enabled }} + - name: K10_PROMETHEUS_HOST + value: {{ include "k10.prometheus.service.name" . }}-exp + - name: K10_PROMETHEUS_PORT + value: {{ .Values.prometheus.server.service.servicePort | quote }} + - name: K10_PROMETHEUS_BASE_URL + value: {{ .Values.prometheus.server.baseURL }} + {{- else -}} + {{- if and .Values.global.prometheus.external.host .Values.global.prometheus.external.port}} + - name: K10_PROMETHEUS_HOST + value: {{ .Values.global.prometheus.external.host }} + - name: K10_PROMETHEUS_PORT + value: {{ .Values.global.prometheus.external.port | quote }} + - name: K10_PROMETHEUS_BASE_URL + value: {{ .Values.global.prometheus.external.baseURL }} + {{- end -}} + {{- end }} + {{- if or (.Values.grafana.enabled) (.Values.grafana.external.url) }} + - name: GRAFANA_URL + value: {{ include "k10.grafanaUrl" . 
}} + {{- end }} +{{- end }} +{{- if eq $service "dashboardbff" }} + {{- if ne .Values.global.persistence.diskSpaceAlertPercent nil }} + - name: K10_DISK_SPACE_ALERT_PERCENT + value: {{ .Values.global.persistence.diskSpaceAlertPercent | quote }} + {{- end -}} +{{- end -}} +{{- if eq $service "controllermanager" }} + {{- if .Values.multicluster.primary.create }} + {{- if not .Values.multicluster.enabled }} + {{- fail "Cannot setup cluster as primary without enabling feature with multicluster.enabled=true" -}} + {{- end }} + {{- if not .Values.multicluster.primary.name }} + {{- fail "Cannot setup cluster as primary without setting cluster name with multicluster.primary.name" -}} + {{- end }} + {{- if not .Values.multicluster.primary.ingressURL }} + {{- fail "Cannot setup cluster as primary without providing an ingress with multicluster.primary.ingressURL" -}} + {{- end }} + - name: K10_MC_CREATE_PRIMARY + value: "true" + - name: K10_MC_PRIMARY_NAME + value: {{ .Values.multicluster.primary.name | quote }} + - name: K10_MC_PRIMARY_INGRESS_URL + value: {{ .Values.multicluster.primary.ingressURL | quote }} + {{- end }} +{{- end -}} +{{- if or $.stateful (or (eq (include "check.googleCredsOrSecret" .) "true") (eq $service "auth" "logging")) }} + volumeMounts: +{{- else if or (or (eq (include "basicauth.check" .) "true") (or .Values.auth.oidcAuth.enabled (eq (include "check.dexAuth" .) "true"))) .Values.features }} + volumeMounts: +{{- else if and (eq $service "controllermanager") (.Values.injectKanisterSidecar.enabled) }} + volumeMounts: +{{- else if or (eq (include "check.cacertconfigmap" .) "true") (include "k10.ocpcacertsautoextraction" .) }} + volumeMounts: +{{- else if eq $service "frontend" }} + volumeMounts: +{{- else if and (list "controllermanager" "executor" | has $pod) (eq (include "check.projectSAToken" .) "true")}} + volumeMounts: +{{- else if and (eq $service "aggregatedapis") (include "k10.siemEnabled" .) 
}} + volumeMounts: +{{- end }} +{{- if $.stateful }} + - name: {{ $service }}-persistent-storage + mountPath: {{ .Values.global.persistence.mountPath | quote }} +{{- end }} +{{- if .Values.features }} + - name: k10-features + mountPath: "/mnt/k10-features" +{{- end }} +{{- if eq $service "logging" }} + - name: logging-configmap-storage + mountPath: "/mnt/conf" +{{- end }} +{{- if and (eq $service "controllermanager") (.Values.injectKanisterSidecar.enabled) }} + - name: mutating-webhook-certs + mountPath: /etc/ssl/certs/webhook + readOnly: true +{{- end }} +{{- if list "dashboardbff" "auth" "controllermanager" | has $service}} +{{- if eq (include "basicauth.check" .) "true" }} + - name: k10-basic-auth + mountPath: "/var/run/secrets/kasten.io/k10-basic-auth" + readOnly: true +{{- end }} +{{- if (or .Values.auth.oidcAuth.enabled (eq (include "check.dexAuth" .) "true")) }} + - name: {{ include "k10.oidcSecretName" .}} + mountPath: {{ printf "%s/%s" (include "k10.secretsDir" .) (include "k10.oidcSecretName" .) }} + readOnly: true +{{- if .Values.auth.oidcAuth.clientSecretName }} + - name: {{ include "k10.oidcCustomerSecretName" .}} + mountPath: {{ printf "%s/%s" (include "k10.secretsDir" .) (include "k10.oidcCustomerSecretName" .) }} + readOnly: true +{{- end }} +{{- end }} +{{- end }} +{{- if eq (include "check.googleCredsOrSecret" .) "true"}} + - name: service-account + mountPath: {{ include "k10.secretsDir" .}} +{{- end }} +{{- if and (list "controllermanager" "executor" | has $pod) (eq (include "check.projectSAToken" .) "true")}} + - name: bound-sa-token + mountPath: "/var/run/secrets/kasten.io/serviceaccount/GWIF" + readOnly: true +{{- end }} +{{- with (include "k10.cacertconfigmapname" .) }} + - name: {{ . 
}} + mountPath: "/etc/ssl/certs/custom-ca-bundle.pem" + subPath: custom-ca-bundle.pem +{{- end }} +{{- if eq $service "frontend" }} + - name: frontend-config + mountPath: /etc/nginx/nginx.conf + subPath: nginx.conf + readOnly: true + - name: frontend-config + mountPath: /etc/nginx/conf.d/frontend.conf + subPath: frontend.conf + readOnly: true +{{- end}} +{{- if and (eq $service "aggregatedapis") (include "k10.siemEnabled" .) }} + - name: aggauditpolicy-config + mountPath: /etc/kubernetes/{{ include "k10.aggAuditPolicyFile" .}} + subPath: {{ include "k10.aggAuditPolicyFile" .}} + readOnly: true +{{- end}} +{{- if and (eq $service "catalog") $.stateful }} + - name: kanister-sidecar + image: {{ include "get.kanisterToolsImage" .}} + imagePullPolicy: {{ .Values.kanisterToolsImage.pullPolicy }} +{{- dict "main" . "k10_service_pod_name" $podName "k10_service_container_name" "kanister-sidecar" | include "k10.resource.request" | indent 8}} + env: + {{- with $capabilities := include "k10.capabilities" . }} + - name: K10_CAPABILITIES + value: {{ $capabilities | quote }} + {{- end }} + {{- with $capabilities_mask := include "k10.capabilities_mask" . }} + - name: K10_CAPABILITIES_MASK + value: {{ $capabilities_mask | quote }} + {{- end }} +{{- if .Values.fips.enabled }} + {{- include "k10.enforceFIPSEnvironmentVariables" . | nindent 10 }} +{{- end }} + volumeMounts: + - name: {{ $service }}-persistent-storage + mountPath: {{ .Values.global.persistence.mountPath | quote }} +{{- with (include "k10.cacertconfigmapname" .) }} + - name: {{ . }} + mountPath: "/etc/ssl/certs/custom-ca-bundle.pem" + subPath: custom-ca-bundle.pem +{{- end }} +{{- if eq (include "check.projectSAToken" .) "true" }} + - name: bound-sa-token + mountPath: "/var/run/secrets/kasten.io/serviceaccount/GWIF" + readOnly: true +{{- end }} +{{- end }} {{/* and (eq $service "catalog") $.stateful */}} +{{- if and ( eq $service "auth" ) ( eq (include "check.dexAuth" .) 
"true" ) }} + - name: dex + image: {{ include "get.dexImage" . }} +{{- if .Values.auth.ldap.enabled }} + command: ["/usr/local/bin/dex", "serve", "/dex-config/config.yaml"] +{{- if .Values.fips.enabled }} + env: + {{- include "k10.enforceFIPSEnvironmentVariables" . | nindent 10 }} +{{- end }} +{{- else if .Values.auth.openshift.enabled }} + {{- /* + In the case of OpenShift, a template config is used instead of a plain config for Dex. + It requires a different command to be processed correctly. + */}} + command: ["/usr/local/bin/docker-entrypoint", "dex", "serve", "/etc/dex/cfg/config.yaml"] + env: + - name: {{ include "k10.openShiftClientSecretEnvVar" . }} +{{- if and (not .Values.auth.openshift.clientSecretName) (not .Values.auth.openshift.clientSecret) }} + valueFrom: + secretKeyRef: + name: {{ include "get.openshiftServiceAccountSecretName" . }} + key: token +{{- else if .Values.auth.openshift.clientSecretName }} + valueFrom: + secretKeyRef: + name: {{ .Values.auth.openshift.clientSecretName }} + key: token +{{- else }} + value: {{ .Values.auth.openshift.clientSecret }} +{{- end }} +{{- if .Values.fips.enabled }} + {{- include "k10.enforceFIPSEnvironmentVariables" . | indent 10 }} +{{- end }} +{{- end }} + ports: + - name: http + containerPort: 8080 + volumeMounts: +{{- if .Values.auth.ldap.enabled }} + - name: dex-config + mountPath: /dex-config + - name: k10-logos-dex + mountPath: {{ include "k10.dexFrontendDir" . }}/themes/custom/ +{{- else }} + - name: config + mountPath: /etc/dex/cfg +{{- end }} +{{- with (include "k10.cacertconfigmapname" .) }} + - name: {{ . }} + mountPath: "/etc/ssl/certs/custom-ca-bundle.pem" + subPath: custom-ca-bundle.pem +{{- end }} +{{- end }} {{/* end of dex check */}} +{{- end }}{{/* with .main */}} +{{- end }}{{/* define "k10-container" */}} + +{{- define "k10-init-container-header" }} +{{- $pod := .k10_pod }} +{{- with .main }} +{{- $main_context := . 
}} +{{- $containerList := (dict "main" $main_context "k10_service_pod" $pod | include "get.serviceContainersInPod" | splitList " ") }} +{{- $needsInitContainersHeader := false }} +{{- range $skip, $service := $containerList }} +{{- $serviceStateful := has $service (dict "main" $main_context "k10_service_pod" $pod | include "get.statefulRestServicesInPod" | splitList " ") }} + {{- if and ( eq $service "auth" ) $main_context.Values.auth.ldap.enabled }} + {{- $needsInitContainersHeader = true }} + {{- else if $serviceStateful }} + {{- $needsInitContainersHeader = true }} + {{- end }}{{/* initContainers header needed check */}} +{{- end }}{{/* range $skip, $service := $containerList */}} +{{- if $needsInitContainersHeader }} + initContainers: +{{- end }} +{{- end }}{{/* with .main */}} +{{- end }}{{/* define "k10-init-container-header" */}} + +{{- define "k10-init-container" }} +{{- $pod := .k10_pod }} +{{- $podName := (printf "%s-svc" $pod) }} +{{- with .main }} +{{- $main_context := . }} +{{- $containerList := (dict "main" $main_context "k10_service_pod" $pod | include "get.serviceContainersInPod" | splitList " ") }} +{{- range $skip, $service := $containerList }} +{{- $serviceStateful := has $service (dict "main" $main_context "k10_service_pod" $pod | include "get.statefulRestServicesInPod" | splitList " ") }} +{{- if and ( eq $service "auth" ) $main_context.Values.auth.ldap.enabled }} + - name: dex-init + command: + - /dex/dexconfigmerge + args: + - --config-path=/etc/dex/cfg/config.yaml + - --secret-path=/var/run/secrets/kasten.io/bind-secret/bindPW + - --new-config-path=/dex-config/config.yaml + - --secret-field=bindPW + {{- dict "main" $main_context "k10_service" $service | include "serviceImage" | indent 8 }} + {{- dict "main" $main_context "k10_service_pod_name" $podName "k10_service_container_name" "dex-init" | include "k10.resource.request" | indent 8}} + volumeMounts: + - mountPath: /etc/dex/cfg + name: config + - mountPath: /dex-config + name: dex-config + 
- name: bind-secret + mountPath: "/var/run/secrets/kasten.io/bind-secret" + readOnly: true +{{- else if $serviceStateful }} + - name: upgrade-init + securityContext: + capabilities: + add: + - FOWNER + - CHOWN + runAsUser: 1000 + allowPrivilegeEscalation: false + {{- dict "main" $main_context "k10_service" "upgrade" | include "serviceImage" | indent 8 }} + imagePullPolicy: {{ $main_context.Values.global.image.pullPolicy }} + {{- dict "main" $main_context "k10_service_pod_name" $podName "k10_service_container_name" "upgrade-init" | include "k10.resource.request" | indent 8}} + env: + - name: MODEL_STORE_DIR + valueFrom: + configMapKeyRef: + name: k10-config + key: modelstoredirname + volumeMounts: + - name: {{ $service }}-persistent-storage + mountPath: {{ $main_context.Values.global.persistence.mountPath | quote }} +{{- if eq $service "catalog" }} + - name: schema-upgrade-check + {{- dict "main" $main_context "k10_service" $service | include "serviceImage" | indent 8 }} + imagePullPolicy: {{ $main_context.Values.global.image.pullPolicy }} + {{- dict "main" $main_context "k10_service_pod_name" $podName "k10_service_container_name" "schema-upgrade-check" | include "k10.resource.request" | indent 8}} + env: +{{- if $main_context.Values.clusterName }} + - name: CLUSTER_NAME + valueFrom: + configMapKeyRef: + name: k10-config + key: clustername +{{- end }} + - name: INIT_CONTAINER + value: "true" + - name: K10_RELEASE_NAME + value: {{ $main_context.Release.Name }} + - name: LOG_LEVEL + valueFrom: + configMapKeyRef: + name: k10-config + key: loglevel + - name: MODEL_STORE_DIR + valueFrom: + configMapKeyRef: + name: k10-config + key: modelstoredirname + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: VERSION + valueFrom: + configMapKeyRef: + name: k10-config + key: version + volumeMounts: + - name: {{ $service }}-persistent-storage + mountPath: {{ $main_context.Values.global.persistence.mountPath | quote }} +{{- end }}{{/* eq 
$service "catalog" */}} +{{- end }}{{/* initContainers definitions */}} +{{- end }}{{/* range $skip, $service := $containerList */}} +{{- end }}{{/* with .main */}} +{{- end }}{{/* define "k10-init-container" */}} diff --git a/charts/kasten/k10/7.0.1101/templates/_k10_image_tag.tpl b/charts/kasten/k10/7.0.1101/templates/_k10_image_tag.tpl new file mode 100644 index 0000000000..7a3df147bb --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/_k10_image_tag.tpl @@ -0,0 +1 @@ +{{- define "k10.imageTag" -}}7.0.11{{- end -}} \ No newline at end of file diff --git a/charts/kasten/k10/7.0.1101/templates/_k10_metering.tpl b/charts/kasten/k10/7.0.1101/templates/_k10_metering.tpl new file mode 100644 index 0000000000..d611839418 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/_k10_metering.tpl @@ -0,0 +1,356 @@ +{{/* Generate service spec */}} +{{/* because of https://github.com/GoogleCloudPlatform/marketplace-k8s-app-tools/issues/165 +we have to start using .Values.reportingSecret instead +of correct version .Values.metering.reportingSecret */}} +{{- define "k10-metering" }} +{{ $service := .k10_service }} +{{- $podName := (printf "%s-svc" $service) }} +{{ $main := .main }} +{{- with .main }} +{{- $servicePort := .Values.service.externalPort -}} +{{- $optionalServices := .Values.optionalColocatedServices -}} +{{- $rbac := .Values.prometheus.rbac.create -}} +{{- if $.stateful }} +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + namespace: {{ .Release.Namespace }} + name: {{ $service }}-pv-claim + labels: +{{ include "helm.labels" . 
| indent 4 }} + component: {{ $service }} +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: {{ default .Values.global.persistence.size (index .Values.global.persistence $service "size") }} +{{- if .Values.global.persistence.storageClass }} + {{- if (eq "-" .Values.global.persistence.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: "{{ .Values.global.persistence.storageClass }}" + {{- end }} +{{- end }} +--- +{{- end }}{{/* if $.stateful */}} +{{ $service_list := include "get.enabledRestServices" . | splitList " " }} +kind: ConfigMap +apiVersion: v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: {{ include "fullname" . }}-metering-config +data: + config: | +{{- if .Values.metering.reportingKey }} + identities: + - name: gcp + gcp: + encodedServiceAccountKey: {{ .Values.metering.reportingKey }} +{{- end }} + metrics: + - name: node_time + type: int + passthrough: {} + endpoints: + - name: on_disk +{{- if .Values.metering.reportingKey }} + - name: servicecontrol +{{- end }} + endpoints: + - name: on_disk + disk: +{{- if .Values.global.persistence.enabled }} + reportDir: /var/reports/ubbagent/reports +{{- else }} + reportDir: /tmp/reports/ubbagent/reports +{{- end }} + expireSeconds: 3600 +{{- if .Values.metering.reportingKey }} + - name: servicecontrol + servicecontrol: + identity: gcp + serviceName: kasten-k10.mp-kasten-public.appspot.com + consumerId: {{ .Values.metering.consumerId }} +{{- end }} + prometheusTargets: | +{{- range $service_list }} +{{- if or (not (hasKey $optionalServices .)) (index $optionalServices .).enabled }} +{{- if not (eq . "executor") }} +{{ $tmpcontx := dict "main" $main "k10service" . -}} +{{ include "k10.prometheusTargetConfig" $tmpcontx | trim | indent 4 -}} +{{- end }} +{{- end }} +{{- end }} +{{- range include "get.enabledServices" . | splitList " " }} +{{- if (or (ne . 
"aggregatedapis") ($rbac)) }} +{{ $tmpcontx := dict "main" $main "k10service" . -}} +{{ include "k10.prometheusTargetConfig" $tmpcontx | indent 4 -}} +{{- end }} +{{- end }} +{{- range include "get.enabledAdditionalServices" . | splitList " " }} +{{- if not (eq . "frontend") }} +{{ $tmpcontx := dict "main" $main "k10service" . -}} +{{ include "k10.prometheusTargetConfig" $tmpcontx | indent 4 -}} +{{- end }} +{{- end }} + +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + namespace: {{ .Release.Namespace }} + name: {{ $service }}-svc + labels: +{{ include "helm.labels" . | indent 4 }} + component: {{ $service }} +spec: + replicas: {{ $.replicas }} + strategy: + type: Recreate + selector: + matchLabels: +{{ include "k10.common.matchLabels" . | indent 6 }} + component: {{ $service }} + run: {{ $service }}-svc + template: + metadata: + annotations: + {{- include "k10.globalPodAnnotations" . | nindent 8 }} + checksum/config: {{ include (print .Template.BasePath "/k10-config.yaml") . | sha256sum }} + checksum/secret: {{ include (print .Template.BasePath "/secrets.yaml") . | sha256sum }} + labels: + {{- include "k10.globalPodLabels" . | nindent 8 }} + {{- include "helm.labels" . | nindent 8 }} + {{- include "k10.azMarketPlace.billingIdentifier" . | nindent 8 }} + component: {{ $service }} + run: {{ $service }}-svc + spec: + securityContext: +{{ toYaml .Values.services.securityContext | indent 8 }} + serviceAccountName: {{ template "meteringServiceAccountName" . }} + {{- dict "main" . "k10_deployment_name" $podName | include "k10.priorityClassName" | indent 6}} + {{- include "k10.imagePullSecrets" . | indent 6 }} +{{- if $.stateful }} + initContainers: + - name: upgrade-init + securityContext: + capabilities: + add: + - FOWNER + - CHOWN + runAsUser: 1000 + allowPrivilegeEscalation: false + {{- dict "main" . "k10_service" "upgrade" | include "serviceImage" | indent 8 }} + imagePullPolicy: {{ .Values.global.image.pullPolicy }} + {{- dict "main" . 
"k10_service_pod_name" $podName "k10_service_container_name" "upgrade-init" | include "k10.resource.request" | indent 8}} + env: + - name: MODEL_STORE_DIR + value: /var/reports/ + volumeMounts: + - name: {{ $service }}-persistent-storage + mountPath: /var/reports/ +{{- end }} + containers: + - name: {{ $service }}-svc + {{- dict "main" . "k10_service" $service | include "serviceImage" | indent 8 }} + imagePullPolicy: {{ .Values.global.image.pullPolicy }} +{{- $containerName := (printf "%s-svc" $service) }} +{{- dict "main" . "k10_service_pod_name" $podName "k10_service_container_name" $containerName | include "k10.resource.request" | indent 8}} + ports: + - containerPort: {{ .Values.service.externalPort }} + livenessProbe: + httpGet: + path: /v0/healthz + port: {{ .Values.service.externalPort }} + initialDelaySeconds: 90 + timeoutSeconds: 1 + env: + - name: VERSION + valueFrom: + configMapKeyRef: + name: k10-config + key: version + - name: {{ include "k10.fluentbitEndpointEnvVar" . }} + valueFrom: + configMapKeyRef: + name: k10-config + key: fluentbitEndpoint + optional: true + - name: KANISTER_TOOLS + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterToolsImage +{{- if .Values.clusterName }} + - name: CLUSTER_NAME + valueFrom: + configMapKeyRef: + name: k10-config + key: clustername +{{- end }} +{{- if .Values.fips.enabled }} + {{- include "k10.enforceFIPSEnvironmentVariables" . | indent 10 }} +{{- end }} + {{- with $capabilities := include "k10.capabilities" . }} + - name: K10_CAPABILITIES + value: {{ $capabilities | quote }} + {{- end }} + {{- with $capabilities_mask := include "k10.capabilities_mask" . 
}} + - name: K10_CAPABILITIES_MASK + value: {{ $capabilities_mask | quote }} + {{- end }} + - name: K10_HOST_SVC + value: {{ $service }} + - name: LOG_LEVEL + valueFrom: + configMapKeyRef: + name: k10-config + key: loglevel + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace +{{- if .Values.useNamespacedAPI }} + - name: K10_API_DOMAIN + valueFrom: + configMapKeyRef: + name: k10-config + key: apiDomain +{{- end }} + - name: AGENT_CONFIG_FILE + value: /var/ubbagent/config.yaml + - name: AGENT_STATE_DIR +{{- if .Values.global.persistence.enabled }} + value: "/var/reports/ubbagent" +{{- else }} + value: "/tmp/reports/ubbagent" + - name: K10SYNCSTATUSDIR + value: "/tmp/reports/k10" + - name: GRACE_PERIOD_STORE + value: /tmp/reports/clustergraceperiod + - name: NODE_USAGE_STORE + value: /tmp/reports/node_usage_history +{{- end }} +{{- if .Values.metering.awsRegion }} + - name: AWS_REGION + value: {{ .Values.metering.awsRegion }} +{{- end }} +{{- if .Values.metering.mode }} + - name: K10REPORTMODE + value: {{ .Values.metering.mode }} +{{- end }} +{{- if .Values.metering.reportCollectionPeriod }} + - name: K10_REPORT_COLLECTION_PERIOD + value: {{ .Values.metering.reportCollectionPeriod | quote }} +{{- end }} +{{- if .Values.metering.reportPushPeriod }} + - name: K10_REPORT_PUSH_PERIOD + value: {{ .Values.metering.reportPushPeriod | quote }} +{{- end }} +{{- if .Values.metering.promoID }} + - name: K10_PROMOTION_ID + value: {{ .Values.metering.promoID }} +{{- end }} + +{{- if .Values.prometheus.server.enabled }} + - name: K10_PROMETHEUS_HOST + value: {{ include "k10.prometheus.service.name" . 
}}-exp + - name: K10_PROMETHEUS_PORT + value: {{ .Values.prometheus.server.service.servicePort | quote }} + - name: K10_PROMETHEUS_BASE_URL + value: {{ .Values.prometheus.server.baseURL }} +{{- else -}} + {{- if and .Values.global.prometheus.external.host .Values.global.prometheus.external.port}} + - name: K10_PROMETHEUS_HOST + value: {{ .Values.global.prometheus.external.host }} + - name: K10_PROMETHEUS_PORT + value: {{ .Values.global.prometheus.external.port | quote }} + - name: K10_PROMETHEUS_BASE_URL + value: {{ .Values.global.prometheus.external.baseURL }} + {{- end -}} +{{- end }} +{{- if .Values.kanisterPodMetricSidecar.enabled }} + - name: K10_KANISTER_POD_METRICS_ENABLED + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodMetricSidecarEnabled + - name: K10_PROMETHEUS_PUSHGATEWAY_METRIC_LIFETIME + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodMetricSidecarMetricLifetime + - name: PUSHGATEWAY_METRICS_INTERVAL + valueFrom: + configMapKeyRef: + name: k10-config + key: KanisterPodPushgatewayMetricsInterval +{{- end }} +{{- if .Values.reportingSecret }} + - name: AGENT_CONSUMER_ID + valueFrom: + secretKeyRef: + name: {{ .Values.reportingSecret }} + key: consumer-id + - name: AGENT_REPORTING_KEY + valueFrom: + secretKeyRef: + name: {{ .Values.reportingSecret }} + key: reporting-key + - name: K10_RELEASE_NAME + value: {{ .Release.Name }} +{{- end }} +{{- if .Values.metering.licenseConfigSecretName }} + - name: AWS_WEB_IDENTITY_REFRESH_TOKEN_FILE + value: "/var/run/secrets/product-license/license_token" + - name: AWS_ROLE_ARN + valueFrom: + secretKeyRef: + name: {{ .Values.metering.licenseConfigSecretName }} + key: iam_role +{{- end }} + volumeMounts: + - name: meter-config + mountPath: /var/ubbagent +{{- if $.stateful }} + - name: {{ $service }}-persistent-storage + mountPath: /var/reports/ +{{- end }} +{{- if .Values.metering.licenseConfigSecretName }} + - name: awsmp-product-license + mountPath: 
"/var/run/secrets/product-license" +{{- end }} +{{- if .Values.features }} + - name: k10-features + mountPath: "/mnt/k10-features" +{{- end }} + volumes: + - name: meter-config + configMap: + name: {{ include "fullname" . }}-metering-config + items: + - key: config + path: config.yaml + - key: prometheusTargets + path: prometheusTargets.yaml +{{- if .Values.features }} + - name: k10-features + configMap: + name: k10-features +{{- end }} +{{- if $.stateful }} + - name: {{ $service }}-persistent-storage + persistentVolumeClaim: + claimName: {{ $service }}-pv-claim +{{- end }} +{{- if .Values.metering.licenseConfigSecretName }} + - name: awsmp-product-license + secret: + secretName: {{ .Values.metering.licenseConfigSecretName }} +{{- end }} +--- +{{- end }}{{/* with .main */}} +{{- end }}{{/* define "k10-metering" */}} diff --git a/charts/kasten/k10/7.0.1101/templates/_k10_serviceimage.tpl b/charts/kasten/k10/7.0.1101/templates/_k10_serviceimage.tpl new file mode 100644 index 0000000000..9a333d92ce --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/_k10_serviceimage.tpl @@ -0,0 +1,50 @@ +{{/* +Helper to get k10 service image +The details on how these image are being generated +is in below issue +https://kasten.atlassian.net/browse/K10-4036 +*/}} +{{- define "serviceImage" -}} +{{/* +we are maintaining the field .Values.global.images to override it when +we install the chart for red hat marketplace. If we dont +have the value specified use earlier flow, if it is, use the +value that is specified. +*/}} +{{- include "image.values.check" . 
-}} +{{- if not .main.Values.global.rhMarketPlace }} +{{- $serviceImage := "" -}} +{{- $tagFromDefs := "" -}} +{{- if .main.Values.global.airgapped.repository }} +{{- $serviceImage = (include "get.k10ImageTag" .main) | print .main.Values.global.airgapped.repository "/" .k10_service ":" }} +{{- else if .main.Values.global.azMarketPlace }} +{{- $az_image := (get .main.Values.global.azure.images .k10_service) }} +{{- $serviceImage = print $az_image.registry "/" $az_image.image ":" $az_image.tag }} +{{- else }} +{{- $serviceImage = (include "get.k10ImageTag" .main) | print .main.Values.global.image.registry "/" .k10_service ":" }} +{{- end }}{{/* if .main.Values.global.airgapped.repository */}} +{{- $serviceImageKey := print (replace "-" "" .k10_service) "Image" }} +{{- if eq $serviceImageKey "dexImage" }} +{{- $tagFromDefs = (include "dex.dexImageTag" .) }} +{{- end }}{{/* if eq $serviceImageKey "dexImage" */}} +{{- if index .main.Values $serviceImageKey }} +{{- $service_values := index .main.Values $serviceImageKey }} +{{- if .main.Values.global.airgapped.repository }} +{{ $valuesImage := (splitList "/" (index $service_values "image")) }} +{{- if $tagFromDefs }} +image: {{ printf "%s/%s:k10-%s" .main.Values.global.airgapped.repository (index $valuesImage (sub (len $valuesImage) 1) ) $tagFromDefs -}} +{{- end }} +{{- else }}{{/* .main.Values.global.airgapped.repository */}} +{{- if $tagFromDefs }} +image: {{ printf "%s:%s" (index $service_values "image") $tagFromDefs }} +{{- else }} +image: {{ index $service_values "image" }} +{{- end }} +{{- end }}{{/* .main.Values.global.airgapped.repository */}} +{{- else }} +image: {{ $serviceImage }} +{{- end -}}{{/* index .main.Values $serviceImageKey */}} +{{- else }} +image: {{ printf "%s" (get .main.Values.global.images .k10_service) }} +{{- end }}{{/* if not .main.Values.images.executor */}} +{{- end -}}{{/* define "serviceImage" */}} diff --git a/charts/kasten/k10/7.0.1101/templates/_k10_template.tpl 
b/charts/kasten/k10/7.0.1101/templates/_k10_template.tpl new file mode 100644 index 0000000000..7eec215fbb --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/_k10_template.tpl @@ -0,0 +1,234 @@ +{{/* Generate service spec */}} +{{- define "k10-default" }} +{{- $service := .k10_service }} +{{- $deploymentName := (printf "%s-svc" $service) }} +{{- with .main }} +{{- $main_context := . }} +{{- range $skip, $statefulContainer := compact (dict "main" $main_context "k10_service_pod" $service | include "get.statefulRestServicesInPod" | splitList " ") }} +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + namespace: {{ $main_context.Release.Namespace }} + name: {{ $statefulContainer }}-pv-claim + labels: +{{ include "helm.labels" $main_context | indent 4 }} + component: {{ $statefulContainer }} +spec: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: {{ index (index $main_context.Values.global.persistence $statefulContainer | default dict) "size" | default $main_context.Values.global.persistence.size }} +{{- if $main_context.Values.global.persistence.storageClass }} + {{- if (eq "-" $main_context.Values.global.persistence.storageClass) }} + storageClassName: "" + {{- else }} + storageClassName: "{{ $main_context.Values.global.persistence.storageClass }}" + {{- end }} +{{- end }} +--- +{{- end }}{{/* range $statefulContainer */}} +apiVersion: apps/v1 +kind: Deployment +metadata: + namespace: {{ .Release.Namespace }} + name: {{ $deploymentName }} + labels: +{{ include "helm.labels" . | indent 4 }} + component: {{ $service }} +spec: + replicas: {{ $.replicas }} + strategy: + type: Recreate + selector: + matchLabels: +{{ include "k10.common.matchLabels" . | indent 6 }} + component: {{ $service }} + run: {{ $deploymentName }} + template: + metadata: + annotations: + {{- include "k10.globalPodAnnotations" . | nindent 8 }} + checksum/config: {{ include (print .Template.BasePath "/k10-config.yaml") . 
| sha256sum }} + checksum/secret: {{ include (print .Template.BasePath "/secrets.yaml") . | sha256sum }} + checksum/frontend-nginx-config: {{ include (print .Template.BasePath "/frontend-nginx-configmap.yaml") . | sha256sum }} +{{- if .Values.auth.ldap.restartPod }} + rollme: {{ randAlphaNum 5 | quote }} +{{- end}} +{{- if .Values.scc.create }} + openshift.io/required-scc: {{ .Release.Name }}-scc +{{- end }} + labels: + {{- include "k10.globalPodLabels" . | nindent 8 }} + {{- include "helm.labels" . | nindent 8 }} + {{- include "k10.azMarketPlace.billingIdentifier" . | nindent 8 }} + component: {{ $service }} + run: {{ $deploymentName }} + spec: +{{- if eq $service "executor" }} +{{- if .Values.services.executor.hostNetwork }} + hostNetwork: true +{{- end }}{{/* .Values.services.executor.hostNetwork */}} +{{- end }}{{/* eq $service "executor" */}} +{{- if eq $service "aggregatedapis" }} +{{- if .Values.services.aggregatedapis.hostNetwork }} + hostNetwork: true +{{- end }}{{/* .Values.services.aggregatedapis.hostNetwork */}} +{{- end }}{{/* eq $service "aggregatedapis" */}} +{{- if eq $service "dashboardbff" }} +{{- if .Values.services.dashboardbff.hostNetwork }} + hostNetwork: true +{{- end }}{{/* .Values.services.dashboardbff.hostNetwork */}} +{{- end }}{{/* eq $service "dashboardbff" */}} + securityContext: +{{ toYaml .Values.services.securityContext | indent 8 }} + serviceAccountName: {{ template "serviceAccountName" . }} + {{- dict "main" . "k10_deployment_name" $deploymentName | include "k10.priorityClassName" | indent 6}} + {{- include "k10.imagePullSecrets" . | indent 6 }} +{{- /* initContainers: */}} +{{- (dict "main" . "k10_pod" $service | include "k10-init-container-header") }} +{{- (dict "main" . "k10_pod" $service | include "k10-init-container") }} +{{- /* containers: */}} +{{- (dict "main" . "k10_pod" $service | include "k10-containers") }} +{{- /* volumes: */}} +{{- (dict "main" . 
"k10_pod" $service | include "k10-deployment-volumes-header") }} +{{- (dict "main" . "k10_pod" $service | include "k10-deployment-volumes") }} +--- +{{- end }}{{/* with .main */}} +{{- end }}{{/* define "k10-default" */}} + +{{- define "k10-deployment-volumes-header" }} +{{- $pod := .k10_pod }} +{{- with .main }} +{{- $main_context := . }} +{{- $containerList := (dict "main" $main_context "k10_service_pod" $pod | include "get.serviceContainersInPod" | splitList " ") }} +{{- $needsVolumesHeader := false }} +{{- range $skip, $service := $containerList }} + {{- $serviceStateful := has $service (dict "main" $main_context "k10_service_pod" $pod | include "get.statefulRestServicesInPod" | splitList " ") }} + {{- if or $serviceStateful (or (eq (include "check.googlecreds" $main_context) "true") (eq $service "auth" "logging")) }} + {{- $needsVolumesHeader = true }} + {{- else if or (or (eq (include "basicauth.check" $main_context) "true") (or $main_context.Values.auth.oidcAuth.enabled (eq (include "check.dexAuth" $main_context) "true"))) $main_context.Values.features }} + {{- $needsVolumesHeader = true }} + {{- else if and (eq $service "controllermanager") ($main_context.Values.injectKanisterSidecar.enabled) }} + {{- $needsVolumesHeader = true }} + {{- else if or (eq (include "check.cacertconfigmap" $main_context) "true") (include "k10.ocpcacertsautoextraction" $main_context) }} + {{- $needsVolumesHeader = true }} + {{- else if and ( eq $service "auth" ) ( eq (include "check.dexAuth" $main_context) "true" ) }} + {{- $needsVolumesHeader = true }} + {{- else if eq $service "frontend" }} + {{- $needsVolumesHeader = true }} + {{- else if and (list "controllermanager" "executor" "catalog" | has $pod) (eq (include "check.projectSAToken" $main_context) "true")}} + {{- $needsVolumesHeader = true }} + {{- else if and (eq $service "aggregatedapis") (include "k10.siemEnabled" $main_context) }} + {{- $needsVolumesHeader = true }} + {{- end }}{{/* volumes header needed check */}} +{{- 
end }}{{/* range $skip, $service := $containerList */}} +{{- if $needsVolumesHeader }} + volumes: +{{- end }} +{{- end }}{{/* with .main */}} +{{- end }}{{/* define "k10-deployment-volumes-header" */}} + +{{- define "k10-deployment-volumes" }} +{{- $pod := .k10_pod }} +{{- with .main }} +{{- if .Values.features }} + - name: k10-features + configMap: + name: k10-features +{{- end }} +{{- if list "dashboardbff" "auth" "controllermanager" | has $pod}} +{{- if eq (include "basicauth.check" .) "true" }} + - name: k10-basic-auth + secret: + secretName: {{ default "k10-basic-auth" .Values.auth.basicAuth.secretName }} +{{- end }} +{{- if .Values.auth.oidcAuth.enabled }} + - name: {{ include "k10.oidcSecretName" .}} + secret: + secretName: {{ default (include "k10.oidcSecretName" .) .Values.auth.oidcAuth.secretName }} +{{- if .Values.auth.oidcAuth.clientSecretName }} + - name: {{ include "k10.oidcCustomerSecretName" . }} + secret: + secretName: {{ .Values.auth.oidcAuth.clientSecretName }} +{{- end }} +{{- end }} +{{- if .Values.auth.openshift.enabled }} + - name: {{ include "k10.oidcSecretName" .}} + secret: + secretName: {{ default (include "k10.oidcSecretName" .) .Values.auth.openshift.secretName }} +{{- end }} +{{- if .Values.auth.ldap.enabled }} + - name: {{ include "k10.oidcSecretName" .}} + secret: + secretName: {{ default (include "k10.oidcSecretName" .) .Values.auth.ldap.secretName }} + - name: k10-logos-dex + configMap: + name: k10-logos-dex +{{- end }} +{{- end }} +{{- range $skip, $statefulContainer := compact (dict "main" . "k10_service_pod" $pod | include "get.statefulRestServicesInPod" | splitList " ") }} + - name: {{ $statefulContainer }}-persistent-storage + persistentVolumeClaim: + claimName: {{ $statefulContainer }}-pv-claim +{{- end }} +{{- if eq (include "check.googleCredsOrSecret" .) 
"true" }} +{{- $gkeSecret := default "google-secret" .Values.secrets.googleClientSecretName }} + - name: service-account + secret: + secretName: {{ $gkeSecret }} +{{- end }} +{{- if and (list "controllermanager" "executor" "catalog" | has $pod) (eq (include "check.projectSAToken" .) "true")}} + - name: bound-sa-token + projected: + sources: + - serviceAccountToken: +{{- if eq (include "check.gwifidpaud" .) "true" }} + audience: {{ .Values.google.workloadIdentityFederation.idp.aud }} +{{- end }} + expirationSeconds: 3600 + path: token +{{- end }} +{{- with (include "k10.cacertconfigmapname" .) }} + - name: {{ . }} + configMap: + name: {{ . }} +{{- end }} +{{- if eq $pod "frontend" }} + - name: frontend-config + configMap: + name: frontend-config +{{- end }} +{{- if and (eq $pod "aggregatedapis") (include "k10.siemEnabled" .) }} + - name: aggauditpolicy-config + configMap: + name: aggauditpolicy-config +{{- end }} +{{- $containersInThisPod := (dict "main" . "k10_service_pod" $pod | include "get.serviceContainersInPod" | splitList " ") }} +{{- if has "logging" $containersInThisPod }} + - name: logging-configmap-storage + configMap: + name: fluentbit-configmap +{{- end }} +{{- if and (has "controllermanager" $containersInThisPod) (.Values.injectKanisterSidecar.enabled) }} + - name: mutating-webhook-certs + secret: + secretName: controllermanager-certs +{{- end }} +{{- if and ( has "auth" $containersInThisPod) ( eq (include "check.dexAuth" .) 
"true" ) }} + - name: config + configMap: + name: k10-dex + items: + - key: config.yaml + path: config.yaml +{{- if .Values.auth.ldap.enabled }} + - name: dex-config + emptyDir: {} + - name: bind-secret + secret: + secretName: {{ default "k10-dex" .Values.auth.ldap.bindPWSecretName }} +{{- end }} +{{- end }} +{{- end }}{{/* with .main */}} +{{- end }}{{/* define "k10-init-container-header" */}} diff --git a/charts/kasten/k10/7.0.1101/templates/_prometheus.tpl b/charts/kasten/k10/7.0.1101/templates/_prometheus.tpl new file mode 100644 index 0000000000..a49a8363f6 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/_prometheus.tpl @@ -0,0 +1,29 @@ +{{/*** MATCH LABELS *** + NOTE: The match labels here (`app` and `release`) are divergent from + the match labels set by the upstream chart. This is intentional since a + Deployment's `spec.selector` is immutable and K10 has already been shipped + with these values. + + A change to these selector labels will mean that all customers must manually + delete the Prometheus Deployment before upgrading, which is a situation we don't + want for our customers. + + Instead, the `app.kubernetes.io/name` and `app.kubernetes.io/instance` labels + are included in the `prometheus.commonMetaLabels` in: + `helm/k10/templates/{values}/prometheus/charts/{charts}/values/prometheus_values.tpl`. +*/}} +{{- define "prometheus.common.matchLabels" -}} +app: {{ include "prometheus.name" . }} +release: {{ .Release.Name }} +{{- end -}} + +{{- define "prometheus.server.labels" -}} +{{ include "prometheus.server.matchLabels" . }} +{{ include "prometheus.common.metaLabels" . }} +app.kubernetes.io/component: {{ .Values.server.name }} +{{- end -}} + +{{- define "prometheus.server.matchLabels" -}} +component: {{ .Values.server.name | quote }} +{{ include "prometheus.common.matchLabels" . 
}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/aggregatedaudit-policy.yaml b/charts/kasten/k10/7.0.1101/templates/aggregatedaudit-policy.yaml new file mode 100644 index 0000000000..ef7f03c6c2 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/aggregatedaudit-policy.yaml @@ -0,0 +1,34 @@ +{{- if include "k10.siemEnabled" . -}} +apiVersion: v1 +kind: ConfigMap +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: aggauditpolicy-config + namespace: {{ .Release.Namespace }} +data: + {{ include "k10.aggAuditPolicyFile" .}}: | + apiVersion: audit.k8s.io/v1 + kind: Policy + omitStages: + - "RequestReceived" + rules: + - level: RequestResponse + resources: + - group: "actions.kio.kasten.io" + resources: ["backupactions", "cancelactions", "exportactions", "importactions", "restoreactions", "retireactions", "runactions"] + - group: "apps.kio.kasten.io" + resources: ["applications", "clusterrestorepoints", "restorepoints", "restorepointcontents"] + - group: "repositories.kio.kasten.io" + resources: ["storagerepositories"] + - group: "vault.kio.kasten.io" + resources: ["passkeys"] + verbs: ["create", "update", "patch", "delete", "get"] + - level: None + nonResourceURLs: + - /healthz* + - /version + - /openapi/v2* + - /openapi/v3* + - /timeout* +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/api-tls-secrets.yaml b/charts/kasten/k10/7.0.1101/templates/api-tls-secrets.yaml new file mode 100644 index 0000000000..386d3b9999 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/api-tls-secrets.yaml @@ -0,0 +1,13 @@ +{{- if and .Values.secrets.apiTlsCrt .Values.secrets.apiTlsKey }} +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . 
| indent 4 }} + namespace: {{ .Release.Namespace }} + name: gateway-certs +type: kubernetes.io/tls +data: + tls.crt: {{ .Values.secrets.apiTlsCrt }} + tls.key: {{ .Values.secrets.apiTlsKey }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/templates/apiservice.yaml b/charts/kasten/k10/7.0.1101/templates/apiservice.yaml new file mode 100644 index 0000000000..1811df48a8 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/apiservice.yaml @@ -0,0 +1,25 @@ +{{/* Template to generate the aggregated APIService/Service objects */}} +{{- if .Values.apiservices.deployed -}} +{{- $main := . -}} +{{- $container_port := .Values.service.internalPort -}} +{{- $namespace := .Release.Namespace -}} +{{- range include "k10.aggregatedAPIs" . | splitList " " -}} +--- +apiVersion: apiregistration.k8s.io/v1 +kind: APIService +metadata: + name: v1alpha1.{{ . }}.{{ template "apiDomain" $main }} + labels: + apiserver: "true" +{{ include "helm.labels" $ | indent 4 }} +spec: + version: v1alpha1 + group: {{ . }}.{{ template "apiDomain" $main }} + groupPriorityMinimum: 2000 + service: + namespace: {{$namespace}} + name: aggregatedapis-svc + versionPriority: 10 + insecureSkipTLSVerify: true +{{ end }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/daemonsets.yaml b/charts/kasten/k10/7.0.1101/templates/daemonsets.yaml new file mode 100644 index 0000000000..b8c50b505f --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/daemonsets.yaml @@ -0,0 +1,26 @@ +{{- if .Values.metering.redhatMarketplacePayg }} +apiVersion: apps/v1 +kind: DaemonSet +metadata: + namespace: {{ .Release.Namespace }} + name: k10-rhmp-paygo + labels: +{{ include "helm.labels" . | indent 4 }} + component: paygo +spec: + selector: + matchLabels: +{{ include "k10.common.matchLabels" . | indent 6 }} + component: paygo + template: + metadata: + labels: +{{ include "helm.labels" . 
| indent 8 }} + component: paygo + spec: + containers: + - name: paygo + image: {{ .Values.global.images.paygo_daemonset }} + command: [ "sleep" ] + args: [ "36500d" ] +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/deployments.yaml b/charts/kasten/k10/7.0.1101/templates/deployments.yaml new file mode 100644 index 0000000000..b0a8ab9a20 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/deployments.yaml @@ -0,0 +1,31 @@ +{{/* +Generates deployment specs for K10 services and other services such as +"frontend" and "kanister". +*/}} +{{- include "singleAuth.check" . -}} +{{- $main_context := . -}} +{{- $stateless_services := include "get.enabledStatelessServices" . | splitList " " -}} +{{- $colocated_services := include "get.enabledColocatedServices" . | fromYaml -}} +{{ $service_list := include "get.enabledRestServices" . | splitList " " }} +{{- range $skip, $k10_service := $service_list }} + {{ if not (hasKey $colocated_services $k10_service ) }} + {{/* Set $stateful for stateful services when .Values.global.persistence.enabled is true */}} + {{- $stateful := and $.Values.global.persistence.enabled (not (has $k10_service $stateless_services)) -}} + {{ $replicas := get $.Values (printf "%sReplicas" $k10_service) | default 1 }} + {{ $tmp_contx := dict "main" $main_context "k10_service" $k10_service "stateful" $stateful "replicas" $replicas }} + {{ if eq $k10_service "metering" }} + {{- include "k10-metering" $tmp_contx -}} + {{ else }} + {{- include "k10-default" $tmp_contx -}} + {{ end }} + {{ end }}{{/* if not (hasKey $colocated_services $k10_service ) */}} +{{- end }} +{{/* +Generate deployment specs for additional services. These are stateless and have +1 replica. +*/}} +{{- range $skip, $k10_service := concat (include "get.enabledServices" . | splitList " ") (include "get.enabledAdditionalServices" . 
| splitList " ") }} + {{- if eq $k10_service "gateway" -}}{{- continue -}}{{- end -}} + {{ $tmp_contx := dict "main" $main_context "k10_service" $k10_service "stateful" false "replicas" 1 }} + {{- include "k10-default" $tmp_contx -}} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/templates/fluentbit-configmap.yaml b/charts/kasten/k10/7.0.1101/templates/fluentbit-configmap.yaml new file mode 100644 index 0000000000..71cecb9668 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/fluentbit-configmap.yaml @@ -0,0 +1,34 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: fluentbit-configmap +data: + fluentbit.conf: | + [SERVICE] + HTTP_Server On + HTTP_Listen 0.0.0.0 + HTTP_PORT 24225 + + [INPUT] + Name tcp + Listen 0.0.0.0 + Port 24224 + + [OUTPUT] + Name stdout + Match * + + [OUTPUT] + Name file + Match * + File {{ .Values.global.persistence.mountPath }}/k10.log + logrotate.conf: | + {{ .Values.global.persistence.mountPath }}/k10.log { + create + missingok + rotate 6 + size 1G + } diff --git a/charts/kasten/k10/7.0.1101/templates/frontend-nginx-configmap.yaml b/charts/kasten/k10/7.0.1101/templates/frontend-nginx-configmap.yaml new file mode 100644 index 0000000000..93d17b3a17 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/frontend-nginx-configmap.yaml @@ -0,0 +1,50 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + labels: +{{ include "helm.labels" . 
| indent 4 }} + namespace: {{ .Release.Namespace }} + name: frontend-config +data: + frontend.conf: | + server { + listen {{ .Values.service.externalPort }} default_server; + {{- if .Values.global.network.enable_ipv6 }} + listen [::]:{{ .Values.service.externalPort }} default_server; + {{- end }} + server_name localhost; + + gzip on; + # serves *.gz files (when present) instead of dynamic compression + gzip_static on; + + root /frontend; + index index.html; + + location / { + try_files $uri $uri/ =404; + } + } + nginx.conf: | + #user nginx; # this directive is ignored if we use a non-root user in Dockerfile + worker_processes 4; + + error_log stderr warn; + pid /var/run/nginx/nginx.pid; + + events { + worker_connections 1024; + } + + http { + include /etc/nginx/mime.types; + default_type application/octet-stream; + access_log /dev/stdout; + sendfile on; + keepalive_timeout 650; + + # turn off nginx version in responses + server_tokens off; + + include /etc/nginx/conf.d/*.conf; + } diff --git a/charts/kasten/k10/7.0.1101/templates/gateway-ext.yaml b/charts/kasten/k10/7.0.1101/templates/gateway-ext.yaml new file mode 100644 index 0000000000..00da4c27b2 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/gateway-ext.yaml @@ -0,0 +1,36 @@ +{{/* Externally exposed service for gateway endpoint. */}} +{{- $container_port := .Values.service.internalPort -}} +{{- if .Values.externalGateway.create -}} +{{- include "authEnabled.check" . -}} +apiVersion: v1 +kind: Service +metadata: + namespace: {{ $.Release.Namespace }} + name: gateway-ext + labels: + service: gateway + {{- if eq "route53-mapper" (default " " .Values.externalGateway.fqdn.type) }} + dns: route53 + {{- end }} +{{ include "helm.labels" . | indent 4 }} + annotations: + {{- if .Values.externalGateway.annotations }} +{{ toYaml .Values.externalGateway.annotations | indent 4 }} + {{- end }} +{{ include "dnsAnnotations" . 
| indent 4 }} + {{- if .Values.externalGateway.awsSSLCertARN }} + service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {{ .Values.externalGateway.awsSSLCertARN }} + service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https + {{- if .Values.externalGateway.awsSecurityGroup }} + service.beta.kubernetes.io/aws-load-balancer-extra-security-groups: {{ .Values.externalGateway.awsSecurityGroup }} + {{- end }} + {{- end }} +spec: + type: LoadBalancer + ports: + - name: https + port: {{ if or .Values.secrets.tlsSecret (and .Values.secrets.apiTlsCrt .Values.secrets.apiTlsKey) .Values.externalGateway.awsSSLCertARN }}443{{ else }}80{{ end }} + targetPort: {{ $container_port }} + selector: + service: gateway +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/gateway.yaml b/charts/kasten/k10/7.0.1101/templates/gateway.yaml new file mode 100644 index 0000000000..93792a10d6 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/gateway.yaml @@ -0,0 +1,253 @@ +{{- $container_port := .Values.gateway.service.internalPort | default 8000 -}} +{{- $service_port := .Values.gateway.service.externalPort -}} +{{- $admin_port := default 8877 .Values.service.gatewayAdminPort -}} +--- +apiVersion: v1 +kind: Service +metadata: + namespace: {{ $.Release.Namespace }} + labels: + service: gateway +{{ include "helm.labels" . | indent 4 }} + name: gateway + {{- if not (include "k10.capability.gateway" $) }} + annotations: + getambassador.io/config: | + --- + apiVersion: getambassador.io/v3alpha1 + kind: AuthService + name: authentication + auth_service: "auth-svc:8000" + path_prefix: "/v0/authz" + ambassador_id: [ {{ include "k10.ambassadorId" . }} ] + allowed_authorization_headers: + - x-cluster-name + allowed_request_headers: + - "x-forwarded-access-token" + --- + apiVersion: getambassador.io/v3alpha1 + kind: Host + name: ambassadorhost + hostname: "*" + ambassador_id: [ {{ include "k10.ambassadorId" . 
}} ] + {{- if .Values.secrets.tlsSecret }} + tlsSecret: + name: {{ .Values.secrets.tlsSecret }} + {{- else if and .Values.secrets.apiTlsCrt .Values.secrets.apiTlsKey }} + tlsSecret: + name: gateway-certs + {{- end }} + requestPolicy: + insecure: + action: Route + --- + apiVersion: getambassador.io/v3alpha1 + kind: Listener + name: ambassadorlistener + port: {{ $container_port }} + securityModel: XFP + protocol: HTTPS + hostBinding: + namespace: + from: SELF + ambassador_id: [ {{ include "k10.ambassadorId" . }} ] + --- + {{- if (eq "endpoint" .Values.apigateway.serviceResolver) }} + apiVersion: getambassador.io/v3alpha1 + kind: KubernetesEndpointResolver + name: endpoint + ambassador_id: [ {{ include "k10.ambassadorId" . }} ] + --- + {{- end }} + apiVersion: getambassador.io/v3alpha1 + kind: Module + name: ambassador + config: + diagnostics: + enabled: false + service_port: {{ $container_port }} + {{- if .Values.global.network.enable_ipv6 }} + enable_ipv6: true + {{- end }} + ambassador_id: [ {{ include "k10.ambassadorId" . }} ] + {{- if (eq "endpoint" .Values.apigateway.serviceResolver) }} + resolver: endpoint + load_balancer: + policy: round_robin + {{- end }} + {{- end }} +spec: + ports: + - name: http + port: {{ $service_port }} + targetPort: {{ $container_port }} + selector: + service: gateway +--- +{{- if not (include "k10.capability.gateway" $) }} +{{- if .Values.gateway.exposeAdminPort }} +apiVersion: v1 +kind: Service +metadata: + namespace: {{ $.Release.Namespace }} + name: gateway-admin + labels: + service: gateway + annotations: + getambassador.io/config: | + apiVersion: getambassador.io/v3alpha1 + kind: Module + name: ambassador + config: + diagnostics: + enabled: false +{{ include "helm.labels" . 
| indent 4 }} +spec: + ports: + - name: metrics + port: {{ $admin_port }} + targetPort: {{ $admin_port }} + selector: + service: gateway +--- +{{- end }} +{{- end }} +apiVersion: apps/v1 +kind: Deployment +metadata: + namespace: {{ $.Release.Namespace }} + labels: +{{ include "helm.labels" . | indent 4 }} + component: gateway + name: gateway +spec: + replicas: 1 + selector: + matchLabels: + service: gateway + template: + metadata: + annotations: + {{- include "k10.globalPodAnnotations" . | nindent 8 }} + checksum/config: {{ include (print .Template.BasePath "/k10-config.yaml") . | sha256sum }} + checksum/secret: {{ include (print .Template.BasePath "/secrets.yaml") . | sha256sum }} + labels: + {{- include "k10.globalPodLabels" . | nindent 8 }} + {{- include "helm.labels" . | nindent 8 }} + {{- include "k10.azMarketPlace.billingIdentifier" . | nindent 8 }} + service: gateway + component: gateway +{{- if (include "k10.capability.gateway" $) }} + spec: + serviceAccountName: {{ template "serviceAccountName" . }} + {{- dict "main" . "k10_deployment_name" "gateway" | include "k10.priorityClassName" | indent 6}} + {{- include "k10.imagePullSecrets" . | indent 6 }} + containers: + - name: gateway + {{- dict "main" . 
"k10_service" "gateway" | include "serviceImage" | indent 8 }} + {{- if or .Values.secrets.tlsSecret (and .Values.secrets.apiTlsCrt .Values.secrets.apiTlsKey) }} + volumeMounts: + - name: tls-volume + mountPath: "/etc/tls" + readOnly: true + {{- end }} + resources: + limits: + cpu: {{ .Values.gateway.resources.limits.cpu | quote }} + memory: {{ .Values.gateway.resources.limits.memory | quote }} + requests: + cpu: {{ .Values.gateway.resources.requests.cpu | quote }} + memory: {{ .Values.gateway.resources.requests.memory | quote }} + env: + - name: LOG_LEVEL + valueFrom: + configMapKeyRef: + name: k10-config + key: loglevel + - name: VERSION + valueFrom: + configMapKeyRef: + name: k10-config + key: version +{{- if .Values.fips.enabled }} + {{- include "k10.enforceFIPSEnvironmentVariables" . | indent 10 }} +{{- end }} + {{- with $capabilities := include "k10.capabilities" . }} + - name: K10_CAPABILITIES + value: {{ $capabilities | quote }} + {{- end }} + {{- with $capabilities_mask := include "k10.capabilities_mask" . }} + - name: K10_CAPABILITIES_MASK + value: {{ $capabilities_mask | quote }} + {{- end }} + {{- if eq (include "check.dexAuth" .) "true" }} + - name: {{ include "k10.gatewayEnableDex" . }} + value: "true" + {{- end }} + envFrom: + - configMapRef: + name: k10-gateway + livenessProbe: + httpGet: + path: /healthz + port: {{ $container_port }} + initialDelaySeconds: 5 + readinessProbe: + httpGet: + path: /healthz + port: {{ $container_port }} + restartPolicy: Always + {{- if or .Values.secrets.tlsSecret (and .Values.secrets.apiTlsCrt .Values.secrets.apiTlsKey) }} + volumes: + - name: tls-volume + secret: + secretName: {{ .Values.secrets.tlsSecret | default "gateway-certs" }} + {{- end }} +{{- else }} + spec: + serviceAccountName: {{ template "serviceAccountName" . }} + {{- dict "main" . "k10_deployment_name" "gateway" | include "k10.priorityClassName" | indent 6}} + {{- include "k10.imagePullSecrets" . 
| indent 6 }} + containers: + - name: ambassador + image: {{ include "get.emissaryImage" . }} + resources: + limits: + cpu: {{ .Values.gateway.resources.limits.cpu | quote }} + memory: {{ .Values.gateway.resources.limits.memory | quote }} + requests: + cpu: {{ .Values.gateway.resources.requests.cpu | quote }} + memory: {{ .Values.gateway.resources.requests.memory | quote }} + env: + - name: AMBASSADOR_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: AMBASSADOR_SINGLE_NAMESPACE + value: "true" + - name: SCOUT_DISABLE + value: "1" + - name: "AMBASSADOR_VERIFY_SSL_FALSE" + value: {{ .Values.gateway.insecureDisableSSLVerify | quote }} + - name: AMBASSADOR_ID + value: {{ include "k10.ambassadorId" . }} +{{- if .Values.global.network.enable_ipv6}} + - name: AMBASSADOR_ENVOY_BIND_ADDRESS + value: '::' + - name: AMBASSADOR_DIAGD_BIND_ADDRESS + value: '[::]' +{{- end }} + livenessProbe: + httpGet: + path: /ambassador/v0/check_alive + port: {{ $admin_port }} + initialDelaySeconds: 30 + periodSeconds: 3 + readinessProbe: + httpGet: + path: /ambassador/v0/check_ready + port: {{ $admin_port }} + initialDelaySeconds: 30 + periodSeconds: 3 + restartPolicy: Always +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/templates/grafana-scc.yaml b/charts/kasten/k10/7.0.1101/templates/grafana-scc.yaml new file mode 100644 index 0000000000..014d1be46c --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/grafana-scc.yaml @@ -0,0 +1,45 @@ +{{- if .Values.scc.create }} +{{- if .Values.grafana.enabled }} +kind: SecurityContextConstraints +apiVersion: security.openshift.io/v1 +metadata: + labels: +{{ include "helm.labels" . 
| indent 4 }} + name: {{ .Release.Name }}-grafana +allowPrivilegedContainer: false +allowHostNetwork: false +allowHostDirVolumePlugin: true +allowHostPorts: true +allowHostPID: false +allowHostIPC: false +readOnlyRootFilesystem: false +requiredDropCapabilities: + - KILL + - MKNOD + - SETUID + - SETGID +defaultAddCapabilities: [] +allowedCapabilities: + - CHOWN +priority: 0 +runAsUser: + type: RunAsAny +seLinuxContext: + type: RunAsAny +fsGroup: + type: RunAsAny +supplementalGroups: + type: RunAsAny +seccompProfiles: + - runtime/default +volumes: + - configMap + - downwardAPI + - emptyDir + - persistentVolumeClaim + - projected + - secret +users: + - system:serviceaccount:{{.Release.Namespace}}:{{.Release.Name}}-grafana +{{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/templates/ingress.yaml b/charts/kasten/k10/7.0.1101/templates/ingress.yaml new file mode 100644 index 0000000000..df347477c6 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/ingress.yaml @@ -0,0 +1,73 @@ +{{- $ingressApiIsStable := eq (include "ingress.isStable" .) "true" -}} +{{- $service_port := .Values.gateway.service.externalPort -}} +{{ if .Values.ingress.create }} +{{ include "authEnabled.check" . }} +{{ include "check.ingress.defaultBackend" . }} +apiVersion: {{ template "ingress.apiVersion" . }} +kind: Ingress +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: {{ .Values.ingress.name | default (printf "%s-ingress" .Release.Name) }} + annotations: +{{ include "ingressClassAnnotation" . | indent 4 }} + {{- if or .Values.secrets.tlsSecret (and .Values.secrets.apiTlsCrt .Values.secrets.apiTlsKey) }} + nginx.ingress.kubernetes.io/secure-backends: "true" + nginx.ingress.kubernetes.io/backend-protocol: HTTPS + {{- end }} + {{- if .Values.ingress.annotations }} +{{ toYaml .Values.ingress.annotations | indent 4 }} + {{- end }} +spec: +{{ include "specIngressClassName" . 
| indent 2 }} +{{ with .Values.ingress.defaultBackend }} + {{- if or .service.enabled .resource.enabled }} + defaultBackend: + {{- with .service }} + {{- if .enabled }} + service: + name: {{ required "`name` is required in the `ingress.defaultBackend.service`." .name }} + port: + {{- if .port.name }} + name: {{ .port.name }} + {{- else if .port.number }} + number: {{ .port.number }} + {{- end }} + {{- end }} + {{- end }} + {{- with .resource }} + {{- if .enabled }} + resource: + apiGroup: {{ .apiGroup }} + name: {{ required "`name` is required in the `ingress.defaultBackend.resource`." .name }} + kind: {{ required "`kind` is required in the `ingress.defaultBackend.resource`." .kind }} + {{- end }} + {{- end }} + {{- end }} +{{- end }} +{{- if .Values.ingress.tls.enabled }} + tls: + - hosts: + - {{ required "ingress.host value is required for TLS configuration" .Values.ingress.host }} + secretName: {{ .Values.ingress.tls.secretName }} +{{- end }} + rules: + - http: + paths: + - path: /{{ default .Release.Name .Values.ingress.urlPath | trimPrefix "/" | trimSuffix "/" }}/ + pathType: {{ default "ImplementationSpecific" .Values.ingress.pathType }} + backend: + {{- if $ingressApiIsStable }} + service: + name: gateway + port: + number: {{ $service_port }} + {{- else }} + serviceName: gateway + servicePort: {{ $service_port }} + {{- end }} + {{- if .Values.ingress.host }} + host: {{ .Values.ingress.host }} + {{- end }} +{{ end }} diff --git a/charts/kasten/k10/7.0.1101/templates/k10-config.yaml b/charts/kasten/k10/7.0.1101/templates/k10-config.yaml new file mode 100644 index 0000000000..a6c52552e9 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/k10-config.yaml @@ -0,0 +1,271 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + labels: +{{ include "helm.labels" . 
| indent 4 }} + namespace: {{ .Release.Namespace }} + name: k10-config +data: + DataStoreLogLevel: {{ default "error" | quote }} + DataStoreFileLogLevel: {{ default "" | quote }} + loglevel: {{ .Values.logLevel | quote }} + {{- if .Values.clusterName }} + clustername: {{ quote .Values.clusterName }} + {{- end }} + version: {{ .Chart.AppVersion }} + {{- $capabilities := include "k10.capabilities" . | splitList " " }} + {{- $capabilities_mask := include "k10.capabilities_mask" . | splitList " " }} + {{- if and ( has "mc" $capabilities ) ( not ( has "mc" $capabilities_mask ) ) }} + multiClusterVersion: {{ include "k10.multiClusterVersion" . | quote }} + {{- end }} + modelstoredirname: "//mnt/k10state/kasten-io/" + apiDomain: {{ include "apiDomain" . }} + concurrentSnapConversions: {{ default (include "k10.defaultConcurrentSnapshotConversions" .) .Values.limiter.concurrentSnapConversions | quote }} + concurrentWorkloadSnapshots: {{ include "k10.defaultConcurrentWorkloadSnapshots" . | quote }} + k10DataStoreDisableCompression: "false" + k10DataStoreParallelUpload: {{ .Values.datastore.parallelUploads | quote }} + k10DataStoreParallelDownload: {{ .Values.datastore.parallelDownloads | quote }} + k10DataStoreGeneralContentCacheSizeMB: {{ include "k10.defaultK10DataStoreGeneralContentCacheSizeMB" . | quote }} + k10DataStoreGeneralMetadataCacheSizeMB: {{ include "k10.defaultK10DataStoreGeneralMetadataCacheSizeMB" . | quote }} + k10DataStoreRestoreContentCacheSizeMB: {{ include "k10.defaultK10DataStoreRestoreContentCacheSizeMB" . | quote }} + k10DataStoreRestoreMetadataCacheSizeMB: {{ include "k10.defaultK10DataStoreRestoreMetadataCacheSizeMB" . | quote }} + K10BackupBufferFileHeadroomFactor: {{ include "k10.defaultK10BackupBufferFileHeadroomFactor" . | quote }} + AWSAssumeRoleDuration: {{ default (include "k10.defaultAssumeRoleDuration" .) .Values.awsConfig.assumeRoleDuration | quote }} + KanisterBackupTimeout: {{ default (include "k10.defaultKanisterBackupTimeout" .) 
.Values.kanister.backupTimeout | quote }} + KanisterRestoreTimeout: {{ default (include "k10.defaultKanisterRestoreTimeout" .) .Values.kanister.restoreTimeout | quote }} + KanisterDeleteTimeout: {{ default (include "k10.defaultKanisterDeleteTimeout" .) .Values.kanister.deleteTimeout | quote }} + KanisterHookTimeout: {{ default (include "k10.defaultKanisterHookTimeout" .) .Values.kanister.hookTimeout | quote }} + KanisterCheckRepoTimeout: {{ default (include "k10.defaultKanisterCheckRepoTimeout" .) .Values.kanister.checkRepoTimeout | quote }} + KanisterStatsTimeout: {{ default (include "k10.defaultKanisterStatsTimeout" .) .Values.kanister.statsTimeout | quote }} + KanisterEFSPostRestoreTimeout: {{ default (include "k10.defaultKanisterEFSPostRestoreTimeout" .) .Values.kanister.efsPostRestoreTimeout | quote }} + KanisterManagedDataServicesBlueprintsEnabled: {{ .Values.kanister.managedDataServicesBlueprintsEnabled | quote }} + KanisterPodReadyWaitTimeout: {{ .Values.kanister.podReadyWaitTimeout | quote }} + KanisterPodMetricSidecarEnabled: {{ .Values.kanisterPodMetricSidecar.enabled | quote }} + KanisterPodMetricSidecarMetricLifetime: {{ .Values.kanisterPodMetricSidecar.metricLifetime | quote }} + KanisterPodPushgatewayMetricsInterval: {{ .Values.kanisterPodMetricSidecar.pushGatewayInterval | quote }} +{{- include "kanisterPodMetricSidecarResources" . | indent 2 }} + KanisterToolsImage: {{ include "get.kanisterToolsImage" . | quote }} + K10MutatingWebhookTLSCertDir: "/etc/ssl/certs/webhook" + + K10LimiterGenericVolumeSnapshots: {{ default (include "k10.defaultK10LimiterGenericVolumeSnapshots" .) .Values.limiter.genericVolumeSnapshots | quote }} + K10LimiterGenericVolumeCopies: {{ default (include "k10.defaultK10LimiterGenericVolumeCopies" .) .Values.limiter.genericVolumeCopies | quote }} + K10LimiterGenericVolumeRestores: {{ default (include "k10.defaultK10LimiterGenericVolumeRestores" .) 
.Values.limiter.genericVolumeRestores | quote }} + K10LimiterCsiSnapshots: {{ default (include "k10.defaultK10LimiterCsiSnapshots" .) .Values.limiter.csiSnapshots | quote }} + K10LimiterProviderSnapshots: {{ default (include "k10.defaultK10LimiterProviderSnapshots" .) .Values.limiter.providerSnapshots | quote }} + K10LimiterImageCopies: {{ default (include "k10.defaultK10LimiterImageCopies" .) .Values.limiter.imageCopies | quote }} + K10ExecutorWorkerCount: {{ default (include "k10.defaultK10ExecutorWorkerCount" .) .Values.services.executor.workerCount | quote }} + K10ExecutorMaxConcurrentRestoreCsiSnapshots: {{ default (include "k10.defaultK10ExecutorMaxConcurrentRestoreCsiSnapshots" .) .Values.services.executor.maxConcurrentRestoreCsiSnapshots | quote }} + K10ExecutorMaxConcurrentRestoreGenericVolumeSnapshots: {{ default (include "k10.defaultK10ExecutorMaxConcurrentRestoreGenericVolumeSnapshots" .) .Values.services.executor.maxConcurrentRestoreGenericVolumeSnapshots | quote }} + K10ExecutorMaxConcurrentRestoreWorkloads: {{ default (include "k10.defaultK10ExecutorMaxConcurrentRestoreWorkloads" .) .Values.services.executor.maxConcurrentRestoreWorkloads | quote }} + + K10GCDaemonPeriod: {{ default (include "k10.defaultK10GCDaemonPeriod" .) .Values.garbagecollector.daemonPeriod | quote }} + K10GCKeepMaxActions: {{ default (include "k10.defaultK10GCKeepMaxActions" .) .Values.garbagecollector.keepMaxActions | quote }} + K10GCActionsEnabled: {{ default (include "k10.defaultK10GCActionsEnabled" .) .Values.garbagecollector.actions.enabled | quote }} + + K10EphemeralPVCOverhead: {{ .Values.ephemeralPVCOverhead | quote }} + + K10PersistenceStorageClass: {{ .Values.global.persistence.storageClass | quote }} + + K10DefaultPriorityClassName: {{ default (include "k10.defaultK10DefaultPriorityClassName" .) .Values.defaultPriorityClassName | quote }} + {{- if .Values.global.podLabels }} + K10CustomPodLabels: {{ include "k10.globalPodLabelsJson" . 
| quote }} + {{- end }} + {{- if .Values.global.podAnnotations }} + K10CustomPodAnnotations: {{ include "k10.globalPodAnnotationsJson" . | quote }} + {{- end }} + + kubeVirtVMsUnFreezeTimeout: {{ default (include "k10.defaultKubeVirtVMsUnfreezeTimeout" .) .Values.kubeVirtVMs.snapshot.unfreezeTimeout | quote }} + + k10JobMaxWaitDuration: {{ .Values.maxJobWaitDuration | quote }} + + quickDisasterRecoveryEnabled: {{ .Values.kastenDisasterRecovery.quickMode.enabled | quote }} + + k10ForceRootInKanisterHooks: {{ .Values.forceRootInKanisterHooks | quote }} + + workerPodResourcesCRDEnabled: {{ .Values.workerPodCRDs.enabled | quote }} +{{- include "workerPodResourcesCRD" . | indent 2 }} + + {{- if .Values.awsConfig.efsBackupVaultName }} + efsBackupVaultName: {{ quote .Values.awsConfig.efsBackupVaultName }} + {{- end }} + + {{- if .Values.excludedApps }} + excludedApps: '{{ join "," .Values.excludedApps }}' + {{- end }} + + {{- if .Values.vmWare.taskTimeoutMin }} + vmWareTaskTimeoutMin: {{ quote .Values.vmWare.taskTimeoutMin }} + {{- end }} + +{{- include "get.kanisterPodCustomLabels" . | indent 2}} +{{- include "get.kanisterPodCustomAnnotations" . | indent 2}} + + {{- if .Values.kanisterFunctionVersion }} + kanisterFunctionVersion: {{ .Values.kanisterFunctionVersion | quote }} + {{- else }} + kanisterFunctionVersion: {{ quote "v1.0.0-alpha" }} + {{- end }} +{{- include "kanisterToolsResources" . | indent 2 }} +{{- include "get.gvsActivationToken" . | indent 2 }} + + {{- if .Values.genericStorageBackup.overridepubkey }} + overridePublicKeyForGVS: {{ .Values.genericStorageBackup.overridepubkey | quote }} + {{- end }} + + {{- with (include "k10.fluentbitEndpoint" .) }} + fluentbitEndpoint: {{ . | quote }} + {{- end }} + +{{ if .Values.features }} +--- +kind: ConfigMap +apiVersion: v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: k10-features +data: +{{ include "k10.features" . 
| indent 2}} +{{ end }} +{{ if .Values.auth.openshift.enabled }} +--- +kind: ConfigMap +apiVersion: v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: k10-dex + namespace: {{ .Release.Namespace }} +data: + config.yaml: | + issuer: {{ printf "%s/dex" (trimSuffix "/" .Values.auth.openshift.dashboardURL) }} + storage: + type: memory + web: + http: 0.0.0.0:8080 + logger: + level: info + format: text + connectors: + - type: openshift + id: openshift + name: OpenShift + config: + issuer: {{ .Values.auth.openshift.openshiftURL }} + clientID: {{ printf "system:serviceaccount:%s:%s" .Release.Namespace (include "get.openshiftServiceAccountName" .) }} + clientSecret: {{ printf "{{ getenv \"%s\" }}" (include "k10.openShiftClientSecretEnvVar" . ) }} + redirectURI: {{ printf "%s/dex/callback" (trimSuffix "/" .Values.auth.openshift.dashboardURL) }} + insecureCA: {{ .Values.auth.openshift.insecureCA }} +{{- if and (eq (include "check.cacertconfigmap" .) "false") .Values.auth.openshift.useServiceAccountCA }} + rootCA: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt +{{- end }} + oauth2: + skipApprovalScreen: true + staticClients: + - name: 'K10' + id: kasten + secret: kastensecret + redirectURIs: + - {{ printf "%s/auth-svc/v0/oidc/redirect" (trimSuffix "/" .Values.auth.openshift.dashboardURL) }} +{{ end }} +{{ if .Values.auth.ldap.enabled }} +--- +kind: ConfigMap +apiVersion: v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: k10-dex + namespace: {{ .Release.Namespace }} +data: + config.yaml: | + issuer: {{ printf "%s/dex" (trimSuffix "/" .Values.auth.ldap.dashboardURL) }} + storage: + type: memory + web: + http: 0.0.0.0:8080 + frontend: + dir: {{ include "k10.dexFrontendDir" . 
}} + theme: custom + logoURL: theme/kasten-logo.svg + logger: + level: info + format: text + connectors: + - type: ldap + id: ldap + name: LDAP + config: + host: {{ .Values.auth.ldap.host }} + insecureNoSSL: {{ .Values.auth.ldap.insecureNoSSL }} + insecureSkipVerify: {{ .Values.auth.ldap.insecureSkipVerifySSL }} + startTLS: {{ .Values.auth.ldap.startTLS }} + bindDN: {{ .Values.auth.ldap.bindDN }} + bindPW: BIND_PASSWORD_PLACEHOLDER + userSearch: + baseDN: {{ .Values.auth.ldap.userSearch.baseDN }} + filter: {{ .Values.auth.ldap.userSearch.filter }} + username: {{ .Values.auth.ldap.userSearch.username }} + idAttr: {{ .Values.auth.ldap.userSearch.idAttr }} + emailAttr: {{ .Values.auth.ldap.userSearch.emailAttr }} + nameAttr: {{ .Values.auth.ldap.userSearch.nameAttr }} + preferredUsernameAttr: {{ .Values.auth.ldap.userSearch.preferredUsernameAttr }} + groupSearch: + baseDN: {{ .Values.auth.ldap.groupSearch.baseDN }} + filter: {{ .Values.auth.ldap.groupSearch.filter }} + nameAttr: {{ .Values.auth.ldap.groupSearch.nameAttr }} +{{- with .Values.auth.ldap.groupSearch.userMatchers }} + userMatchers: +{{ toYaml . | indent 10 }} +{{- end }} + oauth2: + skipApprovalScreen: true + staticClients: + - name: 'K10' + id: kasten + secret: kastensecret + redirectURIs: + - {{ printf "%s/auth-svc/v0/oidc/redirect" (trimSuffix "/" .Values.auth.ldap.dashboardURL) }} +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: k10-logos-dex + namespace: {{ .Release.Namespace }} +binaryData: + {{- $files := .Files }} + {{- range tuple "files/favicon.png" "files/kasten-logo.svg" "files/styles.css" }} + {{ trimPrefix "files/" . }}: |- + {{ $files.Get . | b64enc }} + {{- end }} +{{ end }} +{{ if (include "k10.capability.gateway" $) }} +--- +apiVersion: v1 +kind: ConfigMap +metadata: + name: k10-gateway + namespace: {{ .Release.Namespace }} +data: + {{ include "k10.gatewayPrefixVarName" . }}: {{ include "k10.prefixPath" . }} + {{ include "k10.gatewayGrafanaSvcVarName" . 
}}: {{ printf "%s-grafana" .Release.Name }} + + {{- if .Values.gateway.requestHeaders }} + {{ include "k10.gatewayRequestHeadersVarName" .}}: {{ (.Values.gateway.requestHeaders | default list) | join " " }} + {{- end }} + + {{- if .Values.gateway.authHeaders }} + {{ include "k10.gatewayAuthHeadersVarName" .}}: {{ (.Values.gateway.authHeaders | default list) | join " " }} + {{- end }} + + {{- if .Values.gateway.service.internalPort }} + {{ include "k10.gatewayPortVarName" .}}: {{ .Values.gateway.service.internalPort | quote }} + {{- end }} + + {{- if or .Values.secrets.tlsSecret (and .Values.secrets.apiTlsCrt .Values.secrets.apiTlsKey) }} + {{ include "k10.gatewayTLSCertFile" . }}: /etc/tls/tls.crt + {{ include "k10.gatewayTLSKeyFile" . }}: /etc/tls/tls.key + {{- end }} + +{{ end }} diff --git a/charts/kasten/k10/7.0.1101/templates/k10-eula.yaml b/charts/kasten/k10/7.0.1101/templates/k10-eula.yaml new file mode 100644 index 0000000000..21e251d6c8 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/k10-eula.yaml @@ -0,0 +1,21 @@ +kind: ConfigMap +apiVersion: v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: k10-eula +data: + text: {{ .Files.Get "eula.txt" | quote }} +--- +{{ if .Values.eula.accept }} +kind: ConfigMap +apiVersion: v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: k10-eula-info +data: +{{ include "k10.eula.fields" . | indent 2 }} +{{ end }} diff --git a/charts/kasten/k10/7.0.1101/templates/k10-scc.yaml b/charts/kasten/k10/7.0.1101/templates/k10-scc.yaml new file mode 100644 index 0000000000..12a449f6fa --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/k10-scc.yaml @@ -0,0 +1,46 @@ +{{- if .Values.scc.create }} +kind: SecurityContextConstraints +apiVersion: security.openshift.io/v1 +metadata: + name: {{ .Release.Name }}-scc + labels: +{{ include "helm.labels" . 
| indent 4 }} +allowHostDirVolumePlugin: false +allowHostIPC: false +allowHostNetwork: false +allowHostPID: false +allowHostPorts: false +allowPrivilegeEscalation: false +allowPrivilegedContainer: false +allowedCapabilities: + - CHOWN + - FOWNER + - DAC_OVERRIDE +defaultAddCapabilities: + - CHOWN + - FOWNER + - DAC_OVERRIDE +fsGroup: + type: RunAsAny +priority: {{ .Values.scc.priority }} +readOnlyRootFilesystem: false +requiredDropCapabilities: + - ALL +runAsUser: + type: RunAsAny +seLinuxContext: + type: RunAsAny +supplementalGroups: + type: RunAsAny +seccompProfiles: + - runtime/default +users: + - system:serviceaccount:{{.Release.Namespace}}:{{ template "serviceAccountName" . }} +volumes: + - configMap + - downwardAPI + - emptyDir + - persistentVolumeClaim + - projected + - secret +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/templates/kopia-tls-certs.yaml b/charts/kasten/k10/7.0.1101/templates/kopia-tls-certs.yaml new file mode 100644 index 0000000000..ac0635f51c --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/kopia-tls-certs.yaml @@ -0,0 +1,33 @@ +# alternate names of the services. This renders to: [ component-svc.namespace, component-svc.namespace.svc ] +{{- $altNamesKopia := list ( printf "%s-svc.%s" "data-mover" .Release.Namespace ) ( printf "%s-svc.%s.svc" "data-mover" .Release.Namespace ) }} +# generate ca cert with 365 days of validity +{{- $caKopia := genCA ( printf "%s-svc-ca" "data-mover" ) 365 }} +# generate cert with CN="component-svc", SAN=$altNames and with 365 days of validity +{{- $certKopia := genSignedCert ( printf "%s-svc" "data-mover" ) nil $altNamesKopia 365 $caKopia }} +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: kopia-tls-cert + labels: +{{ include "helm.labels" . 
| indent 4 }} +{{- if .Values.global.rhMarketPlace }} + annotations: + "helm.sh/hook": "pre-install" +{{- end }} +data: + tls.crt: {{ $certKopia.Cert | b64enc }} +--- +apiVersion: v1 +kind: Secret +type: Opaque +metadata: + name: kopia-tls-key + labels: +{{ include "helm.labels" . | indent 4 }} +{{- if .Values.global.rhMarketPlace }} + annotations: + "helm.sh/hook": "pre-install" +{{- end }} +data: + tls.key: {{ $certKopia.Key | b64enc }} diff --git a/charts/kasten/k10/7.0.1101/templates/license.yaml b/charts/kasten/k10/7.0.1101/templates/license.yaml new file mode 100644 index 0000000000..f409fb7e51 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/license.yaml @@ -0,0 +1,25 @@ +{{- if not ( or ( .Values.license ) ( .Values.metering.awsMarketplace ) ( .Values.metering.awsManagedLicense ) ( .Values.metering.licenseConfigSecretName ) ) }} +{{- if .Files.Get "triallicense" }} +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: k10-trial-license +type: Opaque +data: + license: {{ print (.Files.Get "triallicense") }} +{{- end }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: k10-license +type: Opaque +data: + license: {{ include "k10.getlicense" . 
}} diff --git a/charts/kasten/k10/7.0.1101/templates/mc.yaml b/charts/kasten/k10/7.0.1101/templates/mc.yaml new file mode 100644 index 0000000000..2c23f94ae5 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/mc.yaml @@ -0,0 +1,6 @@ +{{- if not .Values.multicluster.enabled -}} + {{- $clusterInfo := lookup "v1" "Secret" .Release.Namespace "mc-cluster-info" -}} + {{- if $clusterInfo -}} + {{- fail "WARNING: Multi-cluster features must remain enabled as long as this cluster is connected to a multi-cluster system.\nEither disconnect this cluster from the multi-cluster system or use multicluster.enabled=true to enable multi-cluster features." -}} + {{- end -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/mutatingwebhook.yaml b/charts/kasten/k10/7.0.1101/templates/mutatingwebhook.yaml new file mode 100644 index 0000000000..729df58656 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/mutatingwebhook.yaml @@ -0,0 +1,51 @@ +{{- if .Values.injectKanisterSidecar.enabled -}} +# alternate names of the services. This renders to: [ component-svc.namespace, component-svc.namespace.svc ] +{{- $altNames := list ( printf "%s-svc.%s" "controllermanager" .Release.Namespace ) ( printf "%s-svc.%s.svc" "controllermanager" .Release.Namespace ) }} +# generate ca cert with 365 days of validity +{{- $ca := genCA ( printf "%s-svc-ca" "controllermanager" ) 365 }} +# generate cert with CN="component-svc", SAN=$altNames and with 365 days of validity +{{- $cert := genSignedCert ( printf "%s-svc" "controllermanager" ) nil $altNames 365 $ca }} +apiVersion: v1 +kind: Secret +type: kubernetes.io/tls +metadata: + name: controllermanager-certs + labels: +{{ include "helm.labels" . | indent 4 }} +data: + tls.crt: {{ $cert.Cert | b64enc }} + tls.key: {{ $cert.Key | b64enc }} +--- +apiVersion: admissionregistration.k8s.io/v1 +kind: MutatingWebhookConfiguration +metadata: + labels: +{{ include "helm.labels" . 
| indent 4 }} + namespace: {{ .Release.Namespace }} + name: k10-sidecar-injector +webhooks: +- name: k10-sidecar-injector.kasten.io + admissionReviewVersions: ["v1", "v1beta1"] + failurePolicy: Ignore + sideEffects: None + clientConfig: + service: + name: controllermanager-svc + namespace: {{ .Release.Namespace }} + path: "/k10/mutate" + port: 443 + caBundle: {{ b64enc $ca.Cert }} + rules: + - operations: ["CREATE", "UPDATE"] + apiGroups: ["*"] + apiVersions: ["v1"] + resources: ["deployments", "statefulsets", "deploymentconfigs"] +{{- if .Values.injectKanisterSidecar.namespaceSelector }} + namespaceSelector: +{{ toYaml .Values.injectKanisterSidecar.namespaceSelector | indent 4 }} +{{- end }} +{{- if .Values.injectKanisterSidecar.objectSelector }} + objectSelector: +{{ toYaml .Values.injectKanisterSidecar.objectSelector | indent 4 }} +{{- end }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/templates/networkpolicy.yaml b/charts/kasten/k10/7.0.1101/templates/networkpolicy.yaml new file mode 100644 index 0000000000..c026de5147 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/networkpolicy.yaml @@ -0,0 +1,282 @@ +{{- $admin_port := default 8877 .Values.service.gatewayAdminPort -}} +{{- $mutating_webhook_port := default 8080 .Values.injectKanisterSidecar.webhookServer.port -}} +{{- if .Values.networkPolicy.create }} +apiVersion: networking.k8s.io/v1 +kind: NetworkPolicy +metadata: + name: default-deny + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . | indent 4 }} +spec: + podSelector: {} + policyTypes: + - Ingress +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: access-k10-services + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . 
| indent 4 }} +spec: + podSelector: + matchLabels: + release: {{ .Release.Name }} + ingress: + - from: + - podSelector: + matchLabels: + access-k10-services: allowed + ports: + - protocol: TCP + port: {{ .Values.service.internalPort }} +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: cross-services-allow + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . | indent 4 }} +spec: + podSelector: + matchLabels: + release: {{ .Release.Name }} + ingress: + - from: + - podSelector: + matchLabels: + release: {{ .Release.Name }} + ports: + - protocol: TCP + port: {{ .Values.service.internalPort }} +--- +{{/* TODO: Consider a flag to turn this off. */}} +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: allow-gateway-to-mc-external + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . | indent 4 }} +spec: + podSelector: + matchLabels: + component: controllermanager + release: {{ .Release.Name }} + ingress: + - from: + - podSelector: + matchLabels: + service: gateway + release: {{ .Release.Name }} + ports: + - protocol: TCP + port: {{ include "k10.mcExternalPort" nil }} +{{- if .Values.logging.internal }} +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: logging-allow-internal + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . | indent 4 }} +spec: + podSelector: + matchLabels: + release: {{ .Release.Name }} + run: logging-svc + ingress: + - from: + - podSelector: + matchLabels: + release: {{ .Release.Name }} + ports: + # Logging input port + - protocol: TCP + port: 24224 + - protocol: TCP + port: 24225 +{{- end }} +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: allow-external + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . 
| indent 4 }} +spec: + podSelector: + matchLabels: + service: gateway + release: {{ .Release.Name }} + ingress: + - from: [] + ports: + - protocol: TCP + port: {{ .Values.gateway.service.internalPort | default 8000 }} +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: allow-all-api + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . | indent 4 }} +spec: + podSelector: + matchLabels: + run: aggregatedapis-svc + release: {{ .Release.Name }} + ingress: + - from: + ports: + - protocol: TCP + port: {{ .Values.service.aggregatedApiPort }} +{{- if .Values.gateway.exposeAdminPort }} +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: allow-gateway-admin + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . | indent 4 }} +spec: + podSelector: + matchLabels: + release: {{ .Release.Name }} + service: gateway + ingress: + - from: + - podSelector: + matchLabels: + app: prometheus + component: server + release: {{ .Release.Name }} + ports: + - protocol: TCP + port: {{ $admin_port }} +{{- end -}} +{{- if .Values.kanisterPodMetricSidecar.enabled }} +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: allow-metrics-kanister-pods + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . | indent 4 }} +spec: + podSelector: + matchLabels: + release: {{ .Release.Name }} + run: metering-svc + ingress: + - from: + - podSelector: + matchLabels: + createdBy: kanister + ports: + - protocol: TCP + port: {{ .Values.service.internalPort }} +{{- end -}} +{{- if .Values.injectKanisterSidecar.enabled }} +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: allow-mutating-webhook + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . 
| indent 4 }} +spec: + podSelector: + matchLabels: + release: {{ .Release.Name }} + run: controllermanager-svc + ingress: + - from: + ports: + - protocol: TCP + port: {{ $mutating_webhook_port }} +{{- end -}} +{{- if eq (include "check.dexAuth" .) "true" }} +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: gateway-dex-allow + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . | indent 4 }} +spec: + podSelector: + matchLabels: + release: {{ .Release.Name }} + run: auth-svc + ingress: + - from: + - podSelector: + matchLabels: + service: gateway + release: {{ .Release.Name }} + ports: + - protocol: TCP + port: 8080 +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: auth-dex-allow + namespace: {{ .Release.Namespace }} + labels: +{{ include "helm.labels" . | indent 4 }} +spec: + podSelector: + matchLabels: + release: {{ .Release.Name }} + run: auth-svc + ingress: + - from: + - podSelector: + matchLabels: + run: auth-svc + release: {{ .Release.Name }} + ports: + - protocol: TCP + port: 8080 +{{- end -}} +{{- $mainCtx := . }} +{{- $colocatedList := include "get.enabledColocatedSvcList" . 
| fromYaml }} +{{- range $primary, $secondaryList := $colocatedList }} +--- +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + name: {{ $primary }}-svc-allow-secondary-services + namespace: {{ $mainCtx.Release.Namespace }} + labels: +{{ include "helm.labels" $mainCtx | indent 4 }} +spec: + podSelector: + matchLabels: + release: {{ $mainCtx.Release.Name }} + run: {{ $primary }}-svc + ingress: + - from: + - podSelector: + matchLabels: + release: {{ $mainCtx.Release.Name }} + ports: + {{- range $skip, $secondary := $secondaryList }} + {{- $colocConfig := index (include "get.enabledColocatedServices" $mainCtx | fromYaml) $secondary }} + - protocol: TCP + port: {{ $colocConfig.port }} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/ocp-ca-cert-extract-hook.yaml b/charts/kasten/k10/7.0.1101/templates/ocp-ca-cert-extract-hook.yaml new file mode 100644 index 0000000000..ac1523a9cf --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/ocp-ca-cert-extract-hook.yaml @@ -0,0 +1,195 @@ +{{- if (include "k10.ocpcacertsautoextraction" .) 
-}} +apiVersion: v1 +kind: ServiceAccount +metadata: + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "1" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + name: {{ .Release.Name }}-ocp-ca-cert-extractor + namespace: {{ .Release.Namespace }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "1" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + name: openshift-cluster-config-reader +rules: + - apiGroups: ["config.openshift.io"] + resources: ["proxies", "apiservers"] + verbs: ["get", "list"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "1" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + name: openshift-config-reader + namespace: openshift-config +rules: + - apiGroups: [""] + resources: ["configmaps", "secrets"] + verbs: ["get", "list"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "1" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + name: openshift-ingress-operator-reader + namespace: openshift-ingress-operator +rules: + - apiGroups: [""] + resources: ["secrets"] + verbs: ["get", "list"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "1" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + name: openshift-kube-apiserver-reader + namespace: openshift-kube-apiserver +rules: + - apiGroups: [""] + resources: ["secrets"] + verbs: ["get", "list"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + annotations: + "helm.sh/hook": 
pre-install,pre-upgrade + "helm.sh/hook-weight": "1" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + name: {{ .Release.Namespace }}-configmaps-editor + namespace: {{ .Release.Namespace }} +rules: + - apiGroups: [""] + resources: ["configmaps"] + verbs: ["create", "get", "list", "watch", "patch", "update"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "2" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + name: read-openshift-cluster-config +subjects: + - kind: ServiceAccount + name: {{ .Release.Name }}-ocp-ca-cert-extractor + namespace: {{ .Release.Namespace }} +roleRef: + kind: ClusterRole + name: openshift-cluster-config-reader + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "2" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + name: read-openshift-config + namespace: openshift-config +subjects: + - kind: ServiceAccount + name: {{ .Release.Name }}-ocp-ca-cert-extractor + namespace: {{ .Release.Namespace }} +roleRef: + kind: Role + name: openshift-config-reader + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "2" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + name: read-openshift-ingress-operator + namespace: openshift-ingress-operator +subjects: + - kind: ServiceAccount + name: {{ .Release.Name }}-ocp-ca-cert-extractor + namespace: {{ .Release.Namespace }} +roleRef: + kind: Role + name: openshift-ingress-operator-reader + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding 
+metadata: + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "2" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + name: read-openshift-kube-apiserver + namespace: openshift-kube-apiserver +subjects: + - kind: ServiceAccount + name: {{ .Release.Name }}-ocp-ca-cert-extractor + namespace: {{ .Release.Namespace }} +roleRef: + kind: Role + name: openshift-kube-apiserver-reader + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "2" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + name: edit-{{ .Release.Namespace }}-configmaps + namespace: {{ .Release.Namespace }} +subjects: + - kind: ServiceAccount + name: {{ .Release.Name }}-ocp-ca-cert-extractor + namespace: {{ .Release.Namespace }} +roleRef: + kind: Role + name: {{ .Release.Namespace }}-configmaps-editor + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ .Release.Name }}-extract-ocp-ca-cert-job + labels: +{{ include "helm.labels" . | indent 4 }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded,hook-failed + "helm.sh/hook-weight": "3" +spec: + template: + metadata: + name: {{ .Release.Name }}-extract-ocp-ca-cert-job + labels: +{{ include "helm.labels" . | indent 8 }} + spec: + restartPolicy: Never + serviceAccountName: {{ .Release.Name }}-ocp-ca-cert-extractor + containers: + - name: {{ .Release.Name }}-extract-ocp-ca-cert-job + image: {{ include "k10.k10ToolsImage" . 
}} + command: ["./k10tools", "openshift", "extract-certificates"] + args: ["-n", "{{ .Release.Namespace }}", "--release-name", "{{ .Release.Name }}", "--ca-cert-configmap-name", "{{ .Values.cacertconfigmap.name }}"] + backoffLimit: 0 +{{ end }} diff --git a/charts/kasten/k10/7.0.1101/templates/prometheus-configmap.yaml b/charts/kasten/k10/7.0.1101/templates/prometheus-configmap.yaml new file mode 100644 index 0000000000..227d19ae25 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/prometheus-configmap.yaml @@ -0,0 +1,97 @@ +{{ include "check.validatePrometheusConfig" .}} +{{- if .Values.prometheus.server.enabled -}} +{{- $cluster_domain := "" -}} +{{- with .Values.cluster.domainName -}} + {{- $cluster_domain = printf ".%s" . -}} +{{- end -}} +{{- $rbac := .Values.prometheus.rbac.create -}} +kind: ConfigMap +apiVersion: v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: {{ .Release.Name }}-{{ .Values.prometheus.server.configMapOverrideName }} +data: + prometheus.yml: | + global: + scrape_interval: 1m + scrape_timeout: 10s + evaluation_interval: 1m + scrape_configs: + - job_name: httpServiceDiscovery + http_sd_configs: + - url: {{ printf "http://metering-svc.%s.svc%s:8000/v0/listScrapeTargets" .Release.Namespace $cluster_domain }} +{{- if .Values.kanisterPodMetricSidecar.enabled }} + - job_name: pushAggregator + honor_timestamps: true + metrics_path: /v0/push-metric-agg/metrics + static_configs: + - targets: + - {{ printf "metering-svc.%s.svc%s:8000" .Release.Namespace $cluster_domain }} +{{- end -}} +{{- if .Values.prometheus.scrapeCAdvisor }} + - job_name: 'kubernetes-cadvisor' + scheme: https + tls_config: + ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt + bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token + kubernetes_sd_configs: + - role: node + relabel_configs: + - action: labelmap + regex: __meta_kubernetes_node_label_(.+) + - target_label: __address__ + 
replacement: kubernetes.default.svc:443 + - source_labels: [__meta_kubernetes_node_name] + regex: (.+) + target_label: __metrics_path__ + replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor +{{- end}} + - job_name: prometheus + metrics_path: {{ .Values.prometheus.server.baseURL }}metrics + static_configs: + - targets: + - "localhost:9090" + labels: + app: prometheus + component: server + - job_name: k10-pods + scheme: http + metrics_path: /metrics + kubernetes_sd_configs: + - role: pod + namespaces: + own_namespace: true + selectors: + - role: pod + label: "component=executor" + relabel_configs: + - action: labelmap + regex: __meta_kubernetes_pod_label_(.+) + - source_labels: [__meta_kubernetes_pod_container_port_number] + action: keep + regex: 8\d{3} +{{- if ne .Values.metering.mode "airgap" }} + - job_name: k10-grafana + scheme: http + metrics_path: /metrics + kubernetes_sd_configs: + - role: pod + namespaces: + own_namespace: true + selectors: + - role: pod + label: "component=grafana" + relabel_configs: + - action: labelmap + regex: __meta_kubernetes_pod_label_(.+) + - source_labels: [__meta_kubernetes_pod_container_port_number] + action: keep + regex: 3000 + metric_relabel_configs: + - source_labels: [__name__] + regex: grafana_http_request_duration_seconds_count + action: keep +{{- end}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/prometheus-scc.yaml b/charts/kasten/k10/7.0.1101/templates/prometheus-scc.yaml new file mode 100644 index 0000000000..4d039ef00e --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/prometheus-scc.yaml @@ -0,0 +1,41 @@ +{{- if .Values.scc.create }} +kind: SecurityContextConstraints +apiVersion: security.openshift.io/v1 +metadata: + labels: +{{ include "helm.labels" . 
| indent 4 }} + name: {{ .Release.Name }}-prometheus-server +allowPrivilegedContainer: false +allowHostNetwork: false +allowHostDirVolumePlugin: true +allowHostPorts: true +allowHostPID: false +allowHostIPC: false +readOnlyRootFilesystem: false +requiredDropCapabilities: +- CHOWN +- KILL +- MKNOD +- SETUID +- SETGID +defaultAddCapabilities: [] +allowedCapabilities: [] +priority: 0 +runAsUser: + type: MustRunAsNonRoot +seLinuxContext: + type: RunAsAny +fsGroup: + type: RunAsAny +supplementalGroups: + type: RunAsAny +volumes: +- configMap +- downwardAPI +- emptyDir +- persistentVolumeClaim +- projected +- secret +users: + - system:serviceaccount:{{.Release.Namespace}}:prometheus-server +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/templates/prometheus-service.yaml b/charts/kasten/k10/7.0.1101/templates/prometheus-service.yaml new file mode 100644 index 0000000000..a5a2281710 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/prometheus-service.yaml @@ -0,0 +1,46 @@ +{{/* Template to generate service spec for v0 rest services */}} +{{- if .Values.prometheus.server.enabled -}} +{{- $postfix := default .Release.Name .Values.ingress.urlPath -}} +{{- $os_postfix := default .Release.Name .Values.route.path -}} +{{- $service_port := .Values.prometheus.server.service.servicePort -}} +apiVersion: v1 +kind: Service +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "k10.prometheus.service.name" . }}-exp + labels: +{{ include "helm.labels" $ | indent 4 }} + component: {{ include "k10.prometheus.service.name" . }} + run: {{ include "k10.prometheus.service.name" . }} + annotations: + getambassador.io/config: | + --- + apiVersion: getambassador.io/v3alpha1 + kind: Mapping + name: {{ include "k10.prometheus.service.name" . 
}}-mapping + {{- if .Values.prometheus.server.baseURL }} + rewrite: /{{ .Values.prometheus.server.baseURL | trimPrefix "/" | trimSuffix "/" }}/ + {{- else }} + rewrite: / + {{- end }} + {{- if .Values.route.enabled }} + prefix: /{{ $os_postfix | trimPrefix "/" | trimSuffix "/" }}/prometheus/ + {{- else }} + prefix: /{{ $postfix | trimPrefix "/" | trimSuffix "/" }}/prometheus/ + {{- end }} + service: {{ include "k10.prometheus.service.name" . }}:{{ $service_port }} + timeout_ms: 15000 + hostname: "*" + ambassador_id: [ {{ include "k10.ambassadorId" . }} ] + +spec: + ports: + - name: http + protocol: TCP + port: {{ $service_port }} + targetPort: 9090 + selector: + app: {{ include "k10.prometheus.name" . }} + component: {{ .Values.prometheus.server.name }} + release: {{ .Release.Name }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/rbac.yaml b/charts/kasten/k10/7.0.1101/templates/rbac.yaml new file mode 100644 index 0000000000..ec68013e98 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/rbac.yaml @@ -0,0 +1,381 @@ +{{- $main := . -}} +{{- $apiDomain := include "apiDomain" . -}} + +{{- $actionsAPIs := splitList " " (include "k10.actionsAPIs" .) -}} +{{- $aggregatedAPIs := splitList " " (include "k10.aggregatedAPIs" .) -}} +{{- $appsAPIs := splitList " " (include "k10.appsAPIs" .) -}} +{{- $authAPIs := splitList " " (include "k10.authAPIs" .) -}} +{{- $configAPIs := splitList " " (include "k10.configAPIs" .) -}} +{{- $distAPIs := splitList " " (include "k10.distAPIs" .) -}} +{{- $reportingAPIs := splitList " " (include "k10.reportingAPIs" .) -}} + +{{- if .Values.rbac.create }} +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: {{ .Release.Namespace }}-{{ template "serviceAccountName" . 
}}-cluster-admin +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: cluster-admin +subjects: +- kind: ServiceAccount + name: {{ template "serviceAccountName" . }} + namespace: {{ .Release.Namespace }} +{{- if not ( eq (include "meteringServiceAccountName" .) (include "serviceAccountName" .) )}} +- kind: ServiceAccount + name: {{ template "meteringServiceAccountName" . }} + namespace: {{ .Release.Namespace }} +{{- end }} +--- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} +{{ include "k10.defaultRBACLabels" . | indent 4 }} + name: {{ .Release.Name }}-admin +rules: +- apiGroups: +{{- range sortAlpha (concat $aggregatedAPIs $configAPIs $reportingAPIs) }} + - {{ . }}.{{ $apiDomain }} +{{- end }} + resources: + - "*" + verbs: + - "*" +- apiGroups: + - cr.kanister.io + resources: + - '*' + verbs: + - '*' +- apiGroups: + - "" + resources: + - namespaces + verbs: + - create + - get + - list +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} +{{ include "k10.defaultRBACLabels" . 
| indent 4 }} + name: {{ .Release.Name }}-ns-admin + namespace: {{ .Release.Namespace }} +rules: +- apiGroups: + - "apps" + resources: + - deployments + verbs: + - get + - update + - watch + - list +- apiGroups: + - "" + resources: + - pods + verbs: + - get + - create + - delete + - list +- apiGroups: + - "apik10.kasten.io" + resources: + - k10s + verbs: + - list + - patch +- apiGroups: + - "" + resources: + - persistentvolumeclaims + verbs: + - get +- apiGroups: + - "" + resources: + - secrets + verbs: + - create + - delete + - get + - list + - update +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - delete + - get + - list + - update +- apiGroups: + - "batch" + resources: + - jobs + verbs: + - get +- apiGroups: + - "" + resources: + - services + verbs: + - create + - get + - delete +- apiGroups: + - "networking.k8s.io" + resources: + - networkpolicies + verbs: + - get + - create + - list + - delete +- apiGroups: + - "" + resources: + - endpoints + verbs: + - list + - get +--- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} +{{ include "k10.defaultRBACLabels" . | indent 4 }} + name: {{ .Release.Name }}-mc-admin +rules: +- apiGroups: +{{- range sortAlpha (concat $authAPIs $configAPIs $distAPIs) }} + - {{ . }}.{{ $apiDomain }} +{{- end }} + resources: + - "*" + verbs: + - "*" +- apiGroups: + - "" + resources: + - secrets + verbs: + - "*" +--- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} +{{ include "k10.defaultRBACLabels" . | indent 4 }} + name: {{ .Release.Name }}-basic +rules: +- apiGroups: +{{- range sortAlpha $actionsAPIs }} + - {{ . 
}}.{{ $apiDomain }} +{{- end }} + resources: + - {{ include "k10.backupActions" $main}} + - {{ include "k10.backupActionsDetails" $main}} + - {{ include "k10.restoreActions" $main}} + - {{ include "k10.restoreActionsDetails" $main}} + - {{ include "k10.exportActions" $main}} + - {{ include "k10.exportActionsDetails" $main}} + - {{ include "k10.cancelActions" $main}} + - {{ include "k10.runActions" $main}} + - {{ include "k10.runActionsDetails" $main}} + verbs: + - "*" +- apiGroups: +{{- range sortAlpha $appsAPIs }} + - {{ . }}.{{ $apiDomain }} +{{- end }} + resources: + - {{ include "k10.restorePoints" $main}} + - {{ include "k10.restorePointsDetails" $main}} + - {{ include "k10.applications" $main}} + - {{ include "k10.applicationsDetails" $main}} + verbs: + - "*" +- apiGroups: + - "" + resources: + - namespaces + verbs: + - get +- apiGroups: +{{- range sortAlpha $configAPIs }} + - {{ . }}.{{ $apiDomain }} +{{- end }} + resources: + - {{ include "k10.policies" $main}} + verbs: + - "*" +--- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} +{{ include "k10.defaultRBACLabels" . | indent 4 }} + name: {{ .Release.Name }}-config-view +rules: +- apiGroups: +{{- range sortAlpha $configAPIs }} + - {{ . }}.{{ $apiDomain }} +{{- end }} + resources: + - {{ include "k10.auditconfigs" $main}} + - {{ include "k10.profiles" $main}} + - {{ include "k10.policies" $main}} + - {{ include "k10.policypresets" $main}} + - {{ include "k10.transformsets" $main}} + - {{ include "k10.blueprintbindings" $main}} + - {{ include "k10.storagesecuritycontexts" $main}} + - {{ include "k10.storagesecuritycontextbindings" $main}} + verbs: + - get + - list +--- +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: {{ .Release.Namespace }}-{{ template "serviceAccountName" . 
}}-admin +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ .Release.Name }}-admin +subjects: +- apiGroup: rbac.authorization.k8s.io + kind: Group + name: k10:admins +{{- range .Values.auth.k10AdminUsers }} +- apiGroup: rbac.authorization.k8s.io + kind: User + name: {{ . }} +{{- end }} +{{- range default .Values.auth.groupAllowList .Values.auth.k10AdminGroups }} +- apiGroup: rbac.authorization.k8s.io + kind: Group + name: {{ . }} +{{- end }} +--- +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: {{ .Release.Namespace }}-{{ template "serviceAccountName" . }}-ns-admin + namespace: {{ .Release.Namespace }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: {{ .Release.Name }}-ns-admin +subjects: +- apiGroup: rbac.authorization.k8s.io + kind: Group + name: k10:admins +{{- range .Values.auth.k10AdminUsers }} +- apiGroup: rbac.authorization.k8s.io + kind: User + name: {{ . }} +{{- end }} +{{- range default .Values.auth.groupAllowList .Values.auth.k10AdminGroups }} +- apiGroup: rbac.authorization.k8s.io + kind: Group + name: {{ . }} +{{- end }} +--- +kind: ClusterRoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: {{ .Release.Namespace }}-{{ template "serviceAccountName" . }}-mc-admin +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ .Release.Name }}-mc-admin +subjects: + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: k10:admins +{{- range .Values.auth.k10AdminUsers }} + - apiGroup: rbac.authorization.k8s.io + kind: User + name: {{ . }} +{{- end }} +{{- range default .Values.auth.groupAllowList .Values.auth.k10AdminGroups }} + - apiGroup: rbac.authorization.k8s.io + kind: Group + name: {{ . 
}} +{{- end }} +{{- end }} +{{- if and .Values.rbac.create (not .Values.prometheus.rbac.create) }} +--- +kind: Role +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} +{{ include "k10.defaultRBACLabels" . | indent 4 }} + name: {{ .Release.Name }}-prometheus-server + namespace: {{ .Release.Namespace }} +rules: +- apiGroups: + - "" + resources: + - nodes + - nodes/proxy + - nodes/metrics + - services + - endpoints + - pods + - ingresses + - configmaps + verbs: + - get + - list + - watch +- apiGroups: + - extensions + - networking.k8s.io + resources: + - ingresses/status + - ingresses + verbs: + - get + - list + - watch +--- +kind: RoleBinding +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: {{ .Release.Namespace }}-{{ template "serviceAccountName" . }}-prometheus-server + namespace: {{ .Release.Namespace }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: {{ .Release.Name }}-prometheus-server +subjects: + - kind: ServiceAccount + name: prometheus-server + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/templates/rhmarketplace.tpl b/charts/kasten/k10/7.0.1101/templates/rhmarketplace.tpl new file mode 100644 index 0000000000..e64022641e --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/rhmarketplace.tpl @@ -0,0 +1,8 @@ +{{/* +This file is used to fail the helm deployment if certain values are set which are +not compatible with an Operator deployment. 
+*/}} + +{{- if and (.Values.global.rhMarketPlace) (.Values.reporting.pdfReports) -}} + {{- fail "reporting.pdfReports cannot be enabled for the K10 Red Hat Marketplace Operator" -}} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/route.yaml b/charts/kasten/k10/7.0.1101/templates/route.yaml new file mode 100644 index 0000000000..1ecd244bed --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/route.yaml @@ -0,0 +1,36 @@ +{{- $route := .Values.route -}} +{{- if $route.enabled -}} +{{ include "authEnabled.check" . }} +apiVersion: route.openshift.io/v1 +kind: Route +metadata: + name: {{ .Release.Name }}-route + {{- with $route.annotations }} + namespace: {{ .Release.Namespace }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} + labels: +{{ include "helm.labels" . | indent 4 }} + {{- with $route.labels }} + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + host: {{ $route.host }} + path: /{{ default .Release.Name $route.path | trimPrefix "/" | trimSuffix "/" }}/ + port: + targetPort: http + to: + kind: Service + name: gateway + weight: 100 + {{- if $route.tls.enabled }} + tls: + {{- if $route.tls.insecureEdgeTerminationPolicy }} + insecureEdgeTerminationPolicy: {{ $route.tls.insecureEdgeTerminationPolicy }} + {{- end }} + {{- if $route.tls.termination }} + termination: {{ $route.tls.termination }} + {{- end }} + {{- end }} +{{- end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/secrets.yaml b/charts/kasten/k10/7.0.1101/templates/secrets.yaml new file mode 100644 index 0000000000..0a040e2c0b --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/secrets.yaml @@ -0,0 +1,257 @@ +{{- include "enforce.singlecloudcreds" . -}} +{{- include "enforce.singleazurecreds" . -}} +{{- include "check.validateImagePullSecrets" . -}} +{{- if and (eq (include "check.awscreds" . ) "true") (not (eq (include "check.awsSecretName" . ) "true")) }} +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . 
| indent 4 }} + namespace: {{ .Release.Namespace }} + name: aws-creds +type: Opaque +data: + aws_access_key_id: {{ required "secrets.awsAccessKeyId field is required!" .Values.secrets.awsAccessKeyId | b64enc | quote }} + aws_secret_access_key: {{ required "secrets.awsSecretAccessKey field is required!" .Values.secrets.awsSecretAccessKey | b64enc | quote }} +{{- if .Values.secrets.awsIamRole }} + role: {{ .Values.secrets.awsIamRole | trim | b64enc | quote }} +{{- end }} +{{- end }} +{{- if or .Values.secrets.dockerConfig .Values.secrets.dockerConfigPath }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: k10-ecr +type: kubernetes.io/dockerconfigjson +data: + .dockerconfigjson: {{ or .Values.secrets.dockerConfig ( .Values.secrets.dockerConfigPath | b64enc ) }} +{{- end }} +{{- if and (eq (include "check.googlecreds" .) "true") ( not (eq (include "check.googleCredsSecret" .) "true")) }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: google-secret +type: Opaque +data: + kasten-gke-sa.json: {{ .Values.secrets.googleApiKey }} +{{- if eq (include "check.googleproject" .) "true" }} + kasten-gke-project: {{ .Values.secrets.googleProjectId | b64enc }} +{{- end }} +{{- end }} +{{- if eq (include "check.azurecreds" .) "true" }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: azure-creds +type: Opaque +data: + {{- if not (eq (include "check.azuresecret" .) "true" ) }} + {{- if or (eq (include "check.azureMSIWithClientID" .) "true") (eq (include "check.azureClientSecretCreds" .) "true") }} + azure_client_id: {{ required "secrets.azureClientId field is required!" .Values.secrets.azureClientId | b64enc | quote }} + {{- end }} + {{- if eq (include "check.azureClientSecretCreds" .) 
"true" }} + azure_tenant_id: {{ required "secrets.azureTenantId field is required!" .Values.secrets.azureTenantId | b64enc | quote }} + azure_client_secret: {{ required "secrets.azureClientSecret field is required!" .Values.secrets.azureClientSecret | b64enc | quote }} + {{- end }} + {{- end }} + azure_resource_group: {{ default "" .Values.secrets.azureResourceGroup | b64enc | quote }} + azure_subscription_id: {{ default "" .Values.secrets.azureSubscriptionID | b64enc | quote }} + azure_resource_manager_endpoint: {{ default "" .Values.secrets.azureResourceMgrEndpoint | b64enc | quote }} + entra_id_endpoint: {{ default "" (default .Values.secrets.azureADEndpoint .Values.secrets.microsoftEntraIDEndpoint) | b64enc | quote }} + entra_id_resource_id: {{ default "" (default .Values.secrets.azureADResourceID .Values.secrets.microsoftEntraIDResourceID) | b64enc | quote }} + azure_cloud_env_id: {{ default "" .Values.secrets.azureCloudEnvID | b64enc | quote }} +{{- end }} +{{- if and (eq (include "check.vspherecreds" .) "true") (not (eq (include "check.vsphereClientSecret" . ) "true")) }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + namespace: {{ .Release.Namespace }} + name: vsphere-creds +type: Opaque +data: + vsphere_endpoint: {{ required "secrets.vsphereEndpoint field is required!" .Values.secrets.vsphereEndpoint | b64enc | quote }} + vsphere_username: {{ required "secrets.vsphereUsername field is required!" .Values.secrets.vsphereUsername | b64enc | quote }} + vsphere_password: {{ required "secrets.vspherePassword field is required!" .Values.secrets.vspherePassword | b64enc | quote }} +{{- end }} +{{- if and (eq (include "basicauth.check" .) "true") (not .Values.auth.basicAuth.secretName) }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . 
| indent 4 }} + name: k10-basic-auth + namespace: {{ .Release.Namespace }} +data: + auth: {{ required "auth.basicAuth.htpasswd field is required!" .Values.auth.basicAuth.htpasswd | b64enc | quote}} +type: Opaque +{{- end }} +{{- if .Values.auth.tokenAuth.enabled }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: k10-token-auth + namespace: {{ .Release.Namespace }} +data: + auth: {{ "true" | b64enc | quote}} +type: Opaque +{{- end }} +{{- if and .Values.auth.oidcAuth.enabled (not .Values.auth.oidcAuth.secretName) }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: {{ include "k10.oidcSecretName" .}} + namespace: {{ .Release.Namespace }} +data: + provider-url: {{ required "auth.oidcAuth.providerURL field is required!" .Values.auth.oidcAuth.providerURL | b64enc | quote }} + redirect-url: {{ required "auth.oidcAuth.redirectURL field is required!" .Values.auth.oidcAuth.redirectURL | b64enc | quote }} +{{- if not .Values.auth.oidcAuth.clientSecretName }} + client-id: {{ required "auth.oidcAuth.clientID field is required!" .Values.auth.oidcAuth.clientID | b64enc | quote }} + client-secret: {{ required "auth.oidcAuth.clientSecret field is required!" .Values.auth.oidcAuth.clientSecret | b64enc | quote }} +{{- end }} + scopes: {{ required "auth.oidcAuth.scopes field is required!" 
.Values.auth.oidcAuth.scopes | b64enc | quote }} + prompt: {{ default "select_account" .Values.auth.oidcAuth.prompt | b64enc | quote }} + usernameClaim: {{ default "sub" .Values.auth.oidcAuth.usernameClaim | b64enc | quote }} + usernamePrefix: {{ default "" .Values.auth.oidcAuth.usernamePrefix | b64enc | quote }} + groupClaim: {{ default "" .Values.auth.oidcAuth.groupClaim | b64enc | quote }} + groupPrefix: {{ default "" .Values.auth.oidcAuth.groupPrefix | b64enc | quote }} + sessionDuration: {{ default "1h" .Values.auth.oidcAuth.sessionDuration | b64enc | quote }} +{{- if .Values.auth.oidcAuth.refreshTokenSupport }} + refreshTokenSupport: {{ "true" | b64enc | quote }} +{{- else }} + refreshTokenSupport: {{ "false" | b64enc | quote }} +{{ end }} +stringData: + groupAllowList: |- +{{- range $.Values.auth.groupAllowList }} + {{ . -}} +{{ end }} + logout-url: {{ default "" .Values.auth.oidcAuth.logoutURL | b64enc | quote }} +type: Opaque +{{- end }} +{{- if and (.Values.auth.openshift.enabled) (and (not .Values.auth.openshift.clientSecretName) (not .Values.auth.openshift.clientSecret)) }} +--- +apiVersion: v1 +kind: Secret +type: kubernetes.io/service-account-token +metadata: + name: {{ include "get.openshiftServiceAccountSecretName" . }} + annotations: + kubernetes.io/service-account.name: {{ include "get.openshiftServiceAccountName" . | quote }} +{{- end }} +{{- if and (.Values.auth.openshift.enabled) (not .Values.auth.openshift.secretName) }} +{{ $dashboardURL := required "auth.openshift.dashboardURL field is required!" 
.Values.auth.openshift.dashboardURL }} +{{ $redirectURL := trimSuffix "/" (trimSuffix (default .Release.Name .Values.ingress.urlPath) (trimSuffix "/" $dashboardURL)) | b64enc | quote }} +{{- if .Values.route.enabled }} +{{ $redirectURL := trimSuffix "/" (trimSuffix (default .Release.Name .Values.route.path) (trimSuffix "/" $dashboardURL)) | b64enc | quote }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: {{ include "k10.oidcSecretName" .}} + namespace: {{ .Release.Namespace }} +data: + provider-url: {{ printf "%s/dex" (trimSuffix "/" $dashboardURL) | b64enc | quote }} + redirect-url: {{ $redirectURL }} + client-id: {{ (printf "kasten") | b64enc | quote }} + client-secret: {{ (printf "kastensecret") | b64enc | quote }} + scopes: {{ (printf "groups profile email") | b64enc | quote }} + prompt: {{ (printf "select_account") | b64enc | quote }} + usernameClaim: {{ default "email" .Values.auth.openshift.usernameClaim | b64enc | quote }} + usernamePrefix: {{ default "" .Values.auth.openshift.usernamePrefix | b64enc | quote }} + groupClaim: {{ default "groups" .Values.auth.openshift.groupClaim | b64enc | quote }} + groupPrefix: {{ default "" .Values.auth.openshift.groupPrefix | b64enc | quote }} +stringData: + groupAllowList: |- +{{- range $.Values.auth.groupAllowList }} + {{ . -}} +{{ end }} +type: Opaque +{{- end }} +{{- if and .Values.auth.ldap.enabled (not .Values.auth.ldap.secretName) }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: {{ include "k10.oidcSecretName" .}} + namespace: {{ .Release.Namespace }} +data: + provider-url: {{ required "auth.ldap.dashboardURL field is required!" (printf "%s/dex" (trimSuffix "/" .Values.auth.ldap.dashboardURL)) | b64enc | quote }} + {{- if .Values.route.enabled }} + redirect-url: {{ required "auth.ldap.dashboardURL field is required!" 
(trimSuffix "/" (trimSuffix (default .Release.Name .Values.route.path) (trimSuffix "/" .Values.auth.ldap.dashboardURL))) | b64enc | quote }} + {{- else }} + redirect-url: {{ required "auth.ldap.dashboardURL field is required!" (trimSuffix "/" (trimSuffix (default .Release.Name .Values.ingress.urlPath) (trimSuffix "/" .Values.auth.ldap.dashboardURL))) | b64enc | quote }} + {{- end }} + client-id: {{ (printf "kasten") | b64enc | quote }} + client-secret: {{ (printf "kastensecret") | b64enc | quote }} + scopes: {{ (printf "groups profile email") | b64enc | quote }} + prompt: {{ (printf "select_account") | b64enc | quote }} + usernameClaim: {{ default "email" .Values.auth.ldap.usernameClaim | b64enc | quote }} + usernamePrefix: {{ default "" .Values.auth.ldap.usernamePrefix | b64enc | quote }} + groupClaim: {{ default "groups" .Values.auth.ldap.groupClaim | b64enc | quote }} + groupPrefix: {{ default "" .Values.auth.ldap.groupPrefix | b64enc | quote }} +stringData: + groupAllowList: |- +{{- range $.Values.auth.groupAllowList }} + {{ . -}} +{{ end }} +type: Opaque +{{- end }} +{{- if and .Values.auth.ldap.enabled (not .Values.auth.ldap.bindPWSecretName) }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . | indent 4 }} + name: k10-dex + namespace: {{ .Release.Namespace }} +data: + bindPW: {{ required "auth.ldap.bindPW field is required!" .Values.auth.ldap.bindPW | b64enc | quote }} +type: Opaque +{{- end }} +{{- if eq (include "check.primaryKey" . ) "true" }} +--- +apiVersion: v1 +kind: Secret +metadata: + labels: +{{ include "helm.labels" . 
| indent 4 }} + name: k10-encryption-primary-key + namespace: {{ .Release.Namespace }} +data: + {{- if .Values.encryption.primaryKey.awsCmkKeyId }} + awscmkkeyid: {{ default "" .Values.encryption.primaryKey.awsCmkKeyId | trim | b64enc | quote }} + {{- end }} + {{- if .Values.encryption.primaryKey.vaultTransitKeyName }} + vaulttransitkeyname: {{ default "" .Values.encryption.primaryKey.vaultTransitKeyName | trim | b64enc | quote }} + vaulttransitpath: {{ default "transit" .Values.encryption.primaryKey.vaultTransitPath | trim | b64enc | quote }} + {{- end }} +type: Opaque +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/templates/secure_deployment.tpl b/charts/kasten/k10/7.0.1101/templates/secure_deployment.tpl new file mode 100644 index 0000000000..b8e5a66428 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/secure_deployment.tpl @@ -0,0 +1,19 @@ +{{/* +This file is used to fail the helm deployment if certain values are set which are +not compatible with a secure deployment. + +A secure deployment is defined as one of the following: +- Iron Bank +- FIPS +*/}} + +{{/* Iron Bank */}} +{{- include "k10.fail.ironbankGrafana" . -}} +{{- include "k10.fail.ironbankPdfReports" . -}} +{{- include "k10.fail.ironbankPrometheus" . -}} +{{- include "k10.fail.ironbankRHMarketplace" . -}} + +{{/* FIPS */}} +{{- include "k10.fail.fipsGrafana" . -}} +{{- include "k10.fail.fipsPrometheus" . -}} +{{- include "k10.fail.fipsPDFReports" . 
-}} diff --git a/charts/kasten/k10/7.0.1101/templates/serviceaccount.yaml b/charts/kasten/k10/7.0.1101/templates/serviceaccount.yaml new file mode 100644 index 0000000000..b4e61c8922 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/serviceaccount.yaml @@ -0,0 +1,44 @@ +{{- if and .Values.serviceAccount.create ( not .Values.metering.awsMarketplace ) }} +kind: ServiceAccount +apiVersion: v1 +metadata: +{{- if .Values.secrets.awsIamRole }} + annotations: + eks.amazonaws.com/role-arn: {{ .Values.secrets.awsIamRole }} +{{- end }} + labels: +{{ include "helm.labels" . | indent 4 }} + name: {{ template "serviceAccountName" . }} + namespace: {{ .Release.Namespace }} +{{- end }} +{{- if and (not ( eq (include "meteringServiceAccountName" .) (include "serviceAccountName" .))) ( not .Values.metering.awsManagedLicense ) .Values.metering.serviceAccount.create }} +--- +kind: ServiceAccount +apiVersion: v1 +metadata: +{{- if .Values.metering.awsMarketPlaceIamRole }} + annotations: + eks.amazonaws.com/role-arn: {{ .Values.metering.awsMarketPlaceIamRole }} +{{- end }} + labels: +{{ include "helm.labels" . | indent 4 }} + name: {{ template "meteringServiceAccountName" . }} + namespace: {{ .Release.Namespace }} +{{- end }} +{{- if and (.Values.auth.openshift.enabled) (not .Values.auth.openshift.serviceAccount) }} +{{- if or (.Values.auth.openshift.clientSecret) (.Values.auth.openshift.clientSecretName) }} + {{ fail "auth.openshift.serviceAccount is required when auth.openshift.clientSecret or auth.openshift.clientSecretName is used "}} +{{- end }} +--- +kind: ServiceAccount +apiVersion: v1 +metadata: + name: {{ include "k10.dexServiceAccountName" . 
}} + namespace: {{ .Release.Namespace }} + annotations: + {{- $dashboardURL := (trimSuffix "/" (required "auth.openshift.dashboardURL field is required" .Values.auth.openshift.dashboardURL)) -}} + {{- if (not (hasSuffix .Release.Name $dashboardURL)) }} + {{ fail "auth.openshift.dashboardURL should end with the K10's release name" }} + {{- end }} + serviceaccounts.openshift.io/oauth-redirecturi.dex: {{ printf "%s/dex/callback" $dashboardURL }} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/templates/v0services.yaml b/charts/kasten/k10/7.0.1101/templates/v0services.yaml new file mode 100644 index 0000000000..8a744cfe36 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/v0services.yaml @@ -0,0 +1,200 @@ +{{/* Template to generate service spec for v0 rest services */}} +{{- $container_port := .Values.service.internalPort -}} +{{- $service_port := .Values.service.externalPort -}} +{{- $aggregated_api_port := .Values.service.aggregatedApiPort -}} +{{- $postfix := default .Release.Name .Values.ingress.urlPath -}} +{{- $colocated_services := include "get.enabledColocatedServices" . | fromYaml -}} +{{- $exposed_services := include "get.enabledExposedServices" . | splitList " " -}} +{{- $os_postfix := default .Release.Name .Values.route.path -}} +{{- $main_context := . -}} +{{ $service_list := append (include "get.enabledRestServices" . | splitList " ") "frontend" }} +{{- range $service_list }} + {{- $exposed_service := (has . $exposed_services) }} + {{- $mc_exposed_service := (eq . "controllermanager") }} + {{ if not (hasKey $colocated_services . ) }} +apiVersion: v1 +kind: Service +metadata: + namespace: {{ $.Release.Namespace }} + name: {{ . }}-svc + labels: +{{ include "helm.labels" $ | indent 4 }} + component: {{ . }} + run: {{ . }}-svc +{{- if not (include "k10.capability.gateway" $) }} +{{- if or $exposed_service (eq . "frontend") $mc_exposed_service }} + annotations: + getambassador.io/config: | + {{- if or $exposed_service (eq . 
"frontend") }} + --- + apiVersion: getambassador.io/v3alpha1 + kind: Mapping + name: {{ . }}-mapping + {{- if $.Values.route.enabled }} + {{- if eq . "frontend" }} + prefix: /{{ $os_postfix | trimPrefix "/" | trimSuffix "/" }}/ + {{- else }} + prefix: /{{ $os_postfix | trimPrefix "/" | trimSuffix "/" }}/{{ . }}-svc/ + {{- end }} + {{- else }} + {{- if eq . "frontend" }} + prefix: /{{ $postfix | trimPrefix "/" | trimSuffix "/" }}/ + {{- else }} + prefix: /{{ $postfix | trimPrefix "/" | trimSuffix "/" }}/{{ . }}-svc/ + {{- end }} + {{- end }} + rewrite: / + service: {{ . }}-svc.{{ $.Release.Namespace }}:{{ $service_port }} + timeout_ms: 30000 + hostname: "*" + ambassador_id: [ {{ include "k10.ambassadorId" . }} ] + {{- end }} + {{- $colocatedList := include "get.enabledColocatedSvcList" $main_context | fromYaml }} + {{- range $skip, $secondary := index $colocatedList . }} + {{- $colocConfig := index (include "get.enabledColocatedServices" $main_context | fromYaml) $secondary }} + {{- if (has $secondary $exposed_services) }} + --- + apiVersion: getambassador.io/v3alpha1 + kind: Mapping + name: {{ $secondary }}-mapping + prefix: /{{ $postfix | trimPrefix "/" | trimSuffix "/" }}/{{ $secondary }}-svc/ + rewrite: / + service: {{ $colocConfig.primary }}-svc.{{ $.Release.Namespace }}:{{ $colocConfig.port }} + timeout_ms: 30000 + hostname: "*" + ambassador_id: [ {{ include "k10.ambassadorId" . }} ] + {{- end }} + {{- end }} + {{- if $mc_exposed_service }} + --- + apiVersion: getambassador.io/v3alpha1 + kind: Mapping + name: {{ . }}-mc-mapping + {{- if $.Values.route.enabled }} + prefix: /{{ $os_postfix | trimPrefix "/" | trimSuffix "/" }}/mc/ + {{- else }} + prefix: /{{ $postfix | trimPrefix "/" | trimSuffix "/" }}/mc/ + {{- end }} + rewrite: / + service: {{ . }}-svc.{{ $.Release.Namespace }}:{{ include "k10.mcExternalPort" nil }} + timeout_ms: 30000 + hostname: "*" + ambassador_id: [ {{ include "k10.ambassadorId" . 
}} ] + {{- end }} +{{- end }} +{{- end }} +spec: + ports: + - name: http + protocol: TCP + port: {{ $service_port }} + targetPort: {{ $container_port }} + {{- if and (eq . "controllermanager") ($.Values.injectKanisterSidecar.enabled) }} + - name: https + protocol: TCP + port: 443 + targetPort: {{ $.Values.injectKanisterSidecar.webhookServer.port }} + {{- end }} +{{- $colocatedList := include "get.enabledColocatedSvcList" $main_context | fromYaml }} +{{- range $skip, $secondary := index $colocatedList . }} + {{- $colocConfig := index (include "get.enabledColocatedServices" $main_context | fromYaml) $secondary }} + - name: {{ $secondary }} + protocol: TCP + port: {{ $colocConfig.port }} + targetPort: {{ $colocConfig.port }} +{{- end }} +{{- if eq . "logging" }} + - name: logging + protocol: TCP + port: 24224 + targetPort: 24224 + - name: logging-metrics + protocol: TCP + port: 24225 + targetPort: 24225 +{{- end }} +{{- if eq . "controllermanager" }} + - name: mc-http + protocol: TCP + port: {{ include "k10.mcExternalPort" nil }} + targetPort: {{ include "k10.mcExternalPort" nil }} +{{- end }} + selector: + run: {{ . }}-svc +--- + {{ end }}{{/* if not (hasKey $colocated_services $k10_service ) */}} +{{ end -}}{{/* range append (include "get.enabledRestServices" . | splitList " ") "frontend" */}} +{{- range append (include "get.enabledServices" . | splitList " ") "kanister" }} +{{- if eq . "gateway" -}}{{- continue -}}{{- end -}} +apiVersion: v1 +kind: Service +metadata: + namespace: {{ $.Release.Namespace }} + name: {{ . }}-svc + labels: +{{ include "helm.labels" $ | indent 4 }} + component: {{ . }} + run: {{ . }}-svc +spec: + ports: + {{- if eq . 
"aggregatedapis" }} + - name: http + port: 443 + protocol: TCP + targetPort: {{ $aggregated_api_port }} + {{- else }} + - name: http + protocol: TCP + port: {{ $service_port }} + targetPort: {{ $container_port }} + {{- end }} +{{- $colocatedList := include "get.enabledColocatedSvcList" $main_context | fromYaml }} +{{- range $skip, $secondary := index $colocatedList . }} + {{- $colocConfig := index (include "get.enabledColocatedServices" . | fromYaml) $secondary }} + - name: {{ $secondary }} + protocol: TCP + port: {{ $colocConfig.port }} + targetPort: {{ $colocConfig.port }} +{{- end }} + selector: + run: {{ . }}-svc +--- +{{ end -}} +{{- if eq (include "check.dexAuth" .) "true" }} +apiVersion: v1 +kind: Service +metadata: +{{- if not (include "k10.capability.gateway" $) }} + annotations: + getambassador.io/config: | + --- + apiVersion: getambassador.io/v3alpha1 + kind: Mapping + name: dex-mapping + {{- if $.Values.route.enabled }} + prefix: /{{ $os_postfix | trimPrefix "/" | trimSuffix "/" }}/dex/ + {{- else }} + prefix: /{{ $postfix | trimPrefix "/" | trimSuffix "/" }}/dex/ + {{- end }} + rewrite: "" + service: dex.{{ $.Release.Namespace }}:8000 + timeout_ms: 30000 + hostname: "*" + ambassador_id: [ {{ include "k10.ambassadorId" . }} ] +{{- end }} + name: dex + namespace: {{ $.Release.Namespace }} + labels: +{{ include "helm.labels" $ | indent 4 }} + component: dex + run: auth-svc +spec: + ports: + - name: http + port: {{ $service_port }} + protocol: TCP + targetPort: 8080 + selector: + run: auth-svc + type: ClusterIP +{{ end -}} diff --git a/charts/kasten/k10/7.0.1101/templates/workloadIdentityFederation.tpl b/charts/kasten/k10/7.0.1101/templates/workloadIdentityFederation.tpl new file mode 100644 index 0000000000..75296e98b0 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/workloadIdentityFederation.tpl @@ -0,0 +1,10 @@ +{{/* +This file is used to fail the helm deployment if Workload Identity settings are not +compatible. 
+*/}} +{{- include "validate.gwif.idp.type" . -}} +{{- include "validate.gwif.idp.aud" . -}} + + + + diff --git a/charts/kasten/k10/7.0.1101/templates/{values}/grafana/values/grafana_values.tpl b/charts/kasten/k10/7.0.1101/templates/{values}/grafana/values/grafana_values.tpl new file mode 100644 index 0000000000..d6ac74f63b --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/{values}/grafana/values/grafana_values.tpl @@ -0,0 +1,275 @@ +{{/* + With some of K10's features being provided by external Helm charts, those Helm + charts need to be configured to work with K10. + + Unfortunately, some of the values needed to configure the subcharts aren't + accessible to the subcharts (only global.* and chart_name.* are accessible). + + This means the values need to be duplicated, making the configuration of K10 + quite cumbersome for users (the same setting has to be provided in multiple + places, making it easy to misconfigure one thing or another). + + Alternatively, the subchart's templates could be customized to read global.* + values instead. However, this means upgrading the subchart is quite burdensome + since the customizations have to be re-applied to the upgraded chart. This is + even less tenable with the frequency with which chart updates are needed. + + With this in mind, this template was specially crafted to be able to read K10 + values and update the values that will be passed to the subchart. + + --- + + To accomplish this, Helm's template parsing and rendering order is exploited. + + Helm allows parent charts to override templates in subcharts. This is done by + parsing templates with lower precedence first (templates that are more deeply + nested than others). This allows templates with higher precedence to redefine + templates with lower precedence. + + Helm also renders templates in this same order. 
This template exploits this + ordering in order to set subchart values before the subchart's templates are + rendered, having the same effect as the user setting the values. + + WARNING: The name and directory structure of this template was carefully + selected to ensure that it is rendered before other templates! +*/}} + +{{- if .Values.grafana.enabled }} +{{- $grafana_prefix := printf "%s/grafana/" (include "k10.prefixPath" $) -}} +{{- $grafana_scoped_values := (dict "Chart" (dict "Name" "grafana") "Release" .Release "Values" .Values.grafana) -}} + +{{- /*** GRAFANA LABELS ***/ -}} +{{- /* Merge global pod labels with any grafana-specific labels, where the latter is of highest priority */ -}} +{{- $podLabels := merge (dict) (dict "component" "grafana") (.Values.grafana.podLabels | default dict) (.Values.global.podLabels) -}} + +{{- /* Merge global pod annotations with any grafana-specific annotations, where the latter is of highest priority */ -}} +{{- $podAnnotations := merge (dict) (.Values.grafana.podAnnotations | default dict) (.Values.global.podAnnotations) -}} +{{- $_ := mergeOverwrite .Values.grafana + (dict + "extraLabels" (dict + "app.kubernetes.io/name" (include "grafana.name" $grafana_scoped_values) + "app.kubernetes.io/instance" .Release.Name + "component" "grafana" + ) + "podLabels" $podLabels + "podAnnotations" $podAnnotations + ) +-}} + +{{- /*** GRAFANA SERVER CONFIGURATION ***/ -}} +{{- $_ := mergeOverwrite (index .Values.grafana "grafana.ini") + (dict + "auth" (dict + "disable_login_form" true + "disable_signout_menu" true + ) + "auth.basic" (dict + "enabled" false + ) + "auth.anonymous" (dict + "enabled" true + ) + "server" (dict + "root_url" $grafana_prefix + ) + ) +-}} +{{- $authAnonymous := index .Values.grafana "grafana.ini" "auth.anonymous" -}} +{{- $_ := set $authAnonymous "org_name" ($authAnonymous.org_name | default "Main Org.") -}} +{{- $_ := set $authAnonymous "org_role" ($authAnonymous.org_role | default "Admin") -}} + +{{- /*** 
GRAFANA DEPLOYMENT STRATEGY ***/ -}} +{{- $_ := set .Values.grafana.deploymentStrategy "type" "Recreate" -}} + +{{- /*** GRAFANA NETWORKING POLICY ***/ -}} +{{- $_ := set .Values.grafana.networkPolicy "enabled" true -}} + +{{- /*** GRAFANA TEST FRAMEWORK ***/ -}} +{{- $_ := set .Values.grafana.testFramework "enabled" false -}} + +{{- /*** GRAFANA RBAC ***/ -}} +{{- $_ := set .Values.grafana.rbac "namespaced" true -}} + +{{- /*** K10 PROMETHEUS DATASOURCE ***/ -}} +{{- $_ := set .Values.grafana.datasources + "datasources.yaml" (dict + "apiVersion" 1 + "datasources" (list + (dict + "access" "proxy" + "editable" false + "isDefault" true + "name" "Prometheus" + "type" "prometheus" + "url" (printf "http://%s-exp%s" (include "k10.prometheus.service.name" $) .Values.prometheus.server.baseURL) + "jsonData" (dict + "timeInterval" "1m" + ) + ) + ) + ) +-}} + +{{- /*** K10 DASHBOARD ***/ -}} +{{- $_ := set .Values.grafana.dashboards + "default" (dict + "default" (dict + "json" (.Files.Get "grafana/dashboards/default/default.json") + ) + ) +-}} + +{{- $_ := mergeOverwrite (index .Values.grafana "grafana.ini") + (dict + "dashboards" (dict + "default_home_dashboard_path" "/var/lib/grafana/dashboards/default/default.json" + ) + ) +-}} + +{{- $_ := set .Values.grafana.dashboardProviders + "dashboardproviders.yaml" (dict + "apiVersion" 1 + "providers" (list + (dict + "name" "default" + "orgId" 1 + "folder" "" + "type" "file" + "disableDeletion" true + "editable" false + "options" (dict + "path" "/var/lib/grafana/dashboards" + ) + ) + ) + ) +-}} + +{{- /*** K10 PERSISTENCE *** + - global.persistence.enabled + - global.persistence.accessMode + - global.persistence.storageClass + - global.persistence.grafana.size + - global.persistence.size +*/ -}} +{{- if .Values.global.persistence.enabled -}} + {{ $grafana_storage_class := dict }} + {{- if eq .Values.global.persistence.storageClass "-" -}} + {{ $grafana_storage_class = (dict "storageClassName" "") }} + {{- else if 
.Values.global.persistence.storageClass -}} + {{ $grafana_storage_class = (dict "storageClassName" .Values.global.persistence.storageClass) }} + {{- end -}} + + {{- $_ := mergeOverwrite .Values.grafana.persistence + $grafana_storage_class + (dict + "enabled" true + "accessModes" (list .Values.global.persistence.accessMode) + "size" (.Values.global.persistence.grafana.size | default .Values.global.persistence.size) + ) + -}} +{{- end -}} + +{{- /*** K10 IMAGE PULL SECRETS *** + - secrets.dockerConfig + - secrets.dockerConfigPath + - global.imagePullSecret +*/ -}} +{{- $image_pull_secrets := list -}} +{{- if .Values.global.imagePullSecret -}} + {{- $image_pull_secrets = append $image_pull_secrets .Values.global.imagePullSecret -}} +{{- end -}} +{{- if (or .Values.secrets.dockerConfig .Values.secrets.dockerConfigPath) -}} + {{ $image_pull_secrets = append $image_pull_secrets "k10-ecr" -}} +{{- end -}} +{{- $image_pull_secrets = $image_pull_secrets | compact | uniq -}} + +{{- if $image_pull_secrets -}} + {{- $image_pull_secrets = concat (.Values.grafana.image.pullSecrets | default list) $image_pull_secrets -}} + {{- $_ := set .Values.grafana.image "pullSecrets" $image_pull_secrets -}} +{{- end -}} + +{{- /*** K10 GRAFANA IMAGE *** + - global.airgapped.repository + - global.image.registry + - global.image.tag + - global.images.grafana +*/ -}} +{{- $grafana_image := (dict + "registry" (.Values.global.airgapped.repository | default .Values.global.image.registry) + "repository" "grafana" + "tag" (include "get.k10ImageTag" $) +) -}} +{{- if .Values.global.images.grafana -}} + {{- $grafana_image_args := (dict "image" .Values.global.images.grafana "path" "global.images.grafana") -}} + {{- $grafana_image = (include "k10.splitImage" $grafana_image_args) | fromJson -}} +{{- end -}} + +{{- if .Values.global.azMarketPlace -}} + {{- $grafana_image = ( dict + "registry" .Values.global.azure.images.grafana.registry + "repository" .Values.global.azure.images.grafana.image + "tag" 
.Values.global.azure.images.grafana.tag + ) + -}} +{{- end -}} + +{{- $_ := set .Values.grafana.image "registry" $grafana_image.registry -}} +{{- $_ := set .Values.grafana.image "repository" $grafana_image.repository -}} +{{- $_ := set .Values.grafana.image "tag" $grafana_image.tag -}} +{{- $_ := set .Values.grafana.image "sha" $grafana_image.sha -}} + +{{- /*** K10 INIT IMAGE *** + - global.airgapped.repository + - global.image.registry + - global.image.tag + - global.images.init +*/ -}} +{{- $init_image := (dict + "registry" (.Values.global.airgapped.repository | default .Values.global.image.registry) + "repository" "init" + "tag" (include "get.k10ImageTag" $) +) -}} + +{{- if .Values.global.images.init -}} + {{- $init_image_args := (dict "image" .Values.global.images.init "path" "global.images.init") -}} + {{- $init_image = (include "k10.splitImage" $init_image_args) | fromJson -}} +{{- end -}} + +{{- if .Values.global.azMarketPlace -}} + {{- $init_image = ( dict + "registry" .Values.global.azure.images.init.registry + "repository" .Values.global.azure.images.init.image + "tag" .Values.global.azure.images.init.tag + ) + -}} +{{- end -}} + +{{- $_ := set .Values.grafana.downloadDashboardsImage "registry" $init_image.registry -}} +{{- $_ := set .Values.grafana.downloadDashboardsImage "repository" $init_image.repository -}} +{{- $_ := set .Values.grafana.downloadDashboardsImage "tag" $init_image.tag -}} +{{- $_ := set .Values.grafana.downloadDashboardsImage "sha" $init_image.sha -}} + +{{- $_ := set .Values.grafana.initChownData.image "registry" $init_image.registry -}} +{{- $_ := set .Values.grafana.initChownData.image "repository" $init_image.repository -}} +{{- $_ := set .Values.grafana.initChownData.image "tag" $init_image.tag -}} +{{- $_ := set .Values.grafana.initChownData.image "sha" $init_image.sha -}} + +{{- /*** K10 SERVICE ***/ -}} +{{- $_ := set .Values.grafana.service.annotations + "getambassador.io/config" (dict + "apiVersion" 
"getambassador.io/v3alpha1" + "kind" "Mapping" + "name" "grafana-server-mapping" + "prefix" $grafana_prefix + "rewrite" "/" + "service" (printf "%s-grafana:%0.f" .Release.Name .Values.grafana.service.port) + "timeout_ms" 15000 + "hostname" "*" + "ambassador_id" (list + (include "k10.ambassadorId" nil | replace "\"" "") + ) + | toYaml) +-}} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/templates/{values}/prometheus/charts/{charts}/values/prometheus_values.tpl b/charts/kasten/k10/7.0.1101/templates/{values}/prometheus/charts/{charts}/values/prometheus_values.tpl new file mode 100644 index 0000000000..c40dd7a079 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/templates/{values}/prometheus/charts/{charts}/values/prometheus_values.tpl @@ -0,0 +1,183 @@ +{{/* + With some of K10's features being provided by external Helm charts, those Helm + charts need to be configured to work with K10. + + Unfortunately, some of the values needed to configure the subcharts aren't + accessible to the subcharts (only global.* and chart_name.* are accessible). + + This means the values need to be duplicated, making the configuration of K10 + quite cumbersome for users (the same setting has to be provided in multiple + places, making it easy to misconfigure one thing or another). + + Alternatively, the subchart's templates could be customized to read global.* + values instead. However, this means upgrading the subchart is quite burdensome + since the customizations have to be re-applied to the upgraded chart. This is + even less tenable with the frequency with which chart updates are needed. + + With this in mind, this template was specially crafted to be able to read K10 + values and update the values that will be passed to the subchart. + + --- + + To accomplish this, Helm's template parsing and rendering order is exploited. + + Helm allows parent charts to override templates in subcharts. 
This is done by + parsing templates with lower precedence first (templates that are more deeply + nested than others). This allows templates with higher precedence to redefine + templates with lower precedence. + + Helm also renders templates in this same order. This template exploits this + ordering in order to set subchart values before the subchart's templates are + rendered, having the same effect as the user setting the values. + + WARNING: The name and directory structure of this template was carefully + selected to ensure that it is rendered before other templates! +*/}} + +{{- if .Values.prometheus.server.enabled }} +{{- $prometheus_scoped_values := (dict "Chart" (dict "Name" "prometheus") "Release" .Release "Values" .Values.prometheus) -}} + +{{- $prometheus_name := (include "prometheus.name" $prometheus_scoped_values) -}} +{{- $prometheus_prefix := "/k10/prometheus/" -}} +{{- $release_name := .Release.Name -}} + +{{- /*** PROMETHEUS LABELS ***/ -}} +{{- $_ := mergeOverwrite .Values.prometheus + (dict + "commonMetaLabels" (dict + "app.kubernetes.io/name" $prometheus_name + "app.kubernetes.io/instance" $release_name + ) + ) +-}} + +{{- /*** PROMETHEUS SERVER OVERRIDES ***/ -}} +{{- $fullnameOverride := .Values.prometheus.server.fullnameOverride | default "prometheus-server" -}} +{{- $clusterRoleNameOverride := .Values.prometheus.server.clusterRoleNameOverride | default (printf "%s-%s" .Release.Name $fullnameOverride) -}} + +{{- /* Merge global pod labels with any prometheus-specific labels, where the latter is of highest priority */ -}} +{{- $podLabels := merge (dict) (.Values.prometheus.server.podLabels | default dict) (.Values.global.podLabels) -}} + +{{- /* Merge global pod labels with any prometheus-specific annotations, where the latter is of highest priority */ -}} +{{- $podAnnotations := merge (dict) (.Values.prometheus.server.podAnnotations | default dict) (.Values.global.podAnnotations) -}} +{{- $_ := mergeOverwrite .Values.prometheus.server + 
(dict + "baseURL" (.Values.prometheus.server.baseURL | default $prometheus_prefix) + "prefixURL" (.Values.prometheus.server.prefixURL | default $prometheus_prefix | trimSuffix "/") + + "clusterRoleNameOverride" $clusterRoleNameOverride + "configMapOverrideName" "k10-prometheus-config" + "fullnameOverride" $fullnameOverride + "podLabels" $podLabels + "podAnnotations" $podAnnotations + ) +-}} + +{{- /*** K10 PROMETHEUS CONFIGMAP-RELOAD IMAGE *** + - global.airgapped.repository + - global.image.registry + - global.image.tag + - global.images.configmap-reload +*/ -}} +{{- $prometheus_configmap_reload_image := (dict + "registry" (.Values.global.airgapped.repository | default .Values.global.image.registry) + "repository" "configmap-reload" + "tag" (include "get.k10ImageTag" $) +) -}} + +{{- if (index .Values.global.images "configmap-reload") -}} + {{- $prometheus_configmap_reload_image = ( + include "k10.splitImage" (dict + "image" (index .Values.global.images "configmap-reload") + "path" "global.images.configmap-reload" + ) + ) | fromJson + -}} +{{- end -}} + +{{- if .Values.global.azMarketPlace -}} + {{- $prometheus_configmap_reload_image = (dict + "registry" .Values.global.azure.images.configmapreload.registry + "repository" .Values.global.azure.images.configmapreload.image + "tag" .Values.global.azure.images.configmapreload.tag + ) + -}} +{{- end -}} + +{{- $_ := mergeOverwrite .Values.prometheus.configmapReload.prometheus.image + (dict + "repository" (list $prometheus_configmap_reload_image.registry $prometheus_configmap_reload_image.repository | compact | join "/") + "tag" $prometheus_configmap_reload_image.tag + "digest" $prometheus_configmap_reload_image.digest + ) +-}} + +{{- /*** K10 PROMETHEUS SERVER IMAGE *** + - global.airgapped.repository + - global.image.registry + - global.image.tag + - global.images.prometheus +*/ -}} +{{- $prometheus_server_image := (dict + "registry" (.Values.global.airgapped.repository | default .Values.global.image.registry) + 
"repository" "prometheus" + "tag" (include "get.k10ImageTag" $) +) -}} +{{- if .Values.global.images.prometheus -}} + {{- $prometheus_server_image = ( + include "k10.splitImage" (dict + "image" .Values.global.images.prometheus + "path" "global.images.prometheus" + ) + ) | fromJson + -}} +{{- end -}} + +{{- if .Values.global.azMarketPlace -}} + {{- $prometheus_server_image = ( dict + "registry" .Values.global.azure.images.prometheus.registry + "repository" .Values.global.azure.images.prometheus.image + "tag" .Values.global.azure.images.prometheus.tag + ) + -}} +{{- end -}} + +{{- $_ := mergeOverwrite .Values.prometheus.server.image + (dict + "repository" (list $prometheus_server_image.registry $prometheus_server_image.repository | compact | join "/") + "tag" $prometheus_server_image.tag + "digest" $prometheus_server_image.digest + ) +-}} + +{{- /*** K10 IMAGE PULL SECRETS *** + - secrets.dockerConfig + - secrets.dockerConfigPath + - global.imagePullSecret +*/ -}} +{{- $image_pull_secret_names := list -}} +{{- if .Values.global.imagePullSecret -}} + {{- $image_pull_secret_names = append $image_pull_secret_names .Values.global.imagePullSecret -}} +{{- end -}} +{{- if (or .Values.secrets.dockerConfig .Values.secrets.dockerConfigPath) -}} + {{ $image_pull_secret_names = append $image_pull_secret_names "k10-ecr" -}} +{{- end -}} +{{- $image_pull_secret_names = $image_pull_secret_names | compact | uniq -}} + +{{- if $image_pull_secret_names -}} + {{- $image_pull_secrets := .Values.prometheus.imagePullSecrets | default list -}} + {{- range $name := $image_pull_secret_names -}} + {{- $image_pull_secrets = append $image_pull_secrets (dict "name" $name) -}} + {{- end -}} + {{- $_ := set .Values.prometheus "imagePullSecrets" $image_pull_secrets -}} +{{- end -}} + +{{- /*** K10 PERSISTENCE *** + - global.persistence.storageClass +*/ -}} +{{- $_ := mergeOverwrite .Values.prometheus.server.persistentVolume + (dict + "storageClass" 
(.Values.prometheus.server.persistentVolume.storageClass | default .Values.global.persistence.storageClass) + ) +-}} +{{- end }} diff --git a/charts/kasten/k10/7.0.1101/triallicense b/charts/kasten/k10/7.0.1101/triallicense new file mode 100644 index 0000000000..cfe6dd46bf --- /dev/null +++ b/charts/kasten/k10/7.0.1101/triallicense @@ -0,0 +1 @@ +Y3VzdG9tZXJOYW1lOiB0cmlhbHN0YXJ0ZXItbGljZW5zZQpkYXRlRW5kOiAnMjEwMC0wMS0wMVQwMDowMDowMC4wMDBaJwpkYXRlU3RhcnQ6ICcyMDIwLTAxLTAxVDAwOjAwOjAwLjAwMFonCmZlYXR1cmVzOgogIHRyaWFsOiBudWxsCmlkOiB0cmlhbC0wOWY4MzE5Zi0xODBmLTRhOTAtOTE3My1kOTJiNzZmMTgzNWUKcHJvZHVjdDogSzEwCnJlc3RyaWN0aW9uczoKICBub2RlczogNTAwCnNlcnZpY2VBY2NvdW50S2V5OiBudWxsCnZlcnNpb246IHYxLjAuMApzaWduYXR1cmU6IEYxbnVLUFV5STJtbDJGMmV1VHdGOXNZRTZMVU5rQ3ZiR2tTV1ZkT0ZqdERCb1B6SjUyVWFsVkFmRjVmQUxpcm5BcVhkcERnYi9YcnpxSEYrTE0xS2pEMVdXUFd0ZUdXNFc1anBPSFN0T296Y0c5M0pUUHF5M2l6TVk3RmczZVFLYTZzWDhBdnFwOXArWXVBMWNwTENlQ2dsR2dnOTVzSUFmYmRMMTBmV2d2RmR6QUt4dUZLN2psRzVtbG1CRVF5R0hrYWdoZFIrVGxzeUNTNEFkbXVBOEZodVUwZnRBdXN0b1M3R2JKd1BuTFI3STFZY1Q4OW8wU2xRZEJ2Yjg2QzdKbm1OdnY0aHhiSUo5TTJvWGJPSnQ4ZnBNcjhNWFR6YWRMTWJzSndhZ3VBVHlNUWF2cExHNXRPb0U2ZE1uMVlFVDZLdWZiYy9NdThVRDVYYXlDYTdkZz09Cg== diff --git a/charts/kasten/k10/7.0.1101/values.schema.json b/charts/kasten/k10/7.0.1101/values.schema.json new file mode 100644 index 0000000000..8ebb1424d5 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/values.schema.json @@ -0,0 +1,2874 @@ +{ + "$schema": "https://json-schema.org/draft/2019-09/schema", + "type": "object", + "properties": { + "rbac": { + "type": "object", + "title": "RBAC configuration", + "description": "Create RBAC settings", + "properties": { + "create": { + "title": "Enable RBAC creation", + "description": "Toggle RBAC resource creation", + "type": "boolean", + "default": true + } + } + }, + "serviceAccount": { + "type": "object", + "title": "ServiceAccount details", + "description": "Configure ServiceAccount", + "properties": { + "create": { + "type": "boolean", + "default": true, + "title": 
"Create a ServiceAccount", + "description": "Specifies whether a ServiceAccount should be created" + }, + "name": { + "type": "string", + "default": "", + "title": "The name of the ServiceAccount", + "description": "The name of the ServiceAccount to use. If not set and create is true, a name is derived using the release and chart names" + } + } + }, + "scc": { + "type": "object", + "title": "Security Context Constraints details", + "description": "Configure Security Context Constraints", + "properties": { + "create": { + "type": "boolean", + "default": false, + "title": "Create K10 SCC", + "description": "Whether to create a SecurityContextConstraints for K10 ServiceAccounts" + }, + "priority": { + "type": "integer", + "default": 0, + "title": "SCC priority", + "description": "Sets the SecurityContextConstraints priority" + } + } + }, + "networkPolicy": { + "type": "object", + "title": "NetworkPolicy details", + "description": "Configure NetworkPolicy", + "properties": { + "create": { + "type": "boolean", + "default": true, + "title": "Create NetworkPolicies", + "description": "Whether to create NetworkPolicies for the K10 services" + } + } + }, + "global": { + "type": "object", + "title": "Global settings", + "properties": { + "image": { + "type": "object", + "title": "K10 image configurations", + "description": "Change K10 image settings", + "properties": { + "registry": { + "type": "string", + "default": "gcr.io/kasten-images", + "title": "K10 image registry", + "description": "Change default K10 image registry" + }, + "tag": { + "type": "string", + "default": "", + "title": "K10 image tag", + "description": "Change default K10 tag" + }, + "pullPolicy": { + "type": "string", + "default": "Always", + "title": "Container images pullPolicy", + "description": "Change default pullPolicy for all the images", + "enum": [ + "IfNotPresent", + "Always", + "Never" + ] + } + } + }, + "airgapped": { + "type": "object", + "title": "Airgapped offline installation", + 
"description": "Configure Airgapped offline installation", + "properties": { + "repository": { + "type": "string", + "default": "", + "title": "helm repository", + "description": "The helm repository for offline (airgapped) installation" + } + } + }, + "persistence": { + "type": "object", + "title": "Persistent Volume global details", + "description": "Configure global settings for Persistent Volume", + "properties": { + "mountPath": { + "type": "string", + "default": "/mnt/k10state", + "title": "Persistent Volume global mount path", + "description": "Change default path for Persistent Volume mount" + }, + "enabled": { + "type": "boolean", + "default": true, + "title": "Enable Persistent Volume", + "description": "Create Persistent Volumes" + }, + "storageClass": { + "type": "string", + "default": "", + "title": "Persistent Volume global Storageclass", + "description": "If set to '-', dynamic provisioning is disabled. If undefined (the default) or set to null, the default provisioner is used. 
(e.g. gp2 on AWS, standard on GKE, AWS & OpenStack)" + }, + "accessMode": { + "type": "string", + "default": "ReadWriteOnce", + "title": "Persistent Volume global AccessMode", + "description": "Change default AccessMode for Persistent Volumes", + "enum": [ + "ReadWriteOnce", + "ReadOnlyMany", + "ReadWriteMany" + ] + }, + "size": { + "type": "string", + "default": "20Gi", + "title": "Persistent Volume size", + "description": "Change default size for Persistent Volumes" + }, + "metering": { + "type": "object", + "title": "Metering service Persistent Volume details", + "description": "Configure Persistent Volume for metering service", + "properties": { + "size": { + "type": "string", + "default": "2Gi", + "title": "Metering service Persistent Volume size", + "description": "If not set, global.persistence.size is used." + } + } + }, + "catalog": { + "type": "object", + "title": "Catalog service Persistent Volume details", + "description": "Configure Persistent Volume for catalog service", + "properties": { + "size": { + "type": "string", + "default": "", + "title": "Catalog service Persistent Volume size", + "description": "If not set, global.persistence.size is used." + } + } + }, + "jobs": { + "type": "object", + "title": "Jobs service Persistent Volume details", + "description": "Configure Persistent Volume for jobs service", + "properties": { + "size": { + "type": "string", + "default": "", + "title": "Jobs service Persistent Volume size", + "description": "If not set, global.persistence.size is used." + } + } + }, + "logging": { + "type": "object", + "title": "Logging service Persistent Volume details", + "description": "Configure Persistent Volume for logging service", + "properties": { + "size": { + "type": "string", + "default": "", + "title": "Logging service Persistent Volume size", + "description": "If not set, global.persistence.size is used." 
+ } + } + }, + "grafana": { + "type": "object", + "title": "Grafana service Persistent Volume details", + "description": "Configure Persistent Volume for grafana service", + "properties": { + "size": { + "type": "string", + "default": "5Gi", + "title": "Grafana service Persistent Volume size", + "description": "If not set, global.persistence.size is used." + } + } + } + } + }, + "podLabels": { + "type": "object", + "default": {}, + "title": "Custom labels to be set to all Kasten pods", + "description": "Configures custom pod labels to be set to all Kasten pods.", + "examples": [ + { + "foo": "bar" + } + ] + }, + "podAnnotations": { + "type": "object", + "default": {}, + "title": "Custom annotations to be set to all Kasten pods", + "description": "Configures custom pod annotations to be set to all Kasten pods.", + "examples": [ + { + "foo": "bar" + } + ] + }, + "rhMarketPlace": { + "type": "boolean", + "default": false, + "title": "RedHat marketplace config", + "description": "Set it to true while generating helm operator" + }, + "images": { + "type": "object", + "title": "Global image settings", + "properties": { + "aggregatedapis": { + "type": "string", + "default": "", + "title": "Aggregatedapis service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "auth": { + "type": "string", + "default": "", + "title": "Auth service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. 
If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "bloblifecyclemanager": { + "type": "string", + "default": "", + "title": "Bloblifecyclemanager service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "catalog": { + "type": "string", + "default": "", + "title": "Catalog service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "configmap-reload": { + "type": "string", + "title": "Configmap-reload service container image", + "default": "", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes." + }, + "controllermanager": { + "type": "string", + "default": "", + "title": "Controllermanager service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "crypto": { + "type": "string", + "default": "", + "title": "Crypto service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. 
This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "dashboardbff": { + "type": "string", + "default": "", + "title": "Dashboardbff service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "datamover": { + "type": "string", + "default": "", + "title": "Datamover service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes." + }, + "dex": { + "type": "string", + "default": "", + "title": "Dex service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes." + }, + "emissary": { + "type": "string", + "default": "", + "title": "Emissary service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "events": { + "type": "string", + "default": "", + "title": "Events service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. 
If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "executor": { + "type": "string", + "default": "", + "title": "Executor service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "frontend": { + "type": "string", + "default": "", + "title": "Frontend service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "gateway": { + "type": "string", + "default": "", + "title": "Gateway service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "grafana": { + "type": "string", + "title": "Grafana service container image", + "default": "", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes." + }, + "init": { + "type": "string", + "title": "Generic init container image", + "default": "", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes." 
+ }, + "jobs": { + "type": "string", + "default": "", + "title": "Jobs service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "kanister-tools": { + "type": "string", + "default": "", + "title": "Kanister-tools service container image", + "description": "Kanister-tools service container image contains a set of tools required for all kanister-related operations. It is also used for debugging, troubleshooting, and primer purposes" + }, + "kanister": { + "type": "string", + "default": "", + "title": "Kanister service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "k10tools": { + "type": "string", + "default": "", + "title": "k10tools service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "logging": { + "type": "string", + "default": "", + "title": "Logging service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. 
If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "metering": { + "type": "string", + "default": "", + "title": "Metering service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "paygo_daemonset": { + "type": "string", + "default": "", + "title": "Paygo_daemonset service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes." + }, + "prometheus": { + "type": "string", + "default": "", + "title": "Prometheus service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes." + }, + "repositories": { + "type": "string", + "default": "", + "title": "Repositories service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "state": { + "type": "string", + "default": "", + "title": "State service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. 
If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "upgrade": { + "type": "string", + "default": "", + "title": "Upgrade service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes. If not set, the image name is formed with '(global.airgapped.repository)|(global.image.registry)/:(Chart.AppVersion)|(image.tag)'" + }, + "vbrintegrationapi": { + "type": "string", + "default": "", + "title": "Vbrintegrationapi service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes." + }, + "garbagecollector": { + "type": "string", + "default": "", + "title": "Garbagecollector service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes." + }, + "metric-sidecar": { + "type": "string", + "default": "", + "title": "Metric-sidecar service container image", + "description": "Used for packaging RedHat Operator. Setting this flag along with global.rhMarketPlace=true overrides the default image name. This flag is only for internal purposes." + } + } + }, + "imagePullSecret": { + "type": "string", + "default": "", + "title": "Container image pull secret", + "description": "Secret which contains docker config for private repository. Use `k10-ecr` when secrets.dockerConfigPath is used." 
+ }, + "prometheus": { + "type": "object", + "title": "Prometheus settings", + "description": "Global prometheus settings", + "properties": { + "external": { + "type": "object", + "title": "External prometheus settings", + "description": "Configure prometheus", + "properties": { + "host": { + "type": "string", + "default": "", + "title": "External prometheus host name", + "description": "Set prometheus host name" + }, + "port": { + "type": "string", + "default": "", + "title": "External prometheus port number", + "description": "Set prometheus port number" + }, + "baseURL": { + "type": "string", + "default": "", + "title": "External prometheus baseURL", + "description": "Set prometheus baseURL" + } + } + } + } + }, + "network": { + "type": "object", + "title": "Network settings", + "description": "Global network settings", + "properties": { + "enable_ipv6": { + "type": "boolean", + "default": false, + "title": "Enable ipv6", + "description": "Set true to enable ipv6" + } + } + } + } + }, + "route": { + "type": "object", + "title": "OpenShift route configuration", + "description": "Configure OpenShift Route", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Expose dashboard via route", + "description": "Whether the K10 dashboard should be exposed via route" + }, + "host": { + "type": "string", + "default": "", + "title": "Host name", + "description": "Set Host name for the route" + }, + "path": { + "type": "string", + "default": "", + "title": "Route path", + "description": "Set Path for the route" + }, + "annotations": { + "type": "object", + "default": {}, + "title": "Route annotations", + "description": "Set annotations for the route", + "examples": [ + { + "kubernetes.io/tls-acme": "true", + "haproxy.router.openshift.io/disable_cookies": "true", + "haproxy.router.openshift.io/balance": "roundrobin" + } + ] + }, + "labels": { + "type": "object", + "default": {}, + "title": "Route label", + "description": "Set Labels for the 
route resource", + "examples": [ + { + "foo": "bar" + } + ] + }, + "tls": { + "type": "object", + "title": "Route TLS configuration", + "description": "Set TLS configuration for the route", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable TLS", + "description": "Whether to enable TLS" + }, + "insecureEdgeTerminationPolicy": { + "type": "string", + "default": "Redirect", + "title": "Route Termination Policy", + "description": "What to do in case of an insecure traffic edge termination", + "enum": [ + "None", + "Allow", + "Redirect", + "" + ] + }, + "termination": { + "type": "string", + "default": "edge", + "title": "Termination Schema", + "description": "Set termination Schema", + "enum": [ + "edge", + "passthrough", + "reencrypt" + ] + } + } + } + } + }, + "dexImage": { + "type": "object", + "title": "Dex image config", + "description": "Specify Dex image config", + "properties": { + "registry": { + "type": "string", + "default": "ghcr.io", + "title": "Dex image registry", + "description": "Change default image registry for Dex images" + }, + "repository": { + "type": "string", + "default": "dexidp", + "title": "Dex image repository", + "description": "Change default image repository for Dex images" + }, + "image": { + "type": "string", + "default": "dex", + "title": "Dex image name", + "description": "Change default image name for Dex images" + } + } + }, + "kanisterToolsImage": { + "type": "object", + "title": "kanister tools image config", + "description": "Set kanister tools image config", + "properties": { + "registry": { + "type": "string", + "default": "ghcr.io", + "title": "kanister-tools image registry", + "description": "Change default image registry for kanister-tools images" + }, + "repository": { + "type": "string", + "default": "kanisterio", + "title": "kanister-tools image repository", + "description": "Change default image repository for kanister-tools images" + }, + "image": { + "type": "string", + 
"default": "kanister-tools", + "title": "Kanister tools image name", + "description": "Change default image name for kanister-tools images" + }, + "pullPolicy": { + "type": "string", + "default": "Always", + "title": "Kanister tools image pullPolicy", + "description": "Change kanister-tools image pullPolicy", + "enum": [ + "IfNotPresent", + "Always", + "Never" + ] + } + } + }, + "ingress": { + "type": "object", + "title": "Ingress configuration", + "description": "Add ingress resource configuration", + "properties": { + "annotations": { + "type": "object", + "default": {}, + "title": "Ingress annotations", + "description": "Add optional annotations to the Ingress resource" + }, + "create": { + "type": "boolean", + "default": false, + "title": "Expose dashboard via ingress", + "description": "Whether the K10 dashboard should be exposed via ingress" + }, + "tls": { + "type": "object", + "title": "TLS configuration for ingress", + "description": "Set TLS configuration for ingress", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable TLS", + "description": "Configures TLS for ingress.host" + }, + "secretName": { + "type": "string", + "default": "", + "title": "Optional TLS secret name", + "description": "Specifies the name of the secret to configure ingress.tls[].secretName" + } + } + }, + "name": { + "type": "string", + "default": "", + "title": "Ingress name", + "description": "Optional name of the Ingress object for the K10 dashboard." 
+ }, + "class": { + "type": "string", + "default": "", + "title": "Ingress controller class", + "description": "Cluster ingress controller class: nginx, GCE" + }, + "host": { + "type": "string", + "default": "", + "title": "Ingress host name", + "description": "FQDN for name-based virtual host", + "examples": [ + "k10.example.com" + ] + }, + "urlPath": { + "type": "string", + "default": "", + "title": "Ingress URL path", + "description": "URL path for K10 Dashboard", + "examples": [ + "/k10" + ] + }, + "pathType": { + "type": "string", + "default": "ImplementationSpecific", + "title": "Ingress path type", + "description": "Set the path type for the ingress resource", + "enum": [ + "Exact", + "Prefix", + "ImplementationSpecific" + ] + }, + "defaultBackend": { + "type": "object", + "title": "Ingress default backend", + "description": "Optional default backend for the Ingress object.", + "properties": { + "service": { + "type": "object", + "title": "Ingress default backend service", + "description": "A service referenced by the default backend (mutually exclusive with `resource`).", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable service default backend.", + "description": "Enable the default backend backed by a service." + }, + "name": { + "type": "string", + "default": "", + "title": "Service name", + "description": "Name of a service referenced by the default backend." + }, + "port": { + "type": "object", + "title": "Service port", + "description": "A port of a service referenced by the default backend.", + "properties": { + "name": { + "type": "string", + "default": "", + "title": "Port name", + "description": "Port name of a service referenced by the default backend (mutually exclusive with `number`)." + }, + "number": { + "type": "integer", + "default": 0, + "title": "Port number", + "description": "Port number of a service referenced by the default backend (mutually exclusive with `name`)." 
+ } + } + } + } + }, + "resource": { + "type": "object", + "title": "Ingress default backend resource", + "description": "A resource referenced by the default backend (mutually exclusive with `service`).", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable resource default backend.", + "description": "Enable the default backend backed by a resource." + }, + "apiGroup": { + "type": "string", + "default": "", + "title": "Resource API group", + "description": "Optional API group of a resource referenced by the default backend.", + "examples": [ + "k8s.example.com" + ] + }, + "kind": { + "type": "string", + "default": "", + "title": "Resource kind", + "description": "Type of a resource referenced by the default backend.", + "examples": [ + "StorageBucket" + ] + }, + "name": { + "type": "string", + "default": "", + "title": "Resource name", + "description": "Name of a resource referenced by the default backend." + } + } + } + } + } + } + }, + "eula": { + "type": "object", + "title": "EULA configuration", + "properties": { + "accept": { + "type": "boolean", + "default": false, + "title": "Enable accept EULA before installation", + "description": "An End-User license agreement (EULA) is a legal agreement that grants a user a license to use an application or software. Users must consent to the EULA before purchasing, installing, or downloading an application or software owned by the service provider." 
+ } + } + }, + "license": { + "type": "string", + "default": "", + "title": "License from Kasten", + "description": "Add license string obtained from Kasten" + }, + "cluster": { + "type": "object", + "title": "Cluster configuration", + "description": "Set cluster configuration", + "properties": { + "domainName": { + "type": "string", + "default": "", + "title": "Domain name of the cluster", + "description": "Set domain name of the cluster" + } + } + }, + "multicluster": { + "type": "object", + "title": "Multi-cluster configuration", + "description": "Configure the multi-cluster system", + "properties": { + "enabled": { + "type": "boolean", + "default": true, + "title": "Enable the multi-cluster system", + "description": "Choose whether to enable the multi-cluster system components and capabilities" + }, + "primary": { + "type": "object", + "title": "Multi-cluster primary configuration", + "description": "Configure multi-cluster primary", + "properties": { + "create": { + "type": "boolean", + "default": false, + "title": "Setup cluster as a multi-cluster primary", + "description": "Choose whether to setup cluster as a multi-cluster primary" + }, + "name": { + "type": "string", + "default": "", + "title": "Primary cluster name", + "description": "Choose the cluster name for multi-cluster primary" + }, + "ingressURL": { + "type": "string", + "default": "", + "title": "Primary cluster dashboard URL", + "description": "Choose the dashboard URL for the multi-cluster primary; e.g. https://cluster-name.domain/k10" + } + } + } + } + }, + "prometheus": { + "type": "object", + "title": "Internal Prometheus configuration", + "description": "Configure internal Prometheus", + "properties": { + "rbac": { + "type": "object", + "title": "Prometheus rbac", + "description": "Configure Prometheus rbac resources", + "properties": { + "create": { + "type": "boolean", + "default": false, + "title": "Enable Prometheus rbac. 
Warning - cluster wide permissions", + "description": "Choose whether to create Prometheus RBAC configuration. Warning: Enabling this action will allow Prometheus permission to scrape pods in all K8s namespaces." + } + } + }, + "server": { + "type": "object", + "title": "Prometheus Server", + "description": "Configure Prometheus Server", + "properties": { + "enabled": { + "type": "boolean", + "default": true, + "title": "Enable Prometheus server", + "description": "Create Prometheus server" + }, + "securityContext": { + "type": "object", + "title": "Prometheus server securityContext", + "description": "Configure Prometheus server securityContext", + "properties": { + "runAsUser": { + "type": "integer", + "default": 65534, + "title": "runAsUser ID", + "description": "Set securityContext runAsUser ID" + }, + "runAsNonRoot": { + "type": "boolean", + "default": true, + "title": "Enable runAsNonRoot", + "description": "Enable securityContext runAsNonRoot" + }, + "runAsGroup": { + "type": "integer", + "default": 65534, + "title": "runAsGroup ID", + "description": "Set securityContext runAsGroup ID" + }, + "fsGroup": { + "type": "integer", + "default": 65534, + "title": "fsGroup ID", + "description": "Set securityContext fsGroup ID" + } + } + }, + "retention": { + "type": "string", + "default": "30d", + "title": "Prometheus retention", + "description": "Set retention period for Prometheus" + }, + "persistentVolume": { + "type": "object", + "title": "Prometheus persistent volume", + "description": "Configure Prometheus persistent volume", + "properties": { + "storageClass": { + "type": "string", + "default": "", + "title": "StorageClassName used to create Prometheus PVC", + "description": "Setting this option overwrites global StorageClass value" + } + } + }, + "fullnameOverride": { + "type": "string", + "default": "prometheus-server", + "title": "Prometheus server deployment name", + "description": "Override default Prometheus server deployment name" + }, + "baseURL": { + 
"type": "string", + "default": "/k10/prometheus/", + "title": "Prometheus external url path", + "description": "Prometheus external url path at which the server can be accessed" + }, + "prefixURL": { + "type": "string", + "default": "/k10/prometheus", + "title": "Prometheus prefix slug", + "description": "Prometheus prefix slug at which the server can be accessed" + } + } + } + } + }, + "jaeger": { + "type": "object", + "title": "Jaeger configuration", + "description": "Jaeger tracing settings", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable Jaeger tracing", + "description": "Set true to enable Jaeger tracing" + }, + "agentDNS": { + "type": "string", + "default": "", + "title": "Jaeger agentDNS", + "description": "Set agentDNS for Jaeger tracing" + } + } + }, + "service": { + "type": "object", + "title": "K10 K8s services config", + "properties": { + "externalPort": { + "type": "integer", + "default": 8000, + "title": "externalPort for K10 services", + "description": "Override default 8000 externalPort for K10 services" + }, + "internalPort": { + "type": "integer", + "default": 8000, + "title": "internalPort for K10 services", + "description": "Override default 8000 internalPort for K10 services" + }, + "aggregatedApiPort": { + "type": "integer", + "default": 10250, + "title": "aggregatedApiPort for aggapi service", + "description": "Override default 10250 port for aggapi service" + }, + "gatewayAdminPort": { + "type": "integer", + "default": 8877, + "title": "Gateway admin port", + "description": "Override default 8877 gateway admin port" + } + } + }, + "secrets": { + "type": "object", + "title": "K10 secrets", + "description": "K10 secrets configuration", + "properties": { + "awsAccessKeyId": { + "type": "string", + "default": "", + "title": "AWS access key ID", + "description": "Set AWS access key ID required for AWS deployment" + }, + "awsSecretAccessKey": { + "type": "string", + "default": "", + "title": "AWS 
secret access key", + "description": "Set AWS access key secret" + }, + "awsIamRole": { + "type": "string", + "default": "", + "title": "AWS IAM Role", + "description": "ARN of the AWS IAM role assumed by K10 to perform any AWS operation" + }, + "awsClientSecretName": { + "type": "string", + "default": "", + "title": "Secret with AWS credentials and/or IAM Role", + "description": "Specify a Secret directly instead of having to provide awsAccessKeyId, awsSecretAccessKey and awsIamRole" + }, + "googleApiKey": { + "type": "string", + "default": "", + "title": "Google API Key", + "description": "Non-default base64 encoded GCP Service Account key" + }, + "googleProjectId": { + "type": "string", + "default": "", + "title": "Google Project ID", + "description": "Set Google Project ID other than the one in the GCP Service Account" + }, + "googleClientSecretName": { + "type": "string", + "default": "", + "title": "Secret with Google credentials", + "description": "Specify a Secret directly instead of having to provide googleApiKey and googleProjectId" + }, + "tlsSecret": { + "type": "string", + "default": "", + "title": "K8s TLS secret name for the k10 Gateway service", + "description": "Specify a Secret directly instead of having to provide both the cert and key. This reduces the security risk a bit by not caching the certs and keys in the bash history." 
+ }, + "dockerConfig": { + "type": "string", + "default": "", + "title": "Docker config", + "description": "base64 representation of your Docker credentials to pull docker images from a private registry" + }, + "dockerConfigPath": { + "type": "string", + "default": "", + "title": "Docker config path", + "description": "Path to Docker config file to create secret from" + }, + "azureTenantId": { + "type": "string", + "default": "", + "title": "Azure tenant ID", + "description": "Azure tenant ID required for Azure deployment" + }, + "azureClientId": { + "type": "string", + "default": "", + "title": "Azure client ID", + "description": "Azure Service App ID" + }, + "azureClientSecret": { + "type": "string", + "default": "", + "title": "Azure client Secret", + "description": "Azure Service APP secret" + }, + "azureClientSecretName": { + "type": "string", + "default": "", + "title": "Secret with Azure credentials", + "description": "Specify a Secret directly instead of having to provide azureClientId, azureTenantId and azureClientSecret" + }, + "azureResourceGroup": { + "type": "string", + "default": "", + "title": "Azure resource group", + "description": "Resource Group name that was created for the Kubernetes cluster" + }, + "azureSubscriptionID": { + "type": "string", + "default": "", + "title": "Azure subscription ID", + "description": "Subscription ID in your Azure tenant" + }, + "azureResourceMgrEndpoint": { + "type": "string", + "default": "", + "title": "Azure resource manager endpoint", + "description": "Resource management endpoint for the Azure Stack instance" + }, + "azureADEndpoint": { + "type": "string", + "default": "", + "title": "Azure AD endpoint", + "description": "Azure Active Directory login endpoint" + }, + "azureADResourceID": { + "type": "string", + "default": "", + "title": "Azure Active Directory resource ID", + "description": "Azure Active Directory resource ID to obtain AD tokens" + }, + "microsoftEntraIDEndpoint": { + "type": "string", + 
"default": "", + "title": "Microsoft Entra ID endpoint", + "description": "Microsoft Entra ID login endpoint" + }, + "microsoftEntraIDResourceID": { + "type": "string", + "default": "", + "title": "Microsoft Entra ID resource ID", + "description": "Microsoft Entra ID resource ID to obtain AD tokens" + }, + "azureCloudEnvID": { + "type": "string", + "default": "", + "title": "Azure Cloud Environment ID", + "description": "Azure Cloud Environment ID" + }, + "apiTlsCrt": { + "type": "string", + "default": "", + "title": "API TLS Certificate", + "description": "K8s API server TLS certificate" + }, + "apiTlsKey": { + "type": "string", + "default": "", + "title": "API TLS Key", + "description": "K8s API server TLS key" + }, + "vsphereEndpoint": { + "type": "string", + "default": "", + "title": "vSphere endpoint", + "description": "vSphere endpoint for login" + }, + "vsphereUsername": { + "type": "string", + "default": "", + "title": "vSphere username", + "description": "vSphere username for login" + }, + "vspherePassword": { + "type": "string", + "default": "", + "title": "vSphere password", + "description": "vSphere password for login" + }, + "vsphereClientSecretName": { + "type": "string", + "default": "", + "title": "Secret with vSphere credentials", + "description": "Specify a Secret directly instead of having to provide vsphereUsername and vspherePassword" + } + } + }, + "metering": { + "type": "object", + "title": "Metering service config", + "description": "Metering service settings", + "properties": { + "reportingKey": { + "type": "string", + "default": "", + "title": "Reporting key", + "description": "Base64 encoded reporting key" + }, + "consumerId": { + "type": "string", + "default": "", + "title": "Consumer ID", + "description": "Consumer ID in the format project:" + }, + "awsRegion": { + "type": "string", + "default": "", + "title": "AWS Region", + "description": "Set AWS_REGION for metering service" + }, + "awsMarketPlaceIamRole": { + "type": "string", + "default": "", + "title": 
"AWS Marketplace IAM Role", + "description": "Set AWS marketplace IAM Role" + }, + "awsMarketplace": { + "type": "boolean", + "default": false, + "title": "AWS Marketplace", + "description": "Set AWS cloud metering license mode" + }, + "awsManagedLicense": { + "type": "boolean", + "default": false, + "title": "AWS managed license", + "description": "Set AWS managed license mode" + }, + "licenseConfigSecretName": { + "type": "string", + "default": "", + "title": "License config secret name", + "description": "AWS managed license config secret" + }, + "serviceAccount": { + "type": "object", + "title": "Metering service serviceAccount", + "description": "Configuration for metering service serviceAccount", + "properties": { + "create": { + "type": "boolean", + "default": false, + "title": "Create metering service serviceAccount", + "description": "Create metering service serviceAccount" + }, + "name": { + "type": "string", + "default": "", + "title": "Metering ServiceAccount name", + "description": "Set name for metering ServiceAccount" + } + } + }, + "mode": { + "type": "string", + "default": "", + "title": "Control license reporting", + "description": "Set to `airgap` for private-network installs" + }, + "redhatMarketplacePayg": { + "type": "boolean", + "default": false, + "title": "Red Hat cloud metering", + "description": "Set Red Hat cloud metering license mode" + }, + "reportCollectionPeriod": { + "type": "integer", + "default": 1800, + "title": "Report collection period", + "description": "Metric report collection period (in seconds)" + }, + "reportPushPeriod": { + "type": "integer", + "default": 3600, + "title": "Report push period", + "description": "Metric report push period (in seconds)" + }, + "promoID": { + "type": "string", + "default": "", + "title": "K10 promotion ID", + "description": "K10 promotion ID from marketing campaigns" + } + } + }, + "clusterName": { + "type": "string", + "default": "", + "title": "Cluster name", + "description": "Cluster name 
for better logs visibility" + }, + "executorReplicas": { + "type": "integer", + "default": 3, + "title": "Number of executor service pod replicas", + "description": "Set number of executor service pod replicas for better performance" + }, + "logLevel": { + "type": "string", + "default": "info", + "title": "Log level", + "description": "Change default log level" + }, + "externalGateway": { + "type": "object", + "title": "External gateway", + "description": "Configure external gateway for K10 API services", + "properties": { + "create": { + "type": "boolean", + "default": false, + "title": "Enable external gateway", + "description": "Create external gateway service" + }, + "annotations": { + "type": "object", + "title": "The annotations Schema", + "default": {}, + "description": "Standard annotations for the services" + }, + "fqdn": { + "type": "object", + "title": "Host and domain name for the K10 API services", + "description": "Configure host and domain name for the K10 API services", + "properties": { + "name": { + "type": "string", + "default": "", + "title": "Domain name for the K10 API services", + "description": "Domain name for the K10 API services" + }, + "type": { + "type": "string", + "default": "", + "title": "Gateway type", + "description": "Supported gateway type: route53-mapper or external-dns" + } + } + }, + "awsSSLCertARN": { + "type": "string", + "default": "", + "title": "AWS SSL Cert ARN", + "description": "ARN for the AWS ACM SSL certificate used in the K10 API server" + } + } + }, + "auth": { + "type": "object", + "title": "Authentication settings", + "description": "Configure K10 dashboard authentication", + "properties": { + "groupAllowList": { + "type": "array", + "default": [], + "items": { + "type": "string" + }, + "title": "List of groups allowed to access K10 dashboard", + "description": "A list of groups whose members are allowed access to K10's dashboard", + "examples": [ + [ + "group1", + "group2" + ] + ] + }, + "basicAuth": { + 
"type": "object", + "title": "Basic authentication for the K10 dashboard", + "description": "Configure basic authentication for the K10 dashboard", + "properties": { + "enabled": { + "title": "Enable basic authentication", + "description": "Enables basic authentication to the K10 dashboard that allows users to login with username and password", + "type": "boolean", + "default": false + }, + "secretName": { + "type": "string", + "default": "", + "title": "Secret with basic auth creds", + "description": "Name of an existing Secret that contains a file generated with htpasswd" + }, + "htpasswd": { + "type": "string", + "default": "", + "title": "Basic authentication creds", + "description": "A username and password pair separated by a colon character" + } + } + }, + "tokenAuth": { + "type": "object", + "title": "Token based authentication", + "description": "Configuration for Token based authentication for the K10 dashboard", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable token based authentication", + "description": "Enable token based authentication to access K10 dashboard" + } + } + }, + "oidcAuth": { + "type": "object", + "default": {}, + "title": "Open ID Connect based authentication", + "description": "Configuration for Open ID Connect based authentication for the K10 dashboard", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable Open ID Connect based authentication", + "description": "Enable Open ID Connect based authentication to access K10 dashboard" + }, + "providerURL": { + "type": "string", + "default": "", + "title": "OIDC Provider URL", + "description": "URL for the OIDC Provider" + }, + "redirectURL": { + "type": "string", + "default": "", + "title": "K10 gateway service URL", + "description": "URL to the K10 gateway service" + }, + "scopes": { + "type": "string", + "default": "", + "title": "OIDC scopes", + "description": "Space separated OIDC scopes required for 
userinfo", + "examples": [ + "profile email" + ] + }, + "prompt": { + "type": "string", + "title": "OIDC prompt type", + "description": "The type of prompt to be used during authentication", + "default": "select_account", + "enum": [ + "none", + "consent", + "login", + "select_account" + ] + }, + "clientID": { + "type": "string", + "default": "", + "title": "OIDC client ID", + "description": "Client ID given by the OIDC provider" + }, + "clientSecret": { + "type": "string", + "default": "", + "title": "OIDC client secret", + "description": "Client secret given by the OIDC provider" + }, + "clientSecretName": { + "type": "string", + "default": "", + "title": "Reference to secret", + "description": "Secret containing OIDC client ID and OIDC client secret" + }, + "usernameClaim": { + "type": "string", + "default": "", + "title": "OIDC username claim", + "description": "The claim to be used as the username" + }, + "usernamePrefix": { + "type": "string", + "default": "", + "title": "OIDC username prefix", + "description": "Prefix that has to be used with the username obtained from the username claim" + }, + "groupClaim": { + "type": "string", + "default": "", + "title": "OIDC group claim", + "description": "Name of a custom OpenID Connect claim for specifying user groups" + }, + "groupPrefix": { + "type": "string", + "default": "", + "title": "OIDC group prefix", + "description": "All groups will be prefixed with this value to prevent conflicts" + }, + "logoutURL": { + "type": "string", + "default": "", + "title": "OIDC logout endpoint", + "description": "URL to your OIDC provider's logout endpoint" + }, + "secretName": { + "type": "string", + "default": "", + "title": "OIDC config based existing secret", + "description": "Must include providerURL, redirectURL, scopes, clientID/secret and logoutURL" + }, + "sessionDuration": { + "type": "string", + "default": "1h", + "title": "OIDC session duration", + "description": "Maximum OIDC session duration. 
Default value is 1 hour" + }, + "refreshTokenSupport": { + "type": "boolean", + "default": false, + "title": "OIDC Refresh Token support", + "description": "Enable OIDC Refresh Token support. Disabled by default." + } + } + }, + "openshift": { + "type": "object", + "title": "OpenShift OAuth server based authentication", + "description": "OpenShift OAuth server based authentication for K10 dashboard", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable OpenShift OAuth server based authentication", + "description": "Enable OpenShift OAuth server based authentication to access K10 dashboard" + }, + "serviceAccount": { + "type": "string", + "default": "", + "title": "Service account that represents an OAuth client", + "description": "Name of the service account that represents an OAuth client" + }, + "clientSecret": { + "type": "string", + "default": "", + "title": "Service account token", + "description": "The token corresponding to the service account" + }, + "clientSecretName": { + "type": "string", + "default": "", + "title": "Service account token secret", + "description": "The secret that contains the token corresponding to the service account" + }, + "dashboardURL": { + "type": "string", + "default": "", + "title": "K10 dashboard URL", + "description": "The URL used for accessing K10's dashboard" + }, + "openshiftURL": { + "type": "string", + "default": "", + "title": "OpenShift URL", + "description": "The URL for accessing OpenShift's API server" + }, + "insecureCA": { + "type": "boolean", + "default": false, + "title": "Disable SSL verification of connections to OpenShift", + "description": "Set true to turn off SSL verification of connections to OpenShift" + }, + "useServiceAccountCA": { + "type": "boolean", + "default": false, + "title": "use the CA certificate corresponding to the Service Account", + "description": "Usually found at ``/var/run/secrets/kubernetes.io/serviceaccount/ca.crt``" + }, + "secretName": { + 
"type": "string", + "default": "", + "title": "The Kubernetes Secret that contains OIDC settings", + "description": "Specify Kubernetes Secret that contains OIDC settings" + }, + "usernameClaim": { + "type": "string", + "default": "email", + "title": "Username claim", + "description": "The claim to be used as the username" + }, + "usernamePrefix": { + "type": "string", + "default": "", + "title": "Username prefix", + "description": "Prefix that has to be used with the username obtained from the username claim" + }, + "groupnameClaim": { + "type": "string", + "default": "groups", + "title": "custom OpenID Connect claim name for specifying user groups", + "description": "Name of a custom OpenID Connect claim for specifying user groups" + }, + "groupnamePrefix": { + "type": "string", + "default": "", + "title": "User group name prefix", + "description": "Prefix for user group name" + }, + "caCertsAutoExtraction": { + "type": "boolean", + "default": true, + "title": "Enable the OCP CA certificates automatic extraction", + "description": "Enable the OCP CA certificates automatic extraction to the K10 namespace" + } + } + }, + "ldap": { + "type": "object", + "title": "Active Directory/LDAP based authentication ", + "description": "Active Directory/LDAP based authentication for the K10 dashboard", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable Active Directory/LDAP based authentication", + "description": "Enable Active Directory/LDAP based authentication to access K10 dashboard" + }, + "restartPod": { + "type": "boolean", + "default": false, + "title": "force a restart of the authentication service pod", + "description": "force a restart of the authentication service pod (useful when updating authentication config)" + }, + "dashboardURL": { + "type": "string", + "default": "", + "title": "K10 dashboard URL", + "description": "The URL used for accessing K10's dashboard" + }, + "host": { + "type": "string", + "default": "", + 
"title": "Host and port of the AD/LDAP server", + "description": "Host and optional port of the AD/LDAP server in the form `host:port`" + }, + "insecureNoSSL": { + "type": "boolean", + "default": false, + "title": "Insecure AD/LDAP host", + "description": "Set if the AD/LDAP host is not using TLS" + }, + "insecureSkipVerifySSL": { + "type": "boolean", + "default": false, + "title": "Skip SSL verification of connections to the AD/LDAP host", + "description": "Turn off SSL verification of connections to the AD/LDAP host" + }, + "startTLS": { + "type": "boolean", + "default": false, + "title": "TLS protocol", + "description": "When set to true, ldap:// is used to connect to the server followed by creation of a TLS session. When set to false, ldaps:// is used." + }, + "bindDN": { + "type": "string", + "default": "", + "title": "Username for connecting to the AD/LDAP host", + "description": "The Distinguished Name (username) used for connecting to the AD/LDAP host" + }, + "bindPW": { + "type": "string", + "default": "", + "title": "The password for `bindDN`", + "description": "The password corresponding to the `bindDN` for connecting to the AD/LDAP host" + }, + "bindPWSecretName": { + "type": "string", + "default": "", + "title": "Secret name containing the password", + "description": "Secret name containing the password corresponding to the `bindDN` for connecting to the AD/LDAP host" + }, + "userSearch": { + "type": "object", + "title": "User search config", + "description": "AD/LDAP user search config", + "properties": { + "baseDN": { + "type": "string", + "default": "", + "title": "The base DN to start the AD/LDAP search from", + "description": "The base Distinguished Name to start the AD/LDAP search from" + }, + "filter": { + "type": "string", + "default": "", + "title": "Filter to apply when searching", + "description": "Optional filter to apply when searching the directory" + }, + "username": { + "type": "string", + "default": "", + "title": "Username to 
search in the directory", + "description": "Attribute used for comparing user entries when searching the directory" + }, + "idAttr": { + "type": "string", + "default": "", + "title": "Attribute in a user's entry that should map to the user ID field in a token", + "description": "AD/LDAP attribute in a user's entry that should map to the user ID field in a token" + }, + "emailAttr": { + "type": "string", + "default": "", + "title": "Attribute in a user's entry that should map to the email field in a token", + "description": "AD/LDAP attribute in a user's entry that should map to the email field in a token" + }, + "nameAttr": { + "type": "string", + "default": "", + "title": "Attribute in a user's entry that should map to the name field in a token", + "description": "Attribute in a user's entry that should map to the name field in a token" + }, + "preferredUsernameAttr": { + "type": "string", + "default": "", + "title": "Attribute in a user's entry that should map to the preferred_username field in a token", + "description": "AD/LDAP attribute in a user's entry that should map to the preferred_username field in a token" + } + } + }, + "groupSearch": { + "type": "object", + "title": "AD/LDAP group search config", + "description": "AD/LDAP group search config", + "properties": { + "baseDN": { + "type": "string", + "default": "", + "title": "The base Distinguished Name", + "description": "The base Distinguished Name to start the AD/LDAP group search from" + }, + "filter": { + "type": "string", + "default": "", + "title": "Search filter", + "description": "filter to apply when searching the directory for groups" + }, + "userMatchers": { + "type": "array", + "items": { + "type": "object", + "properties": { + "userAttr": { + "type": "string", + "default": "", + "title": "Attribute in the user's entry", + "description": "Attribute in the user's entry that must match the groupAttr when searching for groups" + }, + "groupAttr": { + "type": "string", + "default": "", + 
"title": "Attribute in the group's entry", + "description": "Attribute in the group's entry that must match the userAttr when searching for groups" + } + } + }, + "default": [], + "title": "List of field pairs that are used to match a user to a group", + "description": "List of field pairs that are used to match a user to a group" + }, + "nameAttr": { + "type": "string", + "default": "", + "title": "Attribute that represents a group's name in the directory", + "description": "The AD/LDAP attribute that represents a group's name in the directory" + } + } + }, + "secretName": { + "type": "string", + "default": "", + "title": "The Kubernetes Secret with OIDC settings", + "description": "The Kubernetes Secret that contains OIDC settings" + }, + "usernameClaim": { + "type": "string", + "default": "email", + "title": "Username claim", + "description": "The claim to be used as the username" + }, + "usernamePrefix": { + "type": "string", + "default": "", + "title": "Username prefix", + "description": "Prefix that has to be used with the username obtained from the username claim" + }, + "groupnameClaim": { + "type": "string", + "default": "groups", + "title": "Name of a custom OpenID Connect claim for specifying user groups", + "description": "Name of a custom OpenID Connect claim for specifying user groups" + }, + "groupnamePrefix": { + "type": "string", + "default": "", + "title": "Group name prefix", + "description": "Prefix for user group name" + } + } + }, + "k10AdminUsers": { + "type": "array", + "items": { + "type": "string" + }, + "default": [], + "title": "Admin users list", + "description": "A list of users who are granted admin level access to K10's dashboard" + }, + "k10AdminGroups": { + "type": "array", + "items": { + "type": "string" + }, + "default": [], + "title": "Admin groups list", + "description": "A list of groups whose members are granted admin level access to K10's dashboard" + } + } + }, + "optionalColocatedServices": { + "type": "object", + "title": 
"Optional Colocated services config", + "description": "Settings to enable optional colocated services", + "properties": { + "vbrintegrationapi": { + "title": "VBRIntegrationAPI service", + "description": "Settings for VBRIntegrationAPI service", + "type": "object", + "properties": { + "enabled": { + "title": "Enable VBRIntegrationAPI service", + "description": "Set true to enable VBRIntegrationAPI service", + "type": "boolean", + "default": true + } + } + } + } + }, + "cacertconfigmap": { + "type": "object", + "title": "CA Certificate ConfigMap", + "description": "ConfigMap containing a certificate for a trusted root certificate authority", + "properties": { + "name": { + "title": "Name of the ConfigMap", + "description": "Name of the K8s ConfigMap containing a certificate for a trusted root certificate authority", + "type": "string", + "default": "" + } + } + }, + "apiservices": { + "type": "object", + "title": "Skip APIService objects creation", + "description": "Skip APIService objects creation if they already exist", + "properties": { + "deployed": { + "type": "boolean", + "default": true, + "title": "Whether APIService objects are deployed", + "description": "Set true if APIService objects exist. 
Setting false will recreate the objects" + } + } + }, + "injectKanisterSidecar": { + "type": "object", + "title": "Kanister sidecar injection for workload pods", + "description": "Configure Kanister sidecar injection for workload pods", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable Kanister sidecar injection for workload pods", + "description": "Set true to enable Kanister sidecar injection for workload pods" + }, + "namespaceSelector": { + "type": "object", + "title": "namespaceSelector config", + "description": "Configure namespaceSelector for namespaces containing the workloads to inject the Kanister sidecar", + "properties": { + "matchLabels": { + "type": "object", + "default": {}, + "title": "namespaceSelector matchLabels", + "description": "Set of labels to select namespaces in which sidecar injection is enabled for workloads" + } + } + }, + "objectSelector": { + "type": "object", + "title": "objectSelector config", + "description": "Configure objectSelector for the workloads to inject the Kanister sidecar", + "properties": { + "matchLabels": { + "type": "object", + "default": {}, + "title": "objectSelector matchLabels", + "description": "Set of labels to filter workload objects in which the sidecar is injected" + } + } + }, + "webhookServer": { + "type": "object", + "title": "Sidecar injector webhook server", + "description": "Configure sidecar injector webhook server", + "properties": { + "port": { + "type": "integer", + "default": 8080, + "title": "Mutating webhook server port number", + "description": "Port number on which the mutating webhook server accepts requests" + } + } + } + } + }, + "kanisterPodCustomLabels": { + "type": "string", + "default": "", + "title": "Kanister pod custom labels", + "description": "Custom labels for pods managed by Kanister" + }, + "kanisterPodCustomAnnotations": { + "type": "string", + "default": "", + "title": "Kanister pod custom annotations", + "description": "Custom annotations 
added to pods managed by Kanister" + }, + "features": { + "type": "object", + "title": "Feature flags", + "description": "Feature flags to be set by K10", + "properties": { + "backgroundMaintenanceRun": { + "type": "boolean", + "default": true, + "title": "Background maintenance feature", + "description": "Enable background maintenance runs by the repositories service" + } + } + }, + "kanisterPodMetricSidecar": { + "type": "object", + "title": "Metric sidecar for ephemeral pods", + "description": "Sidecar container for gathering metrics from ephemeral pods", + "properties": { + "enabled": { + "type": "boolean", + "default": true, + "title": "Enable sidecar container", + "description": "Enable sidecar container for gathering metrics from ephemeral pods" + }, + "metricLifetime": { + "type": "string", + "default": "2m", + "title": "The period we check if there are metrics which should be removed", + "description": "The period we check if there are metrics which should be removed" + }, + "pushGatewayInterval": { + "type": "string", + "default": "30s", + "title": "Pushgateway metrics interval", + "description": "The interval of sending metrics into the Pushgateway" + }, + "resources": { + "type": "object", + "title": "Kanister pod metric sidecar resource config", + "description": "Configure resource requests and limits for kanister pod metric sidecar", + "properties": { + "requests": { + "type": "object", + "title": "Kanister pod metric sidecar resource requests", + "description": "Kanister pod metric sidecar resource requests configuration", + "properties": { + "memory": { + "type": "string", + "default": "", + "title": "Kanister pod metric sidecar memory request", + "description": "Kanister pod metric sidecar memory request", + "examples": [ + "1Gi" + ] + }, + "cpu": { + "type": "string", + "default": "", + "title": "Kanister pod metric sidecars cpu request", + "description": "Kanister pod metric sidecars cpu request", + "examples": [ + "1" + ] + } + } + }, + 
"limits": { + "type": "object", + "title": "Kanister pod metric sidecar resource limits", + "description": "Kanister pod metric sidecar resource limits configuration", + "properties": { + "memory": { + "type": "string", + "default": "", + "title": "Kanister pod metric sidecars memory limit", + "description": "Kanister pod metric sidecars memory limit", + "examples": [ + "1Gi" + ] + }, + "cpu": { + "type": "string", + "default": "", + "title": "Kanister pod metric sidecars cpu limit", + "description": "Kanister pod metric sidecars cpu limit", + "examples": [ + "1" + ] + } + } + } + } + } + } + }, + "genericStorageBackup": { + "type": "object", + "title": "Generic Storage backup activation config", + "properties": { + "token": { + "type": "string", + "title": "Generic volume snapshot activation token", + "description": "Token to enable generic volume snapshot", + "default": "" + } + } + }, + "genericVolumeSnapshot": { + "type": "object", + "title": "Generic Volume Snapshot restore pods config", + "description": "Resource configuration for Generic Volume Snapshot restore pods", + "properties": { + "resources": { + "type": "object", + "title": "Generic Volume Snapshot restore pod resource config", + "description": "Configure resource request and limits by Generic Volume Snapshot restore pods", + "properties": { + "requests": { + "type": "object", + "title": "Generic Volume Snapshot resource requests", + "description": "Generic Volume Snapshot resource requests configuration", + "properties": { + "memory": { + "type": "string", + "default": "", + "title": "Generic Volume Snapshot restore pods memory request", + "description": "Generic Volume Snapshot restore pods memory request", + "examples": [ + "1Gi" + ] + }, + "cpu": { + "type": "string", + "default": "", + "title": "Generic Volume Snapshot restore pods cpu request", + "description": "Generic Volume Snapshot restore pods cpu request", + "examples": [ + "1" + ] + } + } + }, + "limits": { + "type": "object", + 
"title": "Generic Volume Snapshot resource limits", + "description": "Generic Volume Snapshot resource limits configuration", + "properties": { + "memory": { + "type": "string", + "default": "", + "title": "Generic Volume Snapshot restore pods memory limit", + "description": "Generic Volume Snapshot restore pods memory limit", + "examples": [ + "1Gi" + ] + }, + "cpu": { + "type": "string", + "default": "", + "title": "Generic Volume Snapshot restore pods cpu limit", + "description": "Generic Volume Snapshot restore pods cpu limit", + "examples": [ + "1" + ] + } + } + } + } + } + } + }, + "garbagecollector": { + "type": "object", + "title": "garbage collection", + "description": "Configure garbage collection settings", + "properties": { + "daemonPeriod": { + "type": "integer", + "default": 21600, + "title": "Garbage collection period", + "description": "Set garbage collection period (in seconds)" + }, + "keepMaxActions": { + "type": "integer", + "default": 1000, + "title": "Max actions to keep", + "description": "Sets maximum actions to keep" + }, + "actions": { + "type": "object", + "title": "action collectors config", + "description": "Configure action garbage collectors", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable action collectors", + "description": "Set true to enable action collectors" + } + } + } + } + }, + "resources": { + "type": "object", + "default": {}, + "title": "K10 pods resource config", + "description": "Resource management for K10 pods" + }, + "datastore": { + "type": "object", + "properties": { + "parallelUploads": { + "type": "integer", + "default": 8, + "title": "Parallelism for data store uploads", + "description": "Specifies how many files can be uploaded in parallel to the data store" + }, + "parallelDownloads": { + "type": "integer", + "default": 8, + "title": "Parallelism for data store downloads", + "description": "Specifies how many files can be downloaded in parallel from the data store" 
+ } + } + }, + "defaultPriorityClassName": { + "type": "string", + "default": "", + "title": "Default priorityClassName", + "description": "Set the default priorityClassName for all K10 pods" + }, + "priorityClassName": { + "type": "object", + "default": {}, + "title": "K10 pods priorityClassName config", + "description": "Set priorityClassName for specific K10 pods" + }, + "services": { + "type": "object", + "title": "K10 services config", + "description": "Settings for K10 services", + "properties": { + "executor": { + "type": "object", + "title": "executor service config", + "description": "Configuration for K10 executor service", + "properties": { + "hostNetwork": { + "type": "boolean", + "default": false, + "title": "Enable node network usage", + "description": "Whether the executor pods may use the node network" + }, + "workerCount": { + "type": "integer", + "default": 8, + "title": "Executor workers count", + "description": "Count of running executor workers" + }, + "maxConcurrentRestoreCsiSnapshots": { + "type": "integer", + "default": 3, + "title": "Concurrent restore CSI snapshots operations", + "description": "Limit of concurrent restore CSI snapshots operations per each restore action" + }, + "maxConcurrentRestoreGenericVolumeSnapshots": { + "type": "integer", + "default": 3, + "title": "Concurrent restore generic volume snapshots operations", + "description": "Limit of concurrent restore generic volume snapshots operations per each restore action" + }, + "maxConcurrentRestoreWorkloads": { + "type": "integer", + "default": 3, + "title": "Concurrent restore workloads operations", + "description": "Limit of concurrent restore workloads operations per each restore action" + } + } + }, + "dashboardbff": { + "type": "object", + "title": "dashboardbff service config", + "properties": { + "hostNetwork": { + "type": "boolean", + "default": false, + "title": "Enable node network usage", + "description": "Whether the dashboardbff pods may use the node network" + 
} + } + }, + "securityContext": { + "type": "object", + "title": "securityContext for K10 service containers", + "description": "Custom securityContext for K10 service containers", + "properties": { + "runAsUser": { + "type": "integer", + "default": 1000, + "title": "runAsUser ID", + "description": "User ID K10 service containers run as" + }, + "fsGroup": { + "type": "integer", + "default": 1000, + "title": "FSGroup ID", + "description": "FSGroup that owns K10 service container volumes" + }, + "runAsNonRoot": { + "type": "boolean", + "default": true, + "title": "RunAsNonRoot", + "description": "Indicates that K10 service containers should run as a non-root user." + }, + "seccompProfile": { + "type": "object", + "title": "Seccomp Profile object", + "description": "Sets the Seccomp profile for K10 service containers", + "properties": { + "type": { + "type": "string", + "default": "RuntimeDefault", + "title": "Seccomp profile type", + "description": "Sets the Seccomp profile type for K10 service containers" + } + } + } + } + }, + "aggregatedapis": { + "type": "object", + "title": "K10 aggregatedapis service config", + "properties": { + "hostNetwork": { + "type": "boolean", + "default": false, + "title": "Enable node network usage", + "description": "Whether the aggregatedapis pods may use the node network" + } + } + } + } + }, + "siem": { + "type": "object", + "title": "siem", + "description": "siem settings", + "properties": { + "logging": { + "type": "object", + "title": "logging", + "description": "siem logging settings", + "properties": { + "cluster": { + "type": "object", + "title": "cluster", + "description": "In-cluster agent log slurping settings", + "properties": { + "enabled": { + "type": "boolean", + "default": true, + "title": "Enable in-cluster agent-based audit logging", + "description": "Enable in-cluster agent-based audit logging for K10 events" + } + } + }, + "cloud": { + "type": "object", + "title": "cloud", + "description": "siem cloud logging 
settings", + "properties": { + "path": { + "type": "string", + "default": "k10audit/", + "title": "Directory path in cloud object storage for saving logs", + "description": "Directory path in cloud object storage for saving logs when writing K10 events" + }, + "awsS3": { + "type": "object", + "title": "awsS3", + "description": "AWS S3 log slurping settings", + "properties": { + "enabled": { + "type": "boolean", + "default": true, + "title": "Enable AWS S3 audit logging", + "description": "Enable AWS S3 audit logging for K10 events" + } + } + } + } + } + } + } + } + }, + "apigateway": { + "type": "object", + "title": "APIGateway", + "description": "APIGateway settings", + "properties": { + "serviceResolver": { + "type": "string", + "default": "dns", + "title": "Resolver used for service discovery", + "description": "The resolver used for service discovery in the API gateway", + "enum": [ + "dns", + "endpoint" + ] + } + } + }, + "limiter": { + "type": "object", + "title": "Limiter", + "description": "Limits set on several operations", + "properties": { + "concurrentSnapConversions": { + "type": "integer", + "default": 3, + "title": "Concurrent snapshot conversions", + "description": "Limit of concurrent snapshots to convert during export " + }, + "genericVolumeSnapshots": { + "type": "integer", + "default": 10, + "title": "Concurrent generic volume snapshot creation", + "description": "Limit of concurrent generic volume snapshot create operations" + }, + "genericVolumeCopies": { + "type": "integer", + "default": 10, + "title": "Concurrent generic volume snapshot copy", + "description": "Limit of concurrent generic volume snapshot copy operations" + }, + "genericVolumeRestores": { + "type": "integer", + "default": 10, + "title": "Concurrent generic volume snapshot restore", + "description": "Limit of concurrent generic volume snapshot restore operations" + }, + "csiSnapshots": { + "type": "integer", + "default": 10, + "title": "Concurrent CSI snapshot create", + 
"description": "Limit of concurrent CSI snapshot create operations" + }, + "providerSnapshots": { + "type": "integer", + "default": 10, + "title": "Concurrent cloud provider create", + "description": "Limit of concurrent cloud provider create operations" + }, + "imageCopies": { + "type": "integer", + "default": 10, + "title": "Concurrent image copy", + "description": "Limit of concurrent image copy operations" + } + } + }, + "gateway": { + "type": "object", + "title": "Gateway config", + "description": "Configure Gateway service", + "properties": { + "insecureDisableSSLVerify": { + "type": "boolean", + "default": false, + "title": "Disable SSL verification for gateway pods", + "description": "Whether to disable SSL verification for gateway pods" + }, + "exposeAdminPort": { + "type": "boolean", + "default": true, + "title": "Expose Admin port", + "description": "Whether to expose Admin port for gateway service" + }, + "service": { + "type": "object", + "title": "gateway service config", + "properties": { + "externalPort": { + "type": "integer", + "default": 80, + "title": "externalPort for the gateway service", + "description": "Override default 80 externalPort for the gateway service" + } + } + }, + "resources": { + "type": "object", + "title": "Gateway pod resource config", + "description": "Configure resource request and limits by Gateway pod", + "properties": { + "requests": { + "type": "object", + "title": "Gateway resource requests", + "description": "Gateway resource requests configuration", + "properties": { + "memory": { + "type": "string", + "default": "300Mi", + "title": "Gateway pod memory request", + "description": "Gateway pod memory request", + "examples": [ + "1Gi" + ] + }, + "cpu": { + "type": "string", + "default": "200m", + "title": "Gateway pod cpu request", + "description": "Gateway pod cpu request", + "examples": [ + "1" + ] + } + } + }, + "limits": { + "type": "object", + "title": "Gateway resource limits", + "description": "Gateway resource 
limits configuration", + "properties": { + "memory": { + "type": "string", + "default": "1Gi", + "title": "Gateway pod memory limit", + "description": "Gateway pod memory limit", + "examples": [ + "1Gi" + ] + }, + "cpu": { + "type": "string", + "default": "1000m", + "title": "Gateway pod cpu limit", + "description": "Gateway pod cpu limit", + "examples": [ + "1" + ] + } + } + } + } + } + } + }, + "kanister": { + "type": "object", + "title": "Kanister config", + "description": "Configuration for Kanister service", + "properties": { + "backupTimeout": { + "type": "integer", + "default": 45, + "title": "Timeout on Kanister backup operations", + "description": "Timeout on Kanister backup operations in mins" + }, + "restoreTimeout": { + "type": "integer", + "default": 600, + "title": "Timeout for Kanister restore operations", + "description": "Timeout for Kanister restore operations in mins" + }, + "deleteTimeout": { + "type": "integer", + "default": 45, + "title": "Timeout for Kanister delete operations", + "description": "Timeout for Kanister delete operations in mins" + }, + "hookTimeout": { + "type": "integer", + "default": 20, + "title": "Timeout for Kanister pre-hook and post-hook operations", + "description": "Timeout for Kanister pre-hook and post-hook operations in minutes" + }, + "checkRepoTimeout": { + "type": "integer", + "default": 20, + "title": "Timeout for Kanister checkRepo operations", + "description": "Specify timeout to set on Kanister checkRepo operations in minutes" + }, + "statsTimeout": { + "type": "integer", + "default": 20, + "title": "Timeout for Kanister stats operations", + "description": "Timeout for Kanister stats operations in minutes" + }, + "efsPostRestoreTimeout": { + "type": "integer", + "default": 45, + "title": "Timeout for Kanister efsPostRestore operations", + "description": "Timeout for Kanister efsPostRestore operations in minutes" + }, + "podReadyWaitTimeout": { + "type": "integer", + "default": 15, + "title": "Timeout for 
Kanister tooling pods to be ready", + "description": "Timeout for Kanister tooling pods to be ready during operations in minutes" + }, + "managedDataServicesBlueprintsEnabled": { + "type": "boolean", + "default": true, + "title": "Enable built-in Kanister Blueprints for data services", + "description": "Whether to enable built-in Kanister Blueprints for data services such as Crunchy Data Postgres Operator and K8ssandra" + } + } + }, + "awsConfig": { + "type": "object", + "title": "AWS config", + "description": "AWS config", + "properties": { + "assumeRoleDuration": { + "type": "string", + "default": "", + "title": "Duration of a session token generated by AWS for an IAM role", + "description": "The minimum value is 15 minutes, and the maximum value is determined by the maximum session duration setting for that IAM role. For documentation on how to view and edit the maximum session duration for an IAM role, refer to https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use.html#id_roles_use_view-role-max-session. The value accepts a number followed by a single character, 'm' (for minutes) or 'h' (for hours). 
Examples include: 60m or 2h" + }, + "efsBackupVaultName": { + "type": "string", + "default": "k10vault", + "title": "the AWS EFS backup vault name", + "description": "Set the AWS EFS backup vault name" + } + } + }, + "azure": { + "type": "object", + "title": "Azure config", + "description": "Azure config", + "properties": { + "useDefaultMSI": { + "type": "boolean", + "default": false, + "title": "Use the default Managed Identity", + "description": "Set to true - profile does not need a secret, Default Managed Identity will be used" + } + } + }, + "google": { + "type": "object", + "title": "Google config", + "description": "Google auth config", + "properties": { + "workloadIdentityFederation": { + "type": "object", + "title": "Google Workload Identity Federation config", + "description": "config for Google Workload Identity Federation", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable Google Workload Identity Federation (GWIF) for K10", + "description": "Set to true - Google Workload Identity Federation is enabled for K10" + }, + "idp": { + "type": "object", + "title": "Identity Provider config", + "description": "Identity Provider config", + "properties": { + "type": { + "type": "string", + "default": "", + "title": "Type of the Identity Provider for GWIF", + "description": "Set the type of IdP for GWIF" + }, + "aud": { + "type": "string", + "default": "", + "title": "The audience that ID token is intended for", + "description": "Set the name of the audience that ID token is intended for" + } + } + } + } + } + } + }, + "grafana": { + "type": "object", + "title": "Grafana config", + "description": "Settings for Grafana service", + "properties": { + "enabled": { + "type": "boolean", + "default": true, + "title": "Enable Grafana service", + "description": "Deploy Grafana service. 
If false Grafana will not be available" + }, + "external": { + "type": "object", + "title": "Configuration related to externally installed Grafana instance", + "description": "If Grafana instance that gets installed with K10 is disabled using grafana.enabled=false, this field can be used to configure externally installed Grafana instance.", + "properties": { + "url": { + "type": "string", + "default": "", + "title": "URL of externally installed Grafana instance", + "description": "If Grafana instance that gets installed with K10 is disabled using grafana.enabled=false, this field can be used to specify URL of externally installed Grafana instance." + } + } + } + } + }, + "encryption": { + "type": "object", + "title": "Encryption config", + "description": "Encryption config", + "properties": { + "primaryKey": { + "type": "object", + "title": "primaryKey for encrypting of K10 primary key", + "description": "primaryKey is used for enabling encryption of K10 primary key", + "properties": { + "awsCmkKeyId": { + "type": "string", + "default": "", + "title": "The AWS CMK key ID for encrypting K10 Primary Key", + "description": "Ensures AWS CMK is used for encrypting K10 primary key" + }, + "vaultTransitKeyName": { + "type": "string", + "default": "", + "title": "Vault transit Key Name", + "description": "Vault Transit key name for Vault integration" + }, + "vaultTransitPath": { + "type": "string", + "default": "", + "title": "Vault transit path", + "description": "Vault transit path for Vault integration" + } + } + } + } + }, + "vmWare": { + "type": "object", + "title": "VMWare integration config", + "properties": { + "taskTimeoutMin": { + "type": "integer", + "default": 60, + "title": "the timeout for VMWare operations", + "description": "the timeout for VMWare operations in minutes" + } + } + }, + "vault": { + "type": "object", + "title": "Vault config", + "description": "Vault integration configuration", + "properties": { + "secretName": { + "type": "string", + 
"default": "", + "title": "Vault secret name", + "description": "Vault secret name" + }, + "address": { + "type": "string", + "default": "http://vault.vault.svc:8200", + "title": "Vault address", + "description": "Specify Vault endpoint" + }, + "role": { + "type": "string", + "default": "", + "title": "Vault Service Account Role", + "description": "Role that was bound to the service account name and namespace from cluster" + }, + "serviceAccountTokenPath": { + "type": "string", + "default": "", + "title": "Token path for Vault Service Account Role", + "description": "Default: '/var/run/secrets/kubernetes.io/serviceaccount/token'" + } + } + }, + "kubeVirtVMs": { + "type": "object", + "properties": { + "snapshot": { + "type": "object", + "properties": { + "unfreezeTimeout": { + "type": "string", + "title": "Unfreeze timeout for Virtual Machines", + "description": "Time within which K10 is expected to complete the Virtual Machine's backup and thaw the Virtual Machine.", + "default": "5m" + } + } + } + } + }, + "excludedApps": { + "type": "array", + "items": { + "type": "string" + }, + "default": [ + "kube-system", + "kube-ingress", + "kube-node-lease", + "kube-public", + "kube-rook-ceph" + ], + "title": "List of applications to be excluded", + "description": "List of applications to be excluded from the dashboard & compliance considerations" + }, + "reporting": { + "type": "object", + "properties": { + "pdfReports": { + "title": "Enable PDF reports", + "description": "Enable download of PDF reports in the Dashboard", + "type": "boolean", + "default": false + } + } + }, + "logging": { + "type": "object", + "properties": { + "internal": { + "title": "Enable internal logging service", + "description": "Enable use of internal logging service", + "type": "boolean", + "default": true + }, + "fluentbit_endpoint": { + "title": "Use external fluentbit endpoint", + "description": "Specify a fluentbit endpoint to collect logs, cannot be used with the internal logging service 
(logging.internal=true)", + "type": "string", + "default": "" + } + } + }, + "maxJobWaitDuration": { + "type": "string", + "default": "", + "title": "Maximum duration for jobs in minutes", + "description": "Set a maximum duration of waiting for child jobs. If the execution of the subordinate jobs exceeds this value, the parent job will be canceled. If no value is set, a default of 10 hours will be used" + }, + "forceRootInKanisterHooks": { + "type": "boolean", + "default": true, + "title": "Run Kanister Hooks as root", + "description": "Forces Kanister Execution Hooks to run with root privileges" + }, + "ephemeralPVCOverhead": { + "type": "string", + "default": "0.1", + "title": "Storage overhead for ephemeral PVCs", + "description": "Set the percentage increase for the ephemeral Persistent Volume Claim's storage request, e.g. pvc size = (file raw size) * (1 + `ephemeralPVCOverhead`)" + }, + "kastenDisasterRecovery": { + "type": "object", + "properties": { + "quickMode": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "description": "Enables K10 Quick Disaster Recovery feature, with ability to restore necessary K10 resources and exported restore points of applications.", + "title": "Enable K10 Quick Disaster Recovery." + } + } + } + } + }, + "fips": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "description": "Enables K10 FIPS (Federal Information Processing Standard) mode of operation.", + "title": "Enable K10 FIPS mode of operation." 
+ } + } + }, + "workerPodCRDs": { + "type": "object", + "properties": { + "enabled": { + "type": "boolean", + "default": false, + "title": "Enable ActionPodSpec and ActionPodSpecBinding CRDs", + "description": "Enables the use of ActionPodSpec and ActionPodSpecBinding for granular resource configuration of temporary Pods associated with Kasten Actions" + }, + "resourcesRequests": { + "type": "object", + "title": "Maximum resource requests", + "description": "Specifies the cluster-wide, maximum values for resource requests which may be used in any ActionPodSpec", + "properties": { + "maxMemory": { + "type": "string", + "default": "", + "title": "Maximum memory request", + "description": "Specifies the cluster-wide, maximum value for memory resource requests which may be used in any ActionPodSpec", + "examples": [ + "1Gi" + ] + }, + "maxCPU": { + "type": "string", + "default": "", + "title": "Maximum cpu request", + "description": "Specifies the cluster-wide, maximum value for CPU resource requests which may be used in any ActionPodSpec", + "examples": [ + "1" + ] + } + } + }, + "defaultActionPodSpec": { + "type": "string", + "default": "", + "title": "Default ActionPodSpec name", + "description": "The name of ActionPodSpec that will be used by default for worker pod resources. if empty, the default APS is omitted" + } + } + } + } +} diff --git a/charts/kasten/k10/7.0.1101/values.yaml b/charts/kasten/k10/7.0.1101/values.yaml new file mode 100644 index 0000000000..3c6b2fc162 --- /dev/null +++ b/charts/kasten/k10/7.0.1101/values.yaml @@ -0,0 +1,545 @@ +# Default values for k10. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. + +rbac: + create: true +serviceAccount: + # Specifies whether a ServiceAccount should be created + create: true + # The name of the ServiceAccount to use. + # If not set and create is true, a name is derived using the release and chart names. 
+ name: "" + +scc: + create: false + priority: 0 + +networkPolicy: + create: true + +global: + # These are the default values for picking k10 images. They can be overridden + # to specify a particular registry and tag. + image: + registry: gcr.io/kasten-images + tag: '' + pullPolicy: Always + airgapped: + repository: '' + persistence: + mountPath: "/mnt/k10state" + enabled: true + ## If defined, storageClassName: + ## If set to "-", storageClassName: "", which disables dynamic provisioning + ## If undefined (the default) or set to null, no storageClassName spec is + ## set, choosing the default provisioner. (gp2 on AWS, standard on + ## GKE, AWS & OpenStack) + ## + storageClass: "" + accessMode: ReadWriteOnce + size: 20Gi + metering: + size: 2Gi + catalog: + size: "" + jobs: + size: "" + logging: + size: "" + grafana: + # Default value is set to 5Gi. This is the same as the default value + # from previous releases <= 4.5.1 where the Grafana sub chart used to + # reference grafana.persistence.size instead of the global values. 
+ # Since the size remains the same across upgrades, the Grafana PVC + # is not deleted and recreated which means no Grafana data is lost + # while upgrading from <= 4.5.1 + size: 5Gi + podLabels: {} + podAnnotations: {} + ## Set it to true while generating helm operator + rhMarketPlace: false + ## these values should not be provided by users; they are to be used by + ## Red Hat Marketplace + images: + aggregatedapis: '' + auth: '' + bloblifecyclemanager: '' + catalog: '' + configmap-reload: '' + controllermanager: '' + crypto: '' + dashboardbff: '' + datamover: '' + dex: '' + emissary: '' + events: '' + executor: '' + frontend: '' + gateway: '' + grafana: '' + init: '' + jobs: '' + kanister-tools: '' + kanister: '' + k10tools: '' + logging: '' + metering: '' + paygo_daemonset: '' + prometheus: '' + repositories: '' + state: '' + upgrade: '' + vbrintegrationapi: '' + garbagecollector: '' + metric-sidecar: '' + imagePullSecret: '' + prometheus: + external: + host: '' #FQDN of prometheus-service + port: '' + baseURL: '' + network: + enable_ipv6: false + +## OpenShift route configuration. 
+route: + enabled: false + # Host name for the route + host: "" + # Default path for the route + path: "" + + annotations: {} + # kubernetes.io/tls-acme: "true" + # haproxy.router.openshift.io/disable_cookies: "true" + # haproxy.router.openshift.io/balance: roundrobin + + labels: {} + # key: value + + # TLS configuration + tls: + enabled: false + # What to do in case of an insecure traffic edge termination + insecureEdgeTerminationPolicy: "Redirect" + # Where this TLS configuration should terminate + termination: "edge" + +dexImage: + registry: ghcr.io + repository: dexidp + image: dex + +kanisterToolsImage: + registry: ghcr.io + repository: kanisterio + image: kanister-tools + pullPolicy: Always + +ingress: + create: false + annotations: {} + name: "" + tls: + enabled: false + secretName: "" #TLS secret name + class: "" #Ingress controller type + host: "" #ingress object host name + urlPath: "" #url path for k10 gateway + pathType: "ImplementationSpecific" + defaultBackend: + service: + enabled: false + name: "" + port: + name: "" + number: 0 + resource: + enabled: false + apiGroup: "" + kind: "" + name: "" + +eula: + accept: false #true value if EULA accepted + +license: "" #base64 encoded string provided by Kasten + +cluster: + domainName: "" + +multicluster: + enabled: true + primary: + create: false + name: "" + ingressURL: "" + +prometheus: + rbac: + create: false + server: + # UID and groupid are from prometheus helm chart + enabled: true + securityContext: + runAsUser: 65534 + runAsNonRoot: true + runAsGroup: 65534 + fsGroup: 65534 + retention: 30d + persistentVolume: + storageClass: "" + fullnameOverride: prometheus-server + baseURL: /k10/prometheus/ + prefixURL: /k10/prometheus + +jaeger: + enabled: false + agentDNS: "" + +service: + externalPort: 8000 + internalPort: 8000 + aggregatedApiPort: 10250 + gatewayAdminPort: 8877 + +secrets: + awsAccessKeyId: '' + awsSecretAccessKey: '' + awsIamRole: '' + awsClientSecretName: '' + googleApiKey: '' + 
googleProjectId: '' + googleClientSecretName: '' + dockerConfig: '' + dockerConfigPath: '' + azureTenantId: '' + azureClientId: '' + azureClientSecret: '' + azureClientSecretName: '' + azureResourceGroup: '' + azureSubscriptionID: '' + azureResourceMgrEndpoint: '' + azureADEndpoint: '' + azureADResourceID: '' + microsoftEntraIDEndpoint: '' + microsoftEntraIDResourceID: '' + azureCloudEnvID: '' + apiTlsCrt: '' + apiTlsKey: '' + tlsSecret: '' + vsphereEndpoint: '' + vsphereUsername: '' + vspherePassword: '' + vsphereClientSecretName: '' + +metering: + reportingKey: "" #[base64-encoded key] + consumerId: "" #project: + awsRegion: '' + awsMarketPlaceIamRole: '' + awsMarketplace: false # AWS cloud metering license mode + awsManagedLicense: false # AWS managed license mode + licenseConfigSecretName: '' # AWS managed license config secret for non-eks clusters + serviceAccount: + create: false + name: "" + mode: '' # controls metric and license reporting (set to `airgap` for private-network installs) + redhatMarketplacePayg: false # Redhat cloud metering license mode + reportCollectionPeriod: 1800 # metric report collection period in seconds + reportPushPeriod: 3600 # metric report push period in seconds + promoID: '' # sets the K10 promotion ID + +clusterName: '' +executorReplicas: 3 +logLevel: info + +externalGateway: + create: false + # Any standard service annotations + annotations: {} + # Host and domain name for the K10 API server + fqdn: + name: "" + #Supported types route53-mapper, external-dns + type: "" + # ARN for the AWS ACM SSL certificate used in the K10 API server (load balancer) + awsSSLCertARN: '' + +auth: + groupAllowList: [] +# - "group1" +# - "group2" + basicAuth: + enabled: false + secretName: "" #htpasswd based existing secret + htpasswd: "" #htpasswd string, which will be used for basic auth + tokenAuth: + enabled: false + oidcAuth: + enabled: false + providerURL: "" #URL to your OIDC provider + redirectURL: "" #URL to the K10 gateway service + 
scopes: "" #Space separated OIDC scopes required for userinfo. Example: "profile email" + prompt: "select_account" #The prompt type to be requested with the OIDC provider. Default is select_account. + clientID: "" #ClientID given by the OIDC provider for K10 + clientSecret: "" #ClientSecret given by the OIDC provider for K10 + clientSecretName: "" #The Kubernetes Secret that contains ClientID and ClientSecret given by the OIDC provider for K10 + usernameClaim: "" #Claim to be used as the username + usernamePrefix: "" #Prefix that has to be used with the username obtained from the username claim + groupClaim: "" #Name of a custom OpenID Connect claim for specifying user groups + groupPrefix: "" #All groups will be prefixed with this value to prevent conflicts. + logoutURL: "" #URL to your OIDC provider's logout endpoint + #OIDC config based existing secret. + #Must include providerURL, redirectURL, scopes, clientID/secret and logoutURL. + secretName: "" + sessionDuration: "1h" #Maximum OIDC session duration. Default value is 1 hour + refreshTokenSupport: false #Enable Refresh Token support. Disabled by default + openshift: + enabled: false + serviceAccount: "" #service account used as the OAuth client + clientSecret: "" #The token from the service account + clientSecretName: "" #The secret with the token from the service account + dashboardURL: "" #The URL for accessing K10's dashboard + openshiftURL: "" #The URL of the Openshift API server + insecureCA: false + useServiceAccountCA: false + secretName: "" # The Kubernetes Secret that contains OIDC settings + usernameClaim: "email" + usernamePrefix: "" + groupnameClaim: "groups" + groupnamePrefix: "" + caCertsAutoExtraction: true # Configures if K10 should automatically extract CA certificates from the OCP cluster. 
+ ldap: + enabled: false + restartPod: false # Enable this value to force a restart of the authentication service pod + dashboardURL: "" #The URL for accessing K10's dashboard + host: "" + insecureNoSSL: false + insecureSkipVerifySSL: false + startTLS: false + bindDN: "" + bindPW: "" + bindPWSecretName: "" + userSearch: + baseDN: "" + filter: "" + username: "" + idAttr: "" + emailAttr: "" + nameAttr: "" + preferredUsernameAttr: "" + groupSearch: + baseDN: "" + filter: "" + userMatchers: [] +# - userAttr: +# groupAttr: + nameAttr: "" + secretName: "" # The Kubernetes Secret that contains OIDC settings + usernameClaim: "email" + usernamePrefix: "" + groupnameClaim: "groups" + groupnamePrefix: "" + k10AdminUsers: [] + k10AdminGroups: [] + +optionalColocatedServices: + vbrintegrationapi: + enabled: true + +cacertconfigmap: + name: "" #Name of the configmap + +apiservices: + deployed: true # If false APIService objects will not be deployed + +injectKanisterSidecar: + enabled: false + namespaceSelector: + matchLabels: {} + # Set objectSelector to filter workloads + objectSelector: + matchLabels: {} + webhookServer: + port: 8080 # should not conflict with config server port (8000) + +genericStorageBackup: + token: "" + +kanisterPodCustomLabels : "" + +kanisterPodCustomAnnotations : "" + +features: + backgroundMaintenanceRun: true # Key must be deleted to deactivate. Setting to false will not work. 
+ +kanisterPodMetricSidecar: + enabled: true + metricLifetime: "2m" + pushGatewayInterval: "30s" + resources: + requests: + memory: "" + cpu: "" + limits: + memory: "" + cpu: "" + +genericVolumeSnapshot: + resources: + requests: + memory: "" + cpu: "" + limits: + memory: "" + cpu: "" + +garbagecollector: + daemonPeriod: 21600 + keepMaxActions: 1000 + actions: + enabled: false + +resources: {} + +defaultPriorityClassName: "" +priorityClassName: {} + +services: + executor: + hostNetwork: false + workerCount: 8 + maxConcurrentRestoreCsiSnapshots: 3 + maxConcurrentRestoreGenericVolumeSnapshots: 3 + maxConcurrentRestoreWorkloads: 3 + dashboardbff: + hostNetwork: false + securityContext: + runAsUser: 1000 # Will override any USER instruction that a container image set for running the entrypoint and command. + fsGroup: 1000 + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + aggregatedapis: + hostNetwork: false + +siem: + logging: + cluster: + enabled: true + cloud: + path: k10audit/ + awsS3: + enabled: true + +apigateway: + serviceResolver: dns + +limiter: + concurrentSnapConversions: 3 + genericVolumeSnapshots: 10 + genericVolumeCopies: 10 + genericVolumeRestores: 10 + csiSnapshots: 10 + providerSnapshots: 10 + imageCopies: 10 + +gateway: + insecureDisableSSLVerify: false + exposeAdminPort: true + service: + externalPort: 80 + resources: + requests: + memory: 300Mi + cpu: 200m + limits: + memory: 1Gi + cpu: 1000m + +kanister: + backupTimeout: 45 + restoreTimeout: 600 + deleteTimeout: 45 + hookTimeout: 20 + checkRepoTimeout: 20 + statsTimeout: 20 + efsPostRestoreTimeout: 45 + podReadyWaitTimeout: 15 + managedDataServicesBlueprintsEnabled: true + +awsConfig: + assumeRoleDuration: "" + efsBackupVaultName: "k10vault" + +excludedApps: ["kube-system", "kube-ingress", "kube-node-lease", "kube-public", "kube-rook-ceph"] + +grafana: + enabled: true + external: + url: "" # can be used to configure the URL of externally installed Grafana instance. 
If it's provided, grafana.enabled must be false. + +encryption: + primaryKey: # primaryKey is used for enabling encryption of K10 primary key + awsCmkKeyId: '' # Ensures AWS CMK is used for encrypting K10 primary key + vaultTransitKeyName: '' + vaultTransitPath: '' + +vmWare: + taskTimeoutMin: 60 + +azure: + useDefaultMSI: false + +google: + workloadIdentityFederation: + enabled: false + idp: + type: "" + aud: "" + +vault: + role: "" # Role that was bound to the service account name and namespace from cluster + serviceAccountTokenPath: "" # This will default to /var/run/secrets/kubernetes.io/serviceaccount/token within the code if left blank + address: "http://vault.vault.svc:8200" # Address for dev mode in cluster vault server in vault namespace + secretName: "" # Ensures backward compatibility for now. We can remove once we tell all customers this is deprecated. + # This is how the token can be passed into default if K8S auth mode fails for whatever reason. + +kubeVirtVMs: + snapshot: + unfreezeTimeout: "5m" + +reporting: + pdfReports: false + +logging: + internal: true + # Used to set an external fluentbit endpoint. 'logging.internal' must be set to false. + fluentbit_endpoint: "" + +maxJobWaitDuration: "" + +forceRootInKanisterHooks: true + +ephemeralPVCOverhead: "0.1" + +datastore: + parallelUploads: 8 + parallelDownloads: 8 + +kastenDisasterRecovery: + quickMode: + enabled: false + +fips: + enabled: false + +workerPodCRDs: + enabled: false + resourcesRequests: + maxCPU: "" + maxMemory: "" + defaultActionPodSpec: "" + diff --git a/charts/new-relic/nri-bundle/5.0.94/.helmignore b/charts/new-relic/nri-bundle/5.0.94/.helmignore new file mode 100644 index 0000000000..50af031725 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/.helmignore @@ -0,0 +1,22 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/Chart.lock b/charts/new-relic/nri-bundle/5.0.94/Chart.lock new file mode 100644 index 0000000000..3f62955d6b --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/Chart.lock @@ -0,0 +1,39 @@ +dependencies: +- name: newrelic-infrastructure + repository: https://newrelic.github.io/nri-kubernetes + version: 3.34.6 +- name: nri-prometheus + repository: https://newrelic.github.io/nri-prometheus + version: 2.1.19 +- name: newrelic-prometheus-agent + repository: https://newrelic.github.io/newrelic-prometheus-configurator + version: 1.14.4 +- name: nri-metadata-injection + repository: https://newrelic.github.io/k8s-metadata-injection + version: 4.21.2 +- name: newrelic-k8s-metrics-adapter + repository: https://newrelic.github.io/newrelic-k8s-metrics-adapter + version: 1.11.4 +- name: kube-state-metrics + repository: https://prometheus-community.github.io/helm-charts + version: 5.12.1 +- name: nri-kube-events + repository: https://newrelic.github.io/nri-kube-events + version: 3.10.8 +- name: newrelic-logging + repository: https://newrelic.github.io/helm-charts + version: 1.23.0 +- name: newrelic-pixie + repository: https://newrelic.github.io/helm-charts + version: 2.1.4 +- name: k8s-agents-operator + repository: https://newrelic.github.io/k8s-agents-operator + version: 0.13.0 +- name: pixie-operator-chart + repository: https://pixie-operator-charts.storage.googleapis.com + version: 0.1.6 +- name: newrelic-infra-operator + repository: https://newrelic.github.io/newrelic-infra-operator + version: 2.11.4 +digest: sha256:3134efd351e6aa2f701d8bd3a1205dfa0b807136ba5c63ba79e5c87da45a4b4c +generated: "2024-10-07T13:42:59.811951434Z" diff --git a/charts/new-relic/nri-bundle/5.0.94/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/Chart.yaml new 
file mode 100644 index 0000000000..9229dd2b0f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/Chart.yaml @@ -0,0 +1,85 @@ +annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: New Relic + catalog.cattle.io/release-name: nri-bundle +apiVersion: v2 +dependencies: +- condition: infrastructure.enabled,newrelic-infrastructure.enabled + name: newrelic-infrastructure + repository: https://newrelic.github.io/nri-kubernetes + version: 3.34.6 +- condition: prometheus.enabled,nri-prometheus.enabled + name: nri-prometheus + repository: https://newrelic.github.io/nri-prometheus + version: 2.1.19 +- condition: newrelic-prometheus-agent.enabled + name: newrelic-prometheus-agent + repository: https://newrelic.github.io/newrelic-prometheus-configurator + version: 1.14.4 +- condition: webhook.enabled,nri-metadata-injection.enabled + name: nri-metadata-injection + repository: https://newrelic.github.io/k8s-metadata-injection + version: 4.21.2 +- condition: metrics-adapter.enabled,newrelic-k8s-metrics-adapter.enabled + name: newrelic-k8s-metrics-adapter + repository: https://newrelic.github.io/newrelic-k8s-metrics-adapter + version: 1.11.4 +- condition: ksm.enabled,kube-state-metrics.enabled + name: kube-state-metrics + repository: https://prometheus-community.github.io/helm-charts + version: 5.12.1 +- condition: kubeEvents.enabled,nri-kube-events.enabled + name: nri-kube-events + repository: https://newrelic.github.io/nri-kube-events + version: 3.10.8 +- condition: logging.enabled,newrelic-logging.enabled + name: newrelic-logging + repository: https://newrelic.github.io/helm-charts + version: 1.23.0 +- condition: newrelic-pixie.enabled + name: newrelic-pixie + repository: https://newrelic.github.io/helm-charts + version: 2.1.4 +- condition: k8s-agents-operator.enabled + name: k8s-agents-operator + repository: https://newrelic.github.io/k8s-agents-operator + version: 0.13.0 +- alias: pixie-chart + condition: pixie-chart.enabled + name: 
pixie-operator-chart + repository: https://pixie-operator-charts.storage.googleapis.com + version: 0.1.6 +- condition: newrelic-infra-operator.enabled + name: newrelic-infra-operator + repository: https://newrelic.github.io/newrelic-infra-operator + version: 2.11.4 +description: Groups together the individual charts for the New Relic Kubernetes solution + for a more comfortable deployment. +home: https://github.com/newrelic/helm-charts +icon: file://assets/icons/nri-bundle.svg +keywords: +- infrastructure +- newrelic +- monitoring +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +name: nri-bundle +sources: +- https://github.com/newrelic/nri-bundle/ +- https://github.com/newrelic/nri-bundle/tree/master/charts/nri-bundle +- https://github.com/newrelic/nri-kubernetes/tree/master/charts/newrelic-infrastructure +- https://github.com/newrelic/nri-prometheus/tree/master/charts/nri-prometheus +- https://github.com/newrelic/newrelic-prometheus-configurator/tree/master/charts/newrelic-prometheus-agent +- https://github.com/newrelic/k8s-metadata-injection/tree/master/charts/nri-metadata-injection +- https://github.com/newrelic/newrelic-k8s-metrics-adapter/tree/master/charts/newrelic-k8s-metrics-adapter +- https://github.com/newrelic/nri-kube-events/tree/master/charts/nri-kube-events +- https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging +- https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-pixie +- https://github.com/newrelic/newrelic-infra-operator/tree/master/charts/newrelic-infra-operator +- https://github.com/newrelic/k8s-agents-operator/tree/master/charts/k8s-agents-operator +version: 5.0.94 diff --git a/charts/new-relic/nri-bundle/5.0.94/README.md b/charts/new-relic/nri-bundle/5.0.94/README.md new file mode 100644 index 0000000000..3fcc97d2b7 --- /dev/null +++ 
b/charts/new-relic/nri-bundle/5.0.94/README.md @@ -0,0 +1,200 @@ +# nri-bundle + +Groups together the individual charts for the New Relic Kubernetes solution for a more comfortable deployment. + +**Homepage:** + +## Bundled charts + +This chart does not deploy anything by itself but has many charts as dependencies. This allows you to easily install and upgrade the New Relic +Kubernetes Integration using only one chart. + +In case you need more information about each component this chart installs, or you are an advanced user who wants to install each component separately, +here is a list of components that this chart installs and where you can find more information about them: + +| Component | Installed by default? | Description | +|------------------------------|-----------------------|-------------| +| [newrelic-infrastructure](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure) | Yes | Sends metrics about nodes, cluster objects (e.g. Deployments, Pods), and the control plane to New Relic. | +| [nri-metadata-injection](https://github.com/newrelic/k8s-metadata-injection/tree/main/charts/nri-metadata-injection) | Yes | Enriches New Relic-instrumented applications (APM) with Kubernetes information. | +| [kube-state-metrics](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics) | | Required for `newrelic-infrastructure` to gather cluster-level metrics. | +| [nri-kube-events](https://github.com/newrelic/nri-kube-events/tree/main/charts/nri-kube-events) | | Reports Kubernetes events to New Relic. | +| [newrelic-infra-operator](https://github.com/newrelic/newrelic-infra-operator/tree/main/charts/newrelic-infra-operator) | | (Beta) Used with Fargate or serverless environments to inject `newrelic-infrastructure` as a sidecar instead of the usual DaemonSet. 
| +| [newrelic-k8s-metrics-adapter](https://github.com/newrelic/newrelic-k8s-metrics-adapter/tree/main/charts/newrelic-k8s-metrics-adapter) | | (Beta) Provides a source of data for Horizontal Pod Autoscalers (HPA) based on a NRQL query from New Relic. | +| [newrelic-logging](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging) | | Sends logs for Kubernetes components and workloads running on the cluster to New Relic. | +| [nri-prometheus](https://github.com/newrelic/nri-prometheus/tree/main/charts/nri-prometheus) | | Sends metrics from applications exposing Prometheus metrics to New Relic. | +| [newrelic-prometheus-configurator](https://github.com/newrelic/newrelic-prometheus-configurator/tree/master/charts/newrelic-prometheus-agent) | | Configures instances of Prometheus in Agent mode to send metrics to the New Relic Prometheus endpoint. | +| [newrelic-pixie](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-pixie) | | Connects to the Pixie API and enables the New Relic plugin in Pixie. The plugin allows you to export data from Pixie to New Relic for long-term data retention. | +| [Pixie](https://docs.pixielabs.ai/installing-pixie/install-schemes/helm/#3.-deploy) | | Is an open source observability tool for Kubernetes applications that uses eBPF to automatically capture telemetry data without the need for manual instrumentation. | +| [k8s-agents-operator](https://github.com/newrelic/k8s-agents-operator/tree/main/charts/k8s-agents-operator) | | (Preview) Streamlines full-stack observability for Kubernetes environments by automating APM instrumentation alongside Kubernetes agent deployment. | + +## Configure components + +It is possible to configure settings for the individual charts this chart groups by specifying values for them under a key using the name of the chart, +as specified in [helm documentation](https://helm.sh/docs/chart_template_guide/subcharts_and_globals). 
+ +For example, by adding the following to the `values.yml` file: + +```yaml +# Configuration settings for the newrelic-infrastructure chart +newrelic-infrastructure: + # Any key defined in the values.yml file for the newrelic-infrastructure chart can be configured here: + # https://github.com/newrelic/nri-kubernetes/blob/main/charts/newrelic-infrastructure/values.yaml + + verboseLog: false + + resources: + limits: + memory: 512M +``` + +It is possible to override any entry of the [`newrelic-infrastructure`](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure) +chart, as defined in their [`values.yml` file](https://github.com/newrelic/nri-kubernetes/blob/main/charts/newrelic-infrastructure/values.yaml). + +The same approach can be followed to update any of the subcharts. + +After making these changes to the `values.yml` file, or a custom values file, make sure to apply them using: + +``` +$ helm upgrade --reuse-values -f values.yaml [RELEASE] newrelic/nri-bundle +``` + +Where `[RELEASE]` is the name of the helm release, e.g. `newrelic-bundle`. + +## Monitor on host integrations + +If you wish to monitor services running on Kubernetes, you can provide integrations +configuration under `integrations_config` that will be passed down to the `newrelic-infrastructure` chart. + +You just need to create a new entry where the "name" is the filename of the configuration file and the data is the content of +the integration configuration. The name must end in ".yaml" as this will be the +filename generated and the Infrastructure agent only looks for YAML files.
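+
+As a minimal sketch of that naming rule (the `nri-example-config.yaml` entry and its contents are illustrative, not a real integration):
+
+```yaml
+newrelic-infrastructure:
+  integrations_config:
+    # "name" becomes the generated file name, so it must end in ".yaml".
+    - name: nri-example-config.yaml
+      # "data" holds the integration configuration itself.
+      data:
+        integrations:
+          - name: nri-example
+            env:
+              PORT: 9999
+```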
+ +The data part is the actual integration configuration as described in the spec here: +https://docs.newrelic.com/docs/integrations/integrations-sdk/file-specifications/integration-configuration-file-specifications-agent-v180 + +In the following example you can see how to monitor a Redis integration with autodiscovery + +```yaml +newrelic-infrastructure: + integrations: + nri-redis-sampleapp: + discovery: + command: + exec: /var/db/newrelic-infra/nri-discovery-kubernetes --tls --port 10250 + match: + label.app: sampleapp + integrations: + - name: nri-redis + env: + # using the discovered IP as the hostname address + HOSTNAME: ${discovery.ip} + PORT: 6379 + labels: + env: test +``` + +## Bring your own KSM + +New Relic Kubernetes Integration requires an instance of kube-state-metrics (KSM) to be running in the cluster, which this chart pulls as a dependency. If you are already running or want to run your own KSM instance, you will need to make some small adjustments as described below. + +### Bring your own KSM + +If you already have one KSM instance running, you can point `nri-kubernetes` to your instance: + +```yaml +kube-state-metrics: + # Disable bundled KSM. + enabled: false +newrelic-infrastructure: + ksm: + config: + # Selector for your pre-installed KSM Service. You may need to adjust this to fit your existing installation. + selector: "app.kubernetes.io/name=kube-state-metrics" + # Alternatively, you can specify a fixed URL where KSM is available. Doing so will bypass autodiscovery. + #staticUrl: http://ksm.ksm.svc.cluster.local:8080/metrics +``` + +### Run KSM alongside a different version + +If you need to run a different instance of KSM in your cluster, you can still run a separate instance for the Kubernetes Integration to work as intended: + +```yaml +kube-state-metrics: + # Enable bundled KSM. + enabled: true + prometheusScrape: false + customLabels: + # Label unique to this KSM instance. 
+ newrelic.com/custom-ksm: "true" +newrelic-infrastructure: + ksm: + config: + # Use label above as a selector. + selector: "newrelic.com/custom-ksm=true" +``` + +For more information on supported KSM versions, visit the [requirements documentation](https://docs.newrelic.com/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-integration-compatibility-requirements#reqs) + +## Values managed globally + +Some of the subcharts implement [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library), which +means that they honor a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found at +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). + +At the time of writing this document, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this library and +honor global options as described below. + +Note, the value table below is automatically generated from `values.yaml` by `helm-docs`. If you need to add new fields or update existing fields, please update the `values.yaml` and then run `helm-docs` to update this value table. + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| global | object | See [`values.yaml`](values.yaml) | Changes the behaviour globally for all the supported helm charts. See [user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md) for further information. | +| global.affinity | object | `{}` | Sets pod/node affinities | +| global.cluster | string | `""` | The cluster name for the Kubernetes cluster.
| +| global.containerSecurityContext | object | `{}` | Sets security context (at container level) | +| global.customAttributes | object | `{}` | Adds extra attributes to the cluster and all the metrics emitted to the backend | +| global.customSecretLicenseKey | string | `""` | Key in the Secret object where the license key is stored | +| global.customSecretName | string | `""` | Name of the Secret object where the license key is stored | +| global.dnsConfig | object | `{}` | Sets pod's dnsConfig | +| global.fargate | bool | false | Must be set to `true` when deploying in an EKS Fargate environment | +| global.hostNetwork | bool | false | Sets pod's hostNetwork | +| global.images.pullSecrets | list | `[]` | Sets secrets to be able to fetch images | +| global.images.registry | string | `""` | Changes the registry from which images are pulled. Useful when there is an internal image cache/proxy | +| global.insightsKey | string | `""` | The license key for your New Relic Account. This will be the preferred configuration option if both `insightsKey` and `customSecret` are specified. | +| global.labels | object | `{}` | Additional labels for chart objects | +| global.licenseKey | string | `""` | The license key for your New Relic Account. This will be the preferred configuration option if both `licenseKey` and `customSecret` are specified. | +| global.lowDataMode | bool | false | Reduces the number of metrics sent in order to reduce costs | +| global.nodeSelector | object | `{}` | Sets pod's node selector | +| global.nrStaging | bool | false | Sends the metrics to the staging backend. Requires a valid staging license key | +| global.podLabels | object | `{}` | Additional labels for chart pods | +| global.podSecurityContext | object | `{}` | Sets security context (at pod level) | +| global.priorityClassName | string | `""` | Sets pod's priorityClassName | +| global.privileged | bool | false | In each integration it has different behavior.
See [Further information](#values-managed-globally-3), but all aim to send fewer metrics to the backend to save costs | +| global.proxy | string | `""` | Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port` | +| global.serviceAccount.annotations | object | `{}` | Add these annotations to the service account we create | +| global.serviceAccount.create | string | `nil` | Configures if the service account should be created or not | +| global.serviceAccount.name | string | `nil` | Change the name of the service account. This is honored if you disable the creation of the service account on this chart, so you can use your own | +| global.tolerations | list | `[]` | Sets pod's tolerations to node taints | +| global.verboseLog | bool | false | Enables debug logs for this integration, or for all integrations if set globally | +| k8s-agents-operator.enabled | bool | `false` | Install the [`k8s-agents-operator` chart](https://github.com/newrelic/k8s-agents-operator/tree/main/charts/k8s-agents-operator) | +| kube-state-metrics.enabled | bool | `false` | Install the [`kube-state-metrics` chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics) from the stable helm charts repository. This is mandatory if `infrastructure.enabled` is set to `true` and the user does not provide their own instance of KSM version >=1.8 and <=2.0. Note, kube-state-metrics v2+ disables labels/annotations metrics by default. You can enable the target labels/annotations metrics to be monitored by using the metricLabelsAllowlist/metricAnnotationsAllowList options described [here](https://github.com/prometheus-community/helm-charts/blob/159cd8e4fb89b8b107dcc100287504bb91bf30e0/charts/kube-state-metrics/values.yaml#L274) in your Kubernetes clusters.
| +| newrelic-infra-operator.enabled | bool | `false` | Install the [`newrelic-infra-operator` chart](https://github.com/newrelic/newrelic-infra-operator/tree/main/charts/newrelic-infra-operator) (Beta) | +| newrelic-infrastructure.enabled | bool | `true` | Install the [`newrelic-infrastructure` chart](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure) | +| newrelic-k8s-metrics-adapter.enabled | bool | `false` | Install the [`newrelic-k8s-metrics-adapter` chart](https://github.com/newrelic/newrelic-k8s-metrics-adapter/tree/main/charts/newrelic-k8s-metrics-adapter) (Beta) | +| newrelic-logging.enabled | bool | `false` | Install the [`newrelic-logging` chart](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging) | +| newrelic-pixie.enabled | bool | `false` | Install the [`newrelic-pixie` chart](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-pixie) | +| newrelic-prometheus-agent.enabled | bool | `false` | Install the [`newrelic-prometheus-agent` chart](https://github.com/newrelic/newrelic-prometheus-configurator/tree/main/charts/newrelic-prometheus-agent) | +| nri-kube-events.enabled | bool | `false` | Install the [`nri-kube-events` chart](https://github.com/newrelic/nri-kube-events/tree/main/charts/nri-kube-events) | +| nri-metadata-injection.enabled | bool | `true` | Install the [`nri-metadata-injection` chart](https://github.com/newrelic/k8s-metadata-injection/tree/main/charts/nri-metadata-injection) | +| nri-prometheus.enabled | bool | `false` | Install the [`nri-prometheus` chart](https://github.com/newrelic/nri-prometheus/tree/main/charts/nri-prometheus) | +| pixie-chart.enabled | bool | `false` | Install the [`pixie-chart` chart](https://docs.pixielabs.ai/installing-pixie/install-schemes/helm/#3.-deploy) | + +## Maintainers + +* [juanjjaramillo](https://github.com/juanjjaramillo) +* [csongnr](https://github.com/csongnr) +* [dbudziwojskiNR](https://github.com/dbudziwojskiNR) diff --git
a/charts/new-relic/nri-bundle/5.0.94/README.md.gotmpl b/charts/new-relic/nri-bundle/5.0.94/README.md.gotmpl new file mode 100644 index 0000000000..269c4925a2 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/README.md.gotmpl @@ -0,0 +1,166 @@ +{{ template "chart.header" . }} +{{ template "chart.deprecationWarning" . }} + +{{ template "chart.description" . }} + +{{ template "chart.homepageLine" . }} + +## Bundled charts + +This chart does not deploy anything by itself but has many charts as dependencies. This allows you to easily install and upgrade the New Relic +Kubernetes Integration using only one chart. + +In case you need more information about each component this chart installs, or you are an advanced user who wants to install each component separately, +here is a list of components that this chart installs and where you can find more information about them: + +| Component | Installed by default? | Description | +|------------------------------|-----------------------|-------------| +| [newrelic-infrastructure](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure) | Yes | Sends metrics about nodes, cluster objects (e.g. Deployments, Pods), and the control plane to New Relic. | +| [nri-metadata-injection](https://github.com/newrelic/k8s-metadata-injection/tree/main/charts/nri-metadata-injection) | Yes | Enriches New Relic-instrumented applications (APM) with Kubernetes information. | +| [kube-state-metrics](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics) | | Required for `newrelic-infrastructure` to gather cluster-level metrics. | +| [nri-kube-events](https://github.com/newrelic/nri-kube-events/tree/main/charts/nri-kube-events) | | Reports Kubernetes events to New Relic.
| +| [newrelic-infra-operator](https://github.com/newrelic/newrelic-infra-operator/tree/main/charts/newrelic-infra-operator) | | (Beta) Used with Fargate or serverless environments to inject `newrelic-infrastructure` as a sidecar instead of the usual DaemonSet. | +| [newrelic-k8s-metrics-adapter](https://github.com/newrelic/newrelic-k8s-metrics-adapter/tree/main/charts/newrelic-k8s-metrics-adapter) | | (Beta) Provides a source of data for Horizontal Pod Autoscalers (HPA) based on a NRQL query from New Relic. | +| [newrelic-logging](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging) | | Sends logs for Kubernetes components and workloads running on the cluster to New Relic. | +| [nri-prometheus](https://github.com/newrelic/nri-prometheus/tree/main/charts/nri-prometheus) | | Sends metrics from applications exposing Prometheus metrics to New Relic. | +| [newrelic-prometheus-configurator](https://github.com/newrelic/newrelic-prometheus-configurator/tree/master/charts/newrelic-prometheus-agent) | | Configures instances of Prometheus in Agent mode to send metrics to the New Relic Prometheus endpoint. | +| [newrelic-pixie](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-pixie) | | Connects to the Pixie API and enables the New Relic plugin in Pixie. The plugin allows you to export data from Pixie to New Relic for long-term data retention. | +| [Pixie](https://docs.pixielabs.ai/installing-pixie/install-schemes/helm/#3.-deploy) | | Is an open source observability tool for Kubernetes applications that uses eBPF to automatically capture telemetry data without the need for manual instrumentation. | +| [k8s-agents-operator](https://github.com/newrelic/k8s-agents-operator/tree/main/charts/k8s-agents-operator) | | (Preview) Streamlines full-stack observability for Kubernetes environments by automating APM instrumentation alongside Kubernetes agent deployment. 
| + +## Configure components + +It is possible to configure settings for the individual charts this chart groups by specifying values for them under a key using the name of the chart, +as specified in [helm documentation](https://helm.sh/docs/chart_template_guide/subcharts_and_globals). + +For example, by adding the following to the `values.yml` file: + +```yaml +# Configuration settings for the newrelic-infrastructure chart +newrelic-infrastructure: + # Any key defined in the values.yml file for the newrelic-infrastructure chart can be configured here: + # https://github.com/newrelic/nri-kubernetes/blob/main/charts/newrelic-infrastructure/values.yaml + + verboseLog: false + + resources: + limits: + memory: 512M +``` + +It is possible to override any entry of the [`newrelic-infrastructure`](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure) +chart, as defined in their [`values.yml` file](https://github.com/newrelic/nri-kubernetes/blob/main/charts/newrelic-infrastructure/values.yaml). + +The same approach can be followed to update any of the subcharts. + +After making these changes to the `values.yml` file, or a custom values file, make sure to apply them using: + +``` +$ helm upgrade --reuse-values -f values.yaml [RELEASE] newrelic/nri-bundle +``` + +Where `[RELEASE]` is the name of the helm release, e.g. `newrelic-bundle`. + + +## Monitor on host integrations + +If you wish to monitor services running on Kubernetes, you can provide integrations +configuration under `integrations_config` that will be passed down to the `newrelic-infrastructure` chart. + +You just need to create a new entry where the "name" is the filename of the configuration file and the data is the content of +the integration configuration. The name must end in ".yaml" as this will be the +filename generated and the Infrastructure agent only looks for YAML files.
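+
+As a minimal sketch of that naming rule (the `nri-example-config.yaml` entry and its contents are illustrative, not a real integration):
+
+```yaml
+newrelic-infrastructure:
+  integrations_config:
+    # "name" becomes the generated file name, so it must end in ".yaml".
+    - name: nri-example-config.yaml
+      # "data" holds the integration configuration itself.
+      data:
+        integrations:
+          - name: nri-example
+            env:
+              PORT: 9999
+```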
+ +The data part is the actual integration configuration as described in the spec here: +https://docs.newrelic.com/docs/integrations/integrations-sdk/file-specifications/integration-configuration-file-specifications-agent-v180 + +In the following example you can see how to monitor a Redis integration with autodiscovery + +```yaml +newrelic-infrastructure: + integrations: + nri-redis-sampleapp: + discovery: + command: + exec: /var/db/newrelic-infra/nri-discovery-kubernetes --tls --port 10250 + match: + label.app: sampleapp + integrations: + - name: nri-redis + env: + # using the discovered IP as the hostname address + HOSTNAME: ${discovery.ip} + PORT: 6379 + labels: + env: test +``` + +## Bring your own KSM + +New Relic Kubernetes Integration requires an instance of kube-state-metrics (KSM) to be running in the cluster, which this chart pulls as a dependency. If you are already running or want to run your own KSM instance, you will need to make some small adjustments as described below. + +### Bring your own KSM + +If you already have one KSM instance running, you can point `nri-kubernetes` to your instance: + +```yaml +kube-state-metrics: + # Disable bundled KSM. + enabled: false +newrelic-infrastructure: + ksm: + config: + # Selector for your pre-installed KSM Service. You may need to adjust this to fit your existing installation. + selector: "app.kubernetes.io/name=kube-state-metrics" + # Alternatively, you can specify a fixed URL where KSM is available. Doing so will bypass autodiscovery. + #staticUrl: http://ksm.ksm.svc.cluster.local:8080/metrics +``` + +### Run KSM alongside a different version + +If you need to run a different instance of KSM in your cluster, you can still run a separate instance for the Kubernetes Integration to work as intended: + +```yaml +kube-state-metrics: + # Enable bundled KSM. + enabled: true + prometheusScrape: false + customLabels: + # Label unique to this KSM instance. 
+ newrelic.com/custom-ksm: "true" +newrelic-infrastructure: + ksm: + config: + # Use label above as a selector. + selector: "newrelic.com/custom-ksm=true" +``` + +For more information on supported KSM versions, visit the [requirements documentation](https://docs.newrelic.com/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-integration-compatibility-requirements#reqs) + +## Values managed globally + +Some of the subcharts implement [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library), which +means that they honor a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found at +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). + +At the time of writing this document, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this library and +honor global options as described below. + +Note, the value table below is automatically generated from `values.yaml` by `helm-docs`. If you need to add new fields or update existing fields, please update the `values.yaml` and then run `helm-docs` to update this value table. + +{{ template "chart.valuesSection" .
}} + +{{ if .Maintainers }} +## Maintainers +{{ range .Maintainers }} +{{- if .Name }} +{{- if .Url }} +* [{{ .Name }}]({{ .Url }}) +{{- else }} +* {{ .Name }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/app-readme.md b/charts/new-relic/nri-bundle/5.0.94/app-readme.md new file mode 100644 index 0000000000..61e5507873 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/app-readme.md @@ -0,0 +1,5 @@ +# New Relic Kubernetes Integration + +New Relic's Kubernetes integration gives you full observability into the health and performance of your environment, no matter whether you run Kubernetes on-premises or in the cloud. With our [cluster explorer](https://docs.newrelic.com/docs/integrations/kubernetes-integration/cluster-explorer/kubernetes-cluster-explorer), you can cut through layers of complexity to see how your cluster is performing, from the heights of the control plane down to applications running on a single pod. + +You can see the power of the Kubernetes integration in the [cluster explorer](https://docs.newrelic.com/docs/integrations/kubernetes-integration/cluster-explorer/kubernetes-cluster-explorer), where the full picture of a cluster is made available on a single screen: nodes and pods are visualized according to their health and performance, with pending and alerting nodes in the innermost circles. [Predefined alert conditions](https://docs.newrelic.com/docs/integrations/kubernetes-integration/kubernetes-events/kubernetes-integration-predefined-alert-policy) help you troubleshoot issues right from the start. Clicking each node reveals its status and how each app is performing. 
\ No newline at end of file diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/.helmignore new file mode 100644 index 0000000000..0e8a0eb36f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/Chart.yaml new file mode 100644 index 0000000000..13b99710bd --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/Chart.yaml @@ -0,0 +1,16 @@ +apiVersion: v2 +appVersion: 0.13.0 +description: A Helm chart for the Kubernetes Agents Operator +home: https://github.com/newrelic/k8s-agents-operator/blob/main/charts/k8s-agents-operator/README.md +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +name: k8s-agents-operator +sources: +- https://github.com/newrelic/k8s-agents-operator +type: application +version: 0.13.0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/README.md new file mode 100644 index 0000000000..5c1da2d0a7 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/README.md @@ -0,0 +1,204 @@ +# k8s-agents-operator + +![Version: 
0.13.0](https://img.shields.io/badge/Version-0.13.0-informational?style=flat-square) ![Type: application](https://img.shields.io/badge/Type-application-informational?style=flat-square) ![AppVersion: 0.13.0](https://img.shields.io/badge/AppVersion-0.13.0-informational?style=flat-square) + +A Helm chart for the Kubernetes Agents Operator + +**Homepage:** + +## Prerequisites + +[Helm](https://helm.sh) must be installed to use the charts. Please refer to Helm's [documentation](https://helm.sh/docs) to get started. + +## Installation + +### Requirements + +Add the `k8s-agents-operator` Helm chart repository: +```shell +helm repo add k8s-agents-operator https://newrelic.github.io/k8s-agents-operator +``` + +### Instrumentation + +Install the [`k8s-agents-operator`](https://github.com/newrelic/k8s-agents-operator) Helm chart: +```shell +helm upgrade --install k8s-agents-operator k8s-agents-operator/k8s-agents-operator \ + --namespace newrelic \ + --create-namespace \ + --values your-custom-values.yaml +``` + +### Monitored namespaces + +For each namespace you want the operator to instrument, create a secret containing a valid New Relic ingest license key: +```shell +kubectl create secret generic newrelic-key-secret \ + --namespace my-monitored-namespace \ + --from-literal=new_relic_license_key= +``` + +Similarly, for each namespace you need to instrument, create the `Instrumentation` custom resource, specifying which APM agents you want to instrument.
All available APM agent docker images and corresponding tags are listed on DockerHub: +* [Java](https://hub.docker.com/repository/docker/newrelic/newrelic-java-init/general) +* [Node](https://hub.docker.com/repository/docker/newrelic/newrelic-node-init/general) +* [Python](https://hub.docker.com/repository/docker/newrelic/newrelic-python-init/general) +* [.NET](https://hub.docker.com/repository/docker/newrelic/newrelic-dotnet-init/general) +* [Ruby](https://hub.docker.com/repository/docker/newrelic/newrelic-ruby-init/general) + +```yaml +apiVersion: newrelic.com/v1alpha1 +kind: Instrumentation +metadata: + labels: + app.kubernetes.io/name: instrumentation + app.kubernetes.io/created-by: k8s-agents-operator + name: newrelic-instrumentation +spec: + java: + image: newrelic/newrelic-java-init:latest + # env: + # Example New Relic agent supported environment variables + # - name: NEW_RELIC_LABELS + # value: "environment:auto-injection" + # Example overriding the appName configuration + # - name: NEW_RELIC_POD_NAME + # valueFrom: + # fieldRef: + # fieldPath: metadata.name + # - name: NEW_RELIC_APP_NAME + # value: "$(NEW_RELIC_LABELS)-$(NEW_RELIC_POD_NAME)" + nodejs: + image: newrelic/newrelic-node-init:latest + python: + image: newrelic/newrelic-python-init:latest + dotnet: + image: newrelic/newrelic-dotnet-init:latest + ruby: + image: newrelic/newrelic-ruby-init:latest +``` +In the example above, we show how you can configure the agent settings globally using environment variables. 
See each agent's configuration documentation for available configuration options: +* [Java](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/) +* [Node](https://docs.newrelic.com/docs/apm/agents/nodejs-agent/installation-configuration/nodejs-agent-configuration/) +* [Python](https://docs.newrelic.com/docs/apm/agents/python-agent/configuration/python-agent-configuration/) +* [.NET](https://docs.newrelic.com/docs/apm/agents/net-agent/configuration/net-agent-configuration/) +* [Ruby](https://docs.newrelic.com/docs/apm/agents/ruby-agent/configuration/ruby-agent-configuration/) + +Global agent settings can be overridden in your deployment manifest if a different configuration is required. + +### Annotations + +The `k8s-agents-operator` looks for language-specific annotations when your pods are being scheduled to know which applications you want to monitor. + +Below are the currently supported annotations: +```yaml +instrumentation.newrelic.com/inject-java: "true" +instrumentation.newrelic.com/inject-nodejs: "true" +instrumentation.newrelic.com/inject-python: "true" +instrumentation.newrelic.com/inject-dotnet: "true" +instrumentation.newrelic.com/inject-ruby: "true" +``` + +Example deployment with annotation to instrument the Java agent: +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: spring-petclinic +spec: + selector: + matchLabels: + app: spring-petclinic + replicas: 1 + template: + metadata: + labels: + app: spring-petclinic + annotations: + instrumentation.newrelic.com/inject-java: "true" + spec: + containers: + - name: spring-petclinic + image: ghcr.io/pavolloffay/spring-petclinic:latest + ports: + - containerPort: 8080 + env: + - name: NEW_RELIC_APP_NAME + value: spring-petclinic-demo +``` + +### cert-manager + +The K8s Agents Operator supports the use of [`cert-manager`](https://github.com/cert-manager/cert-manager) if preferred. 
+ +Install the [`cert-manager`](https://github.com/cert-manager/cert-manager) Helm chart: +```shell +helm install cert-manager jetstack/cert-manager \ + --namespace cert-manager \ + --create-namespace \ + --set crds.enabled=true +``` + +In your `values.yaml` file, set `admissionWebhooks.autoGenerateCert.enabled: false` and `admissionWebhooks.certManager.enabled: true`. Then install the chart as normal. + +## Available Chart Releases + +To see the available charts: +```shell +helm search repo k8s-agents-operator +``` + +If you want to see a list of all available charts and releases, check [index.yaml](https://newrelic.github.io/k8s-agents-operator/index.yaml). + +## Source Code + +* + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| admissionWebhooks | object | `{"autoGenerateCert":{"certPeriodDays":365,"enabled":true,"recreate":true},"caFile":"","certFile":"","certManager":{"enabled":false},"create":true,"keyFile":""}` | Admission webhooks make sure only requests with correctly formatted rules will get into the Operator | +| admissionWebhooks.autoGenerateCert.certPeriodDays | int | `365` | Cert validity period time in days. | +| admissionWebhooks.autoGenerateCert.enabled | bool | `true` | If true and certManager.enabled is false, Helm will automatically create a self-signed cert and secret for you. | +| admissionWebhooks.autoGenerateCert.recreate | bool | `true` | If set to true, new webhook key/certificate is generated on helm upgrade. | +| admissionWebhooks.caFile | string | `""` | Path to the CA cert. | +| admissionWebhooks.certFile | string | `""` | Path to your own PEM-encoded certificate. | +| admissionWebhooks.certManager.enabled | bool | `false` | If true and autoGenerateCert.enabled is false, cert-manager will create a self-signed cert and secret for you. | +| admissionWebhooks.keyFile | string | `""` | Path to your own PEM-encoded private key. 
| +| controllerManager.kubeRbacProxy.image.repository | string | `"gcr.io/kubebuilder/kube-rbac-proxy"` | | +| controllerManager.kubeRbacProxy.image.tag | string | `"v0.14.0"` | | +| controllerManager.kubeRbacProxy.resources.limits.cpu | string | `"500m"` | | +| controllerManager.kubeRbacProxy.resources.limits.memory | string | `"128Mi"` | | +| controllerManager.kubeRbacProxy.resources.requests.cpu | string | `"5m"` | | +| controllerManager.kubeRbacProxy.resources.requests.memory | string | `"64Mi"` | | +| controllerManager.manager.image.pullPolicy | string | `nil` | | +| controllerManager.manager.image.repository | string | `"newrelic/k8s-agents-operator"` | | +| controllerManager.manager.image.tag | string | `nil` | | +| controllerManager.manager.leaderElection | object | `{"enabled":true}` | Enable leader election mechanism for protecting against split brain if multiple operator pods/replicas are started | +| controllerManager.manager.resources.requests.cpu | string | `"100m"` | | +| controllerManager.manager.resources.requests.memory | string | `"64Mi"` | | +| controllerManager.manager.serviceAccount.create | bool | `true` | | +| controllerManager.replicas | int | `1` | | +| kubernetesClusterDomain | string | `"cluster.local"` | | +| licenseKey | string | `""` | Sets the license key to use.
It can also be configured with `global.licenseKey` | +| metricsService.ports[0].name | string | `"https"` | | +| metricsService.ports[0].port | int | `8443` | | +| metricsService.ports[0].protocol | string | `"TCP"` | | +| metricsService.ports[0].targetPort | string | `"https"` | | +| metricsService.type | string | `"ClusterIP"` | | +| securityContext | object | `{"fsGroup":65532,"runAsGroup":65532,"runAsNonRoot":true,"runAsUser":65532}` | SecurityContext holds pod-level security attributes and common container settings | +| webhookService.ports[0].port | int | `443` | | +| webhookService.ports[0].protocol | string | `"TCP"` | | +| webhookService.ports[0].targetPort | int | `9443` | | +| webhookService.type | string | `"ClusterIP"` | + +## Maintainers + +| Name | Email | Url | +| ---- | ------ | --- | +| juanjjaramillo | | | +| csongnr | | | +| dbudziwojskiNR | | | + +---------------------------------------------- +Autogenerated from chart metadata using [helm-docs v1.13.1](https://github.com/norwoodj/helm-docs/releases/v1.13.1) diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/README.md.gotmpl b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/README.md.gotmpl new file mode 100644 index 0000000000..5e583b703e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/README.md.gotmpl @@ -0,0 +1,162 @@ +{{ template "chart.header" . }} + +{{ template "chart.deprecationWarning" . }} + +{{ template "chart.badgesSection" . }} + +{{ template "chart.description" . }} + +{{ template "chart.homepageLine" . }} + +## Prerequisites + +[Helm](https://helm.sh) must be installed to use the charts. Please refer to Helm's [documentation](https://helm.sh/docs) to get started.
+ +## Installation + +### Requirements + +Add the `k8s-agents-operator` Helm chart repository: +```shell +helm repo add k8s-agents-operator https://newrelic.github.io/k8s-agents-operator +``` + +### Instrumentation + +Install the [`k8s-agents-operator`](https://github.com/newrelic/k8s-agents-operator) Helm chart: +```shell +helm upgrade --install k8s-agents-operator k8s-agents-operator/k8s-agents-operator \ + --namespace newrelic \ + --create-namespace \ + --values your-custom-values.yaml +``` + +### Monitored namespaces + +For each namespace you want the operator to instrument, create a secret containing a valid New Relic ingest license key: +```shell +kubectl create secret generic newrelic-key-secret \ + --namespace my-monitored-namespace \ + --from-literal=new_relic_license_key= +``` + +Similarly, for each namespace you need to instrument, create the `Instrumentation` custom resource, specifying which APM agents you want to instrument. All available APM agent Docker images and corresponding tags are listed on Docker Hub: +* [Java](https://hub.docker.com/repository/docker/newrelic/newrelic-java-init/general) +* [Node](https://hub.docker.com/repository/docker/newrelic/newrelic-node-init/general) +* [Python](https://hub.docker.com/repository/docker/newrelic/newrelic-python-init/general) +* [.NET](https://hub.docker.com/repository/docker/newrelic/newrelic-dotnet-init/general) +* [Ruby](https://hub.docker.com/repository/docker/newrelic/newrelic-ruby-init/general) + +```yaml +apiVersion: newrelic.com/v1alpha1 +kind: Instrumentation +metadata: + labels: + app.kubernetes.io/name: instrumentation + app.kubernetes.io/created-by: k8s-agents-operator + name: newrelic-instrumentation +spec: + java: + image: newrelic/newrelic-java-init:latest + # env: + # Example New Relic agent supported environment variables + # - name: NEW_RELIC_LABELS + # value: "environment:auto-injection" + # Example overriding the appName configuration + # - name: NEW_RELIC_POD_NAME + # valueFrom: + 
# fieldRef: + # fieldPath: metadata.name + # - name: NEW_RELIC_APP_NAME + # value: "$(NEW_RELIC_LABELS)-$(NEW_RELIC_POD_NAME)" + nodejs: + image: newrelic/newrelic-node-init:latest + python: + image: newrelic/newrelic-python-init:latest + dotnet: + image: newrelic/newrelic-dotnet-init:latest + ruby: + image: newrelic/newrelic-ruby-init:latest +``` +In the example above, we show how you can configure the agent settings globally using environment variables. See each agent's configuration documentation for available configuration options: +* [Java](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/) +* [Node](https://docs.newrelic.com/docs/apm/agents/nodejs-agent/installation-configuration/nodejs-agent-configuration/) +* [Python](https://docs.newrelic.com/docs/apm/agents/python-agent/configuration/python-agent-configuration/) +* [.NET](https://docs.newrelic.com/docs/apm/agents/net-agent/configuration/net-agent-configuration/) +* [Ruby](https://docs.newrelic.com/docs/apm/agents/ruby-agent/configuration/ruby-agent-configuration/) + +Global agent settings can be overridden in your deployment manifest if a different configuration is required. + +### Annotations + +The `k8s-agents-operator` looks for language-specific annotations when your pods are scheduled to determine which applications you want to monitor. 
+ +Below are the currently supported annotations: +```yaml +instrumentation.newrelic.com/inject-java: "true" +instrumentation.newrelic.com/inject-nodejs: "true" +instrumentation.newrelic.com/inject-python: "true" +instrumentation.newrelic.com/inject-dotnet: "true" +instrumentation.newrelic.com/inject-ruby: "true" +``` + +Example deployment with annotation to instrument the Java agent: +```yaml +apiVersion: apps/v1 +kind: Deployment +metadata: + name: spring-petclinic +spec: + selector: + matchLabels: + app: spring-petclinic + replicas: 1 + template: + metadata: + labels: + app: spring-petclinic + annotations: + instrumentation.newrelic.com/inject-java: "true" + spec: + containers: + - name: spring-petclinic + image: ghcr.io/pavolloffay/spring-petclinic:latest + ports: + - containerPort: 8080 + env: + - name: NEW_RELIC_APP_NAME + value: spring-petclinic-demo +``` + +### cert-manager + +The K8s Agents Operator supports the use of [`cert-manager`](https://github.com/cert-manager/cert-manager) if preferred. + +Install the [`cert-manager`](https://github.com/cert-manager/cert-manager) Helm chart: +```shell +helm install cert-manager jetstack/cert-manager \ + --namespace cert-manager \ + --create-namespace \ + --set crds.enabled=true +``` + +In your `values.yaml` file, set `admissionWebhooks.autoGenerateCert.enabled: false` and `admissionWebhooks.certManager.enabled: true`. Then install the chart as normal. + +## Available Chart Releases + +To see the available charts: +```shell +helm search repo k8s-agents-operator +``` + +If you want to see a list of all available charts and releases, check [index.yaml](https://newrelic.github.io/k8s-agents-operator/index.yaml). + +{{ template "chart.sourcesSection" . }} + +{{ template "chart.requirementsSection" . }} + +{{ template "chart.valuesSection" . }} + +{{ template "chart.maintainersSection" . }} + +{{ template "helm-docs.versionFooter" . 
}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/NOTES.txt b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/NOTES.txt new file mode 100644 index 0000000000..b330f475d0 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/NOTES.txt @@ -0,0 +1,36 @@ +This project is currently in preview. +Issues and contributions should be reported to the project's GitHub. +{{- if (include "k8s-agents-operator.areValuesValid" .) }} +===================================== + + ******** + **************** + ********** **********, + &&&**** ****/((( + &&&&&&& (((((( + &&&&&&&&&& (((((( + &&&&&&&& (((((( + &&&&& (((((( + &&&&& (((((((( + &&&&& .(((((((((( + &&&&&(((((((( + &&&(((, + +Your deployment of the New Relic Agent Operator is complete. +You can check on the progress of this by running the following command: + +kubectl get deployments -o wide -w --namespace {{ .Release.Namespace }} {{ template "k8s-agents-operator.fullname" . }} + +WARNING: This deployment will be incomplete until you configure your Instrumentation custom resource definition. +===================================== + +Please visit https://github.com/newrelic/k8s-agents-operator for instructions on how to create & configure the +Instrumentation custom resource definition required by the Operator. +{{- else }} + +############################################################################## +#### ERROR: You did not set a license key. #### +############################################################################## + +This deployment will be incomplete until you get your ingest license key from New Relic. 
+{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/_helpers.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/_helpers.tpl new file mode 100644 index 0000000000..f6ed6d04ed --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/_helpers.tpl @@ -0,0 +1,121 @@ +{{/* +Expand the name of the chart. +*/}} +{{- define "k8s-agents-operator.name" -}} +{{- .Chart.Name | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "k8s-agents-operator.fullname" -}} +{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "k8s-agents-operator.chart" -}} +{{- printf "%s" .Chart.Name | replace "+" "_" | trunc 63 | trimSuffix "-" }} +{{- end }} + +{{/* +Common labels +*/}} +{{- define "k8s-agents-operator.labels" -}} +helm.sh/chart: {{ include "k8s-agents-operator.chart" . }} +{{ include "k8s-agents-operator.selectorLabels" . }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "k8s-agents-operator.selectorLabels" -}} +app.kubernetes.io/name: {{ include "k8s-agents-operator.chart" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} + +{{/* +Create the name of the service account to use +*/}} +{{- define "k8s-agents-operator.serviceAccountName" -}} +{{- if .Values.controllerManager.manager.serviceAccount.create }} +{{- default (include "k8s-agents-operator.name" .) 
.Values.controllerManager.manager.serviceAccount.name }} +{{- else }} +{{- default "default" .Values.controllerManager.manager.serviceAccount.name }} +{{- end }} +{{- end }} + +{{/* +Return the licenseKey +*/}} +{{- define "k8s-agents-operator.licenseKey" -}} +{{- if .Values.global}} + {{- if .Values.global.licenseKey }} + {{- .Values.global.licenseKey -}} + {{- else -}} + {{- .Values.licenseKey | default "" -}} + {{- end -}} +{{- else -}} + {{- .Values.licenseKey | default "" -}} +{{- end -}} +{{- end -}} + +{{/* +Returns if the template should render, it checks if the required values are set. +*/}} +{{- define "k8s-agents-operator.areValuesValid" -}} +{{- $licenseKey := include "k8s-agents-operator.licenseKey" . -}} +{{- and (or $licenseKey)}} +{{- end -}} + +{{/* +Controller manager service certificate's secret. +*/}} +{{- define "k8s-agents-operator.certificateSecret" -}} +{{- printf "%s-controller-manager-service-cert" (include "k8s-agents-operator.fullname" .) | trunc 63 | trimSuffix "-" -}} +{{- end }} + +{{/* +Return certificate and CA for Webhooks. +It handles variants when a cert has to be generated by Helm, +a cert is loaded from an existing secret or is provided via `.Values` +*/}} +{{- define "k8s-agents-operator.webhookCert" -}} +{{- $caCert := "" }} +{{- $clientCert := "" }} +{{- $clientKey := "" }} +{{- if .Values.admissionWebhooks.autoGenerateCert.enabled }} + {{- $prevSecret := (lookup "v1" "Secret" .Release.Namespace (include "k8s-agents-operator.certificateSecret" . )) }} + {{- if and (not .Values.admissionWebhooks.autoGenerateCert.recreate) $prevSecret }} + {{- $clientCert = index $prevSecret "data" "tls.crt" }} + {{- $clientKey = index $prevSecret "data" "tls.key" }} + {{- $caCert = index $prevSecret "data" "ca.crt" }} + {{- if not $caCert }} + {{- $prevHook := (lookup "admissionregistration.k8s.io/v1" "MutatingWebhookConfiguration" .Release.Namespace (print (include "k8s-agents-operator.fullname" . 
) "-mutation")) }} + {{- if not (eq (toString $prevHook) "") }} + {{- $caCert = (first $prevHook.webhooks).clientConfig.caBundle }} + {{- end }} + {{- end }} + {{- else }} + {{- $certValidity := int .Values.admissionWebhooks.autoGenerateCert.certPeriodDays | default 365 }} + {{- $ca := genCA "k8s-agents-operator-operator-ca" $certValidity }} + {{- $domain1 := printf "%s-webhook-service.%s.svc" (include "k8s-agents-operator.fullname" .) $.Release.Namespace }} + {{- $domain2 := printf "%s-webhook-service.%s.svc.%s" (include "k8s-agents-operator.fullname" .) $.Release.Namespace $.Values.kubernetesClusterDomain }} + {{- $domains := list $domain1 $domain2 }} + {{- $cert := genSignedCert (include "k8s-agents-operator.fullname" .) nil $domains $certValidity $ca }} + {{- $clientCert = b64enc $cert.Cert }} + {{- $clientKey = b64enc $cert.Key }} + {{- $caCert = b64enc $ca.Cert }} + {{- end }} +{{- else }} + {{- $clientCert = .Files.Get .Values.admissionWebhooks.certFile | b64enc }} + {{- $clientKey = .Files.Get .Values.admissionWebhooks.keyFile | b64enc }} + {{- $caCert = .Files.Get .Values.admissionWebhooks.caFile | b64enc }} +{{- end }} +{{- $result := dict "clientCert" $clientCert "clientKey" $clientKey "caCert" $caCert }} +{{- $result | toYaml }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/certmanager.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/certmanager.yaml new file mode 100644 index 0000000000..ccef9f25bc --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/certmanager.yaml @@ -0,0 +1,30 @@ +{{- if and .Values.admissionWebhooks.create .Values.admissionWebhooks.certManager.enabled }} +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: {{ template "k8s-agents-operator.fullname" . }}-serving-cert + namespace: {{ .Release.Namespace }} + labels: + {{- include "k8s-agents-operator.labels" . 
| nindent 4 }} +spec: + dnsNames: + - '{{ template "k8s-agents-operator.fullname" . }}-webhook-service.{{ .Release.Namespace }}.svc' + - '{{ template "k8s-agents-operator.fullname" . }}-webhook-service.{{ .Release.Namespace }}.svc.{{ .Values.kubernetesClusterDomain }}' + issuerRef: + kind: Issuer + name: '{{ template "k8s-agents-operator.fullname" . }}-selfsigned-issuer' + secretName: {{ template "k8s-agents-operator.certificateSecret" . }} + subject: + organizationalUnits: + - k8s-agents-operator +--- +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: {{ template "k8s-agents-operator.fullname" . }}-selfsigned-issuer + namespace: {{ .Release.Namespace }} + labels: + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +spec: + selfSigned: {} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/deployment.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/deployment.yaml new file mode 100644 index 0000000000..8cccd259e3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/deployment.yaml @@ -0,0 +1,91 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ template "k8s-agents-operator.serviceAccountName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ template "k8s-agents-operator.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: + control-plane: controller-manager + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +spec: + replicas: {{ .Values.controllerManager.replicas }} + selector: + matchLabels: + control-plane: controller-manager + {{- include "k8s-agents-operator.labels" . | nindent 6 }} + template: + metadata: + labels: + control-plane: controller-manager + {{- include "k8s-agents-operator.labels" . 
| nindent 8 }} + spec: + containers: + - args: + - --metrics-addr=127.0.0.1:8080 + {{- if .Values.controllerManager.manager.leaderElection.enabled }} + - --enable-leader-election + {{- end }} + - --zap-log-level=info + - --zap-time-encoding=rfc3339nano + env: + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + - name: ENABLE_WEBHOOKS + value: "true" + image: {{ .Values.controllerManager.manager.image.repository }}:{{ .Values.controllerManager.manager.image.tag | default .Chart.AppVersion }} + imagePullPolicy: {{ .Values.controllerManager.manager.image.pullPolicy | default "Always" }} + livenessProbe: + httpGet: + path: /healthz + port: 8081 + initialDelaySeconds: 15 + periodSeconds: 20 + name: manager + ports: + - containerPort: 9443 + name: webhook-server + protocol: TCP + readinessProbe: + httpGet: + path: /readyz + port: 8081 + initialDelaySeconds: 5 + periodSeconds: 10 + resources: {{- toYaml .Values.controllerManager.manager.resources | nindent 10 }} + volumeMounts: + - mountPath: /tmp/k8s-webhook-server/serving-certs + name: cert + readOnly: true + - args: + - --secure-listen-address=0.0.0.0:8443 + - --upstream=http://127.0.0.1:8080/ + - --logtostderr=true + - --v=0 + env: + - name: KUBERNETES_CLUSTER_DOMAIN + value: {{ quote .Values.kubernetesClusterDomain }} + image: {{ .Values.controllerManager.kubeRbacProxy.image.repository }}:{{ .Values.controllerManager.kubeRbacProxy.image.tag | default .Chart.AppVersion }} + name: kube-rbac-proxy + ports: + - containerPort: 8443 + name: https + protocol: TCP + resources: {{- toYaml .Values.controllerManager.kubeRbacProxy.resources | nindent 10 }} + serviceAccountName: {{ template "k8s-agents-operator.serviceAccountName" . }} + terminationGracePeriodSeconds: 10 + {{- if or .Values.admissionWebhooks.create (include "k8s-agents-operator.certificateSecret" . ) }} + volumes: + - name: cert + secret: + defaultMode: 420 + secretName: {{ template "k8s-agents-operator.certificateSecret" . 
}} + {{- end }} + securityContext: + {{- toYaml .Values.securityContext | nindent 8 }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/instrumentation-crd.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/instrumentation-crd.yaml new file mode 100644 index 0000000000..ae81414fb6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/instrumentation-crd.yaml @@ -0,0 +1,1150 @@ +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + name: instrumentations.newrelic.com + annotations: + controller-gen.kubebuilder.io/version: v0.11.3 + labels: + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +spec: + group: newrelic.com + names: + kind: Instrumentation + listKind: InstrumentationList + plural: instrumentations + shortNames: + - nragent + - nragents + singular: instrumentation + scope: Namespaced + versions: + - additionalPrinterColumns: + - jsonPath: .metadata.creationTimestamp + name: Age + type: date + name: v1alpha1 + schema: + openAPIV3Schema: + description: Instrumentation is the Schema for the instrumentations API + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: InstrumentationSpec defines the desired state of Instrumentation + properties: + dotnet: + description: DotNet defines configuration for dotnet auto-instrumentation. + properties: + env: + description: Env defines DotNet specific env vars. If the former + var had been defined, then the other vars would be ignored. + items: + description: EnvVar represents an environment variable present + in a Container. + properties: + name: + description: Name of the environment variable. Must be a + C_IDENTIFIER. + type: string + value: + description: 'Variable references $(VAR_NAME) are expanded + using the previously defined environment variables in + the container and any service environment variables. If + a variable cannot be resolved, the reference in the input + string will be unchanged. Double $$ are reduced to a single + $, which allows for escaping the $(VAR_NAME) syntax: i.e. + "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". + Escaped references will never be expanded, regardless + of whether the variable exists or not. Defaults to "".' + type: string + valueFrom: + description: Source for the environment variable's value. + Cannot be used if value is not empty. + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' 
+ type: string + optional: + description: Specify whether the ConfigMap or its + key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, + metadata.namespace, `metadata.labels['''']`, + `metadata.annotations['''']`, spec.nodeName, + spec.serviceAccountName, status.hostIP, status.podIP, + status.podIPs.' + properties: + apiVersion: + description: Version of the schema the FieldPath + is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the + specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + resourceFieldRef: + description: 'Selects a resource of the container: only + resources limits and requests (limits.cpu, limits.memory, + limits.ephemeral-storage, requests.cpu, requests.memory + and requests.ephemeral-storage) are currently supported.' + properties: + containerName: + description: 'Container name: required for volumes, + optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format of the + exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + secretKeyRef: + description: Selects a key of a secret in the pod's + namespace + properties: + key: + description: The key of the secret to select from. Must + be a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' 
+ type: string + optional: + description: Specify whether the Secret or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + required: + - name + type: object + type: array + image: + description: Image is a container image with DotNet agent and + auto-instrumentation. + type: string + type: object + env: + description: 'Env defines common env vars. There are four layers for + env vars'' definitions and the precedence order is: `original container + env vars` > `language specific env vars` > `common env vars` > `instrument + spec configs'' vars`. If the former var had been defined, then the + other vars would be ignored.' + items: + description: EnvVar represents an environment variable present in + a Container. + properties: + name: + description: Name of the environment variable. Must be a C_IDENTIFIER. + type: string + value: + description: 'Variable references $(VAR_NAME) are expanded using + the previously defined environment variables in the container + and any service environment variables. If a variable cannot + be resolved, the reference in the input string will be unchanged. + Double $$ are reduced to a single $, which allows for escaping + the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the + string literal "$(VAR_NAME)". Escaped references will never + be expanded, regardless of whether the variable exists or + not. Defaults to "".' + type: string + valueFrom: + description: Source for the environment variable's value. Cannot + be used if value is not empty. + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, uid?' 
+ type: string + optional: + description: Specify whether the ConfigMap or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, + metadata.namespace, `metadata.labels['''']`, `metadata.annotations['''']`, + spec.nodeName, spec.serviceAccountName, status.hostIP, + status.podIP, status.podIPs.' + properties: + apiVersion: + description: Version of the schema the FieldPath is + written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the specified + API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + resourceFieldRef: + description: 'Selects a resource of the container: only + resources limits and requests (limits.cpu, limits.memory, + limits.ephemeral-storage, requests.cpu, requests.memory + and requests.ephemeral-storage) are currently supported.' + properties: + containerName: + description: 'Container name: required for volumes, + optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format of the exposed + resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + secretKeyRef: + description: Selects a key of a secret in the pod's namespace + properties: + key: + description: The key of the secret to select from. Must + be a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, uid?' 
+ type: string + optional: + description: Specify whether the Secret or its key must + be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + required: + - name + type: object + type: array + exporter: + description: Exporter defines exporter configuration. + properties: + endpoint: + description: Endpoint is address of the collector with OTLP endpoint. + type: string + type: object + go: + description: Go defines configuration for Go auto-instrumentation. + When using Go auto-instrumentation you must provide a value for + the OTEL_GO_AUTO_TARGET_EXE env var via the Instrumentation env + vars or via the instrumentation.opentelemetry.io/otel-go-auto-target-exe + pod annotation. Failure to set this value causes instrumentation + injection to abort, leaving the original pod unchanged. + properties: + env: + description: 'Env defines Go specific env vars. There are four + layers for env vars'' definitions and the precedence order is: + `original container env vars` > `language specific env vars` + > `common env vars` > `instrument spec configs'' vars`. If the + former var had been defined, then the other vars would be ignored.' + items: + description: EnvVar represents an environment variable present + in a Container. + properties: + name: + description: Name of the environment variable. Must be a + C_IDENTIFIER. + type: string + value: + description: 'Variable references $(VAR_NAME) are expanded + using the previously defined environment variables in + the container and any service environment variables. If + a variable cannot be resolved, the reference in the input + string will be unchanged. Double $$ are reduced to a single + $, which allows for escaping the $(VAR_NAME) syntax: i.e. + "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". + Escaped references will never be expanded, regardless + of whether the variable exists or not. Defaults to "".' 
+ type: string + valueFrom: + description: Source for the environment variable's value. + Cannot be used if value is not empty. + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the ConfigMap or its + key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, + metadata.namespace, `metadata.labels['''']`, + `metadata.annotations['''']`, spec.nodeName, + spec.serviceAccountName, status.hostIP, status.podIP, + status.podIPs.' + properties: + apiVersion: + description: Version of the schema the FieldPath + is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the + specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + resourceFieldRef: + description: 'Selects a resource of the container: only + resources limits and requests (limits.cpu, limits.memory, + limits.ephemeral-storage, requests.cpu, requests.memory + and requests.ephemeral-storage) are currently supported.' 
+ properties: + containerName: + description: 'Container name: required for volumes, + optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format of the + exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + secretKeyRef: + description: Selects a key of a secret in the pod's + namespace + properties: + key: + description: The key of the secret to select from. Must + be a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the Secret or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + required: + - name + type: object + type: array + image: + description: Image is a container image with Go SDK and auto-instrumentation. + type: string + resourceRequirements: + description: Resources describes the compute resource requirements. + properties: + claims: + description: "Claims lists the names of resources, defined + in spec.resourceClaims, that are used by this container. + \n This is an alpha field and requires enabling the DynamicResourceAllocation + feature gate. \n This field is immutable. It can only be + set for containers." + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: Name must match the name of one entry in + pod.spec.resourceClaims of the Pod where this field + is used. 
It makes that resource available inside a + container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: 'Limits describes the maximum amount of compute + resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: 'Requests describes the minimum amount of compute + resources required. If Requests is omitted for a container, + it defaults to Limits if that is explicitly specified, otherwise + to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + type: object + volumeLimitSize: + anyOf: + - type: integer + - type: string + description: VolumeSizeLimit defines size limit for volume used + for auto-instrumentation. The default size is 200Mi. + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + type: object + java: + description: Java defines configuration for java auto-instrumentation. + properties: + env: + description: Env defines java specific env vars. If the former + var had been defined, then the other vars would be ignored. + items: + description: EnvVar represents an environment variable present + in a Container. + properties: + name: + description: Name of the environment variable. Must be a + C_IDENTIFIER. 
+ type: string + value: + description: 'Variable references $(VAR_NAME) are expanded + using the previously defined environment variables in + the container and any service environment variables. If + a variable cannot be resolved, the reference in the input + string will be unchanged. Double $$ are reduced to a single + $, which allows for escaping the $(VAR_NAME) syntax: i.e. + "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". + Escaped references will never be expanded, regardless + of whether the variable exists or not. Defaults to "".' + type: string + valueFrom: + description: Source for the environment variable's value. + Cannot be used if value is not empty. + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the ConfigMap or its + key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, + metadata.namespace, `metadata.labels['''']`, + `metadata.annotations['''']`, spec.nodeName, + spec.serviceAccountName, status.hostIP, status.podIP, + status.podIPs.' + properties: + apiVersion: + description: Version of the schema the FieldPath + is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the + specified API version. 
+ type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + resourceFieldRef: + description: 'Selects a resource of the container: only + resources limits and requests (limits.cpu, limits.memory, + limits.ephemeral-storage, requests.cpu, requests.memory + and requests.ephemeral-storage) are currently supported.' + properties: + containerName: + description: 'Container name: required for volumes, + optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format of the + exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + secretKeyRef: + description: Selects a key of a secret in the pod's + namespace + properties: + key: + description: The key of the secret to select from. Must + be a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the Secret or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + required: + - name + type: object + type: array + image: + description: Image is a container image with javaagent auto-instrumentation + JAR. + type: string + type: object + nodejs: + description: NodeJS defines configuration for nodejs auto-instrumentation. + properties: + env: + description: Env defines nodejs specific env vars. If the former + var had been defined, then the other vars would be ignored. 
+ items: + description: EnvVar represents an environment variable present + in a Container. + properties: + name: + description: Name of the environment variable. Must be a + C_IDENTIFIER. + type: string + value: + description: 'Variable references $(VAR_NAME) are expanded + using the previously defined environment variables in + the container and any service environment variables. If + a variable cannot be resolved, the reference in the input + string will be unchanged. Double $$ are reduced to a single + $, which allows for escaping the $(VAR_NAME) syntax: i.e. + "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". + Escaped references will never be expanded, regardless + of whether the variable exists or not. Defaults to "".' + type: string + valueFrom: + description: Source for the environment variable's value. + Cannot be used if value is not empty. + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the ConfigMap or its + key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, + metadata.namespace, `metadata.labels['''']`, + `metadata.annotations['''']`, spec.nodeName, + spec.serviceAccountName, status.hostIP, status.podIP, + status.podIPs.' + properties: + apiVersion: + description: Version of the schema the FieldPath + is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the + specified API version. 
+ type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + resourceFieldRef: + description: 'Selects a resource of the container: only + resources limits and requests (limits.cpu, limits.memory, + limits.ephemeral-storage, requests.cpu, requests.memory + and requests.ephemeral-storage) are currently supported.' + properties: + containerName: + description: 'Container name: required for volumes, + optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format of the + exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + secretKeyRef: + description: Selects a key of a secret in the pod's + namespace + properties: + key: + description: The key of the secret to select from. Must + be a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the Secret or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + required: + - name + type: object + type: array + image: + description: Image is a container image with NodeJS agent and + auto-instrumentation. + type: string + type: object + php: + description: Php defines configuration for php auto-instrumentation. + properties: + env: + description: Env defines Php specific env vars. If the former + var had been defined, then the other vars would be ignored. 
+ items: + description: EnvVar represents an environment variable present + in a Container. + properties: + name: + description: Name of the environment variable. Must be a + C_IDENTIFIER. + type: string + value: + description: 'Variable references $(VAR_NAME) are expanded + using the previously defined environment variables in + the container and any service environment variables. If + a variable cannot be resolved, the reference in the input + string will be unchanged. Double $$ are reduced to a single + $, which allows for escaping the $(VAR_NAME) syntax: i.e. + "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". + Escaped references will never be expanded, regardless + of whether the variable exists or not. Defaults to "".' + type: string + valueFrom: + description: Source for the environment variable's value. + Cannot be used if value is not empty. + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the ConfigMap or its + key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, + metadata.namespace, `metadata.labels['''']`, + `metadata.annotations['''']`, spec.nodeName, + spec.serviceAccountName, status.hostIP, status.podIP, + status.podIPs.' + properties: + apiVersion: + description: Version of the schema the FieldPath + is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the + specified API version. 
+ type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + resourceFieldRef: + description: 'Selects a resource of the container: only + resources limits and requests (limits.cpu, limits.memory, + limits.ephemeral-storage, requests.cpu, requests.memory + and requests.ephemeral-storage) are currently supported.' + properties: + containerName: + description: 'Container name: required for volumes, + optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format of the + exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + secretKeyRef: + description: Selects a key of a secret in the pod's + namespace + properties: + key: + description: The key of the secret to select from. Must + be a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the Secret or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + required: + - name + type: object + type: array + image: + description: Image is a container image with Php agent and auto-instrumentation. + type: string + type: object + propagators: + description: Propagators defines inter-process context propagation + configuration. Values in this list will be set in the OTEL_PROPAGATORS + env var. Enum=tracecontext;none + items: + description: Propagator represents the propagation type. 
+ enum: + - tracecontext + - none + type: string + type: array + python: + description: Python defines configuration for python auto-instrumentation. + properties: + env: + description: Env defines python specific env vars. If the former + var had been defined, then the other vars would be ignored. + items: + description: EnvVar represents an environment variable present + in a Container. + properties: + name: + description: Name of the environment variable. Must be a + C_IDENTIFIER. + type: string + value: + description: 'Variable references $(VAR_NAME) are expanded + using the previously defined environment variables in + the container and any service environment variables. If + a variable cannot be resolved, the reference in the input + string will be unchanged. Double $$ are reduced to a single + $, which allows for escaping the $(VAR_NAME) syntax: i.e. + "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". + Escaped references will never be expanded, regardless + of whether the variable exists or not. Defaults to "".' + type: string + valueFrom: + description: Source for the environment variable's value. + Cannot be used if value is not empty. + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the ConfigMap or its + key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, + metadata.namespace, `metadata.labels['''']`, + `metadata.annotations['''']`, spec.nodeName, + spec.serviceAccountName, status.hostIP, status.podIP, + status.podIPs.' 
+ properties: + apiVersion: + description: Version of the schema the FieldPath + is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the + specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + resourceFieldRef: + description: 'Selects a resource of the container: only + resources limits and requests (limits.cpu, limits.memory, + limits.ephemeral-storage, requests.cpu, requests.memory + and requests.ephemeral-storage) are currently supported.' + properties: + containerName: + description: 'Container name: required for volumes, + optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format of the + exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + secretKeyRef: + description: Selects a key of a secret in the pod's + namespace + properties: + key: + description: The key of the secret to select from. Must + be a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the Secret or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + required: + - name + type: object + type: array + image: + description: Image is a container image with Python agent and + auto-instrumentation. + type: string + type: object + ruby: + description: Ruby defines configuration for ruby auto-instrumentation. 
+ properties: + env: + description: Env defines Ruby specific env vars. If the former + var had been defined, then the other vars would be ignored. + items: + description: EnvVar represents an environment variable present + in a Container. + properties: + name: + description: Name of the environment variable. Must be a + C_IDENTIFIER. + type: string + value: + description: 'Variable references $(VAR_NAME) are expanded + using the previously defined environment variables in + the container and any service environment variables. If + a variable cannot be resolved, the reference in the input + string will be unchanged. Double $$ are reduced to a single + $, which allows for escaping the $(VAR_NAME) syntax: i.e. + "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". + Escaped references will never be expanded, regardless + of whether the variable exists or not. Defaults to "".' + type: string + valueFrom: + description: Source for the environment variable's value. + Cannot be used if value is not empty. + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the ConfigMap or its + key must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, + metadata.namespace, `metadata.labels['''']`, + `metadata.annotations['''']`, spec.nodeName, + spec.serviceAccountName, status.hostIP, status.podIP, + status.podIPs.' + properties: + apiVersion: + description: Version of the schema the FieldPath + is written in terms of, defaults to "v1". 
+ type: string + fieldPath: + description: Path of the field to select in the + specified API version. + type: string + required: + - fieldPath + type: object + x-kubernetes-map-type: atomic + resourceFieldRef: + description: 'Selects a resource of the container: only + resources limits and requests (limits.cpu, limits.memory, + limits.ephemeral-storage, requests.cpu, requests.memory + and requests.ephemeral-storage) are currently supported.' + properties: + containerName: + description: 'Container name: required for volumes, + optional for env vars' + type: string + divisor: + anyOf: + - type: integer + - type: string + description: Specifies the output format of the + exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + required: + - resource + type: object + x-kubernetes-map-type: atomic + secretKeyRef: + description: Selects a key of a secret in the pod's + namespace + properties: + key: + description: The key of the secret to select from. Must + be a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names + TODO: Add other useful fields. apiVersion, kind, + uid?' + type: string + optional: + description: Specify whether the Secret or its key + must be defined + type: boolean + required: + - key + type: object + x-kubernetes-map-type: atomic + type: object + required: + - name + type: object + type: array + image: + description: Image is a container image with Ruby agent and + auto-instrumentation. + type: string + type: object + resource: + description: Resource defines the configuration for the resource attributes, + as defined by the OpenTelemetry specification. 
+ properties: + addK8sUIDAttributes: + description: AddK8sUIDAttributes defines whether K8s UID attributes + should be collected (e.g. k8s.deployment.uid). + type: boolean + resourceAttributes: + additionalProperties: + type: string + description: 'Attributes defines attributes that are added to + the resource. For example environment: dev' + type: object + type: object + sampler: + description: Sampler defines sampling configuration. + properties: + argument: + description: Argument defines sampler argument. The value depends + on the sampler type. For instance for parentbased_traceidratio + sampler type it is a number in range [0..1] e.g. 0.25. The value + will be set in the OTEL_TRACES_SAMPLER_ARG env var. + type: string + type: + description: Type defines sampler type. The value will be set + in the OTEL_TRACES_SAMPLER env var. The value can be for instance + parentbased_always_on, parentbased_always_off, parentbased_traceidratio... + enum: + - always_on + - always_off + - traceidratio + - parentbased_always_on + - parentbased_always_off + - parentbased_traceidratio + type: string + type: object + type: object + status: + description: InstrumentationStatus defines the observed state of Instrumentation + type: object + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: null + storedVersions: null \ No newline at end of file diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/leader-election-rbac.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/leader-election-rbac.yaml new file mode 100644 index 0000000000..5a42149205 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/leader-election-rbac.yaml @@ -0,0 +1,51 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: {{ template "k8s-agents-operator.fullname" . 
}}-leader-election-role + namespace: {{ .Release.Namespace }} + labels: + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +rules: +- apiGroups: + - "" + resources: + - configmaps + verbs: + - get + - list + - watch + - create + - update + - patch + - delete +- apiGroups: + - "" + resources: + - configmaps/status + verbs: + - get + - update + - patch +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: {{ template "k8s-agents-operator.fullname" . }}-leader-election-rolebinding + namespace: {{ .Release.Namespace }} + labels: + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: '{{ template "k8s-agents-operator.fullname" . }}-leader-election-role' +subjects: +- kind: ServiceAccount + name: '{{ template "k8s-agents-operator.serviceAccountName" . }}' + namespace: '{{ .Release.Namespace }}' diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/manager-rbac.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/manager-rbac.yaml new file mode 100644 index 0000000000..062b5821b7 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/manager-rbac.yaml @@ -0,0 +1,77 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ template "k8s-agents-operator.fullname" . }}-manager-role + labels: + {{- include "k8s-agents-operator.labels" . 
| nindent 4 }} +rules: +- apiGroups: + - "" + resources: + - events + verbs: + - create + - patch +- apiGroups: + - "" + resources: + - namespaces + verbs: + - list + - watch +- apiGroups: + - apps + resources: + - replicasets + verbs: + - get + - list + - watch +- apiGroups: + - coordination.k8s.io + resources: + - leases + verbs: + - create + - get + - list + - update +- apiGroups: + - newrelic.com + resources: + - instrumentations + verbs: + - get + - list + - patch + - update + - watch +- apiGroups: + - route.openshift.io + resources: + - routes + - routes/custom-host + verbs: + - create + - delete + - get + - list + - patch + - update + - watch +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ template "k8s-agents-operator.fullname" . }}-manager-rolebinding + namespace: {{ .Release.Namespace }} + labels: + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: '{{ template "k8s-agents-operator.fullname" . }}-manager-role' +subjects: +- kind: ServiceAccount + name: '{{ template "k8s-agents-operator.serviceAccountName" . }}' + namespace: '{{ .Release.Namespace }}' diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/newrelic_license_secret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/newrelic_license_secret.yaml new file mode 100644 index 0000000000..00734c4cb3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/newrelic_license_secret.yaml @@ -0,0 +1,14 @@ +{{- $licenseKey := include "k8s-agents-operator.licenseKey" . -}} +{{- if $licenseKey }} +apiVersion: v1 +kind: Secret +metadata: + name: "newrelic-key-secret" + namespace: {{ .Release.Namespace }} + labels: + app.kubernetes.io/name: {{ include "k8s-agents-operator.chart" . 
}} + app.kubernetes.io/instance: {{ .Release.Name }} +type: Opaque +data: + new_relic_license_key: {{ $licenseKey | b64enc }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/proxy-rbac.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/proxy-rbac.yaml new file mode 100644 index 0000000000..2a9ea8667b --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/proxy-rbac.yaml @@ -0,0 +1,35 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ template "k8s-agents-operator.fullname" . }}-proxy-role + labels: + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +rules: +- apiGroups: + - authentication.k8s.io + resources: + - tokenreviews + verbs: + - create +- apiGroups: + - authorization.k8s.io + resources: + - subjectaccessreviews + verbs: + - create +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ template "k8s-agents-operator.fullname" . }}-proxy-rolebinding + namespace: {{ .Release.Namespace }} + labels: + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: '{{ template "k8s-agents-operator.fullname" . }}-proxy-role' +subjects: +- kind: ServiceAccount + name: '{{ template "k8s-agents-operator.serviceAccountName" . }}' + namespace: '{{ .Release.Namespace }}' diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/reader-rbac.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/reader-rbac.yaml new file mode 100644 index 0000000000..4aa1d446e1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/reader-rbac.yaml @@ -0,0 +1,11 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ template "k8s-agents-operator.fullname" . 
}}-metrics-reader + labels: + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +rules: +- nonResourceURLs: + - /metrics + verbs: + - get diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/service.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/service.yaml new file mode 100644 index 0000000000..cfdc9d2b12 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/service.yaml @@ -0,0 +1,15 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ template "k8s-agents-operator.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: + control-plane: controller-manager + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +spec: + type: {{ .Values.metricsService.type }} + selector: + control-plane: controller-manager + {{- include "k8s-agents-operator.labels" . | nindent 4 }} + ports: + {{- .Values.metricsService.ports | toYaml | nindent 2 -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/webhook-configuration.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/webhook-configuration.yaml new file mode 100644 index 0000000000..152ec79dc9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/webhook-configuration.yaml @@ -0,0 +1,134 @@ +{{- $tls := fromYaml (include "k8s-agents-operator.webhookCert" .) }} +{{- if .Values.admissionWebhooks.autoGenerateCert.enabled }} +apiVersion: v1 +kind: Secret +type: kubernetes.io/tls +metadata: + name: {{ template "k8s-agents-operator.certificateSecret" . }} + annotations: + "helm.sh/hook": "pre-install,pre-upgrade" + "helm.sh/hook-delete-policy": "before-hook-creation" + labels: + {{- include "k8s-agents-operator.labels" . 
| nindent 4 }} + app.kubernetes.io/component: webhook + namespace: {{ .Release.Namespace }} +data: + tls.crt: {{ $tls.clientCert }} + tls.key: {{ $tls.clientKey }} + ca.crt: {{ $tls.caCert }} +{{- end }} +--- +apiVersion: admissionregistration.k8s.io/v1 +kind: MutatingWebhookConfiguration +metadata: + name: {{ template "k8s-agents-operator.fullname" . }}-mutation + {{- if .Values.admissionWebhooks.certManager.enabled }} + annotations: + cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ template "k8s-agents-operator.fullname" . }}-serving-cert + {{- end }} + labels: + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +webhooks: +- admissionReviewVersions: + - v1 + clientConfig: + {{- if .Values.admissionWebhooks.autoGenerateCert.enabled }} + caBundle: {{ $tls.caCert }} + {{- end }} + service: + name: '{{ template "k8s-agents-operator.fullname" . }}-webhook-service' + namespace: '{{ .Release.Namespace }}' + path: /mutate-newrelic-com-v1alpha1-instrumentation + failurePolicy: Fail + name: instrumentation.kb.io + rules: + - apiGroups: + - newrelic.com + apiVersions: + - v1alpha1 + operations: + - CREATE + - UPDATE + resources: + - instrumentations + sideEffects: None +- admissionReviewVersions: + - v1 + clientConfig: + {{- if .Values.admissionWebhooks.autoGenerateCert.enabled }} + caBundle: {{ $tls.caCert }} + {{- end }} + service: + name: '{{ template "k8s-agents-operator.fullname" . }}-webhook-service' + namespace: '{{ .Release.Namespace }}' + path: /mutate-v1-pod + failurePolicy: Ignore + name: mpod.kb.io + rules: + - apiGroups: + - "" + apiVersions: + - v1 + operations: + - CREATE + - UPDATE + resources: + - pods + sideEffects: None +--- +apiVersion: admissionregistration.k8s.io/v1 +kind: ValidatingWebhookConfiguration +metadata: + name: {{ template "k8s-agents-operator.fullname" . 
}}-validation + {{- if .Values.admissionWebhooks.certManager.enabled }} + annotations: + cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ template "k8s-agents-operator.fullname" . }}-serving-cert + {{- end }} + labels: + {{- include "k8s-agents-operator.labels" . | nindent 4 }} +webhooks: +- admissionReviewVersions: + - v1 + clientConfig: + {{- if .Values.admissionWebhooks.autoGenerateCert.enabled }} + caBundle: {{ $tls.caCert }} + {{- end }} + service: + name: '{{ template "k8s-agents-operator.fullname" . }}-webhook-service' + namespace: '{{ .Release.Namespace }}' + path: /validate-newrelic-com-v1alpha1-instrumentation + failurePolicy: Fail + name: vinstrumentationcreateupdate.kb.io + rules: + - apiGroups: + - newrelic.com + apiVersions: + - v1alpha1 + operations: + - CREATE + - UPDATE + resources: + - instrumentations + sideEffects: None +- admissionReviewVersions: + - v1 + clientConfig: + {{- if .Values.admissionWebhooks.autoGenerateCert.enabled }} + caBundle: {{ $tls.caCert }} + {{- end }} + service: + name: '{{ template "k8s-agents-operator.fullname" . }}-webhook-service' + namespace: '{{ .Release.Namespace }}' + path: /validate-newrelic-com-v1alpha1-instrumentation + failurePolicy: Ignore + name: vinstrumentationdelete.kb.io + rules: + - apiGroups: + - newrelic.com + apiVersions: + - v1alpha1 + operations: + - DELETE + resources: + - instrumentations + sideEffects: None diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/webhook-service.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/webhook-service.yaml new file mode 100644 index 0000000000..e5598e8f72 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/templates/webhook-service.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ template "k8s-agents-operator.fullname" . }}-webhook-service + namespace: {{ .Release.Namespace }} + labels: + {{- include "k8s-agents-operator.labels" . 
| nindent 4 }} +spec: + type: {{ .Values.webhookService.type }} + selector: + control-plane: controller-manager + {{- include "k8s-agents-operator.labels" . | nindent 4 }} + ports: + {{- .Values.webhookService.ports | toYaml | nindent 2 -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/tests/cert_manager_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/tests/cert_manager_test.yaml new file mode 100644 index 0000000000..1de2019212 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/tests/cert_manager_test.yaml @@ -0,0 +1,85 @@ +suite: cert-manager +templates: + - templates/certmanager.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: creates cert-manager resources if cert-manager enabled and auto cert disabled + set: + licenseKey: us-whatever + admissionWebhooks: + autoGenerateCert: + enabled: false + certManager: + enabled: true + asserts: + - hasDocuments: + count: 2 + - it: creates Issuer if cert-manager enabled and auto cert disabled + set: + licenseKey: us-whatever + admissionWebhooks: + autoGenerateCert: + enabled: false + certManager: + enabled: true + asserts: + - equal: + path: kind + value: Issuer + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-selfsigned-issuer + - exists: + path: spec.selfSigned + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-selfsigned-issuer + - it: creates Certificate in default domain if cert-manager enabled and auto cert disabled + set: + licenseKey: us-whatever + admissionWebhooks: + autoGenerateCert: + enabled: false + certManager: + enabled: true + asserts: + - equal: + path: kind + value: Certificate + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-serving-cert + - equal: + path: spec.dnsNames + value: + - my-release-k8s-agents-operator-webhook-service.my-namespace.svc + - 
my-release-k8s-agents-operator-webhook-service.my-namespace.svc.cluster.local + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-serving-cert + - it: creates Certificate in custom domain if cert-manager enabled and auto cert disabled + set: + licenseKey: us-whatever + admissionWebhooks: + autoGenerateCert: + enabled: false + certManager: + enabled: true + kubernetesClusterDomain: kubey.test + asserts: + - equal: + path: kind + value: Certificate + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-serving-cert + - equal: + path: spec.dnsNames + value: + - my-release-k8s-agents-operator-webhook-service.my-namespace.svc + - my-release-k8s-agents-operator-webhook-service.my-namespace.svc.kubey.test + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-serving-cert diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/tests/webhook_ssl_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/tests/webhook_ssl_test.yaml new file mode 100644 index 0000000000..9343a43a43 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/tests/webhook_ssl_test.yaml @@ -0,0 +1,176 @@ +suite: webhook ssl +templates: + - templates/webhook-configuration.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: creates ssl certificate secret by default + set: + licenseKey: us-whatever + asserts: + - hasDocuments: + count: 3 + - containsDocument: + kind: Secret + apiVersion: v1 + name: my-release-k8s-agents-operator-controller-manager-service-cert + namespace: my-namespace + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-controller-manager-service-cert + - exists: + path: data["tls.crt"] + template: templates/webhook-configuration.yaml + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-controller-manager-service-cert + - exists: + path: data["tls.key"] + 
template: templates/webhook-configuration.yaml + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-controller-manager-service-cert + - exists: + path: data["ca.crt"] + template: templates/webhook-configuration.yaml + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-controller-manager-service-cert + - it: does not inject cert-manager annotations into MutatingWebhook by default + set: + licenseKey: us-whatever + asserts: + - notExists: + path: metadata.annotations["cert-manager.io/inject-ca-from"] + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-mutation + - it: does not inject cert-manager annotations into ValidatingWebhook by default + set: + licenseKey: us-whatever + asserts: + - notExists: + path: metadata.annotations["cert-manager.io/inject-ca-from"] + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-validation + - it: does inject caBundle into MutatingWebhook clientConfigs by default + set: + licenseKey: us-whatever + asserts: + - lengthEqual: + path: webhooks + count: 2 + - exists: + path: webhooks[0].clientConfig.caBundle + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-mutation + - exists: + path: webhooks[1].clientConfig.caBundle + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-mutation + - it: does inject caBundle into ValidatingWebhook clientConfigs by default + set: + licenseKey: us-whatever + asserts: + - lengthEqual: + path: webhooks + count: 2 + - exists: + path: webhooks[0].clientConfig.caBundle + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-mutation + - exists: + path: webhooks[1].clientConfig.caBundle + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-validation + - it: does not create ssl certificate secret if cert-manager enabled and auto cert disabled + set: + licenseKey: us-whatever + 
admissionWebhooks: + autoGenerateCert: + enabled: false + certManager: + enabled: true + asserts: + - hasDocuments: + count: 2 + - it: injects cert-manager annotations into MutatingWebhook if cert-manager enabled and auto cert disabled + set: + licenseKey: us-whatever + admissionWebhooks: + autoGenerateCert: + enabled: false + certManager: + enabled: true + asserts: + - equal: + path: metadata.annotations["cert-manager.io/inject-ca-from"] + value: my-namespace/my-release-k8s-agents-operator-serving-cert + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-mutation + - it: injects cert-manager annotations into ValidatingWebhook if cert-manager enabled and auto cert disabled + set: + licenseKey: us-whatever + admissionWebhooks: + autoGenerateCert: + enabled: false + certManager: + enabled: true + asserts: + - equal: + path: metadata.annotations["cert-manager.io/inject-ca-from"] + value: my-namespace/my-release-k8s-agents-operator-serving-cert + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-validation + - it: does not inject caBundle into MutatingWebhook clientConfigs if cert-manager enabled and auto cert disabled + set: + licenseKey: us-whatever + admissionWebhooks: + autoGenerateCert: + enabled: false + certManager: + enabled: true + asserts: + - lengthEqual: + path: webhooks + count: 2 + - notExists: + path: webhooks[0].clientConfig.caBundle + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-mutation + - notExists: + path: webhooks[1].clientConfig.caBundle + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-mutation + - it: does not inject caBundle into ValidatingWebhook clientConfigs if cert-manager enabled and auto cert disabled + set: + licenseKey: us-whatever + admissionWebhooks: + autoGenerateCert: + enabled: false + certManager: + enabled: true + asserts: + - lengthEqual: + path: webhooks + count: 2 + - notExists: + path: 
webhooks[0].clientConfig.caBundle + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-mutation + - notExists: + path: webhooks[1].clientConfig.caBundle + documentSelector: + path: metadata.name + value: my-release-k8s-agents-operator-validation diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/values.yaml new file mode 100644 index 0000000000..f28979778a --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/k8s-agents-operator/values.yaml @@ -0,0 +1,93 @@ +# Default values for k8s-agents-operator. +# This is a YAML-formatted file. +# Declare variables to be passed into your templates. + +# -- Sets the license key to use. Can also be configured with `global.licenseKey` +licenseKey: "" + +controllerManager: + replicas: 1 + + kubeRbacProxy: + image: + repository: gcr.io/kubebuilder/kube-rbac-proxy + tag: v0.14.0 + resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 5m + memory: 64Mi + + manager: + image: + repository: newrelic/k8s-agents-operator + tag: + pullPolicy: + resources: + requests: + cpu: 100m + memory: 64Mi + serviceAccount: + create: true + # -- Source: https://docs.openshift.com/container-platform/4.10/operators/operator_sdk/osdk-leader-election.html + # -- Enable leader election mechanism for protecting against split brain if multiple operator pods/replicas are started + leaderElection: + enabled: true + +kubernetesClusterDomain: cluster.local + +metricsService: + ports: + - name: https + port: 8443 + protocol: TCP + targetPort: https + type: ClusterIP + +webhookService: + ports: + - port: 443 + protocol: TCP + targetPort: 9443 + type: ClusterIP + +# -- Source: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/ +# -- SecurityContext holds pod-level security attributes and common container settings +securityContext: + runAsGroup: 65532 + runAsNonRoot: true + runAsUser: 65532 
+ fsGroup: 65532 + +# -- Admission webhooks make sure only requests with correctly formatted rules will get into the Operator +admissionWebhooks: + create: true + + ## TLS Certificate Option 1: Use Helm to automatically generate self-signed certificate. + ## certManager must be disabled and autoGenerateCert must be enabled. + autoGenerateCert: + # -- If true and certManager.enabled is false, Helm will automatically create a self-signed cert and secret for you. + enabled: true + # -- If set to true, new webhook key/certificate is generated on helm upgrade. + recreate: true + # -- Cert validity period time in days. + certPeriodDays: 365 + + ## TLS Certificate Option 2: Use certManager to generate self-signed certificate. + certManager: + # -- If true and autoGenerateCert.enabled is false, cert-manager will create a self-signed cert and secret for you. + enabled: false + + ## TLS Certificate Option 3: Use your own self-signed certificate. + ## certManager and autoGenerateCert must be disabled and certFile, keyFile, and caFile must be set. + ## The chart reads the contents of the file paths with the helm .Files.Get function. + ## Refer to this doc https://helm.sh/docs/chart_template_guide/accessing_files/ to understand + ## limitations of file paths accessible to the chart. + # -- Path to your own PEM-encoded certificate. + certFile: "" + # -- Path to your own PEM-encoded private key. + keyFile: "" + # -- Path to the CA cert. + caFile: "" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/.helmignore new file mode 100644 index 0000000000..f0c1319444 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/.helmignore @@ -0,0 +1,21 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/Chart.yaml new file mode 100644 index 0000000000..a86cd07e97 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/Chart.yaml @@ -0,0 +1,26 @@ +annotations: + artifacthub.io/license: Apache-2.0 + artifacthub.io/links: | + - name: Chart Source + url: https://github.com/prometheus-community/helm-charts +apiVersion: v2 +appVersion: 2.10.0 +description: Install kube-state-metrics to generate and expose cluster-level metrics +home: https://github.com/kubernetes/kube-state-metrics/ +keywords: +- metric +- monitoring +- prometheus +- kubernetes +maintainers: +- email: tariq.ibrahim@mulesoft.com + name: tariq1890 +- email: manuel@rueg.eu + name: mrueg +- email: david@0xdc.me + name: dotdc +name: kube-state-metrics +sources: +- https://github.com/kubernetes/kube-state-metrics/ +type: application +version: 5.12.1 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/README.md new file mode 100644 index 0000000000..843be89e69 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/README.md @@ -0,0 +1,85 @@ +# kube-state-metrics Helm Chart + +Installs the [kube-state-metrics agent](https://github.com/kubernetes/kube-state-metrics). 
+ +## Get Repository Info + +```console +helm repo add prometheus-community https://prometheus-community.github.io/helm-charts +helm repo update +``` + +_See [helm repo](https://helm.sh/docs/helm/helm_repo/) for command documentation._ + + +## Install Chart + +```console +helm install [RELEASE_NAME] prometheus-community/kube-state-metrics [flags] +``` + +_See [configuration](#configuration) below._ + +_See [helm install](https://helm.sh/docs/helm/helm_install/) for command documentation._ + +## Uninstall Chart + +```console +helm uninstall [RELEASE_NAME] +``` + +This removes all the Kubernetes components associated with the chart and deletes the release. + +_See [helm uninstall](https://helm.sh/docs/helm/helm_uninstall/) for command documentation._ + +## Upgrading Chart + +```console +helm upgrade [RELEASE_NAME] prometheus-community/kube-state-metrics [flags] +``` + +_See [helm upgrade](https://helm.sh/docs/helm/helm_upgrade/) for command documentation._ + +### Migrating from stable/kube-state-metrics and kubernetes/kube-state-metrics + +You can upgrade in-place: + +1. [get repository info](#get-repository-info) +1. [upgrade](#upgrading-chart) your existing release name using the new chart repository + +## Upgrading to v3.0.0 + +v3.0.0 includes kube-state-metrics v2.0, see the [changelog](https://github.com/kubernetes/kube-state-metrics/blob/release-2.0/CHANGELOG.md) for major changes on the application-side. + +The upgraded chart now has the following changes: + +* Dropped support for helm v2 (helm v3 or later is required) +* collectors key was renamed to resources +* namespace key was renamed to namespaces + +## Configuration + +See [Customizing the Chart Before Installing](https://helm.sh/docs/intro/using_helm/#customizing-the-chart-before-installing). 
To see all configurable options with detailed comments: + +```console +helm show values prometheus-community/kube-state-metrics +``` + +### kube-rbac-proxy + +You can enable `kube-state-metrics` endpoint protection using `kube-rbac-proxy`. By setting `kubeRBACProxy.enabled: true`, this chart will deploy one RBAC proxy container per endpoint (metrics & telemetry). +To authorize access, authenticate your requests (via a `ServiceAccount` for example) with a `ClusterRole` attached such as: + +```yaml +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: kube-state-metrics-read +rules: + - apiGroups: [ "" ] + resources: ["services/kube-state-metrics"] + verbs: + - get +``` + +See [kube-rbac-proxy examples](https://github.com/brancz/kube-rbac-proxy/tree/master/examples/resource-attributes) for more details. diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/NOTES.txt b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/NOTES.txt new file mode 100644 index 0000000000..3589c24ec3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/NOTES.txt @@ -0,0 +1,23 @@ +kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects. +The exposed metrics can be found here: +https://github.com/kubernetes/kube-state-metrics/blob/master/docs/README.md#exposed-metrics + +The metrics are exported on the HTTP endpoint /metrics on the listening port. +In your case, {{ template "kube-state-metrics.fullname" . }}.{{ template "kube-state-metrics.namespace" . }}.svc.cluster.local:{{ .Values.service.port }}/metrics + +They are served either as plaintext or protobuf depending on the Accept header. +They are designed to be consumed either by Prometheus itself or by a scraper that is compatible with scraping a Prometheus client endpoint. 
+ +{{- if .Values.kubeRBACProxy.enabled}} + +kube-rbac-proxy endpoint protection is enabled: +- Metrics endpoints are now HTTPS +- Ensure that the client authenticates the requests (e.g. via service account) with the following role permissions: +``` +rules: + - apiGroups: [ "" ] + resources: ["services/{{ template "kube-state-metrics.fullname" . }}"] + verbs: + - get +``` +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/_helpers.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/_helpers.tpl new file mode 100644 index 0000000000..a4358c87a1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/_helpers.tpl @@ -0,0 +1,156 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "kube-state-metrics.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name. +*/}} +{{- define "kube-state-metrics.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create the name of the service account to use +*/}} +{{- define "kube-state-metrics.serviceAccountName" -}} +{{- if .Values.serviceAccount.create -}} + {{ default (include "kube-state-metrics.fullname" .) 
.Values.serviceAccount.name }} +{{- else -}} + {{ default "default" .Values.serviceAccount.name }} +{{- end -}} +{{- end -}} + +{{/* +Allow the release namespace to be overridden for multi-namespace deployments in combined charts +*/}} +{{- define "kube-state-metrics.namespace" -}} + {{- if .Values.namespaceOverride -}} + {{- .Values.namespaceOverride -}} + {{- else -}} + {{- .Release.Namespace -}} + {{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "kube-state-metrics.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Generate basic labels +*/}} +{{- define "kube-state-metrics.labels" }} +helm.sh/chart: {{ template "kube-state-metrics.chart" . }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +app.kubernetes.io/component: metrics +app.kubernetes.io/part-of: {{ template "kube-state-metrics.name" . }} +{{- include "kube-state-metrics.selectorLabels" . }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +{{- if .Values.customLabels }} +{{ toYaml .Values.customLabels }} +{{- end }} +{{- if .Values.releaseLabel }} +release: {{ .Release.Name }} +{{- end }} +{{- end }} + +{{/* +Selector labels +*/}} +{{- define "kube-state-metrics.selectorLabels" }} +{{- if .Values.selectorOverride }} +{{ toYaml .Values.selectorOverride }} +{{- else }} +app.kubernetes.io/name: {{ include "kube-state-metrics.name" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- end }} +{{- end }} + +{{/* Sets default scrape limits for servicemonitor */}} +{{- define "servicemonitor.scrapeLimits" -}} +{{- with .sampleLimit }} +sampleLimit: {{ . }} +{{- end }} +{{- with .targetLimit }} +targetLimit: {{ . }} +{{- end }} +{{- with .labelLimit }} +labelLimit: {{ . }} +{{- end }} +{{- with .labelNameLengthLimit }} +labelNameLengthLimit: {{ . 
}} +{{- end }} +{{- with .labelValueLengthLimit }} +labelValueLengthLimit: {{ . }} +{{- end }} +{{- end -}} + +{{/* +Formats imagePullSecrets. Input is (dict "Values" .Values "imagePullSecrets" .{specific imagePullSecrets}) +*/}} +{{- define "kube-state-metrics.imagePullSecrets" -}} +{{- range (concat .Values.global.imagePullSecrets .imagePullSecrets) }} + {{- if eq (typeOf .) "map[string]interface {}" }} +- {{ toYaml . | trim }} + {{- else }} +- name: {{ . }} + {{- end }} +{{- end }} +{{- end -}} + +{{/* +The image to use for kube-state-metrics +*/}} +{{- define "kube-state-metrics.image" -}} +{{- if .Values.image.sha }} +{{- if .Values.global.imageRegistry }} +{{- printf "%s/%s:%s@%s" .Values.global.imageRegistry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) .Values.image.sha }} +{{- else }} +{{- printf "%s/%s:%s@%s" .Values.image.registry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) .Values.image.sha }} +{{- end }} +{{- else }} +{{- if .Values.global.imageRegistry }} +{{- printf "%s/%s:%s" .Values.global.imageRegistry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) }} +{{- else }} +{{- printf "%s/%s:%s" .Values.image.registry .Values.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.image.tag) }} +{{- end }} +{{- end }} +{{- end }} + +{{/* +The image to use for kubeRBACProxy +*/}} +{{- define "kubeRBACProxy.image" -}} +{{- if .Values.kubeRBACProxy.image.sha }} +{{- if .Values.global.imageRegistry }} +{{- printf "%s/%s:%s@%s" .Values.global.imageRegistry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) .Values.kubeRBACProxy.image.sha }} +{{- else }} +{{- printf "%s/%s:%s@%s" .Values.kubeRBACProxy.image.registry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) .Values.kubeRBACProxy.image.sha }} +{{- end 
}} +{{- else }} +{{- if .Values.global.imageRegistry }} +{{- printf "%s/%s:%s" .Values.global.imageRegistry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) }} +{{- else }} +{{- printf "%s/%s:%s" .Values.kubeRBACProxy.image.registry .Values.kubeRBACProxy.image.repository (default (printf "v%s" .Chart.AppVersion) .Values.kubeRBACProxy.image.tag) }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/ciliumnetworkpolicy.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/ciliumnetworkpolicy.yaml new file mode 100644 index 0000000000..025cd47a88 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/ciliumnetworkpolicy.yaml @@ -0,0 +1,33 @@ +{{- if and .Values.networkPolicy.enabled (eq .Values.networkPolicy.flavor "cilium") }} +apiVersion: cilium.io/v2 +kind: CiliumNetworkPolicy +metadata: + {{- if .Values.annotations }} + annotations: + {{ toYaml .Values.annotations | nindent 4 }} + {{- end }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} +spec: + endpointSelector: + matchLabels: + {{- include "kube-state-metrics.selectorLabels" . 
| indent 6 }} + egress: + {{- if and .Values.networkPolicy.cilium .Values.networkPolicy.cilium.kubeApiServerSelector }} + {{ toYaml .Values.networkPolicy.cilium.kubeApiServerSelector | nindent 6 }} + {{- else }} + - toEntities: + - kube-apiserver + {{- end }} + ingress: + - toPorts: + - ports: + - port: {{ .Values.service.port | quote }} + protocol: TCP + {{- if .Values.selfMonitor.enabled }} + - port: {{ .Values.selfMonitor.telemetryPort | default 8081 | quote }} + protocol: TCP + {{ end }} +{{ end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/clusterrolebinding.yaml new file mode 100644 index 0000000000..cf9f628d04 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/clusterrolebinding.yaml @@ -0,0 +1,20 @@ +{{- if and .Values.rbac.create .Values.rbac.useClusterRole -}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: {{ template "kube-state-metrics.fullname" . }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole +{{- if .Values.rbac.useExistingRole }} + name: {{ .Values.rbac.useExistingRole }} +{{- else }} + name: {{ template "kube-state-metrics.fullname" . }} +{{- end }} +subjects: +- kind: ServiceAccount + name: {{ template "kube-state-metrics.serviceAccountName" . }} + namespace: {{ template "kube-state-metrics.namespace" . 
}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/crs-configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/crs-configmap.yaml new file mode 100644 index 0000000000..d38a75a51d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/crs-configmap.yaml @@ -0,0 +1,16 @@ +{{- if .Values.customResourceState.enabled}} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "kube-state-metrics.fullname" . }}-customresourcestate-config + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + {{- if .Values.annotations }} + annotations: + {{ toYaml .Values.annotations | nindent 4 }} + {{- end }} +data: + config.yaml: | + {{- toYaml .Values.customResourceState.config | nindent 4 }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/deployment.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/deployment.yaml new file mode 100644 index 0000000000..31aa610181 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/deployment.yaml @@ -0,0 +1,279 @@ +apiVersion: apps/v1 +{{- if .Values.autosharding.enabled }} +kind: StatefulSet +{{- else }} +kind: Deployment +{{- end }} +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + {{- if .Values.annotations }} + annotations: +{{ toYaml .Values.annotations | indent 4 }} + {{- end }} +spec: + selector: + matchLabels: + {{- include "kube-state-metrics.selectorLabels" . | indent 6 }} + replicas: {{ .Values.replicas }} + revisionHistoryLimit: {{ .Values.revisionHistoryLimit }} + {{- if .Values.autosharding.enabled }} + serviceName: {{ template "kube-state-metrics.fullname" . 
}} + volumeClaimTemplates: [] + {{- end }} + template: + metadata: + labels: + {{- include "kube-state-metrics.labels" . | indent 8 }} + {{- if .Values.podAnnotations }} + annotations: +{{ toYaml .Values.podAnnotations | indent 8 }} + {{- end }} + spec: + hostNetwork: {{ .Values.hostNetwork }} + serviceAccountName: {{ template "kube-state-metrics.serviceAccountName" . }} + {{- if .Values.securityContext.enabled }} + securityContext: {{- omit .Values.securityContext "enabled" | toYaml | nindent 8 }} + {{- end }} + {{- if .Values.priorityClassName }} + priorityClassName: {{ .Values.priorityClassName }} + {{- end }} + containers: + {{- $httpPort := ternary 9090 (.Values.service.port | default 8080) .Values.kubeRBACProxy.enabled}} + {{- $telemetryPort := ternary 9091 (.Values.selfMonitor.telemetryPort | default 8081) .Values.kubeRBACProxy.enabled}} + - name: {{ template "kube-state-metrics.name" . }} + {{- if .Values.autosharding.enabled }} + env: + - name: POD_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: POD_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + {{- end }} + args: + {{- if .Values.extraArgs }} + {{- .Values.extraArgs | toYaml | nindent 8 }} + {{- end }} + - --port={{ $httpPort }} + {{- if .Values.collectors }} + - --resources={{ .Values.collectors | join "," }} + {{- end }} + {{- if .Values.metricLabelsAllowlist }} + - --metric-labels-allowlist={{ .Values.metricLabelsAllowlist | join "," }} + {{- end }} + {{- if .Values.metricAnnotationsAllowList }} + - --metric-annotations-allowlist={{ .Values.metricAnnotationsAllowList | join "," }} + {{- end }} + {{- if .Values.metricAllowlist }} + - --metric-allowlist={{ .Values.metricAllowlist | join "," }} + {{- end }} + {{- if .Values.metricDenylist }} + - --metric-denylist={{ .Values.metricDenylist | join "," }} + {{- end }} + {{- $namespaces := list }} + {{- if .Values.namespaces }} + {{- range $ns := join "," .Values.namespaces | split "," }} + {{- $namespaces = append 
$namespaces (tpl $ns $) }} + {{- end }} + {{- end }} + {{- if .Values.releaseNamespace }} + {{- $namespaces = append $namespaces ( include "kube-state-metrics.namespace" . ) }} + {{- end }} + {{- if $namespaces }} + - --namespaces={{ $namespaces | mustUniq | join "," }} + {{- end }} + {{- if .Values.namespacesDenylist }} + - --namespaces-denylist={{ tpl (.Values.namespacesDenylist | join ",") $ }} + {{- end }} + {{- if .Values.autosharding.enabled }} + - --pod=$(POD_NAME) + - --pod-namespace=$(POD_NAMESPACE) + {{- end }} + {{- if .Values.kubeconfig.enabled }} + - --kubeconfig=/opt/k8s/.kube/config + {{- end }} + {{- if .Values.kubeRBACProxy.enabled }} + - --telemetry-host=127.0.0.1 + - --telemetry-port={{ $telemetryPort }} + {{- else }} + {{- if .Values.selfMonitor.telemetryHost }} + - --telemetry-host={{ .Values.selfMonitor.telemetryHost }} + {{- end }} + {{- if .Values.selfMonitor.telemetryPort }} + - --telemetry-port={{ $telemetryPort }} + {{- end }} + {{- if .Values.customResourceState.enabled }} + - --custom-resource-state-config-file=/etc/customresourcestate/config.yaml + {{- end }} + {{- end }} + {{- if or (.Values.kubeconfig.enabled) (.Values.customResourceState.enabled) (.Values.volumeMounts) }} + volumeMounts: + {{- if .Values.kubeconfig.enabled }} + - name: kubeconfig + mountPath: /opt/k8s/.kube/ + readOnly: true + {{- end }} + {{- if .Values.customResourceState.enabled }} + - name: customresourcestate-config + mountPath: /etc/customresourcestate + readOnly: true + {{- end }} + {{- if .Values.volumeMounts }} +{{ toYaml .Values.volumeMounts | indent 8 }} + {{- end }} + {{- end }} + imagePullPolicy: {{ .Values.image.pullPolicy }} + image: {{ include "kube-state-metrics.image" . 
}} + {{- if eq .Values.kubeRBACProxy.enabled false }} + ports: + - containerPort: {{ .Values.service.port | default 8080}} + name: "http" + {{- if .Values.selfMonitor.enabled }} + - containerPort: {{ $telemetryPort }} + name: "metrics" + {{- end }} + {{- end }} + livenessProbe: + httpGet: + path: /healthz + port: {{ $httpPort }} + initialDelaySeconds: 5 + timeoutSeconds: 5 + readinessProbe: + httpGet: + path: / + port: {{ $httpPort }} + initialDelaySeconds: 5 + timeoutSeconds: 5 + {{- if .Values.resources }} + resources: +{{ toYaml .Values.resources | indent 10 }} +{{- end }} +{{- if .Values.containerSecurityContext }} + securityContext: +{{ toYaml .Values.containerSecurityContext | indent 10 }} +{{- end }} + {{- if .Values.kubeRBACProxy.enabled }} + - name: kube-rbac-proxy-http + args: + {{- if .Values.kubeRBACProxy.extraArgs }} + {{- .Values.kubeRBACProxy.extraArgs | toYaml | nindent 8 }} + {{- end }} + - --secure-listen-address=:{{ .Values.service.port | default 8080}} + - --upstream=http://127.0.0.1:{{ $httpPort }}/ + - --proxy-endpoints-port=8888 + - --config-file=/etc/kube-rbac-proxy-config/config-file.yaml + volumeMounts: + - name: kube-rbac-proxy-config + mountPath: /etc/kube-rbac-proxy-config + {{- with .Values.kubeRBACProxy.volumeMounts }} + {{- toYaml . | nindent 10 }} + {{- end }} + imagePullPolicy: {{ .Values.kubeRBACProxy.image.pullPolicy }} + image: {{ include "kubeRBACProxy.image" . 
}} + ports: + - containerPort: {{ .Values.service.port | default 8080}} + name: "http" + - containerPort: 8888 + name: "http-healthz" + readinessProbe: + httpGet: + scheme: HTTPS + port: 8888 + path: healthz + initialDelaySeconds: 5 + timeoutSeconds: 5 + {{- if .Values.kubeRBACProxy.resources }} + resources: +{{ toYaml .Values.kubeRBACProxy.resources | indent 10 }} +{{- end }} +{{- if .Values.kubeRBACProxy.containerSecurityContext }} + securityContext: +{{ toYaml .Values.kubeRBACProxy.containerSecurityContext | indent 10 }} +{{- end }} + {{- if .Values.selfMonitor.enabled }} + - name: kube-rbac-proxy-telemetry + args: + {{- if .Values.kubeRBACProxy.extraArgs }} + {{- .Values.kubeRBACProxy.extraArgs | toYaml | nindent 8 }} + {{- end }} + - --secure-listen-address=:{{ .Values.selfMonitor.telemetryPort | default 8081 }} + - --upstream=http://127.0.0.1:{{ $telemetryPort }}/ + - --proxy-endpoints-port=8889 + - --config-file=/etc/kube-rbac-proxy-config/config-file.yaml + volumeMounts: + - name: kube-rbac-proxy-config + mountPath: /etc/kube-rbac-proxy-config + {{- with .Values.kubeRBACProxy.volumeMounts }} + {{- toYaml . | nindent 10 }} + {{- end }} + imagePullPolicy: {{ .Values.kubeRBACProxy.image.pullPolicy }} + image: {{ include "kubeRBACProxy.image" . 
}} + ports: + - containerPort: {{ .Values.selfMonitor.telemetryPort | default 8081 }} + name: "metrics" + - containerPort: 8889 + name: "metrics-healthz" + readinessProbe: + httpGet: + scheme: HTTPS + port: 8889 + path: healthz + initialDelaySeconds: 5 + timeoutSeconds: 5 + {{- if .Values.kubeRBACProxy.resources }} + resources: +{{ toYaml .Values.kubeRBACProxy.resources | indent 10 }} +{{- end }} +{{- if .Values.kubeRBACProxy.containerSecurityContext }} + securityContext: +{{ toYaml .Values.kubeRBACProxy.containerSecurityContext | indent 10 }} +{{- end }} + {{- end }} + {{- end }} +{{- if or .Values.imagePullSecrets .Values.global.imagePullSecrets }} + imagePullSecrets: + {{- include "kube-state-metrics.imagePullSecrets" (dict "Values" .Values "imagePullSecrets" .Values.imagePullSecrets) | indent 8 }} + {{- end }} + {{- if .Values.affinity }} + affinity: +{{ toYaml .Values.affinity | indent 8 }} + {{- end }} + {{- if .Values.nodeSelector }} + nodeSelector: +{{ toYaml .Values.nodeSelector | indent 8 }} + {{- end }} + {{- if .Values.tolerations }} + tolerations: +{{ toYaml .Values.tolerations | indent 8 }} + {{- end }} + {{- if .Values.topologySpreadConstraints }} + topologySpreadConstraints: +{{ toYaml .Values.topologySpreadConstraints | indent 8 }} + {{- end }} + {{- if or (.Values.kubeconfig.enabled) (.Values.customResourceState.enabled) (.Values.volumes) (.Values.kubeRBACProxy.enabled) }} + volumes: + {{- if .Values.kubeconfig.enabled}} + - name: kubeconfig + secret: + secretName: {{ template "kube-state-metrics.fullname" . }}-kubeconfig + {{- end }} + {{- if .Values.kubeRBACProxy.enabled}} + - name: kube-rbac-proxy-config + configMap: + name: {{ template "kube-state-metrics.fullname" . }}-rbac-config + {{- end }} + {{- if .Values.customResourceState.enabled}} + - name: customresourcestate-config + configMap: + name: {{ template "kube-state-metrics.fullname" . 
}}-customresourcestate-config + {{- end }} + {{- if .Values.volumes }} +{{ toYaml .Values.volumes | indent 8 }} + {{- end }} + {{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/extra-manifests.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/extra-manifests.yaml new file mode 100644 index 0000000000..567f7bf329 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/extra-manifests.yaml @@ -0,0 +1,4 @@ +{{ range .Values.extraManifests }} +--- +{{ tpl (toYaml .) $ }} +{{ end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/kubeconfig-secret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/kubeconfig-secret.yaml new file mode 100644 index 0000000000..6af0084502 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/kubeconfig-secret.yaml @@ -0,0 +1,12 @@ +{{- if .Values.kubeconfig.enabled -}} +apiVersion: v1 +kind: Secret +metadata: + name: {{ template "kube-state-metrics.fullname" . }}-kubeconfig + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . 
| indent 4 }} +type: Opaque +data: + config: '{{ .Values.kubeconfig.secret }}' +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/networkpolicy.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/networkpolicy.yaml new file mode 100644 index 0000000000..309b38ec54 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/networkpolicy.yaml @@ -0,0 +1,43 @@ +{{- if and .Values.networkPolicy.enabled (eq .Values.networkPolicy.flavor "kubernetes") }} +kind: NetworkPolicy +apiVersion: networking.k8s.io/v1 +metadata: + {{- if .Values.annotations }} + annotations: + {{ toYaml .Values.annotations | nindent 4 }} + {{- end }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} +spec: + {{- if .Values.networkPolicy.egress }} + ## Deny all egress by default + egress: + {{- toYaml .Values.networkPolicy.egress | nindent 4 }} + {{- end }} + ingress: + {{- if .Values.networkPolicy.ingress }} + {{- toYaml .Values.networkPolicy.ingress | nindent 4 }} + {{- else }} + ## Allow ingress on default ports by default + - ports: + - port: {{ .Values.service.port | default 8080 }} + protocol: TCP + {{- if .Values.selfMonitor.enabled }} + {{- $telemetryPort := ternary 9091 (.Values.selfMonitor.telemetryPort | default 8081) .Values.kubeRBACProxy.enabled}} + - port: {{ $telemetryPort }} + protocol: TCP + {{- end }} + {{- end }} + podSelector: + {{- if .Values.networkPolicy.podSelector }} + {{- toYaml .Values.networkPolicy.podSelector | nindent 4 }} + {{- else }} + matchLabels: + {{- include "kube-state-metrics.selectorLabels" . 
| indent 6 }} + {{- end }} + policyTypes: + - Ingress + - Egress +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/pdb.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/pdb.yaml new file mode 100644 index 0000000000..3771b511de --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/pdb.yaml @@ -0,0 +1,18 @@ +{{- if .Values.podDisruptionBudget -}} +{{ if $.Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" -}} +apiVersion: policy/v1 +{{- else -}} +apiVersion: policy/v1beta1 +{{- end }} +kind: PodDisruptionBudget +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} +spec: + selector: + matchLabels: + app.kubernetes.io/name: {{ template "kube-state-metrics.name" . }} +{{ toYaml .Values.podDisruptionBudget | indent 2 }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/podsecuritypolicy.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/podsecuritypolicy.yaml new file mode 100644 index 0000000000..8905e113e8 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/podsecuritypolicy.yaml @@ -0,0 +1,39 @@ +{{- if and .Values.podSecurityPolicy.enabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + labels: + {{- include "kube-state-metrics.labels" . 
| indent 4 }} +{{- if .Values.podSecurityPolicy.annotations }} + annotations: +{{ toYaml .Values.podSecurityPolicy.annotations | indent 4 }} +{{- end }} +spec: + privileged: false + volumes: + - 'secret' +{{- if .Values.podSecurityPolicy.additionalVolumes }} +{{ toYaml .Values.podSecurityPolicy.additionalVolumes | indent 4 }} +{{- end }} + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + rule: 'MustRunAsNonRoot' + seLinux: + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 1 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/psp-clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/psp-clusterrole.yaml new file mode 100644 index 0000000000..654e4a3d57 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/psp-clusterrole.yaml @@ -0,0 +1,19 @@ +{{- if and .Values.podSecurityPolicy.enabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: psp-{{ template "kube-state-metrics.fullname" . }} +rules: +{{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }} +{{- if semverCompare "> 1.15.0-0" $kubeTargetVersion }} +- apiGroups: ['policy'] +{{- else }} +- apiGroups: ['extensions'] +{{- end }} + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - {{ template "kube-state-metrics.fullname" . 
}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/psp-clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/psp-clusterrolebinding.yaml new file mode 100644 index 0000000000..5b62a18bdf --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/psp-clusterrolebinding.yaml @@ -0,0 +1,16 @@ +{{- if and .Values.podSecurityPolicy.enabled (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: psp-{{ template "kube-state-metrics.fullname" . }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: psp-{{ template "kube-state-metrics.fullname" . }} +subjects: + - kind: ServiceAccount + name: {{ template "kube-state-metrics.serviceAccountName" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/rbac-configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/rbac-configmap.yaml new file mode 100644 index 0000000000..671dc9d660 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/rbac-configmap.yaml @@ -0,0 +1,22 @@ +{{- if .Values.kubeRBACProxy.enabled}} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ template "kube-state-metrics.fullname" . }}-rbac-config + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + {{- if .Values.annotations }} + annotations: + {{ toYaml .Values.annotations | nindent 4 }} + {{- end }} +data: + config-file.yaml: |+ + authorization: + resourceAttributes: + namespace: {{ template "kube-state-metrics.namespace" . 
}} + apiVersion: v1 + resource: services + subresource: {{ template "kube-state-metrics.fullname" . }} + name: {{ template "kube-state-metrics.fullname" . }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/role.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/role.yaml new file mode 100644 index 0000000000..d33687f2d1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/role.yaml @@ -0,0 +1,212 @@ +{{- if and (eq .Values.rbac.create true) (not .Values.rbac.useExistingRole) -}} +{{- range (ternary (join "," .Values.namespaces | split "," ) (list "") (eq $.Values.rbac.useClusterRole false)) }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +{{- if eq $.Values.rbac.useClusterRole false }} +kind: Role +{{- else }} +kind: ClusterRole +{{- end }} +metadata: + labels: + {{- include "kube-state-metrics.labels" $ | indent 4 }} + name: {{ template "kube-state-metrics.fullname" $ }} +{{- if eq $.Values.rbac.useClusterRole false }} + namespace: {{ . 
}} +{{- end }} +rules: +{{ if has "certificatesigningrequests" $.Values.collectors }} +- apiGroups: ["certificates.k8s.io"] + resources: + - certificatesigningrequests + verbs: ["list", "watch"] +{{ end -}} +{{ if has "configmaps" $.Values.collectors }} +- apiGroups: [""] + resources: + - configmaps + verbs: ["list", "watch"] +{{ end -}} +{{ if has "cronjobs" $.Values.collectors }} +- apiGroups: ["batch"] + resources: + - cronjobs + verbs: ["list", "watch"] +{{ end -}} +{{ if has "daemonsets" $.Values.collectors }} +- apiGroups: ["extensions", "apps"] + resources: + - daemonsets + verbs: ["list", "watch"] +{{ end -}} +{{ if has "deployments" $.Values.collectors }} +- apiGroups: ["extensions", "apps"] + resources: + - deployments + verbs: ["list", "watch"] +{{ end -}} +{{ if has "endpoints" $.Values.collectors }} +- apiGroups: [""] + resources: + - endpoints + verbs: ["list", "watch"] +{{ end -}} +{{ if has "endpointslices" $.Values.collectors }} +- apiGroups: ["discovery.k8s.io"] + resources: + - endpointslices + verbs: ["list", "watch"] +{{ end -}} +{{ if has "horizontalpodautoscalers" $.Values.collectors }} +- apiGroups: ["autoscaling"] + resources: + - horizontalpodautoscalers + verbs: ["list", "watch"] +{{ end -}} +{{ if has "ingresses" $.Values.collectors }} +- apiGroups: ["extensions", "networking.k8s.io"] + resources: + - ingresses + verbs: ["list", "watch"] +{{ end -}} +{{ if has "jobs" $.Values.collectors }} +- apiGroups: ["batch"] + resources: + - jobs + verbs: ["list", "watch"] +{{ end -}} +{{ if has "leases" $.Values.collectors }} +- apiGroups: ["coordination.k8s.io"] + resources: + - leases + verbs: ["list", "watch"] +{{ end -}} +{{ if has "limitranges" $.Values.collectors }} +- apiGroups: [""] + resources: + - limitranges + verbs: ["list", "watch"] +{{ end -}} +{{ if has "mutatingwebhookconfigurations" $.Values.collectors }} +- apiGroups: ["admissionregistration.k8s.io"] + resources: + - mutatingwebhookconfigurations + verbs: ["list", "watch"] +{{ end 
-}} +{{ if has "namespaces" $.Values.collectors }} +- apiGroups: [""] + resources: + - namespaces + verbs: ["list", "watch"] +{{ end -}} +{{ if has "networkpolicies" $.Values.collectors }} +- apiGroups: ["networking.k8s.io"] + resources: + - networkpolicies + verbs: ["list", "watch"] +{{ end -}} +{{ if has "nodes" $.Values.collectors }} +- apiGroups: [""] + resources: + - nodes + verbs: ["list", "watch"] +{{ end -}} +{{ if has "persistentvolumeclaims" $.Values.collectors }} +- apiGroups: [""] + resources: + - persistentvolumeclaims + verbs: ["list", "watch"] +{{ end -}} +{{ if has "persistentvolumes" $.Values.collectors }} +- apiGroups: [""] + resources: + - persistentvolumes + verbs: ["list", "watch"] +{{ end -}} +{{ if has "poddisruptionbudgets" $.Values.collectors }} +- apiGroups: ["policy"] + resources: + - poddisruptionbudgets + verbs: ["list", "watch"] +{{ end -}} +{{ if has "pods" $.Values.collectors }} +- apiGroups: [""] + resources: + - pods + verbs: ["list", "watch"] +{{ end -}} +{{ if has "replicasets" $.Values.collectors }} +- apiGroups: ["extensions", "apps"] + resources: + - replicasets + verbs: ["list", "watch"] +{{ end -}} +{{ if has "replicationcontrollers" $.Values.collectors }} +- apiGroups: [""] + resources: + - replicationcontrollers + verbs: ["list", "watch"] +{{ end -}} +{{ if has "resourcequotas" $.Values.collectors }} +- apiGroups: [""] + resources: + - resourcequotas + verbs: ["list", "watch"] +{{ end -}} +{{ if has "secrets" $.Values.collectors }} +- apiGroups: [""] + resources: + - secrets + verbs: ["list", "watch"] +{{ end -}} +{{ if has "services" $.Values.collectors }} +- apiGroups: [""] + resources: + - services + verbs: ["list", "watch"] +{{ end -}} +{{ if has "statefulsets" $.Values.collectors }} +- apiGroups: ["apps"] + resources: + - statefulsets + verbs: ["list", "watch"] +{{ end -}} +{{ if has "storageclasses" $.Values.collectors }} +- apiGroups: ["storage.k8s.io"] + resources: + - storageclasses + verbs: ["list", "watch"] +{{ 
end -}} +{{ if has "validatingwebhookconfigurations" $.Values.collectors }} +- apiGroups: ["admissionregistration.k8s.io"] + resources: + - validatingwebhookconfigurations + verbs: ["list", "watch"] +{{ end -}} +{{ if has "volumeattachments" $.Values.collectors }} +- apiGroups: ["storage.k8s.io"] + resources: + - volumeattachments + verbs: ["list", "watch"] +{{ end -}} +{{- if $.Values.kubeRBACProxy.enabled }} +- apiGroups: ["authentication.k8s.io"] + resources: + - tokenreviews + verbs: ["create"] +- apiGroups: ["authorization.k8s.io"] + resources: + - subjectaccessreviews + verbs: ["create"] +{{- end }} +{{- if $.Values.customResourceState.enabled }} +- apiGroups: ["apiextensions.k8s.io"] + resources: + - customresourcedefinitions + verbs: ["list", "watch"] +{{- end }} +{{ if $.Values.rbac.extraRules }} +{{ toYaml $.Values.rbac.extraRules }} +{{ end }} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/rolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/rolebinding.yaml new file mode 100644 index 0000000000..330651b73f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/rolebinding.yaml @@ -0,0 +1,24 @@ +{{- if and (eq .Values.rbac.create true) (eq .Values.rbac.useClusterRole false) -}} +{{- range (join "," $.Values.namespaces) | split "," }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + labels: + {{- include "kube-state-metrics.labels" $ | indent 4 }} + name: {{ template "kube-state-metrics.fullname" $ }} + namespace: {{ . 
}} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role +{{- if (not $.Values.rbac.useExistingRole) }} + name: {{ template "kube-state-metrics.fullname" $ }} +{{- else }} + name: {{ $.Values.rbac.useExistingRole }} +{{- end }} +subjects: +- kind: ServiceAccount + name: {{ template "kube-state-metrics.serviceAccountName" $ }} + namespace: {{ template "kube-state-metrics.namespace" $ }} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/service.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/service.yaml new file mode 100644 index 0000000000..6c486a662a --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/service.yaml @@ -0,0 +1,49 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + annotations: + {{- if .Values.prometheusScrape }} + prometheus.io/scrape: '{{ .Values.prometheusScrape }}' + {{- end }} + {{- if .Values.service.annotations }} + {{- toYaml .Values.service.annotations | nindent 4 }} + {{- end }} +spec: + type: "{{ .Values.service.type }}" + ports: + - name: "http" + protocol: TCP + port: {{ .Values.service.port | default 8080}} + {{- if .Values.service.nodePort }} + nodePort: {{ .Values.service.nodePort }} + {{- end }} + targetPort: {{ .Values.service.port | default 8080}} + {{ if .Values.selfMonitor.enabled }} + - name: "metrics" + protocol: TCP + port: {{ .Values.selfMonitor.telemetryPort | default 8081 }} + targetPort: {{ .Values.selfMonitor.telemetryPort | default 8081 }} + {{- if .Values.selfMonitor.telemetryNodePort }} + nodePort: {{ .Values.selfMonitor.telemetryNodePort }} + {{- end }} + {{ end }} +{{- if .Values.service.loadBalancerIP }} + loadBalancerIP: "{{ .Values.service.loadBalancerIP }}" +{{- end }} +{{- if 
.Values.service.loadBalancerSourceRanges }} + loadBalancerSourceRanges: + {{- range $cidr := .Values.service.loadBalancerSourceRanges }} + - {{ $cidr }} + {{- end }} +{{- end }} +{{- if .Values.autosharding.enabled }} + clusterIP: None +{{- else if .Values.service.clusterIP }} + clusterIP: "{{ .Values.service.clusterIP }}" +{{- end }} + selector: + {{- include "kube-state-metrics.selectorLabels" . | indent 4 }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/serviceaccount.yaml new file mode 100644 index 0000000000..a7ff4dd3d7 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/serviceaccount.yaml @@ -0,0 +1,15 @@ +{{- if .Values.serviceAccount.create -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + name: {{ template "kube-state-metrics.serviceAccountName" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} +{{- if .Values.serviceAccount.annotations }} + annotations: +{{ toYaml .Values.serviceAccount.annotations | indent 4 }} +{{- end }} +imagePullSecrets: + {{- include "kube-state-metrics.imagePullSecrets" (dict "Values" .Values "imagePullSecrets" .Values.serviceAccount.imagePullSecrets) | indent 2 }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/servicemonitor.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/servicemonitor.yaml new file mode 100644 index 0000000000..79a07a6555 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/servicemonitor.yaml @@ -0,0 +1,114 @@ +{{- if .Values.prometheus.monitor.enabled }} +apiVersion: monitoring.coreos.com/v1 +kind: ServiceMonitor +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . 
}} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} + {{- with .Values.prometheus.monitor.additionalLabels }} + {{- toYaml . | nindent 4 }} + {{- end }} + {{- with .Values.prometheus.monitor.annotations }} + annotations: + {{- toYaml . | nindent 4 }} + {{- end }} +spec: + jobLabel: {{ default "app.kubernetes.io/name" .Values.prometheus.monitor.jobLabel }} + {{- with .Values.prometheus.monitor.targetLabels }} + targetLabels: + {{- toYaml . | trim | nindent 4 }} + {{- end }} + {{- with .Values.prometheus.monitor.podTargetLabels }} + podTargetLabels: + {{- toYaml . | trim | nindent 4 }} + {{- end }} + {{- include "servicemonitor.scrapeLimits" .Values.prometheus.monitor | indent 2 }} + {{- if .Values.prometheus.monitor.namespaceSelector }} + namespaceSelector: + matchNames: + {{- with .Values.prometheus.monitor.namespaceSelector }} + {{- toYaml . | nindent 6 }} + {{- end }} + {{- end }} + selector: + matchLabels: + {{- with .Values.prometheus.monitor.selectorOverride }} + {{- toYaml . | nindent 6 }} + {{- else }} + {{- include "kube-state-metrics.selectorLabels" . 
| indent 6 }} + {{- end }} + endpoints: + - port: http + {{- if .Values.prometheus.monitor.interval }} + interval: {{ .Values.prometheus.monitor.interval }} + {{- end }} + {{- if .Values.prometheus.monitor.scrapeTimeout }} + scrapeTimeout: {{ .Values.prometheus.monitor.scrapeTimeout }} + {{- end }} + {{- if .Values.prometheus.monitor.proxyUrl }} + proxyUrl: {{ .Values.prometheus.monitor.proxyUrl}} + {{- end }} + {{- if .Values.prometheus.monitor.honorLabels }} + honorLabels: true + {{- end }} + {{- if .Values.prometheus.monitor.metricRelabelings }} + metricRelabelings: + {{- toYaml .Values.prometheus.monitor.metricRelabelings | nindent 8 }} + {{- end }} + {{- if .Values.prometheus.monitor.relabelings }} + relabelings: + {{- toYaml .Values.prometheus.monitor.relabelings | nindent 8 }} + {{- end }} + {{- if .Values.prometheus.monitor.scheme }} + scheme: {{ .Values.prometheus.monitor.scheme }} + {{- end }} + {{- if .Values.prometheus.monitor.tlsConfig }} + tlsConfig: + {{- toYaml .Values.prometheus.monitor.tlsConfig | nindent 8 }} + {{- end }} + {{- if .Values.prometheus.monitor.bearerTokenFile }} + bearerTokenFile: {{ .Values.prometheus.monitor.bearerTokenFile }} + {{- end }} + {{- with .Values.prometheus.monitor.bearerTokenSecret }} + bearerTokenSecret: + {{- toYaml . 
| nindent 8 }} + {{- end }} + {{- if .Values.selfMonitor.enabled }} + - port: metrics + {{- if .Values.prometheus.monitor.interval }} + interval: {{ .Values.prometheus.monitor.interval }} + {{- end }} + {{- if .Values.prometheus.monitor.scrapeTimeout }} + scrapeTimeout: {{ .Values.prometheus.monitor.scrapeTimeout }} + {{- end }} + {{- if .Values.prometheus.monitor.proxyUrl }} + proxyUrl: {{ .Values.prometheus.monitor.proxyUrl}} + {{- end }} + {{- if .Values.prometheus.monitor.honorLabels }} + honorLabels: true + {{- end }} + {{- if .Values.prometheus.monitor.metricRelabelings }} + metricRelabelings: + {{- toYaml .Values.prometheus.monitor.metricRelabelings | nindent 8 }} + {{- end }} + {{- if .Values.prometheus.monitor.relabelings }} + relabelings: + {{- toYaml .Values.prometheus.monitor.relabelings | nindent 8 }} + {{- end }} + {{- if .Values.prometheus.monitor.scheme }} + scheme: {{ .Values.prometheus.monitor.scheme }} + {{- end }} + {{- if .Values.prometheus.monitor.tlsConfig }} + tlsConfig: + {{- toYaml .Values.prometheus.monitor.tlsConfig | nindent 8 }} + {{- end }} + {{- if .Values.prometheus.monitor.bearerTokenFile }} + bearerTokenFile: {{ .Values.prometheus.monitor.bearerTokenFile }} + {{- end }} + {{- with .Values.prometheus.monitor.bearerTokenSecret }} + bearerTokenSecret: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/stsdiscovery-role.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/stsdiscovery-role.yaml new file mode 100644 index 0000000000..489de147c1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/stsdiscovery-role.yaml @@ -0,0 +1,26 @@ +{{- if and .Values.autosharding.enabled .Values.rbac.create -}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: stsdiscovery-{{ template "kube-state-metrics.fullname" . 
}} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} +rules: +- apiGroups: + - "" + resources: + - pods + verbs: + - get +- apiGroups: + - apps + resourceNames: + - {{ template "kube-state-metrics.fullname" . }} + resources: + - statefulsets + verbs: + - get + - list + - watch +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/stsdiscovery-rolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/stsdiscovery-rolebinding.yaml new file mode 100644 index 0000000000..73b37a4f64 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/stsdiscovery-rolebinding.yaml @@ -0,0 +1,17 @@ +{{- if and .Values.autosharding.enabled .Values.rbac.create -}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: stsdiscovery-{{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: stsdiscovery-{{ template "kube-state-metrics.fullname" . }} +subjects: + - kind: ServiceAccount + name: {{ template "kube-state-metrics.serviceAccountName" . }} + namespace: {{ template "kube-state-metrics.namespace" . 
}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/verticalpodautoscaler.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/verticalpodautoscaler.yaml new file mode 100644 index 0000000000..f46305b517 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/templates/verticalpodautoscaler.yaml @@ -0,0 +1,44 @@ +{{- if and (.Capabilities.APIVersions.Has "autoscaling.k8s.io/v1") (.Values.verticalPodAutoscaler.enabled) }} +apiVersion: autoscaling.k8s.io/v1 +kind: VerticalPodAutoscaler +metadata: + name: {{ template "kube-state-metrics.fullname" . }} + namespace: {{ template "kube-state-metrics.namespace" . }} + labels: + {{- include "kube-state-metrics.labels" . | indent 4 }} +spec: + {{- with .Values.verticalPodAutoscaler.recommenders }} + recommenders: + {{- toYaml . | nindent 4 }} + {{- end }} + resourcePolicy: + containerPolicies: + - containerName: {{ template "kube-state-metrics.name" . }} + {{- with .Values.verticalPodAutoscaler.controlledResources }} + controlledResources: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- if .Values.verticalPodAutoscaler.controlledValues }} + controlledValues: {{ .Values.verticalPodAutoscaler.controlledValues }} + {{- end }} + {{- if .Values.verticalPodAutoscaler.maxAllowed }} + maxAllowed: + {{ toYaml .Values.verticalPodAutoscaler.maxAllowed | nindent 8 }} + {{- end }} + {{- if .Values.verticalPodAutoscaler.minAllowed }} + minAllowed: + {{ toYaml .Values.verticalPodAutoscaler.minAllowed | nindent 8 }} + {{- end }} + targetRef: + apiVersion: apps/v1 + {{- if .Values.autosharding.enabled }} + kind: StatefulSet + {{- else }} + kind: Deployment + {{- end }} + name: {{ template "kube-state-metrics.fullname" . }} + {{- with .Values.verticalPodAutoscaler.updatePolicy }} + updatePolicy: + {{- toYaml . 
| nindent 4 }} + {{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/values.yaml new file mode 100644 index 0000000000..00eabab6ab --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/kube-state-metrics/values.yaml @@ -0,0 +1,441 @@ +# Default values for kube-state-metrics. +prometheusScrape: true +image: + registry: registry.k8s.io + repository: kube-state-metrics/kube-state-metrics + # If unset use v + .Charts.appVersion + tag: "" + sha: "" + pullPolicy: IfNotPresent + +imagePullSecrets: [] +# - name: "image-pull-secret" + +global: + # To help compatibility with other charts which use global.imagePullSecrets. + # Allow either an array of {name: pullSecret} maps (k8s-style), or an array of strings (more common helm-style). + # global: + # imagePullSecrets: + # - name: pullSecret1 + # - name: pullSecret2 + # or + # global: + # imagePullSecrets: + # - pullSecret1 + # - pullSecret2 + imagePullSecrets: [] + # + # Allow parent charts to override registry hostname + imageRegistry: "" + +# If set to true, this will deploy kube-state-metrics as a StatefulSet and the data +# will be automatically sharded across <.Values.replicas> pods using the built-in +# autodiscovery feature: https://github.com/kubernetes/kube-state-metrics#automated-sharding +# This is an experimental feature and there are no stability guarantees. +autosharding: + enabled: false + +replicas: 1 + +# Number of old history to retain to allow rollback +# Default Kubernetes value is set to 10 +revisionHistoryLimit: 10 + +# List of additional cli arguments to configure kube-state-metrics +# for example: --enable-gzip-encoding, --log-file, etc. 
+
# all the possible args can be found here: https://github.com/kubernetes/kube-state-metrics/blob/master/docs/cli-arguments.md +extraArgs: [] + +service: + port: 8080 + # Default to clusterIP for backward compatibility + type: ClusterIP + nodePort: 0 + loadBalancerIP: "" + # Only allow access to the loadBalancerIP from these IPs + loadBalancerSourceRanges: [] + clusterIP: "" + annotations: {} + +## Additional labels to add to all resources +customLabels: {} + # app: kube-state-metrics + +## Override selector labels +selectorOverride: {} + +## set to true to add the release label so scraping of the servicemonitor with kube-prometheus-stack works out of the box +releaseLabel: false + +hostNetwork: false + +rbac: + # If true, create & use RBAC resources + create: true + + # Set to a rolename to use existing role - skipping role creation - but still doing serviceaccount and rolebinding to it, rolename set here. + # useExistingRole: your-existing-role + + # If set to false - Run without ClusterAdmin privs needed - ONLY works if namespace is also set (if useExistingRole is set this name is used as ClusterRole or Role to bind to) + useClusterRole: true + + # Add permissions for CustomResources' apiGroups in Role/ClusterRole. Should be used in conjunction with Custom Resource State Metrics configuration + # Example: + # - apiGroups: ["monitoring.coreos.com"] + # resources: ["prometheuses"] + # verbs: ["list", "watch"] + extraRules: [] + +# Configure kube-rbac-proxy. When enabled, creates one kube-rbac-proxy container per exposed HTTP endpoint (metrics and telemetry if enabled). +# The requests are served through the same service but requests are then HTTPS. +kubeRBACProxy: + enabled: false + image: + registry: quay.io + repository: brancz/kube-rbac-proxy + tag: v0.14.0 + sha: "" + pullPolicy: IfNotPresent + + # List of additional cli arguments to configure kube-rbac-proxy + # for example: --tls-cipher-suites, --log-file, etc. 
+ # all the possible args can be found here: https://github.com/brancz/kube-rbac-proxy#usage + extraArgs: [] + + ## Specify security settings for a Container + ## Allows overrides and additional options compared to (Pod) securityContext + ## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container + containerSecurityContext: {} + + resources: {} + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 64Mi + # requests: + # cpu: 10m + # memory: 32Mi + + ## volumeMounts enables mounting custom volumes in rbac-proxy containers + ## Useful for TLS certificates and keys + volumeMounts: [] + # - mountPath: /etc/tls + # name: kube-rbac-proxy-tls + # readOnly: true + +serviceAccount: + # Specifies whether a ServiceAccount should be created, require rbac true + create: true + # The name of the ServiceAccount to use. + # If not set and create is true, a name is generated using the fullname template + name: + # Reference to one or more secrets to be used when pulling images + # ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ + imagePullSecrets: [] + # ServiceAccount annotations. + # Use case: AWS EKS IAM roles for service accounts + # ref: https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html + annotations: {} + +prometheus: + monitor: + enabled: false + annotations: {} + additionalLabels: {} + namespace: "" + namespaceSelector: [] + jobLabel: "" + targetLabels: [] + podTargetLabels: [] + interval: "" + ## SampleLimit defines per-scrape limit on number of scraped samples that will be accepted. 
+ ## + sampleLimit: 0 + + ## TargetLimit defines a limit on the number of scraped targets that will be accepted. + ## + targetLimit: 0 + + ## Per-scrape limit on number of labels that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. + ## + labelLimit: 0 + + ## Per-scrape limit on length of labels name that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. + ## + labelNameLengthLimit: 0 + + ## Per-scrape limit on length of labels value that will be accepted for a sample. Only valid in Prometheus versions 2.27.0 and newer. + ## + labelValueLengthLimit: 0 + scrapeTimeout: "" + proxyUrl: "" + selectorOverride: {} + honorLabels: false + metricRelabelings: [] + relabelings: [] + scheme: "" + ## File to read bearer token for scraping targets + bearerTokenFile: "" + ## Secret to mount to read bearer token for scraping targets. The secret needs + ## to be in the same namespace as the service monitor and accessible by the + ## Prometheus Operator + bearerTokenSecret: {} + # name: secret-name + # key: key-name + tlsConfig: {} + +## Specify if a Pod Security Policy for kube-state-metrics must be created +## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/ +## +podSecurityPolicy: + enabled: false + annotations: {} + ## Specify pod annotations + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp + ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl + ## + # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*' + # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default' + # apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default' + + additionalVolumes: [] + +## Configure network policy for kube-state-metrics +networkPolicy: + enabled: false + # networkPolicy.flavor -- Flavor of the network policy to use. 
+ # Can be: + # * kubernetes for networking.k8s.io/v1/NetworkPolicy + # * cilium for cilium.io/v2/CiliumNetworkPolicy + flavor: kubernetes + + ## Configure the cilium network policy kube-apiserver selector + # cilium: + # kubeApiServerSelector: + # - toEntities: + # - kube-apiserver + + # egress: + # - {} + # ingress: + # - {} + # podSelector: + # matchLabels: + # app.kubernetes.io/name: kube-state-metrics + +securityContext: + enabled: true + runAsGroup: 65534 + runAsUser: 65534 + fsGroup: 65534 + runAsNonRoot: true + seccompProfile: + type: RuntimeDefault + +## Specify security settings for a Container +## Allows overrides and additional options compared to (Pod) securityContext +## Ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container +containerSecurityContext: + allowPrivilegeEscalation: false + capabilities: + drop: + - ALL + +## Node labels for pod assignment +## Ref: https://kubernetes.io/docs/user-guide/node-selection/ +nodeSelector: {} + +## Affinity settings for pod assignment +## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ +affinity: {} + +## Tolerations for pod assignment +## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ +tolerations: [] + +## Topology spread constraints for pod assignment +## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/ +topologySpreadConstraints: [] + +# Annotations to be added to the deployment/statefulset +annotations: {} + +# Annotations to be added to the pod +podAnnotations: {} + +## Assign a PriorityClassName to pods if set +# priorityClassName: "" + +# Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ +podDisruptionBudget: {} + +# Comma-separated list of metrics to be exposed. +# This list comprises of exact metric names and/or regex patterns. +# The allowlist and denylist are mutually exclusive. 
+metricAllowlist: [] + +# Comma-separated list of metrics not to be enabled. +# This list comprises exact metric names and/or regex patterns. +# The allowlist and denylist are mutually exclusive. +metricDenylist: [] + +# Comma-separated list of additional Kubernetes label keys that will be used in the resource's +# labels metric. By default the metric contains only name and namespace labels. +# To include additional labels, provide a list of resource names in their plural form and Kubernetes +# label keys you would like to allow for them (Example: '=namespaces=[k8s-label-1,k8s-label-n,...],pods=[app],...)'. +# A single '*' can be provided per resource instead to allow any labels, but that has +# severe performance implications (Example: '=pods=[*]'). +metricLabelsAllowlist: [] + # - namespaces=[k8s-label-1,k8s-label-n] + +# Comma-separated list of Kubernetes annotation keys that will be used in the resource's +# labels metric. By default the metric contains only name and namespace labels. +# To include additional annotations provide a list of resource names in their plural form and Kubernetes +# annotation keys you would like to allow for them (Example: '=namespaces=[kubernetes.io/team,...],pods=[kubernetes.io/team],...)'. +# A single '*' can be provided per resource instead to allow any annotations, but that has +# severe performance implications (Example: '=pods=[*]'). +metricAnnotationsAllowList: [] + # - pods=[k8s-annotation-1,k8s-annotation-n] + +# Available collectors for kube-state-metrics. +# By default, all available resources are enabled, comment out to disable.
+collectors: + - certificatesigningrequests + - configmaps + - cronjobs + - daemonsets + - deployments + - endpoints + - horizontalpodautoscalers + - ingresses + - jobs + - leases + - limitranges + - mutatingwebhookconfigurations + - namespaces + - networkpolicies + - nodes + - persistentvolumeclaims + - persistentvolumes + - poddisruptionbudgets + - pods + - replicasets + - replicationcontrollers + - resourcequotas + - secrets + - services + - statefulsets + - storageclasses + - validatingwebhookconfigurations + - volumeattachments + +# Enabling kubeconfig will pass the --kubeconfig argument to the container +kubeconfig: + enabled: false + # base64 encoded kube-config file + secret: + +# Enabling support for customResourceState, will create a configMap including your config that will be read from kube-state-metrics +customResourceState: + enabled: false + # Add (Cluster)Role permissions to list/watch the customResources defined in the config to rbac.extraRules + config: {} + +# Enable only the release namespace for collecting resources. By default all namespaces are collected. +# If releaseNamespace and namespaces are both set a merged list will be collected. +releaseNamespace: false + +# Comma-separated list(string) or yaml list of namespaces to be enabled for collecting resources. By default all namespaces are collected. +namespaces: "" + +# Comma-separated list of namespaces not to be enabled. If namespaces and namespaces-denylist are both set, +# only namespaces that are excluded in namespaces-denylist will be used. +namespacesDenylist: "" + +## Override the deployment namespace +## +namespaceOverride: "" + +resources: {} + # We usually recommend not to specify default resources and to leave this as a conscious + # choice for the user. This also increases chances charts run on environments with little + # resources, such as Minikube. 
If you do want to specify resources, uncomment the following + # lines, adjust them as necessary, and remove the curly braces after 'resources:'. + # limits: + # cpu: 100m + # memory: 64Mi + # requests: + # cpu: 10m + # memory: 32Mi + +## Provide a k8s version to define apiGroups for podSecurityPolicy Cluster Role. +## For example: kubeTargetVersionOverride: 1.14.9 +## +kubeTargetVersionOverride: "" + +# Enable self metrics configuration for service and Service Monitor +# Default values for telemetry configuration can be overridden +# If you set telemetryNodePort, you must also set service.type to NodePort +selfMonitor: + enabled: false + # telemetryHost: 0.0.0.0 + # telemetryPort: 8081 + # telemetryNodePort: 0 + +# Enable vertical pod autoscaler support for kube-state-metrics +verticalPodAutoscaler: + enabled: false + + # Recommender responsible for generating recommendation for the object. + # List should be empty (then the default recommender will generate the recommendation) + # or contain exactly one recommender. + # recommenders: [] + # - name: custom-recommender-performance + + # List of resources that the vertical pod autoscaler can control. Defaults to cpu and memory + controlledResources: [] + # Specifies which resource values should be controlled: RequestsOnly or RequestsAndLimits. + # controlledValues: RequestsAndLimits + + # Define the max allowed resources for the pod + maxAllowed: {} + # cpu: 200m + # memory: 100Mi + # Define the min allowed resources for the pod + minAllowed: {} + # cpu: 200m + # memory: 100Mi + + # updatePolicy: + # Specifies minimal number of replicas which need to be alive for VPA Updater to attempt pod eviction + # minReplicas: 1 + # Specifies whether recommended updates are applied when a Pod is started and whether recommended updates + # are applied during the life of a Pod. Possible values are "Off", "Initial", "Recreate", and "Auto". + # updateMode: Auto + +# volumeMounts are used to add custom volume mounts to deployment. 
+# See example below +volumeMounts: [] +# - mountPath: /etc/config +# name: config-volume + +# volumes are used to add custom volumes to deployment +# See example below +volumes: [] +# - configMap: +# name: cm-for-volume +# name: config-volume + +# Extra manifests to deploy as an array +extraManifests: [] + # - apiVersion: v1 + # kind: ConfigMap + # metadata: + # labels: + # name: prometheus-extra + # data: + # extra-data: "value" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/.helmignore new file mode 100644 index 0000000000..f62b5519e5 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/.helmignore @@ -0,0 +1 @@ +templates/admission-webhooks/job-patch/README.md diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/Chart.lock b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/Chart.lock new file mode 100644 index 0000000000..9efaa36a61 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +digest: sha256:2e1da613fd8a52706bde45af077779c5d69e9e1641bdf5c982eaf6d1ac67a443 +generated: "2024-08-30T22:48:07.029709954Z" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/Chart.yaml new file mode 100644 index 0000000000..18b406f840 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/Chart.yaml @@ -0,0 +1,35 @@ +apiVersion: v2 +appVersion: 0.19.4 +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +description: A Helm chart to deploy the New Relic Infrastructure Kubernetes Operator. 
+home: https://hub.docker.com/r/newrelic/newrelic-infra-operator +icon: https://newrelic.com/themes/custom/curio/assets/mediakit/new_relic_logo_vertical.svg +keywords: +- infrastructure +- newrelic +- monitoring +maintainers: +- name: alvarocabanas + url: https://github.com/alvarocabanas +- name: carlossscastro + url: https://github.com/carlossscastro +- name: sigilioso + url: https://github.com/sigilioso +- name: gsanchezgavier + url: https://github.com/gsanchezgavier +- name: kang-makes + url: https://github.com/kang-makes +- name: marcsanmi + url: https://github.com/marcsanmi +- name: paologallinaharbur + url: https://github.com/paologallinaharbur +- name: roobre + url: https://github.com/roobre +name: newrelic-infra-operator +sources: +- https://github.com/newrelic/newrelic-infra-operator +- https://github.com/newrelic/newrelic-infra-operator/tree/main/charts/newrelic-infra-operator +version: 2.11.4 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/README.md new file mode 100644 index 0000000000..05e8a8d48b --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/README.md @@ -0,0 +1,114 @@ +# newrelic-infra-operator + +A Helm chart to deploy the New Relic Infrastructure Kubernetes Operator. 
+ +**Homepage:** + +## Helm installation + +You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the +[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository: + +```shell +helm repo add newrelic-infra-operator https://newrelic.github.io/newrelic-infra-operator +helm upgrade --install newrelic-infra-operator newrelic-infra-operator/newrelic-infra-operator -f your-custom-values.yaml +``` + +## Source Code + +* +* + +## Usage example + +Make sure you have [added the New Relic chart repository.](../../README.md#install) + +Then, to install this chart, run the following command: + +```sh +helm upgrade --install [release-name] newrelic-infra-operator/newrelic-infra-operator --set cluster=my_cluster_name --set licenseKey=[your-license-key] +``` + +When installing on Fargate, also add `--set fargate=true` + +### Configure in which pods the sidecar should be injected + +Policies are available to configure in which pods the sidecar should be injected. +Each policy is evaluated independently and, if at least one policy matches, the operator will inject the sidecar. + +Policies are composed of a `namespaceSelector` checking the labels of the Pod namespace, a `podSelector` checking +the labels of the Pod, and a `namespace` checking the namespace name. Each of those, if specified, is ANDed. + +By default, the policies are configured to inject the sidecar into each pod belonging to a Fargate profile. + +> Moreover, it is possible to add the label `infra-operator.newrelic.com/disable-injection` to Pods to exclude injection +for a single Pod that otherwise would be selected by the policies. + +Please make sure to configure policies correctly to avoid injecting the sidecar into pods running on EC2 nodes +already monitored by the infrastructure DaemonSet.
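+
+The policies above can be sketched in a values file. The example below is illustrative only — the `policies` and `namespaceName` key names reflect our reading of the chart's schema, so verify them against the chart's `values.yaml` before use:
+
+```yaml
+config:
+  infraAgentInjection:
+    policies:
+      # All fields inside one policy are ANDed together
+      - namespaceSelector:
+          matchLabels:
+            team: payments
+        podSelector:
+          matchLabels:
+            app: backend
+      # A second, independent policy: match every pod in this namespace
+      - namespaceName: batch-jobs
+```
+
+The two entries are evaluated independently; a pod matching either one gets the sidecar.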
+ +### Configure the sidecar with labelsSelectors + +It is also possible to configure `resourceRequirements` and `extraEnvVars` based on the labels of the mutating Pod. + +The current configuration increases the resource requirements for the sidecar injected on `KSM` instances. Moreover, +it injects the `DISABLE_KUBE_STATE_METRICS` environment variable into Pods not running on `KSM` instances +to decrease the load on the API server. + +## Values managed globally + +This chart implements [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library) which +means that it honors a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found in the +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| admissionWebhooksPatchJob | object | See `values.yaml` | Image used to create certificates and inject them into the admission webhook | +| admissionWebhooksPatchJob.image.pullSecrets | list | `[]` | The secrets that are needed to pull images from a custom registry. | +| admissionWebhooksPatchJob.volumeMounts | list | `[]` | Volume mounts to add to the job; you might want to mount tmp if Pod Security Policies enforce a read-only root filesystem. | +| admissionWebhooksPatchJob.volumes | list | `[]` | Volumes to add to the job container. | +| affinity | object | `{}` | Sets pod/node affinities. Can be configured also with `global.affinity` | +| certManager.enabled | bool | `false` | Use cert manager for webhook certs | +| cluster | string | `""` | Name of the Kubernetes cluster monitored. Mandatory.
Can be configured also with `global.cluster` | +| config | object | See `values.yaml` | Operator configuration | +| config.ignoreMutationErrors | bool | `true` | IgnoreMutationErrors instructs the operator to ignore injection errors instead of failing. If set to false, injection errors could block the creation of pods. | +| config.infraAgentInjection | object | See `values.yaml` | Configuration of the sidecar injection webhook | +| config.infraAgentInjection.agentConfig | object | See `values.yaml` | agentConfig contains the configuration for the injected agent container | +| config.infraAgentInjection.agentConfig.configSelectors | list | See `values.yaml` | configSelectors is the way to configure resource requirements and extra envVars of the injected sidecar container. When mutating, the first configuration whose labelSelector matches the mutating pod is applied. | +| config.infraAgentInjection.agentConfig.image | object | See `values.yaml` | Image of the infrastructure agent to be injected. | +| containerSecurityContext | object | `{}` | Sets security context (at container level). Can be configured also with `global.containerSecurityContext` | +| customSecretLicenseKey | string | `""` | In case you don't want to have the license key in your values, this allows you to point to the secret key where the license key is located. Can be configured also with `global.customSecretLicenseKey` | +| customSecretName | string | `""` | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there. Can be configured also with `global.customSecretName` | +| dnsConfig | object | `{}` | Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` | +| fullnameOverride | string | `""` | Override the full name of the release | +| hostNetwork | bool | `false` | Sets pod's hostNetwork.
Can be configured also with `global.hostNetwork` | +| image | object | See `values.yaml` | Image for the New Relic Infrastructure Operator | +| image.pullSecrets | list | `[]` | The secrets that are needed to pull images from a custom registry. | +| licenseKey | string | `""` | Sets the license key to use. Can be configured also with `global.licenseKey` | +| nameOverride | string | `""` | Override the name of the chart | +| nodeSelector | object | `{}` | Sets pod's node selector. Can be configured also with `global.nodeSelector` | +| podAnnotations | object | `{}` | Annotations to add to the pod. | +| podSecurityContext | object | `{"fsGroup":1001,"runAsGroup":1001,"runAsUser":1001}` | Sets security context (at pod level). Can be configured also with `global.podSecurityContext` | +| priorityClassName | string | `""` | Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` | +| rbac.pspEnabled | bool | `false` | Whether the chart should create Pod Security Policy objects. | +| replicas | int | `1` | | +| resources | object | `{"limits":{"memory":"80M"},"requests":{"cpu":"100m","memory":"30M"}}` | Resources available for this pod | +| serviceAccount | object | See `values.yaml` | Settings controlling ServiceAccount creation | +| serviceAccount.create | bool | `true` | Specifies whether a ServiceAccount should be created | +| timeoutSeconds | int | `10` | Webhook timeout. Ref: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#timeouts | +| tolerations | list | `[]` | Sets pod's tolerations to node taints.
Can be configured also with `global.tolerations` | + +## Maintainers + +* [alvarocabanas](https://github.com/alvarocabanas) +* [carlossscastro](https://github.com/carlossscastro) +* [sigilioso](https://github.com/sigilioso) +* [gsanchezgavier](https://github.com/gsanchezgavier) +* [kang-makes](https://github.com/kang-makes) +* [marcsanmi](https://github.com/marcsanmi) +* [paologallinaharbur](https://github.com/paologallinaharbur) +* [roobre](https://github.com/roobre) diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/README.md.gotmpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/README.md.gotmpl new file mode 100644 index 0000000000..1ef6033559 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/README.md.gotmpl @@ -0,0 +1,77 @@ +{{ template "chart.header" . }} +{{ template "chart.deprecationWarning" . }} + +{{ template "chart.description" . }} + +{{ template "chart.homepageLine" . }} + +## Helm installation + +You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the +[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository: + +```shell +helm repo add newrelic-infra-operator https://newrelic.github.io/newrelic-infra-operator +helm upgrade --install newrelic-infra-operator newrelic-infra-operator/newrelic-infra-operator -f your-custom-values.yaml +``` + +{{ template "chart.sourcesSection" .
}} + +## Usage example + +Make sure you have [added the New Relic chart repository.](../../README.md#install) + +Then, to install this chart, run the following command: + +```sh +helm upgrade --install [release-name] newrelic-infra-operator/newrelic-infra-operator --set cluster=my_cluster_name --set licenseKey=[your-license-key] +``` + +When installing on Fargate, also add `--set fargate=true` + +### Configure in which pods the sidecar should be injected + +Policies are available to configure in which pods the sidecar should be injected. +Each policy is evaluated independently and, if at least one policy matches, the operator will inject the sidecar. + +Policies are composed of a `namespaceSelector` checking the labels of the Pod namespace, a `podSelector` checking +the labels of the Pod, and a `namespace` checking the namespace name. Each of those, if specified, is ANDed. + +By default, the policies are configured to inject the sidecar into each pod belonging to a Fargate profile. + +> Moreover, it is possible to add the label `infra-operator.newrelic.com/disable-injection` to Pods to exclude injection +for a single Pod that otherwise would be selected by the policies. + +Please make sure to configure policies correctly to avoid injecting the sidecar into pods running on EC2 nodes +already monitored by the infrastructure DaemonSet. + +### Configure the sidecar with labelsSelectors + +It is also possible to configure `resourceRequirements` and `extraEnvVars` based on the labels of the mutating Pod. + +The current configuration increases the resource requirements for the sidecar injected on `KSM` instances. Moreover, +it injects the `DISABLE_KUBE_STATE_METRICS` environment variable into Pods not running on `KSM` instances +to decrease the load on the API server.
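+
+As a sketch, a custom `configSelectors` entry in the values could look like the following. This is illustrative only — the exact field names, in particular the shape of `extraEnvVars`, should be checked against the chart's `values.yaml`:
+
+```yaml
+config:
+  infraAgentInjection:
+    agentConfig:
+      configSelectors:
+        # The first entry whose labelSelector matches the mutating pod is applied
+        - labelSelector:
+            matchLabels:
+              app.kubernetes.io/name: kube-state-metrics
+          resourceRequirements:
+            limits:
+              memory: 300M
+```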
+ +## Values managed globally + +This chart implements the [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library) which +means that it honors a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found at +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). + +{{ template "chart.valuesSection" . }} + +{{ if .Maintainers }} +## Maintainers +{{ range .Maintainers }} +{{- if .Name }} +{{- if .Url }} +* [{{ .Name }}]({{ .Url }}) +{{- else }} +* {{ .Name }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/.helmignore new file mode 100644 index 0000000000..0e8a0eb36f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/Chart.yaml new file mode 100644 index 0000000000..f2ee5497ee --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/Chart.yaml @@ -0,0 +1,17 @@ +apiVersion: v2 +description: Provides helpers to provide consistency on all the charts +keywords: +- newrelic +- chart-library +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +- name: kang-makes + url: https://github.com/kang-makes +name: common-library +type: library +version: 1.3.0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/DEVELOPERS.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/DEVELOPERS.md new file mode 100644 index 0000000000..7208c673ec --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/DEVELOPERS.md @@ -0,0 +1,747 @@ +# Functions/templates documented for chart writers +Here is some rough documentation separated by the file that contains the function, the function +name and how to use it. We are not covering functions that start with `_` (e.g. +`newrelic.common.license._licenseKey`) because they are used internally by this library for +other helpers. Helm does not have the concept of "public" or "private" functions/templates so +this is a convention of ours. + +## _naming.tpl +These functions are used to name objects. 
+ +### `newrelic.common.naming.name` +This is the same as the idiomatic `CHART-NAME.name` that is created when you use `helm create`. + +It honors `.Values.nameOverride`. + +Usage: +```mustache +{{ include "newrelic.common.naming.name" . }} +``` + +### `newrelic.common.naming.fullname` +This is the same as the idiomatic `CHART-NAME.fullname` that is created when you use `helm create`. + +It honors `.Values.fullnameOverride`. + +Usage: +```mustache +{{ include "newrelic.common.naming.fullname" . }} +``` + +### `newrelic.common.naming.chart` +This is the same as the idiomatic `CHART-NAME.chart` that is created when you use `helm create`. + +It is mostly useless for chart writers. It is used internally for templating the labels but there +is no reason to keep it "private". + +Usage: +```mustache +{{ include "newrelic.common.naming.chart" . }} +``` + +### `newrelic.common.naming.truncateToDNS` +This is a useful template that trims a string to 63 chars and ensures it does not end with a dash (`-`). +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). + +Usage: +```mustache +{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-long-enough-to-break-something" }} +{{- $truncatedName := include "newrelic.common.naming.truncateToDNS" $nameToTruncate }} +{{- $truncatedName }} +{{- /* This should print: a-really-really-really-really-REALLY-long-string-that-should-be */ -}} +``` + +### `newrelic.common.naming.truncateToDNSWithSuffix` +This template function is the same as the above but instead of receiving a string you should give a `dict`
This function will join them with a dash (`-`) and trim the `name` so the +result of `name-suffix` is no more than 63 chars + +Usage: +```mustache +{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-enought-long-to-brak-something" +{{- $suffix := "A-NOT-SO-LONG-SUFFIX" }} +{{- $truncatedName := include "truncateToDNSWithSuffix" (dict "name" $nameToTruncate "suffix" $suffix) }} +{{- $truncatedName }} +{{- /* This should print: a-really-really-really-really-REALLY-long-A-NOT-SO-LONG-SUFFIX */ -}} +``` + + + +## _labels.tpl +### `newrelic.common.labels`, `newrelic.common.labels.selectorLabels` and `newrelic.common.labels.podLabels` +These are functions that are used to label objects. They are configured by this `values.yaml` +```yaml +global: + podLabels: {} # included in all the pods of all the charts that implement this library + labels: {} # included in all the objects of all the charts that implement this library +podLabels: {} # included in all the pods of this chart +labels: {} # included in all the objects of this chart +``` + +label maps are merged from global to local values. + +And chart writer should use them like this: +```mustache +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + {{- include "newrelic.common.labels.podLabels" . | nindent 8 }} +``` + +`newrelic.common.labels.podLabels` includes `newrelic.common.labels.selectorLabels` automatically. + + + +## _priority-class-name.tpl +### `newrelic.common.priorityClassName` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + priorityClassName: "" +priorityClassName: "" +``` + +Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this +library to work properly. 
If in your values a non-falsy `priorityClassName` is found, the global
+one is always ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.priorityClassName" . }}
+  priorityClassName: {{ . }}
+  {{- end }}
+```
+
+
+
+## _hostnetwork.tpl
+### `newrelic.common.hostNetwork`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  hostNetwork:  # Note that this is empty (nil)
+hostNetwork:  # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `hostNetwork` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.hostNetwork" . }}
+  hostNetwork: {{ . }}
+  {{- end }}
+```
+
+### `newrelic.common.hostNetwork.value`
+This function is an abstraction of the function above, but it returns "true" or "false" directly.
+
+Be careful when using this with an `if`, as Helm evaluates "false" (a non-empty string) as `true`.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }}
+```
+
+
+
+## _dnsconfig.tpl
+### `newrelic.common.dnsConfig`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  dnsConfig: {}
+dnsConfig: {}
+```
+
+Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this
+library to work properly. If in your values a non-falsy `dnsConfig` is found, the global
+one is always ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.dnsConfig" . }}
+  dnsConfig:
+    {{- . | nindent 4 }}
+  {{- end }}
+```
+
+
+
+## _images.tpl
+These functions help us to deal with how images are templated.
This allows setting `registries`
+where to fetch images globally while being flexible enough to fit in different maps of images
+and deployments with one or more images. This is an example of a complex `values.yaml` that
+we are going to use throughout the documentation of these functions:
+
+```yaml
+global:
+  images:
+    registry: nexus-3-instance.internal.clients-domain.tld
+jobImage:
+  registry:  # defaults to "example.tld" when empty in these examples
+  repository: ingress-nginx/kube-webhook-certgen
+  tag: v1.1.1
+  pullPolicy: IfNotPresent
+  pullSecrets: []
+images:
+  integration:
+    registry:
+    repository: newrelic/nri-kube-events
+    tag: 1.8.0
+    pullPolicy: IfNotPresent
+  agent:
+    registry:
+    repository: newrelic/k8s-events-forwarder
+    tag: 1.22.0
+    pullPolicy: IfNotPresent
+  pullSecrets: []
+```
+
+### `newrelic.common.images.image`
+This will return a string with the image ready to be downloaded that includes the registry, the image and the tag.
+`defaultRegistry` is used to keep the `registry` field empty in `values.yaml` so you can override the image using
+`global.images.registry` or your local `jobImage.registry`, and still be able to fall back to a registry that is not `docker.io`
+(or the default registry that the client could have set in the CRI).
+
+Usage:
+```mustache
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.agent "context" .) }}
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.image" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }}
+```
+
+### `newrelic.common.images.registry`
+It returns the registry from the global or local values. You should avoid using this helper to build your image
+URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed.
+
+Usage:
+```mustache
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.agent "context" .) }}
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.registry" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }}
+```
+
+### `newrelic.common.images.repository`
+It returns the image repository from the values. You should avoid using this helper to build your image
+URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.jobImage "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.agent "context" .) }}
+```
+
+### `newrelic.common.images.tag`
+It returns the image's tag from the values. You should avoid using this helper to build your image
+URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.jobImage "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.agent "context" .) }}
+```
+
+### `newrelic.common.images.renderPullSecrets`
+It returns a merged map that contains the pull secrets from the global configuration and the local one.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.jobImage.pullSecrets "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }}
+```
+
+
+
+## _serviceaccount.tpl
+These functions are used to evaluate if the service account should be created, with which name, and to add annotations to it.
+
+The functions that the common library has implemented for service accounts are:
+* `newrelic.common.serviceAccount.create`
+* `newrelic.common.serviceAccount.name`
+* `newrelic.common.serviceAccount.annotations`
+
+Usage:
+```mustache
+{{- if include "newrelic.common.serviceAccount.create" . -}}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  {{- with (include "newrelic.common.serviceAccount.annotations" .) }}
+  annotations:
+    {{- . | nindent 4 }}
+  {{- end }}
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+  name: {{ include "newrelic.common.serviceAccount.name" . }}
+  namespace: {{ .Release.Namespace }}
+{{- end }}
+```
+
+
+
+## _affinity.tpl, _nodeselector.tpl and _tolerations.tpl
+These three files are almost the same and they follow the idiomatic way of `helm create`.
+
+Each function also looks if there is a global value, like the other helpers:
+```yaml
+global:
+  affinity: {}
+  nodeSelector: {}
+  tolerations: []
+affinity: {}
+nodeSelector: {}
+tolerations: []
+```
+
+The values here are replaced instead of being merged. If a value at root level is found, the global one is ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.nodeSelector" . }}
+  nodeSelector:
+    {{- . | nindent 4 }}
+  {{- end }}
+  {{- with include "newrelic.common.affinity" . }}
+  affinity:
+    {{- . | nindent 4 }}
+  {{- end }}
+  {{- with include "newrelic.common.tolerations" . }}
+  tolerations:
+    {{- . | nindent 4 }}
+  {{- end }}
+```
+
+
+
+## _agent-config.tpl
+### `newrelic.common.agentConfig.defaults`
+This returns a YAML that the agent can use directly as a config, including other options from the values file like verbose mode,
+custom attributes, FedRAMP and such.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+  name: {{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "agent-config") }}
+  namespace: {{ .Release.Namespace }}
+data:
+  newrelic-infra.yml: |-
+    # This is the configuration file for the infrastructure agent. See:
+    # https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/
+    {{- include "newrelic.common.agentConfig.defaults" . | nindent 4 }}
+```
+
+
+
+## _cluster.tpl
+### `newrelic.common.cluster`
+Returns the cluster name.
+
+Usage:
+```mustache
+{{ include "newrelic.common.cluster" . }}
+```
+
+
+
+## _custom-attributes.tpl
+### `newrelic.common.customAttributes`
+Returns custom attributes in YAML format.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: example
+data:
+  custom-attributes.yaml: |
+    {{- include "newrelic.common.customAttributes" . | nindent 4 }}
+  custom-attributes.json: |
+    {{- include "newrelic.common.customAttributes" . | fromYaml | toJson | nindent 4 }}
+```
+
+
+
+## _fedramp.tpl
+### `newrelic.common.fedramp.enabled`
+Returns true if FedRAMP is enabled or an empty string if not. It can be safely used in conditionals as an empty string is a Helm falsiness.
+
+Usage:
+```mustache
+{{ include "newrelic.common.fedramp.enabled" . }}
+```
+
+### `newrelic.common.fedramp.enabled.value`
+Returns true if FedRAMP is enabled or false if not.
This is to have the value of FedRAMP ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.fedramp.enabled.value" . }}
+```
+
+
+
+## _license.tpl
+### `newrelic.common.license.secretName` and `newrelic.common.license.secretKeyName`
+Returns the secret and the key inside the secret where to read the license key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the license key.
+
+To create the secret use `newrelic.common.license.secret`.
+
+Usage:
+```mustache
+{{- if and (.Values.controlPlane.enabled) (not (include "newrelic.fargate" .)) }}
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example
+spec:
+  containers:
+    - name: agent
+      env:
+        - name: "NRIA_LICENSE_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.license.secretName" . }}
+              key: {{ include "newrelic.common.license.secretKeyName" . }}
+```
+
+
+
+## _license_secret.tpl
+### `newrelic.common.license.secret`
+This function templates the secret that is used by agents and integrations with the license key provided by the user. It will
+template nothing (empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails in case the user has not provided any license key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.license.secret" . -}}
+```
+
+
+
+## _insights.tpl
+### `newrelic.common.insightsKey.secretName` and `newrelic.common.insightsKey.secretKeyName`
+Returns the secret and the key inside the secret where to read the insights key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the insights key.
+
+To create the secret use `newrelic.common.insightsKey.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: statsd
+spec:
+  containers:
+    - name: statsd
+      env:
+        - name: "INSIGHTS_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.insightsKey.secretName" . }}
+              key: {{ include "newrelic.common.insightsKey.secretKeyName" . }}
+```
+
+
+
+## _insights_secret.tpl
+### `newrelic.common.insightsKey.secret`
+This function templates the secret that is used by agents and integrations with the insights key provided by the user. It will
+template nothing (empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails in case the user has not provided any insights key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.insightsKey.secret" . -}}
+```
+
+
+
+## _userkey.tpl
+### `newrelic.common.userKey.secretName` and `newrelic.common.userKey.secretKeyName`
+Returns the secret and the key inside the secret where to read a user key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the user key.
+
+To create the secret use `newrelic.common.userKey.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: statsd
+spec:
+  containers:
+    - name: statsd
+      env:
+        - name: "API_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.userKey.secretName" . }}
+              key: {{ include "newrelic.common.userKey.secretKeyName" . }}
+```
+
+
+
+## _userkey_secret.tpl
+### `newrelic.common.userKey.secret`
+This function templates the secret that is used by agents and integrations with a user key provided by the user. It will
+template nothing (empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails in case the user has not provided any API key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.userKey.secret" . -}}
+```
+
+
+
+## _region.tpl
+### `newrelic.common.region.validate`
+Given a string, returns a normalized name for the region if it is valid.
+
+This function does not need the context of the chart, only the value to be validated. The region returned
+honors the region [definition of the newrelic-client-go implementation](https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21),
+so (as of 2024/09/14) it returns the region as "US", "EU", "Staging", or "Local".
+
+In case the region provided does not match any of these four, the helper calls `fail` and aborts the templating.
+
+Usage:
+```mustache
+{{ include "newrelic.common.region.validate" "us" }}
+```
+
+### `newrelic.common.region`
+It reads global and local variables for `region`:
+```yaml
+global:
+  region:  # Note that this can be empty (nil) or "" (empty string)
+region:  # Note that this can be empty (nil) or "" (empty string)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `region` is defined, the global one is always ignored.
+
+This function is guarded: it forces users to either set the license key as a value in their
+`values.yaml` or to specify a global or local `region` value. To understand how the `region` value
+works, read the documentation of `newrelic.common.region.validate`.
+
+The function derives the region (US, EU or Staging) from the license key and the
+`nrStaging` toggle. Whichever region is computed from the license/toggle can be overridden by
+the `region` value.
+
+Usage:
+```mustache
+{{ include "newrelic.common.region" . }}
+```
+
+
+
+## _low-data-mode.tpl
+### `newrelic.common.lowDataMode`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  lowDataMode:  # Note that this is empty (nil)
+lowDataMode:  # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `lowDataMode` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.lowDataMode" . }}
+```
+
+
+
+## _privileged.tpl
+### `newrelic.common.privileged`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  privileged:  # Note that this is empty (nil)
+privileged:  # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `privileged` is defined, the global one is always ignored.
+
+Chart writers could override this and put a `true` directly in the `values.yaml` to override the
+default of the common library.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged" . }}
+```
+
+### `newrelic.common.privileged.value`
+Returns true if privileged mode is enabled or false if not. This is to have the value of privileged ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged.value" . }}
+```
+
+
+
+## _proxy.tpl
+### `newrelic.common.proxy`
+Returns the proxy URL configured by the user.
+
+Usage:
+```mustache
+{{ include "newrelic.common.proxy" . }}
+```
+
+
+
+## _security-context.tpl
+Use these functions to share the security context among all charts.
Useful in clusters that enforce not running as the root
+user (like OpenShift) or clusters that have admission webhooks.
+
+The functions are:
+* `newrelic.common.securityContext.container`
+* `newrelic.common.securityContext.pod`
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example
+spec:
+  {{- with include "newrelic.common.securityContext.pod" . }}
+  securityContext:
+    {{- . | nindent 4 }}
+  {{- end }}
+
+  containers:
+    - name: example
+      {{- with include "newrelic.common.securityContext.container" . }}
+      securityContext:
+        {{- . | nindent 8 }}
+      {{- end }}
+```
+
+
+
+## _staging.tpl
+### `newrelic.common.nrStaging`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  nrStaging:  # Note that this is empty (nil)
+nrStaging:  # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `nrStaging` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging" . }}
+```
+
+### `newrelic.common.nrStaging.value`
+Returns true if staging is enabled or false if not. This is to have the staging value ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging.value" . }}
+```
+
+
+
+## _verbose-log.tpl
+### `newrelic.common.verboseLog`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  verboseLog:  # Note that this is empty (nil)
+verboseLog:  # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `verboseLog` is defined, the global one is always ignored.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog" . }}
+```
+
+### `newrelic.common.verboseLog.valueAsBoolean`
+Returns true if verbose is enabled or false if not. This is to have the verbose value ready to be templated as a boolean.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsBoolean" . }}
+```
+
+### `newrelic.common.verboseLog.valueAsInt`
+Returns 1 if verbose is enabled or 0 if not. This is to have the verbose value ready to be templated as an integer.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsInt" . }}
+```
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/README.md
new file mode 100644
index 0000000000..10f08ca677
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/README.md
@@ -0,0 +1,106 @@
+# Helm Common library
+
+The common library is a way to unify the UX through all the Helm charts that implement it.
+
+New Relic's tooling suite is huge and growing, and this library allows setting things globally
+as well as locally for a single chart.
+
+## Documentation for chart writers
+
+If you are writing a chart that is going to use this library you can check the [developers guide](/library/common-library/DEVELOPERS.md) to see all
+the functions/templates that we have implemented, what they do and how to use them.
+
+## Values managed globally
+
+We want to have a seamless experience through all the charts so we created this library that tries to standardize the behaviour
+of all the charts. Sadly, because of the complexity of all these integrations, not all the charts behave exactly as expected.
+
+An example is `newrelic-infrastructure`, which ignores `hostNetwork` in the control plane scraper because most users have the
+control plane listening on `localhost` on the node.
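+To make the global-versus-local behavior concrete, here is a hypothetical `values.yaml` sketch for a chart bundle (the values shown are illustrative, not any chart's defaults):
+
+```yaml
+# Hypothetical example: global values are picked up by every subchart that
+# implements the common library, while a subchart-local value wins over the
+# global one (here newrelic-infrastructure overrides lowDataMode locally).
+global:
+  cluster: my-cluster
+  licenseKey: "<your-license-key>"
+  lowDataMode: true
+newrelic-infrastructure:
+  lowDataMode: false
+```
+
+With these values, every subchart reports against `my-cluster` with low data mode on, and only `newrelic-infrastructure` runs with `lowDataMode` disabled.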
+
+For each chart that has a special behavior (or further information about the behavior) there is a "chart particularities" section
+in its README.md that explains what the expected behavior is.
+
+At the time of writing this, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this
+library and honor global options as described in this document.
+
+Here is a list of global options:
+
+| Global keys | Local keys | Default | Merged [1](#values-managed-globally-1) | Description |
+|-------------|------------|---------|--------------------------------------------------|-------------|
+| global.cluster | cluster | `""` | | Name of the Kubernetes cluster monitored |
+| global.licenseKey | licenseKey | `""` | | Sets the license key to use |
+| global.customSecretName | customSecretName | `""` | | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there |
+| global.customSecretLicenseKey | customSecretLicenseKey | `""` | | In case you don't want to have the license key in your values, this allows you to point to the key of the secret where the license key is located |
+| global.podLabels | podLabels | `{}` | yes | Additional labels for chart pods |
+| global.labels | labels | `{}` | yes | Additional labels for chart objects |
+| global.priorityClassName | priorityClassName | `""` | | Sets pod's priorityClassName |
+| global.hostNetwork | hostNetwork | `false` | | Sets pod's hostNetwork |
+| global.dnsConfig | dnsConfig | `{}` | | Sets pod's dnsConfig |
+| global.images.registry | See [Further information](#values-managed-globally-2) | `""` | | Changes the registry where to get the images. Useful when there is an internal image cache/proxy |
+| global.images.pullSecrets | See [Further information](#values-managed-globally-2) | `[]` | yes | Sets secrets to be able to fetch images |
+| global.podSecurityContext | podSecurityContext | `{}` | | Sets security context (at pod level) |
+| global.containerSecurityContext | containerSecurityContext | `{}` | | Sets security context (at container level) |
+| global.affinity | affinity | `{}` | | Sets pod/node affinities |
+| global.nodeSelector | nodeSelector | `{}` | | Sets pod's node selector |
+| global.tolerations | tolerations | `[]` | | Sets pod's tolerations to node taints |
+| global.serviceAccount.create | serviceAccount.create | `true` | | Configures if the service account should be created or not |
+| global.serviceAccount.name | serviceAccount.name | name of the release | | Changes the name of the service account. This is honored if you disable the creation of the service account on this chart so you can use your own. |
+| global.serviceAccount.annotations | serviceAccount.annotations | `{}` | yes | Adds these annotations to the service account we create |
+| global.customAttributes | customAttributes | `{}` | | Adds extra attributes to the cluster and all the metrics emitted to the backend |
+| global.fedramp | fedramp | `false` | | Enables FedRAMP |
+| global.lowDataMode | lowDataMode | `false` | | Reduces the number of metrics sent in order to reduce costs |
+| global.privileged | privileged | Depends on the chart | | Has a different behavior in each integration. See [Further information](#values-managed-globally-3) |
+| global.proxy | proxy | `""` | | Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port` |
+| global.nrStaging | nrStaging | `false` | | Sends the metrics to the staging backend. Requires a valid staging license key |
+| global.verboseLog | verboseLog | `false` | | Sets debug/trace logs for this integration, or for all integrations if it is set globally |
+
+### Further information
+
+#### 1. Merged
+
+Merged means that the values from global are not replaced by the local ones. Think of this example:
+```yaml
+global:
+  labels:
+    global: global
+  hostNetwork: true
+  nodeSelector:
+    global: global
+
+labels:
+  local: local
+nodeSelector:
+  local: local
+hostNetwork: false
+```
+
+These values will template `hostNetwork` to `false`, a map of labels `{ "global": "global", "local": "local" }` and a `nodeSelector` with
+`{ "local": "local" }`.
+
+As Helm by default merges all maps, it could be confusing that we have two behaviors (merging `labels` and replacing `nodeSelector`)
+when carrying values from global to local. This is the rationale behind it:
+* `hostNetwork` is templated to `false` because it overrides the value defined globally.
+* `labels` are merged because the user may want to label all the New Relic pods at once and label other solution pods differently for
+  clarity's sake.
+* `nodeSelector` does not merge like `labels` because merging could make it harder to overwrite/delete a selector that comes from global,
+  given the logic that Helm follows when merging maps.
+
+
+#### 2. Fine grain registries
+
+Some charts have only one image while others can have two or more images. The local path for the registry can change depending
+on the chart itself.
+
+As this is mostly unique per Helm chart, you should take a look at the chart's values table (or directly at the `values.yaml` file) to see all the
+images that you can change.
+
+This should only be needed if you have an advanced setup that forces you to have enough granularity to set a proxy/cache registry per integration.
+
+
+
+#### 3. Privileged mode
+
+By default, from the common library, the privileged mode is set to false.
But most of the Helm charts require this to be true to fetch more
+metrics, so you could see it set to true in some charts. The consequences of the privileged mode differ from one chart to another, so each chart that
+honors the privileged mode toggle should have a section in its README explaining its behavior with the mode enabled or disabled.
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_affinity.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_affinity.tpl
new file mode 100644
index 0000000000..1b2636754e
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_affinity.tpl
@@ -0,0 +1,10 @@
+{{- /* Defines the Pod affinity */ -}}
+{{- define "newrelic.common.affinity" -}}
+  {{- if .Values.affinity -}}
+    {{- toYaml .Values.affinity -}}
+  {{- else if .Values.global -}}
+    {{- if .Values.global.affinity -}}
+      {{- toYaml .Values.global.affinity -}}
+    {{- end -}}
+  {{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_agent-config.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_agent-config.tpl
new file mode 100644
index 0000000000..9c32861a02
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_agent-config.tpl
@@ -0,0 +1,26 @@
+{{/*
+This helper should return the defaults that all agents should have
+*/}}
+{{- define "newrelic.common.agentConfig.defaults" -}}
+{{- if include "newrelic.common.verboseLog" . }}
+log:
+  level: trace
+{{- end }}
+
+{{- if (include "newrelic.common.nrStaging" . ) }}
+staging: true
+{{- end }}
+
+{{- with include "newrelic.common.proxy" . }}
+proxy: {{ . | quote }}
+{{- end }}
+
+{{- with include "newrelic.common.fedramp.enabled" . }}
+fedramp: {{ . }}
+{{- end }}
+
+{{- with fromYaml ( include "newrelic.common.customAttributes" . ) }}
+custom_attributes:
+  {{- toYaml . | nindent 2 }}
+{{- end }}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_cluster.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_cluster.tpl
new file mode 100644
index 0000000000..0197dd35a3
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_cluster.tpl
@@ -0,0 +1,15 @@
+{{/*
+Return the cluster
+*/}}
+{{- define "newrelic.common.cluster" -}}
+{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}}
+{{- $global := index .Values "global" | default dict -}}
+
+{{- if .Values.cluster -}}
+  {{- .Values.cluster -}}
+{{- else if $global.cluster -}}
+  {{- $global.cluster -}}
+{{- else -}}
+  {{ fail "There is not cluster name definition set neither in `.global.cluster' nor `.cluster' in your values.yaml. Cluster name is required." }}
+{{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_custom-attributes.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_custom-attributes.tpl
new file mode 100644
index 0000000000..92020719c3
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_custom-attributes.tpl
@@ -0,0 +1,17 @@
+{{/*
+This will render custom attributes as a YAML ready to be templated or be used with `fromYaml`.
+*/}}
+{{- define "newrelic.common.customAttributes" -}}
+{{- $customAttributes := dict -}}
+
+{{- $global := index .Values "global" | default dict -}}
+{{- if $global.customAttributes -}}
+{{- $customAttributes = mergeOverwrite $customAttributes $global.customAttributes -}}
+{{- end -}}
+
+{{- if .Values.customAttributes -}}
+{{- $customAttributes = mergeOverwrite $customAttributes .Values.customAttributes -}}
+{{- end -}}
+
+{{- toYaml $customAttributes -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_dnsconfig.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_dnsconfig.tpl
new file mode 100644
index 0000000000..d4e40aa8af
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_dnsconfig.tpl
@@ -0,0 +1,10 @@
+{{- /* Defines the Pod dnsConfig */ -}}
+{{- define "newrelic.common.dnsConfig" -}}
+  {{- if .Values.dnsConfig -}}
+    {{- toYaml .Values.dnsConfig -}}
+  {{- else if .Values.global -}}
+    {{- if .Values.global.dnsConfig -}}
+      {{- toYaml .Values.global.dnsConfig -}}
+    {{- end -}}
+  {{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_fedramp.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_fedramp.tpl
new file mode 100644
index 0000000000..9df8d6b5e9
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_fedramp.tpl
@@ -0,0 +1,25 @@
+{{- /* Defines the fedRAMP flag */ -}}
+{{- define "newrelic.common.fedramp.enabled" -}}
+  {{- if .Values.fedramp -}}
+    {{- if .Values.fedramp.enabled -}}
+      {{- .Values.fedramp.enabled -}}
+    {{- end -}}
+  {{- else if .Values.global -}}
+    {{- if .Values.global.fedramp -}}
+      {{- if .Values.global.fedramp.enabled -}}
+        {{- .Values.global.fedramp.enabled -}}
+      {{- end -}}
+    {{- end -}}
+  {{- end -}}
+{{- end -}}
+
+
+
+{{- /* Return FedRAMP value directly ready to be templated */ -}}
+{{- define "newrelic.common.fedramp.enabled.value" -}}
+{{- if include "newrelic.common.fedramp.enabled" . -}}
+true
+{{- else -}}
+false
+{{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_hostnetwork.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_hostnetwork.tpl
new file mode 100644
index 0000000000..4cf017ef7e
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_hostnetwork.tpl
@@ -0,0 +1,39 @@
+{{- /*
+Abstraction of the hostNetwork toggle.
+This helper allows to override the global `.global.hostNetwork` with the value of `.hostNetwork`.
+Returns "true" if `hostNetwork` is enabled, otherwise "" (empty string)
+*/ -}}
+{{- define "newrelic.common.hostNetwork" -}}
+{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}}
+{{- $global := index .Values "global" | default dict -}}
+
+{{- /*
+`get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs
+
+We also want only to return when this is true, returning `false` here will template "false" (string) when doing
+an `(include "newrelic.common.hostNetwork" .)`, which is not an "empty string" so it is `true` if it is used
+as an evaluation somewhere else.
+*/ -}}
+{{- if get .Values "hostNetwork" | kindIs "bool" -}}
+  {{- if .Values.hostNetwork -}}
+    {{- .Values.hostNetwork -}}
+  {{- end -}}
+{{- else if get $global "hostNetwork" | kindIs "bool" -}}
+  {{- if $global.hostNetwork -}}
+    {{- $global.hostNetwork -}}
+  {{- end -}}
+{{- end -}}
+{{- end -}}
+
+
+{{- /*
+Abstraction of the hostNetwork toggle.
+This helper abstracts the function "newrelic.common.hostNetwork" to return true or false directly. +*/ -}} +{{- define "newrelic.common.hostNetwork.value" -}} +{{- if include "newrelic.common.hostNetwork" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_images.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_images.tpl new file mode 100644 index 0000000000..d4fb432905 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_images.tpl @@ -0,0 +1,94 @@ +{{- /* +Return the proper image name +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.image" -}} + {{- $registryName := include "newrelic.common.images.registry" ( dict "imageRoot" .imageRoot "defaultRegistry" .defaultRegistry "context" .context ) -}} + {{- $repositoryName := include "newrelic.common.images.repository" .imageRoot -}} + {{- $tag := include "newrelic.common.images.tag" ( dict "imageRoot" .imageRoot "context" .context) -}} + + {{- if $registryName -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag | quote -}} + {{- else -}} + {{- printf "%s:%s" $repositoryName $tag | quote -}} + {{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image registry +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.registry" -}} +{{- $globalRegistry := "" -}} +{{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- with .context.Values.global.images.registry -}} + {{- $globalRegistry = . 
-}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- $localRegistry := "" -}} +{{- if .imageRoot.registry -}} + {{- $localRegistry = .imageRoot.registry -}} +{{- end -}} + +{{- $registry := $localRegistry | default $globalRegistry | default .defaultRegistry -}} +{{- if $registry -}} + {{- $registry -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image repository +{{ include "newrelic.common.images.repository" .Values.path.to.the.image }} +*/ -}} +{{- define "newrelic.common.images.repository" -}} + {{- .repository -}} +{{- end -}} + + + +{{- /* +Return the proper image tag +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.path.to.the.image "context" .) }} +*/ -}} +{{- define "newrelic.common.images.tag" -}} + {{- .imageRoot.tag | default .context.Chart.AppVersion | toString -}} +{{- end -}} + + + +{{- /* +Return the proper Image Pull Registry Secret Names evaluating values as templates +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.path.to.the.images.pullSecrets1, .Values.path.to.the.images.pullSecrets2) "context" .) }} +*/ -}} +{{- define "newrelic.common.images.renderPullSecrets" -}} + {{- $flatlist := list }} + + {{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- if .context.Values.global.images.pullSecrets -}} + {{- range .context.Values.global.images.pullSecrets -}} + {{- $flatlist = append $flatlist . -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- range .pullSecrets -}} + {{- if not (empty .) -}} + {{- range . -}} + {{- $flatlist = append $flatlist . 
-}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- if $flatlist -}} + {{- toYaml $flatlist -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_insights.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_insights.tpl new file mode 100644 index 0000000000..895c377326 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_insights.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the Insights Key. +*/}} +{{- define "newrelic.common.insightsKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "insightskey" ) -}} +{{- include "newrelic.common.insightsKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +*/}} +{{- define "newrelic.common.insightsKey.secretKeyName" -}} +{{- include "newrelic.common.insightsKey._customSecretKey" . | default "insightsKey" -}} +{{- end -}} + +{{/* +Return local insightsKey if set, global otherwise. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._licenseKey" -}} +{{- if .Values.insightsKey -}} + {{- .Values.insightsKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.insightsKey -}} + {{- .Values.global.insightsKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the Insights Key. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.insightsKey._customSecretName" -}} +{{- if .Values.customInsightsKeySecretName -}} + {{- .Values.customInsightsKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretName -}} + {{- .Values.global.customInsightsKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._customSecretKey" -}} +{{- if .Values.customInsightsKeySecretKey -}} + {{- .Values.customInsightsKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretKey }} + {{- .Values.global.customInsightsKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_insights_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_insights_secret.yaml.tpl new file mode 100644 index 0000000000..556caa6ca6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_insights_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the insights key secret if the user has not specified a custom secret. +*/}} +{{- define "newrelic.common.insightsKey.secret" }} +{{- if not (include "newrelic.common.insightsKey._customSecretName" .) }} +{{- /* Fail if insightsKey is empty and required: */ -}} +{{- if not (include "newrelic.common.insightsKey._licenseKey" .) }} + {{- fail "You must specify an insightsKey or a customInsightsKeySecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.insightsKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.insightsKey.secretKeyName" .
}}: {{ include "newrelic.common.insightsKey._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_labels.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_labels.tpl new file mode 100644 index 0000000000..b025948285 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_labels.tpl @@ -0,0 +1,54 @@ +{{/* +This will render the labels that should be used in all the manifests used by the helm chart. +*/}} +{{- define "newrelic.common.labels" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- $chart := dict "helm.sh/chart" (include "newrelic.common.naming.chart" . ) -}} +{{- $managedBy := dict "app.kubernetes.io/managed-by" .Release.Service -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $labels := mustMergeOverwrite $chart $managedBy $selectorLabels -}} +{{- if .Chart.AppVersion -}} +{{- $labels = mustMergeOverwrite $labels (dict "app.kubernetes.io/version" .Chart.AppVersion) -}} +{{- end -}} + +{{- $globalUserLabels := $global.labels | default dict -}} +{{- $localUserLabels := .Values.labels | default dict -}} + +{{- $labels = mustMergeOverwrite $labels $globalUserLabels $localUserLabels -}} + +{{- toYaml $labels -}} +{{- end -}} + + + +{{/* +This will render the labels that should be used in deployments/daemonsets template pods as a selector. +*/}} +{{- define "newrelic.common.labels.selectorLabels" -}} +{{- $name := dict "app.kubernetes.io/name" ( include "newrelic.common.naming.name" . 
) -}} +{{- $instance := dict "app.kubernetes.io/instance" .Release.Name -}} + +{{- $selectorLabels := mustMergeOverwrite $name $instance -}} + +{{- toYaml $selectorLabels -}} +{{- end }} + + + +{{/* +Pod labels +*/}} +{{- define "newrelic.common.labels.podLabels" -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $global := index .Values "global" | default dict -}} +{{- $globalPodLabels := $global.podLabels | default dict }} + +{{- $localPodLabels := .Values.podLabels | default dict }} + +{{- $podLabels := mustMergeOverwrite $selectorLabels $globalPodLabels $localPodLabels -}} + +{{- toYaml $podLabels -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_license.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_license.tpl new file mode 100644 index 0000000000..cb349f6bb6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_license.tpl @@ -0,0 +1,68 @@ +{{/* +Return the name of the secret holding the License Key. +*/}} +{{- define "newrelic.common.license.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "license" ) -}} +{{- include "newrelic.common.license._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +*/}} +{{- define "newrelic.common.license.secretKeyName" -}} +{{- include "newrelic.common.license._customSecretKey" . | default "licenseKey" -}} +{{- end -}} + +{{/* +Return local licenseKey if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.license._licenseKey" -}} +{{- if .Values.licenseKey -}} + {{- .Values.licenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.licenseKey -}} + {{- .Values.global.licenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the License Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretName" -}} +{{- if .Values.customSecretName -}} + {{- .Values.customSecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretName -}} + {{- .Values.global.customSecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretKey" -}} +{{- if .Values.customSecretLicenseKey -}} + {{- .Values.customSecretLicenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretLicenseKey }} + {{- .Values.global.customSecretLicenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the user set a custom secret for the license. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._usesCustomSecret" -}} +{{- if or (include "newrelic.common.license._customSecretName" .) (include "newrelic.common.license._customSecretKey" .) -}} +true +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_license_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_license_secret.yaml.tpl new file mode 100644 index 0000000000..610a0a3370 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_license_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the license key secret if user has not specified a custom secret. 
+*/}} +{{- define "newrelic.common.license.secret" }} +{{- if not (include "newrelic.common.license._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.license._licenseKey" .) }} + {{- fail "You must specify a licenseKey or a customSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.license.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.license.secretKeyName" . }}: {{ include "newrelic.common.license._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_low-data-mode.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_low-data-mode.tpl new file mode 100644 index 0000000000..3dd55ef2ff --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_low-data-mode.tpl @@ -0,0 +1,26 @@ +{{- /* +Abstraction of the lowDataMode toggle. +This helper allows to override the global `.global.lowDataMode` with the value of `.lowDataMode`. +Returns "true" if `lowDataMode` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.lowDataMode" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "lowDataMode" | kindIs "bool") -}} + {{- if .Values.lowDataMode -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.lowDataMode" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.lowDataMode -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "lowDataMode" | kindIs "bool" -}} + {{- if $global.lowDataMode -}} + {{- $global.lowDataMode -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_naming.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_naming.tpl new file mode 100644 index 0000000000..19fa92648c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_naming.tpl @@ -0,0 +1,73 @@ +{{/* +This is a function to be called directly with a string to truncate it to +63 chars, because some Kubernetes name fields are limited to that. +*/}} +{{- define "newrelic.common.naming.truncateToDNS" -}} +{{- . | trunc 63 | trimSuffix "-" }} +{{- end }} + + + +{{- /* +Given a name and a suffix, returns a DNS-valid name which always includes the suffix, truncating the name if needed. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If the suffix is too long it gets truncated, but it always takes precedence over the name, so a 63-char suffix would suppress the name.
+Usage: +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" "" "suffix" "my-suffix" ) }} +*/ -}} +{{- define "newrelic.common.naming.truncateToDNSWithSuffix" -}} +{{- $suffix := (include "newrelic.common.naming.truncateToDNS" .suffix) -}} +{{- $maxLen := (max (sub 63 (add1 (len $suffix))) 0) -}} {{- /* We prepend "-" to the suffix so an additional character is needed */ -}} + +{{- $newName := .name | trunc ($maxLen | int) | trimSuffix "-" -}} +{{- if $newName -}} +{{- printf "%s-%s" $newName $suffix -}} +{{- else -}} +{{ $suffix }} +{{- end -}} + +{{- end -}} + + + +{{/* +Expand the name of the chart. +Uses the Chart name by default if nameOverride is not set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.name" -}} +{{- $name := .Values.nameOverride | default .Chart.Name -}} +{{- include "newrelic.common.naming.truncateToDNS" $name -}} +{{- end }} + + + +{{/* +Create a default fully qualified app name. +By default the full name will be just the chart name in case the release name already includes it; if not, +the two will be concatenated as "release name-chart name". This could change if fullnameOverride or +nameOverride are set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.fullname" -}} +{{- $name := include "newrelic.common.naming.name" . -}} + +{{- if .Values.fullnameOverride -}} + {{- $name = .Values.fullnameOverride -}} +{{- else if not (contains $name .Release.Name) -}} + {{- $name = printf "%s-%s" .Release.Name $name -}} +{{- end -}} + +{{- include "newrelic.common.naming.truncateToDNS" $name -}} + +{{- end -}} + + + +{{/* +Create chart name and version as used by the chart label. +This function should not be used for naming objects. Use "common.naming.{name,fullname}" instead.
+*/}} +{{- define "newrelic.common.naming.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_nodeselector.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_nodeselector.tpl new file mode 100644 index 0000000000..d488873412 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_nodeselector.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod nodeSelector */ -}} +{{- define "newrelic.common.nodeSelector" -}} + {{- if .Values.nodeSelector -}} + {{- toYaml .Values.nodeSelector -}} + {{- else if .Values.global -}} + {{- if .Values.global.nodeSelector -}} + {{- toYaml .Values.global.nodeSelector -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_priority-class-name.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_priority-class-name.tpl new file mode 100644 index 0000000000..50182b7343 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_priority-class-name.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the pod priorityClassName */ -}} +{{- define "newrelic.common.priorityClassName" -}} + {{- if .Values.priorityClassName -}} + {{- .Values.priorityClassName -}} + {{- else if .Values.global -}} + {{- if .Values.global.priorityClassName -}} + {{- .Values.global.priorityClassName -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_privileged.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_privileged.tpl new file 
mode 100644 index 0000000000..f3ae814dd9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_privileged.tpl @@ -0,0 +1,28 @@ +{{- /* +This is a helper that returns whether the chart should assume the user is fine deploying privileged pods. +*/ -}} +{{- define "newrelic.common.privileged" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist. */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if get .Values "privileged" | kindIs "bool" -}} + {{- if .Values.privileged -}} + {{- .Values.privileged -}} + {{- end -}} +{{- else if get $global "privileged" | kindIs "bool" -}} + {{- if $global.privileged -}} + {{- $global.privileged -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* Return "true" or "false" directly based on the output of "newrelic.common.privileged" */ -}} +{{- define "newrelic.common.privileged.value" -}} +{{- if include "newrelic.common.privileged" .
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_proxy.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_proxy.tpl new file mode 100644 index 0000000000..60f34c7ec1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_proxy.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the proxy */ -}} +{{- define "newrelic.common.proxy" -}} + {{- if .Values.proxy -}} + {{- .Values.proxy -}} + {{- else if .Values.global -}} + {{- if .Values.global.proxy -}} + {{- .Values.global.proxy -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_region.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_region.tpl new file mode 100644 index 0000000000..bdcacf3235 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_region.tpl @@ -0,0 +1,74 @@ +{{/* +Return the region that is being used by the user +*/}} +{{- define "newrelic.common.region" -}} +{{- if and (include "newrelic.common.license._usesCustomSecret" .) (not (include "newrelic.common.region._fromValues" .)) -}} + {{- fail "This Helm Chart is not able to compute the region. You must specify a .global.region or .region if the license is set using a custom secret." -}} +{{- end -}} + +{{- /* Defaults */ -}} +{{- $region := "us" -}} +{{- if include "newrelic.common.nrStaging" . -}} + {{- $region = "staging" -}} +{{- else if include "newrelic.common.region._isEULicenseKey" . -}} + {{- $region = "eu" -}} +{{- end -}} + +{{- include "newrelic.common.region.validate" (include "newrelic.common.region._fromValues" . 
| default $region ) -}} +{{- end -}} + + + +{{/* +Returns the region from the values if valid. This only return the value from the `values.yaml`. +More intelligence should be used to compute the region. + +Usage: `include "newrelic.common.region.validate" "us"` +*/}} +{{- define "newrelic.common.region.validate" -}} +{{- /* Ref: https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21 */ -}} +{{- $region := . | lower -}} +{{- if eq $region "us" -}} + US +{{- else if eq $region "eu" -}} + EU +{{- else if eq $region "staging" -}} + Staging +{{- else if eq $region "local" -}} + Local +{{- else -}} + {{- fail (printf "the region provided is not valid: %s not in \"US\" \"EU\" \"Staging\" \"Local\"" .) -}} +{{- end -}} +{{- end -}} + + + +{{/* +Returns the region from the values. This only return the value from the `values.yaml`. +More intelligence should be used to compute the region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._fromValues" -}} +{{- if .Values.region -}} + {{- .Values.region -}} +{{- else if .Values.global -}} + {{- if .Values.global.region -}} + {{- .Values.global.region -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the license is for EU region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._isEULicenseKey" -}} +{{- if not (include "newrelic.common.license._usesCustomSecret" .) -}} + {{- $license := include "newrelic.common.license._licenseKey" . 
-}} + {{- if hasPrefix "eu" $license -}} + true + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_security-context.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_security-context.tpl new file mode 100644 index 0000000000..9edfcabfd0 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_security-context.tpl @@ -0,0 +1,23 @@ +{{- /* Defines the container securityContext */ -}} +{{- define "newrelic.common.securityContext.container" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.containerSecurityContext -}} + {{- toYaml .Values.containerSecurityContext -}} +{{- else if $global.containerSecurityContext -}} + {{- toYaml $global.containerSecurityContext -}} +{{- end -}} +{{- end -}} + + + +{{- /* Defines the pod securityContext */ -}} +{{- define "newrelic.common.securityContext.pod" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.podSecurityContext -}} + {{- toYaml .Values.podSecurityContext -}} +{{- else if $global.podSecurityContext -}} + {{- toYaml $global.podSecurityContext -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_serviceaccount.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_serviceaccount.tpl new file mode 100644 index 0000000000..2d352f6ea9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_serviceaccount.tpl @@ -0,0 +1,90 @@ +{{- /* Defines if the service account has to be created or not */ -}} +{{- define "newrelic.common.serviceAccount.create" -}} +{{- $valueFound := false -}} + +{{- /* Look for a local creation of a service account */ -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "create" | kindIs "bool") -}} + {{- $valueFound = true -}} + {{- if .Values.serviceAccount.create -}} + {{- /* + We only want to return when this is true, as returning `false` here would template "false" (string) when doing + an `(include "newrelic.common.serviceAccount.create" .)`, which is not an "empty string" and so would be `true` if used + as an evaluation somewhere else. + */ -}} + {{- .Values.serviceAccount.create -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* Look for a global creation of a service account */ -}} +{{- if not $valueFound -}} + {{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} + {{- $global := index .Values "global" | default dict -}} + {{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "create" | kindIs "bool" -}} + {{- $valueFound = true -}} + {{- if $global.serviceAccount.create -}} + {{- $global.serviceAccount.create -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* In case no serviceAccount value has been found, default to "true" */ -}} +{{- if not $valueFound -}} +true +{{- end -}} +{{- end -}} + + + +{{- /* Defines the name of the service account */ -}} +{{- define "newrelic.common.serviceAccount.name" -}} +{{- $localServiceAccount := "" -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "name" | kindIs "string") -}} + {{- $localServiceAccount = .Values.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := "" -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "name" | kindIs "string" -}} + {{- $globalServiceAccount = $global.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- if (include "newrelic.common.serviceAccount.create" .)
-}} + {{- $localServiceAccount | default $globalServiceAccount | default (include "newrelic.common.naming.fullname" .) -}} +{{- else -}} + {{- $localServiceAccount | default $globalServiceAccount | default "default" -}} +{{- end -}} +{{- end -}} + + + +{{- /* Merge the global and local annotations for the service account */ -}} +{{- define "newrelic.common.serviceAccount.annotations" -}} +{{- $localServiceAccount := dict -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if get .Values.serviceAccount "annotations" -}} + {{- $localServiceAccount = .Values.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := dict -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "annotations" -}} + {{- $globalServiceAccount = $global.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $merged := mustMergeOverwrite $globalServiceAccount $localServiceAccount -}} + +{{- if $merged -}} + {{- toYaml $merged -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_staging.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_staging.tpl new file mode 100644 index 0000000000..bd9ad09bb9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_staging.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the nrStaging toggle. +This helper allows to override the global `.global.nrStaging` with the value of `.nrStaging`. 
+Returns "true" if `nrStaging` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.nrStaging" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "nrStaging" | kindIs "bool") -}} + {{- if .Values.nrStaging -}} + {{- /* + We only want to return when this is true; returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.nrStaging" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.nrStaging -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "nrStaging" | kindIs "bool" -}} + {{- if $global.nrStaging -}} + {{- $global.nrStaging -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Returns "true" or "false" directly instead of an empty string (Helm falsiness) based on the output of "newrelic.common.nrStaging" +*/ -}} +{{- define "newrelic.common.nrStaging.value" -}} +{{- if include "newrelic.common.nrStaging" .
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_tolerations.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_tolerations.tpl new file mode 100644 index 0000000000..e016b38e27 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_tolerations.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod tolerations */ -}} +{{- define "newrelic.common.tolerations" -}} + {{- if .Values.tolerations -}} + {{- toYaml .Values.tolerations -}} + {{- else if .Values.global -}} + {{- if .Values.global.tolerations -}} + {{- toYaml .Values.global.tolerations -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_userkey.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_userkey.tpl new file mode 100644 index 0000000000..982ea8e09d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_userkey.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the API Key. +*/}} +{{- define "newrelic.common.userKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "userkey" ) -}} +{{- include "newrelic.common.userKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +*/}} +{{- define "newrelic.common.userKey.secretKeyName" -}} +{{- include "newrelic.common.userKey._customSecretKey" . | default "userKey" -}} +{{- end -}} + +{{/* +Return local API Key if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.userKey._userKey" -}} +{{- if .Values.userKey -}} + {{- .Values.userKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.userKey -}} + {{- .Values.global.userKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the API Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretName" -}} +{{- if .Values.customUserKeySecretName -}} + {{- .Values.customUserKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretName -}} + {{- .Values.global.customUserKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretKey" -}} +{{- if .Values.customUserKeySecretKey -}} + {{- .Values.customUserKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretKey }} + {{- .Values.global.customUserKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_userkey_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_userkey_secret.yaml.tpl new file mode 100644 index 0000000000..b979856548 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_userkey_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the user key secret if the user has not specified a custom secret. +*/}} +{{- define "newrelic.common.userKey.secret" }} +{{- if not (include "newrelic.common.userKey._customSecretName" .) }} +{{- /* Fail if user key is empty and required: */ -}} +{{- if not (include "newrelic.common.userKey._userKey" .
}} + {{- fail "You must specify a userKey or a customUserKeySecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.userKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.userKey.secretKeyName" . }}: {{ include "newrelic.common.userKey._userKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_verbose-log.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_verbose-log.tpl new file mode 100644 index 0000000000..2286d46815 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/templates/_verbose-log.tpl @@ -0,0 +1,54 @@ +{{- /* +Abstraction of the verbose toggle. +This helper allows overriding the global `.global.verboseLog` with the value of `.verboseLog`. +Returns "true" if `verboseLog` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.verboseLog" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "verboseLog" | kindIs "bool") -}} + {{- if .Values.verboseLog -}} + {{- /* + We only want to return when this is true; returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.verboseLog" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else.
+ */ -}} + {{- .Values.verboseLog -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "verboseLog" | kindIs "bool" -}} + {{- if $global.verboseLog -}} + {{- $global.verboseLog -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return true or false directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsBoolean" -}} +{{- if include "newrelic.common.verboseLog" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return 1 or 0 directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsInt" -}} +{{- if include "newrelic.common.verboseLog" . -}} +1 +{{- else -}} +0 +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/values.yaml new file mode 100644 index 0000000000..75e2d112ad --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/charts/common-library/values.yaml @@ -0,0 +1 @@ +# values are not needed for the library chart; however, this file is still needed for helm lint to work.
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/ci/test-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/ci/test-values.yaml new file mode 100644 index 0000000000..3e154e1d46 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/ci/test-values.yaml @@ -0,0 +1,39 @@ +cluster: test-cluster +licenseKey: pleasePassCIThanks +serviceAccount: + name: newrelic-infra-operator-test +image: + repository: e2e/newrelic-infra-operator + tag: test # Defaults to AppVersion + pullPolicy: IfNotPresent + pullSecrets: + - name: test-pull-secret +admissionWebhooksPatchJob: + volumeMounts: + - name: tmp + mountPath: /tmp + volumes: + - name: tmp + emptyDir: +podAnnotations: + test-annotation: test-value +affinity: + podAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 1 + podAffinityTerm: + topologyKey: topology.kubernetes.io/zone + labelSelector: + matchExpressions: + - key: test-key + operator: In + values: + - test-value +tolerations: +- key: "key1" + operator: "Exists" + effect: "NoSchedule" +nodeSelector: + beta.kubernetes.io/os: linux + +fargate: true diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/NOTES.txt b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/NOTES.txt new file mode 100644 index 0000000000..5b11d2d833 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/NOTES.txt @@ -0,0 +1,4 @@ +Your deployment of the New Relic Infrastructure Operator is complete. +You can check on the progress of this by running the following command: + + kubectl get deployments -o wide -w --namespace {{ .Release.Namespace }} {{ include "newrelic.common.naming.fullname" . 
}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/_helpers.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/_helpers.tpl new file mode 100644 index 0000000000..8a8858c82b --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/_helpers.tpl @@ -0,0 +1,136 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Renders a value that contains template. +Usage: +{{ include "tplvalues.render" ( dict "value" .Values.path.to.the.Value "context" $) }} +*/}} +{{- define "tplvalues.render" -}} + {{- if typeIs "string" .value }} + {{- tpl .value .context }} + {{- else }} + {{- tpl (.value | toYaml) .context }} + {{- end }} +{{- end -}} + +{{- /* +Naming helpers +*/ -}} +{{- define "newrelic-infra-operator.name.admission" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.name" .) "suffix" "admission") }} +{{- end -}} + +{{- define "newrelic-infra-operator.fullname.admission" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "admission") }} +{{- end -}} + +{{- define "newrelic-infra-operator.fullname.admission.serviceAccount" -}} +{{- if include "newrelic.common.serviceAccount.create" . -}} + {{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "admission") -}} +{{- else -}} + {{- include "newrelic.common.serviceAccount.name" . -}} +{{- end -}} +{{- end -}} + +{{- define "newrelic-infra-operator.name.admission-create" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.name" .) 
"suffix" "admission-create") }} +{{- end -}} + +{{- define "newrelic-infra-operator.fullname.admission-create" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "admission-create") }} +{{- end -}} + +{{- define "newrelic-infra-operator.name.admission-patch" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.name" .) "suffix" "admission-patch") }} +{{- end -}} + +{{- define "newrelic-infra-operator.fullname.admission-patch" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "admission-patch") }} +{{- end -}} + +{{- define "newrelic-infra-operator.fullname.self-signed-issuer" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "self-signed-issuer") }} +{{- end -}} + +{{- define "newrelic-infra-operator.fullname.root-cert" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "root-cert") }} +{{- end -}} + +{{- define "newrelic-infra-operator.fullname.root-issuer" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "root-issuer") }} +{{- end -}} + +{{- define "newrelic-infra-operator.fullname.webhook-cert" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "webhook-cert") }} +{{- end -}} + +{{- define "newrelic-infra-operator.fullname.infra-agent" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) 
"suffix" "infra-agent") }} +{{- end -}} + +{{- define "newrelic-infra-operator.fullname.config" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "config") }} +{{- end -}} + +{{/* +Returns Infra-agent rules +*/}} +{{- define "newrelic-infra-operator.infra-agent-monitoring-rules" -}} +- apiGroups: [""] + resources: + - "nodes" + - "nodes/metrics" + - "nodes/stats" + - "nodes/proxy" + - "pods" + - "services" + - "namespaces" + verbs: ["get", "list"] +- nonResourceURLs: ["/metrics"] + verbs: ["get"] +{{- end -}} + +{{/* +Returns fargate +*/}} +{{- define "newrelic-infra-operator.fargate" -}} +{{- if .Values.global }} + {{- if .Values.global.fargate }} + {{- .Values.global.fargate -}} + {{- end -}} +{{- else if .Values.fargate }} + {{- .Values.fargate -}} +{{- end -}} +{{- end -}} + +{{/* +Returns fargate configuration for configmap data +*/}} +{{- define "newrelic-infra-operator.fargate-config" -}} +infraAgentInjection: + resourcePrefix: {{ include "newrelic.common.naming.fullname" . }} +{{- if include "newrelic-infra-operator.fargate" . }} +{{- if not .Values.config.infraAgentInjection.policies }} + policies: + - podSelector: + matchExpressions: + - key: "eks.amazonaws.com/fargate-profile" + operator: Exists +{{- end }} + agentConfig: +{{- if not .Values.config.infraAgentInjection.agentConfig.customAttributes }} + customAttributes: + - name: computeType + defaultValue: serverless + - name: fargateProfile + fromLabel: eks.amazonaws.com/fargate-profile +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Returns configmap data +*/}} +{{- define "newrelic-infra-operator.configmap.data" -}} +{{ toYaml (merge (include "newrelic-infra-operator.fargate-config" . 
| fromYaml) .Values.config) }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/clusterrole.yaml new file mode 100644 index 0000000000..44c2b3eba8 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/clusterrole.yaml @@ -0,0 +1,27 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "newrelic-infra-operator.fullname.admission" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "newrelic-infra-operator.name.admission" . }} + {{- include "newrelic.common.labels" . | nindent 4 }} +rules: + - apiGroups: + - admissionregistration.k8s.io + resources: + - mutatingwebhookconfigurations + verbs: + - get + - update + {{- if .Values.rbac.pspEnabled }} + - apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - {{ include "newrelic-infra-operator.fullname.admission" . 
}} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/clusterrolebinding.yaml new file mode 100644 index 0000000000..902206c220 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/clusterrolebinding.yaml @@ -0,0 +1,20 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "newrelic-infra-operator.fullname.admission" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "newrelic-infra-operator.name.admission" . }} + {{- include "newrelic.common.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "newrelic-infra-operator.fullname.admission" . }} +subjects: + - kind: ServiceAccount + name: {{ include "newrelic-infra-operator.fullname.admission.serviceAccount" . 
}} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/job-createSecret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/job-createSecret.yaml new file mode 100644 index 0000000000..022e6254e6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/job-createSecret.yaml @@ -0,0 +1,57 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: batch/v1 +kind: Job +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-infra-operator.fullname.admission-create" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "newrelic-infra-operator.name.admission-create" . }} + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + template: + metadata: + name: {{ include "newrelic-infra-operator.fullname.admission-create" . }} + labels: + app: {{ include "newrelic-infra-operator.name.admission-create" . }} + {{- include "newrelic.common.labels.podLabels" . | nindent 8 }} + spec: + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" ( list .Values.admissionWebhooksPatchJob.image.pullSecrets ) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + containers: + - name: create + image: {{ include "newrelic.common.images.image" ( dict "defaultRegistry" "registry.k8s.io" "imageRoot" .Values.admissionWebhooksPatchJob.image "context" .) }} + imagePullPolicy: {{ .Values.admissionWebhooksPatchJob.image.pullPolicy }} + args: + - create + - --host={{ include "newrelic.common.naming.fullname" . }},{{ include "newrelic.common.naming.fullname" . 
}}.{{ .Release.Namespace }}.svc + - --namespace={{ .Release.Namespace }} + - --secret-name={{ include "newrelic-infra-operator.fullname.admission" . }} + - --cert-name=tls.crt + - --key-name=tls.key + {{- if .Values.admissionWebhooksPatchJob.image.volumeMounts }} + volumeMounts: + {{- include "tplvalues.render" ( dict "value" .Values.admissionWebhooksPatchJob.image.volumeMounts "context" $ ) | nindent 10 }} + {{- end }} + {{- if .Values.admissionWebhooksPatchJob.image.volumes }} + volumes: + {{- include "tplvalues.render" ( dict "value" .Values.admissionWebhooksPatchJob.image.volumes "context" $ ) | nindent 8 }} + {{- end }} + restartPolicy: OnFailure + serviceAccountName: {{ include "newrelic-infra-operator.fullname.admission.serviceAccount" . }} + securityContext: + runAsGroup: 2000 + runAsNonRoot: true + runAsUser: 2000 + nodeSelector: + kubernetes.io/os: linux + {{ include "newrelic.common.nodeSelector" . | nindent 8 }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . | nindent 8 -}} + {{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/job-patchWebhook.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/job-patchWebhook.yaml new file mode 100644 index 0000000000..61e3636784 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/job-patchWebhook.yaml @@ -0,0 +1,57 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: batch/v1 +kind: Job +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-infra-operator.fullname.admission-patch" . }} + annotations: + "helm.sh/hook": post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "newrelic-infra-operator.name.admission-patch" . 
}} + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + template: + metadata: + name: {{ include "newrelic-infra-operator.fullname.admission-patch" . }} + labels: + app: {{ include "newrelic-infra-operator.name.admission-patch" . }} + {{- include "newrelic.common.labels" . | nindent 8 }} + spec: + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" ( list .Values.admissionWebhooksPatchJob.image.pullSecrets ) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + containers: + - name: patch + image: {{ include "newrelic.common.images.image" ( dict "defaultRegistry" "registry.k8s.io" "imageRoot" .Values.admissionWebhooksPatchJob.image "context" .) }} + imagePullPolicy: {{ .Values.admissionWebhooksPatchJob.image.pullPolicy }} + args: + - patch + - --webhook-name={{ include "newrelic.common.naming.fullname" . }} + - --namespace={{ .Release.Namespace }} + - --secret-name={{ include "newrelic-infra-operator.fullname.admission" . }} + - --patch-failure-policy=Ignore + - --patch-validating=false + {{- if .Values.admissionWebhooksPatchJob.image.volumeMounts }} + volumeMounts: + {{- include "tplvalues.render" ( dict "value" .Values.admissionWebhooksPatchJob.image.volumeMounts "context" $ ) | nindent 10 }} + {{- end }} + {{- if .Values.admissionWebhooksPatchJob.image.volumes }} + volumes: + {{- include "tplvalues.render" ( dict "value" .Values.admissionWebhooksPatchJob.image.volumes "context" $ ) | nindent 8 }} + {{- end }} + restartPolicy: OnFailure + serviceAccountName: {{ include "newrelic-infra-operator.fullname.admission.serviceAccount" . }} + securityContext: + runAsGroup: 2000 + runAsNonRoot: true + runAsUser: 2000 + nodeSelector: + kubernetes.io/os: linux + {{ include "newrelic.common.nodeSelector" . | nindent 8 }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . 
| nindent 8 -}} + {{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/psp.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/psp.yaml new file mode 100644 index 0000000000..64237abb4d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/psp.yaml @@ -0,0 +1,50 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled) (.Values.rbac.pspEnabled)) }} +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: {{ include "newrelic-infra-operator.fullname.admission" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "newrelic-infra-operator.name.admission" . }} + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + privileged: false + # Required to prevent escalations to root. + # allowPrivilegeEscalation: false + # This is redundant with non-root + disallow privilege escalation, + # but we can provide it for defense in depth. + # requiredDropCapabilities: + # - ALL + # Allow core volume types. + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + - 'persistentVolumeClaim' + hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }} + hostIPC: false + hostPID: false + runAsUser: + # Permits the container to run with root privileges as well. + rule: 'RunAsAny' + seLinux: + # This policy assumes the nodes are using AppArmor rather than SELinux. + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 0 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. 
+ - min: 0 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/role.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/role.yaml new file mode 100644 index 0000000000..e3213f7c5e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/role.yaml @@ -0,0 +1,21 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-infra-operator.fullname.admission" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "newrelic-infra-operator.name.admission" . }} + {{- include "newrelic.common.labels" . | nindent 4 }} +rules: + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - create +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/rolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/rolebinding.yaml new file mode 100644 index 0000000000..67eb792987 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/rolebinding.yaml @@ -0,0 +1,21 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-infra-operator.fullname.admission" . 
}} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "newrelic-infra-operator.name.admission" . }} + {{- include "newrelic.common.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: {{ include "newrelic-infra-operator.fullname.admission" . }} +subjects: + - kind: ServiceAccount + name: {{ include "newrelic-infra-operator.fullname.admission.serviceAccount" . }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/serviceaccount.yaml new file mode 100644 index 0000000000..18eb7347df --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/job-patch/serviceaccount.yaml @@ -0,0 +1,14 @@ +{{- $createServiceAccount := include "newrelic.common.serviceAccount.create" . -}} +{{- if (and $createServiceAccount (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: v1 +kind: ServiceAccount +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-infra-operator.fullname.admission.serviceAccount" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "newrelic-infra-operator.name.admission" . }} + {{- include "newrelic.common.labels" . 
| nindent 4 }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/mutatingWebhookConfiguration.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/mutatingWebhookConfiguration.yaml new file mode 100644 index 0000000000..efa6052555 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/admission-webhooks/mutatingWebhookConfiguration.yaml @@ -0,0 +1,32 @@ +apiVersion: admissionregistration.k8s.io/v1 +kind: MutatingWebhookConfiguration +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} +{{- if .Values.certManager.enabled }} + annotations: + certmanager.k8s.io/inject-ca-from: {{ printf "%s/%s-root-cert" .Release.Namespace (include "newrelic.common.naming.fullname" .) | quote }} + cert-manager.io/inject-ca-from: {{ printf "%s/%s-root-cert" .Release.Namespace (include "newrelic.common.naming.fullname" .) | quote }} +{{- end }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +webhooks: +- name: newrelic-infra-operator.newrelic.com + clientConfig: + service: + name: {{ include "newrelic.common.naming.fullname" . 
}} + namespace: {{ .Release.Namespace }} + path: "/mutate-v1-pod" +{{- if not .Values.certManager.enabled }} + caBundle: "" +{{- end }} + rules: + - operations: ["CREATE"] + apiGroups: [""] + apiVersions: ["v1"] + resources: ["pods"] + failurePolicy: Ignore + timeoutSeconds: {{ .Values.timeoutSeconds }} + sideEffects: NoneOnDryRun + admissionReviewVersions: + - v1 + reinvocationPolicy: IfNeeded diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/cert-manager.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/cert-manager.yaml new file mode 100644 index 0000000000..800dc24530 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/cert-manager.yaml @@ -0,0 +1,52 @@ +{{ if .Values.certManager.enabled }} +--- +# Create a selfsigned Issuer, in order to create a root CA certificate for +# signing webhook serving certificates +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-infra-operator.fullname.self-signed-issuer" . }} +spec: + selfSigned: {} +--- +# Generate a CA Certificate used to sign certificates for the webhook +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-infra-operator.fullname.root-cert" . }} +spec: + secretName: {{ include "newrelic-infra-operator.fullname.root-cert" . }} + duration: 43800h # 5y + issuerRef: + name: {{ include "newrelic-infra-operator.fullname.self-signed-issuer" . }} + commonName: "ca.webhook.nri" + isCA: true +--- +# Create an Issuer that uses the above generated CA certificate to issue certs +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-infra-operator.fullname.root-issuer" . }} +spec: + ca: + secretName: {{ include "newrelic-infra-operator.fullname.root-cert" . 
}} +--- +# Finally, generate a serving certificate for the webhook to use +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-infra-operator.fullname.webhook-cert" . }} +spec: + secretName: {{ include "newrelic-infra-operator.fullname.admission" . }} + duration: 8760h # 1y + issuerRef: + name: {{ include "newrelic-infra-operator.fullname.root-issuer" . }} + dnsNames: + - {{ include "newrelic.common.naming.fullname" . }} + - {{ include "newrelic.common.naming.fullname" . }}.{{ .Release.Namespace }} + - {{ include "newrelic.common.naming.fullname" . }}.{{ .Release.Namespace }}.svc +{{ end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/clusterrole.yaml new file mode 100644 index 0000000000..cb20e310db --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/clusterrole.yaml @@ -0,0 +1,39 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +rules: + {{/* Allow creating and updating secrets with license key for infra agent. */ -}} + - apiGroups: [""] + resources: + - "secrets" + verbs: ["get", "update", "patch"] + resourceNames: [ {{ include "newrelic-infra-operator.fullname.config" . | quote }} ] + {{/* resourceNames used above do not support "create" verb. */ -}} + - apiGroups: [""] + resources: + - "secrets" + verbs: ["create"] + {{/* "list" and "watch" are required for controller-runtime caching. */ -}} + - apiGroups: ["rbac.authorization.k8s.io"] + resources: ["clusterrolebindings"] + verbs: ["list", "watch", "get"] + {{/* Our controller needs permission to add the ServiceAccounts from the user to the -infra-agent CRB. 
*/ -}} + - apiGroups: ["rbac.authorization.k8s.io"] + resources: ["clusterrolebindings"] + verbs: ["update"] + resourceNames: [ {{ include "newrelic-infra-operator.fullname.infra-agent" . | quote }} ] + {{- /* Controller must have permissions it will grant to other ServiceAccounts. */ -}} + {{- include "newrelic-infra-operator.infra-agent-monitoring-rules" . | nindent 2 }} +--- +{{/* infra-agent is the ClusterRole to be used by the injected agents to get metrics */}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "newrelic-infra-operator.fullname.infra-agent" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +rules: + {{- include "newrelic-infra-operator.infra-agent-monitoring-rules" . | nindent 2 }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/clusterrolebinding.yaml new file mode 100644 index 0000000000..1f5f8b89ba --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/clusterrolebinding.yaml @@ -0,0 +1,26 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "newrelic.common.naming.fullname" . }} +subjects: + - kind: ServiceAccount + name: {{ template "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} +--- +{{/* infra-agent is the ClusterRoleBinding to be used by the ServiceAccounts of the injected agents */}} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "newrelic-infra-operator.fullname.infra-agent" . }} + labels: + {{- include "newrelic.common.labels" . 
| nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "newrelic-infra-operator.fullname.infra-agent" . }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/configmap.yaml new file mode 100644 index 0000000000..fdb4a1e3ba --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/configmap.yaml @@ -0,0 +1,9 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-infra-operator.fullname.config" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + operator.yaml: {{- include "newrelic-infra-operator.configmap.data" . | toYaml | nindent 4 }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/deployment.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/deployment.yaml new file mode 100644 index 0000000000..40f3898873 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/deployment.yaml @@ -0,0 +1,92 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + replicas: {{ .Values.replicas }} + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }} + template: + metadata: + annotations: + checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} + checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }} + {{- if .Values.podAnnotations }} + {{- toYaml .Values.podAnnotations | nindent 8 }} + {{- end }} + labels: + {{- include "newrelic.common.labels.podLabels" . 
| nindent 8 }} + spec: + serviceAccountName: {{ template "newrelic.common.serviceAccount.name" . }} + {{- with include "newrelic.common.securityContext.pod" . }} + securityContext: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" ( list .Values.image.pullSecrets ) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + containers: + - name: {{ include "newrelic.common.naming.name" . }} + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.image "context" .) }} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + {{- with include "newrelic.common.securityContext.container" . }} + securityContext: + {{- . | nindent 10 }} + {{- end }} + env: + - name: CLUSTER_NAME + value: {{ include "newrelic.common.cluster" . }} + - name: NRIA_LICENSE_KEY + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.license.secretName" . }} + key: {{ include "newrelic.common.license.secretKeyName" . }} + volumeMounts: + - name: config + mountPath: /etc/newrelic/newrelic-infra-operator/ + - name: tls-key-cert-pair + mountPath: /tmp/k8s-webhook-server/serving-certs/ + readinessProbe: + httpGet: + path: /healthz + port: 9440 + initialDelaySeconds: 1 + periodSeconds: 1 + {{- if .Values.resources }} + resources: + {{- toYaml .Values.resources | nindent 10 }} + {{- end }} + volumes: + - name: config + configMap: + name: {{ include "newrelic-infra-operator.fullname.config" . }} + - name: tls-key-cert-pair + secret: + secretName: {{ include "newrelic-infra-operator.fullname.admission" . }} + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} + nodeSelector: + kubernetes.io/os: linux + {{ include "newrelic.common.nodeSelector" . | nindent 8 }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . | nindent 8 -}} + {{- end }} + {{- with include "newrelic.common.affinity" . }} + affinity: + {{- . 
| nindent 8 -}} + {{- end }} + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 8 }} + {{- end }} + hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }} + {{- if include "newrelic.common.hostNetwork" . }} + dnsPolicy: ClusterFirstWithHostNet + {{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/secret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/secret.yaml new file mode 100644 index 0000000000..f558ee86c9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/secret.yaml @@ -0,0 +1,2 @@ +{{- /* Common library will take care of creating the secret or not. */}} +{{- include "newrelic.common.license.secret" . }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/service.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/service.yaml new file mode 100644 index 0000000000..04af4d09c6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/service.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Service +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + ports: + - port: 443 + targetPort: 9443 + selector: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 4 }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/serviceaccount.yaml new file mode 100644 index 0000000000..b1e74523e5 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/templates/serviceaccount.yaml @@ -0,0 +1,13 @@ +{{- if include "newrelic.common.serviceAccount.create" . 
-}} +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- if include "newrelic.common.serviceAccount.annotations" . }} + annotations: + {{- include "newrelic.common.serviceAccount.annotations" . | nindent 4 }} + {{- end }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/deployment_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/deployment_test.yaml new file mode 100644 index 0000000000..a1ffa88d05 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/deployment_test.yaml @@ -0,0 +1,32 @@ +suite: test cluster environment variable setup +templates: + - templates/deployment.yaml + - templates/configmap.yaml + - templates/secret.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: has a linux node selector by default + set: + cluster: my-cluster + licenseKey: use-whatever + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + template: templates/deployment.yaml + - it: has a linux node selector and additional selectors + set: + cluster: my-cluster + licenseKey: use-whatever + nodeSelector: + aCoolTestLabel: aCoolTestValue + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + aCoolTestLabel: aCoolTestValue + template: templates/deployment.yaml diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/job_patch_psp_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/job_patch_psp_test.yaml new file mode 100644 index 0000000000..78f1b1f6af --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/job_patch_psp_test.yaml @@ -0,0 +1,23 @@ +suite: test rendering for PSPs +templates: + -
templates/admission-webhooks/job-patch/psp.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: If PSPs are enabled PodSecurityPolicy is rendered + set: + cluster: test-cluster + licenseKey: use-whatever + rbac: + pspEnabled: true + asserts: + - hasDocuments: + count: 1 + - it: If PSPs are disabled PodSecurityPolicy isn't rendered + set: + cluster: test-cluster + licenseKey: use-whatever + asserts: + - hasDocuments: + count: 0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/job_serviceaccount_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/job_serviceaccount_test.yaml new file mode 100644 index 0000000000..c6acda2db4 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/job_serviceaccount_test.yaml @@ -0,0 +1,64 @@ +suite: test job's serviceAccount +templates: + - templates/admission-webhooks/job-patch/job-createSecret.yaml + - templates/admission-webhooks/job-patch/job-patchWebhook.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: RBAC points to the service account that is created by default + set: + cluster: test-cluster + licenseKey: use-whatever + rbac.create: true + serviceAccount.create: true + asserts: + - equal: + path: spec.template.spec.serviceAccountName + value: my-release-newrelic-infra-operator-admission + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + cluster: test-cluster + licenseKey: use-whatever + rbac.create: true + serviceAccount.create: false + serviceAccount.name: sa-test + asserts: + - equal: + path: spec.template.spec.serviceAccountName + value: sa-test + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + cluster: test-cluster + licenseKey: use-whatever + rbac.create: true + serviceAccount.create: false + asserts: + - equal: + path: spec.template.spec.serviceAccountName + value:
default + + - it: has a linux node selector by default + set: + cluster: my-cluster + licenseKey: use-whatever + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + + - it: has a linux node selector and additional selectors + set: + cluster: my-cluster + licenseKey: use-whatever + nodeSelector: + aCoolTestLabel: aCoolTestValue + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + aCoolTestLabel: aCoolTestValue diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/rbac_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/rbac_test.yaml new file mode 100644 index 0000000000..03473cb393 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/tests/rbac_test.yaml @@ -0,0 +1,41 @@ +suite: test RBAC creation +templates: + - templates/admission-webhooks/job-patch/rolebinding.yaml + - templates/admission-webhooks/job-patch/clusterrolebinding.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: RBAC points to the service account that is created by default + set: + cluster: test-cluster + licenseKey: use-whatever + rbac.create: true + serviceAccount.create: true + asserts: + - equal: + path: subjects[0].name + value: my-release-newrelic-infra-operator-admission + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + cluster: test-cluster + licenseKey: use-whatever + rbac.create: true + serviceAccount.create: false + serviceAccount.name: sa-test + asserts: + - equal: + path: subjects[0].name + value: sa-test + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + cluster: test-cluster + licenseKey: use-whatever + rbac.create: true + serviceAccount.create: false + asserts: + - equal: + path: subjects[0].name + value: default diff --git 
a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/values.yaml new file mode 100644 index 0000000000..3dd6fd0554 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infra-operator/values.yaml @@ -0,0 +1,222 @@ +# -- Override the name of the chart +nameOverride: "" +# -- Override the full name of the release +fullnameOverride: "" + +# -- Name of the Kubernetes cluster monitored. Mandatory. Can be configured also with `global.cluster` +cluster: "" +# -- Sets the license key to use. Can be configured also with `global.licenseKey` +licenseKey: "" +# -- In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there. Can be configured also with `global.customSecretName` +customSecretName: "" +# -- In case you don't want to have the license key in your values, this allows you to point to the secret key under which the license key is located. Can be configured also with `global.customSecretLicenseKey` +customSecretLicenseKey: "" + +# -- Image for the New Relic Infrastructure Operator +# @default -- See `values.yaml` +image: + repository: newrelic/newrelic-infra-operator + tag: "" + pullPolicy: IfNotPresent + # -- The secrets that are needed to pull images from a custom registry. + pullSecrets: [] + # - name: regsecret + +# -- Image used to create certificates and inject them into the admission webhook +# @default -- See `values.yaml` +admissionWebhooksPatchJob: + image: + registry: # Defaults to registry.k8s.io + repository: ingress-nginx/kube-webhook-certgen + tag: v1.3.0 + pullPolicy: IfNotPresent + # -- The secrets that are needed to pull images from a custom registry. + pullSecrets: [] + # - name: regsecret + + # -- Volume mounts to add to the job; you might want to mount tmp if Pod Security Policies + # enforce a read-only root.
+ volumeMounts: [] + # - name: tmp + # mountPath: /tmp + # -- Volumes to add to the job container. + volumes: [] + # - name: tmp + # emptyDir: {} + +rbac: + # rbac.pspEnabled -- Whether the chart should create Pod Security Policy objects. + pspEnabled: false + +replicas: 1 + +# -- Resources available for this pod +resources: + limits: + memory: 80M + requests: + cpu: 100m + memory: 30M + +# -- Settings controlling ServiceAccount creation +# @default -- See `values.yaml` +serviceAccount: + # serviceAccount.create -- (bool) Specifies whether a ServiceAccount should be created + # @default -- `true` + create: + # If not set and create is true, a name is generated using the fullname template + name: "" + # Specify any annotations to add to the ServiceAccount + annotations: + +# -- Annotations to add to the pod. +podAnnotations: {} + +# -- Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` +priorityClassName: "" +# -- (bool) Sets pod's hostNetwork. Can be configured also with `global.hostNetwork` +# @default -- `false` +hostNetwork: +# -- Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` +dnsConfig: {} +# -- Sets security context (at pod level). Can be configured also with `global.podSecurityContext` +podSecurityContext: + fsGroup: 1001 + runAsUser: 1001 + runAsGroup: 1001 +# -- Sets security context (at container level). Can be configured also with `global.containerSecurityContext` +containerSecurityContext: {} + +# -- Sets pod/node affinities. Can be configured also with `global.affinity` +affinity: {} +# -- Sets pod's node selector. Can be configured also with `global.nodeSelector` +nodeSelector: {} +# -- Sets pod's tolerations to node taints. 
Can be configured also with `global.tolerations` +tolerations: [] + +certManager: + # certManager.enabled -- Use cert manager for webhook certs + enabled: false + +# -- Webhook timeout +# Ref: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#timeouts +timeoutSeconds: 10 + +# -- Operator configuration +# @default -- See `values.yaml` +config: + # -- IgnoreMutationErrors instructs the operator to ignore injection errors instead of failing. + # If set to false, injection errors could block the creation of pods. + ignoreMutationErrors: true + + # -- Configuration of the sidecar injection webhook + # @default -- See `values.yaml` + infraAgentInjection: +# policies: +# - podSelector: +# matchExpressions: +# - key: app +# operator: In +# values: [ "nginx-sidecar" ] +# + # All policies are ORed; if any policy matches, the sidecar is injected. + # Within a policy, podSelector, namespaceSelector, and namespaceName are ANDed; any of these, if not specified, is ignored. + # The following policy is injected if global.fargate=true and matches all pods belonging to any Fargate profile. + # policies: + # - podSelector: + # matchExpressions: + # - key: "eks.amazonaws.com/fargate-profile" + # operator: Exists + # namespaceName and namespaceSelector can also be leveraged. + # namespaceName: "my-namespace" + # namespaceSelector: {} + + # -- agentConfig contains the configuration for the injected container agent + # @default -- See `values.yaml` + agentConfig: + # customAttributes allows passing any custom attribute to the injected infra agents. + # The value is computed either from the defaultValue or taken at injection time from the label specified in "fromLabel". + # Either the label should exist or the default should be specified for the injection to work.
+ # customAttributes: + # - name: computeType + # defaultValue: serverless + # - name: fargateProfile + # fromLabel: eks.amazonaws.com/fargate-profile + + # -- Image of the infrastructure agent to be injected. + # @default -- See `values.yaml` + image: + repository: newrelic/infrastructure-k8s + tag: 2.13.15-unprivileged + pullPolicy: IfNotPresent + + # -- configSelectors is the way to configure resource requirements and extra envVars of the injected sidecar container. + # When mutating, the first configuration whose labelSelector matches the pod being mutated is applied. + # @default -- See `values.yaml` + configSelectors: + - resourceRequirements: # resourceRequirements to apply to the injected sidecar. + limits: + memory: 100M + cpu: 200m + requests: + memory: 50M + cpu: 100m + extraEnvVars: # extraEnvVars to pass to the injected sidecar. + DISABLE_KUBE_STATE_METRICS: "true" + # NRIA_VERBOSE: "1" + labelSelector: + matchExpressions: + - key: "app.kubernetes.io/name" + operator: NotIn + values: ["kube-state-metrics"] + - key: "app" + operator: NotIn + values: ["kube-state-metrics"] + - key: "k8s-app" + operator: NotIn + values: ["kube-state-metrics"] + + - resourceRequirements: + limits: + memory: 300M + cpu: 300m + requests: + memory: 150M + cpu: 150m + labelSelector: + matchLabels: + k8s-app: kube-state-metrics + # extraEnvVars: + # NRIA_VERBOSE: "1" + + - resourceRequirements: + limits: + memory: 300M + cpu: 300m + requests: + memory: 150M + cpu: 150m + labelSelector: + matchLabels: + app: kube-state-metrics + # extraEnvVars: + # NRIA_VERBOSE: "1" + + - resourceRequirements: + limits: + memory: 300M + cpu: 300m + requests: + memory: 150M + cpu: 150m + labelSelector: + matchLabels: + app.kubernetes.io/name: kube-state-metrics + # extraEnvVars: + # NRIA_VERBOSE: "1" + + # Pod security context of the injected sidecar. + # Notice that ReadOnlyRootFilesystem and AllowPrivilegeEscalation are enforced to true and false, respectively.
+ # podSecurityContext: + # RunAsUser: + # RunAsGroup: diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/.helmignore new file mode 100644 index 0000000000..2bfa6a4d95 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/.helmignore @@ -0,0 +1 @@ +tests/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/Chart.lock b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/Chart.lock new file mode 100644 index 0000000000..51857821fb --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +digest: sha256:2e1da613fd8a52706bde45af077779c5d69e9e1641bdf5c982eaf6d1ac67a443 +generated: "2024-08-30T23:46:25.952459233Z" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/Chart.yaml new file mode 100644 index 0000000000..2c211355b5 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/Chart.yaml @@ -0,0 +1,26 @@ +apiVersion: v2 +appVersion: 3.29.6 +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +description: A Helm chart to deploy the New Relic Kubernetes monitoring solution +home: https://docs.newrelic.com/docs/kubernetes-pixie/kubernetes-integration/get-started/introduction-kubernetes-integration/ +icon: https://newrelic.com/themes/custom/curio/assets/mediakit/NR_logo_Horizontal.svg +keywords: +- infrastructure +- newrelic +- monitoring +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +name: 
newrelic-infrastructure +sources: +- https://github.com/newrelic/nri-kubernetes/ +- https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure +- https://github.com/newrelic/infrastructure-agent/ +version: 3.34.6 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/README.md new file mode 100644 index 0000000000..923b6109b1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/README.md @@ -0,0 +1,220 @@ +# newrelic-infrastructure + +A Helm chart to deploy the New Relic Kubernetes monitoring solution + +**Homepage:** + +# Helm installation + +You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the +[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository: + +```shell +helm repo add nri-kubernetes https://newrelic.github.io/nri-kubernetes +helm upgrade --install newrelic-infrastructure nri-kubernetes/newrelic-infrastructure -f your-custom-values.yaml +``` + +## Source Code + +* +* +* + +## Values managed globally + +This chart implements the [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library) which +means that it honors a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found at +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). + +## Chart particularities + +### Low data mode +There are two mechanisms to reduce the amount of data that this integration sends to New Relic. 
See this snippet from the `values.yaml` file: +```yaml +common: + config: + interval: 15s + +lowDataMode: false +``` + +The `lowDataMode` toggle is the simplest way to reduce the data sent to New Relic. Setting it to `true` changes the scrape interval from the default +of 15 seconds to 30 seconds. + +If for some reason you need to fine-tune the number of seconds, you can use `common.config.interval` directly. If you take a look at the `values.yaml` +file, the value there is `nil`. If any value is set there, the `lowDataMode` toggle is ignored, as that value takes precedence. + +Setting this interval above 40 seconds can cause issues with the Kubernetes Cluster Explorer, so this chart restricts the interval +to the range of 10 to 40 seconds. + +### Affinities and tolerations + +The New Relic common library allows setting affinities, tolerations, and node selectors globally using e.g. `.global.affinity` to ease the configuration +when you install this chart through `nri-bundle`. This chart adds an extra level of granularity for the components that it deploys: +control plane, ksm, and kubelet. + +Take this snippet as an example: +```yaml +global: + affinity: {} +affinity: {} + +kubelet: + affinity: {} +ksm: + affinity: {} +controlPlane: + affinity: {} +``` + +The order in which an affinity is resolved is: first any `kubelet.affinity`, `ksm.affinity`, or `controlPlane.affinity`. If these values are empty the chart +falls back to `affinity` (at the root level), and if that value is also empty, the chart falls back to `global.affinity`. + +The same procedure applies to `nodeSelector` and `tolerations`. + +On the other hand, some components have affinities and tolerations predefined, e.g. to be able to run kubelet pods on nodes that are tainted as master +nodes, or to schedule the KSM scraper on the same node as KSM to reduce inter-node traffic. + +If you are having problems assigning pods to nodes, it may be because of this.
Take a look at the [`values.yaml`](values.yaml) to see if the pod that is +not behaving as you expect has any predefined value. + +### `hostNetwork` toggle + +In versions below v3, changing the `privileged` mode affected the `hostNetwork`. We changed this behavior and now you can set pods to use `hostNetwork` +using the corresponding [flags from the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md) +(`.global.hostNetwork` and `.hostNetwork`), but the component that scrapes data from the control plane always has `hostNetwork` enabled by default +(look in the [`values.yaml`](values.yaml) for `controlPlane.hostNetwork: true`). + +This is because the most common configuration of the control plane components is to listen only on `localhost`. + +If your cluster security policy does not allow the use of `hostNetwork`, you can disable control plane monitoring by setting `controlPlane.enabled` to +`false`. + +### `privileged` toggle + +The default value for `privileged` [from the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md) is +`false`, but in this particular chart it is set to `true` (look in the [`values.yaml`](values.yaml) for `privileged: true`). + +This is because `kubelet` pods need to run in privileged mode to fetch CPU, memory, process, and network metrics from your nodes. + +If your cluster security policy does not allow `privileged` in your pods' security context, you can disable it by setting `privileged` to +`false`, taking into account that you will lose all the metrics from the host and some host metadata that are added to the metrics of the +integrations that you have configured. + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| affinity | object | `{}` | Sets pod/node affinities almost globally.
(See [Affinities and tolerations](README.md#affinities-and-tolerations)) | +| cluster | string | `""` | Name of the Kubernetes cluster monitored. Can be configured also with `global.cluster` | +| common | object | See `values.yaml` | Config that applies to all instances of the solution: kubelet, ksm, control plane and sidecars. | +| common.agentConfig | object | `{}` | Config for the Infrastructure agent. Will be used by the forwarder sidecars and the agent running integrations. See: https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ | +| common.config.interval | duration | `15s` (See [Low data mode](README.md#low-data-mode)) | Intervals larger than 40s are not supported and will cause the NR UI to not behave properly. Any non-nil value will override the `lowDataMode` default. | +| common.config.namespaceSelector | object | `{}` | Config for filtering ksm and kubelet metrics by namespace. | +| containerSecurityContext | object | `{}` | Sets security context (at container level). Can be configured also with `global.containerSecurityContext` | +| controlPlane | object | See `values.yaml` | Configuration for the control plane scraper. | +| controlPlane.affinity | object | Deployed only in master nodes. | Affinity for the control plane DaemonSet. | +| controlPlane.agentConfig | object | `{}` | Config for the Infrastructure agent that will forward the metrics to the backend. It will be merged with the configuration in `.common.agentConfig` See: https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ | +| controlPlane.config.apiServer | object | Common settings for most K8s distributions. | API Server monitoring configuration | +| controlPlane.config.apiServer.enabled | bool | `true` | Enable API Server monitoring | +| controlPlane.config.controllerManager | object | Common settings for most K8s distributions. 
| Controller manager monitoring configuration |
+| controlPlane.config.controllerManager.enabled | bool | `true` | Enable controller manager monitoring. |
+| controlPlane.config.etcd | object | Common settings for most K8s distributions. | etcd monitoring configuration |
+| controlPlane.config.etcd.enabled | bool | `true` | Enable etcd monitoring. Might require manual configuration in some environments. |
+| controlPlane.config.retries | int | `3` | Number of retries after timeout expired |
+| controlPlane.config.scheduler | object | Common settings for most K8s distributions. | Scheduler monitoring configuration |
+| controlPlane.config.scheduler.enabled | bool | `true` | Enable scheduler monitoring. |
+| controlPlane.config.timeout | string | `"10s"` | Timeout for the Kubernetes APIs contacted by the integration |
+| controlPlane.enabled | bool | `true` | Deploy control plane monitoring component. |
+| controlPlane.hostNetwork | bool | `true` | Run Control Plane scraper with `hostNetwork`. `hostNetwork` is required for most control plane configurations, as they only accept connections from localhost. |
+| controlPlane.kind | string | `"DaemonSet"` | How to deploy the control plane scraper. If autodiscovery is in use, it should be `DaemonSet`. Advanced users using static endpoints set this to `Deployment` to avoid reporting metrics twice. |
+| controlPlane.tolerations | list | Schedules in all tainted nodes | Tolerations for the control plane DaemonSet. |
+| customAttributes | object | `{}` | Adds extra attributes to the cluster and all the metrics emitted to the backend. Can be configured also with `global.customAttributes` |
+| customSecretLicenseKey | string | `""` | In case you don't want to have the license key in your values, this allows you to point to the secret key where the license key is located.
Can be configured also with `global.customSecretLicenseKey` |
+| customSecretName | string | `""` | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there. Can be configured also with `global.customSecretName` |
+| dnsConfig | object | `{}` | Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` |
+| enableProcessMetrics | bool | `false` | Collect detailed metrics from processes running in the host. This defaults to true for accounts created before July 20, 2020. ref: https://docs.newrelic.com/docs/release-notes/infrastructure-release-notes/infrastructure-agent-release-notes/new-relic-infrastructure-agent-1120 |
+| fedramp.enabled | bool | `false` | Enables FedRAMP. Can be configured also with `global.fedramp.enabled` |
+| fullnameOverride | string | `""` | Override the full name of the release |
+| hostNetwork | bool | `false` | Sets pod's hostNetwork. Can be configured also with `global.hostNetwork` |
+| images | object | See `values.yaml` | Images used by the chart for the integration and agents. |
+| images.agent | object | See `values.yaml` | Image for the New Relic Infrastructure Agent plus integrations. |
+| images.forwarder | object | See `values.yaml` | Image for the New Relic Infrastructure Agent sidecar. |
+| images.integration | object | See `values.yaml` | Image for the New Relic Kubernetes integration. |
+| images.pullSecrets | list | `[]` | The secrets that are needed to pull images from a custom registry. |
+| integrations | object | `{}` | Config files for other New Relic integrations that should run in this cluster. |
+| ksm | object | See `values.yaml` | Configuration for the Deployment that collects state metrics from KSM (kube-state-metrics). |
+| ksm.affinity | object | Deployed in the same node as KSM | Affinity for the KSM Deployment.
| +| ksm.agentConfig | object | `{}` | Config for the Infrastructure agent that will forward the metrics to the backend. It will be merged with the configuration in `.common.agentConfig` See: https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ | +| ksm.config.retries | int | `3` | Number of retries after timeout expired | +| ksm.config.scheme | string | `"http"` | Scheme to use to connect to kube-state-metrics. Supported values are `http` and `https`. | +| ksm.config.selector | string | `"app.kubernetes.io/name=kube-state-metrics"` | Label selector that will be used to automatically discover an instance of kube-state-metrics running in the cluster. | +| ksm.config.timeout | string | `"10s"` | Timeout for the ksm API contacted by the integration | +| ksm.enabled | bool | `true` | Enable cluster state monitoring. Advanced users only. Setting this to `false` is not supported and will break the New Relic experience. | +| ksm.hostNetwork | bool | Not set | Sets pod's hostNetwork. When set bypasses global/common variable | +| ksm.resources | object | 100m/150M -/850M | Resources for the KSM scraper pod. Keep in mind that sharding is not supported at the moment, so memory usage for this component ramps up quickly on large clusters. | +| ksm.tolerations | list | Schedules in all tainted nodes | Tolerations for the KSM Deployment. | +| kubelet | object | See `values.yaml` | Configuration for the DaemonSet that collects metrics from the Kubelet. | +| kubelet.agentConfig | object | `{}` | Config for the Infrastructure agent that will forward the metrics to the backend and will run the integrations in this cluster. It will be merged with the configuration in `.common.agentConfig`. You can see all the agent configurations in [New Relic docs](https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/) e.g. 
you can set `passthrough_environment` in the [config file](https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/configure-infrastructure-agent/#config-file) so the agent passes those environment variables through to the integrations. |
+| kubelet.config.retries | int | `3` | Number of retries after timeout expired |
+| kubelet.config.scraperMaxReruns | int | `4` | Max number of scraper reruns when a scraper runtime error happens |
+| kubelet.config.timeout | string | `"10s"` | Timeout for the kubelet APIs contacted by the integration |
+| kubelet.enabled | bool | `true` | Enable kubelet monitoring. Advanced users only. Setting this to `false` is not supported and will break the New Relic experience. |
+| kubelet.extraEnv | list | `[]` | Add user environment variables to the agent |
+| kubelet.extraEnvFrom | list | `[]` | Add user environment from configMaps or secrets as variables to the agent |
+| kubelet.extraVolumeMounts | list | `[]` | Defines where to mount volumes specified with `extraVolumes` |
+| kubelet.extraVolumes | list | `[]` | Volumes to mount in the containers |
+| kubelet.hostNetwork | bool | Not set | Sets pod's hostNetwork. When set, bypasses the global/common variable |
+| kubelet.tolerations | list | Schedules in all tainted nodes | Tolerations for the kubelet DaemonSet. |
+| labels | object | `{}` | Additional labels for chart objects. Can be configured also with `global.labels` |
+| licenseKey | string | `""` | Sets the license key to use. Can be configured also with `global.licenseKey` |
+| lowDataMode | bool | `false` (See [Low data mode](README.md#low-data-mode)) | Send less data by incrementing the interval from `15s` (the default when `lowDataMode` is `false` or `nil`) to `30s`. Non-nil values of `common.config.interval` will override this value. |
+| nameOverride | string | `""` | Override the name of the chart |
+| nodeSelector | object | `{}` | Sets pod's node selector almost globally.
(See [Affinities and tolerations](README.md#affinities-and-tolerations)) | +| nrStaging | bool | `false` | Send the metrics to the staging backend. Requires a valid staging license key. Can be configured also with `global.nrStaging` | +| podAnnotations | object | `{}` | Annotations to be added to all pods created by the integration. | +| podLabels | object | `{}` | Additional labels for chart pods. Can be configured also with `global.podLabels` | +| podSecurityContext | object | `{}` | Sets security context (at pod level). Can be configured also with `global.podSecurityContext` | +| priorityClassName | string | `""` | Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` | +| privileged | bool | `true` | Run the integration with full access to the host filesystem and network. Running in this mode allows reporting fine-grained cpu, memory, process and network metrics for your nodes. | +| proxy | string | `""` | Configures the integration to send all HTTP/HTTPS request through the proxy in that URL. The URL should have a standard format like `https://user:password@hostname:port`. Can be configured also with `global.proxy` | +| rbac.create | bool | `true` | Whether the chart should automatically create the RBAC objects required to run. | +| rbac.pspEnabled | bool | `false` | Whether the chart should create Pod Security Policy objects. | +| selfMonitoring.pixie.enabled | bool | `false` | Enables the Pixie Health Check nri-flex config. This Flex config performs periodic checks of the Pixie /healthz and /statusz endpoints exposed by the Pixie Cloud Connector. A status for each endpoint is sent to New Relic in a pixieHealthCheck event. | +| serviceAccount | object | See `values.yaml` | Settings controlling ServiceAccount creation. | +| serviceAccount.create | bool | `true` | Whether the chart should automatically create the ServiceAccount objects required to run. 
|
+| sink.http.probeBackoff | string | `"5s"` | The amount of time the scraper container backs off when it fails to probe the infra agent sidecar. |
+| sink.http.probeTimeout | string | `"90s"` | The amount of time the scraper container probes the infra agent sidecar container before giving up and restarting during pod start. |
+| strategy | object | `type: Recreate` | Update strategy for the deployed Deployments. |
+| tolerations | list | `[]` | Sets pod's tolerations to node taints almost globally. (See [Affinities and tolerations](README.md#affinities-and-tolerations)) |
+| updateStrategy | object | See `values.yaml` | Update strategy for the deployed DaemonSets. |
+| verboseLog | bool | `false` | Enables debug logs for this integration, or for all integrations if set globally. Can be configured also with `global.verboseLog` |
+
+## Maintainers
+
+* [juanjjaramillo](https://github.com/juanjjaramillo)
+* [csongnr](https://github.com/csongnr)
+* [dbudziwojskiNR](https://github.com/dbudziwojskiNR)
+
+## Past Contributors
+
+Previous iterations of this chart started as a community project in the [stable Helm chart repository](https://github.com/helm/charts/). New Relic is very thankful for all the 15+ community members that contributed and helped maintain the chart there over the years:
+
+* coreypobrien
+* sstarcher
+* jmccarty3
+* slayerjain
+* ryanhope2
+* rk295
+* michaelajr
+* isindir
+* idirouhab
+* ismferd
+* enver
+* diclophis
+* jeffdesc
+* costimuraru
+* verwilst
+* ezelenka
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/README.md.gotmpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/README.md.gotmpl
new file mode 100644
index 0000000000..84f2f90834
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/README.md.gotmpl
@@ -0,0 +1,137 @@
+{{ template "chart.header" . }}
+{{ template "chart.deprecationWarning" . }}
+
+{{ template "chart.description" .
}}
+
+{{ template "chart.homepageLine" . }}
+
+# Helm installation
+
+You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the
+[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository:
+
+```shell
+helm repo add nri-kubernetes https://newrelic.github.io/nri-kubernetes
+helm upgrade --install newrelic-infrastructure nri-kubernetes/newrelic-infrastructure -f your-custom-values.yaml
+```
+
+{{ template "chart.sourcesSection" . }}
+
+## Values managed globally
+
+This chart implements [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library), which
+means that it honors a wide range of defaults and globals common to most New Relic Helm charts.
+
+Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found in the
+[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md).
+
+## Chart particularities
+
+### Low data mode
+There are two mechanisms to reduce the amount of data that this integration sends to New Relic. See this snippet from the `values.yaml` file:
+```yaml
+common:
+  config:
+    interval: 15s
+
+lowDataMode: false
+```
+
+The `lowDataMode` toggle is the simplest way to reduce the data sent to New Relic. Setting it to `true` changes the default scrape interval from 15 seconds
+to 30 seconds.
+
+If for some reason you need to fine-tune the interval, you can set `common.config.interval` directly. If you take a look at the `values.yaml`
+file, the value there is `nil`. If any value is set there, the `lowDataMode` toggle is ignored, as this value takes precedence.
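+
+As a concrete illustration of this precedence, consider the following hypothetical values fragment (the numbers here are illustrative, not recommendations): it keeps the scrape interval at 20 seconds even though `lowDataMode` is enabled, because any non-nil `common.config.interval` always wins:
+
+```yaml
+# Hypothetical override: a non-nil interval takes precedence over lowDataMode,
+# so the integration scrapes every 20s instead of the 30s lowDataMode default.
+lowDataMode: true
+common:
+  config:
+    interval: 20s
+```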
+
+Setting this interval above 40 seconds can cause issues with the Kubernetes Cluster Explorer, so this chart limits the interval
+to the range of 10 to 40 seconds.
+
+### Affinities and tolerations
+
+The New Relic common library allows setting affinities, tolerations, and node selectors globally, e.g. with `.global.affinity`, to ease the configuration
+when you install this chart through `nri-bundle`. This chart adds an extra level of granularity for the components that it deploys:
+control plane, ksm, and kubelet.
+
+Take this snippet as an example:
+```yaml
+global:
+  affinity: {}
+affinity: {}
+
+kubelet:
+  affinity: {}
+ksm:
+  affinity: {}
+controlPlane:
+  affinity: {}
+```
+
+Affinity is resolved in order: the chart first takes any `kubelet.affinity`, `ksm.affinity`, or `controlPlane.affinity`. If these values are empty, the chart
+falls back to `affinity` (at root level), and if that value is empty, the chart falls back to `global.affinity`.
+
+The same procedure applies to `nodeSelector` and `tolerations`.
+
+On the other hand, some components have affinities and tolerations predefined, e.g. to be able to run kubelet pods on nodes that are tainted as master
+nodes, or to schedule the KSM scraper on the same node as KSM to reduce inter-node traffic.
+
+If you are having problems assigning pods to nodes, it may be because of this. Take a look at the [`values.yaml`](values.yaml) to see if the pod that is
+not having your expected behavior has any predefined value.
+
+### `hostNetwork` toggle
+
+In versions below v3, changing the `privileged` mode affected the `hostNetwork`.
We changed this behavior and now you can set pods to use `hostNetwork`
+using the corresponding [flags from the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md)
+(`.global.hostNetwork` and `.hostNetwork`), but the component that scrapes data from the control plane has `hostNetwork` enabled by default
+(look in the [`values.yaml`](values.yaml) for `controlPlane.hostNetwork: true`).
+
+This is because the control plane components are most commonly configured to listen only on `localhost`.
+
+If your cluster security policy does not allow the use of `hostNetwork`, you can disable control plane monitoring by setting `controlPlane.enabled` to
+`false`.
+
+### `privileged` toggle
+
+The default value for `privileged` [from the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md) is
+`false`, but in this particular chart it is set to `true` (look in the [`values.yaml`](values.yaml) for `privileged: true`).
+
+This is because `kubelet` pods need to run in privileged mode to fetch cpu, memory, process, and network metrics of your nodes.
+
+If your cluster security policy does not allow `privileged` in your pods' security context, you can disable it by setting `privileged` to
+`false`, taking into account that you will lose all the metrics from the host and some host metadata that are added to the metrics of the
+integrations that you have configured.
+
+{{ template "chart.valuesSection" . }}
+
+{{ if .Maintainers }}
+## Maintainers
+{{ range .Maintainers }}
+{{- if .Name }}
+{{- if .Url }}
+* [{{ .Name }}]({{ .Url }})
+{{- else }}
+* {{ .Name }}
+{{- end }}
+{{- end }}
+{{- end }}
+{{- end }}
+
+## Past Contributors
+
+Previous iterations of this chart started as a community project in the [stable Helm chart repository](https://github.com/helm/charts/).
New Relic is very thankful for all the 15+ community members that contributed and helped maintain the chart there over the years: + +* coreypobrien +* sstarcher +* jmccarty3 +* slayerjain +* ryanhope2 +* rk295 +* michaelajr +* isindir +* idirouhab +* ismferd +* enver +* diclophis +* jeffdesc +* costimuraru +* verwilst +* ezelenka diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/.helmignore new file mode 100644 index 0000000000..0e8a0eb36f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/Chart.yaml new file mode 100644 index 0000000000..f2ee5497ee --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/Chart.yaml @@ -0,0 +1,17 @@ +apiVersion: v2 +description: Provides helpers to provide consistency on all the charts +keywords: +- newrelic +- chart-library +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +- name: kang-makes + url: https://github.com/kang-makes +name: common-library +type: library +version: 1.3.0 diff --git 
a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/DEVELOPERS.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/DEVELOPERS.md
new file mode 100644
index 0000000000..7208c673ec
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/DEVELOPERS.md
@@ -0,0 +1,747 @@
+# Functions/templates documented for chart writers
+Here is some rough documentation, organized by the file that contains each function, the function
+name and how to use it. We are not covering functions that start with `_` (e.g.
+`newrelic.common.license._licenseKey`) because they are used internally by this library for
+other helpers. Helm does not have the concept of "public" or "private" functions/templates, so
+this is a convention of ours.
+
+## _naming.tpl
+These functions are used to name objects.
+
+### `newrelic.common.naming.name`
+This is the same as the idiomatic `CHART-NAME.name` that is created when you use `helm create`.
+
+It honors `.Values.nameOverride`.
+
+Usage:
+```mustache
+{{ include "newrelic.common.naming.name" . }}
+```
+
+### `newrelic.common.naming.fullname`
+This is the same as the idiomatic `CHART-NAME.fullname` that is created when you use `helm create`.
+
+It honors `.Values.fullnameOverride`.
+
+Usage:
+```mustache
+{{ include "newrelic.common.naming.fullname" . }}
+```
+
+### `newrelic.common.naming.chart`
+This is the same as the idiomatic `CHART-NAME.chart` that is created when you use `helm create`.
+
+It is mostly useless for chart writers. It is used internally for templating the labels but there
+is no reason to keep it "private".
+
+Usage:
+```mustache
+{{ include "newrelic.common.naming.chart" . }}
+```
+
+### `newrelic.common.naming.truncateToDNS`
+This is a useful template that trims a string to 63 chars and ensures it does not end with a dash (`-`).
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+
+Usage:
+```mustache
+{{- $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-enough-long-to-break-something" }}
+{{- $truncatedName := include "newrelic.common.naming.truncateToDNS" $nameToTruncate }}
+{{- $truncatedName }}
+{{- /* This should print: a-really-really-really-really-REALLY-long-string-that-should-be */ -}}
+```
+
+### `newrelic.common.naming.truncateToDNSWithSuffix`
+This template function is the same as the above, but instead of receiving a string you should give it a `dict`
+with a `name` and a `suffix`. This function will join them with a dash (`-`) and trim the `name` so the
+result of `name-suffix` is no more than 63 chars.
+
+Usage:
+```mustache
+{{- $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-enough-long-to-break-something" }}
+{{- $suffix := "A-NOT-SO-LONG-SUFFIX" }}
+{{- $truncatedName := include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" $nameToTruncate "suffix" $suffix) }}
+{{- $truncatedName }}
+{{- /* This should print: a-really-really-really-really-REALLY-long-A-NOT-SO-LONG-SUFFIX */ -}}
+```
+
+
+
+## _labels.tpl
+### `newrelic.common.labels`, `newrelic.common.labels.selectorLabels` and `newrelic.common.labels.podLabels`
+These are functions that are used to label objects. They are configured by this `values.yaml`:
+```yaml
+global:
+  podLabels: {} # included in all the pods of all the charts that implement this library
+  labels: {} # included in all the objects of all the charts that implement this library
+podLabels: {} # included in all the pods of this chart
+labels: {} # included in all the objects of this chart
+```
+
+Label maps are merged from global to local values.
+
+A chart writer should use them like this:
+```mustache
+metadata:
+  labels:
+    {{- include "newrelic.common.labels" .
| nindent 4 }}
+spec:
+  selector:
+    matchLabels:
+      {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }}
+  template:
+    metadata:
+      labels:
+        {{- include "newrelic.common.labels.podLabels" . | nindent 8 }}
+```
+
+`newrelic.common.labels.podLabels` includes `newrelic.common.labels.selectorLabels` automatically.
+
+
+
+## _priority-class-name.tpl
+### `newrelic.common.priorityClassName`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  priorityClassName: ""
+priorityClassName: ""
+```
+
+Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this
+library to work properly. If in your values a non-falsy `priorityClassName` is found, the global
+one is always ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.priorityClassName" . }}
+  priorityClassName: {{ . }}
+  {{- end }}
+```
+
+
+
+## _hostnetwork.tpl
+### `newrelic.common.hostNetwork`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  hostNetwork: # Note that this is empty (nil)
+hostNetwork: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `hostNetwork` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.hostNetwork" . }}
+  hostNetwork: {{ . }}
+  {{- end }}
+```
+
+### `newrelic.common.hostNetwork.value`
+This function is an abstraction of the function above, but it directly returns "true" or "false".
+
+Be careful using this with an `if`, as Helm evaluates the string "false" as `true`.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  hostNetwork: {{ include "newrelic.common.hostNetwork.value" .
}}
+```
+
+
+
+## _dnsconfig.tpl
+### `newrelic.common.dnsConfig`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  dnsConfig: {}
+dnsConfig: {}
+```
+
+Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this
+library to work properly. If in your values a non-falsy `dnsConfig` is found, the global
+one is always ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.dnsConfig" . }}
+  dnsConfig:
+    {{- . | nindent 4 }}
+  {{- end }}
+```
+
+
+
+## _images.tpl
+These functions help us deal with how images are templated. They allow setting the registry
+from which to fetch images globally while being flexible enough to fit different maps of images
+and deployments with one or more images. Here is an example of a complex `values.yaml` that
+we are going to use throughout the documentation of these functions:
+
+```yaml
+global:
+  images:
+    registry: nexus-3-instance.internal.clients-domain.tld
+jobImage:
+  registry: # defaults to "example.tld" when empty in these examples
+  repository: ingress-nginx/kube-webhook-certgen
+  tag: v1.1.1
+  pullPolicy: IfNotPresent
+  pullSecrets: []
+images:
+  integration:
+    registry:
+    repository: newrelic/nri-kube-events
+    tag: 1.8.0
+    pullPolicy: IfNotPresent
+  agent:
+    registry:
+    repository: newrelic/k8s-events-forwarder
+    tag: 1.22.0
+    pullPolicy: IfNotPresent
+  pullSecrets: []
+```
+
+### `newrelic.common.images.image`
+This returns a string with the image ready to be downloaded, including the registry, the repository and the tag.
+`defaultRegistry` is used to keep the `registry` field empty in `values.yaml` so you can override the image using
+`global.images.registry` or your local `jobImage.registry`, and still be able to fall back to a registry that is not `docker.io`
+(or the default repository that the client could have set in the CRI).
+
+Usage:
+```mustache
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.agent "context" .) }}
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.image" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }}
+```
+
+### `newrelic.common.images.registry`
+It returns the registry from the global or local values. You should avoid using this helper to create your image
+URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed.
+
+Usage:
+```mustache
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.agent "context" .) }}
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.registry" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }}
+```
+
+### `newrelic.common.images.repository`
+It returns the repository from the values. You should avoid using this helper to create your image
+URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.jobImage "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.agent "context" .) }}
+```
+
+### `newrelic.common.images.tag`
+It returns the image's tag from the values.
You should avoid using this helper to create your image
+URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.jobImage "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.agent "context" .) }}
+```
+
+### `newrelic.common.images.renderPullSecrets`
+It returns a merged map that contains the pull secrets from the global configuration and the local one.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.jobImage.pullSecrets "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }}
+```
+
+
+
+## _serviceaccount.tpl
+These functions are used to evaluate whether the service account should be created, with which name, and which annotations to add to it.
+
+The functions that the common library has implemented for service accounts are:
+* `newrelic.common.serviceAccount.create`
+* `newrelic.common.serviceAccount.name`
+* `newrelic.common.serviceAccount.annotations`
+
+Usage:
+```mustache
+{{- if include "newrelic.common.serviceAccount.create" . -}}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  {{- with (include "newrelic.common.serviceAccount.annotations" .) }}
+  annotations:
+    {{- . | nindent 4 }}
+  {{- end }}
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+  name: {{ include "newrelic.common.serviceAccount.name" .
}}
+  namespace: {{ .Release.Namespace }}
+{{- end }}
+```
+
+
+
+## _affinity.tpl, _nodeselector.tpl and _tolerations.tpl
+These three files are almost the same and they follow the idiomatic way of `helm create`.
+
+Like the other helpers, each function also looks for a global value:
+```yaml
+global:
+  affinity: {}
+  nodeSelector: {}
+  tolerations: []
+affinity: {}
+nodeSelector: {}
+tolerations: []
+```
+
+The values here are replaced instead of being merged. If a value at root level is found, the global one is ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.nodeSelector" . }}
+  nodeSelector:
+    {{- . | nindent 4 }}
+  {{- end }}
+  {{- with include "newrelic.common.affinity" . }}
+  affinity:
+    {{- . | nindent 4 }}
+  {{- end }}
+  {{- with include "newrelic.common.tolerations" . }}
+  tolerations:
+    {{- . | nindent 4 }}
+  {{- end }}
+```
+
+
+
+## _agent-config.tpl
+### `newrelic.common.agentConfig.defaults`
+This returns a YAML that the agent can use directly as a config, including other options from the values file like verbose mode,
+custom attributes, FedRAMP and such.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+  name: {{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "agent-config") }}
+  namespace: {{ .Release.Namespace }}
+data:
+  newrelic-infra.yml: |-
+    # This is the configuration file for the infrastructure agent. See:
+    # https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/
+    {{- include "newrelic.common.agentConfig.defaults" . | nindent 4 }}
+```
+
+
+
+## _cluster.tpl
+### `newrelic.common.cluster`
+Returns the cluster name.
+
+Usage:
+```mustache
+{{ include "newrelic.common.cluster" .
}}
+```
+
+
+
+## _custom-attributes.tpl
+### `newrelic.common.customAttributes`
+Returns custom attributes in YAML format.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: example
+data:
+  custom-attributes.yaml: |
+    {{- include "newrelic.common.customAttributes" . | nindent 4 }}
+  custom-attributes.json: |
+    {{- include "newrelic.common.customAttributes" . | fromYaml | toJson | nindent 4 }}
+```
+
+
+
+## _fedramp.tpl
+### `newrelic.common.fedramp.enabled`
+Returns true if FedRAMP is enabled or an empty string if not. It can be safely used in conditionals because an empty string is falsy in Helm.
+
+Usage:
+```mustache
+{{ include "newrelic.common.fedramp.enabled" . }}
+```
+
+### `newrelic.common.fedramp.enabled.value`
+Returns true if FedRAMP is enabled or false if not. This is to have the value of FedRAMP ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.fedramp.enabled.value" . }}
+```
+
+
+
+## _license.tpl
+### `newrelic.common.license.secretName` and `newrelic.common.license.secretKeyName`
+Returns the name of the secret and the key inside it from which to read the license key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the license key.
+
+To create the secret use `newrelic.common.license.secret`.
+
+Usage:
+```mustache
+{{- if and (.Values.controlPlane.enabled) (not (include "newrelic.fargate" .)) }}
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example
+spec:
+  containers:
+    - name: agent
+      env:
+        - name: "NRIA_LICENSE_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.license.secretName" . }}
+              key: {{ include "newrelic.common.license.secretKeyName" . }}
+```
+
+
+
+## _license_secret.tpl
+### `newrelic.common.license.secret`
+This function templates the secret that is used by agents and integrations with the license key provided by the user.
It will
+template nothing (an empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails if the user has not provided any license key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You only need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.license.secret" . -}}
+```
+
+
+
+## _insights.tpl
+### `newrelic.common.insightsKey.secretName` and `newrelic.common.insightsKey.secretKeyName`
+Returns the name of the secret and the key inside it from which to read the insights key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the insights key.
+
+To create the secret use `newrelic.common.insightsKey.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: statsd
+spec:
+  containers:
+    - name: statsd
+      env:
+        - name: "INSIGHTS_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.insightsKey.secretName" . }}
+              key: {{ include "newrelic.common.insightsKey.secretKeyName" . }}
+```
+
+
+
+## _insights_secret.tpl
+### `newrelic.common.insightsKey.secret`
+This function templates the secret that is used by agents and integrations with the insights key provided by the user. It will
+template nothing (an empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails if the user has not provided any insights key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You only need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.insightsKey.secret" . -}}
+```
+
+
+
+## _userkey.tpl
+### `newrelic.common.userKey.secretName` and `newrelic.common.userKey.secretKeyName`
+Returns the name of the secret and the key inside it from which to read a user key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the user key.
+
+To create the secret use `newrelic.common.userKey.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: statsd
+spec:
+  containers:
+    - name: statsd
+      env:
+        - name: "API_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.userKey.secretName" . }}
+              key: {{ include "newrelic.common.userKey.secretKeyName" . }}
+```
+
+
+
+## _userkey_secret.tpl
+### `newrelic.common.userKey.secret`
+This function templates the secret that is used by agents and integrations with a user key provided by the user. It will
+template nothing (an empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails if the user has not provided any user key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You only need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.userKey.secret" . -}}
+```
+
+
+
+## _region.tpl
+### `newrelic.common.region.validate`
+Given a string, returns the normalized name for the region if it is valid.
+
+This function does not need the context of the chart, only the value to be validated. The region returned
+honors the region [definition of the newrelic-client-go implementation](https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21)
+so (as of 2024/09/14) it returns the region as "US", "EU", "Staging", or "Local".
+
+If the provided region does not match one of these four, the helper calls `fail` and aborts templating.
+
+Usage:
+```mustache
+{{ include "newrelic.common.region.validate" "us" }}
+```
+
+### `newrelic.common.region`
+It reads global and local variables for `region`:
+```yaml
+global:
+  region: # Note that this can be empty (nil) or "" (empty string)
+region: # Note that this can be empty (nil) or "" (empty string)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If a `region` is defined
+in your values, the global one is always ignored.
+
+This function is protective: it forces users to set the license key as a value in their
+`values.yaml` or to specify a global or local `region` value. To understand how the `region` value
+works, read the documentation of `newrelic.common.region.validate`.
+
+The function picks US, EU, or Staging based on the license key and the
+`nrStaging` toggle. Whichever region is computed from the license/toggle can be overridden by
+the `region` value.
+
+Usage:
+```mustache
+{{ include "newrelic.common.region" . }}
+```
+
+
+
+## _low-data-mode.tpl
+### `newrelic.common.lowDataMode`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  lowDataMode: # Note that this is empty (nil)
+lowDataMode: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If a `lowDataMode` is defined
+in your values, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.lowDataMode" . }}
+```
+
+
+
+## _privileged.tpl
+### `newrelic.common.privileged`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  privileged: # Note that this is empty (nil)
+privileged: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly.
If a `privileged` is defined
+in your values, the global one is always ignored.
+
+Chart writers can override the default of the common library by putting a `true` directly in the
+chart's `values.yaml`.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged" . }}
+```
+
+### `newrelic.common.privileged.value`
+Returns true if privileged mode is enabled or false if not. This is to have the value of privileged ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged.value" . }}
+```
+
+
+
+## _proxy.tpl
+### `newrelic.common.proxy`
+Returns the proxy URL configured by the user.
+
+Usage:
+```mustache
+{{ include "newrelic.common.proxy" . }}
+```
+
+
+
+## _security-context.tpl
+Use these functions to share the security context among all charts. They are useful in clusters that enforce not
+running as the root user (like OpenShift) or that have admission webhooks.
+
+The functions are:
+* `newrelic.common.securityContext.container`
+* `newrelic.common.securityContext.pod`
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example
+spec:
+  {{- with include "newrelic.common.securityContext.pod" . }}
+  securityContext:
+    {{- . | nindent 4 }}
+  {{- end }}
+
+  containers:
+    - name: example
+      {{- with include "newrelic.common.securityContext.container" . }}
+      securityContext:
+        {{- . | nindent 8 }}
+      {{- end }}
+```
+
+
+
+## _staging.tpl
+### `newrelic.common.nrStaging`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  nrStaging: # Note that this is empty (nil)
+nrStaging: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If a `nrStaging` is defined
+in your values, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging" . }}
+```
+
+### `newrelic.common.nrStaging.value`
+Returns true if staging is enabled or false if not. This is to have the staging value ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging.value" . }}
+```
+
+
+
+## _verbose-log.tpl
+### `newrelic.common.verboseLog`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  verboseLog: # Note that this is empty (nil)
+verboseLog: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If a `verboseLog` is defined
+in your values, the global one is always ignored.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog" . }}
+```
+
+### `newrelic.common.verboseLog.valueAsBoolean`
+Returns true if verbose is enabled or false if not. This is to have the verbose value ready to be templated as a boolean.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsBoolean" . }}
+```
+
+### `newrelic.common.verboseLog.valueAsInt`
+Returns 1 if verbose is enabled or 0 if not. This is to have the verbose value ready to be templated as an integer.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsInt" . }}
+```
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/README.md
new file mode 100644
index 0000000000..10f08ca677
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/README.md
@@ -0,0 +1,106 @@
+# Helm Common library
+
+The common library is a way to unify the UX across all the Helm charts that implement it.
+
+The tooling suite that New Relic offers is huge and keeps growing, and this library allows setting things globally
+as well as locally for a single chart.
+
+## Documentation for chart writers
+
+If you are writing a chart that is going to use this library, you can check the [developers guide](/library/common-library/DEVELOPERS.md) to see all
+the functions/templates that we have implemented, what they do, and how to use them.
+
+## Values managed globally
+
+We want to have a seamless experience across all the charts, so we created this library to standardize the behaviour
+of all the charts. Sadly, because of the complexity of all these integrations, not all the charts behave exactly as expected.
+
+An example is `newrelic-infrastructure`, which ignores `hostNetwork` in the control plane scraper because most users have the
+control plane listening on `localhost` on the node.
+
+For each chart that has a special behavior (or further information about its behavior) there is a "chart particularities" section
+in its README.md that explains the expected behavior.
+
+At the time of writing, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this
+library and honor global options as described in this document.
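+
+As a quick illustration, a hypothetical `values.yaml` for a bundle of charts implementing this library could set options once at the global level and override one of them for a single chart (the names and values below are illustrative only, not real keys):
+
+```yaml
+global:
+  cluster: my-cluster   # every chart reports against the same cluster
+  licenseKey: xxxx      # placeholder, not a real license key
+  lowDataMode: true     # applies to every chart...
+
+newrelic-infrastructure:
+  lowDataMode: false    # ...except this one, where the local value wins
+```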
+
+Here is a list of global options:
+
+| Global keys | Local keys | Default | Merged[1](#values-managed-globally-1) | Description |
+|-------------|------------|---------|--------------------------------------------------|-------------|
+| global.cluster | cluster | `""` | | Name of the Kubernetes cluster monitored |
+| global.licenseKey | licenseKey | `""` | | Sets the license key to use |
+| global.customSecretName | customSecretName | `""` | | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there |
+| global.customSecretLicenseKey | customSecretLicenseKey | `""` | | In case you don't want to have the license key in your values, this allows you to point to the key inside that secret where the license key is located |
+| global.podLabels | podLabels | `{}` | yes | Additional labels for chart pods |
+| global.labels | labels | `{}` | yes | Additional labels for chart objects |
+| global.priorityClassName | priorityClassName | `""` | | Sets pod's priorityClassName |
+| global.hostNetwork | hostNetwork | `false` | | Sets pod's hostNetwork |
+| global.dnsConfig | dnsConfig | `{}` | | Sets pod's dnsConfig |
+| global.images.registry | See [Further information](#values-managed-globally-2) | `""` | | Changes the registry where to get the images.
Useful when there is an internal image cache/proxy |
+| global.images.pullSecrets | See [Further information](#values-managed-globally-2) | `[]` | yes | Sets secrets to be able to fetch images |
+| global.podSecurityContext | podSecurityContext | `{}` | | Sets security context (at pod level) |
+| global.containerSecurityContext | containerSecurityContext | `{}` | | Sets security context (at container level) |
+| global.affinity | affinity | `{}` | | Sets pod/node affinities |
+| global.nodeSelector | nodeSelector | `{}` | | Sets pod's node selector |
+| global.tolerations | tolerations | `[]` | | Sets pod's tolerations to node taints |
+| global.serviceAccount.create | serviceAccount.create | `true` | | Configures whether the service account should be created or not |
+| global.serviceAccount.name | serviceAccount.name | name of the release | | Changes the name of the service account. This is honored if you disable the creation of the service account on this chart so you can use your own. |
+| global.serviceAccount.annotations | serviceAccount.annotations | `{}` | yes | Adds these annotations to the service account we create |
+| global.customAttributes | customAttributes | `{}` | | Adds extra attributes to the cluster and all the metrics emitted to the backend |
+| global.fedramp | fedramp | `false` | | Enables FedRAMP |
+| global.lowDataMode | lowDataMode | `false` | | Reduces the number of metrics sent in order to reduce costs |
+| global.privileged | privileged | Depends on the chart | | In each integration it has a different behavior. See [Further information](#values-managed-globally-3) |
+| global.proxy | proxy | `""` | | Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port` |
+| global.nrStaging | nrStaging | `false` | | Sends the metrics to the staging backend.
Requires a valid staging license key |
+| global.verboseLog | verboseLog | `false` | | Sets debug/trace logging for this integration, or for all integrations if it is set globally |
+
+### Further information
+
+#### 1. Merged
+
+Merged means that the values from global are not replaced by the local ones. Consider this example:
+```yaml
+global:
+  labels:
+    global: global
+  hostNetwork: true
+  nodeSelector:
+    global: global
+
+labels:
+  local: local
+nodeSelector:
+  local: local
+hostNetwork: false
+```
+
+These values will template `hostNetwork` to `false`, a map of labels `{ "global": "global", "local": "local" }`, and a `nodeSelector` with
+`{ "local": "local" }`.
+
+As Helm merges all maps by default, it could be confusing that we have two behaviors (merging `labels` and replacing `nodeSelector`)
+when going from global to local values. This is the rationale behind it:
+* `hostNetwork` is templated to `false` because the local value overrides the one defined globally.
+* `labels` are merged because the user may want to label all the New Relic pods at once and label other solutions' pods differently for
+  clarity's sake.
+* `nodeSelector` is not merged like `labels` because merging could make it harder to overwrite/delete a selector that comes from global,
+  given the logic that Helm follows when merging maps.
+
+
+#### 2. Fine grain registries
+
+Some charts have only one image while others can have two or more. The local path for the registry can change depending
+on the chart itself.
+
+As this is mostly unique per Helm chart, you should take a look at the chart's values table (or directly at the `values.yaml` file) to see all the
+images that you can change.
+
+This should only be needed if you have an advanced setup that requires enough granularity to set a proxy/cache registry per integration.
+
+
+
+#### 3. Privileged mode
+
+By default, from the common library, the privileged mode is set to false.
But most of the Helm charts require it to be true to fetch more
+metrics, so you may see it set to true in some charts. The consequences of privileged mode differ from one chart to another, so each chart that
+honors the privileged mode toggle should have a section in its README explaining the behavior with it enabled or disabled.
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_affinity.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_affinity.tpl
new file mode 100644
index 0000000000..1b2636754e
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_affinity.tpl
@@ -0,0 +1,10 @@
+{{- /* Defines the Pod affinity */ -}}
+{{- define "newrelic.common.affinity" -}}
+  {{- if .Values.affinity -}}
+    {{- toYaml .Values.affinity -}}
+  {{- else if .Values.global -}}
+    {{- if .Values.global.affinity -}}
+      {{- toYaml .Values.global.affinity -}}
+    {{- end -}}
+  {{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_agent-config.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_agent-config.tpl
new file mode 100644
index 0000000000..9c32861a02
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_agent-config.tpl
@@ -0,0 +1,26 @@
+{{/*
+This helper should return the defaults that all agents should have
+*/}}
+{{- define "newrelic.common.agentConfig.defaults" -}}
+{{- if include "newrelic.common.verboseLog" . }}
+log:
+  level: trace
+{{- end }}
+
+{{- if (include "newrelic.common.nrStaging" . ) }}
+staging: true
+{{- end }}
+
+{{- with include "newrelic.common.proxy" . }}
+proxy: {{ . | quote }}
+{{- end }}
+
+{{- with include "newrelic.common.fedramp.enabled" . }}
+fedramp: {{ .
}} +{{- end }} + +{{- with fromYaml ( include "newrelic.common.customAttributes" . ) }} +custom_attributes: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_cluster.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_cluster.tpl new file mode 100644 index 0000000000..0197dd35a3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_cluster.tpl @@ -0,0 +1,15 @@ +{{/* +Return the cluster +*/}} +{{- define "newrelic.common.cluster" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.cluster -}} + {{- .Values.cluster -}} +{{- else if $global.cluster -}} + {{- $global.cluster -}} +{{- else -}} + {{ fail "There is not cluster name definition set neither in `.global.cluster' nor `.cluster' in your values.yaml. Cluster name is required." }} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_custom-attributes.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_custom-attributes.tpl new file mode 100644 index 0000000000..92020719c3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_custom-attributes.tpl @@ -0,0 +1,17 @@ +{{/* +This will render custom attributes as a YAML ready to be templated or be used with `fromYaml`. 
+*/}} +{{- define "newrelic.common.customAttributes" -}} +{{- $customAttributes := dict -}} + +{{- $global := index .Values "global" | default dict -}} +{{- if $global.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes $global.customAttributes -}} +{{- end -}} + +{{- if .Values.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes .Values.customAttributes -}} +{{- end -}} + +{{- toYaml $customAttributes -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_dnsconfig.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_dnsconfig.tpl new file mode 100644 index 0000000000..d4e40aa8af --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_dnsconfig.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod dnsConfig */ -}} +{{- define "newrelic.common.dnsConfig" -}} + {{- if .Values.dnsConfig -}} + {{- toYaml .Values.dnsConfig -}} + {{- else if .Values.global -}} + {{- if .Values.global.dnsConfig -}} + {{- toYaml .Values.global.dnsConfig -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_fedramp.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_fedramp.tpl new file mode 100644 index 0000000000..9df8d6b5e9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_fedramp.tpl @@ -0,0 +1,25 @@ +{{- /* Defines the fedRAMP flag */ -}} +{{- define "newrelic.common.fedramp.enabled" -}} + {{- if .Values.fedramp -}} + {{- if .Values.fedramp.enabled -}} + {{- .Values.fedramp.enabled -}} + {{- end -}} + {{- else if .Values.global -}} + {{- if .Values.global.fedramp -}} + {{- if .Values.global.fedramp.enabled -}} + {{- 
.Values.global.fedramp.enabled -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + + + +{{- /* Return FedRAMP value directly ready to be templated */ -}} +{{- define "newrelic.common.fedramp.enabled.value" -}} +{{- if include "newrelic.common.fedramp.enabled" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_hostnetwork.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_hostnetwork.tpl new file mode 100644 index 0000000000..4cf017ef7e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_hostnetwork.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the hostNetwork toggle. +This helper allows to override the global `.global.hostNetwork` with the value of `.hostNetwork`. +Returns "true" if `hostNetwork` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.hostNetwork" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} + +{{- /* +`get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs + +We also want only to return when this is true, returning `false` here will template "false" (string) when doing +an `(include "newrelic.common.hostNetwork" .)`, which is not an "empty string" so it is `true` if it is used +as an evaluation somewhere else. +*/ -}} +{{- if get .Values "hostNetwork" | kindIs "bool" -}} + {{- if .Values.hostNetwork -}} + {{- .Values.hostNetwork -}} + {{- end -}} +{{- else if get $global "hostNetwork" | kindIs "bool" -}} + {{- if $global.hostNetwork -}} + {{- $global.hostNetwork -}} + {{- end -}} +{{- end -}} +{{- end -}} + + +{{- /* +Abstraction of the hostNetwork toggle. 
+This helper abstracts the function "newrelic.common.hostNetwork" to return true or false directly. +*/ -}} +{{- define "newrelic.common.hostNetwork.value" -}} +{{- if include "newrelic.common.hostNetwork" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_images.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_images.tpl new file mode 100644 index 0000000000..d4fb432905 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_images.tpl @@ -0,0 +1,94 @@ +{{- /* +Return the proper image name +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.image" -}} + {{- $registryName := include "newrelic.common.images.registry" ( dict "imageRoot" .imageRoot "defaultRegistry" .defaultRegistry "context" .context ) -}} + {{- $repositoryName := include "newrelic.common.images.repository" .imageRoot -}} + {{- $tag := include "newrelic.common.images.tag" ( dict "imageRoot" .imageRoot "context" .context) -}} + + {{- if $registryName -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag | quote -}} + {{- else -}} + {{- printf "%s:%s" $repositoryName $tag | quote -}} + {{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image registry +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.registry" -}} +{{- $globalRegistry := "" -}} +{{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- with .context.Values.global.images.registry -}} + {{- $globalRegistry = . 
-}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- $localRegistry := "" -}} +{{- if .imageRoot.registry -}} + {{- $localRegistry = .imageRoot.registry -}} +{{- end -}} + +{{- $registry := $localRegistry | default $globalRegistry | default .defaultRegistry -}} +{{- if $registry -}} + {{- $registry -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image repository +{{ include "newrelic.common.images.repository" .Values.path.to.the.image }} +*/ -}} +{{- define "newrelic.common.images.repository" -}} + {{- .repository -}} +{{- end -}} + + + +{{- /* +Return the proper image tag +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.path.to.the.image "context" .) }} +*/ -}} +{{- define "newrelic.common.images.tag" -}} + {{- .imageRoot.tag | default .context.Chart.AppVersion | toString -}} +{{- end -}} + + + +{{- /* +Return the proper Image Pull Registry Secret Names evaluating values as templates +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.path.to.the.images.pullSecrets1, .Values.path.to.the.images.pullSecrets2) "context" .) }} +*/ -}} +{{- define "newrelic.common.images.renderPullSecrets" -}} + {{- $flatlist := list }} + + {{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- if .context.Values.global.images.pullSecrets -}} + {{- range .context.Values.global.images.pullSecrets -}} + {{- $flatlist = append $flatlist . -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- range .pullSecrets -}} + {{- if not (empty .) -}} + {{- range . -}} + {{- $flatlist = append $flatlist . 
-}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- if $flatlist -}} + {{- toYaml $flatlist -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_insights.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_insights.tpl new file mode 100644 index 0000000000..895c377326 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_insights.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the Insights Key. +*/}} +{{- define "newrelic.common.insightsKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "insightskey" ) -}} +{{- include "newrelic.common.insightsKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +*/}} +{{- define "newrelic.common.insightsKey.secretKeyName" -}} +{{- include "newrelic.common.insightsKey._customSecretKey" . | default "insightsKey" -}} +{{- end -}} + +{{/* +Return local insightsKey if set, global otherwise. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._licenseKey" -}} +{{- if .Values.insightsKey -}} + {{- .Values.insightsKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.insightsKey -}} + {{- .Values.global.insightsKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the Insights Key. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.insightsKey._customSecretName" -}} +{{- if .Values.customInsightsKeySecretName -}} + {{- .Values.customInsightsKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretName -}} + {{- .Values.global.customInsightsKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._customSecretKey" -}} +{{- if .Values.customInsightsKeySecretKey -}} + {{- .Values.customInsightsKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretKey }} + {{- .Values.global.customInsightsKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_insights_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_insights_secret.yaml.tpl new file mode 100644 index 0000000000..556caa6ca6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_insights_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the insights key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.insightsKey.secret" }} +{{- if not (include "newrelic.common.insightsKey._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.insightsKey._licenseKey" .) }} + {{- fail "You must specify a insightsKey or a customInsightsSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.insightsKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.insightsKey.secretKeyName" . 
}}: {{ include "newrelic.common.insightsKey._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_labels.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_labels.tpl new file mode 100644 index 0000000000..b025948285 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_labels.tpl @@ -0,0 +1,54 @@ +{{/* +This will render the labels that should be used in all the manifests used by the helm chart. +*/}} +{{- define "newrelic.common.labels" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- $chart := dict "helm.sh/chart" (include "newrelic.common.naming.chart" . ) -}} +{{- $managedBy := dict "app.kubernetes.io/managed-by" .Release.Service -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $labels := mustMergeOverwrite $chart $managedBy $selectorLabels -}} +{{- if .Chart.AppVersion -}} +{{- $labels = mustMergeOverwrite $labels (dict "app.kubernetes.io/version" .Chart.AppVersion) -}} +{{- end -}} + +{{- $globalUserLabels := $global.labels | default dict -}} +{{- $localUserLabels := .Values.labels | default dict -}} + +{{- $labels = mustMergeOverwrite $labels $globalUserLabels $localUserLabels -}} + +{{- toYaml $labels -}} +{{- end -}} + + + +{{/* +This will render the labels that should be used in deployments/daemonsets template pods as a selector. +*/}} +{{- define "newrelic.common.labels.selectorLabels" -}} +{{- $name := dict "app.kubernetes.io/name" ( include "newrelic.common.naming.name" . 
) -}} +{{- $instance := dict "app.kubernetes.io/instance" .Release.Name -}} + +{{- $selectorLabels := mustMergeOverwrite $name $instance -}} + +{{- toYaml $selectorLabels -}} +{{- end }} + + + +{{/* +Pod labels +*/}} +{{- define "newrelic.common.labels.podLabels" -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $global := index .Values "global" | default dict -}} +{{- $globalPodLabels := $global.podLabels | default dict }} + +{{- $localPodLabels := .Values.podLabels | default dict }} + +{{- $podLabels := mustMergeOverwrite $selectorLabels $globalPodLabels $localPodLabels -}} + +{{- toYaml $podLabels -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_license.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_license.tpl new file mode 100644 index 0000000000..cb349f6bb6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_license.tpl @@ -0,0 +1,68 @@ +{{/* +Return the name of the secret holding the License Key. +*/}} +{{- define "newrelic.common.license.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "license" ) -}} +{{- include "newrelic.common.license._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +*/}} +{{- define "newrelic.common.license.secretKeyName" -}} +{{- include "newrelic.common.license._customSecretKey" . | default "licenseKey" -}} +{{- end -}} + +{{/* +Return local licenseKey if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.license._licenseKey" -}} +{{- if .Values.licenseKey -}} + {{- .Values.licenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.licenseKey -}} + {{- .Values.global.licenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the License Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretName" -}} +{{- if .Values.customSecretName -}} + {{- .Values.customSecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretName -}} + {{- .Values.global.customSecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretKey" -}} +{{- if .Values.customSecretLicenseKey -}} + {{- .Values.customSecretLicenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretLicenseKey }} + {{- .Values.global.customSecretLicenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the user set a custom secret for the license. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._usesCustomSecret" -}} +{{- if or (include "newrelic.common.license._customSecretName" .) (include "newrelic.common.license._customSecretKey" .) -}} +true +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_license_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_license_secret.yaml.tpl new file mode 100644 index 0000000000..610a0a3370 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_license_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the license key secret if user has not specified a custom secret. 
+*/}} +{{- define "newrelic.common.license.secret" }} +{{- if not (include "newrelic.common.license._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.license._licenseKey" .) }} + {{- fail "You must specify a licenseKey or a customSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.license.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.license.secretKeyName" . }}: {{ include "newrelic.common.license._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_low-data-mode.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_low-data-mode.tpl new file mode 100644 index 0000000000..3dd55ef2ff --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_low-data-mode.tpl @@ -0,0 +1,26 @@ +{{- /* +Abstraction of the lowDataMode toggle. +This helper allows to override the global `.global.lowDataMode` with the value of `.lowDataMode`. +Returns "true" if `lowDataMode` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.lowDataMode" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "lowDataMode" | kindIs "bool") -}} + {{- if .Values.lowDataMode -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.lowDataMode" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.lowDataMode -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "lowDataMode" | kindIs "bool" -}} + {{- if $global.lowDataMode -}} + {{- $global.lowDataMode -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_naming.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_naming.tpl new file mode 100644 index 0000000000..19fa92648c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_naming.tpl @@ -0,0 +1,73 @@ +{{/* +This is a function to be called directly with a string, just to truncate strings to +63 chars because some Kubernetes name fields are limited to that. +*/}} +{{- define "newrelic.common.naming.truncateToDNS" -}} +{{- . | trunc 63 | trimSuffix "-" }} +{{- end }} + + + +{{- /* +Given a name and a suffix, returns a DNS-valid name which always includes the suffix, truncating the name if needed. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If the suffix is too long it gets truncated, but it always takes precedence over the name, so a 63-char suffix would suppress the name.
+Usage: +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" "" "suffix" "my-suffix" ) }} +*/ -}} +{{- define "newrelic.common.naming.truncateToDNSWithSuffix" -}} +{{- $suffix := (include "newrelic.common.naming.truncateToDNS" .suffix) -}} +{{- $maxLen := (max (sub 63 (add1 (len $suffix))) 0) -}} {{- /* We prepend "-" to the suffix so an additional character is needed */ -}} + +{{- $newName := .name | trunc ($maxLen | int) | trimSuffix "-" -}} +{{- if $newName -}} +{{- printf "%s-%s" $newName $suffix -}} +{{- else -}} +{{ $suffix }} +{{- end -}} + +{{- end -}} + + + +{{/* +Expand the name of the chart. +Uses the Chart name by default if nameOverride is not set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.name" -}} +{{- $name := .Values.nameOverride | default .Chart.Name -}} +{{- include "newrelic.common.naming.truncateToDNS" $name -}} +{{- end }} + + + +{{/* +Create a default fully qualified app name. +By default the full name will be "<release name>" if it already contains the chart name; if not, +it will be concatenated like "<release name>-<chart name>". This could change if fullnameOverride or +nameOverride are set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.fullname" -}} +{{- $name := include "newrelic.common.naming.name" . -}} + +{{- if .Values.fullnameOverride -}} + {{- $name = .Values.fullnameOverride -}} +{{- else if not (contains $name .Release.Name) -}} + {{- $name = printf "%s-%s" .Release.Name $name -}} +{{- end -}} + +{{- include "newrelic.common.naming.truncateToDNS" $name -}} + +{{- end -}} + + + +{{/* +Create chart name and version as used by the chart label. +This function should not be used for naming objects. Use "common.naming.{name,fullname}" instead.
+*/}} +{{- define "newrelic.common.naming.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_nodeselector.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_nodeselector.tpl new file mode 100644 index 0000000000..d488873412 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_nodeselector.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod nodeSelector */ -}} +{{- define "newrelic.common.nodeSelector" -}} + {{- if .Values.nodeSelector -}} + {{- toYaml .Values.nodeSelector -}} + {{- else if .Values.global -}} + {{- if .Values.global.nodeSelector -}} + {{- toYaml .Values.global.nodeSelector -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_priority-class-name.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_priority-class-name.tpl new file mode 100644 index 0000000000..50182b7343 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_priority-class-name.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the pod priorityClassName */ -}} +{{- define "newrelic.common.priorityClassName" -}} + {{- if .Values.priorityClassName -}} + {{- .Values.priorityClassName -}} + {{- else if .Values.global -}} + {{- if .Values.global.priorityClassName -}} + {{- .Values.global.priorityClassName -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_privileged.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_privileged.tpl new file 
mode 100644 index 0000000000..f3ae814dd9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_privileged.tpl @@ -0,0 +1,28 @@ +{{- /* +This is a helper that returns whether the chart should assume the user is fine deploying privileged pods. +*/ -}} +{{- define "newrelic.common.privileged" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist. */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if get .Values "privileged" | kindIs "bool" -}} + {{- if .Values.privileged -}} + {{- .Values.privileged -}} + {{- end -}} +{{- else if get $global "privileged" | kindIs "bool" -}} + {{- if $global.privileged -}} + {{- $global.privileged -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* Return "true" or "false" directly, based on the output of "newrelic.common.privileged" */ -}} +{{- define "newrelic.common.privileged.value" -}} +{{- if include "newrelic.common.privileged" . 
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_proxy.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_proxy.tpl new file mode 100644 index 0000000000..60f34c7ec1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_proxy.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the proxy */ -}} +{{- define "newrelic.common.proxy" -}} + {{- if .Values.proxy -}} + {{- .Values.proxy -}} + {{- else if .Values.global -}} + {{- if .Values.global.proxy -}} + {{- .Values.global.proxy -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_region.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_region.tpl new file mode 100644 index 0000000000..bdcacf3235 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_region.tpl @@ -0,0 +1,74 @@ +{{/* +Return the region that is being used by the user +*/}} +{{- define "newrelic.common.region" -}} +{{- if and (include "newrelic.common.license._usesCustomSecret" .) (not (include "newrelic.common.region._fromValues" .)) -}} + {{- fail "This Helm Chart is not able to compute the region. You must specify a .global.region or .region if the license is set using a custom secret." -}} +{{- end -}} + +{{- /* Defaults */ -}} +{{- $region := "us" -}} +{{- if include "newrelic.common.nrStaging" . -}} + {{- $region = "staging" -}} +{{- else if include "newrelic.common.region._isEULicenseKey" . -}} + {{- $region = "eu" -}} +{{- end -}} + +{{- include "newrelic.common.region.validate" (include "newrelic.common.region._fromValues" . 
| default $region ) -}} +{{- end -}} + + + +{{/* +Validates the region passed as the argument and returns its canonical form, failing if it is not a known region. + +Usage: `include "newrelic.common.region.validate" "us"` +*/}} +{{- define "newrelic.common.region.validate" -}} +{{- /* Ref: https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21 */ -}} +{{- $region := . | lower -}} +{{- if eq $region "us" -}} + US +{{- else if eq $region "eu" -}} + EU +{{- else if eq $region "staging" -}} + Staging +{{- else if eq $region "local" -}} + Local +{{- else -}} + {{- fail (printf "the region provided is not valid: %s not in \"US\" \"EU\" \"Staging\" \"Local\"" .) -}} +{{- end -}} +{{- end -}} + + + +{{/* +Returns the region from the values. This only returns the value from the `values.yaml`. +More intelligence should be used to compute the region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._fromValues" -}} +{{- if .Values.region -}} + {{- .Values.region -}} +{{- else if .Values.global -}} + {{- if .Values.global.region -}} + {{- .Values.global.region -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the license is for the EU region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._isEULicenseKey" -}} +{{- if not (include "newrelic.common.license._usesCustomSecret" .) -}} + {{- $license := include "newrelic.common.license._licenseKey" . 
-}} + {{- if hasPrefix "eu" $license -}} + true + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_security-context.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_security-context.tpl new file mode 100644 index 0000000000..9edfcabfd0 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_security-context.tpl @@ -0,0 +1,23 @@ +{{- /* Defines the container securityContext */ -}} +{{- define "newrelic.common.securityContext.container" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.containerSecurityContext -}} + {{- toYaml .Values.containerSecurityContext -}} +{{- else if $global.containerSecurityContext -}} + {{- toYaml $global.containerSecurityContext -}} +{{- end -}} +{{- end -}} + + + +{{- /* Defines the pod securityContext */ -}} +{{- define "newrelic.common.securityContext.pod" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.podSecurityContext -}} + {{- toYaml .Values.podSecurityContext -}} +{{- else if $global.podSecurityContext -}} + {{- toYaml $global.podSecurityContext -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_serviceaccount.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_serviceaccount.tpl new file mode 100644 index 0000000000..2d352f6ea9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_serviceaccount.tpl @@ -0,0 +1,90 @@ +{{- /* Defines if the service account has to be created or not */ -}} +{{- define "newrelic.common.serviceAccount.create" -}} +{{- $valueFound := false -}} + +{{- /* Look for a local creation of a service account */ -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "create" | kindIs "bool") -}} + {{- $valueFound = true -}} + {{- if .Values.serviceAccount.create -}} + {{- /* + We only want to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.serviceAccount.create" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.serviceAccount.create -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* Look for a global creation of a service account */ -}} +{{- if not $valueFound -}} + {{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} + {{- $global := index .Values "global" | default dict -}} + {{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "create" | kindIs "bool" -}} + {{- $valueFound = true -}} + {{- if $global.serviceAccount.create -}} + {{- $global.serviceAccount.create -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* In case no serviceAccount value has been found, default to "true" */ -}} +{{- if not $valueFound -}} +true +{{- end -}} +{{- end -}} + + + +{{- /* Defines the name of the service account */ -}} +{{- define "newrelic.common.serviceAccount.name" -}} +{{- $localServiceAccount := "" -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "name" | kindIs "string") -}} + {{- $localServiceAccount = .Values.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := "" -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "name" | kindIs "string" -}} + {{- $globalServiceAccount = $global.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- if (include "newrelic.common.serviceAccount.create" .) 
-}} + {{- $localServiceAccount | default $globalServiceAccount | default (include "newrelic.common.naming.fullname" .) -}} +{{- else -}} + {{- $localServiceAccount | default $globalServiceAccount | default "default" -}} +{{- end -}} +{{- end -}} + + + +{{- /* Merge the global and local annotations for the service account */ -}} +{{- define "newrelic.common.serviceAccount.annotations" -}} +{{- $localServiceAccount := dict -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if get .Values.serviceAccount "annotations" -}} + {{- $localServiceAccount = .Values.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := dict -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "annotations" -}} + {{- $globalServiceAccount = $global.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $merged := mustMergeOverwrite $globalServiceAccount $localServiceAccount -}} + +{{- if $merged -}} + {{- toYaml $merged -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_staging.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_staging.tpl new file mode 100644 index 0000000000..bd9ad09bb9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_staging.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the nrStaging toggle. +This helper allows to override the global `.global.nrStaging` with the value of `.nrStaging`. 
+Returns "true" if `nrStaging` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.nrStaging" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "nrStaging" | kindIs "bool") -}} + {{- if .Values.nrStaging -}} + {{- /* + We only want to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.nrStaging" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.nrStaging -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "nrStaging" | kindIs "bool" -}} + {{- if $global.nrStaging -}} + {{- $global.nrStaging -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Returns "true" or "false" directly, instead of an empty string (Helm falsiness), based on the output of "newrelic.common.nrStaging" +*/ -}} +{{- define "newrelic.common.nrStaging.value" -}} +{{- if include "newrelic.common.nrStaging" . 
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_tolerations.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_tolerations.tpl new file mode 100644 index 0000000000..e016b38e27 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_tolerations.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod tolerations */ -}} +{{- define "newrelic.common.tolerations" -}} + {{- if .Values.tolerations -}} + {{- toYaml .Values.tolerations -}} + {{- else if .Values.global -}} + {{- if .Values.global.tolerations -}} + {{- toYaml .Values.global.tolerations -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_userkey.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_userkey.tpl new file mode 100644 index 0000000000..982ea8e09d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_userkey.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the API Key. +*/}} +{{- define "newrelic.common.userKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "userkey" ) -}} +{{- include "newrelic.common.userKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +*/}} +{{- define "newrelic.common.userKey.secretKeyName" -}} +{{- include "newrelic.common.userKey._customSecretKey" . | default "userKey" -}} +{{- end -}} + +{{/* +Return local API Key if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.userKey._userKey" -}} +{{- if .Values.userKey -}} + {{- .Values.userKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.userKey -}} + {{- .Values.global.userKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the API Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretName" -}} +{{- if .Values.customUserKeySecretName -}} + {{- .Values.customUserKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretName -}} + {{- .Values.global.customUserKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretKey" -}} +{{- if .Values.customUserKeySecretKey -}} + {{- .Values.customUserKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretKey }} + {{- .Values.global.customUserKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_userkey_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_userkey_secret.yaml.tpl new file mode 100644 index 0000000000..b979856548 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_userkey_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the user key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.userKey.secret" }} +{{- if not (include "newrelic.common.userKey._customSecretName" .) }} +{{- /* Fail if user key is empty and required: */ -}} +{{- if not (include "newrelic.common.userKey._userKey" .) 
}} + {{- fail "You must specify a userKey or a customUserKeySecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.userKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.userKey.secretKeyName" . }}: {{ include "newrelic.common.userKey._userKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_verbose-log.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_verbose-log.tpl new file mode 100644 index 0000000000..2286d46815 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/templates/_verbose-log.tpl @@ -0,0 +1,54 @@ +{{- /* +Abstraction of the verbose toggle. +This helper allows to override the global `.global.verboseLog` with the value of `.verboseLog`. +Returns "true" if `verbose` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.verboseLog" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "verboseLog" | kindIs "bool") -}} + {{- if .Values.verboseLog -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.verboseLog" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.verboseLog -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "verboseLog" | kindIs "bool" -}} + {{- if $global.verboseLog -}} + {{- $global.verboseLog -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return true or false directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsBoolean" -}} +{{- if include "newrelic.common.verboseLog" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return 1 or 0 directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsInt" -}} +{{- if include "newrelic.common.verboseLog" . -}} +1 +{{- else -}} +0 +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/values.yaml new file mode 100644 index 0000000000..75e2d112ad --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/charts/common-library/values.yaml @@ -0,0 +1 @@ +# values are not needed for the library chart, however this file is still needed for helm lint to work. 
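The common-library chart above defines helpers only; consuming charts (such as newrelic-infrastructure) call them via `include`. As an illustrative sketch only (this manifest is not part of this diff; the resource shape is an assumption, while the helper names are the ones defined above), a consuming template might use the helpers like this:

```yaml
# Hypothetical consuming template (not part of this diff): shows how the
# common-library naming, labels, and serviceAccount helpers compose.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ include "newrelic.common.serviceAccount.name" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "newrelic.common.labels" . | nindent 4 }}
  {{- with include "newrelic.common.serviceAccount.annotations" . }}
  annotations:
    {{- . | nindent 4 }}
  {{- end }}
```

Note how the helpers return empty strings for unset values, so guards like `with` skip the optional `annotations` block entirely when neither local nor global annotations are configured.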
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/ci/test-cplane-kind-deployment-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/ci/test-cplane-kind-deployment-values.yaml new file mode 100644 index 0000000000..1e2c36d21f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/ci/test-cplane-kind-deployment-values.yaml @@ -0,0 +1,135 @@ +global: + licenseKey: 1234567890abcdef1234567890abcdef12345678 + cluster: test-cluster + +common: + agentConfig: + # We set this so the kubelet does not crash when posting to the agent. Since the License_Key is + # not valid, the Identity API doesn't return an AgentID and the server from the Agent takes too long to respond + is_forward_only: true + config: + sink: + http: + timeout: 180s + +customAttributes: + new: relic + loren: ipsum + +# Disable KSM scraper as it is not enabled when testing this chart individually. +ksm: + enabled: false + +# K8s DaemonSets update strategy. 
+updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + +enableProcessMetrics: "false" +serviceAccount: + create: true + +podAnnotations: + annotation1: "annotation" +podLabels: + label1: "label" + +securityContext: + runAsUser: 1000 + runAsGroup: 2000 + allowPrivilegeEscalation: false + readOnlyRootFilesystem: true + +privileged: true + +rbac: + create: true + pspEnabled: false + +prefixDisplayNameWithCluster: false +useNodeNameAsDisplayName: true +integrations_config: [] + +kubelet: + enabled: true + annotations: {} + tolerations: + - operator: "Exists" + effect: "NoSchedule" + - operator: "Exists" + effect: "NoExecute" + extraEnv: + - name: ENV_VAR1 + value: "var1" + - name: ENV_VAR2 + value: "var2" + resources: + limits: + memory: 400M + requests: + cpu: 100m + memory: 180M + config: + scheme: "http" + +controlPlane: + kind: Deployment + enabled: true + config: + etcd: + enabled: true + autodiscover: + - selector: "tier=control-plane,component=etcd" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:4001 + insecureSkipVerify: true + auth: + type: bearer + - url: http://localhost:2381 + scheduler: + enabled: true + autodiscover: + - selector: "tier=control-plane,component=kube-scheduler" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:10259 + insecureSkipVerify: true + auth: + type: bearer + controllerManager: + enabled: true + autodiscover: + - selector: "tier=control-plane,component=kube-controller-manager" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:10257 + insecureSkipVerify: true + auth: + type: bearer + mtls: + secretName: secret-name + secretNamespace: default + apiServer: + enabled: true + autodiscover: + - selector: "tier=control-plane,component=kube-apiserver" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:8443 + insecureSkipVerify: true + auth: + type: bearer + mtls: + secretName: 
secret-name4 + - url: http://localhost:8080 + +images: + integration: + tag: test + repository: e2e/nri-kubernetes diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/ci/test-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/ci/test-values.yaml new file mode 100644 index 0000000000..125a496079 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/ci/test-values.yaml @@ -0,0 +1,134 @@ +global: + licenseKey: 1234567890abcdef1234567890abcdef12345678 + cluster: test-cluster + +common: + agentConfig: + # We set it in order for the kubelet to not crash when posting to the agent. Since the License_Key is + # not valid, the Identity API doesn't return an AgentID and the server from the Agent takes too long to respond + is_forward_only: true + config: + sink: + http: + timeout: 180s + +customAttributes: + new: relic + loren: ipsum + +# Disable KSM scraper as it is not enabled when testing this chart individually. +ksm: + enabled: false + +# K8s DaemonSets update strategy.
+updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + +enableProcessMetrics: "false" +serviceAccount: + create: true + +podAnnotations: + annotation1: "annotation" +podLabels: + label1: "label" + +securityContext: + runAsUser: 1000 + runAsGroup: 2000 + allowPrivilegeEscalation: false + readOnlyRootFilesystem: true + +privileged: true + +rbac: + create: true + pspEnabled: false + +prefixDisplayNameWithCluster: false +useNodeNameAsDisplayName: true +integrations_config: [] + +kubelet: + enabled: true + annotations: {} + tolerations: + - operator: "Exists" + effect: "NoSchedule" + - operator: "Exists" + effect: "NoExecute" + extraEnv: + - name: ENV_VAR1 + value: "var1" + - name: ENV_VAR2 + value: "var2" + resources: + limits: + memory: 400M + requests: + cpu: 100m + memory: 180M + config: + scheme: "http" + +controlPlane: + enabled: true + config: + etcd: + enabled: true + autodiscover: + - selector: "tier=control-plane,component=etcd" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:4001 + insecureSkipVerify: true + auth: + type: bearer + - url: http://localhost:2381 + scheduler: + enabled: true + autodiscover: + - selector: "tier=control-plane,component=kube-scheduler" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:10259 + insecureSkipVerify: true + auth: + type: bearer + controllerManager: + enabled: true + autodiscover: + - selector: "tier=control-plane,component=kube-controller-manager" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:10257 + insecureSkipVerify: true + auth: + type: bearer + mtls: + secretName: secret-name + secretNamespace: default + apiServer: + enabled: true + autodiscover: + - selector: "tier=control-plane,component=kube-apiserver" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:8443 + insecureSkipVerify: true + auth: + type: bearer + mtls: + secretName: secret-name4 + - url: 
http://localhost:8080 + +images: + integration: + tag: test + repository: e2e/nri-kubernetes diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/NOTES.txt b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/NOTES.txt new file mode 100644 index 0000000000..16cc6ea13d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/NOTES.txt @@ -0,0 +1,131 @@ +{{- if not .Values.forceUnsupportedInterval }} +{{- $max := 40 }} +{{- $min := 10 }} +{{- if not (.Values.common.config.interval | hasSuffix "s") }} +{{ fail (printf "Interval must be between %ds and %ds" $min $max ) }} +{{- end }} +{{- if gt ( .Values.common.config.interval | trimSuffix "s" | int64 ) $max }} +{{ fail (printf "Intervals larger than %ds are not supported" $max) }} +{{- end }} +{{- if lt ( .Values.common.config.interval | trimSuffix "s" | int64 ) $min }} +{{ fail (printf "Intervals smaller than %ds are not supported" $min) }} +{{- end }} +{{- end }} + +{{- if or (not .Values.ksm.enabled) (not .Values.kubelet.enabled) }} +Warning: +======== + +You have specified the ksm or kubelet integration components as not enabled. +Those components are needed to have the full experience on the New Relic One Kubernetes explorer. +{{- end }} + +{{- if and .Values.controlPlane.enabled (not (include "nriKubernetes.controlPlane.hostNetwork" .)) }} +Warning: +======== + +Most control plane components listen on the loopback address only, which is not reachable without `hostNetwork: true`. +Control plane autodiscovery might not work as expected. +You can enable hostNetwork for all pods by setting `global.hostNetwork` or `hostNetwork`, or only for the control +plane pods by setting `controlPlane.hostNetwork: true`. Alternatively, you can disable control plane monitoring altogether with +`controlPlane.enabled: false`. +{{- end }} + +{{- if and (include "newrelic.fargate" .)
.Values.kubelet.affinity }} +Warning: +======== + +You have specified both an EKS Fargate environment (global.fargate) and custom +nodeAffinity rules, so we couldn't automatically exclude the kubelet daemonSet from +Fargate nodes. In order for the integration to work, you MUST manually exclude +the daemonSet from Fargate nodes. + +Please make sure your `values.yaml` contains a .kubelet.affinity.nodeAffinity that achieves the same effect as: + +affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: eks.amazonaws.com/compute-type + operator: NotIn + values: + - fargate +{{- end }} + +{{- if and .Values.nodeAffinity .Values.controlPlane.enabled }} +WARNING: `nodeAffinity` is deprecated +===================================== + +We have applied the old `nodeAffinity` to the KSM and kubelet components, but *NOT* to the control plane component, as it +might conflict with the default nodeSelector. +This shimming will be removed in the future; please convert your `nodeAffinity` item into: +`ksm.affinity.nodeAffinity`, `controlPlane.affinity.nodeAffinity`, and `kubelet.affinity.nodeAffinity`. +{{- end }} + +{{- if and .Values.integrations_config }} +WARNING: `integrations_config` is deprecated +============================================ + +We have automatically translated `integrations_config` to the new format, but this shimming will be removed in the +future. Please migrate your configs to the new format in the `integrations` key. +{{- end }} + +{{- if or .Values.kubeStateMetricsScheme .Values.kubeStateMetricsPort .Values.kubeStateMetricsUrl .Values.kubeStateMetricsPodLabel .Values.kubeStateMetricsNamespace }} +WARNING: `kubeStateMetrics*` are deprecated +=========================================== + +We have automatically translated your `kubeStateMetrics*` values to the new format, but this shimming will be removed in +the future. Please migrate your configs to the new format in the `ksm.config` key.
+{{- end }} + +{{- if .Values.runAsUser }} +WARNING: `runAsUser` is deprecated +================================== + +We have automatically translated your `runAsUser` setting to the new format, but this shimming will be removed in the +future. Please migrate your configs to the new format in the `securityContext` key. +{{- end }} + +{{- if .Values.config }} +WARNING: `config` is deprecated +=============================== + +We have automatically translated your `config` setting to the new format, but this shimming will be removed in the +future. Please migrate your agent config to the new format in the `common.agentConfig` key. +{{- end }} + + +{{ $errors:= "" }} + +{{- if .Values.logFile }} +{{ $errors = printf "%s\n\n%s" $errors (include "newrelic.compatibility.message.logFile" . ) }} +{{- end }} + +{{- if .Values.resources }} +{{ $errors = printf "%s\n\n%s" $errors (include "newrelic.compatibility.message.resources" . ) }} +{{- end }} + +{{- if .Values.image }} +{{ $errors = printf "%s\n\n%s" $errors (include "newrelic.compatibility.message.image" . ) }} +{{- end }} + +{{- if .Values.enableWindows }} +{{ $errors = printf "%s\n\n%s" $errors (include "newrelic.compatibility.message.windows" . ) }} +{{- end }} + +{{- if ( or .Values.controllerManagerEndpointUrl .Values.schedulerEndpointUrl .Values.etcdEndpointUrl .Values.apiServerEndpointUrl )}} +{{ $errors = printf "%s\n\n%s" $errors (include "newrelic.compatibility.message.apiURL" . ) }} +{{- end }} + +{{- if ( or .Values.etcdTlsSecretName .Values.etcdTlsSecretNamespace )}} +{{ $errors = printf "%s\n\n%s" $errors (include "newrelic.compatibility.message.etcdSecrets" . ) }} +{{- end }} + +{{- if .Values.apiServerSecurePort }} +{{ $errors = printf "%s\n\n%s" $errors (include "newrelic.compatibility.message.apiServerSecurePort" . ) }} +{{- end }} + +{{- if $errors | trim}} +{{- fail (printf "\n\n%s\n%s" (include "newrelic.compatibility.message.common" . 
) $errors ) }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/_helpers.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/_helpers.tpl new file mode 100644 index 0000000000..033ef0bfcf --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/_helpers.tpl @@ -0,0 +1,118 @@ +{{/* +Create a default fully qualified app name. + +This is a copy and paste from the common-library's name helper because the overriding system was broken. +As we have to change the logic to use "nrk8s" instead of `.Chart.Name`, we need to maintain a version +of the fullname helper here. + +By default the full name will be the release name if it already contains "nrk8s"; if not, +the two are concatenated as "release-name-nrk8s". This could change if fullnameOverride or +nameOverride are set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "nriKubernetes.naming.fullname" -}} +{{- $name := .Values.nameOverride | default "nrk8s" -}} + +{{- if .Values.fullnameOverride -}} + {{- $name = .Values.fullnameOverride -}} +{{- else if not (contains $name .Release.Name) -}} + {{- $name = printf "%s-%s" .Release.Name $name -}} +{{- end -}} + +{{- include "newrelic.common.naming.truncateToDNS" $name -}} +{{- end -}} + + + +{{- /* Naming helpers */ -}} +{{- define "nriKubernetes.naming.secrets" }} +{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "nriKubernetes.naming.fullname" .) "suffix" "secrets") -}} +{{- end -}} + + + +{{- /* Return a YAML with the mode to be added to the labels */ -}} +{{- define "nriKubernetes._mode" -}} +{{- if include "newrelic.common.privileged" .
-}} + mode: privileged +{{- else -}} + mode: unprivileged +{{- end -}} +{{- end -}} + + + +{{/* +Add `mode` label to the labels that come from the common library for all the objects +*/}} +{{- define "nriKubernetes.labels" -}} +{{- $labels := include "newrelic.common.labels" . | fromYaml -}} +{{- $mode := fromYaml ( include "nriKubernetes._mode" . ) -}} + +{{- mustMergeOverwrite $labels $mode | toYaml -}} +{{- end -}} + + + +{{/* +Add `mode` label to the labels that come from the common library for podLabels +*/}} +{{- define "nriKubernetes.labels.podLabels" -}} +{{- $labels := include "newrelic.common.labels.podLabels" . | fromYaml -}} +{{- $mode := fromYaml ( include "nriKubernetes._mode" . ) -}} + +{{- mustMergeOverwrite $labels $mode | toYaml -}} +{{- end -}} + + + +{{/* +Returns the fargate toggle +*/}} +{{- define "newrelic.fargate" -}} +{{- if .Values.fargate -}} + {{- .Values.fargate -}} +{{- else if .Values.global -}} + {{- if .Values.global.fargate -}} + {{- .Values.global.fargate -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{- define "newrelic.integrationConfigDefaults" -}} +{{- if include "newrelic.common.lowDataMode" . -}} +interval: 30s +{{- else -}} +interval: 15s +{{- end -}} +{{- end -}} + + + +{{- /* These are the defaults that are used for all the containers in this chart (except the kubelet's agent) */ -}} +{{- define "nriKubernetes.securityContext.containerDefaults" -}} +runAsUser: 1000 +runAsGroup: 2000 +allowPrivilegeEscalation: false +readOnlyRootFilesystem: true +{{- end -}} + + + +{{- /* Allows changing the container defaults dynamically based on whether we are running in privileged mode or not */ -}} +{{- define "nriKubernetes.securityContext.container" -}} +{{- $defaults := fromYaml ( include "nriKubernetes.securityContext.containerDefaults" . ) -}} +{{- $compatibilityLayer := include "newrelic.compatibility.securityContext" . | fromYaml -}} +{{- $commonLibrary := include "newrelic.common.securityContext.container" .
| fromYaml -}} + +{{- $finalSecurityContext := dict -}} +{{- if $commonLibrary -}} + {{- $finalSecurityContext = mustMergeOverwrite $commonLibrary $compatibilityLayer -}} +{{- else -}} + {{- $finalSecurityContext = mustMergeOverwrite $defaults $compatibilityLayer -}} +{{- end -}} + +{{- toYaml $finalSecurityContext -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/_helpers_compatibility.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/_helpers_compatibility.tpl new file mode 100644 index 0000000000..07365e5a1a --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/_helpers_compatibility.tpl @@ -0,0 +1,202 @@ +{{/* +Returns true if .Values.ksm.enabled is true and the legacy disableKubeStateMetrics is not set +*/}} +{{- define "newrelic.compatibility.ksm.enabled" -}} +{{- if and .Values.ksm.enabled (not .Values.disableKubeStateMetrics) -}} +true +{{- end -}} +{{- end -}} + +{{/* +Returns legacy ksm values +*/}} +{{- define "newrelic.compatibility.ksm.legacyData" -}} +enabled: true +{{- if .Values.kubeStateMetricsScheme }} +scheme: {{ .Values.kubeStateMetricsScheme }} +{{- end -}} +{{- if .Values.kubeStateMetricsPort }} +port: {{ .Values.kubeStateMetricsPort }} +{{- end -}} +{{- if .Values.kubeStateMetricsUrl }} +staticURL: {{ .Values.kubeStateMetricsUrl }} +{{- end -}} +{{- if .Values.kubeStateMetricsPodLabel }} +selector: {{ printf "%s=kube-state-metrics" .Values.kubeStateMetricsPodLabel }} +{{- end -}} +{{- if .Values.kubeStateMetricsNamespace }} +namespace: {{ .Values.kubeStateMetricsNamespace}} +{{- end -}} +{{- end -}} + +{{/* +Returns the new value if available, otherwise falling back on the legacy one +*/}} +{{- define "newrelic.compatibility.valueWithFallback" -}} +{{- if .supported }} +{{- toYaml .supported}} +{{- else if .legacy -}} +{{- toYaml .legacy}} +{{- end }} +{{- end -}} + +{{/* +Returns a dictionary with legacy runAsUser 
config +*/}} +{{- define "newrelic.compatibility.securityContext" -}} +{{- if .Values.runAsUser -}} +{{ dict "runAsUser" .Values.runAsUser | toYaml }} +{{- end -}} +{{- end -}} + +{{/* +Returns legacy annotations if available +*/}} +{{- define "newrelic.compatibility.annotations" -}} +{{- with .Values.daemonSet -}} +{{- with .annotations -}} +{{- toYaml . }} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Returns agent configmap merged with legacy config and legacy eventQueueDepth config +*/}} +{{- define "newrelic.compatibility.agentConfig" -}} +{{- $oldConfig := deepCopy (.Values.config | default dict) -}} +{{- $newConfig := deepCopy .Values.common.agentConfig -}} +{{- $eventQueueDepth := dict -}} + +{{- if .Values.eventQueueDepth -}} +{{- $eventQueueDepth = dict "event_queue_depth" .Values.eventQueueDepth -}} +{{- end -}} + +{{- mustMergeOverwrite $oldConfig $newConfig $eventQueueDepth | toYaml -}} +{{- end -}} + +{{- /* +Return a valid podSpec.affinity object from the old `.Values.nodeAffinity`. +*/ -}} +{{- define "newrelic.compatibility.nodeAffinity" -}} +{{- if .Values.nodeAffinity -}} +nodeAffinity: + {{- toYaml .Values.nodeAffinity | nindent 2 }} +{{- end -}} +{{- end -}} + +{{/* +Returns legacy integrations_config configmap data +*/}} +{{- define "newrelic.compatibility.integrations" -}} +{{- if .Values.integrations_config -}} +{{- range .Values.integrations_config }} +{{ .name -}}: |- + {{- toYaml .data | nindent 2 }} +{{- end -}} +{{- end -}} +{{- end -}} + +{{- define "newrelic.compatibility.message.logFile" -}} +The 'logFile' option is no longer supported and has been replaced by: + - common.agentConfig.log_file. + +------ +{{- end -}} + +{{- define "newrelic.compatibility.message.resources" -}} +You have specified the legacy 'resources' option in your values, which is not fully compatible with the v3 version. +This version deploys three different components and therefore you'll need to specify resources for each of them. 
+Please use + - ksm.resources, + - controlPlane.resources, + - kubelet.resources. + +------ +{{- end -}} + +{{- define "newrelic.compatibility.message.apiServerSecurePort" -}} +You have specified the legacy 'apiServerSecurePort' option in your values, which is not fully compatible with the v3 +version. +Please configure the API Server port as a part of 'apiServer.autodiscover[].endpoints' + +------ +{{- end -}} + +{{- define "newrelic.compatibility.message.windows" -}} +nri-kubernetes v3 does not support deploying onto Windows nodes. +Please use the latest 2.x version of the chart. + +------ +{{- end -}} + +{{- define "newrelic.compatibility.message.etcdSecrets" -}} +Values "etcdTlsSecretName" and "etcdTlsSecretNamespace" are no longer supported; please specify them as a part of the +'etcd' config in the values, for example: + - endpoints: + - url: https://localhost:9979 + insecureSkipVerify: true + auth: + type: mTLS + mtls: + secretName: {{ .Values.etcdTlsSecretName | default "etcdTlsSecretName"}} + secretNamespace: {{ .Values.etcdTlsSecretNamespace | default "etcdTlsSecretNamespace"}} + +------ +{{- end -}} + +{{- define "newrelic.compatibility.message.apiURL" -}} +Values "controllerManagerEndpointUrl", "etcdEndpointUrl", "apiServerEndpointUrl", "schedulerEndpointUrl" are no longer +supported; please specify them as a part of the 'controlplane' config in the values, for example: + autodiscover: + - selector: "tier=control-plane,component=etcd" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:4001 + insecureSkipVerify: true + auth: + type: bearer + +------ +{{- end -}} + +{{- define "newrelic.compatibility.message.image" -}} +Configuring the image repository and tag under 'image' is no longer supported. +The following values are no longer supported and are currently ignored: + - image.repository + - image.tag + - image.pullPolicy + - image.pullSecrets + +Notice that the 3.x version of the integration uses 3 different images.
+Please set: + - images.forwarder.* to configure the infrastructure-agent forwarder. + - images.agent.* to configure the image bundling the infrastructure-agent and on-host integrations. + - images.integration.* to configure the image in charge of scraping k8s data. + +------ +{{- end -}} + +{{- define "newrelic.compatibility.message.customAttributes" -}} +Custom attributes are still supported, but only as a map; support for passing them as a string has been dropped. +customAttributes: {{ .Values.customAttributes | quote }} + +You should change your values to something like this: + +customAttributes: +{{- range $k, $v := fromJson .Values.customAttributes -}} + {{- $k | nindent 2 }}: {{ $v | quote }} +{{- end }} + +**NOTE**: If you see errors above like "invalid character ':' after top-level value" or "json: cannot unmarshal string into Go value of type map[string]interface {}", it means that the string in your values is not valid JSON; Helm is not able to parse it, so we could not show you how you should change it. Sorry. +{{- end -}} + +{{- define "newrelic.compatibility.message.common" -}} +###### +The chart cannot be rendered since the values listed below are not supported. Please replace those with the new ones compatible with newrelic-infrastructure V3. + +Keep in mind that the flag "--reuse-values" is not supported when migrating from V2 to V3.
+Further information can be found in the official docs: https://docs.newrelic.com/docs/kubernetes-pixie/kubernetes-integration/get-started/changes-since-v3#migration-guide +###### +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/clusterrole.yaml new file mode 100644 index 0000000000..391dc1e1fd --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/clusterrole.yaml @@ -0,0 +1,35 @@ +{{- if .Values.rbac.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.naming.fullname" . }} +rules: + - apiGroups: [""] + resources: + - "nodes/metrics" + - "nodes/stats" + - "nodes/proxy" + verbs: ["get", "list"] + - apiGroups: [ "" ] + resources: + - "endpoints" + - "services" + - "nodes" + - "namespaces" + - "pods" + verbs: [ "get", "list", "watch" ] + - nonResourceURLs: ["/metrics"] + verbs: ["get"] + {{- if .Values.rbac.pspEnabled }} + - apiGroups: + - extensions + resources: + - podsecuritypolicies + resourceNames: + - privileged-{{ include "newrelic.common.naming.fullname" . }} + verbs: + - use + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/clusterrolebinding.yaml new file mode 100644 index 0000000000..fc5dfb8dae --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/clusterrolebinding.yaml @@ -0,0 +1,16 @@ +{{- if .Values.rbac.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.naming.fullname" .
}} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "newrelic.common.naming.fullname" . }} +subjects: +- kind: ServiceAccount + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_affinity_helper.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_affinity_helper.tpl new file mode 100644 index 0000000000..320d16dae1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_affinity_helper.tpl @@ -0,0 +1,11 @@ +{{- /* +This chart deploys what should really be three separate charts, to keep the transition to v3 as smooth as possible. +This means the chart carries three affinity settings, so a helper is needed per scraper. +*/ -}} +{{- define "nriKubernetes.controlPlane.affinity" -}} +{{- if .Values.controlPlane.affinity -}} + {{- toYaml .Values.controlPlane.affinity -}} +{{- else if include "newrelic.common.affinity" . -}} + {{- include "newrelic.common.affinity" . -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_agent-config_helper.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_agent-config_helper.tpl new file mode 100644 index 0000000000..e113def82f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_agent-config_helper.tpl @@ -0,0 +1,20 @@ +{{- /* +Defaults for controlPlane's agent config +*/ -}} +{{- define "nriKubernetes.controlPlane.agentConfig.defaults" -}} +is_forward_only: true +http_server_enabled: true +http_server_port: 8001 +{{- end -}} + + + +{{- define "nriKubernetes.controlPlane.agentConfig" -}} +{{- $agentDefaults := fromYaml ( include "newrelic.common.agentConfig.defaults" .
) -}} +{{- $controlPlane := fromYaml ( include "nriKubernetes.controlPlane.agentConfig.defaults" . ) -}} +{{- $agentConfig := fromYaml ( include "newrelic.compatibility.agentConfig" . ) -}} +{{- $cpAgentConfig := .Values.controlPlane.agentConfig -}} +{{- $customAttributes := dict "custom_attributes" (dict "clusterName" (include "newrelic.common.cluster" . )) -}} + +{{- mustMergeOverwrite $agentDefaults $controlPlane $agentConfig $cpAgentConfig $customAttributes | toYaml -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_host_network.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_host_network.tpl new file mode 100644 index 0000000000..2f3bdf2d94 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_host_network.tpl @@ -0,0 +1,22 @@ +{{/* Returns whether the controlPlane scraper should run with hostNetwork: true based on the user configuration. */}} +{{- define "nriKubernetes.controlPlane.hostNetwork" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if get .Values.controlPlane "hostNetwork" | kindIs "bool" -}} + {{- if .Values.controlPlane.hostNetwork -}} + {{- .Values.controlPlane.hostNetwork -}} + {{- end -}} +{{- else if include "newrelic.common.hostNetwork" . -}} + {{- include "newrelic.common.hostNetwork" . -}} +{{- end -}} +{{- end -}} + + + +{{/* Abstraction of "nriKubernetes.controlPlane.hostNetwork" that returns true or false directly */}} +{{- define "nriKubernetes.controlPlane.hostNetwork.value" -}} +{{- if include "nriKubernetes.controlPlane.hostNetwork" .
-}} + true +{{- else -}} + false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_naming.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_naming.tpl new file mode 100644 index 0000000000..4b9ef22e33 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_naming.tpl @@ -0,0 +1,16 @@ +{{- /* Naming helpers*/ -}} +{{- define "nriKubernetes.controlplane.fullname" -}} +{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "nriKubernetes.naming.fullname" .) "suffix" "controlplane") -}} +{{- end -}} + +{{- define "nriKubernetes.controlplane.fullname.agent" -}} +{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "nriKubernetes.naming.fullname" .) "suffix" "agent-controlplane") -}} +{{- end -}} + +{{- define "nriKubernetes.controlplane.fullname.serviceAccount" -}} +{{- if include "newrelic.common.serviceAccount.create" . -}} + {{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "nriKubernetes.naming.fullname" .) "suffix" "controlplane") -}} +{{- else -}} + {{- include "newrelic.common.serviceAccount.name" . 
-}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_rbac.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_rbac.tpl new file mode 100644 index 0000000000..a279df6b4c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_rbac.tpl @@ -0,0 +1,40 @@ +{{/* +Returns the list of namespaces where secrets need to be accessed by the controlPlane integration to do mTLS Auth +*/}} +{{- define "nriKubernetes.controlPlane.roleBindingNamespaces" -}} +{{ $namespaceList := list }} +{{- range $components := .Values.controlPlane.config }} + {{- if $components }} + {{- if kindIs "map" $components -}} + {{- if $components.staticEndpoint }} + {{- if $components.staticEndpoint.auth }} + {{- if $components.staticEndpoint.auth.mtls }} + {{- if $components.staticEndpoint.auth.mtls.secretNamespace }} + {{- $namespaceList = append $namespaceList $components.staticEndpoint.auth.mtls.secretNamespace -}} + {{- end }} + {{- end }} + {{- end }} + {{- end }} + {{- if $components.autodiscover }} + {{- range $autodiscover := $components.autodiscover }} + {{- if $autodiscover }} + {{- if $autodiscover.endpoints }} + {{- range $endpoint := $autodiscover.endpoints }} + {{- if $endpoint.auth }} + {{- if $endpoint.auth.mtls }} + {{- if $endpoint.auth.mtls.secretNamespace }} + {{- $namespaceList = append $namespaceList $endpoint.auth.mtls.secretNamespace -}} + {{- end }} + {{- end }} + {{- end }} + {{- end }} + {{- end }} + {{- end }} + {{- end }} + {{- end }} + {{- end }} + {{- end }} +{{- end }} +roleBindingNamespaces: + {{- uniq $namespaceList | toYaml | nindent 2 }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_tolerations_helper.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_tolerations_helper.tpl new 
file mode 100644 index 0000000000..3c82e82f52 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/_tolerations_helper.tpl @@ -0,0 +1,11 @@ +{{- /* +This chart deploys what should really be three separate charts, to keep the transition to v3 as smooth as possible. +This means the chart carries three tolerations settings, so a helper is needed per scraper. +*/ -}} +{{- define "nriKubernetes.controlPlane.tolerations" -}} +{{- if .Values.controlPlane.tolerations -}} + {{- toYaml .Values.controlPlane.tolerations -}} +{{- else if include "newrelic.common.tolerations" . -}} + {{- include "newrelic.common.tolerations" . -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/agent-configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/agent-configmap.yaml new file mode 100644 index 0000000000..77f2e11dde --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/agent-configmap.yaml @@ -0,0 +1,18 @@ +{{- if .Values.controlPlane.enabled -}} +{{- if .Values.customAttributes | kindIs "string" }} +{{- fail ( include "newrelic.compatibility.message.customAttributes" . ) -}} +{{- else -}} +apiVersion: v1 +kind: ConfigMap +metadata: + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.controlplane.fullname.agent" . }} +data: + newrelic-infra.yml: |- + # This is the configuration file for the infrastructure agent. See: + # https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ + {{- include "nriKubernetes.controlPlane.agentConfig" .
| nindent 4 }} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/clusterrole.yaml new file mode 100644 index 0000000000..57633e7f7b --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/clusterrole.yaml @@ -0,0 +1,47 @@ +{{- if and (.Values.controlPlane.enabled) (.Values.rbac.create) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.controlplane.fullname" . }} +rules: + - apiGroups: [""] + resources: + - "nodes/metrics" + - "nodes/stats" + - "nodes/proxy" + verbs: ["get", "list"] + - apiGroups: [ "" ] + resources: + - "pods" + - "nodes" + verbs: [ "get", "list", "watch" ] + - nonResourceURLs: ["/metrics"] + verbs: ["get", "head"] + {{- if .Values.rbac.pspEnabled }} + - apiGroups: + - extensions + resources: + - podsecuritypolicies + resourceNames: + - privileged-{{ include "newrelic.common.naming.fullname" . }} + verbs: + - use + {{- end -}} +{{- $namespaces := include "nriKubernetes.controlPlane.roleBindingNamespaces" . | fromYaml -}} +{{- if $namespaces.roleBindingNamespaces}} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.naming.secrets" . 
}} +rules: + - apiGroups: [""] + resources: + - "secrets" + verbs: ["get", "list", "watch"] +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/clusterrolebinding.yaml new file mode 100644 index 0000000000..4e35300942 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/clusterrolebinding.yaml @@ -0,0 +1,16 @@ +{{- if and (.Values.controlPlane.enabled) (.Values.rbac.create) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.controlplane.fullname" . }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "nriKubernetes.controlplane.fullname" . }} +subjects: + - kind: ServiceAccount + name: {{ include "nriKubernetes.controlplane.fullname.serviceAccount" . }} + namespace: {{ .Release.Namespace }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/daemonset.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/daemonset.yaml new file mode 100644 index 0000000000..938fc48d45 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/daemonset.yaml @@ -0,0 +1,205 @@ +{{- if and (.Values.controlPlane.enabled) (not (include "newrelic.fargate" .)) }} +apiVersion: apps/v1 +kind: {{ .Values.controlPlane.kind }} +metadata: + namespace: {{ .Release.Namespace }} + labels: + {{- include "nriKubernetes.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.controlplane.fullname" . }} + {{- $legacyAnnotation:= fromYaml (include "newrelic.compatibility.annotations" .) 
-}} + {{- with include "newrelic.compatibility.valueWithFallback" (dict "legacy" $legacyAnnotation "supported" .Values.controlPlane.annotations )}} + annotations: {{ . | nindent 4 }} + {{- end }} +spec: + {{- if eq .Values.controlPlane.kind "DaemonSet"}} + {{- with .Values.updateStrategy }} + updateStrategy: {{ toYaml . | nindent 4 }} + {{- end }} + {{- end }} + {{- if eq .Values.controlPlane.kind "Deployment"}} + {{- with .Values.strategy }} + strategy: {{ toYaml . | nindent 4 }} + {{- end }} + {{- end }} + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }} + app.kubernetes.io/component: controlplane + template: + metadata: + annotations: + checksum/nri-kubernetes: {{ include (print $.Template.BasePath "/controlplane/scraper-configmap.yaml") . | sha256sum }} + checksum/agent-config: {{ include (print $.Template.BasePath "/controlplane/agent-configmap.yaml") . | sha256sum }} + {{- if include "newrelic.common.license.secret" . }}{{- /* If there is a secret to template */}} + checksum/license-secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }} + {{- end }} + {{- with .Values.podAnnotations }} + {{- toYaml . | nindent 8 }} + {{- end }} + labels: + {{- include "nriKubernetes.labels.podLabels" . | nindent 8 }} + app.kubernetes.io/component: controlplane + spec: + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.images.pullSecrets) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 8 }} + {{- end }} + hostNetwork: {{ include "nriKubernetes.controlPlane.hostNetwork.value" . }} + {{- if include "nriKubernetes.controlPlane.hostNetwork" . }} + dnsPolicy: ClusterFirstWithHostNet + {{- end }} + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} + {{- with include "newrelic.common.securityContext.pod" . 
}} + securityContext: + {{- . | nindent 8 }} + {{- end }} + serviceAccountName: {{ include "nriKubernetes.controlplane.fullname.serviceAccount" . }} + + {{- if .Values.controlPlane.initContainers }} + initContainers: {{- tpl (.Values.controlPlane.initContainers | toYaml) . | nindent 8 }} + {{- end }} + containers: + - name: controlplane + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }} + imagePullPolicy: {{ .Values.images.integration.pullPolicy }} + {{- with include "nriKubernetes.securityContext.container" . | fromYaml }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + env: + - name: "NRI_KUBERNETES_SINK_HTTP_PORT" + value: {{ get (fromYaml (include "nriKubernetes.controlPlane.agentConfig" .)) "http_server_port" | quote }} + - name: "NRI_KUBERNETES_CLUSTERNAME" + value: {{ include "newrelic.common.cluster" . }} + - name: "NRI_KUBERNETES_VERBOSE" + value: {{ include "newrelic.common.verboseLog.valueAsBoolean" . | quote }} + + - name: "NRI_KUBERNETES_NODENAME" + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "spec.nodeName" + - name: "NRI_KUBERNETES_NODEIP" + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "status.hostIP" + + {{- with .Values.controlPlane.extraEnv }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.controlPlane.extraEnvFrom }} + envFrom: {{ toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: nri-kubernetes-config + mountPath: /etc/newrelic-infra/nri-kubernetes.yml + subPath: nri-kubernetes.yml + {{- with .Values.controlPlane.extraVolumeMounts }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.controlPlane.resources }} + resources: {{ toYaml . | nindent 12 }} + {{- end }} + - name: forwarder + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.forwarder "context" .) 
}} + imagePullPolicy: {{ .Values.images.forwarder.pullPolicy }} + {{- with include "nriKubernetes.securityContext.container" . | fromYaml }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + ports: + - containerPort: {{ get (fromYaml (include "nriKubernetes.controlPlane.agentConfig" .)) "http_server_port" }} + env: + - name: "NRIA_LICENSE_KEY" + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.license.secretName" . }} + key: {{ include "newrelic.common.license.secretKeyName" . }} + + - name: "NRIA_DNS_HOSTNAME_RESOLUTION" + value: "false" + + - name: "K8S_NODE_NAME" + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "spec.nodeName" + + {{- if .Values.useNodeNameAsDisplayName }} + - name: "NRIA_DISPLAY_NAME" + {{- if .Values.prefixDisplayNameWithCluster }} + value: "{{ include "newrelic.common.cluster" . }}:$(K8S_NODE_NAME)" + {{- else }} + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "spec.nodeName" + {{- end }} + {{- end }} + + {{- with .Values.controlPlane.extraEnv }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.controlPlane.extraEnvFrom }} + envFrom: + {{- toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - mountPath: /var/db/newrelic-infra/data + name: forwarder-tmpfs-data + - mountPath: /var/db/newrelic-infra/user_data + name: forwarder-tmpfs-user-data + - mountPath: /tmp + name: forwarder-tmpfs-tmp + - name: config + mountPath: /etc/newrelic-infra.yml + subPath: newrelic-infra.yml + {{- with .Values.controlPlane.extraVolumeMounts }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.controlPlane.resources }} + resources: {{ toYaml . | nindent 12 }} + {{- end }} + volumes: + - name: nri-kubernetes-config + configMap: + name: {{ include "nriKubernetes.controlplane.fullname" . 
}} + items: + - key: nri-kubernetes.yml + path: nri-kubernetes.yml + - name: forwarder-tmpfs-data + emptyDir: {} + - name: forwarder-tmpfs-user-data + emptyDir: {} + - name: forwarder-tmpfs-tmp + emptyDir: {} + - name: config + configMap: + name: {{ include "nriKubernetes.controlplane.fullname.agent" . }} + items: + - key: newrelic-infra.yml + path: newrelic-infra.yml + {{- with .Values.controlPlane.extraVolumes }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with include "nriKubernetes.controlPlane.affinity" . }} + affinity: + {{- . | nindent 8 }} + {{- end }} + {{- with include "nriKubernetes.controlPlane.tolerations" . }} + tolerations: + {{- . | nindent 8 }} + {{- end }} + nodeSelector: + kubernetes.io/os: linux + {{- with .Values.controlPlane.nodeSelector | default (fromYaml (include "newrelic.common.nodeSelector" .)) }} + {{- toYaml . | nindent 8 }} + {{- end -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/rolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/rolebinding.yaml new file mode 100644 index 0000000000..d97fc181a4 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/rolebinding.yaml @@ -0,0 +1,21 @@ +{{- if .Values.rbac.create }} +{{- $namespaces := (include "nriKubernetes.controlPlane.roleBindingNamespaces" . | fromYaml) -}} +{{- range $namespaces.roleBindingNamespaces }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + labels: + {{- include "newrelic.common.labels" $ | nindent 4 }} + name: {{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "nriKubernetes.naming.fullname" $) "suffix" .) }} + namespace: {{ . 
}} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "nriKubernetes.naming.secrets" $ }} +subjects: +- kind: ServiceAccount + name: {{ include "nriKubernetes.controlplane.fullname.serviceAccount" $ }} + namespace: {{ $.Release.Namespace }} +{{- end -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/scraper-configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/scraper-configmap.yaml new file mode 100644 index 0000000000..454665deda --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/scraper-configmap.yaml @@ -0,0 +1,36 @@ +{{- if .Values.controlPlane.enabled -}} +--- +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.controlplane.fullname" . }} + namespace: {{ .Release.Namespace }} +data: + nri-kubernetes.yml: |- + {{- (merge .Values.common.config (include "newrelic.integrationConfigDefaults" . 
| fromYaml)) | toYaml | nindent 4 }} + controlPlane: + {{- omit .Values.controlPlane.config "etcd" "scheduler" "controllerManager" "apiServer" | toYaml | nindent 6 }} + enabled: true + + {{- if .Values.controlPlane.config.etcd.enabled }} + etcd: + {{- toYaml .Values.controlPlane.config.etcd | nindent 8 -}} + {{- end -}} + + {{- if .Values.controlPlane.config.scheduler.enabled }} + scheduler: + {{- toYaml .Values.controlPlane.config.scheduler | nindent 8 -}} + {{- end -}} + + {{- if .Values.controlPlane.config.controllerManager.enabled }} + controllerManager: + {{- toYaml .Values.controlPlane.config.controllerManager | nindent 8 -}} + {{- end -}} + + {{- if .Values.controlPlane.config.apiServer.enabled }} + apiServer: + {{- toYaml .Values.controlPlane.config.apiServer | nindent 8 -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/serviceaccount.yaml new file mode 100644 index 0000000000..502e1c9865 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/controlplane/serviceaccount.yaml @@ -0,0 +1,13 @@ +{{- if include "newrelic.common.serviceAccount.create" . -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- with (include "newrelic.common.serviceAccount.annotations" .) }} + annotations: + {{- . | nindent 4 }} + {{- end }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.controlplane.fullname.serviceAccount" . 
}} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_affinity_helper.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_affinity_helper.tpl new file mode 100644 index 0000000000..ce795708d2 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_affinity_helper.tpl @@ -0,0 +1,14 @@ +{{- /* +As this chart deploys what it should be three charts to maintain the transition to v3 as smooth as possible. +This means that this chart has 3 affinity so a helper should be done per scraper. +*/ -}} +{{- define "nriKubernetes.ksm.affinity" -}} +{{- if or .Values.ksm.affinity .Values.nodeAffinity -}} + {{- $legacyNodeAffinity := fromYaml ( include "newrelic.compatibility.nodeAffinity" . ) | default dict -}} + {{- $valuesAffinity := .Values.ksm.affinity | default dict -}} + {{- $affinity := mustMergeOverwrite $legacyNodeAffinity $valuesAffinity -}} + {{- toYaml $affinity -}} +{{- else if include "newrelic.common.affinity" . -}} + {{- include "newrelic.common.affinity" . -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_agent-config_helper.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_agent-config_helper.tpl new file mode 100644 index 0000000000..e7b55644c2 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_agent-config_helper.tpl @@ -0,0 +1,20 @@ +{{- /* +Defaults for ksm's agent config +*/ -}} +{{- define "nriKubernetes.ksm.agentConfig.defaults" -}} +is_forward_only: true +http_server_enabled: true +http_server_port: 8002 +{{- end -}} + + + +{{- define "nriKubernetes.ksm.agentConfig" -}} +{{- $agentDefaults := fromYaml ( include "newrelic.common.agentConfig.defaults" . 
) -}} +{{- $ksm := fromYaml ( include "nriKubernetes.ksm.agentConfig.defaults" . ) -}} +{{- $agentConfig := fromYaml ( include "newrelic.compatibility.agentConfig" . ) -}} +{{- $ksmAgentConfig := .Values.ksm.agentConfig -}} +{{- $customAttributes := dict "custom_attributes" (dict "clusterName" (include "newrelic.common.cluster" . )) -}} + +{{- mustMergeOverwrite $agentDefaults $ksm $agentConfig $ksmAgentConfig $customAttributes | toYaml -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_host_network.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_host_network.tpl new file mode 100644 index 0000000000..59a6db7be1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_host_network.tpl @@ -0,0 +1,22 @@ +{{/* Returns whether the ksm scraper should run with hostNetwork: true based on the user configuration. */}} +{{- define "nriKubernetes.ksm.hostNetwork" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if get .Values.ksm "hostNetwork" | kindIs "bool" -}} + {{- if .Values.ksm.hostNetwork -}} + {{- .Values.ksm.hostNetwork -}} + {{- end -}} +{{- else if include "newrelic.common.hostNetwork" . -}} + {{- include "newrelic.common.hostNetwork" . -}} +{{- end -}} +{{- end -}} + + + +{{/* Abstraction of "nriKubernetes.ksm.hostNetwork" that returns true or false directly */}} +{{- define "nriKubernetes.ksm.hostNetwork.value" -}} +{{- if include "nriKubernetes.ksm.hostNetwork" . 
-}} + true +{{- else -}} + false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_naming.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_naming.tpl new file mode 100644 index 0000000000..d8c283c430 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_naming.tpl @@ -0,0 +1,8 @@ +{{- /* Naming helpers*/ -}} +{{- define "nriKubernetes.ksm.fullname" -}} +{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "nriKubernetes.naming.fullname" .) "suffix" "ksm") -}} +{{- end -}} + +{{- define "nriKubernetes.ksm.fullname.agent" -}} +{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "nriKubernetes.naming.fullname" .) "suffix" "agent-ksm") -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_tolerations_helper.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_tolerations_helper.tpl new file mode 100644 index 0000000000..e1a9fd80c2 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/_tolerations_helper.tpl @@ -0,0 +1,11 @@ +{{- /* +As this chart deploys what it should be three charts to maintain the transition to v3 as smooth as possible. +This means that this chart has 3 tolerations so a helper should be done per scraper. +*/ -}} +{{- define "nriKubernetes.ksm.tolerations" -}} +{{- if .Values.ksm.tolerations -}} + {{- toYaml .Values.ksm.tolerations -}} +{{- else if include "newrelic.common.tolerations" . -}} + {{- include "newrelic.common.tolerations" . 
-}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/agent-configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/agent-configmap.yaml new file mode 100644 index 0000000000..6a438e9a3c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/agent-configmap.yaml @@ -0,0 +1,18 @@ +{{- if .Values.ksm.enabled -}} +{{- if .Values.customAttributes | kindIs "string" }} +{{- fail ( include "newrelic.compatibility.message.customAttributes" . ) -}} +{{- else -}} +apiVersion: v1 +kind: ConfigMap +metadata: + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.ksm.fullname.agent" . }} +data: + newrelic-infra.yml: |- + # This is the configuration file for the infrastructure agent. See: + # https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ + {{- include "nriKubernetes.ksm.agentConfig" . | nindent 4 }} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/deployment.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/deployment.yaml new file mode 100644 index 0000000000..507199d5a7 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/deployment.yaml @@ -0,0 +1,192 @@ +{{- if include "newrelic.compatibility.ksm.enabled" . -}} +apiVersion: apps/v1 +kind: Deployment +metadata: + namespace: {{ .Release.Namespace }} + labels: + {{- include "nriKubernetes.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.ksm.fullname" . }} + {{- $legacyAnnotation:= fromYaml (include "newrelic.compatibility.annotations" .) 
-}} + {{- with include "newrelic.compatibility.valueWithFallback" (dict "legacy" $legacyAnnotation "supported" .Values.ksm.annotations )}} + annotations: {{ . | nindent 4 }} + {{- end }} +spec: + {{- with .Values.strategy }} + strategy: {{ toYaml . | nindent 4 }} + {{- end }} + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }} + app.kubernetes.io/component: ksm + template: + metadata: + annotations: + checksum/nri-kubernetes: {{ include (print $.Template.BasePath "/ksm/scraper-configmap.yaml") . | sha256sum }} + checksum/agent-config: {{ include (print $.Template.BasePath "/ksm/agent-configmap.yaml") . | sha256sum }} + {{- if include "newrelic.common.license.secret" . }}{{- /* If there is a secret to template */}} + checksum/license-secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }} + {{- end }} + {{- with .Values.podAnnotations }} + {{- toYaml . | nindent 8 }} + {{- end }} + labels: + {{- include "nriKubernetes.labels.podLabels" . | nindent 8 }} + app.kubernetes.io/component: ksm + spec: + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.images.pullSecrets) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} + {{- with include "newrelic.common.securityContext.pod" . }} + securityContext: + {{- . | nindent 8 }} + {{- end }} + serviceAccountName: {{ include "newrelic.common.serviceAccount.name" . }} + hostNetwork: {{ include "nriKubernetes.ksm.hostNetwork.value" . }} + {{- if include "nriKubernetes.ksm.hostNetwork" . }} + dnsPolicy: ClusterFirstWithHostNet + {{- end }} + + {{- if .Values.ksm.initContainers }} + initContainers: {{- tpl (.Values.ksm.initContainers | toYaml) . 
| nindent 8 }} + {{- end }} + containers: + - name: ksm + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }} + imagePullPolicy: {{ .Values.images.integration.pullPolicy }} + {{- with include "nriKubernetes.securityContext.container" . | fromYaml }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + env: + - name: "NRI_KUBERNETES_SINK_HTTP_PORT" + value: {{ get (fromYaml (include "nriKubernetes.ksm.agentConfig" .)) "http_server_port" | quote }} + - name: "NRI_KUBERNETES_CLUSTERNAME" + value: {{ include "newrelic.common.cluster" . }} + - name: "NRI_KUBERNETES_VERBOSE" + value: {{ include "newrelic.common.verboseLog.valueAsBoolean" . | quote }} + + - name: "NRI_KUBERNETES_NODENAME" + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "spec.nodeName" + + {{- with .Values.ksm.extraEnv }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.ksm.extraEnvFrom }} + envFrom: {{ toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: nri-kubernetes-config + mountPath: /etc/newrelic-infra/nri-kubernetes.yml + subPath: nri-kubernetes.yml + {{- with .Values.ksm.extraVolumeMounts }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.ksm.resources }} + resources: {{ toYaml . | nindent 12 }} + {{- end }} + - name: forwarder + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.forwarder "context" .) }} + imagePullPolicy: {{ .Values.images.forwarder.pullPolicy }} + {{- with include "nriKubernetes.securityContext.container" . | fromYaml }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + ports: + - containerPort: {{ get (fromYaml (include "nriKubernetes.ksm.agentConfig" .)) "http_server_port" }} + env: + - name: NRIA_LICENSE_KEY + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.license.secretName" . }} + key: {{ include "newrelic.common.license.secretKeyName" . 
}} + + - name: "NRIA_DNS_HOSTNAME_RESOLUTION" + value: "false" + + - name: "K8S_NODE_NAME" + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "spec.nodeName" + + {{- if .Values.useNodeNameAsDisplayName }} + - name: "NRIA_DISPLAY_NAME" + {{- if .Values.prefixDisplayNameWithCluster }} + value: "{{ include "newrelic.common.cluster" . }}:$(K8S_NODE_NAME)" + {{- else }} + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "spec.nodeName" + {{- end }} + {{- end }} + + {{- with .Values.ksm.env }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.ksm.extraEnvFrom }} + envFrom: {{ toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - mountPath: /var/db/newrelic-infra/data + name: forwarder-tmpfs-data + - mountPath: /var/db/newrelic-infra/user_data + name: forwarder-tmpfs-user-data + - mountPath: /tmp + name: forwarder-tmpfs-tmp + - name: config + mountPath: /etc/newrelic-infra.yml + subPath: newrelic-infra.yml + {{- with .Values.ksm.extraVolumeMounts }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.ksm.resources }} + resources: {{ toYaml . | nindent 12 }} + {{- end }} + volumes: + - name: nri-kubernetes-config + configMap: + name: {{ include "nriKubernetes.ksm.fullname" . }} + items: + - key: nri-kubernetes.yml + path: nri-kubernetes.yml + - name: forwarder-tmpfs-data + emptyDir: {} + - name: forwarder-tmpfs-user-data + emptyDir: {} + - name: forwarder-tmpfs-tmp + emptyDir: {} + - name: config + configMap: + name: {{ include "nriKubernetes.ksm.fullname.agent" . }} + items: + - key: newrelic-infra.yml + path: newrelic-infra.yml + {{- with .Values.ksm.extraVolumes }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with include "nriKubernetes.ksm.affinity" . }} + affinity: + {{- . | nindent 8 }} + {{- end }} + {{- with include "nriKubernetes.ksm.tolerations" . }} + tolerations: + {{- . 
| nindent 8 }} + {{- end }} + nodeSelector: + kubernetes.io/os: linux + {{- with .Values.ksm.nodeSelector | default (fromYaml (include "newrelic.common.nodeSelector" .)) }} + {{- toYaml . | nindent 8 }} + {{- end -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/scraper-configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/scraper-configmap.yaml new file mode 100644 index 0000000000..3314df9c77 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/ksm/scraper-configmap.yaml @@ -0,0 +1,15 @@ +{{- if include "newrelic.compatibility.ksm.enabled" . -}} +--- +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.ksm.fullname" . }} + namespace: {{ .Release.Namespace }} +data: + nri-kubernetes.yml: |- + {{- (merge .Values.common.config (include "newrelic.integrationConfigDefaults" . | fromYaml)) | toYaml | nindent 4 }} + ksm: + {{- mustMergeOverwrite .Values.ksm.config (include "newrelic.compatibility.ksm.legacyData" . 
| fromYaml) | toYaml | nindent 6 -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_affinity_helper.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_affinity_helper.tpl new file mode 100644 index 0000000000..a3abf0855e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_affinity_helper.tpl @@ -0,0 +1,33 @@ +{{- /* +Patch to add affinity in case we are running in fargate mode +*/ -}} +{{- define "nriKubernetes.kubelet.affinity.fargateDefaults" -}} +nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: eks.amazonaws.com/compute-type + operator: NotIn + values: + - fargate +{{- end -}} + + + +{{- /* +As this chart deploys what it should be three charts to maintain the transition to v3 as smooth as possible. +This means that this chart has 3 affinity so a helper should be done per scraper. +*/ -}} +{{- define "nriKubernetes.kubelet.affinity" -}} + +{{- if or .Values.kubelet.affinity .Values.nodeAffinity -}} + {{- $legacyNodeAffinity := fromYaml ( include "newrelic.compatibility.nodeAffinity" . ) | default dict -}} + {{- $valuesAffinity := .Values.kubelet.affinity | default dict -}} + {{- $affinity := mustMergeOverwrite $legacyNodeAffinity $valuesAffinity -}} + {{- toYaml $affinity -}} +{{- else if include "newrelic.common.affinity" . -}} + {{- include "newrelic.common.affinity" . -}} +{{- else if include "newrelic.fargate" . -}} + {{- include "nriKubernetes.kubelet.affinity.fargateDefaults" . 
-}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_agent-config_helper.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_agent-config_helper.tpl new file mode 100644 index 0000000000..ea6ffc25f3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_agent-config_helper.tpl @@ -0,0 +1,31 @@ +{{- /* +Defaults for kubelet's agent config +*/ -}} +{{- define "nriKubernetes.kubelet.agentConfig.defaults" -}} +http_server_enabled: true +http_server_port: 8003 +features: + docker_enabled: false +{{- if not ( include "newrelic.common.privileged" . ) }} +is_secure_forward_only: true +{{- end }} +{{- /* +`enableProcessMetrics` is commented in the values and we want to configure it when it is set to something +either `true` or `false`. So we test if the variable is a boolean and in that case simply use it. +*/}} +{{- if (get .Values "enableProcessMetrics" | kindIs "bool") }} +enable_process_metrics: {{ .Values.enableProcessMetrics }} +{{- end }} +{{- end -}} + + + +{{- define "nriKubernetes.kubelet.agentConfig" -}} +{{- $agentDefaults := fromYaml ( include "newrelic.common.agentConfig.defaults" . ) -}} +{{- $kubelet := fromYaml ( include "nriKubernetes.kubelet.agentConfig.defaults" . ) -}} +{{- $agentConfig := fromYaml ( include "newrelic.compatibility.agentConfig" . ) -}} +{{- $kubeletAgentConfig := .Values.kubelet.agentConfig -}} +{{- $customAttributes := dict "custom_attributes" (dict "clusterName" (include "newrelic.common.cluster" . 
)) -}} + +{{- mustMergeOverwrite $agentDefaults $kubelet $agentConfig $kubeletAgentConfig $customAttributes | toYaml -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_host_network.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_host_network.tpl new file mode 100644 index 0000000000..7944f98a72 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_host_network.tpl @@ -0,0 +1,22 @@ +{{/* Returns whether the kubelet scraper should run with hostNetwork: true based on the user configuration. */}} +{{- define "nriKubernetes.kubelet.hostNetwork" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if get .Values.kubelet "hostNetwork" | kindIs "bool" -}} + {{- if .Values.kubelet.hostNetwork -}} + {{- .Values.kubelet.hostNetwork -}} + {{- end -}} +{{- else if include "newrelic.common.hostNetwork" . -}} + {{- include "newrelic.common.hostNetwork" . -}} +{{- end -}} +{{- end -}} + + + +{{/* Abstraction of "nriKubernetes.kubelet.hostNetwork" that returns true or false directly */}} +{{- define "nriKubernetes.kubelet.hostNetwork.value" -}} +{{- if include "nriKubernetes.kubelet.hostNetwork" . -}} + true +{{- else -}} + false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_naming.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_naming.tpl new file mode 100644 index 0000000000..71c142156d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_naming.tpl @@ -0,0 +1,12 @@ +{{- /* Naming helpers */ -}} +{{- define "nriKubernetes.kubelet.fullname" -}} +{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "nriKubernetes.naming.fullname" .) 
"suffix" "kubelet") -}} +{{- end -}} + +{{- define "nriKubernetes.kubelet.fullname.agent" -}} +{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "nriKubernetes.naming.fullname" .) "suffix" "agent-kubelet") -}} +{{- end -}} + +{{- define "nriKubernetes.kubelet.fullname.integrations" -}} +{{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "nriKubernetes.naming.fullname" .) "suffix" "integrations-cfg") -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_security_context_helper.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_security_context_helper.tpl new file mode 100644 index 0000000000..4e334466c1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_security_context_helper.tpl @@ -0,0 +1,32 @@ +{{- /*This defines the defaults that the privileged mode has for the agent's securityContext */ -}} +{{- define "nriKubernetes.kubelet.securityContext.privileged" -}} +runAsUser: 0 +runAsGroup: 0 +allowPrivilegeEscalation: true +privileged: true +readOnlyRootFilesystem: true +{{- end -}} + + + +{{- /* This is the container security context for the agent */ -}} +{{- define "nriKubernetes.kubelet.securityContext.agentContainer" -}} +{{- $defaults := dict -}} +{{- if include "newrelic.common.privileged" . -}} +{{- $defaults = fromYaml ( include "nriKubernetes.kubelet.securityContext.privileged" . ) -}} +{{- else -}} +{{- $defaults = fromYaml ( include "nriKubernetes.securityContext.containerDefaults" . ) -}} +{{- end -}} + +{{- $compatibilityLayer := include "newrelic.compatibility.securityContext" . | fromYaml -}} +{{- $commonLibrary := include "newrelic.common.securityContext.container" . 
| fromYaml -}} + +{{- $finalSecurityContext := dict -}} +{{- if $commonLibrary -}} + {{- $finalSecurityContext = mustMergeOverwrite $commonLibrary $compatibilityLayer -}} +{{- else -}} + {{- $finalSecurityContext = mustMergeOverwrite $defaults $compatibilityLayer -}} +{{- end -}} + +{{- toYaml $finalSecurityContext -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_tolerations_helper.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_tolerations_helper.tpl new file mode 100644 index 0000000000..e46d83d69d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/_tolerations_helper.tpl @@ -0,0 +1,11 @@ +{{- /* +This chart deploys what should really be three charts, to keep the transition to v3 as smooth as possible. +This means that this chart has 3 sets of tolerations, so a helper is needed per scraper. +*/ -}} +{{- define "nriKubernetes.kubelet.tolerations" -}} +{{- if .Values.kubelet.tolerations -}} + {{- toYaml .Values.kubelet.tolerations -}} +{{- else if include "newrelic.common.tolerations" . -}} + {{- include "newrelic.common.tolerations" . -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/agent-configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/agent-configmap.yaml new file mode 100644 index 0000000000..0f71f129ae --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/agent-configmap.yaml @@ -0,0 +1,18 @@ +{{- if .Values.kubelet.enabled -}} +{{- if .Values.customAttributes | kindIs "string" }} +{{- fail ( include "newrelic.compatibility.message.customAttributes" . ) -}} +{{- else -}} +apiVersion: v1 +kind: ConfigMap +metadata: + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" .
| nindent 4 }} + name: {{ include "nriKubernetes.kubelet.fullname.agent" . }} +data: + newrelic-infra.yml: |- + # This is the configuration file for the infrastructure agent. See: + # https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ + {{- include "nriKubernetes.kubelet.agentConfig" . | nindent 4 }} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/daemonset.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/daemonset.yaml new file mode 100644 index 0000000000..517079be76 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/daemonset.yaml @@ -0,0 +1,265 @@ +{{- if (.Values.kubelet.enabled) }} +apiVersion: apps/v1 +kind: DaemonSet +metadata: + namespace: {{ .Release.Namespace }} + labels: + {{- include "nriKubernetes.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.kubelet.fullname" . }} + {{- $legacyAnnotation:= fromYaml (include "newrelic.compatibility.annotations" .) -}} + {{- with include "newrelic.compatibility.valueWithFallback" (dict "legacy" $legacyAnnotation "supported" .Values.kubelet.annotations )}} + annotations: {{ . | nindent 4 }} + {{- end }} +spec: + {{- with .Values.updateStrategy }} + updateStrategy: {{ toYaml . | nindent 4 }} + {{- end }} + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }} + app.kubernetes.io/component: kubelet + template: + metadata: + annotations: + checksum/nri-kubernetes: {{ include (print $.Template.BasePath "/kubelet/scraper-configmap.yaml") . | sha256sum }} + checksum/agent-config: {{ include (print $.Template.BasePath "/kubelet/agent-configmap.yaml") . | sha256sum }} + {{- if include "newrelic.common.license.secret" . 
}}{{- /* If there is a secret to template */}} + checksum/license-secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }} + {{- end }} + checksum/integrations_config: {{ include (print $.Template.BasePath "/kubelet/integrations-configmap.yaml") . | sha256sum }} + {{- with .Values.podAnnotations }} + {{- toYaml . | nindent 8 }} + {{- end }} + labels: + {{- include "nriKubernetes.labels.podLabels" . | nindent 8 }} + app.kubernetes.io/component: kubelet + spec: + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.images.pullSecrets) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} + {{- with include "newrelic.common.securityContext.pod" . }} + securityContext: + {{- . | nindent 8 }} + {{- end }} + serviceAccountName: {{ include "newrelic.common.serviceAccount.name" . }} + hostNetwork: {{ include "nriKubernetes.kubelet.hostNetwork.value" . }} + {{- if include "nriKubernetes.kubelet.hostNetwork" . }} + dnsPolicy: ClusterFirstWithHostNet + {{- end }} + + {{- if .Values.kubelet.initContainers }} + initContainers: {{- tpl (.Values.kubelet.initContainers | toYaml) . | nindent 8 }} + {{- end }} + containers: + - name: kubelet + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }} + imagePullPolicy: {{ .Values.images.integration.pullPolicy }} + {{- with include "nriKubernetes.securityContext.container" . | fromYaml }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + env: + - name: "NRI_KUBERNETES_SINK_HTTP_PORT" + value: {{ get (fromYaml (include "nriKubernetes.kubelet.agentConfig" .)) "http_server_port" | quote }} + - name: "NRI_KUBERNETES_CLUSTERNAME" + value: {{ include "newrelic.common.cluster" .
}} + - name: "NRI_KUBERNETES_VERBOSE" + value: {{ include "newrelic.common.verboseLog.valueAsBoolean" . | quote }} + + - name: "NRI_KUBERNETES_NODENAME" + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "spec.nodeName" + # Required to connect to the kubelet + - name: "NRI_KUBERNETES_NODEIP" + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "status.hostIP" + + {{- with .Values.kubelet.extraEnv }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.kubelet.extraEnvFrom }} + envFrom: {{ toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: nri-kubernetes-config + mountPath: /etc/newrelic-infra/nri-kubernetes.yml + subPath: nri-kubernetes.yml + {{- with .Values.kubelet.extraVolumeMounts }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.kubelet.resources }} + resources: {{ toYaml . | nindent 12 }} + {{- end }} + - name: agent + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.agent "context" .) }} + args: [ "newrelic-infra" ] + imagePullPolicy: {{ .Values.images.agent.pullPolicy }} + {{- with include "nriKubernetes.kubelet.securityContext.agentContainer" . | fromYaml }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} + ports: + - containerPort: {{ get (fromYaml (include "nriKubernetes.kubelet.agentConfig" .)) "http_server_port" }} + env: + - name: NRIA_LICENSE_KEY + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.license.secretName" . }} + key: {{ include "newrelic.common.license.secretKeyName" . }} + + - name: "NRIA_OVERRIDE_HOSTNAME_SHORT" + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "spec.nodeName" + + - name: "NRIA_OVERRIDE_HOSTNAME" + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "spec.nodeName" + + {{- if not (include "newrelic.common.privileged" .) }} + # Override NRIA_OVERRIDE_HOST_ROOT to empty if unprivileged. 
This must be done as an env var as the + # `k8s-events-forwarder` and `infrastructure-bundle` images ship this very same env var set to /host. + - name: "NRIA_OVERRIDE_HOST_ROOT" + value: "" + {{- end }} + + - name: "NRI_KUBERNETES_NODE_NAME" + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "spec.nodeName" + + {{- if .Values.useNodeNameAsDisplayName }} + - name: "NRIA_DISPLAY_NAME" + {{- if .Values.prefixDisplayNameWithCluster }} + value: "{{ include "newrelic.common.cluster" . }}:$(NRI_KUBERNETES_NODE_NAME)" + {{- else }} + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "spec.nodeName" + {{- end }} + {{- end }} + + {{- /* Needed to populate clustername in integration metrics */}} + - name: "CLUSTER_NAME" + value: {{ include "newrelic.common.cluster" . }} + - name: "NRIA_PASSTHROUGH_ENVIRONMENT" + value: "CLUSTER_NAME" + + {{- /* Needed for autodiscovery since hostNetwork=false */}} + - name: "NRIA_HOST" + valueFrom: + fieldRef: + apiVersion: "v1" + fieldPath: "status.hostIP" + + {{- with .Values.kubelet.extraEnv }} + {{- toYaml . | nindent 12 }} + {{- end }} + {{- with .Values.kubelet.extraEnvFrom }} + envFrom: {{ toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: config + mountPath: /etc/newrelic-infra.yml + subPath: newrelic-infra.yml + - name: nri-integrations-cfg-volume + mountPath: /etc/newrelic-infra/integrations.d/ + {{- if include "newrelic.common.privileged" . }} + - name: dev + mountPath: /dev + - name: host-containerd-socket + mountPath: /run/containerd/containerd.sock + - name: host-docker-socket + mountPath: /var/run/docker.sock + - name: log + mountPath: /var/log + - name: host-volume + mountPath: /host + mountPropagation: HostToContainer + readOnly: true + {{- end }} + - mountPath: /var/db/newrelic-infra/data + name: agent-tmpfs-data + - mountPath: /var/db/newrelic-infra/user_data + name: agent-tmpfs-user-data + - mountPath: /tmp + name: agent-tmpfs-tmp + {{- with .Values.kubelet.extraVolumeMounts }} + {{- toYaml . 
| nindent 12 }} + {{- end }} + {{- with .Values.kubelet.resources }} + resources: {{ toYaml . | nindent 12 }} + {{- end }} + volumes: + {{- if include "newrelic.common.privileged" . }} + - name: dev + hostPath: + path: /dev + - name: host-containerd-socket + hostPath: + path: /run/containerd/containerd.sock + - name: host-docker-socket + hostPath: + path: /var/run/docker.sock + - name: log + hostPath: + path: /var/log + - name: host-volume + hostPath: + path: / + {{- end }} + - name: agent-tmpfs-data + emptyDir: {} + - name: agent-tmpfs-user-data + emptyDir: {} + - name: agent-tmpfs-tmp + emptyDir: {} + - name: nri-kubernetes-config + configMap: + name: {{ include "nriKubernetes.kubelet.fullname" . }} + items: + - key: nri-kubernetes.yml + path: nri-kubernetes.yml + - name: config + configMap: + name: {{ include "nriKubernetes.kubelet.fullname.agent" . }} + items: + - key: newrelic-infra.yml + path: newrelic-infra.yml + - name: nri-integrations-cfg-volume + configMap: + name: {{ include "nriKubernetes.kubelet.fullname.integrations" . }} + {{- with .Values.kubelet.extraVolumes }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with include "nriKubernetes.kubelet.affinity" . }} + affinity: + {{- . | nindent 8 }} + {{- end }} + {{- with include "nriKubernetes.kubelet.tolerations" . }} + tolerations: + {{- . | nindent 8 }} + {{- end }} + nodeSelector: + kubernetes.io/os: linux + {{- with .Values.kubelet.nodeSelector | default (fromYaml (include "newrelic.common.nodeSelector" .)) }} + {{- toYaml . 
| nindent 8 }} + {{- end -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/integrations-configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/integrations-configmap.yaml new file mode 100644 index 0000000000..abf381f383 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/integrations-configmap.yaml @@ -0,0 +1,72 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.kubelet.fullname.integrations" . }} +data: + # This ConfigMap holds config files for integrations. They should have the following format: + #redis-config.yml: | + # # Run auto discovery to find pods with label "app=redis" + # discovery: + # command: + # # Run discovery for Kubernetes. Use the following optional arguments: + # # --namespaces: Comma separated list of namespaces to discover pods on + # # --tls: Use secure (TLS) connection + # # --port: Port used to connect to the kubelet. Default is 10255 + # exec: /var/db/newrelic-infra/nri-discovery-kubernetes --port PORT --tls + # match: + # label.app: redis + # integrations: + # - name: nri-redis + # env: + # # using the discovered IP as the hostname address + # HOSTNAME: ${discovery.ip} + # PORT: 6379 + # KEYS: '{"0":[""],"1":[""]}' + # REMOTE_MONITORING: true + # labels: + # env: production + {{- if .Values.integrations -}} + {{- range $k, $v := .Values.integrations -}} + {{- $k | trimSuffix ".yaml" | trimSuffix ".yml" | nindent 2 -}}.yaml: |- + {{- $v | toYaml | nindent 4 -}} + {{- end }} + {{- end }} + + {{- /* This template will add and template the integrations in the old .Values.integrations_config */}} + {{- include "newrelic.compatibility.integrations" . 
| nindent 2 }} + + {{- /* This template will add Pixie Health check to the integrations */}} + {{- if .Values.selfMonitoring.pixie.enabled }} + pixie-health-check.yaml: | + --- + # This Flex config performs periodic checks of the Pixie + # /healthz and /statusz endpoints exposed by the Pixie Cloud Connector. + # A status for each endpoint is sent to New Relic in a pixieHealthCheck event. + # + # If Pixie is not installed in the cluster, no events will be generated. + # This can also be disabled with enablePixieHealthCheck: false in the values.yaml file. + discovery: + command: + exec: /var/db/newrelic-infra/nri-discovery-kubernetes --tls --port 10250 + match: + label.name: vizier-cloud-connector + integrations: + - name: nri-flex + interval: 60s + config: + name: pixie-health-check + apis: + - event_type: pixieHealth + commands: + - run: curl --insecure -s https://${discovery.ip}:50800/healthz | xargs | awk '{print "cloud_connector_health:"$1}' + split_by: ":" + merge: pixieHealthCheck + - event_type: pixieStatus + commands: + - run: curl --insecure -s https://${discovery.ip}:50800/statusz | awk '{if($1 == ""){ print "cloud_connector_status:OK" } else { print "cloud_connector_status:"$1 }}' + split_by: ":" + merge: pixieHealthCheck + {{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/scraper-configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/scraper-configmap.yaml new file mode 100644 index 0000000000..e43b5227fc --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/kubelet/scraper-configmap.yaml @@ -0,0 +1,18 @@ +{{- if .Values.kubelet.enabled -}} +--- +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "nriKubernetes.kubelet.fullname" . 
}} + namespace: {{ .Release.Namespace }} +data: + nri-kubernetes.yml: | + {{- (merge .Values.common.config (include "newrelic.integrationConfigDefaults" . | fromYaml)) | toYaml | nindent 4 }} + kubelet: + enabled: true + {{- if .Values.kubelet.config }} + {{- toYaml .Values.kubelet.config | nindent 6 }} + {{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/podsecuritypolicy.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/podsecuritypolicy.yaml new file mode 100644 index 0000000000..5b5058511b --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/podsecuritypolicy.yaml @@ -0,0 +1,26 @@ +{{- if .Values.rbac.pspEnabled }} +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: privileged-{{ include "newrelic.common.naming.fullname" . }} +spec: + allowedCapabilities: + - '*' + fsGroup: + rule: RunAsAny + privileged: true + runAsUser: + rule: RunAsAny + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + volumes: + - '*' + hostPID: true + hostIPC: true + hostNetwork: true + hostPorts: + - min: 1 + max: 65536 +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/secret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/secret.yaml new file mode 100644 index 0000000000..f558ee86c9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/secret.yaml @@ -0,0 +1,2 @@ +{{- /* Common library will take care of creating the secret or not. */}} +{{- include "newrelic.common.license.secret" . 
}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/serviceaccount.yaml new file mode 100644 index 0000000000..f987cc5124 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/templates/serviceaccount.yaml @@ -0,0 +1,13 @@ +{{- if include "newrelic.common.serviceAccount.create" . -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- with (include "newrelic.common.serviceAccount.annotations" .) }} + annotations: + {{- . | nindent 4 }} + {{- end }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/values.yaml new file mode 100644 index 0000000000..8cda93b3bb --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-infrastructure/values.yaml @@ -0,0 +1,602 @@ +# -- Override the name of the chart +nameOverride: "" +# -- Override the full name of the release +fullnameOverride: "" + +# -- Name of the Kubernetes cluster monitored. Can also be configured with `global.cluster` +cluster: "" +# -- Sets the license key to use. Can also be configured with `global.licenseKey` +licenseKey: "" +# -- In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there. Can also be configured with `global.customSecretName` +customSecretName: "" +# -- In case you don't want to have the license key in your values, this allows you to point to the key within that secret where the license key is located. Can also be configured with `global.customSecretLicenseKey` +customSecretLicenseKey: "" + +# -- Images used by the chart for the integration and agents.
+# @default -- See `values.yaml` +images: + # -- The secrets that are needed to pull images from a custom registry. + pullSecrets: [] + # - name: regsecret + # -- Image for the New Relic Infrastructure Agent sidecar. + # @default -- See `values.yaml` + forwarder: + registry: "" + repository: newrelic/k8s-events-forwarder + tag: 1.57.2 + pullPolicy: IfNotPresent + # -- Image for the New Relic Infrastructure Agent plus integrations. + # @default -- See `values.yaml` + agent: + registry: "" + repository: newrelic/infrastructure-bundle + tag: 3.2.55 + pullPolicy: IfNotPresent + # -- Image for the New Relic Kubernetes integration. + # @default -- See `values.yaml` + integration: + registry: "" + repository: newrelic/nri-kubernetes + tag: + pullPolicy: IfNotPresent + +# -- Config that applies to all instances of the solution: kubelet, ksm, control plane and sidecars. +# @default -- See `values.yaml` +common: + # Configuration entries that apply to all instances of the integration: kubelet, ksm and control plane. + config: + # common.config.interval -- (duration) Intervals larger than 40s are not supported and will cause the NR UI to not + # behave properly. Any non-nil value will override the `lowDataMode` default. + # @default -- `15s` (See [Low data mode](README.md#low-data-mode)) + interval: + # -- Config for filtering ksm and kubelet metrics by namespace. + namespaceSelector: {} + # If you want to include only namespaces with a given label you could do so by adding: + # matchLabels: + # newrelic.com/scrape: true + # Otherwise you can build more complex filters and include or exclude certain namespaces by adding one or more + # expressions, for instance: + # matchExpressions: + # - {key: newrelic.com/scrape, operator: NotIn, values: ["false"]} + + # -- Config for the Infrastructure agent. + # Will be used by the forwarder sidecars and the agent running integrations.
+ # See: https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ + agentConfig: {} + +# lowDataMode -- (bool) Send less data by increasing the interval from `15s` (the default when `lowDataMode` is `false` or `nil`) to `30s`. +# Non-nil values of `common.config.interval` will override this value. +# @default -- `false` (See [Low data mode](README.md#low-data-mode)) +lowDataMode: + +# sink -- Configuration for the scraper sink. +sink: + http: + # -- The amount of time the scraper container probes the infra agent sidecar container before giving up and restarting during pod start. + probeTimeout: 90s + # -- The amount of time the scraper container backs off when it fails to probe the infra agent sidecar. + probeBackoff: 5s + +# kubelet -- Configuration for the DaemonSet that collects metrics from the Kubelet. +# @default -- See `values.yaml` +kubelet: + # -- Enable kubelet monitoring. + # Advanced users only. Setting this to `false` is not supported and will break the New Relic experience. + enabled: true + annotations: {} + # -- Tolerations for the kubelet DaemonSet. + # @default -- Schedules in all tainted nodes + tolerations: + - operator: "Exists" + effect: "NoSchedule" + - operator: "Exists" + effect: "NoExecute" + nodeSelector: {} + # -- (bool) Sets the pod's hostNetwork. When set, it bypasses the global/common variable + # @default -- Not set + hostNetwork: + affinity: {} + # -- Config for the Infrastructure agent that will forward the metrics to the backend and will run the integrations in this cluster. + # It will be merged with the configuration in `.common.agentConfig`. You can see all the agent configurations in + # [New Relic docs](https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/) + # e.g.
you can set `passthrough_environment` in the [config file](https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/configure-infrastructure-agent/#config-file) + # so the agent passes those environment variables through to the integrations. + agentConfig: {} + # passthrough_environment: + # - A_ENVIRONMENT_VARIABLE_SET_IN_extraEnv + # - A_ENVIRONMENT_VARIABLE_SET_IN_A_CONFIG_MAP_SET_IN_extraEnvFrom + + # -- Add user environment variables to the agent + extraEnv: [] + # -- Add user environment variables from configMaps or secrets to the agent + extraEnvFrom: [] + # -- Volumes to mount in the containers + extraVolumes: [] + # -- Defines where to mount volumes specified with `extraVolumes` + extraVolumeMounts: [] + initContainers: [] + resources: + limits: + memory: 300M + requests: + cpu: 100m + memory: 150M + config: + # -- Timeout for the kubelet APIs contacted by the integration + timeout: 10s + # -- Number of retries after timeout expired + retries: 3 + # -- Max number of scraper reruns when a scraper runtime error happens + scraperMaxReruns: 4 + # port: + # scheme: + +# ksm -- Configuration for the Deployment that collects state metrics from KSM (kube-state-metrics). +# @default -- See `values.yaml` +ksm: + # -- Enable cluster state monitoring. + # Advanced users only. Setting this to `false` is not supported and will break the New Relic experience. + enabled: true + annotations: {} + # -- Tolerations for the KSM Deployment. + # @default -- Schedules in all tainted nodes + tolerations: + - operator: "Exists" + effect: "NoSchedule" + - operator: "Exists" + effect: "NoExecute" + nodeSelector: {} + # -- (bool) Sets the pod's hostNetwork. When set, it bypasses the global/common variable + # @default -- Not set + hostNetwork: + # -- Affinity for the KSM Deployment.
+ # @default -- Deployed in the same node as KSM + affinity: + podAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - podAffinityTerm: + topologyKey: kubernetes.io/hostname + labelSelector: + matchLabels: + app.kubernetes.io/name: kube-state-metrics + weight: 100 + # -- Config for the Infrastructure agent that will forward the metrics to the backend. It will be merged with the configuration in `.common.agentConfig` + # See: https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ + agentConfig: {} + extraEnv: [] + extraEnvFrom: [] + extraVolumes: [] + extraVolumeMounts: [] + initContainers: [] + # -- Resources for the KSM scraper pod. + # Keep in mind that sharding is not supported at the moment, so memory usage for this component ramps up quickly on + # large clusters. + # @default -- 100m/150M -/850M + resources: + limits: + memory: 850M # Bump me up if KSM pod shows restarts. + requests: + cpu: 100m + memory: 150M + config: + # -- Timeout for the ksm API contacted by the integration + timeout: 10s + # -- Number of retries after timeout expired + retries: 3 + # -- if specified autodiscovery is not performed and the specified URL is used + # staticUrl: "http://test.io:8080/metrics" + # -- Label selector that will be used to automatically discover an instance of kube-state-metrics running in the cluster. + selector: "app.kubernetes.io/name=kube-state-metrics" + # -- Scheme to use to connect to kube-state-metrics. Supported values are `http` and `https`. + scheme: "http" + # -- Restrict autodiscovery of the kube-state-metrics endpoint to those using a specific port. If empty or `0`, all endpoints are considered regardless of their port (recommended). + # port: 8080 + # -- Restrict autodiscovery of the kube-state-metrics service to a particular namespace. + # @default -- All namespaces are searched (recommended). 
+ # namespace: "ksm-namespace" + +# controlPlane -- Configuration for the control plane scraper. +# @default -- See `values.yaml` +controlPlane: + # -- Deploy control plane monitoring component. + enabled: true + annotations: {} + # -- Tolerations for the control plane DaemonSet. + # @default -- Schedules in all tainted nodes + tolerations: + - operator: "Exists" + effect: "NoSchedule" + - operator: "Exists" + effect: "NoExecute" + nodeSelector: {} + # -- Affinity for the control plane DaemonSet. + # @default -- Deployed only in master nodes. + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: node-role.kubernetes.io/control-plane + operator: Exists + - matchExpressions: + - key: node-role.kubernetes.io/controlplane + operator: Exists + - matchExpressions: + - key: node-role.kubernetes.io/etcd + operator: Exists + - matchExpressions: + - key: node-role.kubernetes.io/master + operator: Exists + # -- How to deploy the control plane scraper. If autodiscovery is in use, it should be `DaemonSet`. + # Advanced users using static endpoints set this to `Deployment` to avoid reporting metrics twice. + kind: DaemonSet + # -- Run Control Plane scraper with `hostNetwork`. + # `hostNetwork` is required for most control plane configurations, as they only accept connections from localhost. + hostNetwork: true + # -- Config for the Infrastructure agent that will forward the metrics to the backend. 
It will be merged with the configuration in `.common.agentConfig` + # See: https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ + agentConfig: {} + extraEnv: [] + extraEnvFrom: [] + extraVolumes: [] + extraVolumeMounts: [] + initContainers: [] + resources: + limits: + memory: 300M + requests: + cpu: 100m + memory: 150M + config: + # -- Timeout for the Kubernetes APIs contacted by the integration + timeout: 10s + # -- Number of retries after timeout expired + retries: 3 + # -- etcd monitoring configuration + # @default -- Common settings for most K8s distributions. + etcd: + # -- Enable etcd monitoring. Might require manual configuration in some environments. + enabled: true + # Discover etcd pods using the following namespaces and selectors. + # If a pod matches the selectors, the scraper will attempt to reach it through the `endpoints` defined below. + autodiscover: + - selector: "tier=control-plane,component=etcd" + namespace: kube-system + # Set to true to consider only pods sharing the node with the scraper pod. + # This should be set to `true` if Kind is Daemonset, `false` otherwise. + matchNode: true + # Try to reach etcd using the following endpoints. + endpoints: + - url: https://localhost:4001 + insecureSkipVerify: true + auth: + type: bearer + - url: http://localhost:2381 + - selector: "k8s-app=etcd-manager-main" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:4001 + insecureSkipVerify: true + auth: + type: bearer + - selector: "k8s-app=etcd" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:4001 + insecureSkipVerify: true + auth: + type: bearer + # Openshift users might want to remove previous autodiscover entries and add this one instead. + # Manual steps are required to create a secret containing the required TLS certificates to connect to etcd. 
+ # - selector: "app=etcd,etcd=true,k8s-app=etcd" + # namespace: openshift-etcd + # matchNode: true + # endpoints: + # - url: https://localhost:9979 + # insecureSkipVerify: true + # auth: + # type: mTLS + # mtls: + # secretName: secret-name + # secretNamespace: secret-namespace + + # -- staticEndpoint configuration. + # It is possible to specify static endpoint to scrape. If specified 'autodiscover' section is ignored. + # If set the static endpoint should be reachable, otherwise an error will be returned and the integration stops. + # Notice that if deployed as a daemonSet and not as a Deployment setting static URLs could lead to duplicate data + # staticEndpoint: + # url: https://url:port + # insecureSkipVerify: true + # auth: {} + + # -- Scheduler monitoring configuration + # @default -- Common settings for most K8s distributions. + scheduler: + # -- Enable scheduler monitoring. + enabled: true + autodiscover: + - selector: "tier=control-plane,component=kube-scheduler" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:10259 + insecureSkipVerify: true + auth: + type: bearer + - selector: "k8s-app=kube-scheduler" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:10259 + insecureSkipVerify: true + auth: + type: bearer + - selector: "app=openshift-kube-scheduler,scheduler=true" + namespace: openshift-kube-scheduler + matchNode: true + endpoints: + - url: https://localhost:10259 + insecureSkipVerify: true + auth: + type: bearer + - selector: "app=openshift-kube-scheduler,scheduler=true" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:10259 + insecureSkipVerify: true + auth: + type: bearer + # -- staticEndpoint configuration. + # It is possible to specify static endpoint to scrape. If specified 'autodiscover' section is ignored. + # If set the static endpoint should be reachable, otherwise an error will be returned and the integration stops. 
+ # Notice that if deployed as a daemonSet and not as a Deployment setting static URLs could lead to duplicate data + # staticEndpoint: + # url: https://url:port + # insecureSkipVerify: true + # auth: {} + + # -- Controller manager monitoring configuration + # @default -- Common settings for most K8s distributions. + controllerManager: + # -- Enable controller manager monitoring. + enabled: true + autodiscover: + - selector: "tier=control-plane,component=kube-controller-manager" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:10257 + insecureSkipVerify: true + auth: + type: bearer + - selector: "k8s-app=kube-controller-manager" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:10257 + insecureSkipVerify: true + auth: + type: bearer + - selector: "app=kube-controller-manager,kube-controller-manager=true" + namespace: openshift-kube-controller-manager + matchNode: true + endpoints: + - url: https://localhost:10257 + insecureSkipVerify: true + auth: + type: bearer + - selector: "app=kube-controller-manager,kube-controller-manager=true" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:10257 + insecureSkipVerify: true + auth: + type: bearer + - selector: "app=controller-manager,controller-manager=true" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:10257 + insecureSkipVerify: true + auth: + type: bearer + # mtls: + # secretName: secret-name + # secretNamespace: secret-namespace + # -- staticEndpoint configuration. + # It is possible to specify static endpoint to scrape. If specified 'autodiscover' section is ignored. + # If set the static endpoint should be reachable, otherwise an error will be returned and the integration stops. 
+ # Notice that if deployed as a daemonSet and not as a Deployment setting static URLs could lead to duplicate data + # staticEndpoint: + # url: https://url:port + # insecureSkipVerify: true + # auth: {} + + # -- API Server monitoring configuration + # @default -- Common settings for most K8s distributions. + apiServer: + # -- Enable API Server monitoring + enabled: true + autodiscover: + - selector: "tier=control-plane,component=kube-apiserver" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:8443 + insecureSkipVerify: true + auth: + type: bearer + # Endpoint distributions target: Kind(v1.22.1) + - url: https://localhost:6443 + insecureSkipVerify: true + auth: + type: bearer + - url: http://localhost:8080 + - selector: "k8s-app=kube-apiserver" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:8443 + insecureSkipVerify: true + auth: + type: bearer + - url: http://localhost:8080 + - selector: "app=openshift-kube-apiserver,apiserver=true" + namespace: openshift-kube-apiserver + matchNode: true + endpoints: + - url: https://localhost:8443 + insecureSkipVerify: true + auth: + type: bearer + - url: https://localhost:6443 + insecureSkipVerify: true + auth: + type: bearer + - selector: "app=openshift-kube-apiserver,apiserver=true" + namespace: kube-system + matchNode: true + endpoints: + - url: https://localhost:8443 + insecureSkipVerify: true + auth: + type: bearer + # -- staticEndpoint configuration. + # It is possible to specify static endpoint to scrape. If specified 'autodiscover' section is ignored. + # If set the static endpoint should be reachable, otherwise an error will be returned and the integration stops. + # Notice that if deployed as a daemonSet and not as a Deployment setting static URLs could lead to duplicate data + # staticEndpoint: + # url: https://url:port + # insecureSkipVerify: true + # auth: {} + +# -- Update strategy for the deployed DaemonSets. 
+# @default -- See `values.yaml` +updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 1 + +# -- Update strategy for the deployed Deployments. +# @default -- `type: Recreate` +strategy: + type: Recreate + +# -- Adds extra attributes to the cluster and all the metrics emitted to the backend. Can be configured also with `global.customAttributes` +customAttributes: {} + +# -- Settings controlling ServiceAccount creation. +# @default -- See `values.yaml` +serviceAccount: + # -- (bool) Whether the chart should automatically create the ServiceAccount objects required to run. + # @default -- `true` + create: + annotations: {} + # If not set and create is true, a name is generated using the fullname template + name: "" + +# -- Additional labels for chart objects. Can be configured also with `global.labels` +labels: {} +# -- Annotations to be added to all pods created by the integration. +podAnnotations: {} +# -- Additional labels for chart pods. Can be configured also with `global.podLabels` +podLabels: {} + +# -- Run the integration with full access to the host filesystem and network. +# Running in this mode allows reporting fine-grained cpu, memory, process and network metrics for your nodes. +privileged: true +# -- Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` +priorityClassName: "" +# -- (bool) Sets pod's hostNetwork. Can be configured also with `global.hostNetwork` +# @default -- `false` +hostNetwork: +# -- Sets security context (at pod level). Can be configured also with `global.podSecurityContext` +podSecurityContext: {} +# -- Sets security context (at container level). Can be configured also with `global.containerSecurityContext` +containerSecurityContext: {} + +# -- Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` +dnsConfig: {} + +# Settings controlling RBAC objects creation. +rbac: + # rbac.create -- Whether the chart should automatically create the RBAC objects required to run. 
+ create: true + # rbac.pspEnabled -- Whether the chart should create Pod Security Policy objects. + pspEnabled: false + +# -- Sets pod/node affinities set almost globally. (See [Affinities and tolerations](README.md#affinities-and-tolerations)) +affinity: {} +# -- Sets pod's node selector almost globally. (See [Affinities and tolerations](README.md#affinities-and-tolerations)) +nodeSelector: {} +# -- Sets pod's tolerations to node taints almost globally. (See [Affinities and tolerations](README.md#affinities-and-tolerations)) +tolerations: [] + +# -- Config files for other New Relic integrations that should run in this cluster. +integrations: {} +# If you wish to monitor services running on Kubernetes you can provide integrations +# configuration under `integrations`. You just need to create a new entry where +# the key is the filename of the configuration file and the value is the content of +# the integration configuration. +# The data is the actual integration configuration as described in the spec here: +# https://docs.newrelic.com/docs/integrations/integrations-sdk/file-specifications/integration-configuration-file-specifications-agent-v180 +# For example, if you wanted to monitor a Redis instance that has a label "app=sampleapp" +# you could do so by adding following entry: +# nri-redis-sampleapp: +# discovery: +# command: +# # Run NRI Discovery for Kubernetes +# # https://github.com/newrelic/nri-discovery-kubernetes +# exec: /var/db/newrelic-infra/nri-discovery-kubernetes +# match: +# label.app: sampleapp +# integrations: +# - name: nri-redis +# env: +# # using the discovered IP as the hostname address +# HOSTNAME: ${discovery.ip} +# PORT: 6379 +# labels: +# env: test + +# -- (bool) Collect detailed metrics from processes running in the host. +# This defaults to true for accounts created before July 20, 2020. 
+# ref: https://docs.newrelic.com/docs/release-notes/infrastructure-release-notes/infrastructure-agent-release-notes/new-relic-infrastructure-agent-1120 +# @default -- `false` +enableProcessMetrics: + +# Prefix nodes display name with cluster to reduce chances of collisions +# prefixDisplayNameWithCluster: false + +# 'true' will use the node name as the name for the "host", +# note that it may cause data collision if the node name is the same in different clusters +# and prefixDisplayNameWithCluster is not set to true. +# 'false' will use the host name as the name for the "host". +# useNodeNameAsDisplayName: true + +selfMonitoring: + pixie: + # selfMonitoring.pixie.enabled -- Enables the Pixie Health Check nri-flex config. + # This Flex config performs periodic checks of the Pixie /healthz and /statusz endpoints exposed by the Pixie + # Cloud Connector. A status for each endpoint is sent to New Relic in a pixieHealthCheck event. + enabled: false + + +# -- Configures the integration to send all HTTP/HTTPS request through the proxy in that URL. The URL should have a standard format like `https://user:password@hostname:port`. Can be configured also with `global.proxy` +proxy: "" + +# -- (bool) Send the metrics to the staging backend. Requires a valid staging license key. Can be configured also with `global.nrStaging` +# @default -- `false` +nrStaging: +fedramp: + # -- (bool) Enables FedRAMP. Can be configured also with `global.fedramp.enabled` + # @default -- `false` + enabled: + +# -- (bool) Sets the debug logs to this integration or all integrations if it is set globally. 
Can be configured also with `global.verboseLog` +# @default -- `false` +verboseLog: diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/.helmignore new file mode 100644 index 0000000000..1ed4e226ea --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/.helmignore @@ -0,0 +1,25 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ + +templates/apiservice/job-patch/README.md diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/Chart.lock b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/Chart.lock new file mode 100644 index 0000000000..23b2bd33c3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +digest: sha256:2e1da613fd8a52706bde45af077779c5d69e9e1641bdf5c982eaf6d1ac67a443 +generated: "2024-08-30T23:31:11.079152974Z" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/Chart.yaml new file mode 100644 index 0000000000..7fb6703e2b --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/Chart.yaml @@ -0,0 +1,25 @@ +apiVersion: v2 +appVersion: 0.13.4 +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +description: A Helm chart to deploy the New Relic Kubernetes Metrics Adapter. 
+home: https://hub.docker.com/r/newrelic/newrelic-k8s-metrics-adapter +icon: https://newrelic.com/assets/newrelic/source/NewRelic-logo-square.svg +keywords: +- infrastructure +- newrelic +- monitoring +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +name: newrelic-k8s-metrics-adapter +sources: +- https://github.com/newrelic/newrelic-k8s-metrics-adapter +- https://github.com/newrelic/newrelic-k8s-metrics-adapter/tree/main/charts/newrelic-k8s-metrics-adapter +version: 1.11.4 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/README.md new file mode 100644 index 0000000000..9c5428201c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/README.md @@ -0,0 +1,139 @@ +[![New Relic Experimental header](https://github.com/newrelic/opensource-website/raw/master/src/images/categories/Experimental.png)](https://opensource.newrelic.com/oss-category/#new-relic-experimental) + +# newrelic-k8s-metrics-adapter + +A Helm chart to deploy the New Relic Kubernetes Metrics Adapter. + +**Homepage:** + +## Source Code + +* +* + +## Requirements + +| Repository | Name | Version | +|------------|------|---------| +| https://helm-charts.newrelic.com | common-library | 1.3.0 | + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| affinity | object | `{}` | Node affinity to use for scheduling. | +| apiServicePatchJob.image | object | See `values.yaml`. | Registry, repository, tag, and pull policy for the job container image. | +| apiServicePatchJob.volumeMounts | list | `[]` | Additional Volume mounts for Cert Job, you might want to mount tmp if Pod Security Policies. | +| apiServicePatchJob.volumes | list | `[]` | Additional Volumes for Cert Job. 
| +| certManager.enabled | bool | `false` | Use cert manager for APIService certs, rather than the built-in patch job. | +| config.accountID | string | `nil` | New Relic [Account ID](https://docs.newrelic.com/docs/accounts/accounts-billing/account-structure/account-id/) where the configured metrics are sourced from. (**Required**) | +| config.cacheTTLSeconds | int | `30` | Period of time, in seconds, during which a cached value of a metric is considered valid. | +| config.externalMetrics | string | See `values.yaml` | Contains all the external metrics definitions of the adapter. Each key of the externalMetrics entry represents the metric name and contains the parameters that define it. | +| config.nrdbClientTimeoutSeconds | int | 30 | Defines the NRDB client timeout. The maximum allowed value is 120. | +| config.region | string | Automatically detected from `licenseKey`. | New Relic account region. If not set, it will be automatically derived from the License Key. | +| containerSecurityContext | string | `nil` | Configure containerSecurityContext | +| extraEnv | list | `[]` | Array to add extra environment variables | +| extraEnvFrom | list | `[]` | Array to add extra envFrom | +| extraVolumeMounts | list | `[]` | Add extra volume mounts | +| extraVolumes | list | `[]` | Array to add extra volumes | +| fullnameOverride | string | `""` | To fully override common.naming.fullname | +| image | object | See `values.yaml`. | Registry, repository, tag, and pull policy for the container image. | +| image.pullSecrets | list | `[]` | The image pull secrets. | +| nodeSelector | object | `{}` | Node label to use for scheduling. | +| personalAPIKey | string | `nil` | New Relic [Personal API Key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#user-api-key) (stored in a secret). Used to connect to NerdGraph in order to fetch the configured metrics. (**Required**) | +| podAnnotations | string | `nil` | Additional annotations to apply to the pod(s).
| +| podSecurityContext | string | `nil` | Configure podSecurityContext | +| proxy | string | `nil` | Configure proxy for the metrics-adapter. | +| rbac.pspEnabled | bool | `false` | Whether the chart should create Pod Security Policy objects. | +| replicas | int | `1` | Number of replicas in the deployment. | +| resources | object | See `values.yaml` | Resources you wish to assign to the pod. | +| serviceAccount.create | string | `true` | Specifies whether a ServiceAccount should be created for the job and the deployment. false avoids creation, true or empty will create the ServiceAccount | +| serviceAccount.name | string | Automatically generated. | If `serviceAccount.create` this will be the name of the ServiceAccount to use. If not set and create is true, a name is generated using the fullname template. If create is false, a serviceAccount with the given name must exist | +| tolerations | list | `[]` | List of node taints to tolerate (requires Kubernetes >= 1.6) | +| verboseLog | bool | `false` | Enable metrics adapter verbose logs. | + +## Example + +Make sure you have [added the New Relic chart repository.](../../README.md#install) + +Because of metrics configuration, we recommend to use an external values file to deploy the chart. An example with the required parameters looks like: + +```yaml +cluster: ClusterName +personalAPIKey: +config: + accountID: + externalMetrics: + nginx_average_requests: + query: "FROM Metric SELECT average(nginx.server.net.requestsPerSecond) SINCE 2 MINUTES AGO" +``` + +Then, to install this chart, run the following command: + +```sh +helm upgrade --install [release-name] newrelic-k8s-metrics-adapter/newrelic-k8s-metrics-adapter --values [values file path] +``` + +Once deployed the metric `nginx_average_requests` will be available to use by any HPA. 
This is an example of an HPA yaml using this metric: + +```yaml +kind: HorizontalPodAutoscaler +apiVersion: autoscaling/v2beta2 +metadata: + name: nginx-scaler +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: nginx + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: External + external: + metric: + name: nginx_average_requests + selector: + matchLabels: + k8s.namespaceName: nginx + target: + type: Value + value: 10000 +``` + +The NRQL query that will be run to get the `nginx_average_requests` value is: + +```sql +FROM Metric SELECT average(nginx.server.net.requestsPerSecond) WHERE clusterName='ClusterName' AND `k8s.namespaceName`='nginx' SINCE 2 MINUTES AGO +``` + +## External Metrics + +An example of multiple external metrics defined: + +```yaml +externalMetrics: + nginx_average_requests: + query: "FROM Metric SELECT average(nginx.server.net.requestsPerSecond) SINCE 2 MINUTES AGO" + container_average_cores_utilization: + query: "FROM Metric SELECT average(`k8s.container.cpuCoresUtilization`) SINCE 2 MINUTES AGO" +``` + +## Resources + +The default set of resources assigned to the newrelic-k8s-metrics-adapter pods is shown below: + +```yaml +resources: + limits: + memory: 80M + requests: + cpu: 100m + memory: 30M +``` + +## Maintainers + +* [juanjjaramillo](https://github.com/juanjjaramillo) +* [csongnr](https://github.com/csongnr) +* [dbudziwojskiNR](https://github.com/dbudziwojskiNR) diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/README.md.gotmpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/README.md.gotmpl new file mode 100644 index 0000000000..1de8c95531 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/README.md.gotmpl @@ -0,0 +1,107 @@ +[![New Relic Experimental
header](https://github.com/newrelic/opensource-website/raw/master/src/images/categories/Experimental.png)](https://opensource.newrelic.com/oss-category/#new-relic-experimental) + +{{ template "chart.header" . }} +{{ template "chart.deprecationWarning" . }} + +{{ template "chart.description" . }} + +{{ template "chart.homepageLine" . }} + +{{ template "chart.sourcesSection" . }} + +{{ template "chart.requirementsSection" . }} + +{{ template "chart.valuesSection" . }} + +## Example + +Make sure you have [added the New Relic chart repository.](../../README.md#install) + +Because of metrics configuration, we recommend to use an external values file to deploy the chart. An example with the required parameters looks like: + +```yaml +cluster: ClusterName +personalAPIKey: +config: + accountID: + externalMetrics: + nginx_average_requests: + query: "FROM Metric SELECT average(nginx.server.net.requestsPerSecond) SINCE 2 MINUTES AGO" +``` + +Then, to install this chart, run the following command: + +```sh +helm upgrade --install [release-name] newrelic-k8s-metrics-adapter/newrelic-k8s-metrics-adapter --values [values file path] +``` + +Once deployed the metric `nginx_average_requests` will be available to use by any HPA. 
This is an example of an HPA yaml using this metric: + +```yaml +kind: HorizontalPodAutoscaler +apiVersion: autoscaling/v2beta2 +metadata: + name: nginx-scaler +spec: + scaleTargetRef: + apiVersion: apps/v1 + kind: Deployment + name: nginx + minReplicas: 1 + maxReplicas: 10 + metrics: + - type: External + external: + metric: + name: nginx_average_requests + selector: + matchLabels: + k8s.namespaceName: nginx + target: + type: Value + value: 10000 +``` + +The NRQL query that will be run to get the `nginx_average_requests` value is: + +```sql +FROM Metric SELECT average(nginx.server.net.requestsPerSecond) WHERE clusterName='ClusterName' AND `k8s.namespaceName`='nginx' SINCE 2 MINUTES AGO +``` + +## External Metrics + +An example of multiple external metrics defined: + +```yaml +externalMetrics: + nginx_average_requests: + query: "FROM Metric SELECT average(nginx.server.net.requestsPerSecond) SINCE 2 MINUTES AGO" + container_average_cores_utilization: + query: "FROM Metric SELECT average(`k8s.container.cpuCoresUtilization`) SINCE 2 MINUTES AGO" +``` + +## Resources + +The default set of resources assigned to the newrelic-k8s-metrics-adapter pods is shown below: + +```yaml +resources: + limits: + memory: 80M + requests: + cpu: 100m + memory: 30M +``` + +{{ if .Maintainers }} +## Maintainers +{{ range .Maintainers }} +{{- if .Name }} +{{- if .Url }} +* [{{ .Name }}]({{ .Url }}) +{{- else }} +* {{ .Name }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/.helmignore new file mode 100644 index 0000000000..0e8a0eb36f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/Chart.yaml new file mode 100644 index 0000000000..f2ee5497ee --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/Chart.yaml @@ -0,0 +1,17 @@ +apiVersion: v2 +description: Provides helpers to provide consistency on all the charts +keywords: +- newrelic +- chart-library +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +- name: kang-makes + url: https://github.com/kang-makes +name: common-library +type: library +version: 1.3.0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/DEVELOPERS.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/DEVELOPERS.md new file mode 100644 index 0000000000..7208c673ec --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/DEVELOPERS.md @@ -0,0 +1,747 @@ +# Functions/templates documented for chart writers +Here is some rough documentation separated by the file that contains the function, the function +name and how to use it. We are not covering functions that start with `_` (e.g. +`newrelic.common.license._licenseKey`) because they are used internally by this library for +other helpers. 
Helm does not have the concept of "public" or "private" functions/templates so +this is a convention of ours. + +## _naming.tpl +These functions are used to name objects. + +### `newrelic.common.naming.name` +This is the same as the idiomatic `CHART-NAME.name` that is created when you use `helm create`. + +It honors `.Values.nameOverride`. + +Usage: +```mustache +{{ include "newrelic.common.naming.name" . }} +``` + +### `newrelic.common.naming.fullname` +This is the same as the idiomatic `CHART-NAME.fullname` that is created when you use `helm create` + +It honors `.Values.fullnameOverride`. + +Usage: +```mustache +{{ include "newrelic.common.naming.fullname" . }} +``` + +### `newrelic.common.naming.chart` +This is the same as the idiomatic `CHART-NAME.chart` that is created when you use `helm create`. + +It is mostly useless for chart writers. It is used internally for templating the labels but there +is no reason to keep it "private". + +Usage: +```mustache +{{ include "newrelic.common.naming.chart" . }} +``` + +### `newrelic.common.naming.truncateToDNS` +This is a useful template that could be used to trim a string to 63 chars and does not end with a dash (`-`). +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). + +Usage: +```mustache +{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-enought-long-to-brak-something" +{{- $truncatedName := include "newrelic.common.naming.truncateToDNS" $nameToTruncate }} +{{- $truncatedName }} +{{- /* This should print: a-really-really-really-really-REALLY-long-string-that-should-be */ -}} +``` + +### `newrelic.common.naming.truncateToDNSWithSuffix` +This template function is the same as the above but instead of receiving a string you should give a `dict` +with a `name` and a `suffix`. 
This function will join them with a dash (`-`) and trim the `name` so the +result of `name-suffix` is no more than 63 chars + +Usage: +```mustache +{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-enought-long-to-brak-something" +{{- $suffix := "A-NOT-SO-LONG-SUFFIX" }} +{{- $truncatedName := include "truncateToDNSWithSuffix" (dict "name" $nameToTruncate "suffix" $suffix) }} +{{- $truncatedName }} +{{- /* This should print: a-really-really-really-really-REALLY-long-A-NOT-SO-LONG-SUFFIX */ -}} +``` + + + +## _labels.tpl +### `newrelic.common.labels`, `newrelic.common.labels.selectorLabels` and `newrelic.common.labels.podLabels` +These are functions that are used to label objects. They are configured by this `values.yaml` +```yaml +global: + podLabels: {} # included in all the pods of all the charts that implement this library + labels: {} # included in all the objects of all the charts that implement this library +podLabels: {} # included in all the pods of this chart +labels: {} # included in all the objects of this chart +``` + +label maps are merged from global to local values. + +And chart writer should use them like this: +```mustache +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + {{- include "newrelic.common.labels.podLabels" . | nindent 8 }} +``` + +`newrelic.common.labels.podLabels` includes `newrelic.common.labels.selectorLabels` automatically. + + + +## _priority-class-name.tpl +### `newrelic.common.priorityClassName` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + priorityClassName: "" +priorityClassName: "" +``` + +Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this +library to work properly. 
If in your values a non-falsy `priorityClassName` is found, the global +one will always be ignored. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} +``` + + + +## _hostnetwork.tpl +### `newrelic.common.hostNetwork` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + hostNetwork: # Note that this is empty (nil) +hostNetwork: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your +values a `hostNetwork` is defined, the global one will always be ignored. + +This function returns "true" or "" (empty string) so it can be used for evaluating conditionals. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.hostNetwork" . }} + hostNetwork: {{ . }} + {{- end }} +``` + +### `newrelic.common.hostNetwork.value` +This function is an abstraction of the function above, but it returns "true" or "false" directly. + +Be careful when using this with an `if`, as Helm evaluates the string "false" as `true`. + +Usage (example in a pod spec): +```mustache +spec: + hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }} +``` + + + +## _dnsconfig.tpl +### `newrelic.common.dnsConfig` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + dnsConfig: {} +dnsConfig: {} +``` + +Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this +library to work properly. If in your values a non-falsy `dnsConfig` is found, the global +one will always be ignored. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 4 }} + {{- end }} +``` + + + +## _images.tpl +These functions help us to deal with how images are templated.
This allows setting `registries` +where to fetch images globally while being flexible enough to fit in different maps of images +and deployments with one or more images. This is the example of a complex `values.yaml` that +we are going to use during the documentation of these functions: + +```yaml +global: + images: + registry: nexus-3-instance.internal.clients-domain.tld +jobImage: + registry: # defaults to "example.tld" when empty in these examples + repository: ingress-nginx/kube-webhook-certgen + tag: v1.1.1 + pullPolicy: IfNotPresent + pullSecrets: [] +images: + integration: + registry: + repository: newrelic/nri-kube-events + tag: 1.8.0 + pullPolicy: IfNotPresent + agent: + registry: + repository: newrelic/k8s-events-forwarder + tag: 1.22.0 + pullPolicy: IfNotPresent + pullSecrets: [] +``` + +### `newrelic.common.images.image` +This will return a string with the image ready to be downloaded that includes the registry, the image and the tag. +`defaultRegistry` is used to keep `registry` field empty in `values.yaml` so you can override the image using +`global.images.registry`, your local `jobImage.registry` and be able to fallback to a registry that is not `docker.io` +(Or the default repository that the client could have set in the CRI). + +Usage: +```mustache +{{- /* For the integration */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.agent "context" .) }} +{{- /* For jobImage */}} +{{ include "newrelic.common.images.image" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }} +``` + +### `newrelic.common.images.registry` +It returns the registry from the global or local values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. 
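As a concrete illustration of the registry resolution described above (the registry hostnames below are made up for the example, and this sketch assumes the usual local-over-global precedence these helpers follow):

```yaml
# Hypothetical values; registry hostnames are illustrative only.
global:
  images:
    registry: nexus-3-instance.internal.clients-domain.tld
images:
  integration:
    registry: ""   # empty, so the global registry is used
    repository: newrelic/nri-kube-events
    tag: 1.8.0
  agent:
    registry: my-local-mirror.example.tld   # set locally, so it takes precedence
    repository: newrelic/k8s-events-forwarder
    tag: 1.22.0
```

With these values, the integration image resolves under the global Nexus registry, while the agent image is pulled from the local mirror.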
+ +Usage: +```mustache +{{- /* For the integration */}} +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.agent "context" .) }} +{{- /* For jobImage */}} +{{ include "newrelic.common.images.registry" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }} +``` + +### `newrelic.common.images.repository` +It returns the image repository from the values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. + +Usage: +```mustache +{{- /* For jobImage */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.jobImage "context" .) }} +{{- /* For the integration */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.agent "context" .) }} +``` + +### `newrelic.common.images.tag` +It returns the image's tag from the values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. + +Usage: +```mustache +{{- /* For jobImage */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.jobImage "context" .) }} +{{- /* For the integration */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.agent "context" .) }} +``` + +### `newrelic.common.images.renderPullSecrets` +It returns a merged map that contains the pull secrets from the global configuration and the local one.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.jobImage.pullSecrets "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }}
+```
+
+
+
+## _serviceaccount.tpl
+These functions are used to evaluate whether the service account should be created, with which name, and which annotations to add to it.
+
+The functions that the common library has implemented for service accounts are:
+* `newrelic.common.serviceAccount.create`
+* `newrelic.common.serviceAccount.name`
+* `newrelic.common.serviceAccount.annotations`
+
+Usage:
+```mustache
+{{- if include "newrelic.common.serviceAccount.create" . -}}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  {{- with (include "newrelic.common.serviceAccount.annotations" .) }}
+  annotations:
+    {{- . | nindent 4 }}
+  {{- end }}
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+  name: {{ include "newrelic.common.serviceAccount.name" . }}
+  namespace: {{ .Release.Namespace }}
+{{- end }}
+```
+
+
+
+## _affinity.tpl, _nodeselector.tpl and _tolerations.tpl
+These three files are almost the same and they follow the idiomatic way of `helm create`.
+
+Each function also looks for a global value, like the other helpers:
+```yaml
+global:
+  affinity: {}
+  nodeSelector: {}
+  tolerations: []
+affinity: {}
+nodeSelector: {}
+tolerations: []
+```
+
+The values here are replaced instead of merged. If a value at root level is found, the global one is ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.nodeSelector" . }}
+  nodeSelector:
+    {{- . | nindent 4 }}
+  {{- end }}
+  {{- with include "newrelic.common.affinity" . }}
+  affinity:
+    {{- .
| nindent 4 }}
+  {{- end }}
+  {{- with include "newrelic.common.tolerations" . }}
+  tolerations:
+    {{- . | nindent 4 }}
+  {{- end }}
+```
+
+
+
+## _agent-config.tpl
+### `newrelic.common.agentConfig.defaults`
+This returns a YAML config that the agent can use directly, including other options from the values file like verbose mode,
+custom attributes, FedRAMP, and such.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+  name: {{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "agent-config") }}
+  namespace: {{ .Release.Namespace }}
+data:
+  newrelic-infra.yml: |-
+    # This is the configuration file for the infrastructure agent. See:
+    # https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/
+    {{- include "newrelic.common.agentConfig.defaults" . | nindent 4 }}
+```
+
+
+
+## _cluster.tpl
+### `newrelic.common.cluster`
+Returns the cluster name.
+
+Usage:
+```mustache
+{{ include "newrelic.common.cluster" . }}
+```
+
+
+
+## _custom-attributes.tpl
+### `newrelic.common.customAttributes`
+Returns custom attributes in YAML format.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: example
+data:
+  custom-attributes.yaml: |
+    {{- include "newrelic.common.customAttributes" . | nindent 4 }}
+  custom-attributes.json: |
+    {{- include "newrelic.common.customAttributes" . | fromYaml | toJson | nindent 4 }}
+```
+
+
+
+## _fedramp.tpl
+### `newrelic.common.fedramp.enabled`
+Returns true if FedRAMP is enabled or an empty string if not. It can be safely used in conditionals, as an empty string is falsy in Helm.
+
+Usage:
+```mustache
+{{ include "newrelic.common.fedramp.enabled" . }}
+```
+
+### `newrelic.common.fedramp.enabled.value`
+Returns true if FedRAMP is enabled or false if not.
This is to have the value of FedRAMP ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.fedramp.enabled.value" . }}
+```
+
+
+
+## _license.tpl
+### `newrelic.common.license.secretName` and `newrelic.common.license.secretKeyName`
+Returns the name of the secret and the key inside it from which to read the license key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the license key.
+
+To create the secret use `newrelic.common.license.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example
+spec:
+  containers:
+    - name: agent
+      env:
+        - name: "NRIA_LICENSE_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.license.secretName" . }}
+              key: {{ include "newrelic.common.license.secretKeyName" . }}
+```
+
+
+
+## _license_secret.tpl
+### `newrelic.common.license.secret`
+This function templates the secret that is used by agents and integrations with the license key provided by the user. It will
+template nothing (empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails in case the user has not provided any license key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.license.secret" . -}}
+```
+
+
+
+## _insights.tpl
+### `newrelic.common.insightsKey.secretName` and `newrelic.common.insightsKey.secretKeyName`
+Returns the name of the secret and the key inside it from which to read the insights key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the insights key.
+
+To create the secret use `newrelic.common.insightsKey.secret`.
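+Both sourcing modes can be driven from values. A minimal sketch (the secret name below is hypothetical; the
+`customInsightsKeySecretName`/`customInsightsKeySecretKey` keys are the ones these helpers read):
+
+```yaml
+# Option A: provide the key directly and let the chart create the secret.
+insightsKey: "NRII-xxxxxxxxxxxxxxxxxxxxxxxx"
+
+# Option B: point to a pre-existing secret instead; no secret is created.
+customInsightsKeySecretName: my-existing-secret
+customInsightsKeySecretKey: insightsKey
+```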
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: statsd
+spec:
+  containers:
+    - name: statsd
+      env:
+        - name: "INSIGHTS_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.insightsKey.secretName" . }}
+              key: {{ include "newrelic.common.insightsKey.secretKeyName" . }}
+```
+
+
+
+## _insights_secret.tpl
+### `newrelic.common.insightsKey.secret`
+This function templates the secret that is used by agents and integrations with the insights key provided by the user. It will
+template nothing (empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails in case the user has not provided any insights key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.insightsKey.secret" . -}}
+```
+
+
+
+## _userkey.tpl
+### `newrelic.common.userKey.secretName` and `newrelic.common.userKey.secretKeyName`
+Returns the name of the secret and the key inside it from which to read a user key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the user key.
+
+To create the secret use `newrelic.common.userKey.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: statsd
+spec:
+  containers:
+    - name: statsd
+      env:
+        - name: "API_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.userKey.secretName" . }}
+              key: {{ include "newrelic.common.userKey.secretKeyName" . }}
+```
+
+
+
+## _userkey_secret.tpl
+### `newrelic.common.userKey.secret`
+This function templates the secret that is used by agents and integrations with a user key provided by the user. It will
+template nothing (empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails in case the user has not provided any user key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.userKey.secret" . -}}
+```
+
+
+
+## _region.tpl
+### `newrelic.common.region.validate`
+Given a string, it returns a normalized name for the region if it is valid.
+
+This function does not need the context of the chart, only the value to be validated. The region returned
+honors the region [definition of the newrelic-client-go implementation](https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21)
+so (as of 2024/09/14) it returns the region as "US", "EU", "Staging", or "Local".
+
+In case the region provided does not match any of these four, the helper calls `fail` and aborts the templating.
+
+Usage:
+```mustache
+{{ include "newrelic.common.region.validate" "us" }}
+```
+
+### `newrelic.common.region`
+It reads global and local variables for `region`:
+```yaml
+global:
+  region: # Note that this can be empty (nil) or "" (empty string)
+region: # Note that this can be empty (nil) or "" (empty string)
+```
+
+Be careful: chart writers should NOT set any default value here for this library to work properly. If in your
+values a `region` is defined, the global one is always ignored.
+
+This function is guarded so that users are forced to provide a license key in their
+`values.yaml` or to specify a global or local `region` value. To understand how the `region` value
+works, read the documentation of `newrelic.common.region.validate`.
+
+The function derives the region (US, EU, or Staging) from the license key and the
+`nrStaging` toggle. Whichever region is computed from the license/toggle can be overridden by
+the `region` value.
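+For example, this hypothetical `values.yaml` would make the helper return "EU" no matter which region the
+license key or the `nrStaging` toggle would otherwise select:
+
+```yaml
+global:
+  nrStaging: false
+region: "EU" # overrides whatever is computed from the license key / nrStaging
+```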
+
+Usage:
+```mustache
+{{ include "newrelic.common.region" . }}
+```
+
+
+
+## _low-data-mode.tpl
+### `newrelic.common.lowDataMode`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  lowDataMode: # Note that this is empty (nil)
+lowDataMode: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT set any default value here for this library to work properly. If in your
+values a `lowDataMode` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.lowDataMode" . }}
+```
+
+
+
+## _privileged.tpl
+### `newrelic.common.privileged`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  privileged: # Note that this is empty (nil)
+privileged: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT set any default value here for this library to work properly. If in your
+values a `privileged` is defined, the global one is always ignored.
+
+Chart writers can override this by putting a `true` directly in the `values.yaml` to override the
+default of the common library.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged" . }}
+```
+
+### `newrelic.common.privileged.value`
+Returns true if privileged mode is enabled or false if not. This is to have the value of privileged ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged.value" . }}
+```
+
+
+
+## _proxy.tpl
+### `newrelic.common.proxy`
+Returns the proxy URL configured by the user.
+
+Usage:
+```mustache
+{{ include "newrelic.common.proxy" . }}
+```
+
+
+
+## _security-context.tpl
+Use these functions to share the security context among all charts.
Useful in clusters that enforce not
+running as the root user (like OpenShift) or that have admission webhooks.
+
+The functions are:
+* `newrelic.common.securityContext.container`
+* `newrelic.common.securityContext.pod`
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example
+spec:
+  {{- with include "newrelic.common.securityContext.pod" . }}
+  securityContext:
+    {{- . | nindent 4 }}
+  {{- end }}
+
+  containers:
+    - name: example
+      {{- with include "newrelic.common.securityContext.container" . }}
+      securityContext:
+        {{- . | nindent 8 }}
+      {{- end }}
+```
+
+
+
+## _staging.tpl
+### `newrelic.common.nrStaging`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  nrStaging: # Note that this is empty (nil)
+nrStaging: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT set any default value here for this library to work properly. If in your
+values a `nrStaging` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging" . }}
+```
+
+### `newrelic.common.nrStaging.value`
+Returns true if staging is enabled or false if not. This is to have the staging value ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging.value" . }}
+```
+
+
+
+## _verbose-log.tpl
+### `newrelic.common.verboseLog`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  verboseLog: # Note that this is empty (nil)
+verboseLog: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT set any default value here for this library to work properly. If in your
+values a `verboseLog` is defined, the global one is always ignored.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog" .
}}
+```
+
+### `newrelic.common.verboseLog.valueAsBoolean`
+Returns true if verbose is enabled or false if not. This is to have the verbose value ready to be templated as a boolean.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsBoolean" . }}
+```
+
+### `newrelic.common.verboseLog.valueAsInt`
+Returns 1 if verbose is enabled or 0 if not. This is to have the verbose value ready to be templated as an integer.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsInt" . }}
+```
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/README.md
new file mode 100644
index 0000000000..10f08ca677
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/README.md
@@ -0,0 +1,106 @@
+# Helm Common library
+
+The common library is a way to unify the UX through all the Helm charts that implement it.
+
+The tooling suite that New Relic offers is huge and growing, and this library allows setting things globally
+and locally for a single chart.
+
+## Documentation for chart writers
+
+If you are writing a chart that is going to use this library you can check the [developers guide](/library/common-library/DEVELOPERS.md) to see all
+the functions/templates that we have implemented, what they do and how to use them.
+
+## Values managed globally
+
+We want to have a seamless experience through all the charts, so we created this library that tries to standardize the behaviour
+of all the charts. Sadly, because of the complexity of all these integrations, not all the charts behave exactly as expected.
+
+An example is `newrelic-infrastructure`, which ignores `hostNetwork` in the control plane scraper because most users have the
+control plane listening on `localhost` on the node.
+
+For each chart that has a special behavior (or further information about the behavior) there is a "chart particularities" section
+in its README.md that explains the expected behavior.
+
+At the time of writing this, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this
+library and honor global options as described in this document.
+
+Here is a list of global options:
+
+| Global keys | Local keys | Default | Merged[1](#values-managed-globally-1) | Description |
+|-------------|------------|---------|--------------------------------------------------|-------------|
+| global.cluster | cluster | `""` | | Name of the Kubernetes cluster monitored |
+| global.licenseKey | licenseKey | `""` | | Sets the license key to use |
+| global.customSecretName | customSecretName | `""` | | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there |
+| global.customSecretLicenseKey | customSecretLicenseKey | `""` | | In case you don't want to have the license key in your values, this allows you to point to the key inside the secret where the license key is located |
+| global.podLabels | podLabels | `{}` | yes | Additional labels for chart pods |
+| global.labels | labels | `{}` | yes | Additional labels for chart objects |
+| global.priorityClassName | priorityClassName | `""` | | Sets pod's priorityClassName |
+| global.hostNetwork | hostNetwork | `false` | | Sets pod's hostNetwork |
+| global.dnsConfig | dnsConfig | `{}` | | Sets pod's dnsConfig |
+| global.images.registry | See [Further information](#values-managed-globally-2) | `""` | | Changes the registry from which to get the images.
Useful when there is an internal image cache/proxy |
+| global.images.pullSecrets | See [Further information](#values-managed-globally-2) | `[]` | yes | Sets secrets to be able to fetch images |
+| global.podSecurityContext | podSecurityContext | `{}` | | Sets security context (at pod level) |
+| global.containerSecurityContext | containerSecurityContext | `{}` | | Sets security context (at container level) |
+| global.affinity | affinity | `{}` | | Sets pod/node affinities |
+| global.nodeSelector | nodeSelector | `{}` | | Sets pod's node selector |
+| global.tolerations | tolerations | `[]` | | Sets pod's tolerations to node taints |
+| global.serviceAccount.create | serviceAccount.create | `true` | | Configures if the service account should be created or not |
+| global.serviceAccount.name | serviceAccount.name | name of the release | | Changes the name of the service account. This is honored if you disable the creation of the service account on this chart so you can use your own. |
+| global.serviceAccount.annotations | serviceAccount.annotations | `{}` | yes | Adds these annotations to the service account we create |
+| global.customAttributes | customAttributes | `{}` | | Adds extra attributes to the cluster and all the metrics emitted to the backend |
+| global.fedramp | fedramp | `false` | | Enables FedRAMP |
+| global.lowDataMode | lowDataMode | `false` | | Reduces the number of metrics sent in order to reduce costs |
+| global.privileged | privileged | Depends on the chart | | In each integration it has different behavior. See [Further information](#values-managed-globally-3) |
+| global.proxy | proxy | `""` | | Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port` |
+| global.nrStaging | nrStaging | `false` | | Sends the metrics to the staging backend.
Requires a valid staging license key |
+| global.verboseLog | verboseLog | `false` | | Enables debug/trace logs for this integration, or for all integrations if set globally |
+
+### Further information
+
+#### 1. Merged
+
+Merged means that the values from global are not replaced by the local ones. Consider this example:
+```yaml
+global:
+  labels:
+    global: global
+  hostNetwork: true
+  nodeSelector:
+    global: global
+
+labels:
+  local: local
+nodeSelector:
+  local: local
+hostNetwork: false
+```
+
+These values will template `hostNetwork` to `false`, a map of labels `{ "global": "global", "local": "local" }` and a `nodeSelector` with
+`{ "local": "local" }`.
+
+As Helm by default merges all the maps, it could be confusing that we have two behaviors (merging `labels` and replacing `nodeSelector`)
+when taking the `values` from global to local. This is the rationale behind it:
+* `hostNetwork` is templated to `false` because it overrides the value defined globally.
+* `labels` are merged because the user may want to label all the New Relic pods at once and label other solution pods differently for
+  clarity's sake.
+* `nodeSelector` is not merged like `labels` because merging could make it harder to overwrite/delete a selector that comes from global, given
+  the logic that Helm follows when merging maps.
+
+
+#### 2. Fine grain registries
+
+Some charts have only one image while others can have two or more. The local path for the registry can change depending
+on the chart itself.
+
+As this is mostly unique per Helm chart, you should take a look at the chart's values table (or directly at the `values.yaml` file) to see all the
+images that you can change.
+
+This should only be needed if you have an advanced setup that requires enough granularity to set a proxy/cache registry per integration.
+
+
+
+#### 3. Privileged mode
+
+By default, from the common library, the privileged mode is set to false.
But most of the Helm charts require this to be true to fetch more
+metrics, so you may see it set to true in some charts. The consequences of the privileged mode differ from one chart to another, so each chart that
+honors the privileged mode toggle should have a section in its README explaining the behavior with it enabled or disabled.
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_affinity.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_affinity.tpl
new file mode 100644
index 0000000000..1b2636754e
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_affinity.tpl
@@ -0,0 +1,10 @@
+{{- /* Defines the Pod affinity */ -}}
+{{- define "newrelic.common.affinity" -}}
+  {{- if .Values.affinity -}}
+    {{- toYaml .Values.affinity -}}
+  {{- else if .Values.global -}}
+    {{- if .Values.global.affinity -}}
+      {{- toYaml .Values.global.affinity -}}
+    {{- end -}}
+  {{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_agent-config.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_agent-config.tpl
new file mode 100644
index 0000000000..9c32861a02
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_agent-config.tpl
@@ -0,0 +1,26 @@
+{{/*
+This helper should return the defaults that all agents should have
+*/}}
+{{- define "newrelic.common.agentConfig.defaults" -}}
+{{- if include "newrelic.common.verboseLog" . }}
+log:
+  level: trace
+{{- end }}
+
+{{- if (include "newrelic.common.nrStaging" . ) }}
+staging: true
+{{- end }}
+
+{{- with include "newrelic.common.proxy" . }}
+proxy: {{ . | quote }}
+{{- end }}
+
+{{- with include "newrelic.common.fedramp.enabled" .
}} +fedramp: {{ . }} +{{- end }} + +{{- with fromYaml ( include "newrelic.common.customAttributes" . ) }} +custom_attributes: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_cluster.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_cluster.tpl new file mode 100644 index 0000000000..0197dd35a3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_cluster.tpl @@ -0,0 +1,15 @@ +{{/* +Return the cluster +*/}} +{{- define "newrelic.common.cluster" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.cluster -}} + {{- .Values.cluster -}} +{{- else if $global.cluster -}} + {{- $global.cluster -}} +{{- else -}} + {{ fail "There is not cluster name definition set neither in `.global.cluster' nor `.cluster' in your values.yaml. Cluster name is required." }} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_custom-attributes.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_custom-attributes.tpl new file mode 100644 index 0000000000..92020719c3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_custom-attributes.tpl @@ -0,0 +1,17 @@ +{{/* +This will render custom attributes as a YAML ready to be templated or be used with `fromYaml`. 
+*/}} +{{- define "newrelic.common.customAttributes" -}} +{{- $customAttributes := dict -}} + +{{- $global := index .Values "global" | default dict -}} +{{- if $global.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes $global.customAttributes -}} +{{- end -}} + +{{- if .Values.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes .Values.customAttributes -}} +{{- end -}} + +{{- toYaml $customAttributes -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_dnsconfig.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_dnsconfig.tpl new file mode 100644 index 0000000000..d4e40aa8af --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_dnsconfig.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod dnsConfig */ -}} +{{- define "newrelic.common.dnsConfig" -}} + {{- if .Values.dnsConfig -}} + {{- toYaml .Values.dnsConfig -}} + {{- else if .Values.global -}} + {{- if .Values.global.dnsConfig -}} + {{- toYaml .Values.global.dnsConfig -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_fedramp.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_fedramp.tpl new file mode 100644 index 0000000000..9df8d6b5e9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_fedramp.tpl @@ -0,0 +1,25 @@ +{{- /* Defines the fedRAMP flag */ -}} +{{- define "newrelic.common.fedramp.enabled" -}} + {{- if .Values.fedramp -}} + {{- if .Values.fedramp.enabled -}} + {{- .Values.fedramp.enabled -}} + {{- end -}} + {{- else if .Values.global -}} + {{- if .Values.global.fedramp -}} + {{- if 
.Values.global.fedramp.enabled -}} + {{- .Values.global.fedramp.enabled -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + + + +{{- /* Return FedRAMP value directly ready to be templated */ -}} +{{- define "newrelic.common.fedramp.enabled.value" -}} +{{- if include "newrelic.common.fedramp.enabled" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_hostnetwork.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_hostnetwork.tpl new file mode 100644 index 0000000000..4cf017ef7e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_hostnetwork.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the hostNetwork toggle. +This helper allows to override the global `.global.hostNetwork` with the value of `.hostNetwork`. +Returns "true" if `hostNetwork` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.hostNetwork" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} + +{{- /* +`get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs + +We also want only to return when this is true, returning `false` here will template "false" (string) when doing +an `(include "newrelic.common.hostNetwork" .)`, which is not an "empty string" so it is `true` if it is used +as an evaluation somewhere else. 
+*/ -}} +{{- if get .Values "hostNetwork" | kindIs "bool" -}} + {{- if .Values.hostNetwork -}} + {{- .Values.hostNetwork -}} + {{- end -}} +{{- else if get $global "hostNetwork" | kindIs "bool" -}} + {{- if $global.hostNetwork -}} + {{- $global.hostNetwork -}} + {{- end -}} +{{- end -}} +{{- end -}} + + +{{- /* +Abstraction of the hostNetwork toggle. +This helper abstracts the function "newrelic.common.hostNetwork" to return true or false directly. +*/ -}} +{{- define "newrelic.common.hostNetwork.value" -}} +{{- if include "newrelic.common.hostNetwork" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_images.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_images.tpl new file mode 100644 index 0000000000..d4fb432905 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_images.tpl @@ -0,0 +1,94 @@ +{{- /* +Return the proper image name +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) 
}} +*/ -}} +{{- define "newrelic.common.images.image" -}} + {{- $registryName := include "newrelic.common.images.registry" ( dict "imageRoot" .imageRoot "defaultRegistry" .defaultRegistry "context" .context ) -}} + {{- $repositoryName := include "newrelic.common.images.repository" .imageRoot -}} + {{- $tag := include "newrelic.common.images.tag" ( dict "imageRoot" .imageRoot "context" .context) -}} + + {{- if $registryName -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag | quote -}} + {{- else -}} + {{- printf "%s:%s" $repositoryName $tag | quote -}} + {{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image registry +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.registry" -}} +{{- $globalRegistry := "" -}} +{{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- with .context.Values.global.images.registry -}} + {{- $globalRegistry = . -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- $localRegistry := "" -}} +{{- if .imageRoot.registry -}} + {{- $localRegistry = .imageRoot.registry -}} +{{- end -}} + +{{- $registry := $localRegistry | default $globalRegistry | default .defaultRegistry -}} +{{- if $registry -}} + {{- $registry -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image repository +{{ include "newrelic.common.images.repository" .Values.path.to.the.image }} +*/ -}} +{{- define "newrelic.common.images.repository" -}} + {{- .repository -}} +{{- end -}} + + + +{{- /* +Return the proper image tag +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.path.to.the.image "context" .) 
}} +*/ -}} +{{- define "newrelic.common.images.tag" -}} + {{- .imageRoot.tag | default .context.Chart.AppVersion | toString -}} +{{- end -}} + + + +{{- /* +Return the proper Image Pull Registry Secret Names evaluating values as templates +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.path.to.the.images.pullSecrets1, .Values.path.to.the.images.pullSecrets2) "context" .) }} +*/ -}} +{{- define "newrelic.common.images.renderPullSecrets" -}} + {{- $flatlist := list }} + + {{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- if .context.Values.global.images.pullSecrets -}} + {{- range .context.Values.global.images.pullSecrets -}} + {{- $flatlist = append $flatlist . -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- range .pullSecrets -}} + {{- if not (empty .) -}} + {{- range . -}} + {{- $flatlist = append $flatlist . -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- if $flatlist -}} + {{- toYaml $flatlist -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_insights.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_insights.tpl new file mode 100644 index 0000000000..895c377326 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_insights.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the Insights Key. +*/}} +{{- define "newrelic.common.insightsKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "insightskey" ) -}} +{{- include "newrelic.common.insightsKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. 
+*/}} +{{- define "newrelic.common.insightsKey.secretKeyName" -}} +{{- include "newrelic.common.insightsKey._customSecretKey" . | default "insightsKey" -}} +{{- end -}} + +{{/* +Return local insightsKey if set, global otherwise. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._licenseKey" -}} +{{- if .Values.insightsKey -}} + {{- .Values.insightsKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.insightsKey -}} + {{- .Values.global.insightsKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the Insights Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._customSecretName" -}} +{{- if .Values.customInsightsKeySecretName -}} + {{- .Values.customInsightsKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretName -}} + {{- .Values.global.customInsightsKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._customSecretKey" -}} +{{- if .Values.customInsightsKeySecretKey -}} + {{- .Values.customInsightsKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretKey }} + {{- .Values.global.customInsightsKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_insights_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_insights_secret.yaml.tpl new file mode 100644 index 0000000000..556caa6ca6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_insights_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the insights key secret if user has not specified a custom secret. 
+*/}} +{{- define "newrelic.common.insightsKey.secret" }} +{{- if not (include "newrelic.common.insightsKey._customSecretName" .) }} +{{- /* Fail if insightsKey is empty and required: */ -}} +{{- if not (include "newrelic.common.insightsKey._licenseKey" .) }} + {{- fail "You must specify an insightsKey or a customInsightsKeySecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.insightsKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.insightsKey.secretKeyName" . }}: {{ include "newrelic.common.insightsKey._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_labels.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_labels.tpl new file mode 100644 index 0000000000..b025948285 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_labels.tpl @@ -0,0 +1,54 @@ +{{/*
+This will render the labels that should be used in all the manifests used by the helm chart. +*/}} +{{- define "newrelic.common.labels" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- $chart := dict "helm.sh/chart" (include "newrelic.common.naming.chart" . ) -}} +{{- $managedBy := dict "app.kubernetes.io/managed-by" .Release.Service -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . 
) -}} + +{{- $labels := mustMergeOverwrite $chart $managedBy $selectorLabels -}} +{{- if .Chart.AppVersion -}} +{{- $labels = mustMergeOverwrite $labels (dict "app.kubernetes.io/version" .Chart.AppVersion) -}} +{{- end -}} + +{{- $globalUserLabels := $global.labels | default dict -}} +{{- $localUserLabels := .Values.labels | default dict -}} + +{{- $labels = mustMergeOverwrite $labels $globalUserLabels $localUserLabels -}} + +{{- toYaml $labels -}} +{{- end -}} + + + +{{/* +This will render the labels that should be used in deployments/daemonsets template pods as a selector. +*/}} +{{- define "newrelic.common.labels.selectorLabels" -}} +{{- $name := dict "app.kubernetes.io/name" ( include "newrelic.common.naming.name" . ) -}} +{{- $instance := dict "app.kubernetes.io/instance" .Release.Name -}} + +{{- $selectorLabels := mustMergeOverwrite $name $instance -}} + +{{- toYaml $selectorLabels -}} +{{- end }} + + + +{{/* +Pod labels +*/}} +{{- define "newrelic.common.labels.podLabels" -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $global := index .Values "global" | default dict -}} +{{- $globalPodLabels := $global.podLabels | default dict }} + +{{- $localPodLabels := .Values.podLabels | default dict }} + +{{- $podLabels := mustMergeOverwrite $selectorLabels $globalPodLabels $localPodLabels -}} + +{{- toYaml $podLabels -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_license.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_license.tpl new file mode 100644 index 0000000000..cb349f6bb6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_license.tpl @@ -0,0 +1,68 @@ +{{/* +Return the name of the secret holding the License Key. 
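The label helpers build the final map with `mustMergeOverwrite`, where later arguments win: chart-provided labels first, then `global.labels`, then local `labels`, so user values always override and local beats global. A minimal sketch of that precedence (a shallow approximation of Sprig's deep merge; all label values are illustrative):

```python
def merge_overwrite(*dicts):
    """Later dicts overwrite earlier keys, like Sprig's mustMergeOverwrite
    (shallow here, whereas the Sprig function merges nested maps too)."""
    merged = {}
    for d in dicts:
        merged.update(d or {})
    return merged


# Chart labels first, then global user labels, then local user labels.
labels = merge_overwrite(
    {"helm.sh/chart": "common-library-1.2.0", "app.kubernetes.io/name": "adapter"},
    {"team": "platform"},                      # .Values.global.labels
    {"team": "observability", "env": "prod"},  # .Values.labels
)
```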
+*/}} +{{- define "newrelic.common.license.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "license" ) -}} +{{- include "newrelic.common.license._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +*/}} +{{- define "newrelic.common.license.secretKeyName" -}} +{{- include "newrelic.common.license._customSecretKey" . | default "licenseKey" -}} +{{- end -}} + +{{/* +Return local licenseKey if set, global otherwise. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._licenseKey" -}} +{{- if .Values.licenseKey -}} + {{- .Values.licenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.licenseKey -}} + {{- .Values.global.licenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the License Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretName" -}} +{{- if .Values.customSecretName -}} + {{- .Values.customSecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretName -}} + {{- .Values.global.customSecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretKey" -}} +{{- if .Values.customSecretLicenseKey -}} + {{- .Values.customSecretLicenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretLicenseKey }} + {{- .Values.global.customSecretLicenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the user set a custom secret for the license. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._usesCustomSecret" -}} +{{- if or (include "newrelic.common.license._customSecretName" .) 
(include "newrelic.common.license._customSecretKey" .) -}} +true +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_license_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_license_secret.yaml.tpl new file mode 100644 index 0000000000..610a0a3370 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_license_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the license key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.license.secret" }} +{{- if not (include "newrelic.common.license._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.license._licenseKey" .) }} + {{- fail "You must specify a licenseKey or a customSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.license.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.license.secretKeyName" . }}: {{ include "newrelic.common.license._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_low-data-mode.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_low-data-mode.tpl new file mode 100644 index 0000000000..3dd55ef2ff --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_low-data-mode.tpl @@ -0,0 +1,26 @@ +{{- /* +Abstraction of the lowDataMode toggle. +This helper allows to override the global `.global.lowDataMode` with the value of `.lowDataMode`. 
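The license, insights-key and user-key files all resolve their secret name the same way: a custom local name wins over a custom global name, and if neither is set a name is generated from the chart fullname plus a fixed suffix. A sketch of that resolution (parameter names are illustrative; the real helper also DNS-truncates the generated name):

```python
def secret_name(local_custom, global_custom, fullname, suffix):
    """Model of the *.secretName helpers: custom names (local over global)
    win; otherwise derive "<fullname>-<suffix>"."""
    return local_custom or global_custom or f"{fullname}-{suffix}"
```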
+Returns "true" if `lowDataMode` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.lowDataMode" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "lowDataMode" | kindIs "bool") -}} + {{- if .Values.lowDataMode -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.lowDataMode" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.lowDataMode -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "lowDataMode" | kindIs "bool" -}} + {{- if $global.lowDataMode -}} + {{- $global.lowDataMode -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_naming.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_naming.tpl new file mode 100644 index 0000000000..19fa92648c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_naming.tpl @@ -0,0 +1,73 @@ +{{/* +This is a function to be called directly with a string, just to truncate it to +63 chars because some Kubernetes name fields are limited to that. +*/}} +{{- define "newrelic.common.naming.truncateToDNS" -}} +{{- . | trunc 63 | trimSuffix "-" }} +{{- end }} + + + +{{- /* +Given a name and a suffix, returns a DNS-valid name which always includes the suffix, truncating the name if needed. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 
+If the suffix is too long it gets truncated, but it always takes precedence over the name, so a 63-char suffix would suppress the name entirely. +Usage: +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" "" "suffix" "my-suffix" ) }} +*/ -}} +{{- define "newrelic.common.naming.truncateToDNSWithSuffix" -}} +{{- $suffix := (include "newrelic.common.naming.truncateToDNS" .suffix) -}} +{{- $maxLen := (max (sub 63 (add1 (len $suffix))) 0) -}} {{- /* We prepend "-" to the suffix so an additional character is needed */ -}} + +{{- $newName := .name | trunc ($maxLen | int) | trimSuffix "-" -}} +{{- if $newName -}} +{{- printf "%s-%s" $newName $suffix -}} +{{- else -}} +{{ $suffix }} +{{- end -}} + +{{- end -}} + + + +{{/* +Expand the name of the chart. +Uses the Chart name by default if nameOverride is not set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.name" -}} +{{- $name := .Values.nameOverride | default .Chart.Name -}} +{{- include "newrelic.common.naming.truncateToDNS" $name -}} +{{- end }} + + + +{{/* +Create a default fully qualified app name. +By default the full name will be "<chart name>" when the release name already contains the chart name; if not, +the two are concatenated like "<release name>-<chart name>". This could change if fullnameOverride or +nameOverride are set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.fullname" -}} +{{- $name := include "newrelic.common.naming.name" . -}} + +{{- if .Values.fullnameOverride -}} + {{- $name = .Values.fullnameOverride -}} +{{- else if not (contains $name .Release.Name) -}} + {{- $name = printf "%s-%s" .Release.Name $name -}} +{{- end -}} + +{{- include "newrelic.common.naming.truncateToDNS" $name -}} + +{{- end -}} + + + +{{/* +Create chart name and version as used by the chart label. +This function should not be used for naming objects. 
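The truncation rule in `newrelic.common.naming.truncateToDNSWithSuffix` (the suffix always survives; the name yields) can be sketched as:

```python
def truncate_to_dns(name: str) -> str:
    """Cut to 63 chars and drop one trailing '-', like trunc | trimSuffix."""
    return name[:63].removesuffix("-")


def truncate_to_dns_with_suffix(name: str, suffix: str) -> str:
    """Model of newrelic.common.naming.truncateToDNSWithSuffix: reserve
    room for '-<suffix>', truncate the name, and keep the suffix whole."""
    suffix = truncate_to_dns(suffix)
    max_len = max(63 - (len(suffix) + 1), 0)  # one extra char for the '-'
    new_name = name[:max_len].removesuffix("-")
    return f"{new_name}-{suffix}" if new_name else suffix
```

This mirrors why the chart can safely append suffixes like `license` or `insightskey` to an already-long fullname without breaching the 63-character limit on Kubernetes names.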
Use "common.naming.{name,fullname}" instead. +*/}} +{{- define "newrelic.common.naming.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_nodeselector.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_nodeselector.tpl new file mode 100644 index 0000000000..d488873412 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_nodeselector.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod nodeSelector */ -}} +{{- define "newrelic.common.nodeSelector" -}} + {{- if .Values.nodeSelector -}} + {{- toYaml .Values.nodeSelector -}} + {{- else if .Values.global -}} + {{- if .Values.global.nodeSelector -}} + {{- toYaml .Values.global.nodeSelector -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_priority-class-name.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_priority-class-name.tpl new file mode 100644 index 0000000000..50182b7343 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_priority-class-name.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the pod priorityClassName */ -}} +{{- define "newrelic.common.priorityClassName" -}} + {{- if .Values.priorityClassName -}} + {{- .Values.priorityClassName -}} + {{- else if .Values.global -}} + {{- if .Values.global.priorityClassName -}} + {{- .Values.global.priorityClassName -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_privileged.tpl 
b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_privileged.tpl new file mode 100644 index 0000000000..f3ae814dd9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_privileged.tpl @@ -0,0 +1,28 @@ +{{- /* +This is a helper that returns whether the chart should assume the user is fine deploying privileged pods. +*/ -}} +{{- define "newrelic.common.privileged" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist. */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if get .Values "privileged" | kindIs "bool" -}} + {{- if .Values.privileged -}} + {{- .Values.privileged -}} + {{- end -}} +{{- else if get $global "privileged" | kindIs "bool" -}} + {{- if $global.privileged -}} + {{- $global.privileged -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* Return directly "true" or "false" based on the output of "newrelic.common.privileged" */ -}} +{{- define "newrelic.common.privileged.value" -}} +{{- if include "newrelic.common.privileged" . 
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_proxy.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_proxy.tpl new file mode 100644 index 0000000000..60f34c7ec1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_proxy.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the proxy */ -}} +{{- define "newrelic.common.proxy" -}} + {{- if .Values.proxy -}} + {{- .Values.proxy -}} + {{- else if .Values.global -}} + {{- if .Values.global.proxy -}} + {{- .Values.global.proxy -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_region.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_region.tpl new file mode 100644 index 0000000000..bdcacf3235 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_region.tpl @@ -0,0 +1,74 @@ +{{/* +Return the region that is being used by the user +*/}} +{{- define "newrelic.common.region" -}} +{{- if and (include "newrelic.common.license._usesCustomSecret" .) (not (include "newrelic.common.region._fromValues" .)) -}} + {{- fail "This Helm Chart is not able to compute the region. You must specify a .global.region or .region if the license is set using a custom secret." -}} +{{- end -}} + +{{- /* Defaults */ -}} +{{- $region := "us" -}} +{{- if include "newrelic.common.nrStaging" . -}} + {{- $region = "staging" -}} +{{- else if include "newrelic.common.region._isEULicenseKey" . -}} + {{- $region = "eu" -}} +{{- end -}} + +{{- include "newrelic.common.region.validate" (include "newrelic.common.region._fromValues" . 
| default $region ) -}} +{{- end -}} + + + +{{/* +Returns the region from the values if valid. This only return the value from the `values.yaml`. +More intelligence should be used to compute the region. + +Usage: `include "newrelic.common.region.validate" "us"` +*/}} +{{- define "newrelic.common.region.validate" -}} +{{- /* Ref: https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21 */ -}} +{{- $region := . | lower -}} +{{- if eq $region "us" -}} + US +{{- else if eq $region "eu" -}} + EU +{{- else if eq $region "staging" -}} + Staging +{{- else if eq $region "local" -}} + Local +{{- else -}} + {{- fail (printf "the region provided is not valid: %s not in \"US\" \"EU\" \"Staging\" \"Local\"" .) -}} +{{- end -}} +{{- end -}} + + + +{{/* +Returns the region from the values. This only return the value from the `values.yaml`. +More intelligence should be used to compute the region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._fromValues" -}} +{{- if .Values.region -}} + {{- .Values.region -}} +{{- else if .Values.global -}} + {{- if .Values.global.region -}} + {{- .Values.global.region -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the license is for EU region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._isEULicenseKey" -}} +{{- if not (include "newrelic.common.license._usesCustomSecret" .) -}} + {{- $license := include "newrelic.common.license._licenseKey" . 
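The validation in `newrelic.common.region.validate` maps case-insensitive input onto the four canonical spellings and fails otherwise. A sketch of that mapping (raising instead of Helm's `fail`):

```python
VALID_REGIONS = {"us": "US", "eu": "EU", "staging": "Staging", "local": "Local"}

def validate_region(region: str) -> str:
    """Normalize a region the way newrelic.common.region.validate does,
    rejecting anything outside US/EU/Staging/Local."""
    canonical = VALID_REGIONS.get(region.lower())
    if canonical is None:
        raise ValueError(
            f'the region provided is not valid: {region} not in "US" "EU" "Staging" "Local"'
        )
    return canonical
```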
 -}} + {{- if hasPrefix "eu" $license -}} + true + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_security-context.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_security-context.tpl new file mode 100644 index 0000000000..9edfcabfd0 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_security-context.tpl @@ -0,0 +1,23 @@ +{{- /* Defines the container securityContext context */ -}} +{{- define "newrelic.common.securityContext.container" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.containerSecurityContext -}} + {{- toYaml .Values.containerSecurityContext -}} +{{- else if $global.containerSecurityContext -}} + {{- toYaml $global.containerSecurityContext -}} +{{- end -}} +{{- end -}} + + + +{{- /* Defines the pod securityContext context */ -}} +{{- define "newrelic.common.securityContext.pod" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.podSecurityContext -}} + {{- toYaml .Values.podSecurityContext -}} +{{- else if $global.podSecurityContext -}} + {{- toYaml $global.podSecurityContext -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_serviceaccount.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_serviceaccount.tpl new file mode 100644 index 0000000000..2d352f6ea9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_serviceaccount.tpl @@ -0,0 +1,90 @@ +{{- /* Defines if the service account has to be created or not */ -}} +{{- define "newrelic.common.serviceAccount.create" -}} +{{- $valueFound := false -}} + +{{- /* Look for a local
creation of a service account */ -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "create" | kindIs "bool") -}} + {{- $valueFound = true -}} + {{- if .Values.serviceAccount.create -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.serviceAccount.name" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.serviceAccount.create -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* Look for a global creation of a service account */ -}} +{{- if not $valueFound -}} + {{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} + {{- $global := index .Values "global" | default dict -}} + {{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "create" | kindIs "bool" -}} + {{- $valueFound = true -}} + {{- if $global.serviceAccount.create -}} + {{- $global.serviceAccount.create -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* In case no serviceAccount value has been found, default to "true" */ -}} +{{- if not $valueFound -}} +true +{{- end -}} +{{- end -}} + + + +{{- /* Defines the name of the service account */ -}} +{{- define "newrelic.common.serviceAccount.name" -}} +{{- $localServiceAccount := "" -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "name" | kindIs "string") -}} + {{- $localServiceAccount = .Values.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := "" -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "name" | kindIs "string" -}} + {{- $globalServiceAccount = $global.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- if (include 
"newrelic.common.serviceAccount.create" .) -}} + {{- $localServiceAccount | default $globalServiceAccount | default (include "newrelic.common.naming.fullname" .) -}} +{{- else -}} + {{- $localServiceAccount | default $globalServiceAccount | default "default" -}} +{{- end -}} +{{- end -}} + + + +{{- /* Merge the global and local annotations for the service account */ -}} +{{- define "newrelic.common.serviceAccount.annotations" -}} +{{- $localServiceAccount := dict -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if get .Values.serviceAccount "annotations" -}} + {{- $localServiceAccount = .Values.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := dict -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "annotations" -}} + {{- $globalServiceAccount = $global.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $merged := mustMergeOverwrite $globalServiceAccount $localServiceAccount -}} + +{{- if $merged -}} + {{- toYaml $merged -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_staging.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_staging.tpl new file mode 100644 index 0000000000..bd9ad09bb9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_staging.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the nrStaging toggle. +This helper allows to override the global `.global.nrStaging` with the value of `.nrStaging`. 
+Returns "true" if `nrStaging` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.nrStaging" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "nrStaging" | kindIs "bool") -}} + {{- if .Values.nrStaging -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.nrStaging" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.nrStaging -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "nrStaging" | kindIs "bool" -}} + {{- if $global.nrStaging -}} + {{- $global.nrStaging -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Returns "true" or "false" directly instead of an empty string (Helm falsiness) based on the output of "newrelic.common.nrStaging" +*/ -}} +{{- define "newrelic.common.nrStaging.value" -}} +{{- if include "newrelic.common.nrStaging" . 
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_tolerations.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_tolerations.tpl new file mode 100644 index 0000000000..e016b38e27 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_tolerations.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod tolerations */ -}} +{{- define "newrelic.common.tolerations" -}} + {{- if .Values.tolerations -}} + {{- toYaml .Values.tolerations -}} + {{- else if .Values.global -}} + {{- if .Values.global.tolerations -}} + {{- toYaml .Values.global.tolerations -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_userkey.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_userkey.tpl new file mode 100644 index 0000000000..982ea8e09d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_userkey.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the API Key. +*/}} +{{- define "newrelic.common.userKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "userkey" ) -}} +{{- include "newrelic.common.userKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +*/}} +{{- define "newrelic.common.userKey.secretKeyName" -}} +{{- include "newrelic.common.userKey._customSecretKey" . | default "userKey" -}} +{{- end -}} + +{{/* +Return local API Key if set, global otherwise. +This helper is for internal use. 
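The `nrStaging`, `lowDataMode`, `privileged` and `verboseLog` helpers all share one toggle shape: return the value only when it is boolean true, so that `include` yields an empty (falsy) string otherwise, and an explicit local `false` masks a global `true` because the `kindIs "bool"` check stops the fallthrough. A sketch of that shape:

```python
def toggle(values: dict, key: str) -> str:
    """Helm-style truthiness: "true" when the local boolean (or, failing a
    local boolean, the global one) is true; "" in every other case."""
    local = values.get(key)
    if isinstance(local, bool):          # a local bool always decides,
        return "true" if local else ""   # even when it is False
    glob = values.get("global") or {}
    if isinstance(glob.get(key), bool):
        return "true" if glob[key] else ""
    return ""
```

The companion `*.value` / `*.valueAsBoolean` helpers simply map this empty-or-"true" result onto a literal "true"/"false" (or 1/0) for places that need an explicit string.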
+*/}} +{{- define "newrelic.common.userKey._userKey" -}} +{{- if .Values.userKey -}} + {{- .Values.userKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.userKey -}} + {{- .Values.global.userKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the API Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretName" -}} +{{- if .Values.customUserKeySecretName -}} + {{- .Values.customUserKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretName -}} + {{- .Values.global.customUserKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretKey" -}} +{{- if .Values.customUserKeySecretKey -}} + {{- .Values.customUserKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretKey }} + {{- .Values.global.customUserKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_userkey_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_userkey_secret.yaml.tpl new file mode 100644 index 0000000000..b979856548 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_userkey_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the user key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.userKey.secret" }} +{{- if not (include "newrelic.common.userKey._customSecretName" .) }} +{{- /* Fail if user key is empty and required: */ -}} +{{- if not (include "newrelic.common.userKey._userKey" .) 
}} + {{- fail "You must specify a userKey or a customUserKeySecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.userKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.userKey.secretKeyName" . }}: {{ include "newrelic.common.userKey._userKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_verbose-log.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_verbose-log.tpl new file mode 100644 index 0000000000..2286d46815 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/templates/_verbose-log.tpl @@ -0,0 +1,54 @@ +{{- /* +Abstraction of the verbose toggle. +This helper allows to override the global `.global.verboseLog` with the value of `.verboseLog`. +Returns "true" if `verbose` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.verboseLog" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "verboseLog" | kindIs "bool") -}} + {{- if .Values.verboseLog -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.verboseLog" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.verboseLog -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "verboseLog" | kindIs "bool" -}} + {{- if $global.verboseLog -}} + {{- $global.verboseLog -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return true or false directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsBoolean" -}} +{{- if include "newrelic.common.verboseLog" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return 1 or 0 directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsInt" -}} +{{- if include "newrelic.common.verboseLog" . -}} +1 +{{- else -}} +0 +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/values.yaml new file mode 100644 index 0000000000..75e2d112ad --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/charts/common-library/values.yaml @@ -0,0 +1 @@ +# values are not needed for the library chart, however this file is still needed for helm lint to work. 
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/ci/test-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/ci/test-values.yaml new file mode 100644 index 0000000000..f0f9be1f92 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/ci/test-values.yaml @@ -0,0 +1,14 @@ +global: + cluster: test-cluster + +personalAPIKey: "a21321" +verboseLog: false + +config: + accountID: 111 + region: EU + nrdbClientTimeoutSeconds: 30 + +image: + repository: e2e/newrelic-metrics-adapter + tag: "test" # Defaults to AppVersion diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/_helpers.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/_helpers.tpl new file mode 100644 index 0000000000..6a5f765037 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/_helpers.tpl @@ -0,0 +1,57 @@ +{{/* vim: set filetype=mustache: */}} + +{{- /* Allow to change pod defaults dynamically based if we are running in privileged mode or not */ -}} +{{- define "newrelic-k8s-metrics-adapter.securityContext.pod" -}} +{{- if include "newrelic.common.securityContext.pod" . -}} +{{- include "newrelic.common.securityContext.pod" . -}} +{{- else -}} +fsGroup: 1001 +runAsUser: 1001 +runAsGroup: 1001 +{{- end -}} +{{- end -}} + + + +{{/* +Select a value for the region +When this value is empty the New Relic client region will be the default 'US' +*/}} +{{- define "newrelic-k8s-metrics-adapter.region" -}} +{{- if .Values.config.region -}} + {{- .Values.config.region | upper -}} +{{- else if (include "newrelic.common.nrStaging" .) -}} +Staging +{{- else if hasPrefix "eu" (include "newrelic.common.license._licenseKey" .) 
-}} +EU +{{- end -}} +{{- end -}} + + + +{{- /* +Naming helpers +*/ -}} +{{- define "newrelic-k8s-metrics-adapter.name.apiservice" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "apiservice") }} +{{- end -}} + +{{- define "newrelic-k8s-metrics-adapter.name.apiservice.serviceAccount" -}} +{{- if include "newrelic.common.serviceAccount.create" . -}} + {{- include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "apiservice") -}} +{{- else -}} + {{- include "newrelic.common.serviceAccount.name" . -}} +{{- end -}} +{{- end -}} + +{{- define "newrelic-k8s-metrics-adapter.name.apiservice-create" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "apiservice-create") }} +{{- end -}} + +{{- define "newrelic-k8s-metrics-adapter.name.apiservice-patch" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "apiservice-patch") }} +{{- end -}} + +{{- define "newrelic-k8s-metrics-adapter.name.hpa-controller" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "hpa-controller") }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/adapter-clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/adapter-clusterrolebinding.yaml new file mode 100644 index 0000000000..40bcba8b63 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/adapter-clusterrolebinding.yaml @@ -0,0 +1,14 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "newrelic.common.naming.fullname" . 
}}:system:auth-delegator + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: system:auth-delegator +subjects: +- kind: ServiceAccount + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/adapter-rolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/adapter-rolebinding.yaml new file mode 100644 index 0000000000..afb5d2d550 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/adapter-rolebinding.yaml @@ -0,0 +1,15 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + namespace: kube-system + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: extension-apiserver-authentication-reader +subjects: +- kind: ServiceAccount + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/apiservice.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/apiservice.yaml new file mode 100644 index 0000000000..8f01b64071 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/apiservice.yaml @@ -0,0 +1,19 @@ +apiVersion: apiregistration.k8s.io/v1 +kind: APIService +metadata: + name: v1beta1.external.metrics.k8s.io + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +{{- if .Values.certManager.enabled }} + annotations: + certmanager.k8s.io/inject-ca-from: {{ printf "%s/%s-root-cert" .Release.Namespace (include "newrelic.common.naming.fullname" .) 
| quote }} + cert-manager.io/inject-ca-from: {{ printf "%s/%s-root-cert" .Release.Namespace (include "newrelic.common.naming.fullname" .) | quote }} +{{- end }} +spec: + service: + name: {{ include "newrelic.common.naming.fullname" . }} + namespace: {{ .Release.Namespace }} + group: external.metrics.k8s.io + version: v1beta1 + groupPriorityMinimum: 100 + versionPriority: 100 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/clusterrole.yaml new file mode 100644 index 0000000000..5c364eb375 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/clusterrole.yaml @@ -0,0 +1,26 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +rules: + - apiGroups: + - apiregistration.k8s.io + resources: + - apiservices + verbs: + - get + - update +{{- if .Values.rbac.pspEnabled }} + - apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - {{ include "newrelic-k8s-metrics-adapter.name.apiservice" . 
}} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/clusterrolebinding.yaml new file mode 100644 index 0000000000..8aa95792e8 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/clusterrolebinding.yaml @@ -0,0 +1,19 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice" . }} +subjects: + - kind: ServiceAccount + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice.serviceAccount" . }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/job-createSecret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/job-createSecret.yaml new file mode 100644 index 0000000000..6cf89b79e6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/job-createSecret.yaml @@ -0,0 +1,55 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: batch/v1 +kind: Job +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice-create" . 
}} + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + template: + metadata: + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice-create" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 8 }} + spec: + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.image.pullSecrets) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + containers: + - name: create + image: {{ include "newrelic.common.images.image" ( dict "defaultRegistry" "registry.k8s.io" "imageRoot" .Values.apiServicePatchJob.image "context" .) }} + imagePullPolicy: {{ .Values.apiServicePatchJob.image.pullPolicy }} + args: + - create + - --host={{ include "newrelic.common.naming.fullname" . }},{{ include "newrelic.common.naming.fullname" . }}.{{ .Release.Namespace }}.svc + - --namespace={{ .Release.Namespace }} + - --secret-name={{ include "newrelic-k8s-metrics-adapter.name.apiservice" . }} + - --cert-name=tls.crt + - --key-name=tls.key + {{- with .Values.apiServicePatchJob.volumeMounts }} + volumeMounts: + {{- toYaml . | nindent 10 }} + {{- end }} + {{- with .Values.apiServicePatchJob.volumes }} + volumes: + {{- toYaml . | nindent 8 }} + {{- end }} + restartPolicy: OnFailure + serviceAccountName: {{ include "newrelic-k8s-metrics-adapter.name.apiservice.serviceAccount" . }} + securityContext: + runAsGroup: 2000 + runAsNonRoot: true + runAsUser: 2000 + nodeSelector: + kubernetes.io/os: linux + {{ include "newrelic.common.nodeSelector" . | nindent 8 }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . 
| nindent 8 }} + {{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/job-patchAPIService.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/job-patchAPIService.yaml new file mode 100644 index 0000000000..9d651c2108 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/job-patchAPIService.yaml @@ -0,0 +1,53 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: batch/v1 +kind: Job +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice-patch" . }} + annotations: + "helm.sh/hook": post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + template: + metadata: + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice-patch" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 8 }} + spec: + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.image.pullSecrets) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + containers: + - name: patch + image: {{ include "newrelic.common.images.image" ( dict "defaultRegistry" "registry.k8s.io" "imageRoot" .Values.apiServicePatchJob.image "context" .) }} + imagePullPolicy: {{ .Values.apiServicePatchJob.image.pullPolicy }} + args: + - patch + - --namespace={{ .Release.Namespace }} + - --secret-name={{ include "newrelic-k8s-metrics-adapter.name.apiservice" . }} + - --apiservice-name=v1beta1.external.metrics.k8s.io + {{- with .Values.apiServicePatchJob.volumeMounts }} + volumeMounts: + {{- toYaml . | nindent 10 }} + {{- end }} + {{- with .Values.apiServicePatchJob.volumes }} + volumes: + {{- toYaml . 
| nindent 6 }} + {{- end }} + restartPolicy: OnFailure + serviceAccountName: {{ include "newrelic-k8s-metrics-adapter.name.apiservice.serviceAccount" . }} + securityContext: + runAsGroup: 2000 + runAsNonRoot: true + runAsUser: 2000 + nodeSelector: + kubernetes.io/os: linux + {{ include "newrelic.common.nodeSelector" . | nindent 8 }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . | nindent 8 }} + {{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/psp.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/psp.yaml new file mode 100644 index 0000000000..1dd6bc1a6f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/psp.yaml @@ -0,0 +1,49 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled) (.Values.rbac.pspEnabled)) }} +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + privileged: false + # Required to prevent escalations to root. + # allowPrivilegeEscalation: false + # This is redundant with non-root + disallow privilege escalation, + # but we can provide it for defense in depth. + # requiredDropCapabilities: + # - ALL + # Allow core volume types. + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + - 'persistentVolumeClaim' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + # Permits the container to run with root privileges as well. + rule: 'RunAsAny' + seLinux: + # This policy assumes the nodes are using AppArmor rather than SELinux. 
+ rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 0 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 0 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/role.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/role.yaml new file mode 100644 index 0000000000..1e870e0828 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/role.yaml @@ -0,0 +1,20 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + {{- include "newrelic.common.labels" . 
| nindent 4 }} +rules: + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - create +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/rolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/rolebinding.yaml new file mode 100644 index 0000000000..cbe8bdb72d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/rolebinding.yaml @@ -0,0 +1,20 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice" . }} +subjects: + - kind: ServiceAccount + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice.serviceAccount" . }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/serviceaccount.yaml new file mode 100644 index 0000000000..68a3cfd730 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/apiservice/job-patch/serviceaccount.yaml @@ -0,0 +1,18 @@ +{{- $createServiceAccount := include "newrelic.common.serviceAccount.create" . 
-}} +{{- if (and $createServiceAccount (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic-k8s-metrics-adapter.name.apiservice.serviceAccount" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + # When hooks are sorted by weight and name, kind order gets overwritten, + # then this serviceAccount doesn't get created before dependent objects causing a failure. + # This weight is set, forcing it always to get created before the other objects. + # We submitted this PR to fix the issue: https://github.com/helm/helm/pull/10787 + "helm.sh/hook-weight": "-1" + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/configmap.yaml new file mode 100644 index 0000000000..8e88ad59e4 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/configmap.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + config.yaml: | + accountID: {{ .Values.config.accountID | required "config.accountID is required" }} + {{- with (include "newrelic-k8s-metrics-adapter.region" .) }} + region: {{ . }} + {{- end }} + cacheTTLSeconds: {{ .Values.config.cacheTTLSeconds | default "0" }} + {{- with .Values.config.externalMetrics }} + externalMetrics: + {{- toYaml . 
| nindent 6 }} + {{- end }} + nrdbClientTimeoutSeconds: {{ .Values.config.nrdbClientTimeoutSeconds | default "30" }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/deployment.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/deployment.yaml new file mode 100644 index 0000000000..1b96459a5d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/deployment.yaml @@ -0,0 +1,113 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + replicas: {{ .Values.replicas }} + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }} + template: + metadata: + annotations: + checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} + checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }} + {{- if .Values.podAnnotations }} + {{- toYaml .Values.podAnnotations | nindent 8 }} + {{- end }} + labels: + {{- include "newrelic.common.labels.podLabels" . | nindent 8 }} + spec: + serviceAccountName: {{ include "newrelic.common.serviceAccount.name" . }} + {{- with include "newrelic-k8s-metrics-adapter.securityContext.pod" . }} + securityContext: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.image.pullSecrets) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + containers: + - name: {{ include "newrelic.common.naming.name" . }} + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.image "context" .) }} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + {{- with include "newrelic.common.securityContext.container" . }} + securityContext: + {{- . 
| nindent 10 }} + {{- end }} + args: + - --tls-cert-file=/tmp/k8s-metrics-adapter/serving-certs/tls.crt + - --tls-private-key-file=/tmp/k8s-metrics-adapter/serving-certs/tls.key + {{- if .Values.verboseLog }} + - --v=10 + {{- else }} + - --v=1 + {{- end }} + readinessProbe: + httpGet: + scheme: HTTPS + path: /healthz + port: 6443 + initialDelaySeconds: 1 + {{- if .Values.resources }} + resources: + {{- toYaml .Values.resources | nindent 10 }} + {{- end }} + env: + - name: CLUSTER_NAME + value: {{ include "newrelic.common.cluster" . }} + - name: NEWRELIC_API_KEY + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.naming.fullname" . }} + key: personalAPIKey + {{- with (include "newrelic.common.proxy" .) }} + - name: HTTPS_PROXY + value: {{ . }} + {{- end }} + {{- with .Values.extraEnv }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.extraEnvFrom }} + envFrom: {{ toYaml . | nindent 8 }} + {{- end }} + volumeMounts: + - name: tls-key-cert-pair + mountPath: /tmp/k8s-metrics-adapter/serving-certs/ + - name: config + mountPath: /etc/newrelic/adapter/ + {{- with .Values.extraVolumeMounts }} + {{- toYaml . | nindent 8 }} + {{- end }} + volumes: + - name: tls-key-cert-pair + secret: + secretName: {{ include "newrelic-k8s-metrics-adapter.name.apiservice" . }} + - name: config + configMap: + name: {{ include "newrelic.common.naming.fullname" . }} + {{- with .Values.extraVolumes }} + {{- toYaml . | nindent 6 }} + {{- end }} + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} + nodeSelector: + kubernetes.io/os: linux + {{ include "newrelic.common.nodeSelector" . | nindent 8 }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.affinity" . }} + affinity: + {{- . | nindent 8 }} + {{- end }} + hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }} + {{- with include "newrelic.common.dnsConfig" . 
}} + dnsConfig: + {{- . | nindent 8 }} + {{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/hpa-clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/hpa-clusterrole.yaml new file mode 100644 index 0000000000..402fece01e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/hpa-clusterrole.yaml @@ -0,0 +1,15 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "newrelic.common.naming.fullname" . }}:external-metrics + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +rules: +- apiGroups: + - external.metrics.k8s.io + resources: + - "*" + verbs: + - list + - get + - watch diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/hpa-clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/hpa-clusterrolebinding.yaml new file mode 100644 index 0000000000..390fab452b --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/hpa-clusterrolebinding.yaml @@ -0,0 +1,14 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "newrelic-k8s-metrics-adapter.name.hpa-controller" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "newrelic.common.naming.fullname" . 
}}:external-metrics +subjects: +- kind: ServiceAccount + name: horizontal-pod-autoscaler + namespace: kube-system diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/secret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/secret.yaml new file mode 100644 index 0000000000..09a70ab655 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/secret.yaml @@ -0,0 +1,10 @@ +apiVersion: v1 +kind: Secret +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +type: Opaque +stringData: + personalAPIKey: {{ .Values.personalAPIKey | required "personalAPIKey must be set" | quote }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/service.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/service.yaml new file mode 100644 index 0000000000..82015830c6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/service.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Service +metadata: + namespace: {{ .Release.Namespace }} + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + ports: + - port: 443 + targetPort: 6443 + selector: + {{- include "newrelic.common.labels.selectorLabels" . 
| nindent 4 }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/serviceaccount.yaml new file mode 100644 index 0000000000..b1e74523e5 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/templates/serviceaccount.yaml @@ -0,0 +1,13 @@ +{{- if include "newrelic.common.serviceAccount.create" . -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- if include "newrelic.common.serviceAccount.annotations" . }} + annotations: + {{- include "newrelic.common.serviceAccount.annotations" . | nindent 4 }} + {{- end }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/apiservice_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/apiservice_test.yaml new file mode 100644 index 0000000000..086160edc5 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/apiservice_test.yaml @@ -0,0 +1,22 @@ +suite: test naming helper for APIService's certmanager annotations and service name +templates: + - templates/apiservice/apiservice.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: Annotations are correctly defined + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 11111111 + certManager: + enabled: true + asserts: + - matchRegex: + path: metadata.annotations["certmanager.k8s.io/inject-ca-from"] + pattern: ^my-namespace\/.*-root-cert + - matchRegex: + path: metadata.annotations["cert-manager.io/inject-ca-from"] + pattern: ^my-namespace\/.*-root-cert diff --git 
a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/common_extra_naming_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/common_extra_naming_test.yaml new file mode 100644 index 0000000000..82098ba1c4 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/common_extra_naming_test.yaml @@ -0,0 +1,27 @@ +suite: test naming helpers +templates: + - templates/adapter-clusterrolebinding.yaml + - templates/hpa-clusterrole.yaml + - templates/hpa-clusterrolebinding.yaml + - templates/apiservice/job-patch/clusterrole.yaml + - templates/apiservice/job-patch/clusterrolebinding.yaml + - templates/apiservice/job-patch/job-createSecret.yaml + - templates/apiservice/job-patch/job-patchAPIService.yaml + - templates/apiservice/job-patch/psp.yaml + - templates/apiservice/job-patch/rolebinding.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: default values has its name correctly defined + set: + cluster: test-cluster + personalAPIKey: 21321 + config: + accountID: 11111111 + rbac: + pspEnabled: true + asserts: + - matchRegex: + path: metadata.name + pattern: ^.*(-apiservice|-hpa-controller|:external-metrics|:system:auth-delegator) diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/configmap_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/configmap_test.yaml new file mode 100644 index 0000000000..90b8798a7b --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/configmap_test.yaml @@ -0,0 +1,104 @@ +suite: test configmap region helper and externalMetrics +templates: + - templates/configmap.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: has the correct region when defined in local values + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + asserts: + - 
equal: + path: data["config.yaml"] + value: | + accountID: 111 + region: A-REGION + cacheTTLSeconds: 30 + nrdbClientTimeoutSeconds: 30 + - it: has the correct region when global staging + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + global: + nrStaging: true + asserts: + - equal: + path: data["config.yaml"] + value: | + accountID: 111 + region: Staging + cacheTTLSeconds: 30 + nrdbClientTimeoutSeconds: 30 + - it: has the correct region when global values and licenseKey is from eu + set: + personalAPIKey: 21321 + licenseKey: eu-whatever + cluster: test-cluster + config: + accountID: 111 + global: + aRandomGlobalValue: true + asserts: + - equal: + path: data["config.yaml"] + value: | + accountID: 111 + region: EU + cacheTTLSeconds: 30 + nrdbClientTimeoutSeconds: 30 + - it: has the correct region when no global values exist and licenseKey is from eu + set: + personalAPIKey: 21321 + cluster: test-cluster + licenseKey: eu-whatever + config: + accountID: 111 + asserts: + - equal: + path: data["config.yaml"] + value: | + accountID: 111 + region: EU + cacheTTLSeconds: 30 + nrdbClientTimeoutSeconds: 30 + - it: has no region when not defined and licenseKey is not from eu + set: + personalAPIKey: 21321 + cluster: test-cluster + licenseKey: us-whatever + config: + accountID: 111 + asserts: + - equal: + path: data["config.yaml"] + value: | + accountID: 111 + cacheTTLSeconds: 30 + nrdbClientTimeoutSeconds: 30 + - it: has externalMetrics when defined + set: + personalAPIKey: 21321 + cluster: test-cluster + licenseKey: us-whatever + config: + accountID: 111 + externalMetrics: + nginx_average_requests: + query: "FROM Metric SELECT average(nginx.server.net.requestsPerSecond)" + asserts: + - equal: + path: data["config.yaml"] + value: | + accountID: 111 + cacheTTLSeconds: 30 + externalMetrics: + nginx_average_requests: + query: FROM Metric SELECT average(nginx.server.net.requestsPerSecond) + nrdbClientTimeoutSeconds: 30 diff --git 
a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/deployment_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/deployment_test.yaml new file mode 100644 index 0000000000..7a18987907 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/deployment_test.yaml @@ -0,0 +1,99 @@ +suite: test deployment images +release: + name: my-release + namespace: my-namespace +tests: + - it: has the correct image + set: + global: + cluster: test-cluster + personalAPIKey: 21321 + image: + repository: newrelic/newrelic-k8s-metrics-adapter + tag: "latest" + pullSecrets: + - name: regsecret + config: + accountID: 111 + region: A-REGION + asserts: + - matchRegex: + path: spec.template.spec.containers[0].image + pattern: ^.*newrelic/newrelic-k8s-metrics-adapter:latest + template: templates/deployment.yaml + - equal: + path: spec.template.spec.imagePullSecrets + value: + - name: regsecret + template: templates/deployment.yaml + - it: correctly uses the cluster helper + set: + personalAPIKey: 21321 + config: + accountID: 111 + region: A-REGION + cluster: a-cluster + asserts: + - equal: + path: spec.template.spec.containers[0].env[0].value + value: a-cluster + template: templates/deployment.yaml + - it: correctly uses common.securityContext.podDefaults + set: + personalAPIKey: 21321 + config: + accountID: 111 + region: A-REGION + cluster: a-cluster + asserts: + - equal: + path: spec.template.spec.securityContext + value: + fsGroup: 1001 + runAsGroup: 1001 + runAsUser: 1001 + template: templates/deployment.yaml + - it: correctly uses common.proxy + set: + personalAPIKey: 21321 + config: + accountID: 111 + region: A-REGION + cluster: a-cluster + proxy: localhost:1234 + asserts: + - equal: + path: spec.template.spec.containers[0].env[2].value + value: localhost:1234 + template: templates/deployment.yaml + + - it: has a linux node selector by default + set: + personalAPIKey: 21321 +
cluster: test-cluster + config: + accountID: 111 + region: A-REGION + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + template: templates/deployment.yaml + + - it: has a linux node selector and additional selectors + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + nodeSelector: + aCoolTestLabel: aCoolTestValue + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + aCoolTestLabel: aCoolTestValue + template: templates/deployment.yaml diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/hpa_clusterrolebinding_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/hpa_clusterrolebinding_test.yaml new file mode 100644 index 0000000000..4fba87fbe1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/hpa_clusterrolebinding_test.yaml @@ -0,0 +1,18 @@ +suite: test naming helper for clusterRoleBinding roleRef +templates: + - templates/hpa-clusterrolebinding.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: roleRef.name has its name correctly defined + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + asserts: + - matchRegex: + path: roleRef.name + pattern: ^.*:external-metrics diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_cluster_rolebinding_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_cluster_rolebinding_test.yaml new file mode 100644 index 0000000000..dd582313ea --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_cluster_rolebinding_test.yaml @@ -0,0 +1,22 @@ +suite: test job-patch RoleBinding and ClusterRoleBinding rendering and roleRef/Subjects names +templates: + -
templates/apiservice/job-patch/rolebinding.yaml + - templates/apiservice/job-patch/clusterrolebinding.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: roleRef apiGroup and Subjects are correctly defined + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + asserts: + - matchRegex: + path: roleRef.name + pattern: ^.*-apiservice + - matchRegex: + path: subjects[0].name + pattern: ^.*-apiservice diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_clusterrole_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_clusterrole_test.yaml new file mode 100644 index 0000000000..33a1eaa731 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_clusterrole_test.yaml @@ -0,0 +1,20 @@ +suite: test job-patch clusterRole rule resourceName and rendering +templates: + - templates/apiservice/job-patch/clusterrole.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: PodSecurityPolicy rule resourceName is correctly defined + set: + rbac: + pspEnabled: true + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + asserts: + - matchRegex: + path: rules[1].resourceNames[0] + pattern: ^.*-apiservice diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_common_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_common_test.yaml new file mode 100644 index 0000000000..91cd791d1c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_common_test.yaml @@ -0,0 +1,27 @@ +suite: test labels and rendering for job-patch objects +templates: + - templates/apiservice/job-patch/clusterrole.yaml + - templates/apiservice/job-patch/clusterrolebinding.yaml + -
templates/apiservice/job-patch/job-createSecret.yaml + - templates/apiservice/job-patch/job-patchAPIService.yaml + - templates/apiservice/job-patch/psp.yaml + - templates/apiservice/job-patch/role.yaml + - templates/apiservice/job-patch/rolebinding.yaml + - templates/apiservice/job-patch/serviceaccount.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: If customTLSCertificate and Certmanager enabled do not render + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + customTLSCertificate: a-tls-cert + certManager: + enabled: true + asserts: + - hasDocuments: + count: 0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_job_createsecret_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_job_createsecret_test.yaml new file mode 100644 index 0000000000..6db79234fd --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_job_createsecret_test.yaml @@ -0,0 +1,47 @@ +suite: test naming helper for job-createSecret +templates: + - templates/apiservice/job-patch/job-createSecret.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: spec metadata name is correctly defined + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + asserts: + - equal: + path: spec.template.metadata.name + value: my-release-newrelic-k8s-metrics-adapter-apiservice-create + - it: container args are correctly defined + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + asserts: + - matchRegex: + path: spec.template.spec.containers[0].args[1] + pattern: --host=.*,.*\.my-namespace.svc + - matchRegex: + path: spec.template.spec.containers[0].args[3] + pattern: --secret-name=.*-apiservice + - it: has the correct image + set: + cluster: test-cluster + config: +
accountID: 111 + region: A-REGION + personalAPIKey: 21321 + apiServicePatchJob: + image: + repository: registry.k8s.io/ingress-nginx/kube-webhook-certgen + tag: "latest" + asserts: + - matchRegex: + path: spec.template.spec.containers[0].image + pattern: ^.*registry.k8s.io/ingress-nginx/kube-webhook-certgen:latest diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_job_patchapiservice_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_job_patchapiservice_test.yaml new file mode 100644 index 0000000000..0be0833135 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_patch_job_patchapiservice_test.yaml @@ -0,0 +1,56 @@ +suite: test naming helper for job-patchAPIService +templates: + - templates/apiservice/job-patch/job-patchAPIService.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: spec metadata name is correctly defined + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + asserts: + - matchRegex: + path: spec.template.metadata.name + pattern: .*-apiservice-patch$ + - it: container args are correctly defined + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + asserts: + - matchRegex: + path: spec.template.spec.containers[0].args[2] + pattern: ^--secret-name=.*-apiservice + + - it: serviceAccountName is correctly defined + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + asserts: + - matchRegex: + path: spec.template.spec.serviceAccountName + pattern: .*-apiservice$ + - it: has the correct image + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + apiServicePatchJob: + image: + repository: registry.k8s.io/ingress-nginx/kube-webhook-certgen + tag: "latest" + asserts: + - matchRegex: + path:
spec.template.spec.containers[0].image + pattern: .*registry.k8s.io/ingress-nginx/kube-webhook-certgen:latest$ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_serviceaccount_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_serviceaccount_test.yaml new file mode 100644 index 0000000000..9b6207c350 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/job_serviceaccount_test.yaml @@ -0,0 +1,79 @@ +suite: test job's serviceAccount +templates: + - templates/apiservice/job-patch/job-createSecret.yaml + - templates/apiservice/job-patch/job-patchAPIService.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: RBAC points to the service account that is created by default + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + rbac.create: true + serviceAccount.create: true + asserts: + - equal: + path: spec.template.spec.serviceAccountName + value: my-release-newrelic-k8s-metrics-adapter-apiservice + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + rbac.create: true + serviceAccount.create: false + serviceAccount.name: sa-test + asserts: + - equal: + path: spec.template.spec.serviceAccountName + value: sa-test + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + rbac.create: true + serviceAccount.create: false + asserts: + - equal: + path: spec.template.spec.serviceAccountName + value: default + + - it: has a linux node selector by default + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + asserts: + - equal: + path:
spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + + - it: has a linux node selector and additional selectors + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + nodeSelector: + aCoolTestLabel: aCoolTestValue + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + aCoolTestLabel: aCoolTestValue \ No newline at end of file diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/rbac_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/rbac_test.yaml new file mode 100644 index 0000000000..78884c0229 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/tests/rbac_test.yaml @@ -0,0 +1,50 @@ +suite: test RBAC creation +templates: + - templates/apiservice/job-patch/rolebinding.yaml + - templates/apiservice/job-patch/clusterrolebinding.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: RBAC points to the service account that is created by default + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + rbac.create: true + serviceAccount.create: true + asserts: + - equal: + path: subjects[0].name + value: my-release-newrelic-k8s-metrics-adapter-apiservice + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + rbac.create: true + serviceAccount.create: false + serviceAccount.name: sa-test + asserts: + - equal: + path: subjects[0].name + value: sa-test + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + personalAPIKey: 21321 + cluster: test-cluster + config: + accountID: 111 + region: A-REGION + rbac.create: true + serviceAccount.create: false + asserts: + - equal: + path: subjects[0].name + value: 
default diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/values.yaml new file mode 100644 index 0000000000..5c610f7926 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-k8s-metrics-adapter/values.yaml @@ -0,0 +1,156 @@ +# IMPORTANT: The Kubernetes cluster name +# https://docs.newrelic.com/docs/kubernetes-monitoring-integration +# +# licenseKey: +# cluster: +# IMPORTANT: the previous values can also be set as global so that they +# can be shared by other newrelic products' charts. +# +# global: +# licenseKey: +# cluster: +# nrStaging: + +# -- New Relic [Personal API Key](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/#user-api-key) (stored in a secret). Used to connect to NerdGraph in order to fetch the configured metrics. (**Required**) +personalAPIKey: + +# -- Enable metrics adapter verbose logs. +verboseLog: false + +config: + # -- New Relic [Account ID](https://docs.newrelic.com/docs/accounts/accounts-billing/account-structure/account-id/) where the configured metrics are sourced from. (**Required**) + accountID: + + # config.region -- New Relic account region. If not set, it will be automatically derived from the License Key. + # @default -- Automatically detected from `licenseKey`. + region: + # For US-based accounts, the region is: `US`. + # For EU-based accounts, the region is: `EU`. + # For Staging accounts, the region is: 'Staging'; this is also automatically derived from `global.nrStaging` + + + # config.cacheTTLSeconds -- Period of time in seconds in which a cached value of a metric is considered valid. + cacheTTLSeconds: 30 + # Not setting it or setting it to '0' disables the cache. + + # config.externalMetrics -- Contains all the external metrics definitions of the adapter. Each key of the externalMetric entry represents the metric name and contains the parameters that define it. 
+ # @default -- See `values.yaml` + externalMetrics: + # Names cannot contain uppercase characters and + # "/" or "%" characters. + # my_external_metric_name_example: + # + # NRQL query that will be executed to obtain the metric value. + # The query must return just one value, so it is recommended to use aggregator functions like average or latest. + # The default time span for aggregator functions is 1h, so it is recommended to use the SINCE clause to reduce the time span. + # query: "FROM Metric SELECT average(`k8s.container.cpuCoresUtilization`) SINCE 2 MINUTES AGO" + # + # By default a cluster filter is added to the query to ensure no cross-cluster metrics are taken into account. + # The added filter is equivalent to WHERE `clusterName`=. + # If metrics are not from the cluster, use removeClusterFilter. The default value for this parameter is false. + # removeClusterFilter: false + + # config.nrdbClientTimeoutSeconds -- Defines the NRDB client timeout. The maximum allowed value is 120. + # @default -- 30 + nrdbClientTimeoutSeconds: 30 + +# image -- Registry, repository, tag, and pull policy for the container image. +# @default -- See `values.yaml`. +image: + registry: + repository: newrelic/newrelic-k8s-metrics-adapter + tag: "" + pullPolicy: IfNotPresent + # It is possible to specify docker registry credentials. + # See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod + # image.pullSecrets -- The image pull secrets. + pullSecrets: [] + # - name: regsecret + +# -- Number of replicas in the deployment. +replicas: 1 + +# -- Resources you wish to assign to the pod. +# @default -- See `values.yaml` +resources: + limits: + memory: 80M + requests: + cpu: 100m + memory: 30M + +serviceAccount: + # -- Specifies whether a ServiceAccount should be created for the job and the deployment. 
+ # false avoids creation, true or empty will create the ServiceAccount + # @default -- `true` + create: + # -- If `serviceAccount.create` this will be the name of the ServiceAccount to use. + # If not set and create is true, a name is generated using the fullname template. + # If create is false, a serviceAccount with the given name must exist + # @default -- Automatically generated. + name: + +# -- Configure podSecurityContext +podSecurityContext: + +# -- Configure containerSecurityContext +containerSecurityContext: + +# -- Array to add extra environment variables +extraEnv: [] +# -- Array to add extra envFrom +extraEnvFrom: [] +# -- Array to add extra volumes +extraVolumes: [] +# -- Add extra volume mounts +extraVolumeMounts: [] + +# -- Additional annotations to apply to the pod(s). +podAnnotations: + +# Due to security restrictions, some users might require to use a https proxy to route traffic over the internet. +# In this specific case, when the metrics adapter sends a request to the New Relic backend. If this is the case +# for you, set this value to your http proxy endpoint. +# -- Configure proxy for the metrics-adapter. +proxy: + +# Pod scheduling priority +# Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/ +# priorityClassName: high-priority + +# fullnameOverride -- To fully override common.naming.fullname +fullnameOverride: "" +# -- Node affinity to use for scheduling. +affinity: {} +# -- Node label to use for scheduling. +nodeSelector: {} +# -- List of node taints to tolerate (requires Kubernetes >= 1.6) +tolerations: [] + +apiServicePatchJob: + # apiServicePatchJob.image -- Registry, repository, tag, and pull policy for the job container image. + # @default -- See `values.yaml`. + image: + registry: # defaults to registry.k8s.io + repository: ingress-nginx/kube-webhook-certgen + tag: v1.3.0 + pullPolicy: IfNotPresent + + # -- Additional Volumes for Cert Job. 
+ volumes: [] + # - name: tmp + # emptyDir: {} + + # -- Additional Volume mounts for Cert Job, you might want to mount tmp if Pod Security Policies. + volumeMounts: [] + # - name: tmp + # mountPath: /tmp + # Enforce a read-only root. + +certManager: + # -- Use cert manager for APIService certs, rather than the built-in patch job. + enabled: false + +rbac: + # rbac.pspEnabled -- Whether the chart should create Pod Security Policy objects. + pspEnabled: false diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/Chart.lock b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/Chart.lock new file mode 100644 index 0000000000..064abf8aae --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.2.0 +digest: sha256:fa87cb007564a39a72739a3e850a91d6b03c0fc27a1115deac042b3ef77b4142 +generated: "2024-07-17T19:29:15.951407+05:30" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/Chart.yaml new file mode 100644 index 0000000000..9163e72ddc --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/Chart.yaml @@ -0,0 +1,20 @@ +apiVersion: v2 +appVersion: 2.0.2 +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.2.0 +description: A Helm chart to deploy New Relic Kubernetes Logging as a DaemonSet, supporting + both Linux and Windows nodes and containers +home: https://github.com/newrelic/kubernetes-logging +icon: https://newrelic.com/assets/newrelic/source/NewRelic-logo-square.svg +keywords: +- logging +- newrelic +maintainers: +- email: logging-team@newrelic.com + name: jsubirat +- name: danybmx +- name: sdaubin +name: newrelic-logging +version: 1.23.0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/README.md 
b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/README.md new file mode 100644 index 0000000000..1635b0d863 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/README.md @@ -0,0 +1,268 @@ +# newrelic-logging + + +## Chart Details +New Relic offers a [Fluent Bit](https://fluentbit.io/) output [plugin](https://github.com/newrelic/newrelic-fluent-bit-output) to easily forward your logs to [New Relic Logs](https://docs.newrelic.com/docs/logs/new-relic-logs/get-started/introduction-new-relic-logs). This plugin is also provided in a standalone Docker image that can be installed in a [Kubernetes](https://kubernetes.io/) cluster in the form of a [DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/), which we refer to as the Kubernetes plugin. + +This document explains how to install it in your cluster using our [Helm](https://helm.sh/) chart. + + +## Install / Upgrade / Uninstall instructions +Despite the `newrelic-logging` chart being able to work standalone, we recommend installing it as part of the [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) chart. The best way of doing so is through the guided installation process documented [here](https://docs.newrelic.com//docs/kubernetes-pixie/kubernetes-integration/installation/kubernetes-integration-install-configure/). This guided install can generate the Helm 3 commands required to install it (select "Helm 3" in Step 3 from the previous documentation link). You can also opt to install it manually using Helm by following [these steps](https://docs.newrelic.com//docs/kubernetes-pixie/kubernetes-integration/installation/install-kubernetes-integration-using-helm/#install-k8-helm). To uninstall it, refer to the steps outlined in [this page](https://docs.newrelic.com/docs/kubernetes-pixie/kubernetes-integration/uninstall-kubernetes/). 
+ +### Installing or updating the helm New Relic repository + +To install the repo you can run: +``` +helm repo add newrelic https://helm-charts.newrelic.com +``` + +To update the repo you can run: +``` +helm repo update newrelic +``` + +## Configuration + +### How to configure the chart +The `newrelic-logging` chart can be installed either alone or as part of the [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) chart (recommended). The chart default settings should be suitable for most users. Nevertheless, you may be interested in overriding the defaults, either by passing them through a `values-newrelic.yaml` file or via the command line when installing the chart. Depending on how you installed it, you'll need to specify the `newrelic-logging`-specific configuration values using the chart name (`newrelic-logging`) as a prefix. In the table below, you can find a quick reference of how to configure the chart in these scenarios. The example depicts how you'd specify the mandatory `licenseKey` and `cluster` settings and how you'd override the `fluentBit.retryLimit` setting to `10`. + + + + + + + + + + + + + + + + + +
Installation methodConfiguration via values.yamlConfiguration via command line
Standalone newrelic-logging + + +``` +# values-newrelic.yaml configuration contents + +licenseKey: _YOUR_NEW_RELIC_LICENSE_KEY_ +cluster: _K8S_CLUSTER_NAME_ + +fluentBit: + retryLimit: 10 +``` + +``` +# Install / upgrade command + +helm upgrade --install newrelic-logging newrelic/newrelic-logging \ +--namespace newrelic \ +--create-namespace \ +-f values-newrelic.yaml +``` + + +``` +# Install / upgrade command + +helm upgrade --install newrelic-logging newrelic/newrelic-logging \ +--namespace=newrelic \ +--set licenseKey=_YOUR_NEW_RELIC_LICENSE_KEY_ \ +--set cluster=_K8S_CLUSTER_NAME_ \ +--set fluentBit.retryLimit=10 +``` +
As part of nri-bundle + +``` +# values-newrelic.yaml configuration contents + +# General settings that apply to all the child charts +global: + licenseKey: _YOUR_NEW_RELIC_LICENSE_KEY_ + cluster: _K8S_CLUSTER_NAME_ + +# Specific configuration for the newrelic-logging child chart +newrelic-logging: + fluentBit: + retryLimit: 10 +``` + +``` +# Install / upgrade command + +helm upgrade --install newrelic-bundle newrelic/nri-bundle \ + --namespace newrelic \ + --create-namespace \ + -f values-newrelic.yaml \ +``` + + +``` +# Install / upgrade command + +helm upgrade --install newrelic-bundle newrelic/nri-bundle \ +--namespace=newrelic \ +--set global.licenseKey=_YOUR_NEW_RELIC_LICENSE_KEY_ \ +--set global.cluster=_K8S_CLUSTER_NAME_ \ +--set newrelic-logging.fluentBit.retryLimit=10 +``` +
+ + +### Supported configuration parameters +See [values.yaml](values.yaml) for the default values + +| Parameter | Description | Default | +| ------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------- | +| `global.cluster` - `cluster` | The cluster name for the Kubernetes cluster. | | +| `global.licenseKey` - `licenseKey` | The [license key](https://docs.newrelic.com/docs/accounts/install-new-relic/account-setup/license-key) for your New Relic Account. This will be the preferred configuration option if both `licenseKey` and `customSecret*` values are specified. | | +| `global.customSecretName` - `customSecretName` | Name of the Secret object where the license key is stored | | +| `global.customSecretLicenseKey` - `customSecretLicenseKey` | Key in the Secret object where the license key is stored. | | +| `global.fargate` | Must be set to `true` when deploying in an EKS Fargate environment. Prevents DaemonSet pods from being scheduled in Fargate nodes. | | +| `global.lowDataMode` - `lowDataMode` | If `true`, send minimal attributes on Kubernetes logs. Labels and annotations are not sent when lowDataMode is enabled. | `false` | +| `rbac.create` | Enable Role-based authentication | `true` | +| `rbac.pspEnabled` | Enable pod security policy support | `false` | +| `image.repository` | The container to pull. | `newrelic/newrelic-fluentbit-output` | +| `image.pullPolicy` | The pull policy. | `IfNotPresent` | +| `image.pullSecrets` | Image pull secrets. | `nil` | +| `image.tag` | The version of the container to pull. 
| See value in [values.yaml](values.yaml) | +| `exposedPorts` | Any ports you wish to expose from the pod. Ex. 2020 for metrics | `[]` | +| `resources` | Any resources you wish to assign to the pod. | See Resources below | +| `priorityClassName` | Scheduling priority of the pod | `nil` | +| `nodeSelector` | Node label to use for scheduling on Linux nodes | `{ kubernetes.io/os: linux }` | +| `windowsNodeSelector` | Node label to use for scheduling on Windows nodes | `{ kubernetes.io/os: windows, node.kubernetes.io/windows-build: BUILD_NUMBER }` | +| `tolerations` | List of node taints to tolerate (requires Kubernetes >= 1.6) | See Tolerations below | +| `updateStrategy` | Strategy for DaemonSet updates (requires Kubernetes >= 1.6) | `RollingUpdate` | +| `extraVolumeMounts` | Additional DaemonSet volume mounts | `[]` | +| `extraVolumes` | Additional DaemonSet volumes | `[]` | +| `initContainers` | [Init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) that will be executed before the actual container in charge of shipping logs to New Relic is initialized. Use this if you are using a custom Fluent Bit configuration that requires downloading certain files inside the volumes being accessed by the log-shipping pod. | `[]` | +| `windows.initContainers` | [Init containers](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) that will be executed before the actual container in charge of shipping logs to New Relic is initialized. Use this if you are using a custom Fluent Bit configuration that requires downloading certain files inside the volumes being accessed by the log-shipping pod. | `[]` | +| `serviceAccount.create` | If true, a service account will be created and assigned to the deployment | `true` | +| `serviceAccount.name` | The service account to assign to the deployment. 
If `serviceAccount.create` is true then this name will be used when creating the service account | |
+| `serviceAccount.annotations` | The annotations to add to the service account if `serviceAccount.create` is set to true. | |
+| `global.nrStaging` - `nrStaging` | Send data to staging (requires a staging license key) | `false` |
+| `fluentBit.path` | Node path logs are forwarded from. Patterns are supported, as well as specifying multiple paths/patterns separated by commas. | `/var/log/containers/*.log` |
+| `fluentBit.linuxMountPath` | The path mounted on Linux Fluent Bit pods to read logs from. Defaults to /var because some engines write the logs to /var/log and others to /var/lib (symlinked to /var/log), so Fluent Bit needs access to both in those cases | `/var` |
+| `fluentBit.db` | Node path used by Fluent Bit to store a database file to keep track of monitored files and offsets. | `/var/log/flb_kube.db` |
+| `fluentBit.k8sBufferSize` | Set the buffer size for the HTTP client when reading responses from the Kubernetes API server. A value of 0 results in no limit and the buffer will expand as needed. | `32k` |
+| `fluentBit.k8sLoggingExclude` | Set to "true" to allow excluding pods by adding the annotation `fluentbit.io/exclude: "true"` to pods you wish to exclude. | `false` |
+| `fluentBit.additionalEnvVariables` | Additional environment variables for Fluent Bit pods | `[]` |
+| `fluentBit.persistence.mode` | The [persistence mode](#Fluent-Bit-persistence-modes) you want to use. Options are "hostPath", "none" or "persistentVolume" (the last one available only for Linux) | `hostPath` |
+| `fluentBit.persistence.persistentVolume.storageClass` | On "persistentVolume" [persistence mode](#Fluent-Bit-persistence-modes), indicates the storage class that will be used to create the PersistentVolume and PersistentVolumeClaim.
| |
+| `fluentBit.persistence.persistentVolume.size` | On "persistentVolume" [persistence mode](#Fluent-Bit-persistence-modes), indicates the capacity for the PersistentVolume and PersistentVolumeClaim | 10Gi |
+| `fluentBit.persistence.persistentVolume.dynamicProvisioning` | On "persistentVolume" [persistence mode](#Fluent-Bit-persistence-modes), indicates whether the storage class used provides dynamic provisioning. If it does, only the PersistentVolumeClaim will be created. | true |
+| `fluentBit.persistence.persistentVolume.existingVolume` | On "persistentVolume" [persistence mode](#Fluent-Bit-persistence-modes), indicates an existing volume in case you want to reuse one; bear in mind that it should allow ReadWriteMany access mode. A PersistentVolumeClaim will be created using it. | |
+| `fluentBit.persistence.persistentVolume.existingVolumeClaim` | On "persistentVolume" [persistence mode](#Fluent-Bit-persistence-modes), indicates an existing volume claim that will be used on the DaemonSet. It should allow ReadWriteMany access mode. | |
+| `fluentBit.persistence.persistentVolume.annotations.volume` | On "persistentVolume" [persistence mode](#Fluent-Bit-persistence-modes), allows adding annotations to the PersistentVolume (if created). | |
+| `fluentBit.persistence.persistentVolume.annotations.claim` | On "persistentVolume" [persistence mode](#Fluent-Bit-persistence-modes), allows adding annotations to the PersistentVolumeClaim (if created). | |
+| `fluentBit.persistence.persistentVolume.extra.volume` | On "persistentVolume" [persistence mode](#Fluent-Bit-persistence-modes), allows adding extra properties to the PersistentVolume (if created). | |
+| `fluentBit.persistence.persistentVolume.extra.claim` | On "persistentVolume" [persistence mode](#Fluent-Bit-persistence-modes), allows adding extra properties to the PersistentVolumeClaim (if created). | |
+| `daemonSet.annotations` | The annotations to add to the `DaemonSet`.
| |
+| `podAnnotations` | The annotations to add to the `Pod`s created by the `DaemonSet`. | |
+| `hostNetwork` | Set the hostNetwork property for Fluent Bit pods. | |
+| `enableLinux` | Enable log collection from Linux containers. This is the default behavior. In case you are only interested in collecting logs from Windows containers, set this to `false`. | `true` |
+| `enableWindows` | Enable log collection from Windows containers. Please refer to the [Windows support](#windows-support) section for more details. | `false` |
+| `fluentBit.config.service` | Contains fluent-bit.conf Service config | |
+| `fluentBit.config.inputs` | Contains fluent-bit.conf Inputs config | |
+| `fluentBit.config.extraInputs` | Contains extra fluent-bit.conf Inputs config | |
+| `fluentBit.config.filters` | Contains fluent-bit.conf Filters config | |
+| `fluentBit.config.extraFilters` | Contains extra fluent-bit.conf Filters config | |
+| `fluentBit.config.lowDataModeFilters` | Contains fluent-bit.conf Filters config for lowDataMode | |
+| `fluentBit.config.outputs` | Contains fluent-bit.conf Outputs config | |
+| `fluentBit.config.extraOutputs` | Contains extra fluent-bit.conf Outputs config | |
+| `fluentBit.config.parsers` | Contains parsers.conf Parsers config | |
+| `fluentBit.retryLimit` | Number of times to retry sending a given batch of logs to New Relic. This prevents data loss if there is a temporary network disruption, if a request to the Logs API is lost or when a recoverable HTTP response is received. Set it to "False" for unlimited retries. | 5 |
+| `fluentBit.sendMetrics` | Enable the collection of Fluent Bit internal metrics in Prometheus format as well as newrelic-fluent-bit-output internal plugin metrics. See [this documentation page](https://docs.newrelic.com/docs/logs/forward-logs/kubernetes-plugin-log-forwarding/#troubleshoot-installation) for more details.
| `false` |
+| `dnsConfig` | [DNS configuration](https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-dns-config) that will be added to the pods. Can also be configured with `global.dnsConfig`. | `{}` |
+| `fluentBit.criEnabled` | We assume that `kubelet` directly communicates with the container engine using the [CRI](https://kubernetes.io/docs/concepts/overview/components/#container-runtime) specification. Set this to `false` if your K8s installation uses [dockershim](https://kubernetes.io/docs/tasks/administer-cluster/migrating-from-dockershim/) instead, in order to get the logs properly parsed. | `true` |
+
+### Fluent Bit persistence modes
+
+Fluent Bit uses a database file to keep track of log lines read from files (offsets). This database file is stored on the host node by default, using a `hostPath` mount. It's specifically stored (by default) in `/var/log/flb_kube.db` to keep things simple, as we're already mounting `/var` for accessing container logs.
+
+Sometimes the security constraints of some clusters don't allow mounting `hostPath`s in read-write mode. That's why you can choose among the following
+persistence modes. Each one has its pros and cons.
+
+- `hostPath` (default) will use a `hostPath` mount to store the DB file on the node disk. This is the easiest, cheapest and most reliable option, but prohibited by some cloud vendor security policies.
+- `none` will disable the Fluent Bit DB file. This can cause log duplication or data loss in case Fluent Bit gets restarted.
+- `persistentVolume` (Linux only) will use a `ReadWriteMany` persistent volume to store the DB file. This will override the `fluentBit.db` path and use `/db/${NODE_NAME}-fb.db` instead. If you use this option in a Windows cluster it will default to `none` on Windows nodes.
+
+#### GKE Autopilot example
+
+If you're using the `persistentVolume` persistence mode you need to provide at least the `storageClass`, and it should be `ReadWriteMany`.
This is an example of the configuration for persistence in [GKE Autopilot](https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview).
+
+```
+fluentBit:
+  persistence:
+    mode: persistentVolume
+    persistentVolume:
+      storageClass: standard-rwx
+  linuxMountPath: /var/log
+```
+
+### Proxy support
+
+Since the Fluent Bit Kubernetes plugin uses [newrelic-fluent-bit-output](https://github.com/newrelic/newrelic-fluent-bit-output), you can use its [proxy support](https://github.com/newrelic/newrelic-fluent-bit-output#proxy-support) to set up a proxy configuration.
+
+#### As environment variables
+
+The easiest way to configure the proxy is by specifying the `HTTP_PROXY` or `HTTPS_PROXY` environment variables as follows:
+
+```
+# values-newrelic.yml
+
+fluentBit:
+  additionalEnvVariables:
+    - name: HTTPS_PROXY
+      value: https://your-https-proxy-hostname:3129
+```
+
+
+#### Custom proxy configuration (for proxies using self-signed certificates)
+
+If you need to use a proxy using self-signed certificates, you'll need to mount a volume with the Certificate Authority
+bundle file and reference it from the Fluent Bit configuration as follows:
+
+```
+# values-newrelic.yaml
+extraVolumes:
+  - name: proxyConfig
+    # Example using hostPath.
You can also place the caBundleFile.pem contents in a ConfigMap and reference it here instead,
+    # as explained here: https://kubernetes.io/docs/concepts/storage/volumes/#configmap
+    hostPath:
+      path: /path/in/node/to/your/caBundleFile.pem
+
+extraVolumeMounts:
+  - name: proxyConfig
+    mountPath: /proxyConfig/caBundleFile.pem
+
+fluentBit:
+  config:
+    outputs: |
+      [OUTPUT]
+        Name newrelic
+        Match *
+        licenseKey ${LICENSE_KEY}
+        endpoint ${ENDPOINT}
+        lowDataMode ${LOW_DATA_MODE}
+        Retry_Limit ${RETRY_LIMIT}
+        proxy https://your-https-proxy-hostname:3129
+        caBundleFile /proxyConfig/caBundleFile.pem
+```
+
+
+## Windows support
+
+Since version `1.7.0`, this Helm chart supports shipping logs from Windows containers. To this end, you need to set the `enableWindows` configuration parameter to `true`.
+
+Windows containers have some constraints compared to Linux containers, the main one being that they can only be executed on _hosts_ running the exact same Windows version and build number. Kubernetes nodes, on the other hand, only support the Windows versions listed [here](https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#windows-os-version-support).
+
+This Helm chart deploys one `DaemonSet` for each of the Windows versions it supports, while ensuring that only containers matching the host operating system will be deployed in each host.
+
+This Helm chart currently supports the following Windows versions:
+- Windows Server LTSC 2019, build 10.0.17763
+- Windows Server LTSC 2022, build 10.0.20348
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/.helmignore
new file mode 100644
index 0000000000..0e8a0eb36f
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/.helmignore
@@ -0,0 +1,23 @@
+# Patterns to ignore when building packages.
+# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/Chart.yaml new file mode 100644 index 0000000000..b65ac15d4c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/Chart.yaml @@ -0,0 +1,17 @@ +apiVersion: v2 +description: Provides helpers to provide consistency on all the charts +keywords: +- newrelic +- chart-library +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +- name: kang-makes + url: https://github.com/kang-makes +name: common-library +type: library +version: 1.2.0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/DEVELOPERS.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/DEVELOPERS.md new file mode 100644 index 0000000000..3ccc108e2d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/DEVELOPERS.md @@ -0,0 +1,663 @@ +# Functions/templates documented for chart writers +Here is some rough documentation separated by the file that contains the function, the function +name and how to use it. We are not covering functions that start with `_` (e.g. +`newrelic.common.license._licenseKey`) because they are used internally by this library for +other helpers. Helm does not have the concept of "public" or "private" functions/templates so +this is a convention of ours. 
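+
+For orientation, every helper below is consumed the same way: through Helm's `include` function from a chart template. As a minimal sketch (the surrounding object is illustrative; the helper names are the ones documented in this file):
+
+```mustache
+metadata:
+  name: {{ include "newrelic.common.naming.fullname" . }}
+  namespace: {{ .Release.Namespace }}
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+```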
+
+## _naming.tpl
+These functions are used to name objects.
+
+### `newrelic.common.naming.name`
+This is the same as the idiomatic `CHART-NAME.name` that is created when you use `helm create`.
+
+It honors `.Values.nameOverride`.
+
+Usage:
+```mustache
+{{ include "newrelic.common.naming.name" . }}
+```
+
+### `newrelic.common.naming.fullname`
+This is the same as the idiomatic `CHART-NAME.fullname` that is created when you use `helm create`.
+
+It honors `.Values.fullnameOverride`.
+
+Usage:
+```mustache
+{{ include "newrelic.common.naming.fullname" . }}
+```
+
+### `newrelic.common.naming.chart`
+This is the same as the idiomatic `CHART-NAME.chart` that is created when you use `helm create`.
+
+It is mostly useless for chart writers. It is used internally for templating the labels but there
+is no reason to keep it "private".
+
+Usage:
+```mustache
+{{ include "newrelic.common.naming.chart" . }}
+```
+
+### `newrelic.common.naming.truncateToDNS`
+This is a useful template for trimming a string to 63 characters while ensuring it does not end with a dash (`-`).
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+
+Usage:
+```mustache
+{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-enought-long-to-brak-something" }}
+{{- $truncatedName := include "newrelic.common.naming.truncateToDNS" $nameToTruncate }}
+{{- $truncatedName }}
+{{- /* This should print: a-really-really-really-really-REALLY-long-string-that-should-be */ -}}
+```
+
+### `newrelic.common.naming.truncateToDNSWithSuffix`
+This template function is the same as the above but instead of receiving a string you should give a `dict`
This function will join them with a dash (`-`) and trim the `name` so the +result of `name-suffix` is no more than 63 chars + +Usage: +```mustache +{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-enought-long-to-brak-something" +{{- $suffix := "A-NOT-SO-LONG-SUFFIX" }} +{{- $truncatedName := include "truncateToDNSWithSuffix" (dict "name" $nameToTruncate "suffix" $suffix) }} +{{- $truncatedName }} +{{- /* This should print: a-really-really-really-really-REALLY-long-A-NOT-SO-LONG-SUFFIX */ -}} +``` + + + +## _labels.tpl +### `newrelic.common.labels`, `newrelic.common.labels.selectorLabels` and `newrelic.common.labels.podLabels` +These are functions that are used to label objects. They are configured by this `values.yaml` +```yaml +global: + podLabels: {} # included in all the pods of all the charts that implement this library + labels: {} # included in all the objects of all the charts that implement this library +podLabels: {} # included in all the pods of this chart +labels: {} # included in all the objects of this chart +``` + +label maps are merged from global to local values. + +And chart writer should use them like this: +```mustache +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + {{- include "newrelic.common.labels.podLabels" . | nindent 8 }} +``` + +`newrelic.common.labels.podLabels` includes `newrelic.common.labels.selectorLabels` automatically. + + + +## _priority-class-name.tpl +### `newrelic.common.priorityClassName` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + priorityClassName: "" +priorityClassName: "" +``` + +Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this +library to work properly. 
If in your values a non-falsy `priorityClassName` is found, the global
one is always ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.priorityClassName" . }}
+  priorityClassName: {{ . }}
+  {{- end }}
+```
+
+
+
+## _hostnetwork.tpl
+### `newrelic.common.hostNetwork`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  hostNetwork: # Note that this is empty (nil)
+hostNetwork: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `hostNetwork` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.hostNetwork" . }}
+  hostNetwork: {{ . }}
+  {{- end }}
+```
+
+### `newrelic.common.hostNetwork.value`
+This function is an abstraction of the one above, but it returns "true" or "false" directly.
+
+Be careful when using this with an `if`, as Helm evaluates the string "false" as `true`.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }}
+```
+
+
+
+## _dnsconfig.tpl
+### `newrelic.common.dnsConfig`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  dnsConfig: {}
+dnsConfig: {}
+```
+
+Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this
+library to work properly. If in your values a non-falsy `dnsConfig` is found, the global
+one is always ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.dnsConfig" . }}
+  dnsConfig:
+    {{- . | nindent 4 }}
+  {{- end }}
+```
+
+
+
+## _images.tpl
+These functions help us to deal with how images are templated. This allows setting `registries`
This allows setting `registries` +where to fetch images globally while being flexible enough to fit in different maps of images +and deployments with one or more images. This is the example of a complex `values.yaml` that +we are going to use during the documentation of these functions: + +```yaml +global: + images: + registry: nexus-3-instance.internal.clients-domain.tld +jobImage: + registry: # defaults to "example.tld" when empty in these examples + repository: ingress-nginx/kube-webhook-certgen + tag: v1.1.1 + pullPolicy: IfNotPresent + pullSecrets: [] +images: + integration: + registry: + repository: newrelic/nri-kube-events + tag: 1.8.0 + pullPolicy: IfNotPresent + agent: + registry: + repository: newrelic/k8s-events-forwarder + tag: 1.22.0 + pullPolicy: IfNotPresent + pullSecrets: [] +``` + +### `newrelic.common.images.image` +This will return a string with the image ready to be downloaded that includes the registry, the image and the tag. +`defaultRegistry` is used to keep `registry` field empty in `values.yaml` so you can override the image using +`global.images.registry`, your local `jobImage.registry` and be able to fallback to a registry that is not `docker.io` +(Or the default repository that the client could have set in the CRI). + +Usage: +```mustache +{{- /* For the integration */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.agent "context" .) }} +{{- /* For jobImage */}} +{{ include "newrelic.common.images.image" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }} +``` + +### `newrelic.common.images.registry` +It returns the registry from the global or local values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. 
+
+Usage:
+```mustache
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.agent "context" .) }}
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.registry" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }}
+```
+
+### `newrelic.common.images.repository`
+It returns the image repository from the values. You should avoid using this helper to create your image
+URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.jobImage "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.agent "context" .) }}
+```
+
+### `newrelic.common.images.tag`
+It returns the image's tag from the values. You should avoid using this helper to create your image
+URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.jobImage "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.agent "context" .) }}
+```
+
+### `newrelic.common.images.renderPullSecrets`
+It returns a merged map that contains the pull secrets from the global configuration and the local one.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.jobImage.pullSecrets "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }}
+```
+
+
+
+## _serviceaccount.tpl
+These functions are used to evaluate whether the service account should be created, with which name, and which annotations to add to it.
+
+The functions that the common library has implemented for service accounts are:
+* `newrelic.common.serviceAccount.create`
+* `newrelic.common.serviceAccount.name`
+* `newrelic.common.serviceAccount.annotations`
+
+Usage:
+```mustache
+{{- if include "newrelic.common.serviceAccount.create" . -}}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  {{- with (include "newrelic.common.serviceAccount.annotations" .) }}
+  annotations:
+    {{- . | nindent 4 }}
+  {{- end }}
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+  name: {{ include "newrelic.common.serviceAccount.name" . }}
+  namespace: {{ .Release.Namespace }}
+{{- end }}
+```
+
+
+
+## _affinity.tpl, _nodeselector.tpl and _tolerations.tpl
+These three files are almost the same and they follow the idiomatic way of `helm create`.
+
+Each function also looks if there is a global value like the other helpers.
+```yaml
+global:
+  affinity: {}
+  nodeSelector: {}
+  tolerations: []
+affinity: {}
+nodeSelector: {}
+tolerations: []
+```
+
+The values here are replaced instead of being merged. If a value at root level is found, the global one is ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.nodeSelector" . }}
+  nodeSelector:
+    {{- . | nindent 4 }}
+  {{- end }}
+  {{- with include "newrelic.common.affinity" . }}
+  affinity:
+    {{- .
| nindent 4 }}
+  {{- end }}
+  {{- with include "newrelic.common.tolerations" . }}
+  tolerations:
+    {{- . | nindent 4 }}
+  {{- end }}
+```
+
+
+
+## _agent-config.tpl
+### `newrelic.common.agentConfig.defaults`
+This returns a YAML that the agent can use directly as a config that includes other options from the values file like verbose mode,
+custom attributes, FedRAMP and such.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+  name: {{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "agent-config") }}
+  namespace: {{ .Release.Namespace }}
+data:
+  newrelic-infra.yml: |-
+    # This is the configuration file for the infrastructure agent. See:
+    # https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/
+    {{- include "newrelic.common.agentConfig.defaults" . | nindent 4 }}
+```
+
+
+
+## _cluster.tpl
+### `newrelic.common.cluster`
+Returns the cluster name.
+
+Usage:
+```mustache
+{{ include "newrelic.common.cluster" . }}
+```
+
+
+
+## _custom-attributes.tpl
+### `newrelic.common.customAttributes`
+Returns custom attributes in YAML format.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: example
+data:
+  custom-attributes.yaml: |
+    {{- include "newrelic.common.customAttributes" . | nindent 4 }}
+  custom-attributes.json: |
+    {{- include "newrelic.common.customAttributes" . | fromYaml | toJson | nindent 4 }}
+```
+
+
+
+## _fedramp.tpl
+### `newrelic.common.fedramp.enabled`
+Returns true if FedRAMP is enabled or an empty string if not. It can be safely used in conditionals as an empty string is a Helm falsiness.
+
+Usage:
+```mustache
+{{ include "newrelic.common.fedramp.enabled" . }}
+```
+
+### `newrelic.common.fedramp.enabled.value`
+Returns true if FedRAMP is enabled or false if not.
This is to have the value of FedRAMP ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.fedramp.enabled.value" . }}
+```
+
+
+
+## _license.tpl
+### `newrelic.common.license.secretName` and `newrelic.common.license.secretKeyName`
+Returns the name of the secret, and the key inside that secret, from which to read the license key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the license key.
+
+To create the secret, use `newrelic.common.license.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example
+spec:
+  containers:
+    - name: agent
+      env:
+        - name: "NRIA_LICENSE_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.license.secretName" . }}
+              key: {{ include "newrelic.common.license.secretKeyName" . }}
+```
+
+
+
+## _license_secret.tpl
+### `newrelic.common.license.secret`
+This function templates the secret that is used by agents and integrations with the license key provided by the user. It will
+template nothing (empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails in case the user has not provided any license key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.license.secret" . -}}
+```
+
+
+
+## _insights.tpl
+### `newrelic.common.insightsKey.secretName` and `newrelic.common.insightsKey.secretKeyName`
+Returns the name of the secret, and the key inside that secret, from which to read the insights key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the insights key.
+
+To create the secret, use `newrelic.common.insightsKey.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: statsd
+spec:
+  containers:
+    - name: statsd
+      env:
+        - name: "INSIGHTS_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.insightsKey.secretName" . }}
+              key: {{ include "newrelic.common.insightsKey.secretKeyName" . }}
+```
+
+
+
+## _insights_secret.tpl
+### `newrelic.common.insightsKey.secret`
+This function templates the secret that is used by agents and integrations with the insights key provided by the user. It will
+template nothing (empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails in case the user has not provided any insights key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.insightsKey.secret" . -}}
+```
+
+
+
+## _low-data-mode.tpl
+### `newrelic.common.lowDataMode`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  lowDataMode: # Note that this is empty (nil)
+lowDataMode: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `lowDataMode` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.lowDataMode" . }}
+```
+
+
+
+## _privileged.tpl
+### `newrelic.common.privileged`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  privileged: # Note that this is empty (nil)
+privileged: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly.
If in your
+values a `privileged` is defined, the global one is always ignored.
+
+Chart writers can override this by putting `true` directly in the chart's `values.yaml` to override the
+default of the common library.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged" . }}
+```
+
+### `newrelic.common.privileged.value`
+Returns true if privileged mode is enabled or false if not. This is to have the value of privileged ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged.value" . }}
+```
+
+
+
+## _proxy.tpl
+### `newrelic.common.proxy`
+Returns the proxy URL configured by the user.
+
+Usage:
+```mustache
+{{ include "newrelic.common.proxy" . }}
+```
+
+
+
+## _security-context.tpl
+Use these functions to share the security context among all charts. Useful in clusters that enforce not running
+as the root user (like OpenShift) or for users that have admission webhooks.
+
+The functions are:
+* `newrelic.common.securityContext.container`
+* `newrelic.common.securityContext.pod`
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example
+spec:
+  spec:
+    {{- with include "newrelic.common.securityContext.pod" . }}
+    securityContext:
+      {{- . | nindent 8 }}
+    {{- end }}
+
+    containers:
+      - name: example
+        {{- with include "newrelic.common.securityContext.container" . }}
+        securityContext:
+          {{- toYaml . | nindent 12 }}
+        {{- end }}
+```
+
+
+
+## _staging.tpl
+### `newrelic.common.nrStaging`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  nrStaging: # Note that this is empty (nil)
+nrStaging: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `nrStaging` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging" . }}
+```
+
+### `newrelic.common.nrStaging.value`
+Returns true if staging is enabled or false if not. This is to have the staging value ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging.value" . }}
+```
+
+
+
+## _verbose-log.tpl
+### `newrelic.common.verboseLog`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  verboseLog: # Note that this is empty (nil)
+verboseLog: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT set ANY VALUE here for this library to work properly. If a `verboseLog` is defined
+in your values, the global one is always ignored.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog" . }}
+```
+
+### `newrelic.common.verboseLog.valueAsBoolean`
+Returns true if verbose is enabled or false if not. This is to have the verbose value ready to be templated as a boolean.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsBoolean" . }}
+```
+
+### `newrelic.common.verboseLog.valueAsInt`
+Returns 1 if verbose is enabled or 0 if not. This is to have the verbose value ready to be templated as an integer.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsInt" . }}
+```
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/README.md
new file mode 100644
index 0000000000..10f08ca677
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/README.md
@@ -0,0 +1,106 @@
+# Helm Common library
+
+The common library is a way to unify the UX through all the Helm charts that implement it.
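+
+To consume the library, a chart typically declares it as a dependency in its `Chart.yaml`. This is only an illustrative
+sketch: the version range and repository URL below are assumptions, not values taken from this bundle.
+
+```yaml
+dependencies:
+  - name: common-library                          # this library
+    version: 1.x.x                                # hypothetical version range
+    repository: https://helm-charts.newrelic.com  # assumed chart repository URL
+```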
+
+New Relic's tooling suite is huge and growing, and this library allows setting things globally
+as well as locally for a single chart.
+
+## Documentation for chart writers
+
+If you are writing a chart that is going to use this library, check the [developers guide](/library/common-library/DEVELOPERS.md) to see all
+the functions/templates we have implemented, what they do, and how to use them.
+
+## Values managed globally
+
+We want a seamless experience across all the charts, so we created this library to standardize their behaviour.
+Sadly, because of the complexity of all these integrations, not all the charts behave exactly as expected.
+
+An example is `newrelic-infrastructure`, which ignores `hostNetwork` in the control plane scraper because most users have the
+control plane listening on `localhost` on the node.
+
+Each chart that has a special behavior (or that needs further explanation) has a "chart particularities" section
+in its README.md describing the expected behavior.
+
+At the time of writing, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this
+library and honor global options as described in this document.
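+
+As a quick illustration of the global/local pattern described above, a user could set shared options once under `global`
+and override one of them for a single chart (the subchart key `newrelic-infrastructure` is just an example):
+
+```yaml
+global:
+  cluster: my-cluster    # shared by every chart implementing the library
+  licenseKey: "abc123"   # placeholder, not a real license key
+  lowDataMode: true
+
+newrelic-infrastructure:
+  lowDataMode: false     # the local value always wins over the global one
+```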
+
+Here is a list of global options:
+
+| Global keys | Local keys | Default | Merged[1](#values-managed-globally-1) | Description |
+|-------------|------------|---------|--------------------------------------------------|-------------|
+| global.cluster | cluster | `""` | | Name of the Kubernetes cluster being monitored |
+| global.licenseKey | licenseKey | `""` | | The license key to use |
+| global.customSecretName | customSecretName | `""` | | In case you don't want the license key in your values, this lets you point to a user-created secret to get the key from there |
+| global.customSecretLicenseKey | customSecretLicenseKey | `""` | | In case you don't want the license key in your values, this lets you point to the key inside that secret where the license key is located |
+| global.podLabels | podLabels | `{}` | yes | Additional labels for chart pods |
+| global.labels | labels | `{}` | yes | Additional labels for chart objects |
+| global.priorityClassName | priorityClassName | `""` | | Sets the pod's priorityClassName |
+| global.hostNetwork | hostNetwork | `false` | | Sets the pod's hostNetwork |
+| global.dnsConfig | dnsConfig | `{}` | | Sets the pod's dnsConfig |
+| global.images.registry | See [Further information](#values-managed-globally-2) | `""` | | Changes the registry from which images are pulled.
Useful when there is an internal image cache/proxy |
+| global.images.pullSecrets | See [Further information](#values-managed-globally-2) | `[]` | yes | Sets the secrets needed to fetch images |
+| global.podSecurityContext | podSecurityContext | `{}` | | Sets the security context (at pod level) |
+| global.containerSecurityContext | containerSecurityContext | `{}` | | Sets the security context (at container level) |
+| global.affinity | affinity | `{}` | | Sets pod/node affinities |
+| global.nodeSelector | nodeSelector | `{}` | | Sets the pod's node selector |
+| global.tolerations | tolerations | `[]` | | Sets the pod's tolerations to node taints |
+| global.serviceAccount.create | serviceAccount.create | `true` | | Configures whether the service account should be created |
+| global.serviceAccount.name | serviceAccount.name | name of the release | | Changes the name of the service account. This is honored if you disable creation of the service account on this chart so you can use your own |
+| global.serviceAccount.annotations | serviceAccount.annotations | `{}` | yes | Adds these annotations to the service account we create |
+| global.customAttributes | customAttributes | `{}` | | Adds extra attributes to the cluster and all the metrics emitted to the backend |
+| global.fedramp | fedramp | `false` | | Enables FedRAMP |
+| global.lowDataMode | lowDataMode | `false` | | Reduces the number of metrics sent to the backend in order to reduce costs |
+| global.privileged | privileged | Depends on the chart | | Each integration behaves differently. See [Further information](#values-managed-globally-3) |
+| global.proxy | proxy | `""` | | Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port` |
+| global.nrStaging | nrStaging | `false` | | Sends the metrics to the staging backend.
Requires a valid staging license key |
+| global.verboseLog | verboseLog | `false` | | Enables debug/trace logs for this integration, or for all integrations if set globally |
+
+### Further information
+
+#### 1. Merged
+
+Merged means that the values from global are not replaced by the local ones. Consider this example:
+```yaml
+global:
+  labels:
+    global: global
+  hostNetwork: true
+  nodeSelector:
+    global: global
+
+labels:
+  local: local
+nodeSelector:
+  local: local
+hostNetwork: false
+```
+
+These values will template `hostNetwork` as `false`, the labels as the merged map `{ "global": "global", "local": "local" }`, and
+`nodeSelector` as `{ "local": "local" }`.
+
+Since Helm merges all maps by default, having two behaviors (merging `labels` but replacing `nodeSelector`) when going from
+global to local values can be confusing. This is the rationale behind it:
+* `hostNetwork` is templated to `false` because the local value overrides the one defined globally.
+* `labels` are merged because the user may want to label all the New Relic pods at once while labeling other solutions' pods
+  differently for clarity's sake.
+* `nodeSelector` does not merge like `labels` because merging would make it harder to overwrite or delete a selector coming
+  from global, given the logic Helm follows when merging maps.
+
+
+#### 2. Fine grain registries
+
+Some charts have only one image while others have two or more. The local path for the registry can change depending
+on the chart itself.
+
+As this is mostly unique per Helm chart, you should look at the chart's values table (or directly at the `values.yaml` file) to see all the
+images that you can change.
+
+This should only be needed if you have an advanced setup that requires enough granularity to force a proxy/cache registry per integration.
+
+
+
+#### 3. Privileged mode
+
+By default, from the common library, the privileged mode is set to false.
But most of the Helm charts require it to be true to fetch more
+metrics, so you may see it set to true in some charts. The consequences of privileged mode differ from one chart to another, so each chart that
+honors the privileged mode toggle should have a section in its README explaining its behavior when enabled or disabled.
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_affinity.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_affinity.tpl
new file mode 100644
index 0000000000..1b2636754e
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_affinity.tpl
@@ -0,0 +1,10 @@
+{{- /* Defines the Pod affinity */ -}}
+{{- define "newrelic.common.affinity" -}}
+  {{- if .Values.affinity -}}
+    {{- toYaml .Values.affinity -}}
+  {{- else if .Values.global -}}
+    {{- if .Values.global.affinity -}}
+      {{- toYaml .Values.global.affinity -}}
+    {{- end -}}
+  {{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_agent-config.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_agent-config.tpl
new file mode 100644
index 0000000000..9c32861a02
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_agent-config.tpl
@@ -0,0 +1,26 @@
+{{/*
+This helper should return the defaults that all agents should have
+*/}}
+{{- define "newrelic.common.agentConfig.defaults" -}}
+{{- if include "newrelic.common.verboseLog" . }}
+log:
+  level: trace
+{{- end }}
+
+{{- if (include "newrelic.common.nrStaging" . ) }}
+staging: true
+{{- end }}
+
+{{- with include "newrelic.common.proxy" . }}
+proxy: {{ . | quote }}
+{{- end }}
+
+{{- with include "newrelic.common.fedramp.enabled" . }}
+fedramp: {{ .
}}
+{{- end }}
+
+{{- with fromYaml ( include "newrelic.common.customAttributes" . ) }}
+custom_attributes:
+  {{- toYaml . | nindent 2 }}
+{{- end }}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_cluster.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_cluster.tpl
new file mode 100644
index 0000000000..0197dd35a3
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_cluster.tpl
@@ -0,0 +1,15 @@
+{{/*
+Return the cluster
+*/}}
+{{- define "newrelic.common.cluster" -}}
+{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}}
+{{- $global := index .Values "global" | default dict -}}
+
+{{- if .Values.cluster -}}
+  {{- .Values.cluster -}}
+{{- else if $global.cluster -}}
+  {{- $global.cluster -}}
+{{- else -}}
+  {{ fail "No cluster name is set in either `.global.cluster' or `.cluster' in your values.yaml. Cluster name is required." }}
+{{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_custom-attributes.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_custom-attributes.tpl
new file mode 100644
index 0000000000..92020719c3
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_custom-attributes.tpl
@@ -0,0 +1,17 @@
+{{/*
+This will render the custom attributes as YAML, ready to be templated or used with `fromYaml`.
+*/}} +{{- define "newrelic.common.customAttributes" -}} +{{- $customAttributes := dict -}} + +{{- $global := index .Values "global" | default dict -}} +{{- if $global.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes $global.customAttributes -}} +{{- end -}} + +{{- if .Values.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes .Values.customAttributes -}} +{{- end -}} + +{{- toYaml $customAttributes -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_dnsconfig.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_dnsconfig.tpl new file mode 100644 index 0000000000..d4e40aa8af --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_dnsconfig.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod dnsConfig */ -}} +{{- define "newrelic.common.dnsConfig" -}} + {{- if .Values.dnsConfig -}} + {{- toYaml .Values.dnsConfig -}} + {{- else if .Values.global -}} + {{- if .Values.global.dnsConfig -}} + {{- toYaml .Values.global.dnsConfig -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_fedramp.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_fedramp.tpl new file mode 100644 index 0000000000..9df8d6b5e9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_fedramp.tpl @@ -0,0 +1,25 @@ +{{- /* Defines the fedRAMP flag */ -}} +{{- define "newrelic.common.fedramp.enabled" -}} + {{- if .Values.fedramp -}} + {{- if .Values.fedramp.enabled -}} + {{- .Values.fedramp.enabled -}} + {{- end -}} + {{- else if .Values.global -}} + {{- if .Values.global.fedramp -}} + {{- if .Values.global.fedramp.enabled -}} + {{- .Values.global.fedramp.enabled -}} + {{- end -}} + {{- end 
-}}
+  {{- end -}}
+{{- end -}}
+
+
+
+{{- /* Return the FedRAMP value directly, ready to be templated */ -}}
+{{- define "newrelic.common.fedramp.enabled.value" -}}
+{{- if include "newrelic.common.fedramp.enabled" . -}}
+true
+{{- else -}}
+false
+{{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_hostnetwork.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_hostnetwork.tpl
new file mode 100644
index 0000000000..4cf017ef7e
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_hostnetwork.tpl
@@ -0,0 +1,39 @@
+{{- /*
+Abstraction of the hostNetwork toggle.
+This helper allows overriding the global `.global.hostNetwork` with the value of `.hostNetwork`.
+Returns "true" if `hostNetwork` is enabled, otherwise "" (empty string).
+*/ -}}
+{{- define "newrelic.common.hostNetwork" -}}
+{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}}
+{{- $global := index .Values "global" | default dict -}}
+
+{{- /*
+`get` will return "" (empty string) if the value is not found, and the value otherwise, so we can type-assert with kindIs.
+
+We also only want to return when this is true: returning `false` here would template "false" (a string) when doing
+an `(include "newrelic.common.hostNetwork" .)`, which is not an "empty string" and therefore truthy if it is used
+as an evaluation somewhere else.
+*/ -}}
+{{- if get .Values "hostNetwork" | kindIs "bool" -}}
+  {{- if .Values.hostNetwork -}}
+    {{- .Values.hostNetwork -}}
+  {{- end -}}
+{{- else if get $global "hostNetwork" | kindIs "bool" -}}
+  {{- if $global.hostNetwork -}}
+    {{- $global.hostNetwork -}}
+  {{- end -}}
+{{- end -}}
+{{- end -}}
+
+
+{{- /*
+Abstraction of the hostNetwork toggle.
+This helper abstracts the function "newrelic.common.hostNetwork" to return true or false directly.
+*/ -}} +{{- define "newrelic.common.hostNetwork.value" -}} +{{- if include "newrelic.common.hostNetwork" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_images.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_images.tpl new file mode 100644 index 0000000000..d4fb432905 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_images.tpl @@ -0,0 +1,94 @@ +{{- /* +Return the proper image name +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.image" -}} + {{- $registryName := include "newrelic.common.images.registry" ( dict "imageRoot" .imageRoot "defaultRegistry" .defaultRegistry "context" .context ) -}} + {{- $repositoryName := include "newrelic.common.images.repository" .imageRoot -}} + {{- $tag := include "newrelic.common.images.tag" ( dict "imageRoot" .imageRoot "context" .context) -}} + + {{- if $registryName -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag | quote -}} + {{- else -}} + {{- printf "%s:%s" $repositoryName $tag | quote -}} + {{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image registry +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.registry" -}} +{{- $globalRegistry := "" -}} +{{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- with .context.Values.global.images.registry -}} + {{- $globalRegistry = . 
-}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- $localRegistry := "" -}} +{{- if .imageRoot.registry -}} + {{- $localRegistry = .imageRoot.registry -}} +{{- end -}} + +{{- $registry := $localRegistry | default $globalRegistry | default .defaultRegistry -}} +{{- if $registry -}} + {{- $registry -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image repository +{{ include "newrelic.common.images.repository" .Values.path.to.the.image }} +*/ -}} +{{- define "newrelic.common.images.repository" -}} + {{- .repository -}} +{{- end -}} + + + +{{- /* +Return the proper image tag +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.path.to.the.image "context" .) }} +*/ -}} +{{- define "newrelic.common.images.tag" -}} + {{- .imageRoot.tag | default .context.Chart.AppVersion | toString -}} +{{- end -}} + + + +{{- /* +Return the proper Image Pull Registry Secret Names evaluating values as templates +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.path.to.the.images.pullSecrets1, .Values.path.to.the.images.pullSecrets2) "context" .) }} +*/ -}} +{{- define "newrelic.common.images.renderPullSecrets" -}} + {{- $flatlist := list }} + + {{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- if .context.Values.global.images.pullSecrets -}} + {{- range .context.Values.global.images.pullSecrets -}} + {{- $flatlist = append $flatlist . -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- range .pullSecrets -}} + {{- if not (empty .) -}} + {{- range . -}} + {{- $flatlist = append $flatlist . 
-}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- if $flatlist -}} + {{- toYaml $flatlist -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_insights.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_insights.tpl new file mode 100644 index 0000000000..895c377326 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_insights.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the Insights Key. +*/}} +{{- define "newrelic.common.insightsKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "insightskey" ) -}} +{{- include "newrelic.common.insightsKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +*/}} +{{- define "newrelic.common.insightsKey.secretKeyName" -}} +{{- include "newrelic.common.insightsKey._customSecretKey" . | default "insightsKey" -}} +{{- end -}} + +{{/* +Return local insightsKey if set, global otherwise. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._licenseKey" -}} +{{- if .Values.insightsKey -}} + {{- .Values.insightsKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.insightsKey -}} + {{- .Values.global.insightsKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the Insights Key. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.insightsKey._customSecretName" -}} +{{- if .Values.customInsightsKeySecretName -}} + {{- .Values.customInsightsKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretName -}} + {{- .Values.global.customInsightsKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._customSecretKey" -}} +{{- if .Values.customInsightsKeySecretKey -}} + {{- .Values.customInsightsKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretKey }} + {{- .Values.global.customInsightsKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_insights_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_insights_secret.yaml.tpl new file mode 100644 index 0000000000..556caa6ca6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_insights_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the insights key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.insightsKey.secret" }} +{{- if not (include "newrelic.common.insightsKey._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.insightsKey._licenseKey" .) }} + {{- fail "You must specify a insightsKey or a customInsightsSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.insightsKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.insightsKey.secretKeyName" . 
}}: {{ include "newrelic.common.insightsKey._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_labels.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_labels.tpl new file mode 100644 index 0000000000..b025948285 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_labels.tpl @@ -0,0 +1,54 @@ +{{/* +This will render the labels that should be used in all the manifests used by the helm chart. +*/}} +{{- define "newrelic.common.labels" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- $chart := dict "helm.sh/chart" (include "newrelic.common.naming.chart" . ) -}} +{{- $managedBy := dict "app.kubernetes.io/managed-by" .Release.Service -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $labels := mustMergeOverwrite $chart $managedBy $selectorLabels -}} +{{- if .Chart.AppVersion -}} +{{- $labels = mustMergeOverwrite $labels (dict "app.kubernetes.io/version" .Chart.AppVersion) -}} +{{- end -}} + +{{- $globalUserLabels := $global.labels | default dict -}} +{{- $localUserLabels := .Values.labels | default dict -}} + +{{- $labels = mustMergeOverwrite $labels $globalUserLabels $localUserLabels -}} + +{{- toYaml $labels -}} +{{- end -}} + + + +{{/* +This will render the labels that should be used in deployments/daemonsets template pods as a selector. +*/}} +{{- define "newrelic.common.labels.selectorLabels" -}} +{{- $name := dict "app.kubernetes.io/name" ( include "newrelic.common.naming.name" . 
) -}} +{{- $instance := dict "app.kubernetes.io/instance" .Release.Name -}} + +{{- $selectorLabels := mustMergeOverwrite $name $instance -}} + +{{- toYaml $selectorLabels -}} +{{- end }} + + + +{{/* +Pod labels +*/}} +{{- define "newrelic.common.labels.podLabels" -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $global := index .Values "global" | default dict -}} +{{- $globalPodLabels := $global.podLabels | default dict }} + +{{- $localPodLabels := .Values.podLabels | default dict }} + +{{- $podLabels := mustMergeOverwrite $selectorLabels $globalPodLabels $localPodLabels -}} + +{{- toYaml $podLabels -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_license.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_license.tpl new file mode 100644 index 0000000000..647b4ff439 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_license.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the License Key. +*/}} +{{- define "newrelic.common.license.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "license" ) -}} +{{- include "newrelic.common.license._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +*/}} +{{- define "newrelic.common.license.secretKeyName" -}} +{{- include "newrelic.common.license._customSecretKey" . | default "licenseKey" -}} +{{- end -}} + +{{/* +Return local licenseKey if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.license._licenseKey" -}} +{{- if .Values.licenseKey -}} + {{- .Values.licenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.licenseKey -}} + {{- .Values.global.licenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the License Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretName" -}} +{{- if .Values.customSecretName -}} + {{- .Values.customSecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretName -}} + {{- .Values.global.customSecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretKey" -}} +{{- if .Values.customSecretLicenseKey -}} + {{- .Values.customSecretLicenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretLicenseKey }} + {{- .Values.global.customSecretLicenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_license_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_license_secret.yaml.tpl new file mode 100644 index 0000000000..610a0a3370 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_license_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the license key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.license.secret" }} +{{- if not (include "newrelic.common.license._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.license._licenseKey" .) 
}} + {{- fail "You must specify a licenseKey or a customSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.license.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.license.secretKeyName" . }}: {{ include "newrelic.common.license._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_low-data-mode.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_low-data-mode.tpl new file mode 100644 index 0000000000..3dd55ef2ff --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_low-data-mode.tpl @@ -0,0 +1,26 @@ +{{- /* +Abstraction of the lowDataMode toggle. +This helper allows to override the global `.global.lowDataMode` with the value of `.lowDataMode`. +Returns "true" if `lowDataMode` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.lowDataMode" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "lowDataMode" | kindIs "bool") -}} + {{- if .Values.lowDataMode -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.lowDataMode" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+  */ -}}
+  {{- .Values.lowDataMode -}}
+  {{- end -}}
+{{- else -}}
+{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}}
+{{- $global := index .Values "global" | default dict -}}
+{{- if get $global "lowDataMode" | kindIs "bool" -}}
+  {{- if $global.lowDataMode -}}
+    {{- $global.lowDataMode -}}
+  {{- end -}}
+{{- end -}}
+{{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_naming.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_naming.tpl
new file mode 100644
index 0000000000..19fa92648c
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_naming.tpl
@@ -0,0 +1,73 @@
+{{/*
+This is a function to be called directly with a string, used to truncate it to
+63 chars because some Kubernetes name fields are limited to that.
+*/}}
+{{- define "newrelic.common.naming.truncateToDNS" -}}
+{{- . | trunc 63 | trimSuffix "-" }}
+{{- end }}
+
+
+
+{{- /*
+Given a name and a suffix, returns a DNS-valid name that always includes the suffix, truncating the name if needed.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+If the suffix is too long it gets truncated, but it always takes precedence over the name, so a 63-char suffix would suppress the name.
+Usage:
+{{ include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" "" "suffix" "my-suffix" ) }}
+*/ -}}
+{{- define "newrelic.common.naming.truncateToDNSWithSuffix" -}}
+{{- $suffix := (include "newrelic.common.naming.truncateToDNS" .suffix) -}}
+{{- $maxLen := (max (sub 63 (add1 (len $suffix))) 0) -}} {{- /* We prepend "-" to the suffix so an additional character is needed */ -}}
+
+{{- $newName := .name | trunc ($maxLen | int) | trimSuffix "-" -}}
+{{- if $newName -}}
+{{- printf "%s-%s" $newName $suffix -}}
+{{- else -}}
+{{ $suffix }}
+{{- end -}}
+
+{{- end -}}
+
+
+
+{{/*
+Expand the name of the chart.
+Uses the Chart name by default if nameOverride is not set.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "newrelic.common.naming.name" -}}
+{{- $name := .Values.nameOverride | default .Chart.Name -}}
+{{- include "newrelic.common.naming.truncateToDNS" $name -}}
+{{- end }}
+
+
+
+{{/*
+Create a default fully qualified app name.
+By default the full name will be the release name if it already includes the chart name; if not,
+the release name and chart name are concatenated with a dash. This could change if fullnameOverride or
+nameOverride are set.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "newrelic.common.naming.fullname" -}}
+{{- $name := include "newrelic.common.naming.name" . -}}
+
+{{- if .Values.fullnameOverride -}}
+  {{- $name = .Values.fullnameOverride -}}
+{{- else if not (contains $name .Release.Name) -}}
+  {{- $name = printf "%s-%s" .Release.Name $name -}}
+{{- end -}}
+
+{{- include "newrelic.common.naming.truncateToDNS" $name -}}
+
+{{- end -}}
+
+
+
+{{/*
+Create chart name and version as used by the chart label.
+This function should not be used for naming objects. Use "common.naming.{name,fullname}" instead.
+*/}} +{{- define "newrelic.common.naming.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_nodeselector.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_nodeselector.tpl new file mode 100644 index 0000000000..d488873412 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_nodeselector.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod nodeSelector */ -}} +{{- define "newrelic.common.nodeSelector" -}} + {{- if .Values.nodeSelector -}} + {{- toYaml .Values.nodeSelector -}} + {{- else if .Values.global -}} + {{- if .Values.global.nodeSelector -}} + {{- toYaml .Values.global.nodeSelector -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_priority-class-name.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_priority-class-name.tpl new file mode 100644 index 0000000000..50182b7343 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_priority-class-name.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the pod priorityClassName */ -}} +{{- define "newrelic.common.priorityClassName" -}} + {{- if .Values.priorityClassName -}} + {{- .Values.priorityClassName -}} + {{- else if .Values.global -}} + {{- if .Values.global.priorityClassName -}} + {{- .Values.global.priorityClassName -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_privileged.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_privileged.tpl new file mode 100644 index 0000000000..f3ae814dd9 --- /dev/null +++ 
b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_privileged.tpl @@ -0,0 +1,28 @@ +{{- /* +This is a helper that returns whether the chart should assume the user is fine deploying privileged pods. +*/ -}} +{{- define "newrelic.common.privileged" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist. */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- /* `get` will return "" (empty string) if the value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if get .Values "privileged" | kindIs "bool" -}} + {{- if .Values.privileged -}} + {{- .Values.privileged -}} + {{- end -}} +{{- else if get $global "privileged" | kindIs "bool" -}} + {{- if $global.privileged -}} + {{- $global.privileged -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* Returns "true" or "false" directly, based on the output of "newrelic.common.privileged" */ -}} +{{- define "newrelic.common.privileged.value" -}} +{{- if include "newrelic.common.privileged" .
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_proxy.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_proxy.tpl new file mode 100644 index 0000000000..60f34c7ec1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_proxy.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the proxy */ -}} +{{- define "newrelic.common.proxy" -}} + {{- if .Values.proxy -}} + {{- .Values.proxy -}} + {{- else if .Values.global -}} + {{- if .Values.global.proxy -}} + {{- .Values.global.proxy -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_security-context.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_security-context.tpl new file mode 100644 index 0000000000..9edfcabfd0 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_security-context.tpl @@ -0,0 +1,23 @@ +{{- /* Defines the container securityContext */ -}} +{{- define "newrelic.common.securityContext.container" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.containerSecurityContext -}} + {{- toYaml .Values.containerSecurityContext -}} +{{- else if $global.containerSecurityContext -}} + {{- toYaml $global.containerSecurityContext -}} +{{- end -}} +{{- end -}} + + + +{{- /* Defines the pod securityContext */ -}} +{{- define "newrelic.common.securityContext.pod" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.podSecurityContext -}} + {{- toYaml .Values.podSecurityContext -}} +{{- else if $global.podSecurityContext -}} + {{- toYaml $global.podSecurityContext -}} +{{- end -}} +{{- end -}} diff --git 
a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_serviceaccount.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_serviceaccount.tpl new file mode 100644 index 0000000000..2d352f6ea9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_serviceaccount.tpl @@ -0,0 +1,90 @@ +{{- /* Defines if the service account has to be created or not */ -}} +{{- define "newrelic.common.serviceAccount.create" -}} +{{- $valueFound := false -}} + +{{- /* Look for a global creation of a service account */ -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "create" | kindIs "bool") -}} + {{- $valueFound = true -}} + {{- if .Values.serviceAccount.create -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.serviceAccount.name" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.serviceAccount.create -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* Look for a local creation of a service account */ -}} +{{- if not $valueFound -}} + {{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} + {{- $global := index .Values "global" | default dict -}} + {{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "create" | kindIs "bool" -}} + {{- $valueFound = true -}} + {{- if $global.serviceAccount.create -}} + {{- $global.serviceAccount.create -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* In case no serviceAccount value has been found, default to "true" */ -}} +{{- if not $valueFound -}} +true +{{- end -}} +{{- end -}} + + + +{{- /* Defines the name of the service account */ -}} +{{- define "newrelic.common.serviceAccount.name" -}} +{{- $localServiceAccount := "" -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "name" | kindIs "string") -}} + {{- $localServiceAccount = .Values.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := "" -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "name" | kindIs "string" -}} + {{- $globalServiceAccount = $global.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- if (include "newrelic.common.serviceAccount.create" .) -}} + {{- $localServiceAccount | default $globalServiceAccount | default (include "newrelic.common.naming.fullname" .) 
-}} +{{- else -}} + {{- $localServiceAccount | default $globalServiceAccount | default "default" -}} +{{- end -}} +{{- end -}} + + + +{{- /* Merge the global and local annotations for the service account */ -}} +{{- define "newrelic.common.serviceAccount.annotations" -}} +{{- $localServiceAccount := dict -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if get .Values.serviceAccount "annotations" -}} + {{- $localServiceAccount = .Values.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := dict -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "annotations" -}} + {{- $globalServiceAccount = $global.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $merged := mustMergeOverwrite $globalServiceAccount $localServiceAccount -}} + +{{- if $merged -}} + {{- toYaml $merged -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_staging.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_staging.tpl new file mode 100644 index 0000000000..bd9ad09bb9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_staging.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the nrStaging toggle. +This helper allows to override the global `.global.nrStaging` with the value of `.nrStaging`. 
+Returns "true" if `nrStaging` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.nrStaging" -}} +{{- /* `get` will return "" (empty string) if the value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "nrStaging" | kindIs "bool") -}} + {{- if .Values.nrStaging -}} + {{- /* + We only want to return a value when this is true; returning `false` here would template "false" (a string) when doing + an `(include "newrelic.common.nrStaging" .)`, which is not an "empty string" and is therefore `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.nrStaging -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "nrStaging" | kindIs "bool" -}} + {{- if $global.nrStaging -}} + {{- $global.nrStaging -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Returns "true" or "false" directly instead of an empty string (Helm falsiness), based on the output of "newrelic.common.nrStaging" +*/ -}} +{{- define "newrelic.common.nrStaging.value" -}} +{{- if include "newrelic.common.nrStaging" .
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_tolerations.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_tolerations.tpl new file mode 100644 index 0000000000..e016b38e27 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_tolerations.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod tolerations */ -}} +{{- define "newrelic.common.tolerations" -}} + {{- if .Values.tolerations -}} + {{- toYaml .Values.tolerations -}} + {{- else if .Values.global -}} + {{- if .Values.global.tolerations -}} + {{- toYaml .Values.global.tolerations -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_verbose-log.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_verbose-log.tpl new file mode 100644 index 0000000000..2286d46815 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/templates/_verbose-log.tpl @@ -0,0 +1,54 @@ +{{- /* +Abstraction of the verbose toggle. +This helper allows to override the global `.global.verboseLog` with the value of `.verboseLog`. +Returns "true" if `verbose` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.verboseLog" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "verboseLog" | kindIs "bool") -}} + {{- if .Values.verboseLog -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.verboseLog" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.verboseLog -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "verboseLog" | kindIs "bool" -}} + {{- if $global.verboseLog -}} + {{- $global.verboseLog -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return true or false directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsBoolean" -}} +{{- if include "newrelic.common.verboseLog" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return 1 or 0 directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsInt" -}} +{{- if include "newrelic.common.verboseLog" . -}} +1 +{{- else -}} +0 +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/values.yaml new file mode 100644 index 0000000000..75e2d112ad --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/charts/common-library/values.yaml @@ -0,0 +1 @@ +# values are not needed for the library chart, however this file is still needed for helm lint to work. 
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-enable-windows-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-enable-windows-values.yaml new file mode 100644 index 0000000000..870bc082ac --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-enable-windows-values.yaml @@ -0,0 +1,2 @@ +enableLinux: false +enableWindows: true diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-lowdatamode-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-lowdatamode-values.yaml new file mode 100644 index 0000000000..7740338b05 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-lowdatamode-values.yaml @@ -0,0 +1 @@ +lowDataMode: true diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-override-global-lowdatamode.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-override-global-lowdatamode.yaml new file mode 100644 index 0000000000..22dd7e05e5 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-override-global-lowdatamode.yaml @@ -0,0 +1,3 @@ +global: + lowDataMode: true +lowDataMode: false diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-staging-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-staging-values.yaml new file mode 100644 index 0000000000..efbdccaf81 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-staging-values.yaml @@ -0,0 +1 @@ +nrStaging: true diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-with-empty-global.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-with-empty-global.yaml new file mode 100644 index 0000000000..490a0b7ed3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-with-empty-global.yaml @@ 
-0,0 +1 @@ +global: {} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-with-empty-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/ci/test-with-empty-values.yaml new file mode 100644 index 0000000000..e69de29bb2 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/fluent-bit-and-plugin-metrics-dashboard-template.json b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/fluent-bit-and-plugin-metrics-dashboard-template.json new file mode 100644 index 0000000000..cafdaf85ca --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/fluent-bit-and-plugin-metrics-dashboard-template.json @@ -0,0 +1,2237 @@ +{ + "name": "Kubernetes Fluent Bit monitoring", + "description": null, + "permissions": "PUBLIC_READ_WRITE", + "pages": [ + { + "name": "Fluent Bit metrics: General", + "description": null, + "widgets": [ + { + "title": "", + "layout": { + "column": 1, + "row": 1, + "width": 6, + "height": 6 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.markdown" + }, + "rawConfiguration": { + "text": "# README\n\n## About this page\nThis page represents most of [Fluent Bit's internal metrics](https://docs.fluentbit.io/manual/administration/monitoring#for-v2-metrics). The metric representations are grouped by categories and faceted by each plugin instance where appropriate.\n\n## How to filter\n1. Select the Kubernetes cluster you want to troubleshoot in the \"Cluster Name\" variable above.\n2. [OPTIONAL] You can use any of the values in the `Node name` and `Pod name` columns on the \"Fluent Bit version\" table to further filter the metrics displayed in the graphs below. To do so, you need to enable [facet filtering](https://docs.newrelic.com/docs/query-your-data/explore-query-data/dashboards/filter-new-relic-one-dashboards-facets/) on that table by clicking on the \"Edit\" submenu and select \"Filter the current dashboard\" under \"Facet Linking\". 
\n\n## Legend\n### Metric dimensions\n- **name**: the name of the Fluent Bit plugin. Version 1.21.0 of our Helm chart names them according to the plugin names described in the following section.\n- **pod_name**: the `newrelic-logging` pod (Fluent Bit instance) that emitted this metric.\n- **node_name**: physical Kubernetes node where the `newrelic-logging` pod is running.\n\n### Plugin names\n- **pod-logs-tailer**: `tail` *INPUT* plugin normally reading from `/var/log/containers/*.log`\n- **kubernetes-enricher**: `kubernetes` *FILTER* plugin that queries the Kubernetes API to enrich the logs with pod/container metadata.\n- **node-attributes-enricher**: `record_modifier` *FILTER* plugin that enriches logs with `cluster_name`.\n- **kubernetes-attribute-lifter** (only when in low data mode): `nest` *FILTER* plugin that lifts all the keys under `kubernetes`. This plugin is transparent to the final shape of the log.\n- **node-attributes-enricher-filter** (only when in low data mode): same as `node-attributes-enricher`, but it also removes attributes that are not strictly necessary for correct platform functioning.\n- **newrelic-logs-forwarder**: `newrelic` *OUTPUT* plugin that sends logs to the New Relic Logs API" + } + }, + { + "title": "Fluent Bit version", + "layout": { + "column": 7, + "row": 1, + "width": 6, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.table" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT latest(os) as 'OS', latest(version) as 'FB version', latest(cluster_name) FROM Metric where metricName = 'fluentbit_build_info' AND cluster_name IN ({{cluster_name}}) since 1 hour ago facet pod_name, node_name limit max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + } + } + }, + { + "title": "Fluent Bit uptime", + "layout": { + "column": 7, + "row": 4, + "width": 6, + "height": 3 + }, + 
"linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT latest(fluentbit_uptime) FROM Metric where cluster_name IN ({{cluster_name}}) facet pod_name timeseries" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "", + "layout": { + "column": 1, + "row": 7, + "width": 12, + "height": 1 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.markdown" + }, + "rawConfiguration": { + "text": "# INPUTS" + } + }, + { + "title": "Input byte rate (bytes/minute)", + "layout": { + "column": 1, + "row": 8, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_input_bytes_total), 1 minute) as 'bytes/minute' FROM Metric where name != 'fb-metrics-collector' and cluster_name IN ({{cluster_name}}) timeseries max facet name, pod_name" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Input log rate (records/minute)", + "layout": { + "column": 5, + "row": 8, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_input_records_total), 1 minute) 
as 'logs/minute' FROM Metric where name != 'fb-metrics-collector' and cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Average incoming record size (bytes)", + "layout": { + "column": 9, + "row": 8, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT sum(fluentbit_input_bytes_total)/sum(fluentbit_input_records_total) as 'Average incoming record size (bytes)' FROM Metric where name != 'fb-metrics-collector' and cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "BYTES" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "", + "layout": { + "column": 1, + "row": 11, + "width": 12, + "height": 1 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.markdown" + }, + "rawConfiguration": { + "text": "# FILTERS" + } + }, + { + "title": "Filter byte rate (bytes/minute)", + "layout": { + "column": 1, + "row": 12, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_filter_bytes_total), 1 minute) FROM Metric WHERE cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries max" + } + ], + "platformOptions": { + 
"ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Filter log rate (records/minute)", + "layout": { + "column": 5, + "row": 12, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_filter_records_total), 1 minute) FROM Metric WHERE cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Average filtered record size (bytes)", + "layout": { + "column": 9, + "row": 12, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT sum(fluentbit_filter_bytes_total)/sum(fluentbit_filter_records_total) AS 'Average filtered record size (bytes)' FROM Metric WHERE cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "BYTES" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Record add/drop rate per FILTER plugin", + "layout": { + "column": 1, + "row": 15, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": 
{ + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_filter_add_records_total), 1 minute) as 'Added back to pipeline', rate(sum(fluentbit_filter_drop_records_total), 1 minute) as 'Removed from pipeline' FROM Metric WHERE cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries MAX" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "REQUESTS_PER_MINUTE" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "", + "layout": { + "column": 1, + "row": 18, + "width": 12, + "height": 1 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.markdown" + }, + "rawConfiguration": { + "text": "# OUTPUTS" + } + }, + { + "title": "Output byte rate (bytes/minute)", + "layout": { + "column": 1, + "row": 19, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_output_proc_bytes_total), 1 minute) as 'bytes/minute' FROM Metric where cluster_name IN ({{cluster_name}}) AND name != 'fb-metrics-forwarder' facet name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Output log rate (records/minute)", + "layout": { + "column": 5, + "row": 19, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + 
"query": "SELECT rate(sum(fluentbit_output_proc_records_total), 1 minute) as 'records/minute' FROM Metric where cluster_name IN ({{cluster_name}}) AND name != 'fb-metrics-forwarder' facet name, pod_name timeseries MAX " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Average outgoing record size (bytes)", + "layout": { + "column": 9, + "row": 19, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT sum(fluentbit_output_proc_bytes_total)/sum(fluentbit_output_proc_records_total) as 'bytes' FROM Metric where cluster_name IN ({{cluster_name}}) AND name != 'fb-metrics-forwarder' facet name, pod_name timeseries MAX" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "BYTES" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "newrelic plugin statistics (records/minute)", + "layout": { + "column": 1, + "row": 22, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_output_proc_records_total), 1 minute) as 'Processed', rate(sum(fluentbit_output_dropped_records_total), 1 minute) as 'Dropped', rate(sum(fluentbit_output_retried_records_total), 1 minute) as 'Retried' FROM Metric where cluster_name IN ({{cluster_name}}) AND name = 'newrelic-logs-forwarder' facet pod_name timeseries max" + } 
+ ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Other OUTPUT plugin statistics (records/minute)", + "layout": { + "column": 5, + "row": 22, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_output_proc_records_total), 1 minute) as 'Processed', rate(sum(fluentbit_output_dropped_records_total), 1 minute) as 'Dropped', rate(sum(fluentbit_output_retried_records_total), 1 minute) as 'Retried' FROM Metric where cluster_name IN ({{cluster_name}}) AND name != 'newrelic-logs-forwarder' and name != 'fb-metrics-forwarder' facet name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Connections per OUTPUT plugin", + "layout": { + "column": 9, + "row": 22, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT max(fluentbit_output_upstream_total_connections) as 'Total', max(fluentbit_output_upstream_busy_connections) as 'Busy' FROM Metric where cluster_name IN ({{cluster_name}}) AND name != 'fb-metrics-forwarder' facet name, pod_name timeseries MAX" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + 
{ + "title": "newrelic plugin errors (errors/minute)", + "layout": { + "column": 1, + "row": 25, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_output_errors_total), 1 minute) AS 'Errors/minute' FROM Metric where cluster_name IN ({{cluster_name}}) AND name = 'newrelic-logs-forwarder' facet pod_name timeseries MAX " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "newrelic plugin chunk retry statistics (retries/minute)", + "layout": { + "column": 5, + "row": 25, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_output_retries_total), 1 minute) as 'Retries', rate(sum(fluentbit_output_retries_failed_total), 1 minute) as 'Expirations' FROM Metric where cluster_name IN ({{cluster_name}}) AND name = 'newrelic-logs-forwarder' facet pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "", + "layout": { + "column": 1, + "row": 28, + "width": 12, + "height": 1 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.markdown" + }, + "rawConfiguration": { + "text": "# MEMORY USAGE" + } + }, + { + "title": "Input plugin memory usage", + "layout": { + "column": 1, + "row": 29, + "width": 4, + "height": 3 + 
}, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT max(fluentbit_input_storage_memory_bytes) as 'Max' FROM Metric where cluster_name IN ({{cluster_name}}) and name != 'fb-metrics-collector' timeseries max facet name, pod_name " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "BYTES" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "INPUT memory buffer over limit", + "layout": { + "column": 5, + "row": 29, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "colors": { + "seriesOverrides": [ + { + "color": "#013ef4", + "seriesName": "pod-logs-tailer" + } + ] + }, + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT max(fluentbit_input_storage_overlimit) FROM Metric where cluster_name IN ({{cluster_name}}) and name != 'fb-metrics-collector' timeseries max facet name, pod_name" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true, + "thresholds": [ + { + "from": 0.95, + "name": "Mem buf overlimit", + "severity": "critical", + "to": 1.05 + } + ] + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Chunk statistics per INPUT plugin", + "layout": { + "column": 9, + "row": 29, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + 
YOUR_ACCOUNT_ID + ], + "query": "SELECT average(fluentbit_input_storage_chunks_up) AS 'Up (in memory)', average(fluentbit_input_storage_chunks_down) AS 'Down (in fs)', average(fluentbit_input_storage_chunks_busy) AS 'Busy', average(fluentbit_input_storage_chunks) as 'Total' FROM Metric where name != 'fb-metrics-collector' since 1 hour ago timeseries MAX facet name, pod_name " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Buffered chunks", + "layout": { + "column": 1, + "row": 32, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT max(fluentbit_input_storage_chunks) AS 'Total', max(fluentbit_storage_mem_chunks) AS 'Memory', max(fluentbit_storage_fs_chunks) AS 'Filesystem' FROM Metric where cluster_name IN ({{cluster_name}}) facet pod_name timeseries MAX " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Busy chunks' size", + "layout": { + "column": 5, + "row": 32, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT max(fluentbit_input_storage_chunks_busy_bytes) FROM Metric where name != 'fb-metrics-collector' facet name, pod_name timeseries MAX since 1 hour ago" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": 
true + }, + "units": { + "unit": "BYTES" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Filesystem chunks state", + "layout": { + "column": 9, + "row": 32, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT average(fluentbit_storage_fs_chunks_up) AS 'Up (in memory)', average(fluentbit_storage_fs_chunks_down) AS 'Down (fs only)' FROM Metric since '2024-02-29 13:22:00+0000' UNTIL '2024-02-29 14:31:00+0000' timeseries MAX " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + } + ] + }, + { + "name": "Fluent Bit metrics: Pipeline View", + "description": null, + "widgets": [ + { + "title": "", + "layout": { + "column": 1, + "row": 1, + "width": 6, + "height": 6 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.markdown" + }, + "rawConfiguration": { + "text": "# README\n\n## About this page\nThis page represents the same metrics that are displayed in the \"Fluent Bit metrics: General\" page. Nevertheless, they are grouped differently to allow you to visualize a given metric across the whole pipeline with a single glance.\n\n## How to filter\n1. Select the Kubernetes cluster you want to troubleshoot in the \"Cluster Name\" variable above.\n2. [OPTIONAL] You can use any of the values in the `Node name` and `Pod name` columns on the \"Fluent Bit version\" table to further filter the metrics displayed in the graphs below. 
To do so, you need to enable [facet filtering](https://docs.newrelic.com/docs/query-your-data/explore-query-data/dashboards/filter-new-relic-one-dashboards-facets/) on that table by clicking on the \"Edit\" submenu and select \"Filter the current dashboard\" under \"Facet Linking\". \n\n## Legend\n### Metric dimensions\n- **name**: the name of the Fluent Bit plugin. Version 1.21.0 of our Helm chart names them according to the plugin names described in the following section.\n- **pod_name**: the `newrelic-logging` pod (Fluent Bit instance) that emitted this metric.\n- **node_name**: physical Kubernetes node where the `newrelic-logging` pod is running.\n\n### Plugin names\n- **pod-logs-tailer**: `tail` *INPUT* plugin normally reading from `/var/log/containers/*.log`\n- **kubernetes-enricher**: `kubernetes` *FILTER* plugin that queries the Kubernetes API to enrich the logs with pod/container metadata.\n- **node-attributes-enricher**: `record_modifier` *FILTER* plugin that enriches logs with `cluster_name`.\n- **kubernetes-attribute-lifter** (only when in low data mode): `nest` *FILTER* plugin that lifts all the keys under `kubernetes`. 
This plugin is transparent to the final shape of the log.\n- **node-attributes-enricher-filter** (only when in low data mode): same as node-attributes-enricher`, but it also removes attributes that are not strictly necessary for correct platform functioning.\n- **newrelic-logs-forwarder**: `newrelic` *OUTPUT* plugin that sends logs to the New Relic Logs API" + } + }, + { + "title": "Fluent Bit version", + "layout": { + "column": 7, + "row": 1, + "width": 6, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.table" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT latest(os) as 'OS', latest(version) as 'FB version', latest(cluster_name) FROM Metric where metricName = 'fluentbit_build_info' AND cluster_name IN ({{cluster_name}}) since 1 hour ago facet pod_name, node_name limit max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + } + } + }, + { + "title": "Fluent Bit uptime", + "layout": { + "column": 7, + "row": 4, + "width": 6, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT latest(fluentbit_uptime) FROM Metric where cluster_name IN ({{cluster_name}}) timeseries facet pod_name " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "SECONDS" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "", + "layout": { + "column": 1, + "row": 7, + "width": 12, + "height": 1 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.markdown" + }, + "rawConfiguration": { + "text": "# BYTE RATES" + } + }, + { + "title": "Input byte rate (bytes/minute)", + 
"layout": { + "column": 1, + "row": 8, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_input_bytes_total), 1 minute) as 'bytes/minute' FROM Metric where name != 'fb-metrics-collector' and cluster_name IN ({{cluster_name}}) timeseries max facet name, pod_name" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Filter byte rate (bytes/minute)", + "layout": { + "column": 5, + "row": 8, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_filter_bytes_total), 1 minute) FROM Metric WHERE cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Output byte rate (bytes/minute)", + "layout": { + "column": 9, + "row": 8, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_output_proc_bytes_total), 1 minute) as 'bytes/minute' FROM Metric where cluster_name IN ({{cluster_name}}) AND name != 'fb-metrics-forwarder' facet 
name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "", + "layout": { + "column": 1, + "row": 11, + "width": 12, + "height": 1 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.markdown" + }, + "rawConfiguration": { + "text": "# LOG RECORD RATES" + } + }, + { + "title": "Input log rate (records/minute)", + "layout": { + "column": 1, + "row": 12, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_input_records_total), 1 minute) as 'logs/minute' FROM Metric where name != 'fb-metrics-collector' and cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Filter log rate (records/minute)", + "layout": { + "column": 5, + "row": 12, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_filter_records_total), 1 minute) FROM Metric WHERE cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Output log rate 
(records/minute)", + "layout": { + "column": 9, + "row": 12, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_output_proc_records_total), 1 minute) as 'records/minute' FROM Metric where cluster_name IN ({{cluster_name}}) AND name != 'fb-metrics-forwarder' facet name, pod_name timeseries MAX " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Record add/drop rate per FILTER plugin", + "layout": { + "column": 5, + "row": 15, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_filter_add_records_total), 1 minute) as 'Added back to pipeline', rate(sum(fluentbit_filter_drop_records_total), 1 minute) as 'Removed from pipeline' FROM Metric WHERE cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries MAX" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "REQUESTS_PER_MINUTE" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "", + "layout": { + "column": 1, + "row": 18, + "width": 12, + "height": 1 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.markdown" + }, + "rawConfiguration": { + "text": "# AVERAGE LOG RECORD SIZES AT THE END OF EACH STAGE" + } + }, + { + "title": "Average incoming record size (bytes)", + "layout": { + 
"column": 1, + "row": 19, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT sum(fluentbit_input_bytes_total)/sum(fluentbit_input_records_total) as 'Average incoming record size (bytes)' FROM Metric where name != 'fb-metrics-collector' and cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "BYTES" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Average filtered record size (bytes)", + "layout": { + "column": 5, + "row": 19, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT sum(fluentbit_filter_bytes_total)/sum(fluentbit_filter_records_total) AS 'Average filtered record size (bytes)' FROM Metric WHERE cluster_name IN ({{cluster_name}}) facet name, pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "BYTES" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Average outgoing record size (bytes)", + "layout": { + "column": 9, + "row": 19, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": 
"SELECT sum(fluentbit_output_proc_bytes_total)/sum(fluentbit_output_proc_records_total) as 'bytes' FROM Metric where cluster_name IN ({{cluster_name}}) AND name != 'fb-metrics-forwarder' facet name, pod_name timeseries MAX" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "BYTES" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "", + "layout": { + "column": 1, + "row": 22, + "width": 12, + "height": 1 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.markdown" + }, + "rawConfiguration": { + "text": "# MEMORY USAGE AND BACKPRESSURE" + } + }, + { + "title": "Input plugin memory usage", + "layout": { + "column": 1, + "row": 23, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT max(fluentbit_input_storage_memory_bytes) as 'Max' FROM Metric where cluster_name IN ({{cluster_name}}) and name != 'fb-metrics-collector' timeseries max facet name, pod_name " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "BYTES" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Busy chunks' size", + "layout": { + "column": 5, + "row": 23, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT max(fluentbit_input_storage_chunks_busy_bytes) FROM Metric where name != 'fb-metrics-collector' facet name, pod_name timeseries 
MAX since 1 hour ago" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "units": { + "unit": "BYTES" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "newrelic plugin chunk retry statistics (retries/minute)", + "layout": { + "column": 9, + "row": 23, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_output_retries_total), 1 minute) as 'Retries', rate(sum(fluentbit_output_retries_failed_total), 1 minute) as 'Expirations' FROM Metric where cluster_name IN ({{cluster_name}}) AND name = 'newrelic-logs-forwarder' facet pod_name timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "INPUT memory buffer over limit", + "layout": { + "column": 1, + "row": 26, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "colors": { + "seriesOverrides": [ + { + "color": "#013ef4", + "seriesName": "pod-logs-tailer" + } + ] + }, + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT max(fluentbit_input_storage_overlimit) FROM Metric where cluster_name IN ({{cluster_name}}) and name != 'fb-metrics-collector' timeseries max facet name, pod_name" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true, + "thresholds": [ + { + "from": 0.95, + "name": "Mem buf overlimit", + "severity": "critical", + "to": 1.05 
+ } + ] + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Chunk statistics per INPUT plugin", + "layout": { + "column": 5, + "row": 26, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT average(fluentbit_input_storage_chunks_up) AS 'Up (in memory)', average(fluentbit_input_storage_chunks_down) AS 'Down (in fs)', average(fluentbit_input_storage_chunks_busy) AS 'Busy', average(fluentbit_input_storage_chunks) as 'Total' FROM Metric where name != 'fb-metrics-collector' since 1 hour ago timeseries MAX facet name, pod_name " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "newrelic plugin errors (errors/minute)", + "layout": { + "column": 9, + "row": 26, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(fluentbit_output_errors_total), 1 minute) AS 'Errors/minute' FROM Metric where cluster_name IN ({{cluster_name}}) AND name = 'newrelic-logs-forwarder' facet pod_name timeseries MAX " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": { + "isLabelVisible": true + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + } + ] + }, + { + "name": "newrelic-fluent-bit-output plugin metrics", + "description": null, + "widgets": [ + { + "title": "", + "layout": { + "column": 1, + "row": 1, + "width": 4, + "height": 9 + }, 
+ "linkedEntityGuids": null, + "visualization": { + "id": "viz.markdown" + }, + "rawConfiguration": { + "text": "# README\n## About this page\nThis page displays metrics collected internally in the [New Relic Fluent Bit output plugin](https://github.com/newrelic/newrelic-fluent-bit-output) (in short, **NR FB plugin**). These metrics are independent of Fluent Bit's, and **must not be considered a stable API: their naming or dimensions can change at any time in newer plugin versions**.\n\nPlease note that **the NR FB plugin does not include the `pod_name` nor the `node_name` dimensions**. Therefore, the graphs below represent an aggregation of all your running Fluent Bit instances across one or more clusters. You can use the `cluster_name` dimension (or dashboard variable above) to narrow down the troubleshooting to one or more clusters.\n\n## Basic naming conventions\n- Fluent Bit aggregates logs in batches, also referred to as **[chunks](https://docs.fluentbit.io/manual/administration/buffering-and-storage#chunks-memory-filesystem-and-backpressure)**. Each chunk therefore contains an unknown number of logs.\n- Chunks are received sequentially at the NR FB plugin, which takes care of reading the logs they contain and splitting them into the so-called New Relic *payloads*.\n- Each **payload** is a compressed stream of bytes that can be [at most 1MB long](https://docs.newrelic.com/docs/logs/log-api/introduction-log-api/#limits), and follows the [data format required by the Logs API](https://docs.newrelic.com/docs/logs/log-api/introduction-log-api/#json-content).\n\n\n## Error-detection graphs and recommended actions\n\nThe following are the main graphs used to detect potential problems in your log forwarding setup. Refer to each section to learn the recommended actions for each graph.\n\n### Payload packaging errors\nRepresents the percentage of Fluent Bit chunks that threw an error while being packaged as New Relic payloads.
Such errors are never expected to happen. Therefore, **any value greater than 0% should be thoroughly investigated**.\n\nIf you find errors in this graph, please open a support ticket and include a sample of your logs for further investigation.\n\n### Payload sending errors\nRepresents the percentage of New Relic payloads that threw an unexpected error while being sent to New Relic. Such errors can happen sporadically: timeouts due to poor network performance or sudden network changes can cause them from time to time. Observing **values greater than 0% can sometimes be normal, but any value above 10% should be considered an anomalous situation and should be thoroughly investigated**.\n\nIf you find errors in this graph, please ensure that you don't have any weak spots in your network path to New Relic: are you using a proxy? Is it or any network hop introducing too much latency due to being saturated? If you can't find anything on your side, please open a support ticket and include as much information as possible from your network setup.\n\n### Payload send results\nRepresents the number of API requests performed to send logs to New Relic. **Ideally, you should only observe 202 responses here**.
Sometimes, intermediary CDN providers can introduce some errors (503 error codes) from time to time, in which case your logs will not be lost; sending them will be retried.\n\nIf you find a considerable number of non-202 responses in this graph, please open a customer support ticket.\n\n## Additional troubleshooting graphs\n\nThe following graphs include additional fine-grained information that will be useful for New Relic to troubleshoot your potential installation issues.\n\n### Average timings\nRepresents the average amount of time the plugin spent packaging the log payloads and sending them to New Relic, respectively.\n\n### Accumulated time per minute\nRepresents the amount of time per minute the plugin spent packaging the log payloads and sending them to New Relic, respectively.\n\n### Payload size\nRepresents the size in bytes of the individual compressed payloads sent to New Relic.\n\n### Payload packets per Fluent Bit chunk\nRepresents the number of payloads sent to New Relic per Fluent Bit chunk."
+ } + }, + { + "title": "Payload packaging errors", + "layout": { + "column": 5, + "row": 1, + "width": 2, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.billboard" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "FROM Metric SELECT percentage(count(`logs.fb.packaging.time`), WHERE hasError = true) AS 'packaging errors'" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": [ + { + "alertSeverity": "CRITICAL", + "value": 0 + } + ] + } + }, + { + "title": "Payload sending errors", + "layout": { + "column": 7, + "row": 1, + "width": 2, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.billboard" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "FROM Metric SELECT percentage(count(`logs.fb.payload.send.time`), WHERE hasError = true) AS 'send errors'" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "thresholds": [ + { + "alertSeverity": "WARNING", + "value": 0 + }, + { + "alertSeverity": "CRITICAL", + "value": 0.1 + } + ] + } + }, + { + "title": "Payload send results", + "layout": { + "column": 9, + "row": 1, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(count(logs.fb.payload.send.time), 1 minute) AS 'Status Code' FROM Metric FACET CASES(WHERE statusCode = 0 AS 'Send error') OR statusCode timeseries max" + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "units": { + "unit": "REQUESTS_PER_MINUTE" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": 
"Average timings", + "layout": { + "column": 5, + "row": 4, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT average(logs.fb.payload.send.time) AS 'Payload sending', average(logs.fb.packaging.time) AS 'Payload packaging' FROM Metric timeseries max" + } + ], + "nullValues": { + "nullValue": "zero" + }, + "platformOptions": { + "ignoreTimeRange": false + }, + "units": { + "unit": "MS" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Accumulated time per minute", + "layout": { + "column": 9, + "row": 4, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.area" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT rate(sum(logs.fb.total.send.time), 1 minute) AS 'Sending', rate(sum(logs.fb.packaging.time), 1 minute) AS 'Packaging' FROM Metric TIMESERIES max" + } + ], + "nullValues": { + "nullValue": "zero" + }, + "platformOptions": { + "ignoreTimeRange": false + }, + "units": { + "unit": "MS" + } + } + }, + { + "title": "Payload size", + "layout": { + "column": 5, + "row": 7, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT min(logs.fb.payload.size) AS 'Minimum', average(logs.fb.payload.size) AS 'Average', max(logs.fb.payload.size) AS 'Maximum' FROM Metric timeseries MAX " + } + ], + "nullValues": { + "nullValue": "default" + }, + "platformOptions": { 
+ "ignoreTimeRange": false + }, + "units": { + "unit": "BYTES" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + }, + { + "title": "Payload packets per Fluent Bit chunk", + "layout": { + "column": 9, + "row": 7, + "width": 4, + "height": 3 + }, + "linkedEntityGuids": null, + "visualization": { + "id": "viz.line" + }, + "rawConfiguration": { + "facet": { + "showOtherSeries": false + }, + "legend": { + "enabled": true + }, + "nrqlQueries": [ + { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT min(logs.fb.payload.count) AS 'Minimum', average(logs.fb.payload.count) AS 'Average', max(logs.fb.payload.count) AS 'Maximum' FROM Metric timeseries MAX " + } + ], + "platformOptions": { + "ignoreTimeRange": false + }, + "units": { + "unit": "COUNT" + }, + "yAxisLeft": { + "zero": true + }, + "yAxisRight": { + "zero": true + } + } + } + ] + } + ], + "variables": [ + { + "name": "cluster_name", + "items": null, + "defaultValues": [ + { + "value": { + "string": "*" + } + } + ], + "nrqlQuery": { + "accountIds": [ + YOUR_ACCOUNT_ID + ], + "query": "SELECT uniques(cluster_name) FROM Metric where metricName = 'fluentbit_input_storage_overlimit'" + }, + "options": { + "ignoreTimeRange": false + }, + "title": "Cluster Name", + "type": "NRQL", + "isMultiSelection": true, + "replacementStrategy": "STRING" + } + ] +} \ No newline at end of file diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/NOTES.txt b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/NOTES.txt new file mode 100644 index 0000000000..289f2157f3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/NOTES.txt @@ -0,0 +1,18 @@ +{{- if (include "newrelic-logging.areValuesValid" .) }} +Your deployment of the New Relic Kubernetes Logging is complete. 
You can check on the progress of this by running the following command: + + kubectl get daemonset -o wide -w --namespace {{ .Release.Namespace }} {{ template "newrelic-logging.fullname" . }} +{{- else -}} +############################################################################## +#### ERROR: You did not set a license key. #### +############################################################################## + +This deployment will be incomplete until you provide your New Relic license key. + +Then run: + + helm upgrade {{ .Release.Name }} \ + --set licenseKey=(your-license-key) \ + newrelic/newrelic-logging + +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/_helpers.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/_helpers.tpl new file mode 100644 index 0000000000..439d25cae4 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/_helpers.tpl @@ -0,0 +1,215 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "newrelic-logging.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic-logging.fullname" -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if ne $name .Release.Name -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s" $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} + + +{{/* Generate basic labels */}} +{{- define "newrelic-logging.labels" }} +app: {{ template "newrelic-logging.name" . }} +chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }} +heritage: {{.Release.Service }} +release: {{.Release.Name }} +app.kubernetes.io/name: {{ template "newrelic-logging.name" . 
}} +{{- end }} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "newrelic-logging.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + + +{{/* +Create the name of the fluent bit config +*/}} +{{- define "newrelic-logging.fluentBitConfig" -}} +{{ template "newrelic-logging.fullname" . }}-fluent-bit-config +{{- end -}} + +{{/* +Return the licenseKey +*/}} +{{- define "newrelic-logging.licenseKey" -}} +{{- if .Values.global}} + {{- if .Values.global.licenseKey }} + {{- .Values.global.licenseKey -}} + {{- else -}} + {{- .Values.licenseKey | default "" -}} + {{- end -}} +{{- else -}} + {{- .Values.licenseKey | default "" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the cluster name +*/}} +{{- define "newrelic-logging.cluster" -}} +{{- if .Values.global}} + {{- if .Values.global.cluster }} + {{- .Values.global.cluster -}} + {{- else -}} + {{- .Values.cluster | default "" -}} + {{- end -}} +{{- else -}} + {{- .Values.cluster | default "" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the customSecretName +*/}} +{{- define "newrelic-logging.customSecretName" -}} +{{- if .Values.global }} + {{- if .Values.global.customSecretName }} + {{- .Values.global.customSecretName -}} + {{- else -}} + {{- .Values.customSecretName | default "" -}} + {{- end -}} +{{- else -}} + {{- .Values.customSecretName | default "" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the customSecretLicenseKey +*/}} +{{- define "newrelic-logging.customSecretKey" -}} +{{- if .Values.global }} + {{- if .Values.global.customSecretLicenseKey }} + {{- .Values.global.customSecretLicenseKey -}} + {{- else -}} + {{- if .Values.global.customSecretKey }} + {{- .Values.global.customSecretKey -}} + {{- else -}} + {{- .Values.customSecretKey | default "" -}} + {{- end -}} + {{- end -}} +{{- else -}} + {{- if .Values.customSecretLicenseKey }} + {{- .Values.customSecretLicenseKey -}} + {{- else -}} + {{- 
.Values.customSecretKey | default "" -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Returns nrStaging +*/}} +{{- define "newrelic.nrStaging" -}} +{{- if .Values.global }} + {{- if .Values.global.nrStaging }} + {{- .Values.global.nrStaging -}} + {{- end -}} +{{- else if .Values.nrStaging }} + {{- .Values.nrStaging -}} +{{- end -}} +{{- end -}} + +{{/* +Returns fargate +*/}} +{{- define "newrelic.fargate" -}} +{{- if .Values.global }} + {{- if .Values.global.fargate }} + {{- .Values.global.fargate -}} + {{- end -}} +{{- else if .Values.fargate }} + {{- .Values.fargate -}} +{{- end -}} +{{- end -}} + +{{/* +Returns lowDataMode +*/}} +{{- define "newrelic-logging.lowDataMode" -}} +{{/* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */}} +{{- if (get .Values "lowDataMode" | kindIs "bool") -}} + {{- if .Values.lowDataMode -}} + {{/* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic-logging.lowDataMode" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */}} + {{- .Values.lowDataMode -}} + {{- end -}} +{{- else -}} +{{/* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "lowDataMode" | kindIs "bool" -}} + {{- if $global.lowDataMode -}} + {{- $global.lowDataMode -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Returns logsEndpoint +*/}} +{{- define "newrelic-logging.logsEndpoint" -}} +{{- if (include "newrelic.nrStaging" .) 
-}} +https://staging-log-api.newrelic.com/log/v1 +{{- else if .Values.endpoint -}} +{{ .Values.endpoint -}} +{{- else if eq (substr 0 2 (include "newrelic-logging.licenseKey" .)) "eu" -}} +https://log-api.eu.newrelic.com/log/v1 +{{- else -}} +https://log-api.newrelic.com/log/v1 +{{- end -}} +{{- end -}} + +{{/* +Returns metricsHost +*/}} +{{- define "newrelic-logging.metricsHost" -}} +{{- if (include "newrelic.nrStaging" .) -}} +staging-metric-api.newrelic.com +{{- else if eq (substr 0 2 (include "newrelic-logging.licenseKey" .)) "eu" -}} +metric-api.eu.newrelic.com +{{- else -}} +metric-api.newrelic.com +{{- end -}} +{{- end -}} + +{{/* +Returns if the template should render, it checks if the required values are set. +*/}} +{{- define "newrelic-logging.areValuesValid" -}} +{{- $licenseKey := include "newrelic-logging.licenseKey" . -}} +{{- $customSecretName := include "newrelic-logging.customSecretName" . -}} +{{- $customSecretKey := include "newrelic-logging.customSecretKey" . -}} +{{- and (or $licenseKey (and $customSecretName $customSecretKey))}} +{{- end -}} + +{{/* +If additionalEnvVariables is set, renames to extraEnv. Returns extraEnv. +*/}} +{{- define "newrelic-logging.extraEnv" -}} +{{- if .Values.fluentBit }} + {{- if .Values.fluentBit.additionalEnvVariables }} + {{- toYaml .Values.fluentBit.additionalEnvVariables -}} + {{- else if .Values.fluentBit.extraEnv }} + {{- toYaml .Values.fluentBit.extraEnv -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/clusterrole.yaml new file mode 100644 index 0000000000..b36340fe63 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/clusterrole.yaml @@ -0,0 +1,23 @@ +{{- if .Values.rbac.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: {{ include "newrelic-logging.labels" . 
| indent 4 }} + name: {{ template "newrelic-logging.fullname" . }} +rules: + - apiGroups: [""] + resources: + - namespaces + - pods + verbs: ["get", "list", "watch"] +{{- if .Values.rbac.pspEnabled }} + - apiGroups: + - extensions + resources: + - podsecuritypolicies + resourceNames: + - privileged-{{ template "newrelic-logging.fullname" . }} + verbs: + - use +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/clusterrolebinding.yaml new file mode 100644 index 0000000000..6b258f6973 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/clusterrolebinding.yaml @@ -0,0 +1,15 @@ +{{- if .Values.rbac.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: {{ include "newrelic-logging.labels" . | indent 4 }} + name: {{ template "newrelic-logging.fullname" . }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ template "newrelic-logging.fullname" . }} +subjects: +- kind: ServiceAccount + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/configmap.yaml new file mode 100644 index 0000000000..4b1d890144 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/configmap.yaml @@ -0,0 +1,38 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + namespace: {{ .Release.Namespace }} + labels: {{ include "newrelic-logging.labels" . | indent 4 }} + name: {{ template "newrelic-logging.fluentBitConfig" . 
}} +data: + fluent-bit.conf: | + {{- if .Values.fluentBit.config.service }} + {{- .Values.fluentBit.config.service | nindent 4 }} + {{- end }} + {{- if .Values.fluentBit.config.inputs }} + {{- .Values.fluentBit.config.inputs | nindent 4 }} + {{- end }} + {{- if .Values.fluentBit.config.extraInputs }} + {{- .Values.fluentBit.config.extraInputs | nindent 4}} + {{- end }} + {{- if and (include "newrelic-logging.lowDataMode" .) (.Values.fluentBit.config.lowDataModeFilters) }} + {{- .Values.fluentBit.config.lowDataModeFilters | nindent 4 }} + {{- else }} + {{- .Values.fluentBit.config.filters | nindent 4 }} + {{- end }} + {{- if .Values.fluentBit.config.extraFilters }} + {{- .Values.fluentBit.config.extraFilters | nindent 4}} + {{- end }} + {{- if .Values.fluentBit.config.outputs }} + {{- .Values.fluentBit.config.outputs | nindent 4 }} + {{- end }} + {{- if .Values.fluentBit.config.extraOutputs }} + {{- .Values.fluentBit.config.extraOutputs | nindent 4}} + {{- end }} + {{- if and (.Values.fluentBit.sendMetrics) (.Values.fluentBit.config.metricInstrumentation) }} + {{- .Values.fluentBit.config.metricInstrumentation | nindent 4}} + {{- end }} + parsers.conf: | + {{- if .Values.fluentBit.config.parsers }} + {{- .Values.fluentBit.config.parsers | nindent 4}} + {{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/daemonset-windows.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/daemonset-windows.yaml new file mode 100644 index 0000000000..6a3145d134 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/daemonset-windows.yaml @@ -0,0 +1,174 @@ +{{- if and (include "newrelic-logging.areValuesValid" $) $.Values.enableWindows }} +{{- range .Values.windowsOsList }} +apiVersion: apps/v1 +kind: DaemonSet +metadata: + namespace: {{ $.Release.Namespace }} + labels: + kubernetes.io/os: windows +{{ include "newrelic-logging.labels" $ | indent 4 }} + name: {{ template 
"newrelic-logging.fullname" $ }}-windows-{{ .version }} + annotations: + {{- if $.Values.daemonSet.annotations }} +{{ toYaml $.Values.daemonSet.annotations | indent 4 }} + {{- end }} +spec: + updateStrategy: + type: {{ $.Values.updateStrategy }} + selector: + matchLabels: + app: {{ template "newrelic-logging.name" $ }} + release: {{ $.Release.Name }} + kubernetes.io/os: windows + template: + metadata: + annotations: + checksum/fluent-bit-config: {{ include (print $.Template.BasePath "/configmap.yaml") $ | sha256sum }} + {{- if $.Values.podAnnotations }} +{{ toYaml $.Values.podAnnotations | indent 8}} + {{- end }} + labels: + app: {{ template "newrelic-logging.name" $ }} + release: {{ $.Release.Name }} + kubernetes.io/os: windows + app.kubernetes.io/name: {{ template "newrelic-logging.name" $ }} + {{- if $.Values.podLabels}} +{{ toYaml $.Values.podLabels | indent 8 }} + {{- end }} + spec: + serviceAccountName: {{ include "newrelic.common.serviceAccount.name" $ }} + {{- with include "newrelic.common.dnsConfig" $ }} + dnsConfig: + {{- . | nindent 8 }} + {{- end }} + dnsPolicy: ClusterFirst + terminationGracePeriodSeconds: 10 + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list $.Values.image.pullSecrets) "context" $) }} + imagePullSecrets: + {{- . 
| nindent 8 }} + {{- end }} + {{- if $.Values.hostNetwork }} + hostNetwork: {{ $.Values.hostNetwork }} + {{- end }} + {{- if $.Values.windows.initContainers }} + initContainers: +{{ toYaml $.Values.windows.initContainers | indent 8 }} + {{- end }} + containers: + - name: {{ template "newrelic-logging.name" $ }} + # We have to use 'replace' to remove the double-quotes that "newrelic.common.images.image" has, so that + # we can append the Windows image tag suffix (and then re-quote that value) + image: "{{ include "newrelic.common.images.image" ( dict "imageRoot" $.Values.image "context" $) | replace "\"" ""}}-{{ .imageTagSuffix }}" + imagePullPolicy: "{{ $.Values.image.pullPolicy }}" + securityContext: {} + env: + - name: ENDPOINT + value: {{ include "newrelic-logging.logsEndpoint" $ | quote }} + - name: SOURCE + value: {{ if (include "newrelic-logging.lowDataMode" $) }} "k8s" {{- else }} "kubernetes" {{- end }} + - name: LICENSE_KEY + valueFrom: + secretKeyRef: + {{- if (include "newrelic-logging.licenseKey" $) }} + name: {{ template "newrelic-logging.fullname" $ }}-config + key: license + {{- else }} + name: {{ include "newrelic-logging.customSecretName" $ }} + key: {{ include "newrelic-logging.customSecretKey" $ }} + {{- end }} + - name: CLUSTER_NAME + value: {{ include "newrelic-logging.cluster" $ }} + - name: LOG_LEVEL + value: {{ $.Values.fluentBit.logLevel | quote }} + - name: LOG_PARSER + {{- if $.Values.fluentBit.criEnabled }} + value: "cri,docker" + {{- else }} + value: "docker,cri" + {{- end }} + {{- if or (not $.Values.fluentBit.persistence) (eq $.Values.fluentBit.persistence.mode "hostPath") }} + - name: FB_DB + value: {{ $.Values.fluentBit.windowsDb | quote }} + {{- else }} + - name: FB_DB + value: "" + {{- end }} + - name: PATH + value: {{ $.Values.fluentBit.windowsPath | quote }} + - name: K8S_BUFFER_SIZE + value: {{ $.Values.fluentBit.k8sBufferSize | quote }} + - name: K8S_LOGGING_EXCLUDE + value: {{ $.Values.fluentBit.k8sLoggingExclude | default 
"false" | quote }} + - name: LOW_DATA_MODE + value: {{ include "newrelic-logging.lowDataMode" $ | default "false" | quote }} + - name: RETRY_LIMIT + value: {{ $.Values.fluentBit.retryLimit | quote }} + - name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: HOSTNAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: SEND_OUTPUT_PLUGIN_METRICS + value: {{ $.Values.fluentBit.sendMetrics | default "false" | quote }} + - name: METRICS_HOST + value: {{ include "newrelic-logging.metricsHost" $ | quote }} + {{- include "newrelic-logging.extraEnv" $ | nindent 12 }} + command: + - C:\fluent-bit\bin\fluent-bit.exe + - -c + - c:\fluent-bit\etc\fluent-bit.conf + - -e + - C:\fluent-bit\bin\out_newrelic.dll + {{- if $.Values.exposedPorts }} + ports: {{ toYaml $.Values.exposedPorts | nindent 12 }} + {{- end }} + volumeMounts: + - mountPath: C:\fluent-bit\etc + name: fluent-bit-config + - mountPath: C:\var\log + name: logs + {{- if and ($.Values.fluentBit.persistence) (ne $.Values.fluentBit.persistence.mode "hostPath") }} + readOnly: true + {{- end }} + # We need to also mount this because the logs in C:\var\logs are actually symlinks to C:\ProgramData. + # So, in order to be able to read these logs, the reading process needs to also have access to C:\ProgramData. 
+ - mountPath: C:\ProgramData + name: progdata + {{- if and ($.Values.fluentBit.persistence) (ne $.Values.fluentBit.persistence.mode "hostPath") }} + readOnly: true + {{- end }} + {{- if $.Values.resources }} + resources: +{{ toYaml $.Values.resources | indent 12 }} + {{- end }} + volumes: + - name: fluent-bit-config + configMap: + name: {{ template "newrelic-logging.fluentBitConfig" $ }} + - name: logs + hostPath: + path: C:\var\log + - name: progdata + hostPath: + path: C:\ProgramData + {{- if $.Values.priorityClassName }} + priorityClassName: {{ $.Values.priorityClassName }} + {{- end }} + nodeSelector: + {{- if $.Values.windowsNodeSelector }} +{{ toYaml $.Values.windowsNodeSelector | indent 8 }} + {{- else }} + kubernetes.io/os: windows + # Windows containers can only be executed on hosts running the exact same Windows version and build number + node.kubernetes.io/windows-build: {{ .buildNumber }} + {{- end }} + {{- if $.Values.tolerations }} + tolerations: +{{ toYaml $.Values.tolerations | indent 8 }} + {{- end }} +--- +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/daemonset.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/daemonset.yaml new file mode 100644 index 0000000000..7dac9e07e3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/daemonset.yaml @@ -0,0 +1,211 @@ +{{- if and (include "newrelic-logging.areValuesValid" .) .Values.enableLinux }} +apiVersion: apps/v1 +kind: DaemonSet +metadata: + namespace: {{ .Release.Namespace }} + labels: {{ include "newrelic-logging.labels" . | indent 4 }} + name: {{ template "newrelic-logging.fullname" . }} + annotations: + {{- if .Values.daemonSet.annotations }} +{{ toYaml .Values.daemonSet.annotations | indent 4 }} + {{- end }} +spec: + updateStrategy: + type: {{ .Values.updateStrategy }} + selector: + matchLabels: + app: {{ template "newrelic-logging.name" . 
}} + release: {{.Release.Name }} + template: + metadata: + annotations: + checksum/fluent-bit-config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} + {{- if .Values.podAnnotations }} +{{ toYaml .Values.podAnnotations | indent 8}} + {{- end }} + labels: + app: {{ template "newrelic-logging.name" . }} + release: {{.Release.Name }} + kubernetes.io/os: linux + app.kubernetes.io/name: {{ template "newrelic-logging.name" . }} + {{- if .Values.podLabels}} +{{ toYaml .Values.podLabels | indent 8 }} + {{- end }} + spec: + serviceAccountName: {{ include "newrelic.common.serviceAccount.name" . }} + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 8 }} + {{- end }} + dnsPolicy: ClusterFirstWithHostNet + terminationGracePeriodSeconds: 10 + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.image.pullSecrets) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.securityContext.pod" . }} + securityContext: + {{- . | nindent 8 }} + {{- end }} + {{- if .Values.hostNetwork }} + hostNetwork: {{ .Values.hostNetwork }} + {{- end }} + initContainers: + {{- if and (.Values.fluentBit.persistence) (eq .Values.fluentBit.persistence.mode "persistentVolume") }} + - name: init + image: busybox:1.36 + command: ["/bin/sh", "-c"] + args: ["/bin/find /db -type f -mtime +1 -delete"] # Delete all db files not updated in the last 24h + volumeMounts: + - name: fb-db-pvc + mountPath: /db + {{- end }} + {{- if .Values.initContainers }} +{{ toYaml .Values.initContainers | indent 8 }} + {{- end }} + containers: + - name: {{ template "newrelic-logging.name" . }} + {{- with include "newrelic.common.securityContext.container" . }} + securityContext: + {{- . | nindent 12 }} + {{- end }} + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.image "context" .) 
}} + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + env: + - name: ENDPOINT + value: {{ include "newrelic-logging.logsEndpoint" . | quote }} + - name: SOURCE + value: {{ if (include "newrelic-logging.lowDataMode" .) }} "k8s" {{- else }} "kubernetes" {{- end }} + - name: LICENSE_KEY + valueFrom: + secretKeyRef: + {{- if (include "newrelic-logging.licenseKey" .) }} + name: {{ template "newrelic-logging.fullname" . }}-config + key: license + {{- else }} + name: {{ include "newrelic-logging.customSecretName" . }} + key: {{ include "newrelic-logging.customSecretKey" . }} + {{- end }} + - name: CLUSTER_NAME + value: {{ include "newrelic-logging.cluster" . }} + - name: LOG_LEVEL + value: {{ .Values.fluentBit.logLevel | quote }} + - name: LOG_PARSER + {{- if .Values.fluentBit.criEnabled }} + value: "cri,docker" + {{- else }} + value: "docker,cri" + {{- end }} + {{- if or (not .Values.fluentBit.persistence) (eq .Values.fluentBit.persistence.mode "hostPath") }} + - name: FB_DB + value: {{ .Values.fluentBit.db | quote }} + {{- else if eq .Values.fluentBit.persistence.mode "persistentVolume" }} + - name: FB_DB + value: "/db/$(NODE_NAME)-fb.db" + {{- else }} + - name: FB_DB + value: "" + {{- end }} + - name: PATH + value: {{ .Values.fluentBit.path | quote }} + - name: K8S_BUFFER_SIZE + value: {{ .Values.fluentBit.k8sBufferSize | quote }} + - name: K8S_LOGGING_EXCLUDE + value: {{ .Values.fluentBit.k8sLoggingExclude | default "false" | quote }} + - name: LOW_DATA_MODE + value: {{ include "newrelic-logging.lowDataMode" . | default "false" | quote }} + - name: RETRY_LIMIT + value: {{ .Values.fluentBit.retryLimit | quote }} + - name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: HOSTNAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: SEND_OUTPUT_PLUGIN_METRICS + value: {{ $.Values.fluentBit.sendMetrics | default "false" | quote }} + - name: METRICS_HOST + value: {{ include "newrelic-logging.metricsHost" . 
| quote }} + {{- include "newrelic-logging.extraEnv" . | nindent 12 }} + command: + - /fluent-bit/bin/fluent-bit + - -c + - /fluent-bit/etc/fluent-bit.conf + - -e + - /fluent-bit/bin/out_newrelic.so + volumeMounts: + - name: fluent-bit-config + mountPath: /fluent-bit/etc + - name: logs + # We mount /var by default because container logs could be on /var/log or /var/lib/docker/containers (symlinked to /var/log) + mountPath: {{ .Values.fluentBit.linuxMountPath | default "/var" }} + {{- if and (.Values.fluentBit.persistence) (ne .Values.fluentBit.persistence.mode "hostPath") }} + readOnly: true + {{- end }} + {{- if and (.Values.fluentBit.persistence) (eq .Values.fluentBit.persistence.mode "persistentVolume") }} + - name: fb-db-pvc + mountPath: /db + {{- end }} + {{- if .Values.extraVolumeMounts }} + {{- toYaml .Values.extraVolumeMounts | nindent 12 }} + {{- end }} + {{- if .Values.exposedPorts }} + ports: {{ toYaml .Values.exposedPorts | nindent 12 }} + {{- end }} + {{- if .Values.resources }} + resources: +{{ toYaml .Values.resources | indent 12 }} + {{- end }} + volumes: + - name: fluent-bit-config + configMap: + name: {{ template "newrelic-logging.fluentBitConfig" . }} + - name: logs + hostPath: + path: {{ .Values.fluentBit.linuxMountPath | default "/var" }} + {{- if and (.Values.fluentBit.persistence) (eq .Values.fluentBit.persistence.mode "persistentVolume") }} + - name: fb-db-pvc + persistentVolumeClaim: + {{- if .Values.fluentBit.persistence.persistentVolume.existingVolumeClaim }} + claimName: {{ .Values.fluentBit.persistence.persistentVolume.existingVolumeClaim }} + {{- else }} + claimName: {{ template "newrelic-logging.fullname" . 
}}-pvc + {{- end }} + {{- end }} + {{- if .Values.extraVolumes }} + {{- toYaml .Values.extraVolumes | nindent 8 }} + {{- end }} + {{- if $.Values.priorityClassName }} + priorityClassName: {{ $.Values.priorityClassName }} + {{- end }} + {{- if .Values.nodeAffinity }} + affinity: + nodeAffinity: {{ .Values.nodeAffinity | toYaml | nindent 10 }} + {{- else if include "newrelic.fargate" . }} + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: eks.amazonaws.com/compute-type + operator: NotIn + values: + - fargate + {{- end }} + nodeSelector: + {{- if .Values.nodeSelector }} +{{ toYaml .Values.nodeSelector | indent 8 }} + {{- else if $.Values.enableWindows }} + # We add this only if Windows is enabled to keep backwards-compatibility. Prior to version 1.14, this label was + # named beta.kubernetes.io/os. In version 1.14, it was deprecated and replaced by this one. Version 1.14 also + # introduces Windows support. Therefore, anyone wishing to use Windows containers must be on version >=1.14 and + # will need this label, in order to avoid placing a Linux container on a Windows node, and vice versa. 
+ kubernetes.io/os: linux + {{- end }} + {{- if .Values.tolerations }} + tolerations: +{{ toYaml .Values.tolerations | indent 8 }} + {{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/persistentvolume.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/persistentvolume.yaml new file mode 100644 index 0000000000..f2fb93d77a --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/persistentvolume.yaml @@ -0,0 +1,57 @@ +{{- if (not (empty .Values.fluentBit.persistence)) }} + +{{- if and (eq .Values.fluentBit.persistence.mode "persistentVolume") (not .Values.fluentBit.persistence.persistentVolume.storageClass) (not .Values.fluentBit.persistence.persistentVolume.existingVolumeClaim) }} +{{ fail "You should provide a ReadWriteMany storageClass or an existingVolumeClaim if using persistentVolume as the Fluent Bit persistence mode." }} +{{- end }} + +{{- if and (eq .Values.fluentBit.persistence.mode "persistentVolume") (not .Values.fluentBit.persistence.persistentVolume.existingVolumeClaim) }} +{{- if and (not .Values.fluentBit.persistence.persistentVolume.dynamicProvisioning) (not .Values.fluentBit.persistence.persistentVolume.existingVolume) }} +apiVersion: v1 +kind: PersistentVolume +metadata: + namespace: {{ .Release.Namespace }} + labels: {{ include "newrelic-logging.labels" . | indent 4 }} + name: {{ template "newrelic-logging.fullname" . 
}}-pv + annotations: + {{- if .Values.fluentBit.persistence.persistentVolume.annotations.volume }} +{{ toYaml .Values.fluentBit.persistence.persistentVolume.annotations.volume | indent 4 }} + {{- end }} +spec: + accessModes: + - ReadWriteMany + capacity: + storage: {{ .Values.fluentBit.persistence.persistentVolume.size }} + storageClassName: {{ .Values.fluentBit.persistence.persistentVolume.storageClass }} + persistentVolumeReclaimPolicy: Delete + {{- if .Values.fluentBit.persistence.persistentVolume.extra.volume }} +{{ toYaml .Values.fluentBit.persistence.persistentVolume.extra.volume | indent 2 }} + {{- end }} +--- +{{- end }} +apiVersion: v1 +kind: PersistentVolumeClaim +metadata: + namespace: {{ .Release.Namespace }} + labels: {{ include "newrelic-logging.labels" . | indent 4 }} + name: {{ template "newrelic-logging.fullname" . }}-pvc + annotations: + {{- if .Values.fluentBit.persistence.persistentVolume.annotations.claim }} +{{ toYaml .Values.fluentBit.persistence.persistentVolume.annotations.claim | indent 4 }} + {{- end }} +spec: + storageClassName: {{ .Values.fluentBit.persistence.persistentVolume.storageClass }} + accessModes: + - ReadWriteMany +{{- if .Values.fluentBit.persistence.persistentVolume.existingVolume }} + volumeName: {{ .Values.fluentBit.persistence.persistentVolume.existingVolume }} +{{- else if not .Values.fluentBit.persistence.persistentVolume.dynamicProvisioning }} + volumeName: {{ template "newrelic-logging.fullname" . 
}}-pv +{{- end }} + resources: + requests: + storage: {{ .Values.fluentBit.persistence.persistentVolume.size }} + {{- if .Values.fluentBit.persistence.persistentVolume.extra.claim }} +{{ toYaml .Values.fluentBit.persistence.persistentVolume.extra.claim | indent 2 }} + {{- end }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/podsecuritypolicy.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/podsecuritypolicy.yaml new file mode 100644 index 0000000000..2c8c598e24 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/podsecuritypolicy.yaml @@ -0,0 +1,24 @@ +{{- if .Values.rbac.pspEnabled }} +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: privileged-{{ template "newrelic-logging.fullname" . }} +spec: + allowedCapabilities: + - '*' + fsGroup: + rule: RunAsAny + runAsUser: + rule: RunAsAny + seLinux: + rule: RunAsAny + supplementalGroups: + rule: RunAsAny + volumes: + - '*' + hostPID: true + hostIPC: true + hostPorts: + - min: 1 + max: 65536 +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/secret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/secret.yaml new file mode 100644 index 0000000000..47a56e573e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/secret.yaml @@ -0,0 +1,12 @@ +{{- $licenseKey := include "newrelic-logging.licenseKey" . -}} +{{- if $licenseKey }} +apiVersion: v1 +kind: Secret +metadata: + namespace: {{ .Release.Namespace }} + labels: {{ include "newrelic-logging.labels" . | indent 4 }} + name: {{ template "newrelic-logging.fullname" . 
}}-config +type: Opaque +data: + license: {{ $licenseKey | b64enc }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/serviceaccount.yaml new file mode 100644 index 0000000000..51da56a3e7 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/templates/serviceaccount.yaml @@ -0,0 +1,17 @@ +{{- if include "newrelic.common.serviceAccount.create" . -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- if include "newrelic.common.serviceAccount.annotations" . }} + annotations: + {{- include "newrelic.common.serviceAccount.annotations" . | nindent 4 }} + {{- end }} + labels: + app: {{ template "newrelic-logging.name" . }} + chart: {{ template "newrelic-logging.chart" . }} + heritage: "{{ .Release.Service }}" + release: "{{ .Release.Name }}" + {{- /*include "newrelic.common.labels" . | nindent 4 /!\ Breaking change /!\ */}} + name: {{ include "newrelic.common.serviceAccount.name" . 
}} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/cri_parser_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/cri_parser_test.yaml new file mode 100644 index 0000000000..f4a1d01d07 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/cri_parser_test.yaml @@ -0,0 +1,37 @@ +suite: test cri, docker parser options in daemonsets +templates: + - templates/configmap.yaml + - templates/daemonset.yaml + - templates/daemonset-windows.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: cri enabled by default and docker as fallback + templates: + - templates/daemonset.yaml + - templates/daemonset-windows.yaml + set: + licenseKey: nr_license_key + enableWindows: true + asserts: + - contains: + path: spec.template.spec.containers[0].env + content: + name: LOG_PARSER + value: "cri,docker" + - it: docker is set if enabled and cri as fallback + templates: + - templates/daemonset.yaml + - templates/daemonset-windows.yaml + set: + licenseKey: nr_license_key + enableWindows: true + fluentBit: + criEnabled: false + asserts: + - contains: + path: spec.template.spec.containers[0].env + content: + name: LOG_PARSER + value: "docker,cri" \ No newline at end of file diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/dns_config_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/dns_config_test.yaml new file mode 100644 index 0000000000..76d24eac5f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/dns_config_test.yaml @@ -0,0 +1,62 @@ +suite: test dnsConfig options in daemonsets +templates: + - templates/configmap.yaml + - templates/daemonset.yaml + - templates/daemonset-windows.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: daemonsets contain dnsConfig block when provided + set: + licenseKey: nr_license_key + 
enableWindows: true + dnsConfig: + nameservers: + - 192.0.2.1 + asserts: + - exists: + path: spec.template.spec.dnsConfig + template: templates/daemonset.yaml + - exists: + path: spec.template.spec.dnsConfig + template: templates/daemonset-windows.yaml + + - it: daemonsets do not contain dnsConfig block when not provided + set: + licenseKey: nr_license_key + enableWindows: true + dnsConfig: {} + asserts: + - notExists: + path: spec.template.spec.dnsConfig + template: templates/daemonset.yaml + - notExists: + path: spec.template.spec.dnsConfig + template: templates/daemonset-windows.yaml + + - it: daemonsets contain provided dnsConfig options + set: + licenseKey: nr_license_key + enableWindows: true + dnsConfig: + options: + - name: ndots + value: "1" + asserts: + - equal: + path: spec.template.spec.dnsConfig.options[0].name + value: ndots + template: templates/daemonset.yaml + - equal: + path: spec.template.spec.dnsConfig.options[0].value + value: "1" + template: templates/daemonset.yaml + - equal: + path: spec.template.spec.dnsConfig.options[0].name + value: ndots + template: templates/daemonset-windows.yaml + - equal: + path: spec.template.spec.dnsConfig.options[0].value + value: "1" + template: templates/daemonset-windows.yaml diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/endpoint_region_selection_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/endpoint_region_selection_test.yaml new file mode 100644 index 0000000000..82e700d930 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/endpoint_region_selection_test.yaml @@ -0,0 +1,128 @@ +suite: test endpoint selection based on region settings +templates: + - templates/configmap.yaml + - templates/daemonset.yaml + - templates/daemonset-windows.yaml +release: + name: endpoint-selection-release + namespace: endpoint-selection-namespace +tests: + + - it: selects staging endpoints if nrStaging is enabled + set: + licenseKey: 
nr_license_key + nrStaging: true + enableWindows: true + asserts: + # Linux + - contains: + path: spec.template.spec.containers[0].env + content: + name: ENDPOINT + value: "https://staging-log-api.newrelic.com/log/v1" + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: METRICS_HOST + value: "staging-metric-api.newrelic.com" + template: templates/daemonset.yaml + # Windows + - contains: + path: spec.template.spec.containers[0].env + content: + name: ENDPOINT + value: "https://staging-log-api.newrelic.com/log/v1" + template: templates/daemonset-windows.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: METRICS_HOST + value: "staging-metric-api.newrelic.com" + template: templates/daemonset-windows.yaml + + - it: selects US endpoints for a US license key + set: + licenseKey: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaFFFFNRAL + enableWindows: true + asserts: + # Linux + - contains: + path: spec.template.spec.containers[0].env + content: + name: ENDPOINT + value: "https://log-api.newrelic.com/log/v1" + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: METRICS_HOST + value: "metric-api.newrelic.com" + template: templates/daemonset.yaml + # Windows + - contains: + path: spec.template.spec.containers[0].env + content: + name: ENDPOINT + value: "https://log-api.newrelic.com/log/v1" + template: templates/daemonset-windows.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: METRICS_HOST + value: "metric-api.newrelic.com" + template: templates/daemonset-windows.yaml + + - it: selects EU endpoints for a EU license key + set: + licenseKey: euaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaFFFFNRAL + enableWindows: true + asserts: + # Linux + - contains: + path: spec.template.spec.containers[0].env + content: + name: ENDPOINT + value: "https://log-api.eu.newrelic.com/log/v1" + template: templates/daemonset.yaml + - 
contains: + path: spec.template.spec.containers[0].env + content: + name: METRICS_HOST + value: "metric-api.eu.newrelic.com" + template: templates/daemonset.yaml + # Windows + - contains: + path: spec.template.spec.containers[0].env + content: + name: ENDPOINT + value: "https://log-api.eu.newrelic.com/log/v1" + template: templates/daemonset-windows.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: METRICS_HOST + value: "metric-api.eu.newrelic.com" + template: templates/daemonset-windows.yaml + + + - it: selects custom logs endpoint if provided + set: + licenseKey: euaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaFFFFNRAL + endpoint: custom + enableWindows: true + asserts: + # Linux + - contains: + path: spec.template.spec.containers[0].env + content: + name: ENDPOINT + value: "custom" + template: templates/daemonset.yaml + # Windows + - contains: + path: spec.template.spec.containers[0].env + content: + name: ENDPOINT + value: "custom" + template: templates/daemonset-windows.yaml \ No newline at end of file diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_k8logging_exclude_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_k8logging_exclude_test.yaml new file mode 100644 index 0000000000..446f829b00 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_k8logging_exclude_test.yaml @@ -0,0 +1,45 @@ +suite: test fluent-bit exclude logging +templates: + - templates/daemonset.yaml + - templates/configmap.yaml + - templates/persistentvolume.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: K8S_LOGGING_EXCLUDE set true + templates: + - templates/daemonset.yaml + set: + licenseKey: nr_license_key + fluentBit: + k8sLoggingExclude: true + asserts: + - equal: + path: spec.template.spec.containers[0].env[?(@.name=="K8S_LOGGING_EXCLUDE")].value + value: "true" + template: templates/daemonset.yaml + - it: 
K8S_LOGGING_EXCLUDE set false + templates: + - templates/daemonset.yaml + set: + licenseKey: nr_license_key + fluentBit: + k8sLoggingExclude: false + asserts: + - equal: + path: spec.template.spec.containers[0].env[?(@.name=="K8S_LOGGING_EXCLUDE")].value + value: "false" + template: templates/daemonset.yaml + - it: K8S_LOGGING_EXCLUDE set value xyz and expect it to be set + templates: + - templates/daemonset.yaml + set: + licenseKey: nr_license_key + fluentBit: + k8sLoggingExclude: xyz + asserts: + - equal: + path: spec.template.spec.containers[0].env[?(@.name=="K8S_LOGGING_EXCLUDE")].value + value: "xyz" + template: templates/daemonset.yaml diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_persistence_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_persistence_test.yaml new file mode 100644 index 0000000000..67d14c7955 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_persistence_test.yaml @@ -0,0 +1,317 @@ +suite: test fluent-bit persistence options +templates: + - templates/daemonset.yaml + - templates/configmap.yaml + - templates/persistentvolume.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: default persistence is hostPath, DB is set properly and logs volume is read/write + set: + licenseKey: nr_license_key + asserts: + - contains: + path: spec.template.spec.containers[0].volumeMounts + content: + name: logs + mountPath: /var + template: templates/daemonset.yaml + - notContains: + path: spec.template.spec.containers[0].volumeMounts + content: + name: fb-db-pvc + mountPath: /db + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.volumes + content: + name: logs + hostPath: + path: /var + template: templates/daemonset.yaml + - notContains: + path: spec.template.spec.volumes + content: + name: fb-db-pvc + persistentVolumeClaim: + claimName: my-release-newrelic-logging-pvc + template: 
templates/daemonset.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: FB_DB + value: /var/log/flb_kube.db + template: templates/daemonset.yaml + - hasDocuments: + count: 0 + template: templates/persistentvolume.yaml + - it: fluentBit.persistence set to none should keep FB_DB env empty and mount logs volume as read-only + set: + licenseKey: nr_license_key + fluentBit: + persistence: + mode: none + asserts: + - contains: + path: spec.template.spec.containers[0].env + content: + name: FB_DB + value: "" + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.containers[0].volumeMounts + content: + name: logs + mountPath: /var + readOnly: true + template: templates/daemonset.yaml + - notContains: + path: spec.template.spec.containers[0].volumeMounts + content: + name: fb-db-pvc + mountPath: /db + template: templates/daemonset.yaml + - notContains: + path: spec.template.spec.volumes + content: + name: fb-db-pvc + persistentVolumeClaim: + claimName: my-release-newrelic-logging-pvc + template: templates/daemonset.yaml + - hasDocuments: + count: 0 + template: templates/persistentvolume.yaml + - it: fluentBit.persistence set to persistentVolume should create a volume, add it to the daemonset, add an initContainer to clean up and set the FB_DB. Dynamic provisioning is enabled by default. 
+ set: + licenseKey: nr_license_key + fluentBit: + persistence: + mode: persistentVolume + persistentVolume: + storageClass: sample-rwx + asserts: + - contains: + path: spec.template.spec.containers[0].env + content: + name: FB_DB + value: "/db/$(NODE_NAME)-fb.db" + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.containers[0].volumeMounts + content: + name: logs + mountPath: /var + readOnly: true + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.containers[0].volumeMounts + content: + name: fb-db-pvc + mountPath: /db + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.volumes + content: + name: fb-db-pvc + persistentVolumeClaim: + claimName: my-release-newrelic-logging-pvc + template: templates/daemonset.yaml + - isNotNullOrEmpty: + path: spec.template.spec.initContainers + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.initContainers[0].volumeMounts + content: + name: fb-db-pvc + mountPath: /db + template: templates/daemonset.yaml + - hasDocuments: + count: 1 + template: templates/persistentvolume.yaml + - isKind: + of: PersistentVolumeClaim + template: templates/persistentvolume.yaml + - equal: + path: spec.accessModes + value: + - ReadWriteMany + template: templates/persistentvolume.yaml + - it: fluentBit.persistence.persistentVolume with non dynamic provisioning should create the PV and PVC + set: + licenseKey: nr_license_key + fluentBit: + persistence: + mode: persistentVolume + persistentVolume: + storageClass: sample-rwx + dynamicProvisioning: false + asserts: + - hasDocuments: + count: 2 + template: templates/persistentvolume.yaml + - isKind: + of: PersistentVolume + documentIndex: 0 + template: templates/persistentvolume.yaml + - isKind: + of: PersistentVolumeClaim + 
documentIndex: 1 + template: templates/persistentvolume.yaml + - equal: + path: spec.accessModes + value: + - ReadWriteMany + documentIndex: 0 + template: templates/persistentvolume.yaml + - equal: + path: spec.accessModes + value: + - ReadWriteMany + documentIndex: 1 + template: templates/persistentvolume.yaml + - it: fluentBit.persistence storage class should be set properly on PV and PVC + set: + licenseKey: nr_license_key + fluentBit: + persistence: + mode: persistentVolume + persistentVolume: + dynamicProvisioning: false + storageClass: sample-storage-rwx + asserts: + - equal: + path: spec.storageClassName + value: sample-storage-rwx + documentIndex: 0 + template: templates/persistentvolume.yaml + - equal: + path: spec.storageClassName + value: sample-storage-rwx + documentIndex: 1 + template: templates/persistentvolume.yaml + - it: fluentBit.persistence.persistentVolume size should be set properly on PV and PVC + set: + licenseKey: nr_license_key + fluentBit: + persistence: + mode: persistentVolume + persistentVolume: + storageClass: sample-rwx + dynamicProvisioning: false + size: 100Gi + asserts: + - equal: + path: spec.capacity.storage + value: 100Gi + documentIndex: 0 + template: templates/persistentvolume.yaml + - equal: + path: spec.resources.requests.storage + value: 100Gi + documentIndex: 1 + template: templates/persistentvolume.yaml + - it: fluentBit.persistence.persistentVolume not dynamically provisioned but volumeName provided should use the volumeName and not create a PV + set: + licenseKey: nr_license_key + fluentBit: + persistence: + mode: persistentVolume + persistentVolume: + storageClass: sample-rwx + dynamicProvisioning: false + existingVolume: existing-volume + asserts: + - hasDocuments: + count: 1 + template: templates/persistentvolume.yaml + - isKind: + of: PersistentVolumeClaim + template: templates/persistentvolume.yaml + - equal: + path: spec.volumeName + value: existing-volume + template: templates/persistentvolume.yaml + - it: 
fluentBit.persistence.persistentVolume if an existing claim is provided, it's used and PV/PVC are not created + set: + licenseKey: nr_license_key + fluentBit: + persistence: + mode: persistentVolume + persistentVolume: + storageClass: sample-rwx + dynamicProvisioning: false + existingVolumeClaim: existing-claim + asserts: + - hasDocuments: + count: 0 + template: templates/persistentvolume.yaml + - contains: + path: spec.template.spec.volumes + content: + name: fb-db-pvc + persistentVolumeClaim: + claimName: existing-claim + template: templates/daemonset.yaml + - it: fluentBit.persistence.persistentVolume annotations for PV and PVC are used + set: + licenseKey: nr_license_key + fluentBit: + persistence: + mode: persistentVolume + persistentVolume: + storageClass: sample-rwx + annotations: + volume: + foo: bar + claim: + baz: qux + dynamicProvisioning: false + asserts: + - equal: + path: metadata.annotations.foo + value: bar + documentIndex: 0 + template: templates/persistentvolume.yaml + - equal: + path: metadata.annotations.baz + value: qux + documentIndex: 1 + template: templates/persistentvolume.yaml + - it: fluentBit.persistence.persistentVolume extra for PV and PVC are used + set: + licenseKey: nr_license_key + fluentBit: + persistence: + mode: persistentVolume + persistentVolume: + storageClass: sample-rwx + extra: + volume: + nfs: + path: /tmp/ + server: 1.1.1.1 + claim: + some: property + dynamicProvisioning: false + asserts: + - equal: + path: spec.nfs + value: + path: /tmp/ + server: 1.1.1.1 + documentIndex: 0 + template: templates/persistentvolume.yaml + - equal: + path: spec.some + value: property + documentIndex: 1 + template: templates/persistentvolume.yaml diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_pod_label_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_pod_label_test.yaml new file mode 100644 index 0000000000..86edd7ccd6 --- /dev/null +++ 
b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_pod_label_test.yaml @@ -0,0 +1,48 @@ +suite: test fluent-bit pods labels +templates: + - templates/daemonset.yaml + - templates/configmap.yaml + - templates/persistentvolume.yaml +release: + name: my-release + namespace: my-namespace +tests: +- it: multiple pod labels are set properly + templates: + - templates/daemonset.yaml + set: + licenseKey: nr_license_key + podLabels: + key1: value1 + key2: value2 + asserts: + - equal: + path: spec.template.metadata.labels.key1 + value: value1 + template: templates/daemonset.yaml + - equal: + path: spec.template.metadata.labels.key2 + value: value2 + template: templates/daemonset.yaml +- it: single pod label set properly + templates: + - templates/daemonset.yaml + set: + licenseKey: nr_license_key + podLabels: + key1: value1 + asserts: + - equal: + path: spec.template.metadata.labels.key1 + value: value1 + template: templates/daemonset.yaml +- it: pod labels are not set + templates: + - templates/daemonset.yaml + set: + licenseKey: nr_license_key + asserts: + - notExists: + path: spec.template.metadata.labels.key1 + - notExists: + path: spec.template.metadata.labels.key2 \ No newline at end of file diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_sendmetrics_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_sendmetrics_test.yaml new file mode 100644 index 0000000000..f320172cbc --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/fluentbit_sendmetrics_test.yaml @@ -0,0 +1,74 @@ +suite: test fluentbit send metrics +templates: + - templates/configmap.yaml + - templates/daemonset.yaml + - templates/daemonset-windows.yaml +release: + name: sendmetrics-release + namespace: sendmetrics-namespace +tests: + + - it: sets requirement environment variables to send metrics + set: + licenseKey: nr_license_key + enableWindows: true + fluentBit.sendMetrics: 
true + asserts: + # Linux + - contains: + path: spec.template.spec.containers[0].env + content: + name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: HOSTNAME + valueFrom: + fieldRef: + fieldPath: metadata.name + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: SEND_OUTPUT_PLUGIN_METRICS + value: "true" + template: templates/daemonset.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: METRICS_HOST + value: "metric-api.newrelic.com" + template: templates/daemonset.yaml + # Windows + - contains: + path: spec.template.spec.containers[0].env + content: + name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + template: templates/daemonset-windows.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: HOSTNAME + valueFrom: + fieldRef: + fieldPath: metadata.name + template: templates/daemonset-windows.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: SEND_OUTPUT_PLUGIN_METRICS + value: "true" + template: templates/daemonset-windows.yaml + - contains: + path: spec.template.spec.containers[0].env + content: + name: METRICS_HOST + value: "metric-api.newrelic.com" + template: templates/daemonset-windows.yaml diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/host_network_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/host_network_test.yaml new file mode 100644 index 0000000000..612d1d9a5f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/host_network_test.yaml @@ -0,0 +1,46 @@ +suite: test hostNetwork options in fluent-bit pods +templates: + - templates/configmap.yaml + - templates/daemonset.yaml + - templates/daemonset-windows.yaml +release: + name: my-release + namespace: my-namespace 
+tests: + - it: daemonsets do not contain hostNetwork block when not provided + set: + licenseKey: nr_license_key + enableWindows: true + asserts: + - notExists: + path: spec.template.spec.hostNetwork + template: templates/daemonset.yaml + - notExists: + path: spec.template.spec.hostNetwork + template: templates/daemonset-windows.yaml + - it: daemonsets do not contain hostNetwork block when provided as false + set: + licenseKey: nr_license_key + enableWindows: true + hostNetwork: false + asserts: + - notExists: + path: spec.template.spec.hostNetwork + template: templates/daemonset.yaml + - notExists: + path: spec.template.spec.hostNetwork + template: templates/daemonset-windows.yaml + - it: daemonsets contain hostNetwork=true when provided as true + set: + licenseKey: nr_license_key + enableWindows: true + hostNetwork: true + asserts: + - equal: + path: spec.template.spec.hostNetwork + value: true + template: templates/daemonset.yaml + - equal: + path: spec.template.spec.hostNetwork + value: true + template: templates/daemonset-windows.yaml \ No newline at end of file diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/images_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/images_test.yaml new file mode 100644 index 0000000000..533a367c51 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/images_test.yaml @@ -0,0 +1,96 @@ +suite: test images settings +templates: + - templates/configmap.yaml + - templates/daemonset.yaml + - templates/daemonset-windows.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: image names are correct + templates: + - templates/daemonset.yaml + - templates/daemonset-windows.yaml + set: + licenseKey: nr_license_key + enableWindows: true + asserts: + - equal: + path: spec.template.spec.containers[0].image + value: newrelic/newrelic-fluentbit-output:2.0.2 + template: templates/daemonset.yaml + - equal: + path: 
spec.template.spec.containers[0].image + value: newrelic/newrelic-fluentbit-output:2.0.2-windows-ltsc-2019 + template: templates/daemonset-windows.yaml + documentIndex: 0 + - equal: + path: spec.template.spec.containers[0].image + value: newrelic/newrelic-fluentbit-output:2.0.2-windows-ltsc-2022 + template: templates/daemonset-windows.yaml + documentIndex: 1 + - it: global registry is used if set + templates: + - templates/daemonset.yaml + - templates/daemonset-windows.yaml + set: + licenseKey: nr_license_key + enableWindows: true + global: + images: + registry: global_registry + asserts: + - matchRegex: + path: spec.template.spec.containers[0].image + pattern: global_registry/.* + - it: local registry overrides global + templates: + - templates/daemonset.yaml + - templates/daemonset-windows.yaml + set: + licenseKey: nr_license_key + enableWindows: true + global: + images: + registry: global_registry + image: + registry: local_registry + asserts: + - matchRegex: + path: spec.template.spec.containers[0].image + pattern: local_registry/.* + - it: pullSecrets is used if defined + templates: + - templates/daemonset.yaml + - templates/daemonset-windows.yaml + set: + licenseKey: nr_license_key + enableWindows: true + image: + pullSecrets: + - name: regsecret + asserts: + - equal: + path: spec.template.spec.imagePullSecrets[0].name + value: regsecret + - it: pullSecrets are merged + templates: + - templates/daemonset.yaml + - templates/daemonset-windows.yaml + set: + licenseKey: nr_license_key + enableWindows: true + global: + images: + pullSecrets: + - name: global_regsecret + image: + pullSecrets: + - name: regsecret + asserts: + - equal: + path: spec.template.spec.imagePullSecrets[0].name + value: global_regsecret + - equal: + path: spec.template.spec.imagePullSecrets[1].name + value: regsecret diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/linux_volume_mount_test.yaml 
b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/linux_volume_mount_test.yaml new file mode 100644 index 0000000000..83d2a2c118 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/linux_volume_mount_test.yaml @@ -0,0 +1,37 @@ +suite: test fluent-bit linux mount for logs +templates: + - templates/configmap.yaml + - templates/daemonset.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: is set to /var by default + set: + licenseKey: nr_license_key + asserts: + - equal: + path: spec.template.spec.containers[0].volumeMounts[1].mountPath + value: /var + template: templates/daemonset.yaml + - equal: + path: spec.template.spec.volumes[1].hostPath.path + value: /var + template: templates/daemonset.yaml + documentIndex: 0 + - it: is set to linuxMountPath if set + templates: + - templates/daemonset.yaml + set: + licenseKey: nr_license_key + fluentBit.linuxMountPath: /var/log + asserts: + - equal: + path: spec.template.spec.containers[0].volumeMounts[1].mountPath + value: /var/log + template: templates/daemonset.yaml + - equal: + path: spec.template.spec.volumes[1].hostPath.path + value: /var/log + template: templates/daemonset.yaml + documentIndex: 0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/rbac_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/rbac_test.yaml new file mode 100644 index 0000000000..a8d85da98a --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/tests/rbac_test.yaml @@ -0,0 +1,48 @@ +suite: test RBAC creation +templates: + - templates/clusterrolebinding.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: template rbac if it is configured to do so + set: + rbac.create: true + asserts: + - hasDocuments: + count: 1 + + - it: don't template rbac if it is disabled + set: + rbac.create: false + asserts: + - hasDocuments: + count: 0 + + - it: RBAC points to the service account 
that is created by default + set: + rbac.create: true + serviceAccount.create: true + asserts: + - equal: + path: subjects[0].name + value: my-release-newrelic-logging + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + rbac.create: true + serviceAccount.create: false + serviceAccount.name: sa-test + asserts: + - equal: + path: subjects[0].name + value: sa-test + + - it: RBAC points to the default service account when serviceAccount is disabled + set: + rbac.create: true + serviceAccount.create: false + asserts: + - equal: + path: subjects[0].name + value: default diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/values.yaml new file mode 100644 index 0000000000..c5d85b43f4 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-logging/values.yaml @@ -0,0 +1,361 @@ +# IMPORTANT: Specify your New Relic API key here. +# licenseKey: +# +# Optionally, specify a cluster name and log records can +# be filtered by cluster. +# cluster: +# +# Or, specify a secret which contains the New Relic API key +# customSecretName: secret_name +# customSecretLicenseKey: secret_key +# +# The previous values can also be set as global so that they +# can be shared by other New Relic products' charts +# +# global: +# licenseKey: +# cluster: +# customSecretName: +# customSecretLicenseKey: +# +# IMPORTANT: if you use a kubernetes secret to specify the license, +# you have to manually provide the correct endpoint depending on +# whether your account is for the EU region or not. 
+# +# endpoint: https://log-api.newrelic.com/log/v1 + +fluentBit: + logLevel: "info" + path: "/var/log/containers/*.log" + linuxMountPath: /var + windowsPath: "C:\\var\\log\\containers\\*.log" + db: "/var/log/flb_kube.db" + windowsDb: "C:\\var\\log\\flb_kube.db" + criEnabled: true + k8sBufferSize: "32k" + k8sLoggingExclude: "false" + retryLimit: 5 + sendMetrics: false + extraEnv: [] + # extraEnv: + # - name: HTTPS_PROXY + # value: http://example.com:3128 + # - name: METADATA_NAME + # valueFrom: + # fieldRef: + # fieldPath: metadata.name + + # Indicates how fluent-bit database is persisted + persistence: + # Define the persistent mode for fluent-bit db, allowed options are `hostPath` (default), `none`, `persistentVolume`. + # - `hostPath` will use hostPath to store the db file on the node disk. + # - `none` will disable the fluent-bit db file, this could cause log duplication or data loss in case fluent-bit gets restarted. + # - `persistentVolume` will use a ReadWriteMany persistent volume to store the db file. This will override `fluentBit.db` path and use `/db/${NODE_NAME}-fb.db` file instead. + mode: "hostPath" + + # In case persistence.mode is set to persistentVolume this will be needed + persistentVolume: + # The storage class should allow ReadWriteMany mode + storageClass: + # Volume and claim size. 
+ size: 10Gi + # If dynamicProvisioning is enabled, the chart will create only the PersistentVolumeClaim + dynamicProvisioning: true + # If an existingVolume is provided, we'll use it instead of creating a new one + existingVolume: + # If an existingVolumeClaim is provided, we'll use it instead of creating a new one + existingVolumeClaim: + # In case you need to add annotations to the created volume or claim + annotations: + volume: {} + claim: {} + # In case you need to specify any other option to your volume or claim + extra: + volume: + # nfs: + # path: /tmp/ + # server: 1.1.1.1 + claim: {} + + + # New Relic default configuration for fluent-bit.conf (service, inputs, filters, outputs) + # and parsers.conf (parsers). The configuration below is not configured for lowDataMode and will + # send all attributes. If custom configuration is required, update these variables. + config: + # Note that Prometheus metric collection needs the HTTP server to be online at port 2020 (see fluentBit.config.metricInstrumentation) + service: | + [SERVICE] + Flush 1 + Log_Level ${LOG_LEVEL} + Daemon off + Parsers_File parsers.conf + HTTP_Server On + HTTP_Listen 0.0.0.0 + HTTP_Port 2020 + + inputs: | + [INPUT] + Name tail + Alias pod-logs-tailer + Tag kube.* + Path ${PATH} + multiline.parser ${LOG_PARSER} + DB ${FB_DB} + Mem_Buf_Limit 7MB + Skip_Long_Lines On + Refresh_Interval 10 + +# extraInputs: | +# [INPUT] +# Name dummy +# Tag dummy.log + + filters: | + [FILTER] + Name kubernetes + Alias kubernetes-enricher + Match kube.* + # We need the full DNS suffix as Windows only supports resolving names with this suffix + # See: https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#dns-limitations + Kube_URL https://kubernetes.default.svc.cluster.local:443 + Buffer_Size ${K8S_BUFFER_SIZE} + K8S-Logging.Exclude ${K8S_LOGGING_EXCLUDE} + + [FILTER] + Name record_modifier + Alias node-attributes-enricher + Match * + Record cluster_name "${CLUSTER_NAME}" + +# 
extraFilters: | +# [FILTER] +# Name grep +# Match * +# Exclude log lvl=debug* + + lowDataModeFilters: | + [FILTER] + Name kubernetes + Match kube.* + Alias kubernetes-enricher + # We need the full DNS suffix as Windows only supports resolving names with this suffix + # See: https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#dns-limitations + Kube_URL https://kubernetes.default.svc.cluster.local:443 + Buffer_Size ${K8S_BUFFER_SIZE} + K8S-Logging.Exclude ${K8S_LOGGING_EXCLUDE} + Labels Off + Annotations Off + + [FILTER] + Name nest + Match * + Alias kubernetes-attribute-lifter + Operation lift + Nested_under kubernetes + + [FILTER] + Name record_modifier + Match * + Alias node-attributes-enricher-filter + Record cluster_name "${CLUSTER_NAME}" + Allowlist_key container_name + Allowlist_key namespace_name + Allowlist_key pod_name + Allowlist_key stream + Allowlist_key message + Allowlist_key log + + outputs: | + [OUTPUT] + Name newrelic + Match * + Alias newrelic-logs-forwarder + licenseKey ${LICENSE_KEY} + endpoint ${ENDPOINT} + lowDataMode ${LOW_DATA_MODE} + sendMetrics ${SEND_OUTPUT_PLUGIN_METRICS} + Retry_Limit ${RETRY_LIMIT} + +# extraOutputs: | +# [OUTPUT] +# Name null +# Match * + +# parsers: | +# [PARSER] +# Name my_custom_parser +# Format json +# Time_Key time +# Time_Format %Y-%m-%dT%H:%M:%S.%L +# Time_Keep On + metricInstrumentation: | + [INPUT] + name prometheus_scrape + Alias fb-metrics-collector + host 127.0.0.1 + port 2020 + tag fb_metrics + metrics_path /api/v2/metrics/prometheus + scrape_interval 10s + + [OUTPUT] + Name prometheus_remote_write + Match fb_metrics + Alias fb-metrics-forwarder + Host ${METRICS_HOST} + Port 443 + Uri /prometheus/v1/write?prometheus_server=${CLUSTER_NAME} + Header Authorization Bearer ${LICENSE_KEY} + Tls On + # Windows pods using prometheus_remote_write currently have issues if TLS verify is On + Tls.verify Off + # User-defined labels + add_label app fluent-bit + add_label 
cluster_name "${CLUSTER_NAME}" + add_label pod_name ${HOSTNAME} + add_label node_name ${NODE_NAME} + add_label source kubernetes + +image: + repository: newrelic/newrelic-fluentbit-output +# registry: my_registry + tag: "" + pullPolicy: IfNotPresent + ## See https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod + pullSecrets: [] +# - name: regsecret + +# By default, the Linux DaemonSet will always be deployed, while the Windows DaemonSet(s) won't. +enableLinux: true +enableWindows: false +# For every entry in this Windows OS list, we will create an independent DaemonSet which will get deployed +# on Windows nodes running each specific Windows version and build number. Note that +# Windows containers can only be executed on hosts running the exact same Windows version and build number, +# because Kubernetes only supports process isolation and not Hyper-V isolation (as of September 2021) +windowsOsList: + # We aim to support (limited to LTSC2019/LTSC2022 using GitHub actions, see https://github.com/actions/runner-images/tree/main/images/win): + # https://kubernetes.io/docs/setup/production-environment/windows/intro-windows-in-kubernetes/#windows-os-version-support + - version: ltsc2019 + imageTagSuffix: windows-ltsc-2019 + buildNumber: 10.0.17763 + - version: ltsc2022 + imageTagSuffix: windows-ltsc-2022 + buildNumber: 10.0.20348 + +# Default set of resources assigned to the DaemonSet pods +resources: + limits: + cpu: 500m + memory: 128Mi + requests: + cpu: 250m + memory: 64Mi + +rbac: + # Specifies whether RBAC resources should be created + create: true + pspEnabled: false + +serviceAccount: + # Specifies whether a ServiceAccount should be created + create: + # The name of the ServiceAccount to use. 
+ # If not set and create is true, a name is generated using the fullname template
+ name:
+ # Specify any annotations to add to the ServiceAccount
+ annotations: {}
+
+# Optionally configure ports to expose metrics on /api/v1/metrics/prometheus
+# See - https://docs.fluentbit.io/manual/administration/monitoring
+exposedPorts: []
+# - containerPort: 2020
+#   hostPort: 2020
+#   name: metrics
+#   protocol: TCP
+
+# If you wish to provide additional labels to apply to the pod(s), specify
+# them here
+# podLabels:
+
+# Pod scheduling priority
+# Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
+# priorityClassName: high-priority
+
+# Node affinity rules
+# Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity
+#
+# IMPORTANT #
+# ######### #
+# When .Values.global.fargate == true, the chart will automatically add the required affinity rules to exclude
+# the DaemonSet from Fargate nodes. There is no need to manually touch this property to achieve this.
+# This automatic exclusion will, however, not take place if this value is overridden: Setting this to a
+# non-empty value WHEN deploying in EKS Fargate (global.fargate == true) requires the user to manually
+# include in their custom ruleset an exclusion for nodes with "eks.amazonaws.com/compute-type: fargate", as
+# the New Relic DaemonSet MUST NOT be deployed on Fargate nodes, as the operator takes care of injecting it
+# as a sidecar instead.
+# Please refer to the daemonset.yaml template for more details on how to achieve this.
+nodeAffinity: {}
+
+# Node labels for pod assignment
+# Ref: https://kubernetes.io/docs/user-guide/node-selection/
+# Note that the Linux DaemonSet already contains a node selector label based on their OS (kubernetes.io/os: linux).
+nodeSelector: {}
+
+# Note that the Windows DaemonSet already contains a node selector label based on their OS (kubernetes.io/os: windows). 
+# and build number (node.kubernetes.io/windows-build: {{ .buildNumber }}, to ensure that each version of the DaemonSet +# gets deployed only on those Windows nodes running the exact same Windows version and build number. Note that +# Windows containers can only be executed on hosts running the exact same Windows version and build number. +windowsNodeSelector: {} + +# These are default tolerations to be able to run the New Relic Kubernetes integration. +tolerations: + - operator: "Exists" + effect: "NoSchedule" + - operator: "Exists" + effect: "NoExecute" + +updateStrategy: RollingUpdate + +# Sends data to staging, can be set as a global. +# global.nrStaging +nrStaging: false + +daemonSet: + # Annotations to add to the DaemonSet. + annotations: {} + +# Annotations to add to the resulting Pods of the DaemonSet. +podAnnotations: {} + +# If host network should be enabled for fluentbit pods. +# There are some inputs like UDP which will require this setting to be true as they need to bind to the host network. +hostNetwork: + +# When low data mode is enabled only minimal attributes are added to the logs. Kubernetes labels and +# annotations are not included. The plugin.type, plugin.version and plugin.source attributes are minified +# into the plugin.source attribute. +# Can be set as a global: global.lowDataMode +# lowDataMode: false + +extraVolumes: [] +# - name: systemdlog +# hostPath: +# path: /run/log/journal + +extraVolumeMounts: [] +# - name: systemdlog +# mountPath: /run/log/journal + +initContainers: +# - name: init +# image: busybox +# command: ["sh", "-c", 'echo "hello world"'] + +windows: + initContainers: +# - name: init +# image: ... +# command: [...] + +# -- Sets pod dnsConfig. 
Can also be configured with `global.dnsConfig` +dnsConfig: {} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/Chart.yaml new file mode 100644 index 0000000000..acd3077d4a --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/Chart.yaml @@ -0,0 +1,18 @@ +apiVersion: v1 +appVersion: 2.1.4 +description: A Helm chart for the New Relic Pixie integration. +home: https://hub.docker.com/u/newrelic +icon: https://newrelic.com/assets/newrelic/source/NewRelic-logo-square.svg +keywords: +- newrelic +- pixie +- monitoring +maintainers: +- name: nserrino +- name: philkuz +- name: htroisi +- name: vuqtran88 +name: newrelic-pixie +sources: +- https://github.com/newrelic/ +version: 2.1.4 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/README.md new file mode 100644 index 0000000000..228a3676d3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/README.md @@ -0,0 +1,166 @@ +# newrelic-pixie + +## Chart Details + +This chart will deploy the New Relic Pixie Integration. + +IMPORTANT: In order to retrieve the Pixie cluster id from the `pl-cluster-secrets` the integration needs to be deployed in the same namespace as Pixie. By default, Pixie is installed in the `pl` namespace. Alternatively the `clusterId` can be configured manually when installing the chart. In this case the integration can be deployed to any namespace. + +## Configuration + +| Parameter | Description | Default | +| ---------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------- | +| `global.cluster` - `cluster` | The cluster name for the Kubernetes cluster. Required. 
| |
+| `global.licenseKey` - `licenseKey` | The New Relic license key (stored in a secret). Required. | |
+| `global.lowDataMode` - `lowDataMode` | If `true`, the integration performs heavier sampling on the Pixie span data and sets the collect interval to 15 seconds instead of 10 seconds. | false |
+| `global.nrStaging` - `nrStaging` | Send data to staging (requires a staging license key). | false |
+| `apiKey` | The Pixie API key (stored in a secret). Required. | |
+| `clusterId` | The Pixie cluster id. Optional. Read from the `pl-cluster-secrets` secret if empty. | |
+| `endpoint` | The Pixie endpoint. Required when using Pixie Open Source. | |
+| `verbose` | Whether the integration should run in verbose mode or not. | false |
+| `global.customSecretName` - `customSecretName` | Name of an existing Secret object, not created by this chart, where the New Relic license is stored | |
+| `global.customSecretLicenseKey` - `customSecretLicenseKey` | Key in the existing Secret object, indicated by `customSecretName`, where the New Relic license key is stored. | |
+| `image.pullSecrets` | Image pull secrets. | `nil` |
+| `customSecretApiKeyName` | Name of an existing Secret object, not created by this chart, where the Pixie API key is stored. | |
+| `customSecretApiKeyKey` | Key in the existing Secret object, indicated by `customSecretApiKeyName`, where the Pixie API key is stored. | |
+| `podLabels` | Labels added to each Job pod | `{}` |
+| `podAnnotations` | Annotations added to each Job pod | `{}` |
+| `job.annotations` | Annotations added to the `newrelic-pixie` Job resource | `{}` |
+| `job.labels` | Labels added to the `newrelic-pixie` Job resource | `{}` |
+| `nodeSelector` | Node label to use for scheduling. | `{}` |
+| `tolerations` | List of node taints to tolerate (requires Kubernetes >= 1.6). | `[]` |
+| `affinity` | Node affinity to use for scheduling. | `{}` |
+| `proxy` | Set proxy to connect to Pixie Cloud and New Relic. 
| | +| `customScripts` | YAML containing custom scripts for long-term data retention. The results of the custom scripts will be stored in New Relic. See [custom scripts](#custom-scripts) for YAML format. | `{}` | +| `customScriptsConfigMap` | Name of an existing ConfigMap object containing custom script for long-term data retention. This configuration takes precedence over `customScripts`. | | +| `excludeNamespacesRegex` | Observability data for namespaces matching this RE2 regex is not sent to New Relic. If empty, observability data for all namespaces is sent to New Relic. | | +| `excludePodsRegex` | Observability data for pods (across all namespaces) matching this RE2 regex is not sent to New Relic. If empty, observability data for all pods (in non-excluded namespaces) is sent to New Relic. | | + +## Example + +Make sure you have [added the New Relic chart repository.](../../README.md#installing-charts) + +Then, to install this chart, run the following command: + +```sh +helm install newrelic/newrelic-pixie \ + --set cluster= \ + --set licenseKey= \ + --set apiKey= \ + --namespace pl \ + --generate-name +``` + +## Globals + +**Important:** global parameters have higher precedence than locals with the same name. + +These are meant to be used when you are writing a chart with subcharts. It helps to avoid +setting values multiple times on different subcharts. + +More information on globals and subcharts can be found at [Helm's official documentation](https://helm.sh/docs/topics/chart_template_guide/subcharts_and_globals/). + +| Parameter | +| ------------------------------- | +| `global.cluster` | +| `global.licenseKey` | +| `global.customSecretName` | +| `global.customSecretLicenseKey` | +| `global.lowDataMode` | +| `global.nrStaging` | + +## Custom scripts + +Custom scripts can either be configured directly in `customScripts` or be provided through an existing ConfigMap `customScriptsConfigMap`. 
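+
+For example (a sketch — the ConfigMap name `my-pixie-scripts` below is hypothetical), a ConfigMap created beforehand with `kubectl create configmap my-pixie-scripts -n pl --from-file=custom1.yaml` can be referenced from the chart values instead of inlining the scripts:
+
+```yaml
+# Reference an existing ConfigMap whose *.yaml keys follow the format described below
+customScriptsConfigMap: "my-pixie-scripts"
+```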
+
+The entries in the ConfigMap should contain file-like keys with the `.yaml` extension. Each file in the ConfigMap should be valid YAML and contain the following keys:
+
+ * name (string): the name of the script
+ * description (string): description of the script
+ * frequencyS (int): frequency to execute the script in seconds
+ * script (string): the actual PXL script to execute
+ * addExcludes (optional boolean, `false` by default): add pod and namespace excludes to the custom script
+
+For more detailed information about the custom scripts see [the New Relic Pixie integration repo](https://github.com/newrelic/newrelic-pixie-integration/).
+
+```yaml
+customScripts:
+ custom1.yaml: |
+ name: "custom1"
+ description: "Custom script 1"
+ frequencyS: 60
+ script: |
+ import px
+
+ df = px.DataFrame(table='http_events', start_time=px.plugin.start_time)
+
+ ns_prefix = df.ctx['namespace'] + '/'
+ df.container = df.ctx['container_name']
+ df.pod = px.strip_prefix(ns_prefix, df.ctx['pod'])
+ df.service = px.strip_prefix(ns_prefix, df.ctx['service'])
+ df.namespace = df.ctx['namespace']
+
+ df.status_code = df.resp_status
+
+ df = df.groupby(['status_code', 'pod', 'container','service', 'namespace']).agg(
+ latency_min=('latency', px.min),
+ latency_max=('latency', px.max),
+ latency_sum=('latency', px.sum),
+ latency_count=('latency', px.count),
+ time_=('time_', px.max),
+ )
+
+ df.latency_min = df.latency_min / 1000000
+ df.latency_max = df.latency_max / 1000000
+ df.latency_sum = df.latency_sum / 1000000
+
+ df.cluster_name = px.vizier_name()
+ df.cluster_id = px.vizier_id()
+ df.pixie = 'pixie'
+
+ px.export(
+ df, px.otel.Data(
+ resource={
+ 'service.name': df.service,
+ 'k8s.container.name': df.container,
+ 'service.instance.id': df.pod,
+ 'k8s.pod.name': df.pod,
+ 'k8s.namespace.name': df.namespace,
+ 'px.cluster.id': df.cluster_id,
+ 'k8s.cluster.name': df.cluster_name,
+ 'instrumentation.provider': df.pixie,
+ },
+ data=[
+ px.otel.metric.Summary(
name='http.server.duration', + description='measures the duration of the inbound HTTP request', + # Unit is not supported yet + # unit='ms', + count=df.latency_count, + sum=df.latency_sum, + quantile_values={ + 0.0: df.latency_min, + 1.0: df.latency_max, + }, + attributes={ + 'http.status_code': df.status_code, + }, + )], + ), + ) +``` + + +## Resources + +The default set of resources assigned to the pods is shown below: + +```yaml +resources: + limits: + memory: 250M + requests: + cpu: 100m + memory: 250M +``` + diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/ci/test-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/ci/test-values.yaml new file mode 100644 index 0000000000..580f9b0ba1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/ci/test-values.yaml @@ -0,0 +1,5 @@ +global: + licenseKey: 1234567890abcdef1234567890abcdef12345678 + apiKey: 1234567890abcdef + cluster: test-cluster +clusterId: foobar diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/NOTES.txt b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/NOTES.txt new file mode 100644 index 0000000000..d542838894 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/NOTES.txt @@ -0,0 +1,27 @@ +{{- if (include "newrelic-pixie.areValuesValid" .) }} + +Your deployment of the New Relic Pixie integration is complete. + +Please ensure this integration is deployed in the same namespace +as Pixie or manually specify the clusterId. +{{- else -}} +############################################################### +#### ERROR: You did not set all the required values. 
####
+###############################################################
+
+This deployment will be incomplete until you set all the required values:
+
+* Cluster name
+* New Relic license key
+* Pixie API key
+
+To fix a simple installation, run:
+
+ helm upgrade {{ .Release.Name }} \
+ --set cluster=YOUR-CLUSTER-NAME \
+ --set licenseKey=YOUR-LICENSE-KEY \
+ --set apiKey=YOUR-API-KEY \
+ -n {{ .Release.Namespace }} \
+ newrelic/newrelic-pixie
+
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/_helpers.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/_helpers.tpl
new file mode 100644
index 0000000000..40b9c68df1
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/_helpers.tpl
@@ -0,0 +1,172 @@
+{{/* vim: set filetype=mustache: */}}
+{{/*
+Expand the name of the chart.
+*/}}
+{{- define "newrelic-pixie.name" -}}
+{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+
+{{- define "newrelic-pixie.namespace" -}}
+{{- if .Values.namespace -}}
+ {{- .Values.namespace -}}
+{{- else -}}
+ {{- .Release.Namespace | default "pl" -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Create a default fully qualified app name.
+We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
+*/}}
+{{- define "newrelic-pixie.fullname" -}}
+{{- $name := default .Chart.Name .Values.nameOverride -}}
+{{- if ne $name .Release.Name -}}
+{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
+{{- else -}}
+{{- printf "%s" $name | trunc 63 | trimSuffix "-" -}}
+{{- end -}}
+{{- end -}}
+
+{{/* Generate basic labels */}}
+{{- define "newrelic-pixie.labels" }}
+app: {{ template "newrelic-pixie.name" . }}
+app.kubernetes.io/name: {{ include "newrelic-pixie.name" . 
}} +chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }} +heritage: {{.Release.Service }} +release: {{.Release.Name }} +{{- end }} + +{{- define "newrelic-pixie.cluster" -}} +{{- if .Values.global -}} + {{- if .Values.global.cluster -}} + {{- .Values.global.cluster -}} + {{- else -}} + {{- .Values.cluster | default "" -}} + {{- end -}} +{{- else -}} + {{- .Values.cluster | default "" -}} +{{- end -}} +{{- end -}} + +{{- define "newrelic-pixie.nrStaging" -}} +{{- if .Values.global }} + {{- if .Values.global.nrStaging }} + {{- .Values.global.nrStaging -}} + {{- end -}} +{{- else if .Values.nrStaging }} + {{- .Values.nrStaging -}} +{{- end -}} +{{- end -}} + +{{- define "newrelic-pixie.licenseKey" -}} +{{- if .Values.global}} + {{- if .Values.global.licenseKey }} + {{- .Values.global.licenseKey -}} + {{- else -}} + {{- .Values.licenseKey | default "" -}} + {{- end -}} +{{- else -}} + {{- .Values.licenseKey | default "" -}} +{{- end -}} +{{- end -}} + +{{- define "newrelic-pixie.apiKey" -}} +{{- if .Values.global}} + {{- if .Values.global.apiKey }} + {{- .Values.global.apiKey -}} + {{- else -}} + {{- .Values.apiKey | default "" -}} + {{- end -}} +{{- else -}} + {{- .Values.apiKey | default "" -}} +{{- end -}} +{{- end -}} + +{{- /* +adapted from https://github.com/newrelic/helm-charts/blob/af747af93fb5b912374196adc59b552965b6e133/library/common-library/templates/_low-data-mode.tpl +TODO: actually use common-library chart dep +*/ -}} +{{- /* +Abstraction of the lowDataMode toggle. +This helper allows to override the global `.global.lowDataMode` with the value of `.lowDataMode`. 
+Returns "true" if `lowDataMode` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic-pixie.lowDataMode" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "lowDataMode" | kindIs "bool") -}} + {{- if .Values.lowDataMode -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.lowDataMode" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.lowDataMode -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "lowDataMode" | kindIs "bool" -}} + {{- if $global.lowDataMode -}} + {{- $global.lowDataMode -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the customSecretName where the New Relic license is being stored. +*/}} +{{- define "newrelic-pixie.customSecretName" -}} +{{- if .Values.global }} + {{- if .Values.global.customSecretName }} + {{- .Values.global.customSecretName -}} + {{- else -}} + {{- .Values.customSecretName | default "" -}} + {{- end -}} +{{- else -}} + {{- .Values.customSecretName | default "" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the customSecretApiKeyName where the Pixie API key is being stored. 
+*/}} +{{- define "newrelic-pixie.customSecretApiKeyName" -}} + {{- .Values.customSecretApiKeyName | default "" -}} +{{- end -}} + +{{/* +Return the customSecretLicenseKey +*/}} +{{- define "newrelic-pixie.customSecretLicenseKey" -}} +{{- if .Values.global }} + {{- if .Values.global.customSecretLicenseKey }} + {{- .Values.global.customSecretLicenseKey -}} + {{- else -}} + {{- .Values.customSecretLicenseKey | default "" -}} + {{- end -}} +{{- else -}} + {{- .Values.customSecretLicenseKey | default "" -}} +{{- end -}} +{{- end -}} + +{{/* +Return the customSecretApiKeyKey +*/}} +{{- define "newrelic-pixie.customSecretApiKeyKey" -}} + {{- .Values.customSecretApiKeyKey | default "" -}} +{{- end -}} + +{{/* +Returns if the template should render, it checks if the required values +licenseKey and cluster are set. +*/}} +{{- define "newrelic-pixie.areValuesValid" -}} +{{- $cluster := include "newrelic-pixie.cluster" . -}} +{{- $licenseKey := include "newrelic-pixie.licenseKey" . -}} +{{- $apiKey := include "newrelic-pixie.apiKey" . -}} +{{- $customSecretName := include "newrelic-pixie.customSecretName" . -}} +{{- $customSecretLicenseKey := include "newrelic-pixie.customSecretLicenseKey" . -}} +{{- $customSecretApiKeyKey := include "newrelic-pixie.customSecretApiKeyKey" . -}} +{{- and (or (and $licenseKey $apiKey) (and $customSecretName $customSecretLicenseKey $customSecretApiKeyKey)) $cluster}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/configmap.yaml new file mode 100644 index 0000000000..19f7fe61ac --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/configmap.yaml @@ -0,0 +1,12 @@ +{{- if (include "newrelic-pixie.areValuesValid" .) }} +{{- if .Values.customScripts }} +apiVersion: v1 +kind: ConfigMap +metadata: + namespace: {{ template "newrelic-pixie.namespace" . 
}} + labels: {{ include "newrelic-pixie.labels" . | indent 4 }} + name: {{ template "newrelic-pixie.fullname" . }}-scripts +data: +{{- toYaml .Values.customScripts | nindent 2 }} +{{- end }} +{{- end }} \ No newline at end of file diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/job.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/job.yaml new file mode 100644 index 0000000000..89b97514fa --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/job.yaml @@ -0,0 +1,164 @@ +{{- if (include "newrelic-pixie.areValuesValid" .) }} +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ template "newrelic-pixie.fullname" . }} + namespace: {{ template "newrelic-pixie.namespace" . }} + labels: + {{- include "newrelic-pixie.labels" . | trim | nindent 4}} + {{- if ((.Values.job).labels) }} + {{- toYaml .Values.job.labels | nindent 4 }} + {{- end }} + {{- if ((.Values.job).annotations) }} + annotations: + {{ toYaml .Values.job.annotations | nindent 4 | trim }} + {{- end }} +spec: + backoffLimit: 4 + ttlSecondsAfterFinished: 600 + template: + metadata: + labels: + app.kubernetes.io/name: {{ template "newrelic-pixie.name" . }} + release: {{.Release.Name }} + {{- if .Values.podLabels }} + {{- toYaml .Values.podLabels | nindent 8 }} + {{- end }} + {{- if .Values.podAnnotations }} + annotations: + {{- toYaml .Values.podAnnotations | nindent 8 }} + {{- end }} + spec: + {{- if .Values.image.pullSecrets }} + imagePullSecrets: + {{- toYaml .Values.image.pullSecrets | nindent 8 }} + {{- end }} + restartPolicy: Never + initContainers: + - name: cluster-registration-wait + image: gcr.io/pixie-oss/pixie-dev-public/curl:1.0 + command: ['sh', '-c', 'set -x; + URL="https://${SERVICE_NAME}:${SERVICE_PORT}/readyz"; + until [ $(curl -m 0.5 -s -o /dev/null -w "%{http_code}" -k ${URL}) -eq 200 ]; do + echo "Waiting for cluster registration. If this takes too long check the vizier-cloud-connector logs." 
+ sleep 2; + done; + '] + env: + # The name of the Pixie service which connects to Pixie Cloud for cluster registration. + - name: SERVICE_NAME + value: "vizier-cloud-connector-svc" + - name: SERVICE_PORT + value: "50800" + containers: + - name: {{ template "newrelic-pixie.name" . }} + image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}" + imagePullPolicy: "{{ .Values.image.pullPolicy }}" + env: + - name: CLUSTER_NAME + value: {{ template "newrelic-pixie.cluster" . }} + - name: NR_LICENSE_KEY + valueFrom: + secretKeyRef: + {{- if (include "newrelic-pixie.licenseKey" .) }} + name: {{ template "newrelic-pixie.fullname" . }}-secrets + key: newrelicLicenseKey + {{- else }} + name: {{ include "newrelic-pixie.customSecretName" . }} + key: {{ include "newrelic-pixie.customSecretLicenseKey" . }} + {{- end }} + - name: PIXIE_API_KEY + valueFrom: + secretKeyRef: + {{- if (include "newrelic-pixie.apiKey" .) }} + name: {{ template "newrelic-pixie.fullname" . }}-secrets + key: pixieApiKey + {{- else }} + name: {{ include "newrelic-pixie.customSecretApiKeyName" . }} + key: {{ include "newrelic-pixie.customSecretApiKeyKey" . }} + {{- end }} + - name: PIXIE_CLUSTER_ID + {{- if .Values.clusterId }} + value: {{ .Values.clusterId -}} + {{- else }} + valueFrom: + secretKeyRef: + key: cluster-id + name: pl-cluster-secrets + {{- end }} + {{- if .Values.verbose }} + - name: VERBOSE + value: "true" + {{- end }} + {{- if (include "newrelic-pixie.lowDataMode" .) }} + - name: COLLECT_INTERVAL_SEC + value: "15" + - name: HTTP_SPAN_LIMIT + value: "750" + - name: DB_SPAN_LIMIT + value: "250" + {{- else }} + - name: COLLECT_INTERVAL_SEC + value: "10" + - name: HTTP_SPAN_LIMIT + value: "1500" + - name: DB_SPAN_LIMIT + value: "500" + {{- end }} + {{- if (include "newrelic-pixie.nrStaging" .) }} + - name: NR_OTLP_HOST + value: "staging-otlp.nr-data.net:4317" + {{- end }} + {{- if or .Values.endpoint (include "newrelic-pixie.nrStaging" .) 
}} + - name: PIXIE_ENDPOINT + {{- if .Values.endpoint }} + value: {{ .Values.endpoint | quote }} + {{- else }} + value: "staging.withpixie.dev:443" + {{- end }} + {{- end }} + {{- if .Values.proxy }} + - name: HTTP_PROXY + value: {{ .Values.proxy | quote }} + - name: HTTPS_PROXY + value: {{ .Values.proxy | quote }} + {{- end }} + {{- if .Values.excludePodsRegex }} + - name: EXCLUDE_PODS_REGEX + value: {{ .Values.excludePodsRegex | quote }} + {{- end }} + {{- if .Values.excludeNamespacesRegex }} + - name: EXCLUDE_NAMESPACES_REGEX + value: {{ .Values.excludeNamespacesRegex | quote }} + {{- end }} + {{- if .Values.resources }} + resources: + {{- toYaml .Values.resources | nindent 10 }} + {{- end }} + {{- if or .Values.customScriptsConfigMap .Values.customScripts }} + volumeMounts: + - name: scripts + mountPath: "/scripts" + readOnly: true + volumes: + - name: scripts + configMap: + {{- if .Values.customScriptsConfigMap }} + name: {{ .Values.customScriptsConfigMap }} + {{- else }} + name: {{ template "newrelic-pixie.fullname" . }}-scripts + {{- end}} + {{- end }} + {{- if $.Values.nodeSelector }} + nodeSelector: + {{- toYaml $.Values.nodeSelector | nindent 8 }} + {{- end }} + {{- if .Values.tolerations }} + tolerations: + {{- toYaml .Values.tolerations | nindent 8 }} + {{- end }} + {{- if .Values.affinity }} + affinity: + {{- toYaml .Values.affinity | nindent 8 }} + {{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/secret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/secret.yaml new file mode 100644 index 0000000000..4d95618772 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/templates/secret.yaml @@ -0,0 +1,20 @@ +{{- if (include "newrelic-pixie.areValuesValid" .) }} +{{- $licenseKey := include "newrelic-pixie.licenseKey" . -}} +{{- $apiKey := include "newrelic-pixie.apiKey" . 
-}} +{{- if or $apiKey $licenseKey}} +apiVersion: v1 +kind: Secret +metadata: + namespace: {{ template "newrelic-pixie.namespace" . }} + labels: {{ include "newrelic-pixie.labels" . | indent 4 }} + name: {{ template "newrelic-pixie.fullname" . }}-secrets +type: Opaque +data: + {{- if $licenseKey }} + newrelicLicenseKey: {{ $licenseKey | b64enc }} + {{- end }} + {{- if $apiKey }} + pixieApiKey: {{ include "newrelic-pixie.apiKey" . | b64enc -}} + {{- end }} +{{- end }} +{{- end}} \ No newline at end of file diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/tests/configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/tests/configmap.yaml new file mode 100644 index 0000000000..ecba6363bd --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/tests/configmap.yaml @@ -0,0 +1,44 @@ +suite: test custom scripts ConfigMap +templates: + - templates/configmap.yaml +tests: + - it: ConfigMap is created + set: + cluster: "test-cluster" + licenseKey: "license123" + apiKey: "api123" + customScripts: + custom1.yaml: | + name: "custom1" + description: "Custom script 1" + frequencyS: 60 + script: | + import px + df = px.DataFrame(table='http_events', start_time=px.plugin.start_time) + asserts: + - isKind: + of: ConfigMap + - equal: + path: data.custom1\.yaml + value: |- + name: "custom1" + description: "Custom script 1" + frequencyS: 60 + script: | + import px + df = px.DataFrame(table='http_events', start_time=px.plugin.start_time) + - equal: + path: metadata.name + value: RELEASE-NAME-newrelic-pixie-scripts + - equal: + path: metadata.namespace + value: NAMESPACE + - it: ConfigMap is empty + set: + cluster: "test-cluster" + licenseKey: "license123" + apiKey: "api123" + customScripts: {} + asserts: + - hasDocuments: + count: 0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/tests/jobs.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/tests/jobs.yaml new file mode 100644 index 
0000000000..03a3d86b8f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/tests/jobs.yaml @@ -0,0 +1,138 @@ +suite: test job +templates: + - templates/job.yaml +tests: + - it: Test primary fields of job + set: + cluster: "test-cluster" + licenseKey: "license123" + apiKey: "api123" + image: + tag: "latest" + asserts: + - isKind: + of: Job + - equal: + path: "metadata.name" + value: "RELEASE-NAME-newrelic-pixie" + - equal: + path: "metadata.namespace" + value: "NAMESPACE" + - equal: + path: "spec.template.spec.containers[0].image" + value: "newrelic/newrelic-pixie-integration:latest" + - equal: + path: "spec.template.spec.containers[0].env" + value: + - name: CLUSTER_NAME + value: test-cluster + - name: NR_LICENSE_KEY + valueFrom: + secretKeyRef: + key: newrelicLicenseKey + name: RELEASE-NAME-newrelic-pixie-secrets + - name: PIXIE_API_KEY + valueFrom: + secretKeyRef: + key: pixieApiKey + name: RELEASE-NAME-newrelic-pixie-secrets + - name: PIXIE_CLUSTER_ID + valueFrom: + secretKeyRef: + key: cluster-id + name: pl-cluster-secrets + - isEmpty: + path: "spec.template.spec.containers[0].volumeMounts" + - isEmpty: + path: "spec.template.spec.volumes" + - it: Job with clusterId + set: + cluster: "test-cluster" + licenseKey: "license123" + apiKey: "api123" + clusterId: "cid123" + asserts: + - equal: + path: "spec.template.spec.containers[0].env" + value: + - name: CLUSTER_NAME + value: test-cluster + - name: NR_LICENSE_KEY + valueFrom: + secretKeyRef: + key: newrelicLicenseKey + name: RELEASE-NAME-newrelic-pixie-secrets + - name: PIXIE_API_KEY + valueFrom: + secretKeyRef: + key: pixieApiKey + name: RELEASE-NAME-newrelic-pixie-secrets + - name: PIXIE_CLUSTER_ID + value: "cid123" + - it: Job with Pixie endpoint + set: + cluster: "test-cluster" + licenseKey: "license123" + apiKey: "api123" + clusterId: "cid123" + endpoint: "withpixie.ai:443" + asserts: + - equal: + path: "spec.template.spec.containers[0].env" + value: + - name: CLUSTER_NAME + value: 
test-cluster + - name: NR_LICENSE_KEY + valueFrom: + secretKeyRef: + key: newrelicLicenseKey + name: RELEASE-NAME-newrelic-pixie-secrets + - name: PIXIE_API_KEY + valueFrom: + secretKeyRef: + key: pixieApiKey + name: RELEASE-NAME-newrelic-pixie-secrets + - name: PIXIE_CLUSTER_ID + value: "cid123" + - name: PIXIE_ENDPOINT + value: "withpixie.ai:443" + - it: Job with custom scripts + set: + cluster: "test-cluster" + licenseKey: "license123" + apiKey: "api123" + customScripts: + custom1.yaml: | + name: "custom1" + asserts: + - equal: + path: "spec.template.spec.containers[0].volumeMounts" + value: + - name: scripts + mountPath: "/scripts" + readOnly: true + - equal: + path: "spec.template.spec.volumes[0]" + value: + name: scripts + configMap: + name: RELEASE-NAME-newrelic-pixie-scripts + - it: Job with custom script in defined ConfigMap + set: + cluster: "test-cluster" + licenseKey: "license123" + apiKey: "api123" + customScriptsConfigMap: "myconfigmap" + asserts: + - equal: + path: "spec.template.spec.containers[0].volumeMounts" + value: + - name: scripts + mountPath: "/scripts" + readOnly: true + - equal: + path: "spec.template.spec.volumes[0]" + value: + name: scripts + configMap: + name: myconfigmap diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/values.yaml new file mode 100644 index 0000000000..4103d54e91 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-pixie/values.yaml @@ -0,0 +1,70 @@ +# IMPORTANT: The Kubernetes cluster name +# https://docs.newrelic.com/docs/kubernetes-monitoring-integration +# cluster: "" + +# The New Relic license key +# licenseKey: "" + +# The Pixie API key +# apiKey: "" + +# The Pixie Cluster Id +# clusterId: + +# The Pixie endpoint +# endpoint: + +# If you already have a secret where the New Relic license key is stored, indicate its name here +# customSecretName: +# The key in the customSecretName secret that contains the 
New Relic license key +# customSecretLicenseKey: +# If you already have a secret where the Pixie API key is stored, indicate its name here +# customSecretApiKeyName: +# The key in the customSecretApiKeyName secret that contains the Pixie API key +# customSecretApiKeyKey: + +image: + repository: newrelic/newrelic-pixie-integration + tag: "" + pullPolicy: IfNotPresent + pullSecrets: [] + # - name: regsecret + +resources: + limits: + memory: 250M + requests: + cpu: 100m + memory: 250M + +# -- Annotations to add to the pod. +podAnnotations: {} +# -- Additional labels for chart pods +podLabels: {} + +job: + # job.annotations -- Annotations to add to the Job. + annotations: {} + # job.labels -- Labels to add to the Job. + labels: {} + +proxy: {} + +nodeSelector: {} + +tolerations: [] + +affinity: {} + +customScripts: {} +# Optionally the scripts can be provided in an already existing ConfigMap: +# customScriptsConfigMap: + +excludeNamespacesRegex: +excludePodsRegex: + +# When low data mode is enabled the integration performs heavier sampling on the Pixie span data +# and sets the collect interval to 15 seconds instead of 10 seconds. +# Can be set as a global: global.lowDataMode or locally as newrelic-pixie.lowDataMode +# @default -- false +lowDataMode: diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/.helmignore new file mode 100644 index 0000000000..0e8a0eb36f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/Chart.lock b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/Chart.lock new file mode 100644 index 0000000000..1dc01d3a1d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +digest: sha256:2e1da613fd8a52706bde45af077779c5d69e9e1641bdf5c982eaf6d1ac67a443 +generated: "2024-08-30T00:20:40.371047222Z" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/Chart.yaml new file mode 100644 index 0000000000..14e3dcee3e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/Chart.yaml @@ -0,0 +1,22 @@ +annotations: + configuratorVersion: 1.17.4 +apiVersion: v2 +appVersion: v2.37.8 +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +description: A Helm chart to deploy Prometheus with New Relic Prometheus Configurator. 
+keywords: +- newrelic +- prometheus +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +name: newrelic-prometheus-agent +type: application +version: 1.14.4 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/README.md new file mode 100644 index 0000000000..069b9a79b6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/README.md @@ -0,0 +1,244 @@ +# newrelic-prometheus-agent + +A Helm chart to deploy Prometheus with New Relic Prometheus Configurator. + +# Description + +This chart deploys Prometheus Server in Agent mode configured by the `newrelic-prometheus-configurator`. + +The solution is deployed as a StatefulSet for sharding purposes. +Each Pod will execute the `newrelic-prometheus-configurator` init container, which converts the provided config to a config file in the Prometheus format. Once the init container finishes and saves the config in a shared volume, the container running Prometheus in Agent mode will start.
+ +```mermaid +graph LR + subgraph pod[Pod] + direction TB + subgraph volume[shared volume] + plain[Prometheus Config] + end + + subgraph init-container[init Container] + configurator[Configurator] --> plain[Prometheus Config] + end + + subgraph container[Main Container] + plain[Prometheus Config] --> prom-agent[Prometheus-Agent] + end + + end + + subgraph configMap + NewRelic-Config --> configurator[Configurator] + end + +classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; +classDef pod fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; +class configurator,init-container,container,prom-agent k8s; +class volume plain; +class pod pod; + +``` + +# Helm installation + +You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the +[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository: + +```shell +helm repo add newrelic-prometheus https://newrelic.github.io/newrelic-prometheus-configurator +helm upgrade --install newrelic newrelic-prometheus/newrelic-prometheus-agent -f your-custom-values.yaml +``` + +## Values managed globally + +This chart implements the [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library) which +means that it honors a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found at +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). 
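As an illustration, the globals honored by the common library can be set once in the values file passed to `helm upgrade --install` above. This is only a sketch: the keys shown are taken from the list referenced above, and all values are placeholders.

```yaml
# your-custom-values.yaml (placeholder values): anything under `global` is
# honored by every New Relic chart that implements the common library.
global:
  cluster: my-cluster                # example cluster name
  nodeSelector:
    kubernetes.io/os: linux
  tolerations:
    - key: monitoring
      operator: Exists
      effect: NoSchedule
  proxy: http://proxy.internal:8080  # example proxy URL
```

Chart-local values with the same names (e.g. a top-level `tolerations`) take precedence over these globals, as described in the common-library user's guide.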
+ +## Chart particularities + +### Configuration + +The configuration used is similar to the [Prometheus configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/), but it includes some syntactic sugar to make it easy to set up some special use-cases like Kubernetes targets, sharding, and New Relic-related settings like remote write endpoints. + +The configurator will create [scrape_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config), [relabel_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config), [remote_write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) and other entries based on the defined configuration. + +As general rules: +- Config parameters having the same name as the [Prometheus configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) should have similar behavior. For example, the `tls_config` defined inside a `Kubernetes.jobs` will have the same definition as [tls_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#tls_config) of Prometheus and will affect all targets scraped by that job. +- Configs starting with the `extra_` prefix will be appended to the ones created by the Configurator. For example, the relabel configs defined in `extra_relabel_config` on the Kubernetes section will be appended to the end of the list that is already being generated by the Configurator for filtering, sharding, metadata decoration, etc. + +### Default Kubernetes jobs configuration + +By default, some Kubernetes objects are discovered and scraped by Prometheus.
Taking into account the snippet from `values.yaml` below: + +```yaml + integrations_filter: + enabled: true + source_labels: ["app.kubernetes.io/name", "app.newrelic.io/name", "k8s-app"] + app_values: ["redis", "traefik", "calico", "nginx", "coredns", "etcd", "cockroachdb", "velero", "harbor", "argocd"] + jobs: + - job_name_prefix: default + target_discovery: + pod: true + endpoints: true + filter: + annotations: + prometheus.io/scrape: true + - job_name_prefix: newrelic + integrations_filter: + enabled: false + target_discovery: + pod: true + endpoints: true + filter: + annotations: + newrelic.io/scrape: true +``` + +All pods and endpoints with the `newrelic.io/scrape: true` annotation will be scraped by default. + +Moreover, the solution will scrape as well all pods and endpoints with the `prometheus.io/scrape: true` annotations and +having one of the labels matching the integrations_filter configuration. + +Notice that at any point you can turn off the integrations filters and scrape all pods and services annotated with +`prometheus.io/scrape: true` by setting `config.kubernetes.integrations_filter.integrations_filter: false` or turning +it off in any specific job. + +### Kubernetes job examples + +#### API Server metrics +By default, the API Server Service named `kubernetes` is created in the `default` namespace. The following configuration will scrape metrics from all endpoints behind the mentioned service using the Prometheus Pod bearer token as Authorization Header: + +```yaml +config: + kubernetes: + jobs: + - job_name_prefix: apiserver + target_discovery: + endpoints: true + extra_relabel_config: + # Filter endpoints on `default` namespace associated to `kubernetes` service. 
- source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name] + action: keep + regex: default;kubernetes + + scheme: https + tls_config: + ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt + insecure_skip_verify: true + authorization: + credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token +``` + +### Metrics Filtering + +Check [docs](https://github.com/newrelic/newrelic-prometheus-configurator/blob/main/docs/MetricsFilters.md) for a detailed explanation and examples of how to filter metrics and labels. + +### Self metrics + +By default, a job is defined in `static_targets.jobs` to obtain self-metrics. In particular, a snippet like the one +below is used. If you define your own static_targets jobs, it is important to also include this kind of job in order +to keep getting self-metrics. + +```yaml +config: + static_targets: + jobs: + - job_name: self-metrics + targets: + - "localhost:9090" + extra_metric_relabel_config: + - source_labels: [__name__] + regex: "" + action: keep +``` + +### Low data mode + +There are two mechanisms to reduce the amount of data that this integration sends to New Relic. See this snippet from the `values.yaml` file: +```yaml +lowDataMode: false + +config: + common: + scrape_interval: 30s +``` + +You might set the `lowDataMode` flag to `true` (it filters some metrics that can also be collected using the New Relic Kubernetes integration); check +`values.yaml` for details. + +It is also possible to adjust how frequently Prometheus scrapes the targets by setting the `config.common.scrape_interval` value. + +### Affinities and tolerations + +The New Relic common library allows you to set affinities, tolerations, and node selectors globally using e.g. `.global.affinity` to ease the configuration +when you use this chart through `nri-bundle`. This chart has an extra level of granularity to the components that it deploys: +control plane, ksm, and kubelet.
+ +Take this snippet as an example: +```yaml +global: + affinity: {} +affinity: {} +``` + +The affinity is resolved by first checking the `affinity` field (at root level); if that value is empty, the chart falls back to `global.affinity`. + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| affinity | object | `{}` | Sets pod/node affinities almost globally. (See [Affinities and tolerations](README.md#affinities-and-tolerations)) | +| cluster | string | `""` | Name of the Kubernetes cluster monitored. Can be configured also with `global.cluster`. Note it will be set as an external label in the Prometheus configuration and will have precedence over `config.common.external_labels.cluster_name` and `customAttributes.cluster_name`. | +| config | object | See `values.yaml` | It holds the New Relic Prometheus configuration. Here you can easily set up Prometheus to scrape metrics, discover pods and endpoints in Kubernetes, and send metrics to New Relic using remote-write. | +| config.common | object | See `values.yaml` | Includes global configuration for the Prometheus agent. | +| config.common.scrape_interval | string | `"30s"` | How frequently to scrape targets by default, unless a different value is specified on the job. | +| config.extra_remote_write | object | `nil` | It includes additional remote-write configuration. Note this configuration is not parsed, so valid [prometheus remote_write configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) should be provided. | +| config.extra_scrape_configs | list | `[]` | It is possible to include extra scrape configuration in [prometheus format](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config). Please note, it should be a valid Prometheus configuration which will not be parsed by the chart. WARNING: extra_scrape_configs is a raw Prometheus config.
Therefore, the metrics collected thanks to it will not have by default the metadata (pod_name, service_name, ...) added by the configurator for the static or kubernetes jobs. This configuration should be used as a workaround whenever the kubernetes and static jobs do not cover a particular use-case. | +| config.kubernetes | object | See `values.yaml` | It allows defining scrape jobs for Kubernetes in a simple way. | +| config.kubernetes.integrations_filter.app_values | list | `["redis","traefik","calico","nginx","coredns","kube-dns","etcd","cockroachdb","velero","harbor","argocd"]` | app_values used to create the regex used in the relabel config added by the integration filters configuration. Note that a single regex will be created from this list, example: '.*(?i)(app1|app2|app3).*' | +| config.kubernetes.integrations_filter.enabled | bool | `true` | When the integration filters are enabled, only targets having one of the specified labels matching one of the values of app_values are scraped. Each job configuration can override this default. | +| config.kubernetes.integrations_filter.source_labels | list | `["app.kubernetes.io/name","app.newrelic.io/name","k8s-app"]` | source_labels used to fetch label values in the relabel config added by the integration filters configuration | +| config.newrelic_remote_write | object | See `values.yaml` | New Relic remote-write configuration settings. | +| config.static_targets | object | See `values.yaml`. | It allows defining scrape jobs for targets with static URLs. | +| config.static_targets.jobs | list | See `values.yaml`. | List of static target jobs. By default, it defines a job to get self-metrics. Please note, if you define `static_target.jobs` and would like to keep self-metrics you need to include a job like the one defined by default. | +| containerSecurityContext | object | `{}` | Sets security context (at container level).
Can be configured also with `global.containerSecurityContext` | +| customAttributes | object | `{}` | Adds extra attributes to Prometheus external labels. Can be configured also with `global.customAttributes`. Please note, values defined in `config.common.external_labels` will have precedence over `customAttributes`. | +| customSecretLicenseKey | string | `""` | In case you don't want to have the license key in your values, this allows you to point to the key of the secret where the license key is located. Can be configured also with `global.customSecretLicenseKey` | +| customSecretName | string | `""` | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there. Can be configured also with `global.customSecretName` | +| dnsConfig | object | `{}` | Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` | +| extraVolumeMounts | list | `[]` | Defines where to mount volumes specified with `extraVolumes` | +| extraVolumes | list | `[]` | Volumes to mount in the containers | +| fullnameOverride | string | `""` | Override the full name of the release | +| hostNetwork | bool | `false` | Sets pod's hostNetwork. Can be configured also with `global.hostNetwork` | +| images.configurator | object | See `values.yaml` | Image for New Relic configurator. | +| images.prometheus | object | See `values.yaml` | Image for Prometheus, which is executed in agent mode. | +| images.pullSecrets | list | `[]` | The secrets that are needed to pull images from a custom registry. | +| labels | object | `{}` | Additional labels for chart objects. Can be configured also with `global.labels` | +| licenseKey | string | `""` | This sets the license key to use. Can be configured also with `global.licenseKey` | +| lowDataMode | bool | false | Reduces the number of metrics sent in order to reduce costs. It can be configured also with `global.lowDataMode`.
Specifically, it makes Prometheus stop reporting some Kubernetes cluster-specific metrics, you can see details in `static/lowdatamodedefaults.yaml`. | +| metric_type_override | object | `{"enabled":true}` | It holds the configuration for metric type override. If enabled, a series of metric relabel configs will be added to `config.newrelic_remote_write.extra_write_relabel_configs`, you can check the whole list in `static/metrictyperelabeldefaults.yaml` | +| nameOverride | string | `""` | Override the name of the chart | +| nodeSelector | object | `{}` | Sets pod's node selector almost globally. (See [Affinities and tolerations](README.md#affinities-and-tolerations)) | +| nrStaging | bool | `false` | Send the metrics to the staging backend. Requires a valid staging license key. Can be configured also with `global.nrStaging` | +| podAnnotations | object | `{}` | Annotations to be added to all pods created by the integration. | +| podLabels | object | `{}` | Additional labels for chart pods. Can be configured also with `global.podLabels` | +| podSecurityContext | object | `{}` | Sets security context (at pod level). Can be configured also with `global.podSecurityContext` | +| priorityClassName | string | `""` | Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` | +| rbac.create | bool | `true` | Whether the chart should automatically create the RBAC objects required to run. | +| rbac.pspEnabled | bool | `false` | Whether the chart should create Pod Security Policy objects. | +| resources | object | `{}` | Resource limits to be added to all pods created by the integration. | +| serviceAccount | object | See `values.yaml` | Settings controlling ServiceAccount creation. | +| serviceAccount.create | bool | `true` | Whether the chart should automatically create the ServiceAccount objects required to run. | +| sharding | string | See `values.yaml` | Set up Prometheus replicas to allow horizontal scalability. 
| +tolerations | list | `[]` | Sets pod's tolerations to node taints almost globally. (See [Affinities and tolerations](README.md#affinities-and-tolerations)) | +| verboseLog | bool | `false` | Enables debug logging for Prometheus and the prometheus-configurator (or for all integrations if it is set globally). Can be configured also with `global.verboseLog` | + +## Maintainers + +* [juanjjaramillo](https://github.com/juanjjaramillo) +* [csongnr](https://github.com/csongnr) +* [dbudziwojskiNR](https://github.com/dbudziwojskiNR) diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/README.md.gotmpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/README.md.gotmpl new file mode 100644 index 0000000000..8738b73297 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/README.md.gotmpl @@ -0,0 +1,209 @@ +{{ template "chart.header" . }} +{{ template "chart.deprecationWarning" . }} + +{{ template "chart.description" . }} + +{{ template "chart.homepageLine" . }} + +# Description + +This chart deploys Prometheus Server in Agent mode configured by the `newrelic-prometheus-configurator`. + +The solution is deployed as a StatefulSet for sharding purposes. +Each Pod will execute the `newrelic-prometheus-configurator` init container, which converts the provided config to a config file in the Prometheus format. Once the init container finishes and saves the config in a shared volume, the container running Prometheus in Agent mode will start.
+ +```mermaid +graph LR + subgraph pod[Pod] + direction TB + subgraph volume[shared volume] + plain[Prometheus Config] + end + + subgraph init-container[init Container] + configurator[Configurator] --> plain[Prometheus Config] + end + + subgraph container[Main Container] + plain[Prometheus Config] --> prom-agent[Prometheus-Agent] + end + + end + + subgraph configMap + NewRelic-Config --> configurator[Configurator] + end + +classDef plain fill:#ddd,stroke:#fff,stroke-width:4px,color:#000; +classDef k8s fill:#326ce5,stroke:#fff,stroke-width:4px,color:#fff; +classDef pod fill:#fff,stroke:#bbb,stroke-width:2px,color:#326ce5; +class configurator,init-container,container,prom-agent k8s; +class volume plain; +class pod pod; + +``` + +# Helm installation + +You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the +[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository: + +```shell +helm repo add newrelic-prometheus https://newrelic.github.io/newrelic-prometheus-configurator +helm upgrade --install newrelic newrelic-prometheus/newrelic-prometheus-agent -f your-custom-values.yaml +``` + +{{ template "chart.sourcesSection" . }} + +## Values managed globally + +This chart implements the [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library) which +means that it honors a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found at +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). 
+ +## Chart particularities + +### Configuration + +The configuration used is similar to the [Prometheus configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/), but it includes some syntactic sugar to make it easy to set up some special use-cases like Kubernetes targets, sharding, and New Relic-related settings like remote write endpoints. + +The configurator will create [scrape_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config), [relabel_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config), [remote_write](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) and other entries based on the defined configuration. + +As general rules: +- Config parameters having the same name as the [Prometheus configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) should have similar behavior. For example, the `tls_config` defined inside a `Kubernetes.jobs` will have the same definition as [tls_config](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#tls_config) of Prometheus and will affect all targets scraped by that job. +- Configs starting with the `extra_` prefix will be appended to the ones created by the Configurator. For example, the relabel configs defined in `extra_relabel_config` on the Kubernetes section will be appended to the end of the list that is already being generated by the Configurator for filtering, sharding, metadata decoration, etc. + +### Default Kubernetes jobs configuration + +By default, some Kubernetes objects are discovered and scraped by Prometheus.
Taking into account the snippet from `values.yaml` below: + +```yaml + integrations_filter: + enabled: true + source_labels: ["app.kubernetes.io/name", "app.newrelic.io/name", "k8s-app"] + app_values: ["redis", "traefik", "calico", "nginx", "coredns", "etcd", "cockroachdb", "velero", "harbor", "argocd"] + jobs: + - job_name_prefix: default + target_discovery: + pod: true + endpoints: true + filter: + annotations: + prometheus.io/scrape: true + - job_name_prefix: newrelic + integrations_filter: + enabled: false + target_discovery: + pod: true + endpoints: true + filter: + annotations: + newrelic.io/scrape: true +``` + +All pods and endpoints with the `newrelic.io/scrape: true` annotation will be scraped by default. + +Moreover, the solution will scrape as well all pods and endpoints with the `prometheus.io/scrape: true` annotations and +having one of the labels matching the integrations_filter configuration. + +Notice that at any point you can turn off the integrations filters and scrape all pods and services annotated with +`prometheus.io/scrape: true` by setting `config.kubernetes.integrations_filter.integrations_filter: false` or turning +it off in any specific job. + +### Kubernetes job examples + +#### API Server metrics +By default, the API Server Service named `kubernetes` is created in the `default` namespace. The following configuration will scrape metrics from all endpoints behind the mentioned service using the Prometheus Pod bearer token as Authorization Header: + +```yaml +config: + kubernetes: + jobs: + - job_name_prefix: apiserver + target_discovery: + endpoints: true + extra_relabel_config: + # Filter endpoints on `default` namespace associated to `kubernetes` service. 
+ - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name] + action: keep + regex: default;kubernetes + + scheme: https + tls_config: + ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt + insecure_skip_verify: true + authorization: + credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token +``` + +### Metrics Filtering + +Check [docs](https://github.com/newrelic/newrelic-prometheus-configurator/blob/main/docs/MetricsFilters.md) for a detailed explanation and examples of how to filter metrics and labels. + +### Self metrics + +By default, a job is defined in `static_targets.jobs` to obtain self-metrics. In particular, a snippet like the one +below is used. If you define your own static_targets jobs, it is important to also include this kind of job in order +to keep getting self-metrics. + +```yaml +config: + static_targets: + jobs: + - job_name: self-metrics + targets: + - "localhost:9090" + extra_metric_relabel_config: + - source_labels: [__name__] + regex: "" + action: keep +``` + +### Low data mode + +There are two mechanisms to reduce the amount of data that this integration sends to New Relic. See this snippet from the `values.yaml` file: +```yaml +lowDataMode: false + +config: + common: + scrape_interval: 30s +``` + +You might set the `lowDataMode` flag to `true` (it filters some metrics that can also be collected using the New Relic Kubernetes integration); check +`values.yaml` for details. + +It is also possible to adjust how frequently Prometheus scrapes the targets by setting the `config.common.scrape_interval` value. + + +### Affinities and tolerations + +The New Relic common library allows you to set affinities, tolerations, and node selectors globally using e.g. `.global.affinity` to ease the configuration +when you use this chart through `nri-bundle`. This chart has an extra level of granularity to the components that it deploys: +control plane, ksm, and kubelet.
+ +Take this snippet as an example: +```yaml +global: + affinity: {} +affinity: {} +``` + +The affinity is resolved by first checking the `affinity` field (at root level); if that value is empty, the chart falls back to `global.affinity`. + +{{ template "chart.valuesSection" . }} + +{{ if .Maintainers }} +## Maintainers +{{ range .Maintainers }} +{{- if .Name }} +{{- if .Url }} +* [{{ .Name }}]({{ .Url }}) +{{- else }} +* {{ .Name }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/.helmignore new file mode 100644 index 0000000000..0e8a0eb36f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line.
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/Chart.yaml new file mode 100644 index 0000000000..f2ee5497ee --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/Chart.yaml @@ -0,0 +1,17 @@ +apiVersion: v2 +description: Provides helpers to provide consistency on all the charts +keywords: +- newrelic +- chart-library +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +- name: kang-makes + url: https://github.com/kang-makes +name: common-library +type: library +version: 1.3.0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/DEVELOPERS.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/DEVELOPERS.md new file mode 100644 index 0000000000..7208c673ec --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/DEVELOPERS.md @@ -0,0 +1,747 @@ +# Functions/templates documented for chart writers +Here is some rough documentation separated by the file that contains the function, the function +name and how to use it. We are not covering functions that start with `_` (e.g. +`newrelic.common.license._licenseKey`) because they are used internally by this library for +other helpers. Helm does not have the concept of "public" or "private" functions/templates so +this is a convention of ours. + +## _naming.tpl +These functions are used to name objects. 
+
+### `newrelic.common.naming.name`
+This is the same as the idiomatic `CHART-NAME.name` that is created when you use `helm create`.
+
+It honors `.Values.nameOverride`.
+
+Usage:
+```mustache
+{{ include "newrelic.common.naming.name" . }}
+```
+
+### `newrelic.common.naming.fullname`
+This is the same as the idiomatic `CHART-NAME.fullname` that is created when you use `helm create`.
+
+It honors `.Values.fullnameOverride`.
+
+Usage:
+```mustache
+{{ include "newrelic.common.naming.fullname" . }}
+```
+
+### `newrelic.common.naming.chart`
+This is the same as the idiomatic `CHART-NAME.chart` that is created when you use `helm create`.
+
+It is mostly useless for chart writers. It is used internally for templating the labels but there
+is no reason to keep it "private".
+
+Usage:
+```mustache
+{{ include "newrelic.common.naming.chart" . }}
+```
+
+### `newrelic.common.naming.truncateToDNS`
+This is a useful template to trim a string to 63 characters while ensuring it does not end with a dash (`-`).
+We truncate at 63 characters because some Kubernetes name fields are limited to this (by the DNS naming spec).
+
+Usage:
+```mustache
+{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-long-enough-to-break-something" }}
+{{- $truncatedName := include "newrelic.common.naming.truncateToDNS" $nameToTruncate }}
+{{- $truncatedName }}
+{{- /* This should print: a-really-really-really-really-REALLY-long-string-that-should-be */ -}}
+```
+
+### `newrelic.common.naming.truncateToDNSWithSuffix`
+This template function is the same as the above, but instead of receiving a string you should give it a `dict`
+with a `name` and a `suffix`.
This function will join them with a dash (`-`) and trim the `name` so the
+result of `name-suffix` is no more than 63 characters.
+
+Usage:
+```mustache
+{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-long-enough-to-break-something" }}
+{{- $suffix := "A-NOT-SO-LONG-SUFFIX" }}
+{{- $truncatedName := include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" $nameToTruncate "suffix" $suffix) }}
+{{- $truncatedName }}
+{{- /* This should print: a-really-really-really-really-REALLY-long-A-NOT-SO-LONG-SUFFIX */ -}}
+```
+
+
+
+## _labels.tpl
+### `newrelic.common.labels`, `newrelic.common.labels.selectorLabels` and `newrelic.common.labels.podLabels`
+These are functions that are used to label objects. They are configured by this `values.yaml`:
+```yaml
+global:
+  podLabels: {}  # included in all the pods of all the charts that implement this library
+  labels: {}  # included in all the objects of all the charts that implement this library
+podLabels: {}  # included in all the pods of this chart
+labels: {}  # included in all the objects of this chart
+```
+
+Label maps are merged from global to local values.
+
+A chart writer should use them like this:
+```mustache
+metadata:
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+spec:
+  selector:
+    matchLabels:
+      {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }}
+  template:
+    metadata:
+      labels:
+        {{- include "newrelic.common.labels.podLabels" . | nindent 8 }}
+```
+
+`newrelic.common.labels.podLabels` includes `newrelic.common.labels.selectorLabels` automatically.
+
+
+
+## _priority-class-name.tpl
+### `newrelic.common.priorityClassName`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  priorityClassName: ""
+priorityClassName: ""
+```
+
+Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this
+library to work properly.
If in your values a non-falsy `priorityClassName` is found, the global
+one is always ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.priorityClassName" . }}
+  priorityClassName: {{ . }}
+  {{- end }}
+```
+
+
+
+## _hostnetwork.tpl
+### `newrelic.common.hostNetwork`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  hostNetwork:  # Note that this is empty (nil)
+hostNetwork:  # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `hostNetwork` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.hostNetwork" . }}
+  hostNetwork: {{ . }}
+  {{- end }}
+```
+
+### `newrelic.common.hostNetwork.value`
+This function is an abstraction of the one above, but it returns "true" or "false" directly.
+
+Be careful when using this with an `if`, as Helm evaluates the string "false" as `true`.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }}
+```
+
+
+
+## _dnsconfig.tpl
+### `newrelic.common.dnsConfig`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  dnsConfig: {}
+dnsConfig: {}
+```
+
+Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this
+library to work properly. If in your values a non-falsy `dnsConfig` is found, the global
+one is always ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.dnsConfig" . }}
+  dnsConfig:
+    {{- . | nindent 4 }}
+  {{- end }}
+```
+
+
+
+## _images.tpl
+These functions help us to deal with how images are templated.
This allows setting `registries` +where to fetch images globally while being flexible enough to fit in different maps of images +and deployments with one or more images. This is the example of a complex `values.yaml` that +we are going to use during the documentation of these functions: + +```yaml +global: + images: + registry: nexus-3-instance.internal.clients-domain.tld +jobImage: + registry: # defaults to "example.tld" when empty in these examples + repository: ingress-nginx/kube-webhook-certgen + tag: v1.1.1 + pullPolicy: IfNotPresent + pullSecrets: [] +images: + integration: + registry: + repository: newrelic/nri-kube-events + tag: 1.8.0 + pullPolicy: IfNotPresent + agent: + registry: + repository: newrelic/k8s-events-forwarder + tag: 1.22.0 + pullPolicy: IfNotPresent + pullSecrets: [] +``` + +### `newrelic.common.images.image` +This will return a string with the image ready to be downloaded that includes the registry, the image and the tag. +`defaultRegistry` is used to keep `registry` field empty in `values.yaml` so you can override the image using +`global.images.registry`, your local `jobImage.registry` and be able to fallback to a registry that is not `docker.io` +(Or the default repository that the client could have set in the CRI). + +Usage: +```mustache +{{- /* For the integration */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.agent "context" .) }} +{{- /* For jobImage */}} +{{ include "newrelic.common.images.image" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }} +``` + +### `newrelic.common.images.registry` +It returns the registry from the global or local values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. 
+
+Usage:
+```mustache
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.agent "context" .) }}
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.registry" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }}
+```
+
+### `newrelic.common.images.repository`
+It returns the image repository from the values. You should avoid using this helper to create your image
+URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.jobImage "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.agent "context" .) }}
+```
+
+### `newrelic.common.images.tag`
+It returns the image's tag from the values. You should avoid using this helper to create your image
+URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.jobImage "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.integration "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.agent "context" .) }}
+```
+
+### `newrelic.common.images.renderPullSecrets`
+It returns a merged map that contains the pull secrets from the global configuration and the local one.
+
+Usage:
+```mustache
+{{- /* For jobImage */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.jobImage.pullSecrets "context" .) }}
+{{- /* For the integration */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }}
+{{- /* For the agent */}}
+{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }}
+```
+
+
+
+## _serviceaccount.tpl
+These functions are used to evaluate if the service account should be created, with which name, and which annotations to add to it.
+
+The functions that the common library has implemented for service accounts are:
+* `newrelic.common.serviceAccount.create`
+* `newrelic.common.serviceAccount.name`
+* `newrelic.common.serviceAccount.annotations`
+
+Usage:
+```mustache
+{{- if include "newrelic.common.serviceAccount.create" . -}}
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  {{- with (include "newrelic.common.serviceAccount.annotations" .) }}
+  annotations:
+    {{- . | nindent 4 }}
+  {{- end }}
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+  name: {{ include "newrelic.common.serviceAccount.name" . }}
+  namespace: {{ .Release.Namespace }}
+{{- end }}
+```
+
+
+
+## _affinity.tpl, _nodeselector.tpl and _tolerations.tpl
+These three files are almost the same and they follow the idiomatic way of `helm create`.
+
+Each function also looks if there is a global value, like the other helpers.
+```yaml
+global:
+  affinity: {}
+  nodeSelector: {}
+  tolerations: []
+affinity: {}
+nodeSelector: {}
+tolerations: []
+```
+
+The values here are replaced instead of being merged. If a value at root level is found, the global one is ignored.
+
+Usage (example in a pod spec):
+```mustache
+spec:
+  {{- with include "newrelic.common.nodeSelector" . }}
+  nodeSelector:
+    {{- . | nindent 4 }}
+  {{- end }}
+  {{- with include "newrelic.common.affinity" . }}
+  affinity:
+    {{- .
| nindent 4 }}
+  {{- end }}
+  {{- with include "newrelic.common.tolerations" . }}
+  tolerations:
+    {{- . | nindent 4 }}
+  {{- end }}
+```
+
+
+
+## _agent-config.tpl
+### `newrelic.common.agentConfig.defaults`
+This returns a YAML that the agent can use directly as a config. It includes other options from the values file like verbose mode,
+custom attributes, FedRAMP and such.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+  name: {{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "agent-config") }}
+  namespace: {{ .Release.Namespace }}
+data:
+  newrelic-infra.yml: |-
+    # This is the configuration file for the infrastructure agent. See:
+    # https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/
+    {{- include "newrelic.common.agentConfig.defaults" . | nindent 4 }}
+```
+
+
+
+## _cluster.tpl
+### `newrelic.common.cluster`
+Returns the cluster name.
+
+Usage:
+```mustache
+{{ include "newrelic.common.cluster" . }}
+```
+
+
+
+## _custom-attributes.tpl
+### `newrelic.common.customAttributes`
+Returns custom attributes in YAML format.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: example
+data:
+  custom-attributes.yaml: |
+    {{- include "newrelic.common.customAttributes" . | nindent 4 }}
+  custom-attributes.json: |
+    {{- include "newrelic.common.customAttributes" . | fromYaml | toJson | nindent 4 }}
+```
+
+
+
+## _fedramp.tpl
+### `newrelic.common.fedramp.enabled`
+Returns true if FedRAMP is enabled or an empty string if not. It can be safely used in conditionals as an empty string is a Helm falsiness.
+
+Usage:
+```mustache
+{{ include "newrelic.common.fedramp.enabled" . }}
+```
+
+### `newrelic.common.fedramp.enabled.value`
+Returns true if FedRAMP is enabled or false if not.
This is to have the value of FedRAMP ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.fedramp.enabled.value" . }}
+```
+
+
+
+## _license.tpl
+### `newrelic.common.license.secretName` and `newrelic.common.license.secretKeyName`
+Returns the secret, and the key inside the secret, from which to read the license key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the license key.
+
+To create the secret use `newrelic.common.license.secret`.
+
+Usage:
+```mustache
+{{- if and (.Values.controlPlane.enabled) (not (include "newrelic.fargate" .)) }}
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example
+spec:
+  containers:
+    - name: agent
+      env:
+        - name: "NRIA_LICENSE_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.license.secretName" . }}
+              key: {{ include "newrelic.common.license.secretKeyName" . }}
+```
+
+
+
+## _license_secret.tpl
+### `newrelic.common.license.secret`
+This function templates the secret that is used by agents and integrations with the license key provided by the user. It will
+template nothing (empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails in case the user has not provided any license key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.license.secret" . -}}
+```
+
+
+
+## _insights.tpl
+### `newrelic.common.insightsKey.secretName` and `newrelic.common.insightsKey.secretKeyName`
+Returns the secret, and the key inside the secret, from which to read the insights key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the insights key.
+
+To create the secret use `newrelic.common.insightsKey.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: statsd
+spec:
+  containers:
+    - name: statsd
+      env:
+        - name: "INSIGHTS_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.insightsKey.secretName" . }}
+              key: {{ include "newrelic.common.insightsKey.secretKeyName" . }}
+```
+
+
+
+## _insights_secret.tpl
+### `newrelic.common.insightsKey.secret`
+This function templates the secret that is used by agents and integrations with the insights key provided by the user. It will
+template nothing (empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails in case the user has not provided any insights key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.insightsKey.secret" . -}}
+```
+
+
+
+## _userkey.tpl
+### `newrelic.common.userKey.secretName` and `newrelic.common.userKey.secretKeyName`
+Returns the secret, and the key inside the secret, from which to read a user key.
+
+The common library will take care of using a user-provided custom secret or creating a secret that contains the user key.
+
+To create the secret use `newrelic.common.userKey.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: statsd
+spec:
+  containers:
+    - name: statsd
+      env:
+        - name: "API_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.userKey.secretName" . }}
+              key: {{ include "newrelic.common.userKey.secretKeyName" . }}
+```
+
+
+
+## _userkey_secret.tpl
+### `newrelic.common.userKey.secret`
+This function templates the secret that is used by agents and integrations with a user key provided by the user. It will
+template nothing (empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails in case the user has not provided any API key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.userKey.secret" . -}}
+```
+
+
+
+## _region.tpl
+### `newrelic.common.region.validate`
+Given a string, returns a normalized name for the region if it is valid.
+
+This function does not need the context of the chart, only the value to be validated. The region returned
+honors the region [definition of the newrelic-client-go implementation](https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21),
+so (as of 2024/09/14) it returns the region as "US", "EU", "Staging", or "Local".
+
+In case the region provided does not match one of these four, the helper calls `fail` and aborts the templating.
+
+Usage:
+```mustache
+{{ include "newrelic.common.region.validate" "us" }}
+```
+
+### `newrelic.common.region`
+It reads global and local variables for `region`:
+```yaml
+global:
+  region:  # Note that this can be empty (nil) or "" (empty string)
+region:  # Note that this can be empty (nil) or "" (empty string)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `region` is defined, the global one is always ignored.
+
+This function adds a safety check: users must provide a license key in their
+`values.yaml` or specify a global or local `region` value. To understand how the `region` value
+works, read the documentation of `newrelic.common.region.validate`.
+
+The function will compute the region as US, EU or Staging based on the license key and the
+`nrStaging` toggle. Whichever region is computed from the license/toggle can be overridden by
+the `region` value.
+
+Usage:
+```mustache
+{{ include "newrelic.common.region" . }}
+```
+
+
+
+## _low-data-mode.tpl
+### `newrelic.common.lowDataMode`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  lowDataMode:  # Note that this is empty (nil)
+lowDataMode:  # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `lowDataMode` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.lowDataMode" . }}
+```
+
+
+
+## _privileged.tpl
+### `newrelic.common.privileged`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  privileged:  # Note that this is empty (nil)
+privileged:  # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `privileged` is defined, the global one is always ignored.
+
+Chart writers can override this and put a `true` directly in the `values.yaml` to override the
+default of the common library.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged" . }}
+```
+
+### `newrelic.common.privileged.value`
+Returns true if privileged mode is enabled or false if not. This is to have the value of privileged ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged.value" . }}
+```
+
+
+
+## _proxy.tpl
+### `newrelic.common.proxy`
+Returns the proxy URL configured by the user.
+
+Usage:
+```mustache
+{{ include "newrelic.common.proxy" . }}
+```
+
+
+
+## _security-context.tpl
+Use these functions to share the security context among all charts.
Useful in clusters that have security policies enforcing not to
+run as the root user (like OpenShift) or for users that have admission webhooks.
+
+The functions are:
+* `newrelic.common.securityContext.container`
+* `newrelic.common.securityContext.pod`
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example
+spec:
+  {{- with include "newrelic.common.securityContext.pod" . }}
+  securityContext:
+    {{- . | nindent 4 }}
+  {{- end }}
+
+  containers:
+    - name: example
+      {{- with include "newrelic.common.securityContext.container" . }}
+      securityContext:
+        {{- . | nindent 8 }}
+      {{- end }}
+```
+
+
+
+## _staging.tpl
+### `newrelic.common.nrStaging`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  nrStaging:  # Note that this is empty (nil)
+nrStaging:  # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `nrStaging` is defined, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging" . }}
+```
+
+### `newrelic.common.nrStaging.value`
+Returns true if staging is enabled or false if not. This is to have the staging value ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging.value" . }}
+```
+
+
+
+## _verbose-log.tpl
+### `newrelic.common.verboseLog`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  verboseLog:  # Note that this is empty (nil)
+verboseLog:  # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your
+values a `verboseLog` is defined, the global one is always ignored.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog" .
}}
+```
+
+### `newrelic.common.verboseLog.valueAsBoolean`
+Returns true if verbose is enabled or false if not. This is to have the verbose value ready to be templated as a boolean.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsBoolean" . }}
+```
+
+### `newrelic.common.verboseLog.valueAsInt`
+Returns 1 if verbose is enabled or 0 if not. This is to have the verbose value ready to be templated as an integer.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsInt" . }}
+```
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/README.md
new file mode 100644
index 0000000000..10f08ca677
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/README.md
@@ -0,0 +1,106 @@
+# Helm Common library
+
+The common library is a way to unify the UX through all the Helm charts that implement it.
+
+The tooling suite that New Relic offers is huge and growing, and this library allows setting things globally
+and locally for a single chart.
+
+## Documentation for chart writers
+
+If you are writing a chart that is going to use this library you can check the [developers guide](/library/common-library/DEVELOPERS.md) to see all
+the functions/templates that we have implemented, what they do and how to use them.
+
+## Values managed globally
+
+We want to have a seamless experience through all the charts so we created this library that tries to standardize the behaviour
+of all the charts. Sadly, because of the complexity of all these integrations, not all the charts behave exactly as expected.
+
+An example is `newrelic-infrastructure`, which ignores `hostNetwork` in the control plane scraper because most users have the
+control plane listening on the node on `localhost`.
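The precedence scheme that the library follows for most of these keys (local value wins, global is the fallback, and a handful of "merged" keys combine both) can be sketched in Python. This is illustrative only; the real logic lives in the library's Helm templates, and the sample values are made up:

```python
# Illustrative sketch (NOT part of the chart): how the common library resolves
# most values -- the local (chart-level) value wins when it is set, otherwise
# the global one applies. Keys documented as "Merged" (labels, podLabels,
# annotations, pull secrets) combine both, with local entries winning.

def resolve(values: dict, key: str):
    """Local value wins when it is set (not nil and not empty string); else fall back to global.

    Note an explicit `false` counts as "set" (matching e.g. hostNetwork),
    while an empty string falls back (matching e.g. cluster).
    """
    local = values.get(key)
    if local is not None and local != "":
        return local
    return values.get("global", {}).get(key)


def resolve_merged(values: dict, key: str) -> dict:
    """Merged keys: global entries first, local entries overwrite on conflict."""
    merged = dict(values.get("global", {}).get(key) or {})
    merged.update(values.get(key) or {})
    return merged


values = {
    "global": {"cluster": "prod", "labels": {"team": "sre"}},
    "cluster": "",               # empty string -> falls back to global
    "labels": {"app": "agent"},  # merged with the global labels
}
print(resolve(values, "cluster"))        # -> prod
print(resolve_merged(values, "labels"))  # -> {'team': 'sre', 'app': 'agent'}
print(resolve({"global": {"hostNetwork": True}, "hostNetwork": False}, "hostNetwork"))  # -> False
```

The last call mirrors the "Merged" discussion later in this README: an explicit local `hostNetwork: false` overrides a global `true`, while replace-style keys such as `nodeSelector` never merge at all.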
+
+For each chart that has a special behavior (or further information about the behavior) there is a "chart particularities" section
+in its README.md that explains the expected behavior.
+
+At the time of writing this, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this
+library and honor global options as described in this document.
+
+Here is a list of global options:
+
+| Global keys | Local keys | Default | Merged[1](#values-managed-globally-1) | Description |
+|-------------|------------|---------|--------------------------------------------------|-------------|
+| global.cluster | cluster | `""` | | Name of the Kubernetes cluster monitored |
+| global.licenseKey | licenseKey | `""` | | Sets the license key to use |
+| global.customSecretName | customSecretName | `""` | | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there |
+| global.customSecretLicenseKey | customSecretLicenseKey | `""` | | In case you don't want to have the license key in your values, this allows you to point to the secret key where the license key is located |
+| global.podLabels | podLabels | `{}` | yes | Additional labels for chart pods |
+| global.labels | labels | `{}` | yes | Additional labels for chart objects |
+| global.priorityClassName | priorityClassName | `""` | | Sets pod's priorityClassName |
+| global.hostNetwork | hostNetwork | `false` | | Sets pod's hostNetwork |
+| global.dnsConfig | dnsConfig | `{}` | | Sets pod's dnsConfig |
+| global.images.registry | See [Further information](#values-managed-globally-2) | `""` | | Changes the registry where to get the images.
Useful when there is an internal image cache/proxy |
+| global.images.pullSecrets | See [Further information](#values-managed-globally-2) | `[]` | yes | Sets secrets to be able to fetch images |
+| global.podSecurityContext | podSecurityContext | `{}` | | Sets security context (at pod level) |
+| global.containerSecurityContext | containerSecurityContext | `{}` | | Sets security context (at container level) |
+| global.affinity | affinity | `{}` | | Sets pod/node affinities |
+| global.nodeSelector | nodeSelector | `{}` | | Sets pod's node selector |
+| global.tolerations | tolerations | `[]` | | Sets pod's tolerations to node taints |
+| global.serviceAccount.create | serviceAccount.create | `true` | | Configures if the service account should be created or not |
+| global.serviceAccount.name | serviceAccount.name | name of the release | | Changes the name of the service account. This is honored if you disable the creation of the service account on this chart so you can use your own. |
+| global.serviceAccount.annotations | serviceAccount.annotations | `{}` | yes | Adds these annotations to the service account we create |
+| global.customAttributes | customAttributes | `{}` | | Adds extra attributes to the cluster and all the metrics emitted to the backend |
+| global.fedramp | fedramp | `false` | | Enables FedRAMP |
+| global.lowDataMode | lowDataMode | `false` | | Reduces the number of metrics sent in order to reduce costs |
+| global.privileged | privileged | Depends on the chart | | In each integration it has a different behavior. See [Further information](#values-managed-globally-3) |
+| global.proxy | proxy | `""` | | Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port` |
+| global.nrStaging | nrStaging | `false` | | Sends the metrics to the staging backend.
Requires a valid staging license key |
+| global.verboseLog | verboseLog | `false` | | Sets the debug/trace logs for this integration, or for all integrations if set globally |
+
+### Further information
+
+#### 1. Merged
+
+Merged means that the values from global are not replaced by the local ones. Think of this example:
+```yaml
+global:
+  labels:
+    global: global
+  hostNetwork: true
+  nodeSelector:
+    global: global
+
+labels:
+  local: local
+nodeSelector:
+  local: local
+hostNetwork: false
+```
+
+These values will template `hostNetwork` to `false`, a map of labels `{ "global": "global", "local": "local" }` and a `nodeSelector` with
+`{ "local": "local" }`.
+
+As Helm by default merges all the maps, it could be confusing that we have two behaviors (merging `labels` and replacing `nodeSelector`)
+when templating the `values` from global to local. This is the rationale behind it:
+* `hostNetwork` is templated to `false` because it overrides the value defined globally.
+* `labels` are merged because the user may want to label all the New Relic pods at once and label other solution pods differently for
+  clarity's sake.
+* `nodeSelector` does not merge like `labels` because merging could make it harder to overwrite/delete a selector that comes from global,
+  given the logic that Helm follows when merging maps.
+
+
+#### 2. Fine grain registries
+
+Some charts only have one image while others can have two or more images. The local path for the registry can change depending
+on the chart itself.
+
+As this is mostly unique per Helm chart, you should take a look at the chart's values table (or directly at the `values.yaml` file) to see all the
+images that you can change.
+
+This should only be needed if you have an advanced setup that forces you to have enough granularity to set a proxy/cache registry per integration.
+
+
+
+#### 3. Privileged mode
+
+By default, from the common library, the privileged mode is set to false.
But most of the Helm charts require this to be true to fetch more
+metrics, so you could see `true` in some charts. The consequences of the privileged mode differ from one chart to another, so each chart that
+honors the privileged mode toggle should have a section in its README explaining its behavior with the mode enabled or disabled.
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_affinity.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_affinity.tpl
new file mode 100644
index 0000000000..1b2636754e
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_affinity.tpl
@@ -0,0 +1,10 @@
+{{- /* Defines the Pod affinity */ -}}
+{{- define "newrelic.common.affinity" -}}
+  {{- if .Values.affinity -}}
+    {{- toYaml .Values.affinity -}}
+  {{- else if .Values.global -}}
+    {{- if .Values.global.affinity -}}
+      {{- toYaml .Values.global.affinity -}}
+    {{- end -}}
+  {{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_agent-config.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_agent-config.tpl
new file mode 100644
index 0000000000..9c32861a02
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_agent-config.tpl
@@ -0,0 +1,26 @@
+{{/*
+This helper should return the defaults that all agents should have
+*/}}
+{{- define "newrelic.common.agentConfig.defaults" -}}
+{{- if include "newrelic.common.verboseLog" . }}
+log:
+  level: trace
+{{- end }}
+
+{{- if (include "newrelic.common.nrStaging" . ) }}
+staging: true
+{{- end }}
+
+{{- with include "newrelic.common.proxy" . }}
+proxy: {{ . | quote }}
+{{- end }}
+
+{{- with include "newrelic.common.fedramp.enabled" . }}
+fedramp: {{ .
}} +{{- end }} + +{{- with fromYaml ( include "newrelic.common.customAttributes" . ) }} +custom_attributes: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_cluster.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_cluster.tpl new file mode 100644 index 0000000000..0197dd35a3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_cluster.tpl @@ -0,0 +1,15 @@ +{{/* +Return the cluster +*/}} +{{- define "newrelic.common.cluster" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.cluster -}} + {{- .Values.cluster -}} +{{- else if $global.cluster -}} + {{- $global.cluster -}} +{{- else -}} + {{ fail "There is not cluster name definition set neither in `.global.cluster' nor `.cluster' in your values.yaml. Cluster name is required." }} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_custom-attributes.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_custom-attributes.tpl new file mode 100644 index 0000000000..92020719c3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_custom-attributes.tpl @@ -0,0 +1,17 @@ +{{/* +This will render custom attributes as a YAML ready to be templated or be used with `fromYaml`. 
+*/}} +{{- define "newrelic.common.customAttributes" -}} +{{- $customAttributes := dict -}} + +{{- $global := index .Values "global" | default dict -}} +{{- if $global.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes $global.customAttributes -}} +{{- end -}} + +{{- if .Values.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes .Values.customAttributes -}} +{{- end -}} + +{{- toYaml $customAttributes -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_dnsconfig.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_dnsconfig.tpl new file mode 100644 index 0000000000..d4e40aa8af --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_dnsconfig.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod dnsConfig */ -}} +{{- define "newrelic.common.dnsConfig" -}} + {{- if .Values.dnsConfig -}} + {{- toYaml .Values.dnsConfig -}} + {{- else if .Values.global -}} + {{- if .Values.global.dnsConfig -}} + {{- toYaml .Values.global.dnsConfig -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_fedramp.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_fedramp.tpl new file mode 100644 index 0000000000..9df8d6b5e9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_fedramp.tpl @@ -0,0 +1,25 @@ +{{- /* Defines the fedRAMP flag */ -}} +{{- define "newrelic.common.fedramp.enabled" -}} + {{- if .Values.fedramp -}} + {{- if .Values.fedramp.enabled -}} + {{- .Values.fedramp.enabled -}} + {{- end -}} + {{- else if .Values.global -}} + {{- if .Values.global.fedramp -}} + {{- if .Values.global.fedramp.enabled -}} + {{- 
.Values.global.fedramp.enabled -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + + + +{{- /* Return FedRAMP value directly ready to be templated */ -}} +{{- define "newrelic.common.fedramp.enabled.value" -}} +{{- if include "newrelic.common.fedramp.enabled" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_hostnetwork.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_hostnetwork.tpl new file mode 100644 index 0000000000..4cf017ef7e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_hostnetwork.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the hostNetwork toggle. +This helper allows to override the global `.global.hostNetwork` with the value of `.hostNetwork`. +Returns "true" if `hostNetwork` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.hostNetwork" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} + +{{- /* +`get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs + +We also want only to return when this is true, returning `false` here will template "false" (string) when doing +an `(include "newrelic.common.hostNetwork" .)`, which is not an "empty string" so it is `true` if it is used +as an evaluation somewhere else. +*/ -}} +{{- if get .Values "hostNetwork" | kindIs "bool" -}} + {{- if .Values.hostNetwork -}} + {{- .Values.hostNetwork -}} + {{- end -}} +{{- else if get $global "hostNetwork" | kindIs "bool" -}} + {{- if $global.hostNetwork -}} + {{- $global.hostNetwork -}} + {{- end -}} +{{- end -}} +{{- end -}} + + +{{- /* +Abstraction of the hostNetwork toggle. 
+This helper abstracts the function "newrelic.common.hostNetwork" to return true or false directly. +*/ -}} +{{- define "newrelic.common.hostNetwork.value" -}} +{{- if include "newrelic.common.hostNetwork" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_images.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_images.tpl new file mode 100644 index 0000000000..d4fb432905 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_images.tpl @@ -0,0 +1,94 @@ +{{- /* +Return the proper image name +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.image" -}} + {{- $registryName := include "newrelic.common.images.registry" ( dict "imageRoot" .imageRoot "defaultRegistry" .defaultRegistry "context" .context ) -}} + {{- $repositoryName := include "newrelic.common.images.repository" .imageRoot -}} + {{- $tag := include "newrelic.common.images.tag" ( dict "imageRoot" .imageRoot "context" .context) -}} + + {{- if $registryName -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag | quote -}} + {{- else -}} + {{- printf "%s:%s" $repositoryName $tag | quote -}} + {{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image registry +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.registry" -}} +{{- $globalRegistry := "" -}} +{{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- with .context.Values.global.images.registry -}} + {{- $globalRegistry = . 
-}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- $localRegistry := "" -}} +{{- if .imageRoot.registry -}} + {{- $localRegistry = .imageRoot.registry -}} +{{- end -}} + +{{- $registry := $localRegistry | default $globalRegistry | default .defaultRegistry -}} +{{- if $registry -}} + {{- $registry -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image repository +{{ include "newrelic.common.images.repository" .Values.path.to.the.image }} +*/ -}} +{{- define "newrelic.common.images.repository" -}} + {{- .repository -}} +{{- end -}} + + + +{{- /* +Return the proper image tag +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.path.to.the.image "context" .) }} +*/ -}} +{{- define "newrelic.common.images.tag" -}} + {{- .imageRoot.tag | default .context.Chart.AppVersion | toString -}} +{{- end -}} + + + +{{- /* +Return the proper Image Pull Registry Secret Names evaluating values as templates +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.path.to.the.images.pullSecrets1, .Values.path.to.the.images.pullSecrets2) "context" .) }} +*/ -}} +{{- define "newrelic.common.images.renderPullSecrets" -}} + {{- $flatlist := list }} + + {{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- if .context.Values.global.images.pullSecrets -}} + {{- range .context.Values.global.images.pullSecrets -}} + {{- $flatlist = append $flatlist . -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- range .pullSecrets -}} + {{- if not (empty .) -}} + {{- range . -}} + {{- $flatlist = append $flatlist . 
-}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- if $flatlist -}} + {{- toYaml $flatlist -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_insights.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_insights.tpl new file mode 100644 index 0000000000..895c377326 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_insights.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the Insights Key. +*/}} +{{- define "newrelic.common.insightsKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "insightskey" ) -}} +{{- include "newrelic.common.insightsKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +*/}} +{{- define "newrelic.common.insightsKey.secretKeyName" -}} +{{- include "newrelic.common.insightsKey._customSecretKey" . | default "insightsKey" -}} +{{- end -}} + +{{/* +Return local insightsKey if set, global otherwise. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._licenseKey" -}} +{{- if .Values.insightsKey -}} + {{- .Values.insightsKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.insightsKey -}} + {{- .Values.global.insightsKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the Insights Key. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.insightsKey._customSecretName" -}} +{{- if .Values.customInsightsKeySecretName -}} + {{- .Values.customInsightsKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretName -}} + {{- .Values.global.customInsightsKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._customSecretKey" -}} +{{- if .Values.customInsightsKeySecretKey -}} + {{- .Values.customInsightsKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretKey }} + {{- .Values.global.customInsightsKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_insights_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_insights_secret.yaml.tpl new file mode 100644 index 0000000000..556caa6ca6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_insights_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the insights key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.insightsKey.secret" }} +{{- if not (include "newrelic.common.insightsKey._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.insightsKey._licenseKey" .) }} + {{- fail "You must specify a insightsKey or a customInsightsSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.insightsKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.insightsKey.secretKeyName" . 
}}: {{ include "newrelic.common.insightsKey._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_labels.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_labels.tpl new file mode 100644 index 0000000000..b025948285 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_labels.tpl @@ -0,0 +1,54 @@ +{{/* +This will render the labels that should be used in all the manifests used by the helm chart. +*/}} +{{- define "newrelic.common.labels" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- $chart := dict "helm.sh/chart" (include "newrelic.common.naming.chart" . ) -}} +{{- $managedBy := dict "app.kubernetes.io/managed-by" .Release.Service -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $labels := mustMergeOverwrite $chart $managedBy $selectorLabels -}} +{{- if .Chart.AppVersion -}} +{{- $labels = mustMergeOverwrite $labels (dict "app.kubernetes.io/version" .Chart.AppVersion) -}} +{{- end -}} + +{{- $globalUserLabels := $global.labels | default dict -}} +{{- $localUserLabels := .Values.labels | default dict -}} + +{{- $labels = mustMergeOverwrite $labels $globalUserLabels $localUserLabels -}} + +{{- toYaml $labels -}} +{{- end -}} + + + +{{/* +This will render the labels that should be used in deployments/daemonsets template pods as a selector. +*/}} +{{- define "newrelic.common.labels.selectorLabels" -}} +{{- $name := dict "app.kubernetes.io/name" ( include "newrelic.common.naming.name" . 
) -}} +{{- $instance := dict "app.kubernetes.io/instance" .Release.Name -}} + +{{- $selectorLabels := mustMergeOverwrite $name $instance -}} + +{{- toYaml $selectorLabels -}} +{{- end }} + + + +{{/* +Pod labels +*/}} +{{- define "newrelic.common.labels.podLabels" -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $global := index .Values "global" | default dict -}} +{{- $globalPodLabels := $global.podLabels | default dict }} + +{{- $localPodLabels := .Values.podLabels | default dict }} + +{{- $podLabels := mustMergeOverwrite $selectorLabels $globalPodLabels $localPodLabels -}} + +{{- toYaml $podLabels -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_license.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_license.tpl new file mode 100644 index 0000000000..cb349f6bb6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_license.tpl @@ -0,0 +1,68 @@ +{{/* +Return the name of the secret holding the License Key. +*/}} +{{- define "newrelic.common.license.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "license" ) -}} +{{- include "newrelic.common.license._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +*/}} +{{- define "newrelic.common.license.secretKeyName" -}} +{{- include "newrelic.common.license._customSecretKey" . | default "licenseKey" -}} +{{- end -}} + +{{/* +Return local licenseKey if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.license._licenseKey" -}} +{{- if .Values.licenseKey -}} + {{- .Values.licenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.licenseKey -}} + {{- .Values.global.licenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the License Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretName" -}} +{{- if .Values.customSecretName -}} + {{- .Values.customSecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretName -}} + {{- .Values.global.customSecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretKey" -}} +{{- if .Values.customSecretLicenseKey -}} + {{- .Values.customSecretLicenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretLicenseKey }} + {{- .Values.global.customSecretLicenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the user set a custom secret for the license. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._usesCustomSecret" -}} +{{- if or (include "newrelic.common.license._customSecretName" .) (include "newrelic.common.license._customSecretKey" .) 
-}} +true +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_license_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_license_secret.yaml.tpl new file mode 100644 index 0000000000..610a0a3370 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_license_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the license key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.license.secret" }} +{{- if not (include "newrelic.common.license._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.license._licenseKey" .) }} + {{- fail "You must specify a licenseKey or a customSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.license.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.license.secretKeyName" . }}: {{ include "newrelic.common.license._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_low-data-mode.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_low-data-mode.tpl new file mode 100644 index 0000000000..3dd55ef2ff --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_low-data-mode.tpl @@ -0,0 +1,26 @@ +{{- /* +Abstraction of the lowDataMode toggle. +This helper allows to override the global `.global.lowDataMode` with the value of `.lowDataMode`. 
+Returns "true" if `lowDataMode` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.lowDataMode" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "lowDataMode" | kindIs "bool") -}} + {{- if .Values.lowDataMode -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.lowDataMode" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.lowDataMode -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "lowDataMode" | kindIs "bool" -}} + {{- if $global.lowDataMode -}} + {{- $global.lowDataMode -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_naming.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_naming.tpl new file mode 100644 index 0000000000..19fa92648c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_naming.tpl @@ -0,0 +1,73 @@ +{{/* +This is an function to be called directly with a string just to truncate strings to +63 chars because some Kubernetes name fields are limited to that. +*/}} +{{- define "newrelic.common.naming.truncateToDNS" -}} +{{- . | trunc 63 | trimSuffix "-" }} +{{- end }} + + + +{{- /* +Given a name and a suffix returns a 'DNS Valid' which always include the suffix, truncating the name if needed. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 
+If suffix is too long it gets truncated but it always takes precedence over name, so a 63 chars suffix would suppress the name. +Usage: +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" "" "suffix" "my-suffix" ) }} +*/ -}} +{{- define "newrelic.common.naming.truncateToDNSWithSuffix" -}} +{{- $suffix := (include "newrelic.common.naming.truncateToDNS" .suffix) -}} +{{- $maxLen := (max (sub 63 (add1 (len $suffix))) 0) -}} {{- /* We prepend "-" to the suffix so an additional character is needed */ -}} + +{{- $newName := .name | trunc ($maxLen | int) | trimSuffix "-" -}} +{{- if $newName -}} +{{- printf "%s-%s" $newName $suffix -}} +{{- else -}} +{{ $suffix }} +{{- end -}} + +{{- end -}} + + + +{{/* +Expand the name of the chart. +Uses the Chart name by default if nameOverride is not set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.name" -}} +{{- $name := .Values.nameOverride | default .Chart.Name -}} +{{- include "newrelic.common.naming.truncateToDNS" $name -}} +{{- end }} + + + +{{/* +Create a default fully qualified app name. +By default the full name will be "" just in if it has the chart name included in that, if not +it will be concatenated like "-". This could change if fullnameOverride or +nameOverride are set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.fullname" -}} +{{- $name := include "newrelic.common.naming.name" . -}} + +{{- if .Values.fullnameOverride -}} + {{- $name = .Values.fullnameOverride -}} +{{- else if not (contains $name .Release.Name) -}} + {{- $name = printf "%s-%s" .Release.Name $name -}} +{{- end -}} + +{{- include "newrelic.common.naming.truncateToDNS" $name -}} + +{{- end -}} + + + +{{/* +Create chart name and version as used by the chart label. +This function should not be used for naming objects. 
Use "common.naming.{name,fullname}" instead. +*/}} +{{- define "newrelic.common.naming.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_nodeselector.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_nodeselector.tpl new file mode 100644 index 0000000000..d488873412 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_nodeselector.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod nodeSelector */ -}} +{{- define "newrelic.common.nodeSelector" -}} + {{- if .Values.nodeSelector -}} + {{- toYaml .Values.nodeSelector -}} + {{- else if .Values.global -}} + {{- if .Values.global.nodeSelector -}} + {{- toYaml .Values.global.nodeSelector -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_priority-class-name.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_priority-class-name.tpl new file mode 100644 index 0000000000..50182b7343 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_priority-class-name.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the pod priorityClassName */ -}} +{{- define "newrelic.common.priorityClassName" -}} + {{- if .Values.priorityClassName -}} + {{- .Values.priorityClassName -}} + {{- else if .Values.global -}} + {{- if .Values.global.priorityClassName -}} + {{- .Values.global.priorityClassName -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_privileged.tpl 
b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_privileged.tpl new file mode 100644 index 0000000000..f3ae814dd9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_privileged.tpl @@ -0,0 +1,28 @@ +{{- /* +This is a helper that returns whether the chart should assume the user is fine deploying privileged pods. +*/ -}} +{{- define "newrelic.common.privileged" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists. */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if get .Values "privileged" | kindIs "bool" -}} + {{- if .Values.privileged -}} + {{- .Values.privileged -}} + {{- end -}} +{{- else if get $global "privileged" | kindIs "bool" -}} + {{- if $global.privileged -}} + {{- $global.privileged -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* Return directly "true" or "false" based in the exist of "newrelic.common.privileged" */ -}} +{{- define "newrelic.common.privileged.value" -}} +{{- if include "newrelic.common.privileged" . 
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_proxy.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_proxy.tpl new file mode 100644 index 0000000000..60f34c7ec1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_proxy.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the proxy */ -}} +{{- define "newrelic.common.proxy" -}} + {{- if .Values.proxy -}} + {{- .Values.proxy -}} + {{- else if .Values.global -}} + {{- if .Values.global.proxy -}} + {{- .Values.global.proxy -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_region.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_region.tpl new file mode 100644 index 0000000000..bdcacf3235 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_region.tpl @@ -0,0 +1,74 @@ +{{/* +Return the region that is being used by the user +*/}} +{{- define "newrelic.common.region" -}} +{{- if and (include "newrelic.common.license._usesCustomSecret" .) (not (include "newrelic.common.region._fromValues" .)) -}} + {{- fail "This Helm Chart is not able to compute the region. You must specify a .global.region or .region if the license is set using a custom secret." -}} +{{- end -}} + +{{- /* Defaults */ -}} +{{- $region := "us" -}} +{{- if include "newrelic.common.nrStaging" . -}} + {{- $region = "staging" -}} +{{- else if include "newrelic.common.region._isEULicenseKey" . -}} + {{- $region = "eu" -}} +{{- end -}} + +{{- include "newrelic.common.region.validate" (include "newrelic.common.region._fromValues" . 
| default $region ) -}} +{{- end -}} + + + +{{/* +Returns the region from the values if valid. This only return the value from the `values.yaml`. +More intelligence should be used to compute the region. + +Usage: `include "newrelic.common.region.validate" "us"` +*/}} +{{- define "newrelic.common.region.validate" -}} +{{- /* Ref: https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21 */ -}} +{{- $region := . | lower -}} +{{- if eq $region "us" -}} + US +{{- else if eq $region "eu" -}} + EU +{{- else if eq $region "staging" -}} + Staging +{{- else if eq $region "local" -}} + Local +{{- else -}} + {{- fail (printf "the region provided is not valid: %s not in \"US\" \"EU\" \"Staging\" \"Local\"" .) -}} +{{- end -}} +{{- end -}} + + + +{{/* +Returns the region from the values. This only return the value from the `values.yaml`. +More intelligence should be used to compute the region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._fromValues" -}} +{{- if .Values.region -}} + {{- .Values.region -}} +{{- else if .Values.global -}} + {{- if .Values.global.region -}} + {{- .Values.global.region -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the license is for EU region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._isEULicenseKey" -}} +{{- if not (include "newrelic.common.license._usesCustomSecret" .) -}} + {{- $license := include "newrelic.common.license._licenseKey" . 
-}} + {{- if hasPrefix "eu" $license -}} + true + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_security-context.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_security-context.tpl new file mode 100644 index 0000000000..9edfcabfd0 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_security-context.tpl @@ -0,0 +1,23 @@ +{{- /* Defines the container securityContext context */ -}} +{{- define "newrelic.common.securityContext.container" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.containerSecurityContext -}} + {{- toYaml .Values.containerSecurityContext -}} +{{- else if $global.containerSecurityContext -}} + {{- toYaml $global.containerSecurityContext -}} +{{- end -}} +{{- end -}} + + + +{{- /* Defines the pod securityContext context */ -}} +{{- define "newrelic.common.securityContext.pod" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.podSecurityContext -}} + {{- toYaml .Values.podSecurityContext -}} +{{- else if $global.podSecurityContext -}} + {{- toYaml $global.podSecurityContext -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_serviceaccount.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_serviceaccount.tpl new file mode 100644 index 0000000000..2d352f6ea9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_serviceaccount.tpl @@ -0,0 +1,90 @@ +{{- /* Defines if the service account has to be created or not */ -}} +{{- define "newrelic.common.serviceAccount.create" -}} +{{- $valueFound := false -}} + +{{- /* Look for a global creation of a service 
account */ -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "create" | kindIs "bool") -}} + {{- $valueFound = true -}} + {{- if .Values.serviceAccount.create -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.serviceAccount.name" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.serviceAccount.create -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* Look for a local creation of a service account */ -}} +{{- if not $valueFound -}} + {{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} + {{- $global := index .Values "global" | default dict -}} + {{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "create" | kindIs "bool" -}} + {{- $valueFound = true -}} + {{- if $global.serviceAccount.create -}} + {{- $global.serviceAccount.create -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* In case no serviceAccount value has been found, default to "true" */ -}} +{{- if not $valueFound -}} +true +{{- end -}} +{{- end -}} + + + +{{- /* Defines the name of the service account */ -}} +{{- define "newrelic.common.serviceAccount.name" -}} +{{- $localServiceAccount := "" -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "name" | kindIs "string") -}} + {{- $localServiceAccount = .Values.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := "" -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "name" | kindIs "string" -}} + {{- $globalServiceAccount = $global.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- if (include "newrelic.common.serviceAccount.create" .)
-}} + {{- $localServiceAccount | default $globalServiceAccount | default (include "newrelic.common.naming.fullname" .) -}} +{{- else -}} + {{- $localServiceAccount | default $globalServiceAccount | default "default" -}} +{{- end -}} +{{- end -}} + + + +{{- /* Merge the global and local annotations for the service account */ -}} +{{- define "newrelic.common.serviceAccount.annotations" -}} +{{- $localServiceAccount := dict -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if get .Values.serviceAccount "annotations" -}} + {{- $localServiceAccount = .Values.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := dict -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "annotations" -}} + {{- $globalServiceAccount = $global.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $merged := mustMergeOverwrite $globalServiceAccount $localServiceAccount -}} + +{{- if $merged -}} + {{- toYaml $merged -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_staging.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_staging.tpl new file mode 100644 index 0000000000..bd9ad09bb9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_staging.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the nrStaging toggle. +This helper allows to override the global `.global.nrStaging` with the value of `.nrStaging`. 
+Returns "true" if `nrStaging` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.nrStaging" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "nrStaging" | kindIs "bool") -}} + {{- if .Values.nrStaging -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.nrStaging" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.nrStaging -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "nrStaging" | kindIs "bool" -}} + {{- if $global.nrStaging -}} + {{- $global.nrStaging -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Returns "true" or "false" directly instead of an empty string (Helm falsiness) based on the output of "newrelic.common.nrStaging" +*/ -}} +{{- define "newrelic.common.nrStaging.value" -}} +{{- if include "newrelic.common.nrStaging" .
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_tolerations.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_tolerations.tpl new file mode 100644 index 0000000000..e016b38e27 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_tolerations.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod tolerations */ -}} +{{- define "newrelic.common.tolerations" -}} + {{- if .Values.tolerations -}} + {{- toYaml .Values.tolerations -}} + {{- else if .Values.global -}} + {{- if .Values.global.tolerations -}} + {{- toYaml .Values.global.tolerations -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_userkey.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_userkey.tpl new file mode 100644 index 0000000000..982ea8e09d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_userkey.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the API Key. +*/}} +{{- define "newrelic.common.userKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "userkey" ) -}} +{{- include "newrelic.common.userKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +*/}} +{{- define "newrelic.common.userKey.secretKeyName" -}} +{{- include "newrelic.common.userKey._customSecretKey" . | default "userKey" -}} +{{- end -}} + +{{/* +Return local API Key if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.userKey._userKey" -}} +{{- if .Values.userKey -}} + {{- .Values.userKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.userKey -}} + {{- .Values.global.userKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the API Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretName" -}} +{{- if .Values.customUserKeySecretName -}} + {{- .Values.customUserKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretName -}} + {{- .Values.global.customUserKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretKey" -}} +{{- if .Values.customUserKeySecretKey -}} + {{- .Values.customUserKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretKey }} + {{- .Values.global.customUserKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_userkey_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_userkey_secret.yaml.tpl new file mode 100644 index 0000000000..b979856548 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_userkey_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the user key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.userKey.secret" }} +{{- if not (include "newrelic.common.userKey._customSecretName" .) }} +{{- /* Fail if user key is empty and required: */ -}} +{{- if not (include "newrelic.common.userKey._userKey" .) 
}} + {{- fail "You must specify a userKey or a customUserKeySecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.userKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.userKey.secretKeyName" . }}: {{ include "newrelic.common.userKey._userKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_verbose-log.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_verbose-log.tpl new file mode 100644 index 0000000000..2286d46815 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/templates/_verbose-log.tpl @@ -0,0 +1,54 @@ +{{- /* +Abstraction of the verbose toggle. +This helper allows to override the global `.global.verboseLog` with the value of `.verboseLog`. +Returns "true" if `verbose` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.verboseLog" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "verboseLog" | kindIs "bool") -}} + {{- if .Values.verboseLog -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.verboseLog" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.verboseLog -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "verboseLog" | kindIs "bool" -}} + {{- if $global.verboseLog -}} + {{- $global.verboseLog -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return true or false directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsBoolean" -}} +{{- if include "newrelic.common.verboseLog" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return 1 or 0 directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsInt" -}} +{{- if include "newrelic.common.verboseLog" . -}} +1 +{{- else -}} +0 +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/values.yaml new file mode 100644 index 0000000000..75e2d112ad --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/charts/common-library/values.yaml @@ -0,0 +1 @@ +# values are not needed for the library chart; however, this file is still needed for helm lint to work.
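The common-library toggles in this diff (`nrStaging`, `verboseLog`, `serviceAccount.create`) all follow the same Helm convention: a helper must emit either a non-empty string or "" — never the string "false" — because any non-empty output of `include` is truthy, and a local boolean masks the global one. A minimal Python sketch of that resolution logic, purely illustrative and not part of the chart:

```python
def toggle(values: dict, key: str) -> str:
    """Mimic the common-library toggle helpers: a boolean set locally
    wins over the global one, and the result is "true" or "" —
    never "false", which Helm's `if` would treat as truthy."""
    local = values.get(key)
    if isinstance(local, bool):
        # Local boolean found: global is never consulted.
        return "true" if local else ""
    glob = (values.get("global") or {}).get(key)
    if isinstance(glob, bool) and glob:
        return "true"
    return ""
```

For example, a local `verboseLog: false` masks a global `verboseLog: true`, exactly as in the template's `kindIs "bool"` branch.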
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/ci/test-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/ci/test-values.yaml new file mode 100644 index 0000000000..ac5ed6bb0c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/ci/test-values.yaml @@ -0,0 +1,6 @@ +licenseKey: fakeLicenseKey +cluster: test-cluster-name +images: + configurator: + repository: ct/prometheus-configurator + tag: ct diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/static/lowdatamodedefaults.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/static/lowdatamodedefaults.yaml new file mode 100644 index 0000000000..726815755d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/static/lowdatamodedefaults.yaml @@ -0,0 +1,6 @@ +# This file contains an entry of the array `extra_write_relabel_configs` to filter +# metrics on Low Data Mode. These metrics are already collected by the New Relic Kubernetes Integration. +low_data_mode: +- action: drop + source_labels: [__name__] + regex: "kube_.+|container_.+|machine_.+|cadvisor_.+" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/static/metrictyperelabeldefaults.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/static/metrictyperelabeldefaults.yaml new file mode 100644 index 0000000000..c0a2774094 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/static/metrictyperelabeldefaults.yaml @@ -0,0 +1,17 @@ +# This file contains an entry of the array `extra_write_relabel_configs` to override metric types. 
+# https://docs.newrelic.com/docs/infrastructure/prometheus-integrations/install-configure-remote-write/set-your-prometheus-remote-write-integration#override-mapping +metrics_type_relabel: +- source_labels: [__name__] + separator: ; + regex: timeseries_write_(.*) # Cockroach + target_label: newrelic_metric_type + replacement: counter + action: replace +- source_labels: [__name__] + separator: ; + regex: sql_byte(.*) # Cockroach + target_label: newrelic_metric_type + replacement: counter + action: replace +# Note that adding more elements to this list could cause a breaking change for users already leveraging the affected metrics. +# Therefore, before adding new entries, check whether any users are already relying on those metrics and warn them. diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/_helpers.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/_helpers.tpl new file mode 100644 index 0000000000..6cc58e251d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/_helpers.tpl @@ -0,0 +1,165 @@ +{{- /* Return the newrelic-prometheus configuration */ -}} + +{{- /* it builds the common configuration from configurator config, cluster name and custom attributes */ -}} +{{- define "newrelic-prometheus.configurator.common" -}} +{{- $tmp := dict "external_labels" (dict "cluster_name" (include "newrelic.common.cluster" . )) -}} + +{{- if .Values.config -}} + {{- if .Values.config.common -}} + {{- $_ := mustMerge $tmp .Values.config.common -}} + {{- end -}} +{{- end -}} + +{{- $tmpCustomAttribute := dict "external_labels" (include "newrelic.common.customAttributes" .
| fromYaml ) -}} +{{- $tmp = mustMerge $tmp $tmpCustomAttribute -}} + +common: +{{- $tmp | toYaml | nindent 2 -}} + +{{- end -}} + + +{{- /* it builds the newrelic_remote_write configuration from configurator config */ -}} +{{- define "newrelic-prometheus.configurator.newrelic_remote_write" -}} +{{- $tmp := dict -}} + +{{- if include "newrelic.common.nrStaging" . -}} + {{- $_ := set $tmp "staging" true -}} +{{- end -}} + +{{- if include "newrelic.common.fedramp.enabled" . -}} + {{- $_ := set $tmp "fedramp" (dict "enabled" true) -}} +{{- end -}} + +{{- $extra_write_relabel_configs :=(include "newrelic-prometheus.configurator.extra_write_relabel_configs" . | fromYaml) -}} +{{- if ne (len $extra_write_relabel_configs.list) 0 -}} + {{- $_ := set $tmp "extra_write_relabel_configs" $extra_write_relabel_configs.list -}} +{{- end -}} + +{{- if .Values.config -}} +{{- if .Values.config.newrelic_remote_write -}} + {{- $tmp = mustMerge $tmp .Values.config.newrelic_remote_write -}} +{{- end -}} +{{- end -}} + +{{- if not (empty $tmp) -}} + {{- dict "newrelic_remote_write" $tmp | toYaml -}} +{{- end -}} + +{{- end -}} + +{{- /* it builds the extra_write_relabel_configs configuration merging: lowdatamode, user ones, and metrictyperelabeldefaults */ -}} +{{- define "newrelic-prometheus.configurator.extra_write_relabel_configs" -}} + +{{- $extra_write_relabel_configs := list -}} +{{- if (include "newrelic.common.lowDataMode" .) 
-}} + {{- $lowDataModeRelabelConfig := .Files.Get "static/lowdatamodedefaults.yaml" | fromYaml -}} + {{- $extra_write_relabel_configs = concat $extra_write_relabel_configs $lowDataModeRelabelConfig.low_data_mode -}} +{{- end -}} + +{{- if .Values.metric_type_override -}} + {{- if .Values.metric_type_override.enabled -}} + {{- $metricTypeOverride := .Files.Get "static/metrictyperelabeldefaults.yaml" | fromYaml -}} + {{- $extra_write_relabel_configs = concat $extra_write_relabel_configs $metricTypeOverride.metrics_type_relabel -}} + {{- end -}} +{{- end -}} + +{{- if .Values.config -}} +{{- if .Values.config.newrelic_remote_write -}} + {{- /* it concatenates the defined 'extra_write_relabel_configs' to the ones defined in lowDataMode */ -}} + {{- if .Values.config.newrelic_remote_write.extra_write_relabel_configs -}} + {{- $extra_write_relabel_configs = concat $extra_write_relabel_configs .Values.config.newrelic_remote_write.extra_write_relabel_configs -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{- /* sadly in helm we cannot pass back a list without putting it into a tmp dict */ -}} +{{ dict "list" $extra_write_relabel_configs | toYaml}} + +{{- end -}} + + +{{- /* it builds the extra_remote_write configuration from configurator config */ -}} +{{- define "newrelic-prometheus.configurator.extra_remote_write" -}} +{{- if .Values.config -}} + {{- if .Values.config.extra_remote_write -}} +extra_remote_write: + {{- .Values.config.extra_remote_write | toYaml | nindent 2 -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{- define "newrelic-prometheus.configurator.static_targets" -}} +{{- if .Values.config -}} + {{- if .Values.config.static_targets -}} +static_targets: + {{- .Values.config.static_targets | toYaml | nindent 2 -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{- define "newrelic-prometheus.configurator.extra_scrape_configs" -}} +{{- if .Values.config -}} + {{- if .Values.config.extra_scrape_configs -}} +extra_scrape_configs: + {{- 
.Values.config.extra_scrape_configs | toYaml | nindent 2 -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{- define "newrelic-prometheus.configurator.kubernetes" -}} +{{- if .Values.config -}} +{{- if .Values.config.kubernetes -}} +kubernetes: + {{- if .Values.config.kubernetes.jobs }} + jobs: + {{- .Values.config.kubernetes.jobs | toYaml | nindent 2 -}} + {{- end -}} + + {{- if .Values.config.kubernetes.integrations_filter }} + integrations_filter: + {{- .Values.config.kubernetes.integrations_filter | toYaml | nindent 4 -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{- define "newrelic-prometheus.configurator.sharding" -}} + {{- if .Values.sharding -}} +sharding: + total_shards_count: {{ include "newrelic-prometheus.configurator.replicas" . }} + {{- end -}} +{{- end -}} + +{{- define "newrelic-prometheus.configurator.replicas" -}} + {{- if .Values.sharding -}} +{{- .Values.sharding.total_shards_count | default 1 }} + {{- else -}} +1 + {{- end -}} +{{- end -}} + +{{- /* +Return the proper configurator image name +{{ include "newrelic-prometheus.configurator.images.configurator_image" ( dict "imageRoot" .Values.path.to.the.image "context" .) }} +*/ -}} +{{- define "newrelic-prometheus.configurator.configurator_image" -}} + {{- $registryName := include "newrelic.common.images.registry" ( dict "imageRoot" .imageRoot "context" .context) -}} + {{- $repositoryName := include "newrelic.common.images.repository" .imageRoot -}} + {{- $tag := include "newrelic-prometheus.configurator.configurator_image.tag" ( dict "imageRoot" .imageRoot "context" .context) -}} + + {{- if $registryName -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag | quote -}} + {{- else -}} + {{- printf "%s:%s" $repositoryName $tag | quote -}} + {{- end -}} +{{- end -}} + + +{{- /* +Return the proper image tag for the configurator image +{{ include "newrelic-prometheus.configurator.configurator_image.tag" ( dict "imageRoot" .Values.path.to.the.image "context" .) 
}} +*/ -}} +{{- define "newrelic-prometheus.configurator.configurator_image.tag" -}} + {{- .imageRoot.tag | default .context.Chart.Annotations.configuratorVersion | toString -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/clusterrole.yaml new file mode 100644 index 0000000000..e9d4208e2f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/clusterrole.yaml @@ -0,0 +1,23 @@ +{{- if .Values.rbac.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +rules: + - apiGroups: + - "" + resources: + - endpoints + - services + - pods + verbs: + - get + - list + - watch + - nonResourceURLs: + - "/metrics" + verbs: + - get +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/clusterrolebinding.yaml new file mode 100644 index 0000000000..44244653ff --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/clusterrolebinding.yaml @@ -0,0 +1,16 @@ +{{- if .Values.rbac.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "newrelic.common.naming.fullname" . }} +subjects: +- kind: ServiceAccount + name: {{ include "newrelic.common.serviceAccount.name" .
}} + namespace: {{ .Release.Namespace }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/configmap.yaml new file mode 100644 index 0000000000..b775aca746 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/configmap.yaml @@ -0,0 +1,31 @@ +kind: ConfigMap +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +apiVersion: v1 +data: + config.yaml: |- + # Configuration for newrelic-prometheus-configurator + {{- with (include "newrelic-prometheus.configurator.newrelic_remote_write" . ) -}} + {{- . | nindent 4 }} + {{- end -}} + {{- with (include "newrelic-prometheus.configurator.extra_remote_write" . ) -}} + {{- . | nindent 4 }} + {{- end -}} + {{- with (include "newrelic-prometheus.configurator.static_targets" . ) -}} + {{- . | nindent 4 }} + {{- end -}} + {{- with (include "newrelic-prometheus.configurator.extra_scrape_configs" . ) -}} + {{- . | nindent 4 }} + {{- end -}} + {{- with (include "newrelic-prometheus.configurator.common" . ) -}} + {{- . | nindent 4 }} + {{- end -}} + {{- with (include "newrelic-prometheus.configurator.kubernetes" . ) -}} + {{- . | nindent 4 }} + {{- end -}} + {{- with (include "newrelic-prometheus.configurator.sharding" . ) -}} + {{- . | nindent 4 }} + {{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/secret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/secret.yaml new file mode 100644 index 0000000000..f558ee86c9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/secret.yaml @@ -0,0 +1,2 @@ +{{- /* Common library will take care of creating the secret or not. 
*/}} +{{- include "newrelic.common.license.secret" . }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/serviceaccount.yaml new file mode 100644 index 0000000000..b1e74523e5 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/serviceaccount.yaml @@ -0,0 +1,13 @@ +{{- if include "newrelic.common.serviceAccount.create" . -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- if include "newrelic.common.serviceAccount.annotations" . }} + annotations: + {{- include "newrelic.common.serviceAccount.annotations" . | nindent 4 }} + {{- end }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/statefulset.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/statefulset.yaml new file mode 100644 index 0000000000..846c41c233 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/templates/statefulset.yaml @@ -0,0 +1,157 @@ +apiVersion: apps/v1 +kind: StatefulSet +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.naming.fullname" . }} + namespace: {{ .Release.Namespace }} +spec: + serviceName: {{ include "newrelic.common.naming.fullname" . }}-headless + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }} + replicas: {{ include "newrelic-prometheus.configurator.replicas" . }} + template: + metadata: + annotations: + checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} + {{- with .Values.podAnnotations }} + {{- toYaml . 
| nindent 8 }} + {{- end }} + labels: + {{- include "newrelic.common.labels.podLabels" . | nindent 8 }} + spec: + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.images.pullSecrets) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} + {{- with include "newrelic.common.securityContext.pod" . }} + securityContext: + {{- . | nindent 8 }} + {{- end }} + + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 8 }} + {{- end }} + + hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }} + {{- if include "newrelic.common.hostNetwork" . }} + dnsPolicy: ClusterFirstWithHostNet + {{- end }} + + serviceAccountName: {{ include "newrelic.common.serviceAccount.name" . }} + + initContainers: + - name: configurator + {{- with include "newrelic.common.securityContext.container" . }} + securityContext: + {{- . | nindent 12 }} + {{- end }} + image: {{ include "newrelic-prometheus.configurator.configurator_image" ( dict "imageRoot" .Values.images.configurator "context" .) }} + imagePullPolicy: {{ .Values.images.configurator.pullPolicy }} + args: + - --input=/etc/configurator/config.yaml + - --output=/etc/prometheus/config/config.yaml + {{- if include "newrelic.common.verboseLog" . }} + - --verbose=true + {{- end }} + {{- with .Values.resources.configurator }} + resources: {{ toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: configurator-config + mountPath: /etc/configurator/ + - name: prometheus-config + mountPath: /etc/prometheus/config + env: + - name: NR_PROM_DATA_SOURCE_NAME + valueFrom: + fieldRef: + fieldPath: metadata.name + - name: NR_PROM_LICENSE_KEY + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.license.secretName" . }} + key: {{ include "newrelic.common.license.secretKeyName" . 
}} + - name: NR_PROM_CHART_VERSION + value: {{ .Chart.Version }} + + containers: + - name: prometheus + {{- with include "newrelic.common.securityContext.container" . }} + securityContext: + {{- . | nindent 12 }} + {{- end }} + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.prometheus "context" .) }} + imagePullPolicy: {{ .Values.images.prometheus.pullPolicy }} + ports: + - containerPort: 9090 + protocol: TCP + livenessProbe: + httpGet: + path: /-/healthy + port: 9090 + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 15 + timeoutSeconds: 10 + failureThreshold: 3 + successThreshold: 1 + readinessProbe: + httpGet: + path: /-/ready + port: 9090 + scheme: HTTP + initialDelaySeconds: 10 + periodSeconds: 5 + timeoutSeconds: 4 + failureThreshold: 3 + successThreshold: 1 + args: + - --config.file=/etc/prometheus/config/config.yaml + - --enable-feature=agent,expand-external-labels + - --storage.agent.retention.max-time=30m + - --storage.agent.wal-truncate-frequency=30m + - --storage.agent.path=/etc/prometheus/storage + {{- if include "newrelic.common.verboseLog" . }} + - --log.level=debug + {{- end }} + {{- with .Values.resources.prometheus }} + resources: {{ toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: prometheus-config + mountPath: /etc/prometheus/config + - name: prometheus-storage + mountPath: /etc/prometheus/storage + {{- with .Values.extraVolumeMounts }} + {{- toYaml . | nindent 12 }} + {{- end }} + + volumes: + - name: configurator-config + configMap: + name: {{ include "newrelic.common.naming.fullname" . }} + - name: prometheus-config + emptyDir: {} + - name: prometheus-storage + emptyDir: {} + {{- with .Values.extraVolumes }} + {{- toYaml . | nindent 8 }} + {{- end }} + nodeSelector: + kubernetes.io/os: linux + {{ include "newrelic.common.nodeSelector" . | nindent 8 }} + {{- with include "newrelic.common.affinity" . }} + affinity: + {{- . 
| nindent 8 }} + {{- end }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . | nindent 8 }} + {{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/configmap_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/configmap_test.yaml new file mode 100644 index 0000000000..f2dd0468ea --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/configmap_test.yaml @@ -0,0 +1,572 @@ +suite: test configmap +templates: + - templates/configmap.yaml +tests: + - it: config with defaults + set: + licenseKey: license-key-test + cluster: cluster-test + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + newrelic_remote_write: + extra_write_relabel_configs: + - action: replace + regex: timeseries_write_(.*) + replacement: counter + separator: ; + source_labels: + - __name__ + target_label: newrelic_metric_type + - action: replace + regex: sql_byte(.*) + replacement: counter + separator: ; + source_labels: + - __name__ + target_label: newrelic_metric_type + static_targets: + jobs: + - extra_metric_relabel_config: + - action: keep + regex: 
prometheus_agent_active_series|prometheus_target_interval_length_seconds|prometheus_target_scrape_pool_targets|prometheus_remote_storage_samples_pending|prometheus_remote_storage_samples_in_total|prometheus_remote_storage_samples_retried_total|prometheus_agent_corruptions_total|prometheus_remote_storage_shards|prometheus_sd_kubernetes_events_total|prometheus_agent_checkpoint_creations_failed_total|prometheus_agent_checkpoint_deletions_failed_total|prometheus_remote_storage_samples_dropped_total|prometheus_remote_storage_samples_failed_total|prometheus_sd_kubernetes_http_request_total|prometheus_agent_truncate_duration_seconds_sum|prometheus_build_info|process_resident_memory_bytes|process_virtual_memory_bytes|process_cpu_seconds_total|prometheus_remote_storage_bytes_total + source_labels: + - __name__ + job_name: self-metrics + skip_sharding: true + targets: + - localhost:9090 + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s + kubernetes: + jobs: + - job_name_prefix: default + target_discovery: + endpoints: true + filter: + annotations: + prometheus.io/scrape: true + pod: true + - integrations_filter: + enabled: false + job_name_prefix: newrelic + target_discovery: + endpoints: true + filter: + annotations: + newrelic.io/scrape: true + pod: true + integrations_filter: + app_values: + - redis + - traefik + - calico + - nginx + - coredns + - kube-dns + - etcd + - cockroachdb + - velero + - harbor + - argocd + enabled: true + source_labels: + - app.kubernetes.io/name + - app.newrelic.io/name + - k8s-app + + - it: staging is enabled + set: + licenseKey: license-key-test + cluster: cluster-test + nrStaging: true + metric_type_override: + enabled: false + config: + static_targets: # Set empty to make this test simple + asserts: + - matchRegex: + path: data["config.yaml"] + pattern: "newrelic_remote_write:\n staging: true" # We do not want to test the whole YAML + + - it: fedramp is enabled + set: + licenseKey: license-key-test + cluster: 
cluster-test + fedramp: + enabled: true + metric_type_override: + enabled: false + config: + static_targets: # Set empty to make this test simple + asserts: + - matchRegex: + path: data["config.yaml"] + pattern: "newrelic_remote_write:\n fedramp:\n enabled: true" # We do not want to test the whole YAML + + - it: config including remote_write most possible sections + set: + licenseKey: license-key-test + cluster: cluster-test + nrStaging: true + config: + newrelic_remote_write: + proxy_url: http://proxy.url + remote_timeout: 30s + tls_config: + insecure_skip_verify: true + queue_config: + retry_on_http_429: false + extra_write_relabel_configs: + - source_labels: + - __name__ + - instance + regex: node_memory_active_bytes;localhost:9100 + action: drop + extra_remote_write: + - url: "https://second.remote.write" + # Set empty to make this test simple + static_targets: + kubernetes: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + newrelic_remote_write: + extra_write_relabel_configs: + - action: replace + regex: timeseries_write_(.*) + replacement: counter + separator: ; + source_labels: + - __name__ + target_label: newrelic_metric_type + - action: replace + regex: sql_byte(.*) + replacement: counter + separator: ; + source_labels: + - __name__ + target_label: newrelic_metric_type + - action: drop + regex: node_memory_active_bytes;localhost:9100 + source_labels: + - __name__ + - instance + proxy_url: http://proxy.url + queue_config: + retry_on_http_429: false + remote_timeout: 30s + staging: true + tls_config: + insecure_skip_verify: true + extra_remote_write: + - url: https://second.remote.write + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s + + - it: config including remote_write.extra_write_relabel_configs and not metric relabels + set: + licenseKey: license-key-test + cluster: cluster-test + metric_type_override: + enabled: false + config: + 
newrelic_remote_write: + extra_write_relabel_configs: + - source_labels: + - __name__ + - instance + regex: node_memory_active_bytes;localhost:9100 + action: drop + + static_targets: + kubernetes: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + newrelic_remote_write: + extra_write_relabel_configs: + - action: drop + regex: node_memory_active_bytes;localhost:9100 + source_labels: + - __name__ + - instance + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s + + - it: cluster_name is set from global + set: + licenseKey: license-key-test + global: + cluster: "test" + metric_type_override: + enabled: false + config: + # Set empty to make this test simple + static_targets: + kubernetes: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + common: + external_labels: + cluster_name: test + scrape_interval: 30s + - it: cluster_name local value has precedence over global precedence + set: + licenseKey: license-key-test + global: + cluster: "test" + cluster: "test2" + metric_type_override: + enabled: false + config: + # Set empty to make this test simple + static_targets: + kubernetes: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + common: + external_labels: + cluster_name: test2 + scrape_interval: 30s + - it: cluster_name is not overwritten from customAttributes + set: + licenseKey: license-key-test + global: + cluster: "test" + cluster: "test2" + customAttributes: + cluster_name: "test3" + metric_type_override: + enabled: false + config: + # Set empty to make this test simple + static_targets: + kubernetes: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + common: + external_labels: + cluster_name: test2 + scrape_interval: 30s + + - it: cluster_name has precedence over 
extra labels has precedence over customAttributes + set: + licenseKey: license-key-test + cluster: test + customAttributes: + attribute: "value" + one: error + cluster_name: "different" + metric_type_override: + enabled: false + config: + common: + external_labels: + one: two + cluster_name: "different" + scrape_interval: 15 + # Set empty to make this test simple + static_targets: + kubernetes: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + common: + external_labels: + attribute: value + cluster_name: test + one: two + scrape_interval: 15 + + - it: config including static_targets overwritten with most possible sections + set: + licenseKey: license-key-test + cluster: cluster-test + metric_type_override: + enabled: false + config: + static_targets: + jobs: + - job_name: my-custom-target-authorization-full + targets: + - "192.168.3.1:2379" + params: + q: [ "puppies" ] + oe: [ "utf8" ] + scheme: "https" + body_size_limit: 100MiB + sample_limit: 2000 + target_limit: 2000 + label_limit: 2000 + label_name_length_limit: 2000 + label_value_length_limit: 2000 + scrape_interval: 15s + scrape_timeout: 15s + tls_config: + insecure_skip_verify: true + ca_file: /path/to/ca.crt + key_file: /path/to/key.crt + cert_file: /path/to/cert.crt + server_name: server.name + min_version: TLS12 + authorization: + type: Bearer + credentials: "fancy-credentials" + extra_relabel_config: + - source_labels: [ '__name__', 'instance' ] + regex: node_memory_active_bytes;localhost:9100 + action: drop + extra_metric_relabel_config: + - source_labels: [ '__name__', 'instance' ] + regex: node_memory_active_bytes;localhost:9100 + action: drop + extra_scrape_configs: + - job_name: extra-scrape-config + static_configs: + - targets: + - "192.168.3.1:2379" + labels: + label1: value1 + label2: value2 + scrape_interval: 15s + scrape_timeout: 15s + tls_config: + insecure_skip_verify: true + ca_file: /path/to/ca.crt + key_file: 
/path/to/key.crt + cert_file: /path/to/cert.crt + server_name: server.name + min_version: TLS12 + authorization: + type: Bearer + credentials: "fancy-credentials" + relabel_configs: + - source_labels: [ '__name__', 'instance' ] + regex: node_memory_active_bytes;localhost:9100 + action: drop + metric_relabel_configs: + - source_labels: [ '__name__', 'instance' ] + regex: node_memory_active_bytes;localhost:9100 + action: drop + # Set empty to make this test simple + kubernetes: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + static_targets: + jobs: + - authorization: + credentials: fancy-credentials + type: Bearer + body_size_limit: 100MiB + extra_metric_relabel_config: + - action: drop + regex: node_memory_active_bytes;localhost:9100 + source_labels: + - __name__ + - instance + extra_relabel_config: + - action: drop + regex: node_memory_active_bytes;localhost:9100 + source_labels: + - __name__ + - instance + job_name: my-custom-target-authorization-full + label_limit: 2000 + label_name_length_limit: 2000 + label_value_length_limit: 2000 + params: + oe: + - utf8 + q: + - puppies + sample_limit: 2000 + scheme: https + scrape_interval: 15s + scrape_timeout: 15s + target_limit: 2000 + targets: + - 192.168.3.1:2379 + tls_config: + ca_file: /path/to/ca.crt + cert_file: /path/to/cert.crt + insecure_skip_verify: true + key_file: /path/to/key.crt + min_version: TLS12 + server_name: server.name + extra_scrape_configs: + - authorization: + credentials: fancy-credentials + type: Bearer + job_name: extra-scrape-config + metric_relabel_configs: + - action: drop + regex: node_memory_active_bytes;localhost:9100 + source_labels: + - __name__ + - instance + relabel_configs: + - action: drop + regex: node_memory_active_bytes;localhost:9100 + source_labels: + - __name__ + - instance + scrape_interval: 15s + scrape_timeout: 15s + static_configs: + - labels: + label1: value1 + label2: value2 + targets: + - 
192.168.3.1:2379 + tls_config: + ca_file: /path/to/ca.crt + cert_file: /path/to/cert.crt + insecure_skip_verify: true + key_file: /path/to/key.crt + min_version: TLS12 + server_name: server.name + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s + + - it: kubernetes config section custom values + set: + licenseKey: license-key-test + cluster: cluster-test + metric_type_override: + enabled: false + config: + kubernetes: + integrations_filter: + enabled: false + jobs: + - job_name_prefix: pod-job + target_discovery: + pod: true + endpoints: false + filter: + annotations: + custom/scrape-pod: true + - job_name_prefix: endpoints-job + target_discovery: + pod: false + endpoints: true + filter: + annotations: + custom/scrape-endpoints: true + # Set empty to make this test simple + static_targets: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s + kubernetes: + jobs: + - job_name_prefix: pod-job + target_discovery: + endpoints: false + filter: + annotations: + custom/scrape-pod: true + pod: true + - job_name_prefix: endpoints-job + target_discovery: + endpoints: true + filter: + annotations: + custom/scrape-endpoints: true + pod: false + integrations_filter: + app_values: + - redis + - traefik + - calico + - nginx + - coredns + - kube-dns + - etcd + - cockroachdb + - velero + - harbor + - argocd + enabled: false + source_labels: + - app.kubernetes.io/name + - app.newrelic.io/name + - k8s-app + + - it: sharding empty not propagated + set: + licenseKey: license-key-test + cluster: cluster-test + sharding: + metric_type_override: + enabled: false + config: + kubernetes: + static_targets: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s + + - it: 
sharding config custom values + set: + licenseKey: license-key-test + cluster: cluster-test + sharding: + total_shards_count: 2 + metric_type_override: + enabled: false + config: + kubernetes: + static_targets: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s + sharding: + total_shards_count: 2 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/configurator_image_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/configurator_image_test.yaml new file mode 100644 index 0000000000..0f5da69bf8 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/configurator_image_test.yaml @@ -0,0 +1,57 @@ +suite: test image +templates: + - templates/statefulset.yaml + - templates/configmap.yaml +tests: + - it: configurator image is set + set: + licenseKey: license-key-test + cluster: cluster-test + images: + configurator: + tag: "test" + pullPolicy: Never + prometheus: + tag: "test-2" + asserts: + - template: templates/statefulset.yaml + equal: + path: spec.template.spec.initContainers[0].image + value: "newrelic/newrelic-prometheus-configurator:test" + - equal: + path: spec.template.spec.initContainers[0].imagePullPolicy + value: "Never" + template: templates/statefulset.yaml + - template: templates/statefulset.yaml + equal: + path: spec.template.spec.containers[0].image + value: "quay.io/prometheus/prometheus:test-2" + - equal: + path: spec.template.spec.containers[0].imagePullPolicy + value: "IfNotPresent" + template: templates/statefulset.yaml + + - it: has a linux node selector by default + set: + licenseKey: license-key-test + cluster: my-cluster + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + template: templates/statefulset.yaml + + - it: has a linux node selector and 
additional selectors + set: + licenseKey: license-key-test + cluster: my-cluster + nodeSelector: + aCoolTestLabel: aCoolTestValue + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + aCoolTestLabel: aCoolTestValue + template: templates/statefulset.yaml diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/integration_filters_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/integration_filters_test.yaml new file mode 100644 index 0000000000..d1813f1352 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/integration_filters_test.yaml @@ -0,0 +1,119 @@ +suite: test configmap with IntegrationFilter +templates: + - templates/configmap.yaml +tests: + - it: config with IntegrationFilter true + set: + licenseKey: license-key-test + cluster: cluster-test + metric_type_override: + enabled: false + config: + kubernetes: + integrations_filter: + enabled: true + # Set empty to make this test simple + static_targets: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s + kubernetes: + jobs: + - job_name_prefix: default + target_discovery: + endpoints: true + filter: + annotations: + prometheus.io/scrape: true + pod: true + - integrations_filter: + enabled: false + job_name_prefix: newrelic + target_discovery: + endpoints: true + filter: + annotations: + newrelic.io/scrape: true + pod: true + integrations_filter: + app_values: + - redis + - traefik + - calico + - nginx + - coredns + - kube-dns + - etcd + - cockroachdb + - velero + - harbor + - argocd + enabled: true + source_labels: + - app.kubernetes.io/name + - app.newrelic.io/name + - k8s-app + + - it: config with IntegrationFilter false + set: + licenseKey: license-key-test + cluster: cluster-test + metric_type_override: + 
enabled: false + config: + kubernetes: + integrations_filter: + enabled: false + # Set empty to make this test simple + static_targets: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s + kubernetes: + jobs: + - job_name_prefix: default + target_discovery: + endpoints: true + filter: + annotations: + prometheus.io/scrape: true + pod: true + - integrations_filter: + enabled: false + job_name_prefix: newrelic + target_discovery: + endpoints: true + filter: + annotations: + newrelic.io/scrape: true + pod: true + integrations_filter: + app_values: + - redis + - traefik + - calico + - nginx + - coredns + - kube-dns + - etcd + - cockroachdb + - velero + - harbor + - argocd + enabled: false + source_labels: + - app.kubernetes.io/name + - app.newrelic.io/name + - k8s-app diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/lowdatamode_configmap_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/lowdatamode_configmap_test.yaml new file mode 100644 index 0000000000..ac3953df6d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/tests/lowdatamode_configmap_test.yaml @@ -0,0 +1,138 @@ +suite: test configmap with LowDataMode +templates: + - templates/configmap.yaml +tests: + - it: config with lowDataMode true + set: + licenseKey: license-key-test + cluster: cluster-test + lowDataMode: true + metric_type_override: + enabled: false + config: + # Set empty to make this test simple + static_targets: + kubernetes: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + newrelic_remote_write: + extra_write_relabel_configs: + - action: drop + regex: kube_.+|container_.+|machine_.+|cadvisor_.+ + source_labels: + - __name__ + common: + external_labels: + cluster_name: 
cluster-test + scrape_interval: 30s + + - it: config with lowDataMode and nrStaging true + set: + licenseKey: license-key-test + cluster: cluster-test + lowDataMode: true + nrStaging: true + metric_type_override: + enabled: false + config: + # Set empty to make this test simple + static_targets: + kubernetes: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + newrelic_remote_write: + extra_write_relabel_configs: + - action: drop + regex: kube_.+|container_.+|machine_.+|cadvisor_.+ + source_labels: + - __name__ + staging: true + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s + + - it: config with lowDataMode true from global config + set: + global: + lowDataMode: true + licenseKey: license-key-test + cluster: cluster-test + metric_type_override: + enabled: false + config: + # Set empty to make this test simple + static_targets: + kubernetes: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + newrelic_remote_write: + extra_write_relabel_configs: + - action: drop + regex: kube_.+|container_.+|machine_.+|cadvisor_.+ + source_labels: + - __name__ + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s + + - it: existing relabel configs are appended to low data mode and metric_type_override relabel configs. 
+ set: + lowDataMode: true + licenseKey: license-key-test + cluster: cluster-test + metric_type_override: + enabled: true + config: + newrelic_remote_write: + extra_write_relabel_configs: + - action: drop + regex: my_custom_metric_relabel_config + source_labels: + - __name__ + # Set empty to make this test simple + static_targets: + kubernetes: + asserts: + - equal: + path: data["config.yaml"] + value: |- + # Configuration for newrelic-prometheus-configurator + newrelic_remote_write: + extra_write_relabel_configs: + - action: drop + regex: kube_.+|container_.+|machine_.+|cadvisor_.+ + source_labels: + - __name__ + - action: replace + regex: timeseries_write_(.*) + replacement: counter + separator: ; + source_labels: + - __name__ + target_label: newrelic_metric_type + - action: replace + regex: sql_byte(.*) + replacement: counter + separator: ; + source_labels: + - __name__ + target_label: newrelic_metric_type + - action: drop + regex: my_custom_metric_relabel_config + source_labels: + - __name__ + common: + external_labels: + cluster_name: cluster-test + scrape_interval: 30s diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/values.yaml new file mode 100644 index 0000000000..2fb3ed7bca --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/newrelic-prometheus-agent/values.yaml @@ -0,0 +1,473 @@ +# -- Override the name of the chart +nameOverride: "" +# -- Override the full name of the release +fullnameOverride: "" + +# -- Name of the Kubernetes cluster monitored. Can be configured also with `global.cluster`. +# Note it will be set as an external label in prometheus configuration; it will have precedence over `config.common.external_labels.cluster_name` +# and `customAttributes.cluster_name`. +cluster: "" +# -- This sets the license key to use.
Can be configured also with `global.licenseKey` +licenseKey: "" +# -- In case you don't want to have the license key in your values, this allows you to point to a user created secret to get the key from there. Can be configured also with `global.customSecretName` +customSecretName: "" +# -- In case you don't want to have the license key in your values, this allows you to point to the secret key where the license key is located. Can be configured also with `global.customSecretLicenseKey` +customSecretLicenseKey: "" + +# -- Adds extra attributes to prometheus external labels. Can be configured also with `global.customAttributes`. Please note, values defined +# in `common.config.external_labels` will have precedence over `customAttributes`. +customAttributes: {} + +# Images used by the chart for prometheus and New Relic configurator. +# @default See `values.yaml` +images: + # -- The secrets that are needed to pull images from a custom registry. + pullSecrets: [] + + # -- Image for New Relic configurator. + # @default -- See `values.yaml` + configurator: + registry: "" + repository: newrelic/newrelic-prometheus-configurator + pullPolicy: IfNotPresent + # @default It defaults to `annotation.configuratorVersion` in `Chart.yaml`. + tag: "" + # -- Image for Prometheus, which is executed in agent mode. + # @default -- See `values.yaml` + prometheus: + registry: "" + repository: quay.io/prometheus/prometheus + pullPolicy: IfNotPresent + # @default It defaults to `appVersion` in `Chart.yaml`. + tag: "" + +# -- Volumes to mount in the containers +extraVolumes: [] +# -- Defines where to mount volumes specified with `extraVolumes` +extraVolumeMounts: [] + +# -- Settings controlling ServiceAccount creation. +# @default -- See `values.yaml` +serviceAccount: + # -- Whether the chart should automatically create the ServiceAccount objects required to run.
+ create: true + annotations: {} + # If not set and create is true, a name is generated using the full name template + name: "" + +# -- Additional labels for chart objects. Can be configured also with `global.labels` +labels: {} +# -- Annotations to be added to all pods created by the integration. +podAnnotations: {} +# -- Additional labels for chart pods. Can be configured also with `global.podLabels` +podLabels: {} + +# -- Resource limits to be added to all pods created by the integration. +# @default -- `{}` +resources: + prometheus: {} + +# -- Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` +priorityClassName: "" +# -- (bool) Sets pod's hostNetwork. Can be configured also with `global.hostNetwork` +# @default -- `false` +hostNetwork: +# -- Sets security context (at pod level). Can be configured also with `global.podSecurityContext` +podSecurityContext: {} +# -- Sets security context (at container level). Can be configured also with `global.containerSecurityContext` +containerSecurityContext: {} + +# -- Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` +dnsConfig: {} + +# Settings controlling RBAC objects creation. +rbac: + # -- Whether the chart should automatically create the RBAC objects required to run. + create: true + # -- Whether the chart should create Pod Security Policy objects. + pspEnabled: false + +# -- Sets pod/node affinities set almost globally. (See [Affinities and tolerations](README.md#affinities-and-tolerations)) +affinity: {} +# -- Sets pod's node selector almost globally. (See [Affinities and tolerations](README.md#affinities-and-tolerations)) +nodeSelector: {} +# -- Sets pod's tolerations to node taints almost globally. (See [Affinities and tolerations](README.md#affinities-and-tolerations)) +tolerations: [] + +# -- (bool) Send the metrics to the staging backend. Requires a valid staging license key. 
Can be configured also with `global.nrStaging` +# @default -- `false` +nrStaging: + +# -- (bool) Reduces the number of metrics sent in order to reduce costs. It can be configured also with `global.lowDataMode`. +# Specifically, it makes Prometheus stop reporting some Kubernetes cluster-specific metrics; you can see details in `static/lowdatamodedefaults.yaml`. +# @default -- false +lowDataMode: + +# -- It holds the configuration for metric type override. If enabled, a series of metric relabel configs will be added to +# `config.newrelic_remote_write.extra_write_relabel_configs`; you can check the whole list in `static/metrictyperelabeldefaults.yaml` +metric_type_override: + enabled: true + +# -- Set up Prometheus replicas to allow horizontal scalability. +# @default -- See `values.yaml` +sharding: + # -- Sets the number of Prometheus instances running in sharding mode. + # @default -- `1` + # total_shards_count: + +# -- (bool) Sets the debug log for Prometheus and prometheus-configurator, or for all integrations if it is set globally. Can be configured also with `global.verboseLog` +# @default -- `false` +verboseLog: + +# -- It holds the New Relic Prometheus configuration. Here you can easily set up Prometheus to scrape metrics, discover +# pods and endpoints in Kubernetes, and send metrics to New Relic using remote write. +# @default -- See `values.yaml` +config: + # -- Include global configuration for Prometheus agent. + # @default -- See `values.yaml` + common: + # -- The labels to add to any timeseries that this Prometheus instance scrapes. + # @default -- `{}` + # external_labels: + # label_key_example: foo-bar + # -- How frequently to scrape targets by default, unless a different value is specified on the job. + scrape_interval: 30s + # -- The default timeout when scraping targets. + # @default -- `10s` + # scrape_timeout: + + # -- (object) New Relic remote-write configuration settings.
+ # @default -- See `values.yaml` + newrelic_remote_write: + # # -- Includes additional [relabel configs](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#relabel_config) + # # for the New Relic remote write. + # # @default -- `[]` + # extra_write_relabel_configs: [] + + # # Enable the extra_write_relabel_configs below for backwards compatibility with legacy POMI labels. + # # This is helpful when migrating from POMI, to ensure that Prometheus metrics will contain both labels (e.g. cluster_name and clusterName). + # # For more migration info, please visit the [migration guide](https://docs.newrelic.com/docs/infrastructure/prometheus-integrations/install-configure-prometheus-agent/migration-guide/). + # - source_labels: [namespace] + # action: replace + # target_label: namespaceName + # - source_labels: [node] + # action: replace + # target_label: nodeName + # - source_labels: [pod] + # action: replace + # target_label: podName + # - source_labels: [service] + # action: replace + # target_label: serviceName + # - source_labels: [cluster_name] + # action: replace + # target_label: clusterName + # - source_labels: [job] + # action: replace + # target_label: scrapedTargetKind + # - source_labels: [instance] + # action: replace + # target_label: scrapedTargetInstance + + # -- Set up the proxy used to send metrics to New Relic. + # @default -- `""` + # proxy_url: + + # -- Timeout for requests to the remote write endpoint. + # @default -- `30s` + # remote_timeout: + + # -- Fine-tune remote-write behavior. + # queue_config: + # -- Remote Write shard capacity. + # @default -- `2500` + # capacity: + # -- Maximum number of shards. + # @default -- `200` + # max_shards: + # -- Minimum number of shards. + # @default -- `1` + # min_shards: + # -- Maximum number of samples per send. + # @default -- `500` + # max_samples_per_send: + # -- Maximum time a sample will wait in the buffer. + # @default -- `5s` + # batch_send_deadline: + # -- Initial retry delay.
Gets doubled for every retry. + # @default -- `30ms` + # min_backoff: + # -- Maximum retry delay. + # @default -- `5s` + # max_backoff: + # -- Retry upon receiving a 429 status code from the remote-write storage. + # @default -- `false` + # retry_on_http_429: + + # -- (object) It includes additional remote-write configuration. Note this configuration is not parsed, so valid + # [prometheus remote_write configuration](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#remote_write) + # should be provided. + extra_remote_write: + + # -- It allows defining scrape jobs for Kubernetes in a simple way. + # @default -- See `values.yaml` + kubernetes: + # NewRelic provides a list of Dashboards, alerts and entities for several Services. The integrations_filter configuration + # allows scraping only the targets that have this out-of-the-box experience. + # If integrations_filter is enabled, the jobs scrape only the targets having one of the specified labels matching + # one of the values of app_values. + # Under the hood, relabel_configs with 'action=keep' are generated; consider this in case any custom extra_relabel_config is needed. + integrations_filter: + # -- When the integration filters are enabled, only the targets having one of the specified labels matching + # one of the values of app_values are scraped. Each job configuration can override this default. + enabled: true + # -- source_labels used to fetch label values in the relabel config added by the integration filters configuration + source_labels: ["app.kubernetes.io/name", "app.newrelic.io/name", "k8s-app"] + # -- app_values used to create the regex used in the relabel config added by the integration filters configuration.
+ # Note that a single regex will be created from this list, example: '.*(?i)(app1|app2|app3).*' + app_values: ["redis", "traefik", "calico", "nginx", "coredns", "kube-dns", "etcd", "cockroachdb", "velero", "harbor", "argocd"] + + # Kubernetes jobs define [kubernetes_sd_configs](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config) + # to discover and scrape Kubernetes objects. Besides, a set of relabel_configs are included in order to include some Kubernetes metadata as + # Labels. For example, address, metrics_path, URL scheme, prometheus_io_parameters, namespace, pod name, service name and labels are taken + # to set the corresponding labels. + # Please note, the relabeling allows configuring the pod/endpoints scrape using the following annotations: + # - `prometheus.io/scheme`: If the metrics endpoint is secured then you will need to set this to `https` + # - `prometheus.io/path`: If the metrics path is not `/metrics` override this. + # - `prometheus.io/port`: If the metrics are exposed on a different port to the service for service endpoints or to + # the default 9102 for pods. + # - `prometheus.io/param_`: To include additional parameters in the scrape URL. + jobs: + # 'default' scrapes all targets having 'prometheus.io/scrape: true'. + # Out of the box, since kubernetes.integrations_filter.enabled=true then only targets selected by the integration filters are considered. + - job_name_prefix: default + target_discovery: + pod: true + endpoints: true + filter: + annotations: + prometheus.io/scrape: true + # -- integrations_filter configuration for this specific job. It overrides kubernetes.integrations_filter configuration + # integrations_filter: + + # 'newrelic' scrapes all targets having 'newrelic.io/scrape: true'. 
+ # This is useful to extend the targets scraped by the 'default' job, allowlisting services leveraging the `newrelic.io/scrape` annotation + - job_name_prefix: newrelic + integrations_filter: + enabled: false + target_discovery: + pod: true + endpoints: true + filter: + annotations: + newrelic.io/scrape: true + + # -- Set up the job name prefix. The final Prometheus `job` name will be composed of + the target discovery kind. i.e.: `default-pod` + # @default -- `""` + # - job_name_prefix: + + # -- The target discovery field allows customizing how Kubernetes discovery works. + # target_discovery: + + # -- Whether pods should be discovered. + # @default -- `false` + # pod: + + # -- Whether endpoints should be discovered. + # @default -- `false` + # endpoints: + + # -- Defines filtering criteria; it is possible to set labels and/or annotations. All filters will apply (defined + # filters are taken into account as an "AND operation"). + # @default -- `{}` + # filter: + # -- Map of annotations that the targets should have. If only the annotation name is defined, the filter only checks if it exists. + # @default -- `{}` + # annotations: + + # -- Map of labels that the targets should have. If only the label name is defined, the filter only checks if it exists. + # @default -- `{}` + # labels: + + # -- Advanced configs of the Kubernetes service discovery `kubernetes_sd_config` options, + # check [prometheus documentation](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config) for details. + # Notice that using `filter` is the recommended way to filter targets to avoid adding load to the API Server. + # additional_config: + # kubeconfig_file: "" + # namespaces: {} + # selectors: {} + # attach_metadata: {} + + + # -- The HTTP resource path on which to fetch metrics from targets. + # Use `prometheus.io/path` pod/service annotation to override this or modify it here. + # @default -- `/metrics` + # metrics_path: + + # -- Optional HTTP URL parameters.
+ # Use the `prometheus.io/param_` pod/service annotation to include additional parameters in the scrape URL or modify it here. + # @default -- `{}` + # params: + + # -- Configures the protocol scheme used for requests. + # Annotate the service/pod with `prometheus.io/scheme=https` if the secured port is used or modify it here. + # @default -- `http` + # scheme: + + # -- How frequently to scrape targets from this job. + # @default -- defined in `common.scrape_interval` + # scrape_interval: + + # -- Per-scrape timeout when scraping this job. + # @default -- defined in `common.scrape_timeout` + # scrape_timeout: + + # -- Configures the scrape request's TLS settings. + # @default -- `{}` + # tls_config: + # -- CA certificate file path to validate the API server certificate with. + # @default -- `""` + # ca_file: + + # -- Certificate and key file paths for client cert authentication to the server. + # @default -- `""` + # cert_file: + # key_file: + + # -- Disables validation of the server certificate. + # @default -- `false` + # insecure_skip_verify: + + # -- Sets the `Authorization` Bearer token header on every scrape request. + # @default -- `{}` + # authorization: + # -- Sets the credentials to those read from the configured file. + # @default -- `""` + # credentials_file: + + # -- Sets the `Authorization` header on every scrape request with the configured username and password. + # @default -- `{}` + # basic_auth: + # username: + # password_file: + + # -- List of relabeling configurations. Use it to add any special filter or label manipulation before the scrape takes place. + # @default -- `[]` + # extra_relabel_config: + + # -- List of metric relabel configurations. Use it to filter metrics and labels after the scrape. + # @default -- `[]` + # extra_metric_relabel_config: + + + # -- It allows defining scrape jobs for targets with static URLs. + # @default -- See `values.yaml`. + static_targets: + # -- List of static target jobs.
By default, it defines a job to get self-metrics. Please note, if you define `static_targets.jobs` and would like to keep + self-metrics, you need to include a job like the one defined by default. + # @default -- See `values.yaml`. + jobs: + - job_name: self-metrics + skip_sharding: true # sharding is skipped to obtain self-metrics from all Prometheus servers. + targets: + - "localhost:9090" + extra_metric_relabel_config: + - source_labels: [__name__] + regex: "\ + prometheus_agent_active_series|\ + prometheus_target_interval_length_seconds|\ + prometheus_target_scrape_pool_targets|\ + prometheus_remote_storage_samples_pending|\ + prometheus_remote_storage_samples_in_total|\ + prometheus_remote_storage_samples_retried_total|\ + prometheus_agent_corruptions_total|\ + prometheus_remote_storage_shards|\ + prometheus_sd_kubernetes_events_total|\ + prometheus_agent_checkpoint_creations_failed_total|\ + prometheus_agent_checkpoint_deletions_failed_total|\ + prometheus_remote_storage_samples_dropped_total|\ + prometheus_remote_storage_samples_failed_total|\ + prometheus_sd_kubernetes_http_request_total|\ + prometheus_agent_truncate_duration_seconds_sum|\ + prometheus_build_info|\ + process_resident_memory_bytes|\ + process_virtual_memory_bytes|\ + process_cpu_seconds_total|\ + prometheus_remote_storage_bytes_total" + action: keep + + # -- The job name assigned to scraped metrics by default. + # @default -- `""`. + # - job_name: + # -- List of target URLs to be scraped by this job. + # @default -- `[]`. + # targets: + + # -- Labels assigned to all metrics scraped from the targets. + # @default -- `{}`. + # labels: + + # -- The HTTP resource path on which to fetch metrics from targets. + # @default -- `/metrics` + # metrics_path: + + # -- Optional HTTP URL parameters. + # @default -- `{}` + # params: + + # -- Configures the protocol scheme used for requests. + # @default -- `http` + # scheme: + + # -- How frequently to scrape targets from this job.
+ # @default -- defined in `common.scrape_interval` + # scrape_interval: + + # -- Per-scrape timeout when scraping this job. + # @default -- defined in `common.scrape_timeout` + # scrape_timeout: + + # -- Configures the scrape request's TLS settings. + # @default -- `{}` + # tls_config: + # -- CA certificate file path to validate the API server certificate with. + # @default -- `""` + # ca_file: + + # -- Certificate and key file paths for client cert authentication to the server. + # @default -- `""` + # cert_file: + # key_file: + + # -- Disables validation of the server certificate. + # @default -- `false` + # insecure_skip_verify: + + # -- Sets the `Authorization` Bearer token header on every scrape request. + # @default -- `{}` + # authorization: + # -- Sets the credentials to those read from the configured file. + # @default -- `""` + # credentials_file: + + # -- Sets the `Authorization` header on every scrape request with the configured username and password. + # @default -- `{}` + # basic_auth: + # username: + # password_file: + + # -- List of relabeling configurations. Use it to add any special filter or label manipulation before the scrape takes place. + # @default -- `[]` + # extra_relabel_config: + + # -- List of metric relabel configurations. Use it to filter metrics and labels after the scrape. + # @default -- `[]` + # extra_metric_relabel_config: + + + # -- It is possible to include extra scrape configuration in [prometheus format](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config). + # Please note, it should be a valid Prometheus configuration which will not be parsed by the chart. + # WARNING: extra_scrape_configs is a raw Prometheus config. Therefore, the metrics collected through it will not, by default, have the metadata (pod_name, service_name, ...) added by the configurator for the static or kubernetes jobs.
+ # This configuration should be used as a workaround whenever the kubernetes and static jobs do not cover a particular use case. + # @default -- `[]` + extra_scrape_configs: [] diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/Chart.lock b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/Chart.lock new file mode 100644 index 0000000000..d524c92920 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +digest: sha256:2e1da613fd8a52706bde45af077779c5d69e9e1641bdf5c982eaf6d1ac67a443 +generated: "2024-08-30T23:46:01.668441447Z" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/Chart.yaml new file mode 100644 index 0000000000..6f1031d968 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/Chart.yaml @@ -0,0 +1,26 @@ +apiVersion: v2 +appVersion: 2.10.8 +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +description: A Helm chart to deploy the New Relic Kube Events router +home: https://docs.newrelic.com/docs/integrations/kubernetes-integration/kubernetes-events/install-kubernetes-events-integration +icon: https://newrelic.com/themes/custom/curio/assets/mediakit/NR_logo_Horizontal.svg +keywords: +- infrastructure +- newrelic +- monitoring +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +name: nri-kube-events +sources: +- https://github.com/newrelic/nri-kube-events/ +- https://github.com/newrelic/nri-kube-events/tree/main/charts/nri-kube-events +- https://github.com/newrelic/infrastructure-agent/ +version: 3.10.8 diff --git
a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/README.md new file mode 100644 index 0000000000..a2236f984f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/README.md @@ -0,0 +1,79 @@ +# nri-kube-events + +![Version: 3.10.8](https://img.shields.io/badge/Version-3.10.8-informational?style=flat-square) ![AppVersion: 2.10.8](https://img.shields.io/badge/AppVersion-2.10.8-informational?style=flat-square) + +A Helm chart to deploy the New Relic Kube Events router + +**Homepage:** + +# Helm installation + +You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the +[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository: + +```shell +helm repo add nri-kube-events https://newrelic.github.io/nri-kube-events +helm upgrade --install nri-kube-events nri-kube-events/nri-kube-events -f your-custom-values.yaml +``` + +## Source Code + +* +* +* + +## Values managed globally + +This chart implements [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library), which +means that it honors a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found in the +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| affinity | object | `{}` | Sets pod/node affinities.
Can be configured also with `global.affinity` | +| agentHTTPTimeout | string | `"30s"` | Amount of time to wait before timing out when sending metrics to the metric forwarder | +| cluster | string | `""` | Name of the Kubernetes cluster monitored. Mandatory. Can be configured also with `global.cluster` | +| containerSecurityContext | object | `{}` | Sets security context (at container level). Can be configured also with `global.containerSecurityContext` | +| customAttributes | object | `{}` | Adds extra attributes to the cluster and all the metrics emitted to the backend. Can be configured also with `global.customAttributes` | +| customSecretLicenseKey | string | `""` | In case you don't want to have the license key in your values, this allows you to point to the key of the secret where the license key is located. Can be configured also with `global.customSecretLicenseKey` | +| customSecretName | string | `""` | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there. Can be configured also with `global.customSecretName` | +| deployment.annotations | object | `{}` | Annotations to add to the Deployment. | +| dnsConfig | object | `{}` | Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` | +| fedramp.enabled | bool | `false` | Enables FedRAMP. Can be configured also with `global.fedramp.enabled` | +| forwarder | object | `{"resources":{}}` | Resources for the forwarder sidecar container | +| fullnameOverride | string | `""` | Override the full name of the release | +| hostNetwork | bool | `false` | Sets pod's hostNetwork.
Can be configured also with `global.hostNetwork` | +| images | object | See `values.yaml` | Images used by the chart for the integration and agents | +| images.agent | object | See `values.yaml` | Image for the New Relic Infrastructure Agent sidecar | +| images.integration | object | See `values.yaml` | Image for the New Relic Kubernetes integration | +| images.pullSecrets | list | `[]` | The secrets that are needed to pull images from a custom registry. | +| labels | object | `{}` | Additional labels for chart objects | +| licenseKey | string | `""` | Sets the license key to use. Can be configured also with `global.licenseKey` | +| nameOverride | string | `""` | Override the name of the chart | +| nodeSelector | object | `{}` | Sets pod's node selector. Can be configured also with `global.nodeSelector` | +| nrStaging | bool | `false` | Send the metrics to the staging backend. Requires a valid staging license key. Can be configured also with `global.nrStaging` | +| podAnnotations | object | `{}` | Annotations to add to the pod. | +| podLabels | object | `{}` | Additional labels for chart pods | +| podSecurityContext | object | `{}` | Sets security context (at pod level). Can be configured also with `global.podSecurityContext` | +| priorityClassName | string | `""` | Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` | +| proxy | string | `""` | Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port`. Can be configured also with `global.proxy` | +| rbac.create | bool | `true` | Specifies whether RBAC resources should be created | +| resources | object | `{}` | Resources for the integration container | +| scrapers | object | See `values.yaml` | Configure the various kinds of scrapers that should be run.
| +| serviceAccount | object | See `values.yaml` | Settings controlling ServiceAccount creation | +| serviceAccount.create | bool | `true` | Specifies whether a ServiceAccount should be created | +| sinks | object | See `values.yaml` | Configure where the metrics will be written. Mostly for debugging purposes. | +| sinks.newRelicInfra | bool | `true` | The newRelicInfra sink sends all events to New Relic. | +| sinks.stdout | bool | `false` | Enable the stdout sink to also see all events in the logs. | +| tolerations | list | `[]` | Sets pod's tolerations to node taints. Can be configured also with `global.tolerations` | +| verboseLog | bool | `false` | Enables debug logs for this integration, or for all integrations if set globally. Can be configured also with `global.verboseLog` | + +## Maintainers + +* [juanjjaramillo](https://github.com/juanjjaramillo) +* [csongnr](https://github.com/csongnr) +* [dbudziwojskiNR](https://github.com/dbudziwojskiNR) diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/README.md.gotmpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/README.md.gotmpl new file mode 100644 index 0000000000..e77eb7f14a --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/README.md.gotmpl @@ -0,0 +1,43 @@ +{{ template "chart.header" . }} +{{ template "chart.deprecationWarning" . }} + +{{ template "chart.badgesSection" . }} + +{{ template "chart.description" . }} + +{{ template "chart.homepageLine" . }} + +# Helm installation + +You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the +[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository: + +```shell +helm repo add nri-kube-events https://newrelic.github.io/nri-kube-events +helm upgrade --install nri-kube-events nri-kube-events/nri-kube-events -f your-custom-values.yaml +``` + +{{ template "chart.sourcesSection" .
}} + +## Values managed globally + +This chart implements [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library), which +means that it honors a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found in the +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). + +{{ template "chart.valuesSection" . }} + +{{ if .Maintainers }} +## Maintainers +{{ range .Maintainers }} +{{- if .Name }} +{{- if .Url }} +* [{{ .Name }}]({{ .Url }}) +{{- else }} +* {{ .Name }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/.helmignore new file mode 100644 index 0000000000..0e8a0eb36f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line.
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/Chart.yaml new file mode 100644 index 0000000000..f2ee5497ee --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/Chart.yaml @@ -0,0 +1,17 @@ +apiVersion: v2 +description: Provides helpers to provide consistency on all the charts +keywords: +- newrelic +- chart-library +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +- name: kang-makes + url: https://github.com/kang-makes +name: common-library +type: library +version: 1.3.0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/DEVELOPERS.md b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/DEVELOPERS.md new file mode 100644 index 0000000000..7208c673ec --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/DEVELOPERS.md @@ -0,0 +1,747 @@ +# Functions/templates documented for chart writers +Here is some rough documentation separated by the file that contains the function, the function +name and how to use it. We are not covering functions that start with `_` (e.g. +`newrelic.common.license._licenseKey`) because they are used internally by this library for +other helpers. Helm does not have the concept of "public" or "private" functions/templates so +this is a convention of ours. + +## _naming.tpl +These functions are used to name objects. 
 + +### `newrelic.common.naming.name` +This is the same as the idiomatic `CHART-NAME.name` that is created when you use `helm create`. + +It honors `.Values.nameOverride`. + +Usage: +```mustache +{{ include "newrelic.common.naming.name" . }} +``` + +### `newrelic.common.naming.fullname` +This is the same as the idiomatic `CHART-NAME.fullname` that is created when you use `helm create`. + +It honors `.Values.fullnameOverride`. + +Usage: +```mustache +{{ include "newrelic.common.naming.fullname" . }} +``` + +### `newrelic.common.naming.chart` +This is the same as the idiomatic `CHART-NAME.chart` that is created when you use `helm create`. + +It is mostly useless for chart writers. It is used internally for templating the labels but there +is no reason to keep it "private". + +Usage: +```mustache +{{ include "newrelic.common.naming.chart" . }} +``` + +### `newrelic.common.naming.truncateToDNS` +This is a useful template to trim a string to 63 chars, ensuring it does not end with a dash (`-`). +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). + +Usage: +```mustache +{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-enought-long-to-brak-something" }} +{{- $truncatedName := include "newrelic.common.naming.truncateToDNS" $nameToTruncate }} +{{- $truncatedName }} +{{- /* This should print: a-really-really-really-really-REALLY-long-string-that-should-be */ -}} +``` + +### `newrelic.common.naming.truncateToDNSWithSuffix` +This template function is the same as the above but instead of receiving a string you should give a `dict` +with a `name` and a `suffix`.
This function will join them with a dash (`-`) and trim the `name` so that the +result of `name-suffix` is no more than 63 chars. + +Usage: +```mustache +{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-enought-long-to-brak-something" }} +{{- $suffix := "A-NOT-SO-LONG-SUFFIX" }} +{{- $truncatedName := include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" $nameToTruncate "suffix" $suffix) }} +{{- $truncatedName }} +{{- /* This should print: a-really-really-really-really-REALLY-long-A-NOT-SO-LONG-SUFFIX */ -}} +``` + + + +## _labels.tpl +### `newrelic.common.labels`, `newrelic.common.labels.selectorLabels` and `newrelic.common.labels.podLabels` +These are functions that are used to label objects. They are configured by this `values.yaml`: +```yaml +global: + podLabels: {} # included in all the pods of all the charts that implement this library + labels: {} # included in all the objects of all the charts that implement this library +podLabels: {} # included in all the pods of this chart +labels: {} # included in all the objects of this chart +``` + +Label maps are merged from global to local values. + +A chart writer should use them like this: +```mustache +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 6 }} + template: + metadata: + labels: + {{- include "newrelic.common.labels.podLabels" . | nindent 8 }} +``` + +`newrelic.common.labels.podLabels` includes `newrelic.common.labels.selectorLabels` automatically. + + + +## _priority-class-name.tpl +### `newrelic.common.priorityClassName` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + priorityClassName: "" +priorityClassName: "" +``` + +Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this +library to work properly.
If in your values a non-falsy `priorityClassName` is found, the global +one is always ignored. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} +``` + + + +## _hostnetwork.tpl +### `newrelic.common.hostNetwork` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + hostNetwork: # Note that this is empty (nil) +hostNetwork: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your +values a `hostNetwork` is defined, the global one is always ignored. + +This function returns "true" or "" (empty string) so it can be used for evaluating conditionals. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.hostNetwork" . }} + hostNetwork: {{ . }} + {{- end }} +``` + +### `newrelic.common.hostNetwork.value` +This function is an abstraction of the function above but returns "true" or "false" directly. + +Be careful when using this with an `if`, as Helm evaluates the string "false" as `true`. + +Usage (example in a pod spec): +```mustache +spec: + hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }} +``` + + + +## _dnsconfig.tpl +### `newrelic.common.dnsConfig` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + dnsConfig: {} +dnsConfig: {} +``` + +Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this +library to work properly. If in your values a non-falsy `dnsConfig` is found, the global +one is always ignored. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 4 }} + {{- end }} +``` + + + +## _images.tpl +These functions help us to deal with how images are templated.
This allows setting, globally, the registries +from which to fetch images, while being flexible enough to fit different maps of images +and deployments with one or more images. This is an example of a complex `values.yaml` that +we are going to use during the documentation of these functions: + +```yaml +global: + images: + registry: nexus-3-instance.internal.clients-domain.tld +jobImage: + registry: # defaults to "example.tld" when empty in these examples + repository: ingress-nginx/kube-webhook-certgen + tag: v1.1.1 + pullPolicy: IfNotPresent + pullSecrets: [] +images: + integration: + registry: + repository: newrelic/nri-kube-events + tag: 1.8.0 + pullPolicy: IfNotPresent + agent: + registry: + repository: newrelic/k8s-events-forwarder + tag: 1.22.0 + pullPolicy: IfNotPresent + pullSecrets: [] +``` + +### `newrelic.common.images.image` +This will return a string with the image ready to be downloaded, including the registry, the image and the tag. +`defaultRegistry` is used to keep the `registry` field empty in `values.yaml` so you can override the image using +`global.images.registry` or your local `jobImage.registry`, and still be able to fall back to a registry that is not `docker.io` +(or the default repository that the client could have set in the CRI). + +Usage: +```mustache +{{- /* For the integration */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.agent "context" .) }} +{{- /* For jobImage */}} +{{ include "newrelic.common.images.image" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }} +``` + +### `newrelic.common.images.registry` +It returns the registry from the global or local values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed.
 + +Usage: +```mustache +{{- /* For the integration */}} +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.agent "context" .) }} +{{- /* For jobImage */}} +{{ include "newrelic.common.images.registry" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }} +``` + +### `newrelic.common.images.repository` +It returns the image repository from the values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. + +Usage: +```mustache +{{- /* For jobImage */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.jobImage "context" .) }} +{{- /* For the integration */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.agent "context" .) }} +``` + +### `newrelic.common.images.tag` +It returns the image's tag from the values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. + +Usage: +```mustache +{{- /* For jobImage */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.jobImage "context" .) }} +{{- /* For the integration */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.agent "context" .) }} +``` + +### `newrelic.common.images.renderPullSecrets` +It returns a merged map that contains the pull secrets from the global configuration and the local one.
 + +Usage: +```mustache +{{- /* For jobImage */}} +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.jobImage.pullSecrets "context" .) }} +{{- /* For the integration */}} +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }} +``` + + + +## _serviceaccount.tpl +These functions are used to evaluate whether the service account should be created, with which name, and which annotations to add to it. + +The functions that the common library has implemented for service accounts are: +* `newrelic.common.serviceAccount.create` +* `newrelic.common.serviceAccount.name` +* `newrelic.common.serviceAccount.annotations` + +Usage: +```mustache +{{- if include "newrelic.common.serviceAccount.create" . -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- with (include "newrelic.common.serviceAccount.annotations" .) }} + annotations: + {{- . | nindent 4 }} + {{- end }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} +{{- end }} +``` + + + +## _affinity.tpl, _nodeselector.tpl and _tolerations.tpl +These three files are almost the same and they follow the idiomatic way of `helm create`. + +Each function also looks for a global value, like the other helpers. +```yaml +global: + affinity: {} + nodeSelector: {} + tolerations: [] +affinity: {} +nodeSelector: {} +tolerations: [] +``` + +The values here are replaced instead of merged. If a value at root level is found, the global one is ignored. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.nodeSelector" . }} + nodeSelector: + {{- . | nindent 4 }} + {{- end }} + {{- with include "newrelic.common.affinity" . }} + affinity: + {{- .
| nindent 4 }} + {{- end }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . | nindent 4 }} + {{- end }} +``` + + + +## _agent-config.tpl +### `newrelic.common.agentConfig.defaults` +This returns a YAML config that the agent can use directly, including other options from the values file such as verbose mode, +custom attributes, and FedRAMP. + +Usage: +```mustache +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "agent-config") }} + namespace: {{ .Release.Namespace }} +data: + newrelic-infra.yml: |- + # This is the configuration file for the infrastructure agent. See: + # https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ + {{- include "newrelic.common.agentConfig.defaults" . | nindent 4 }} +``` + + + +## _cluster.tpl +### `newrelic.common.cluster` +Returns the cluster name. + +Usage: +```mustache +{{ include "newrelic.common.cluster" . }} +``` + + + +## _custom-attributes.tpl +### `newrelic.common.customAttributes` +Returns custom attributes in YAML format. + +Usage: +```mustache +apiVersion: v1 +kind: ConfigMap +metadata: + name: example +data: + custom-attributes.yaml: | + {{- include "newrelic.common.customAttributes" . | nindent 4 }} + custom-attributes.json: | + {{- include "newrelic.common.customAttributes" . | fromYaml | toJson | nindent 4 }} +``` + + + +## _fedramp.tpl +### `newrelic.common.fedramp.enabled` +Returns true if FedRAMP is enabled or an empty string if not. It can be safely used in conditionals as an empty string is a Helm falsiness. + +Usage: +```mustache +{{ include "newrelic.common.fedramp.enabled" . }} +``` + +### `newrelic.common.fedramp.enabled.value` +Returns true if FedRAMP is enabled or false if not.
This is to have the value of FedRAMP ready to be templated. + +Usage: +```mustache +{{ include "newrelic.common.fedramp.enabled.value" . }} +``` + + + +## _license.tpl +### `newrelic.common.license.secretName` and `newrelic.common.license.secretKeyName` +These return the secret name and the key inside the secret from which to read the license key. + +The common library will take care of using a user-provided custom secret or creating a secret that contains the license key. + +To create the secret use `newrelic.common.license.secret`. + +Usage: +```mustache +{{- if and (.Values.controlPlane.enabled) (not (include "newrelic.fargate" .)) }} +apiVersion: v1 +kind: Pod +metadata: + name: example +spec: + containers: + - name: agent + env: + - name: "NRIA_LICENSE_KEY" + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.license.secretName" . }} + key: {{ include "newrelic.common.license.secretKeyName" . }} +``` + + + +## _license_secret.tpl +### `newrelic.common.license.secret` +This function templates the secret that is used by agents and integrations with the license key provided by the user. It will +template nothing (empty string) if the user provides a custom pair of secret name and key. + +This template also fails in case the user has not provided any license key or custom secret, so no safety checks have to be done +by chart writers. + +You just need a template with these two lines: +```mustache +{{- /* Common library will take care of creating the secret or not. */ -}} +{{- include "newrelic.common.license.secret" . -}} +``` + + + +## _insights.tpl +### `newrelic.common.insightsKey.secretName` and `newrelic.common.insightsKey.secretKeyName` +These return the secret name and the key inside the secret from which to read the insights key. + +The common library will take care of using a user-provided custom secret or creating a secret that contains the insights key. + +To create the secret use `newrelic.common.insightsKey.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: statsd
+spec:
+  containers:
+    - name: statsd
+      env:
+        - name: "INSIGHTS_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.insightsKey.secretName" . }}
+              key: {{ include "newrelic.common.insightsKey.secretKeyName" . }}
+```
+
+
+
+## _insights_secret.tpl
+### `newrelic.common.insightsKey.secret`
+This function templates the secret that is used by agents and integrations with the insights key provided by the user. It
+templates nothing (an empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails if the user has not provided any insights key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.insightsKey.secret" . -}}
+```
+
+
+
+## _userkey.tpl
+### `newrelic.common.userKey.secretName` and `newrelic.common.userKey.secretKeyName`
+Returns the name of the secret and the key inside it from which to read a user key.
+
+The common library takes care of using a user-provided custom secret or creating a secret that contains the user key.
+
+To create the secret, use `newrelic.common.userKey.secret`.
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: statsd
+spec:
+  containers:
+    - name: statsd
+      env:
+        - name: "API_KEY"
+          valueFrom:
+            secretKeyRef:
+              name: {{ include "newrelic.common.userKey.secretName" . }}
+              key: {{ include "newrelic.common.userKey.secretKeyName" . }}
+```
+
+
+
+## _userkey_secret.tpl
+### `newrelic.common.userKey.secret`
+This function templates the secret that is used by agents and integrations with a user key provided by the user. It
+templates nothing (an empty string) if the user provides a custom pair of secret name and key.
+
+This template also fails if the user has not provided any API key or custom secret, so no safety checks have to be done
+by chart writers.
+
+You just need a template with these two lines:
+```mustache
+{{- /* Common library will take care of creating the secret or not. */ -}}
+{{- include "newrelic.common.userKey.secret" . -}}
+```
+
+
+
+## _region.tpl
+### `newrelic.common.region.validate`
+Given a string, returns the normalized name for the region if it is valid.
+
+This function does not need the context of the chart, only the value to be validated. The region returned
+honors the region [definition of the newrelic-client-go implementation](https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21),
+so (as of 2024/09/14) it returns the region as "US", "EU", "Staging", or "Local".
+
+If the provided region does not match any of these four, the helper calls `fail` and aborts the templating.
+
+Usage:
+```mustache
+{{ include "newrelic.common.region.validate" "us" }}
+```
+
+### `newrelic.common.region`
+It reads global and local variables for `region`:
+```yaml
+global:
+  region: # Note that this can be empty (nil) or "" (empty string)
+region: # Note that this can be empty (nil) or "" (empty string)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If a `region` is defined in your
+values, the global one is always ignored.
+
+This function adds protection by forcing users either to set the license key in their
+`values.yaml` or to specify a global or local `region` value. To understand how the `region` value
+works, read the documentation of `newrelic.common.region.validate`.
+
+The function will compute the region as US, EU, or Staging based on the license key and the
+`nrStaging` toggle. Whichever region is computed from the license/toggle can be overridden by
+the `region` value.
+
+Usage:
+```mustache
+{{ include "newrelic.common.region" . }}
+```
+
+
+
+## _low-data-mode.tpl
+### `newrelic.common.lowDataMode`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  lowDataMode: # Note that this is empty (nil)
+lowDataMode: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If a `lowDataMode` is defined in your
+values, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.lowDataMode" . }}
+```
+
+
+
+## _privileged.tpl
+### `newrelic.common.privileged`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  privileged: # Note that this is empty (nil)
+privileged: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If a `privileged` is defined in your
+values, the global one is always ignored.
+
+Chart writers can override this by putting `true` directly in their `values.yaml` to override the
+default of the common library.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged" . }}
+```
+
+### `newrelic.common.privileged.value`
+Returns true if privileged mode is enabled or false if not. This is to have the value of privileged ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.privileged.value" . }}
+```
+
+
+
+## _proxy.tpl
+### `newrelic.common.proxy`
+Returns the proxy URL configured by the user.
+
+Usage:
+```mustache
+{{ include "newrelic.common.proxy" . }}
+```
+
+
+
+## _security-context.tpl
+Use these functions to share the security context among all charts.
Useful in clusters that enforce security policies forbidding the root user (like OpenShift) or that have admission webhooks.
+
+The functions are:
+* `newrelic.common.securityContext.container`
+* `newrelic.common.securityContext.pod`
+
+Usage:
+```mustache
+apiVersion: v1
+kind: Pod
+metadata:
+  name: example
+spec:
+  spec:
+    {{- with include "newrelic.common.securityContext.pod" . }}
+    securityContext:
+      {{- . | nindent 8 }}
+    {{- end }}
+
+    containers:
+      - name: example
+        {{- with include "newrelic.common.securityContext.container" . }}
+        securityContext:
+          {{- toYaml . | nindent 12 }}
+        {{- end }}
+```
+
+
+
+## _staging.tpl
+### `newrelic.common.nrStaging`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  nrStaging: # Note that this is empty (nil)
+nrStaging: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If an `nrStaging` is defined in your
+values, the global one is always ignored.
+
+This function returns "true" or "" (empty string) so it can be used for evaluating conditionals.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging" . }}
+```
+
+### `newrelic.common.nrStaging.value`
+Returns true if staging is enabled or false if not. This is to have the staging value ready to be templated.
+
+Usage:
+```mustache
+{{ include "newrelic.common.nrStaging.value" . }}
+```
+
+
+
+## _verbose-log.tpl
+### `newrelic.common.verboseLog`
+Like almost everything in this library, it reads global and local variables:
+```yaml
+global:
+  verboseLog: # Note that this is empty (nil)
+verboseLog: # Note that this is empty (nil)
+```
+
+Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If a `verboseLog` is defined in your
+values, the global one is always ignored.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog" .
}}
+```
+
+### `newrelic.common.verboseLog.valueAsBoolean`
+Returns true if verbose is enabled or false if not. This is to have the verbose value ready to be templated as a boolean.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsBoolean" . }}
+```
+
+### `newrelic.common.verboseLog.valueAsInt`
+Returns 1 if verbose is enabled or 0 if not. This is to have the verbose value ready to be templated as an integer.
+
+Usage:
+```mustache
+{{ include "newrelic.common.verboseLog.valueAsInt" . }}
+```
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/README.md
new file mode 100644
index 0000000000..10f08ca677
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/README.md
@@ -0,0 +1,106 @@
+# Helm Common library
+
+The common library is a way to unify the UX through all the Helm charts that implement it.
+
+The tooling suite that New Relic offers is huge and growing, and this library allows setting things globally
+and locally for a single chart.
+
+## Documentation for chart writers
+
+If you are writing a chart that is going to use this library, you can check the [developers guide](/library/common-library/DEVELOPERS.md) to see all
+the functions/templates that we have implemented, what they do, and how to use them.
+
+## Values managed globally
+
+We want to have a seamless experience through all the charts, so we created this library to standardize the behavior
+of all the charts. Sadly, because of the complexity of all these integrations, not all the charts behave exactly as expected.
+
+An example is `newrelic-infrastructure`, which ignores `hostNetwork` in the control plane scraper because most users have the
+control plane listening on the node on `localhost`.
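+To illustrate the global/local interplay described above, here is a hypothetical `values.yaml` fragment for a bundle installation (the chart key `newrelic-infrastructure` and the values shown are illustrative placeholders, not a recommendation):
+
+```yaml
+# Hypothetical values.yaml for a bundle that consumes the common library.
+global:
+  cluster: my-cluster    # shared by every chart that implements the library
+  hostNetwork: true      # global default for all pods
+
+newrelic-infrastructure:
+  hostNetwork: false     # a local value always wins over the global one
+```
+
+Note that, as mentioned above, `newrelic-infrastructure` may still ignore `hostNetwork` for its control plane scraper regardless of these values.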
+
+For each chart that has a special behavior (or further information about the behavior) there is a "chart particularities" section
+in its README.md that explains what the expected behavior is.
+
+At the time of writing this, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this
+library and honor global options as described in this document.
+
+Here is a list of global options:
+
+| Global keys | Local keys | Default | Merged[1](#values-managed-globally-1) | Description |
+|-------------|------------|---------|--------------------------------------------------|-------------|
+| global.cluster | cluster | `""` | | Name of the Kubernetes cluster monitored |
+| global.licenseKey | licenseKey | `""` | | Sets the license key to use |
+| global.customSecretName | customSecretName | `""` | | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there |
+| global.customSecretLicenseKey | customSecretLicenseKey | `""` | | In case you don't want to have the license key in your values, this allows you to point to the key inside the secret where the license key is located |
+| global.podLabels | podLabels | `{}` | yes | Additional labels for chart pods |
+| global.labels | labels | `{}` | yes | Additional labels for chart objects |
+| global.priorityClassName | priorityClassName | `""` | | Sets pod's priorityClassName |
+| global.hostNetwork | hostNetwork | `false` | | Sets pod's hostNetwork |
+| global.dnsConfig | dnsConfig | `{}` | | Sets pod's dnsConfig |
+| global.images.registry | See [Further information](#values-managed-globally-2) | `""` | | Changes the registry where to get the images.
Useful when there is an internal image cache/proxy |
+| global.images.pullSecrets | See [Further information](#values-managed-globally-2) | `[]` | yes | Sets secrets to be able to fetch images |
+| global.podSecurityContext | podSecurityContext | `{}` | | Sets security context (at pod level) |
+| global.containerSecurityContext | containerSecurityContext | `{}` | | Sets security context (at container level) |
+| global.affinity | affinity | `{}` | | Sets pod/node affinities |
+| global.nodeSelector | nodeSelector | `{}` | | Sets pod's node selector |
+| global.tolerations | tolerations | `[]` | | Sets pod's tolerations to node taints |
+| global.serviceAccount.create | serviceAccount.create | `true` | | Configures whether the service account should be created or not |
+| global.serviceAccount.name | serviceAccount.name | name of the release | | Changes the name of the service account. This is honored if you disable the creation of the service account on this chart so you can use your own. |
+| global.serviceAccount.annotations | serviceAccount.annotations | `{}` | yes | Adds these annotations to the service account we create |
+| global.customAttributes | customAttributes | `{}` | | Adds extra attributes to the cluster and all the metrics emitted to the backend |
+| global.fedramp | fedramp | `false` | | Enables FedRAMP |
+| global.lowDataMode | lowDataMode | `false` | | Reduces the number of metrics sent in order to reduce costs |
+| global.privileged | privileged | Depends on the chart | | Has different behavior in each integration. See [Further information](#values-managed-globally-3) |
+| global.proxy | proxy | `""` | | Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port` |
+| global.nrStaging | nrStaging | `false` | | Sends the metrics to the staging backend.
Requires a valid staging license key |
+| global.verboseLog | verboseLog | `false` | | Sets the debug/trace logs for this integration, or for all integrations if it is set globally |
+
+### Further information
+
+#### 1. Merged
+
+Merged means that the values from global are not replaced by the local ones. Consider this example:
+```yaml
+global:
+  labels:
+    global: global
+  hostNetwork: true
+  nodeSelector:
+    global: global
+
+labels:
+  local: local
+nodeSelector:
+  local: local
+hostNetwork: false
+```
+
+These values will template `hostNetwork` to `false`, a map of labels `{ "global": "global", "local": "local" }`, and a `nodeSelector` with
+`{ "local": "local" }`.
+
+As Helm by default merges all the maps, it could be confusing that we have two behaviors (merging `labels` and replacing `nodeSelector`)
+when templating the `values` from global to local. This is the rationale behind it:
+* `hostNetwork` is templated to `false` because the local value overrides the one defined globally.
+* `labels` are merged because the user may want to label all the New Relic pods at once and label other solution pods differently for
+  clarity's sake.
+* `nodeSelector` does not merge like `labels` because merging could make it harder to overwrite/delete a selector that comes from global,
+  given the logic that Helm follows when merging maps.
+
+
+#### 2. Fine grain registries
+
+Some charts have only one image while others can have two or more. The local path for the registry can change depending
+on the chart itself.
+
+As this is mostly unique per Helm chart, you should take a look at the chart's values table (or directly at the `values.yaml` file) to see all the
+images that you can change.
+
+This should only be needed if you have an advanced setup that forces you to have enough granularity to force a proxy/cache registry per integration.
+
+
+
+#### 3. Privileged mode
+
+By default, from the common library, the privileged mode is set to false.
But most of the Helm charts require this to be true to fetch more
+metrics, so you could see it set to true in some charts. The consequences of the privileged mode differ from one chart to another, so each chart that
+honors the privileged mode toggle should have a section in its README explaining its behavior with the mode enabled or disabled.
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_affinity.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_affinity.tpl
new file mode 100644
index 0000000000..1b2636754e
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_affinity.tpl
@@ -0,0 +1,10 @@
+{{- /* Defines the Pod affinity */ -}}
+{{- define "newrelic.common.affinity" -}}
+  {{- if .Values.affinity -}}
+    {{- toYaml .Values.affinity -}}
+  {{- else if .Values.global -}}
+    {{- if .Values.global.affinity -}}
+      {{- toYaml .Values.global.affinity -}}
+    {{- end -}}
+  {{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_agent-config.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_agent-config.tpl
new file mode 100644
index 0000000000..9c32861a02
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_agent-config.tpl
@@ -0,0 +1,26 @@
+{{/*
+This helper should return the defaults that all agents should have
+*/}}
+{{- define "newrelic.common.agentConfig.defaults" -}}
+{{- if include "newrelic.common.verboseLog" . }}
+log:
+  level: trace
+{{- end }}
+
+{{- if (include "newrelic.common.nrStaging" . ) }}
+staging: true
+{{- end }}
+
+{{- with include "newrelic.common.proxy" . }}
+proxy: {{ . | quote }}
+{{- end }}
+
+{{- with include "newrelic.common.fedramp.enabled" . }}
+fedramp: {{ .
}}
+{{- end }}
+
+{{- with fromYaml ( include "newrelic.common.customAttributes" . ) }}
+custom_attributes:
+  {{- toYaml . | nindent 2 }}
+{{- end }}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_cluster.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_cluster.tpl
new file mode 100644
index 0000000000..0197dd35a3
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_cluster.tpl
@@ -0,0 +1,15 @@
+{{/*
+Return the cluster
+*/}}
+{{- define "newrelic.common.cluster" -}}
+{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}}
+{{- $global := index .Values "global" | default dict -}}
+
+{{- if .Values.cluster -}}
+  {{- .Values.cluster -}}
+{{- else if $global.cluster -}}
+  {{- $global.cluster -}}
+{{- else -}}
+  {{ fail "There is no cluster name set in either `.global.cluster' or `.cluster' in your values.yaml. Cluster name is required." }}
+{{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_custom-attributes.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_custom-attributes.tpl
new file mode 100644
index 0000000000..92020719c3
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_custom-attributes.tpl
@@ -0,0 +1,17 @@
+{{/*
+This will render custom attributes as YAML ready to be templated or used with `fromYaml`.
+*/}} +{{- define "newrelic.common.customAttributes" -}} +{{- $customAttributes := dict -}} + +{{- $global := index .Values "global" | default dict -}} +{{- if $global.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes $global.customAttributes -}} +{{- end -}} + +{{- if .Values.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes .Values.customAttributes -}} +{{- end -}} + +{{- toYaml $customAttributes -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_dnsconfig.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_dnsconfig.tpl new file mode 100644 index 0000000000..d4e40aa8af --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_dnsconfig.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod dnsConfig */ -}} +{{- define "newrelic.common.dnsConfig" -}} + {{- if .Values.dnsConfig -}} + {{- toYaml .Values.dnsConfig -}} + {{- else if .Values.global -}} + {{- if .Values.global.dnsConfig -}} + {{- toYaml .Values.global.dnsConfig -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_fedramp.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_fedramp.tpl new file mode 100644 index 0000000000..9df8d6b5e9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_fedramp.tpl @@ -0,0 +1,25 @@ +{{- /* Defines the fedRAMP flag */ -}} +{{- define "newrelic.common.fedramp.enabled" -}} + {{- if .Values.fedramp -}} + {{- if .Values.fedramp.enabled -}} + {{- .Values.fedramp.enabled -}} + {{- end -}} + {{- else if .Values.global -}} + {{- if .Values.global.fedramp -}} + {{- if .Values.global.fedramp.enabled -}} + {{- .Values.global.fedramp.enabled -}} + {{- end -}} + {{- end -}} + 
{{- end -}} +{{- end -}} + + + +{{- /* Return FedRAMP value directly ready to be templated */ -}} +{{- define "newrelic.common.fedramp.enabled.value" -}} +{{- if include "newrelic.common.fedramp.enabled" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_hostnetwork.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_hostnetwork.tpl new file mode 100644 index 0000000000..4cf017ef7e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_hostnetwork.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the hostNetwork toggle. +This helper allows to override the global `.global.hostNetwork` with the value of `.hostNetwork`. +Returns "true" if `hostNetwork` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.hostNetwork" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} + +{{- /* +`get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs + +We also want only to return when this is true, returning `false` here will template "false" (string) when doing +an `(include "newrelic.common.hostNetwork" .)`, which is not an "empty string" so it is `true` if it is used +as an evaluation somewhere else. +*/ -}} +{{- if get .Values "hostNetwork" | kindIs "bool" -}} + {{- if .Values.hostNetwork -}} + {{- .Values.hostNetwork -}} + {{- end -}} +{{- else if get $global "hostNetwork" | kindIs "bool" -}} + {{- if $global.hostNetwork -}} + {{- $global.hostNetwork -}} + {{- end -}} +{{- end -}} +{{- end -}} + + +{{- /* +Abstraction of the hostNetwork toggle. +This helper abstracts the function "newrelic.common.hostNetwork" to return true or false directly. 
+*/ -}} +{{- define "newrelic.common.hostNetwork.value" -}} +{{- if include "newrelic.common.hostNetwork" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_images.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_images.tpl new file mode 100644 index 0000000000..d4fb432905 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_images.tpl @@ -0,0 +1,94 @@ +{{- /* +Return the proper image name +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.image" -}} + {{- $registryName := include "newrelic.common.images.registry" ( dict "imageRoot" .imageRoot "defaultRegistry" .defaultRegistry "context" .context ) -}} + {{- $repositoryName := include "newrelic.common.images.repository" .imageRoot -}} + {{- $tag := include "newrelic.common.images.tag" ( dict "imageRoot" .imageRoot "context" .context) -}} + + {{- if $registryName -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag | quote -}} + {{- else -}} + {{- printf "%s:%s" $repositoryName $tag | quote -}} + {{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image registry +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.registry" -}} +{{- $globalRegistry := "" -}} +{{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- with .context.Values.global.images.registry -}} + {{- $globalRegistry = . 
-}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- $localRegistry := "" -}} +{{- if .imageRoot.registry -}} + {{- $localRegistry = .imageRoot.registry -}} +{{- end -}} + +{{- $registry := $localRegistry | default $globalRegistry | default .defaultRegistry -}} +{{- if $registry -}} + {{- $registry -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image repository +{{ include "newrelic.common.images.repository" .Values.path.to.the.image }} +*/ -}} +{{- define "newrelic.common.images.repository" -}} + {{- .repository -}} +{{- end -}} + + + +{{- /* +Return the proper image tag +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.path.to.the.image "context" .) }} +*/ -}} +{{- define "newrelic.common.images.tag" -}} + {{- .imageRoot.tag | default .context.Chart.AppVersion | toString -}} +{{- end -}} + + + +{{- /* +Return the proper Image Pull Registry Secret Names evaluating values as templates +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.path.to.the.images.pullSecrets1, .Values.path.to.the.images.pullSecrets2) "context" .) }} +*/ -}} +{{- define "newrelic.common.images.renderPullSecrets" -}} + {{- $flatlist := list }} + + {{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- if .context.Values.global.images.pullSecrets -}} + {{- range .context.Values.global.images.pullSecrets -}} + {{- $flatlist = append $flatlist . -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- range .pullSecrets -}} + {{- if not (empty .) -}} + {{- range . -}} + {{- $flatlist = append $flatlist . 
-}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- if $flatlist -}} + {{- toYaml $flatlist -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_insights.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_insights.tpl new file mode 100644 index 0000000000..895c377326 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_insights.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the Insights Key. +*/}} +{{- define "newrelic.common.insightsKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "insightskey" ) -}} +{{- include "newrelic.common.insightsKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +*/}} +{{- define "newrelic.common.insightsKey.secretKeyName" -}} +{{- include "newrelic.common.insightsKey._customSecretKey" . | default "insightsKey" -}} +{{- end -}} + +{{/* +Return local insightsKey if set, global otherwise. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._licenseKey" -}} +{{- if .Values.insightsKey -}} + {{- .Values.insightsKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.insightsKey -}} + {{- .Values.global.insightsKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the Insights Key. +This helper is for internal use. 
+*/}}
+{{- define "newrelic.common.insightsKey._customSecretName" -}}
+{{- if .Values.customInsightsKeySecretName -}}
+  {{- .Values.customInsightsKeySecretName -}}
+{{- else if .Values.global -}}
+  {{- if .Values.global.customInsightsKeySecretName -}}
+    {{- .Values.global.customInsightsKeySecretName -}}
+  {{- end -}}
+{{- end -}}
+{{- end -}}
+
+{{/*
+Return the name key for the Insights Key inside the secret.
+This helper is for internal use.
+*/}}
+{{- define "newrelic.common.insightsKey._customSecretKey" -}}
+{{- if .Values.customInsightsKeySecretKey -}}
+  {{- .Values.customInsightsKeySecretKey -}}
+{{- else if .Values.global -}}
+  {{- if .Values.global.customInsightsKeySecretKey }}
+    {{- .Values.global.customInsightsKeySecretKey -}}
+  {{- end -}}
+{{- end -}}
+{{- end -}}
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_insights_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_insights_secret.yaml.tpl
new file mode 100644
index 0000000000..556caa6ca6
--- /dev/null
+++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_insights_secret.yaml.tpl
@@ -0,0 +1,21 @@
+{{/*
+Renders the insights key secret if the user has not specified a custom secret.
+*/}}
+{{- define "newrelic.common.insightsKey.secret" }}
+{{- if not (include "newrelic.common.insightsKey._customSecretName" .) }}
+{{- /* Fail if insightsKey is empty and required: */ -}}
+{{- if not (include "newrelic.common.insightsKey._licenseKey" .) }}
+  {{- fail "You must specify an insightsKey or a customInsightsKeySecretName containing it" }}
+{{- end }}
+---
+apiVersion: v1
+kind: Secret
+metadata:
+  name: {{ include "newrelic.common.insightsKey.secretName" . }}
+  namespace: {{ .Release.Namespace }}
+  labels:
+    {{- include "newrelic.common.labels" . | nindent 4 }}
+data:
+  {{ include "newrelic.common.insightsKey.secretKeyName" .
}}: {{ include "newrelic.common.insightsKey._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_labels.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_labels.tpl new file mode 100644 index 0000000000..b025948285 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_labels.tpl @@ -0,0 +1,54 @@ +{{/* +This will render the labels that should be used in all the manifests used by the helm chart. +*/}} +{{- define "newrelic.common.labels" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- $chart := dict "helm.sh/chart" (include "newrelic.common.naming.chart" . ) -}} +{{- $managedBy := dict "app.kubernetes.io/managed-by" .Release.Service -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $labels := mustMergeOverwrite $chart $managedBy $selectorLabels -}} +{{- if .Chart.AppVersion -}} +{{- $labels = mustMergeOverwrite $labels (dict "app.kubernetes.io/version" .Chart.AppVersion) -}} +{{- end -}} + +{{- $globalUserLabels := $global.labels | default dict -}} +{{- $localUserLabels := .Values.labels | default dict -}} + +{{- $labels = mustMergeOverwrite $labels $globalUserLabels $localUserLabels -}} + +{{- toYaml $labels -}} +{{- end -}} + + + +{{/* +This will render the labels that should be used in deployments/daemonsets template pods as a selector. +*/}} +{{- define "newrelic.common.labels.selectorLabels" -}} +{{- $name := dict "app.kubernetes.io/name" ( include "newrelic.common.naming.name" . 
) -}} +{{- $instance := dict "app.kubernetes.io/instance" .Release.Name -}} + +{{- $selectorLabels := mustMergeOverwrite $name $instance -}} + +{{- toYaml $selectorLabels -}} +{{- end }} + + + +{{/* +Pod labels +*/}} +{{- define "newrelic.common.labels.podLabels" -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $global := index .Values "global" | default dict -}} +{{- $globalPodLabels := $global.podLabels | default dict }} + +{{- $localPodLabels := .Values.podLabels | default dict }} + +{{- $podLabels := mustMergeOverwrite $selectorLabels $globalPodLabels $localPodLabels -}} + +{{- toYaml $podLabels -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_license.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_license.tpl new file mode 100644 index 0000000000..cb349f6bb6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_license.tpl @@ -0,0 +1,68 @@ +{{/* +Return the name of the secret holding the License Key. +*/}} +{{- define "newrelic.common.license.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "license" ) -}} +{{- include "newrelic.common.license._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +*/}} +{{- define "newrelic.common.license.secretKeyName" -}} +{{- include "newrelic.common.license._customSecretKey" . | default "licenseKey" -}} +{{- end -}} + +{{/* +Return local licenseKey if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.license._licenseKey" -}} +{{- if .Values.licenseKey -}} + {{- .Values.licenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.licenseKey -}} + {{- .Values.global.licenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the License Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretName" -}} +{{- if .Values.customSecretName -}} + {{- .Values.customSecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretName -}} + {{- .Values.global.customSecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretKey" -}} +{{- if .Values.customSecretLicenseKey -}} + {{- .Values.customSecretLicenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretLicenseKey }} + {{- .Values.global.customSecretLicenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the user set a custom secret for the license. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._usesCustomSecret" -}} +{{- if or (include "newrelic.common.license._customSecretName" .) (include "newrelic.common.license._customSecretKey" .) -}} +true +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_license_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_license_secret.yaml.tpl new file mode 100644 index 0000000000..610a0a3370 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_license_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the license key secret if user has not specified a custom secret. 
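For readers skimming this vendored template, the `_license` helpers above resolve the secret name, the key inside it, and the license key value with a local-beats-global precedence, falling back to built-in defaults. A minimal Python sketch of that resolution (illustrative only — `resolve_license_secret` and its arguments are not chart code; `default_name` stands in for the `"<fullname>-license"` default):

```python
def resolve_license_secret(values, default_name):
    """Sketch of the _license helper precedence: local value, then global, then default.

    `values` stands in for .Values (with an optional "global" sub-dict);
    `default_name` stands in for the truncated "<fullname>-license" default.
    """
    g = values.get("global") or {}
    custom_name = values.get("customSecretName") or g.get("customSecretName")
    custom_key = values.get("customSecretLicenseKey") or g.get("customSecretLicenseKey")
    license_key = values.get("licenseKey") or g.get("licenseKey")
    if not custom_name and not license_key:
        # mirrors the `fail` in the license secret template
        raise ValueError("You must specify a licenseKey or a customSecretName containing it")
    return custom_name or default_name, custom_key or "licenseKey", license_key
```

With no custom secret configured, the chart renders its own Secret named `default_name` under the `licenseKey` key; a custom secret name/key short-circuits that.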
+*/}} +{{- define "newrelic.common.license.secret" }} +{{- if not (include "newrelic.common.license._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.license._licenseKey" .) }} + {{- fail "You must specify a licenseKey or a customSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.license.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.license.secretKeyName" . }}: {{ include "newrelic.common.license._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_low-data-mode.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_low-data-mode.tpl new file mode 100644 index 0000000000..3dd55ef2ff --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_low-data-mode.tpl @@ -0,0 +1,26 @@ +{{- /* +Abstraction of the lowDataMode toggle. +This helper allows to override the global `.global.lowDataMode` with the value of `.lowDataMode`. +Returns "true" if `lowDataMode` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.lowDataMode" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "lowDataMode" | kindIs "bool") -}} + {{- if .Values.lowDataMode -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.lowDataMode" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.lowDataMode -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "lowDataMode" | kindIs "bool" -}} + {{- if $global.lowDataMode -}} + {{- $global.lowDataMode -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_naming.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_naming.tpl new file mode 100644 index 0000000000..19fa92648c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_naming.tpl @@ -0,0 +1,73 @@ +{{/* +This is a function to be called directly with a string to truncate it to +63 chars because some Kubernetes name fields are limited to that. +*/}} +{{- define "newrelic.common.naming.truncateToDNS" -}} +{{- . | trunc 63 | trimSuffix "-" }} +{{- end }} + + + +{{- /* +Given a name and a suffix, returns a DNS-valid name that always includes the suffix, truncating the name if needed. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If the suffix is too long it gets truncated too, but it always takes precedence over the name, so a 63-char suffix would suppress the name entirely. 
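The suffix-precedence truncation rule just described can be sketched in Python (an illustrative re-implementation, not chart code; `trimSuffix "-"` maps to `removesuffix("-")`):

```python
def truncate_to_dns(name: str) -> str:
    """Truncate to the 63-char Kubernetes/DNS label limit and drop a trailing '-'."""
    return name[:63].removesuffix("-")

def truncate_to_dns_with_suffix(name: str, suffix: str) -> str:
    """The suffix always survives; the name is truncated to whatever room is left."""
    suffix = truncate_to_dns(suffix)
    max_len = max(63 - (len(suffix) + 1), 0)  # reserve one char for the "-" join
    new_name = name[:max_len].removesuffix("-")
    return f"{new_name}-{suffix}" if new_name else suffix
```

A 63-char suffix leaves `max_len` at 0, so the name is dropped and only the suffix is returned, matching the comment above.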
+Usage: +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" "" "suffix" "my-suffix" ) }} +*/ -}} +{{- define "newrelic.common.naming.truncateToDNSWithSuffix" -}} +{{- $suffix := (include "newrelic.common.naming.truncateToDNS" .suffix) -}} +{{- $maxLen := (max (sub 63 (add1 (len $suffix))) 0) -}} {{- /* We prepend "-" to the suffix so an additional character is needed */ -}} + +{{- $newName := .name | trunc ($maxLen | int) | trimSuffix "-" -}} +{{- if $newName -}} +{{- printf "%s-%s" $newName $suffix -}} +{{- else -}} +{{ $suffix }} +{{- end -}} + +{{- end -}} + + + +{{/* +Expand the name of the chart. +Uses the Chart name by default if nameOverride is not set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.name" -}} +{{- $name := .Values.nameOverride | default .Chart.Name -}} +{{- include "newrelic.common.naming.truncateToDNS" $name -}} +{{- end }} + + + +{{/* +Create a default fully qualified app name. +By default the full name will be the release name when it already contains the chart name; if not, +the two are concatenated as "<release-name>-<chart-name>". This could change if fullnameOverride or +nameOverride are set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.fullname" -}} +{{- $name := include "newrelic.common.naming.name" . -}} + +{{- if .Values.fullnameOverride -}} + {{- $name = .Values.fullnameOverride -}} +{{- else if not (contains $name .Release.Name) -}} + {{- $name = printf "%s-%s" .Release.Name $name -}} +{{- end -}} + +{{- include "newrelic.common.naming.truncateToDNS" $name -}} + +{{- end -}} + + + +{{/* +Create chart name and version as used by the chart label. +This function should not be used for naming objects. Use "common.naming.{name,fullname}" instead. 
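The `fullname` resolution above (fullnameOverride wins; otherwise the release name is prepended unless it already contains the chart name) can be sketched in Python — an illustrative model under the same 63-char truncation, not chart code:

```python
def fullname(release_name, chart_name, name_override="", fullname_override=""):
    """Sketch of "newrelic.common.naming.fullname" resolution order."""
    # "name" is nameOverride or the chart name, truncated to the DNS limit
    name = (name_override or chart_name)[:63].removesuffix("-")
    if fullname_override:
        name = fullname_override
    elif name not in release_name:
        # prepend the release name unless it already contains the chart name
        name = f"{release_name}-{name}"
    return name[:63].removesuffix("-")
```

This is why installing a release literally named after the chart avoids the doubled `<release>-<chart>` prefix.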
+*/}} +{{- define "newrelic.common.naming.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_nodeselector.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_nodeselector.tpl new file mode 100644 index 0000000000..d488873412 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_nodeselector.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod nodeSelector */ -}} +{{- define "newrelic.common.nodeSelector" -}} + {{- if .Values.nodeSelector -}} + {{- toYaml .Values.nodeSelector -}} + {{- else if .Values.global -}} + {{- if .Values.global.nodeSelector -}} + {{- toYaml .Values.global.nodeSelector -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_priority-class-name.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_priority-class-name.tpl new file mode 100644 index 0000000000..50182b7343 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_priority-class-name.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the pod priorityClassName */ -}} +{{- define "newrelic.common.priorityClassName" -}} + {{- if .Values.priorityClassName -}} + {{- .Values.priorityClassName -}} + {{- else if .Values.global -}} + {{- if .Values.global.priorityClassName -}} + {{- .Values.global.priorityClassName -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_privileged.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_privileged.tpl new file mode 100644 index 0000000000..f3ae814dd9 --- /dev/null +++ 
b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_privileged.tpl @@ -0,0 +1,28 @@ +{{- /* +This is a helper that returns whether the chart should assume the user is fine deploying privileged pods. +*/ -}} +{{- define "newrelic.common.privileged" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist. */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if get .Values "privileged" | kindIs "bool" -}} + {{- if .Values.privileged -}} + {{- .Values.privileged -}} + {{- end -}} +{{- else if get $global "privileged" | kindIs "bool" -}} + {{- if $global.privileged -}} + {{- $global.privileged -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* Return directly "true" or "false" based on the output of "newrelic.common.privileged" */ -}} +{{- define "newrelic.common.privileged.value" -}} +{{- if include "newrelic.common.privileged" . 
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_proxy.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_proxy.tpl new file mode 100644 index 0000000000..60f34c7ec1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_proxy.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the proxy */ -}} +{{- define "newrelic.common.proxy" -}} + {{- if .Values.proxy -}} + {{- .Values.proxy -}} + {{- else if .Values.global -}} + {{- if .Values.global.proxy -}} + {{- .Values.global.proxy -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_region.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_region.tpl new file mode 100644 index 0000000000..bdcacf3235 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_region.tpl @@ -0,0 +1,74 @@ +{{/* +Return the region that is being used by the user +*/}} +{{- define "newrelic.common.region" -}} +{{- if and (include "newrelic.common.license._usesCustomSecret" .) (not (include "newrelic.common.region._fromValues" .)) -}} + {{- fail "This Helm Chart is not able to compute the region. You must specify a .global.region or .region if the license is set using a custom secret." -}} +{{- end -}} + +{{- /* Defaults */ -}} +{{- $region := "us" -}} +{{- if include "newrelic.common.nrStaging" . -}} + {{- $region = "staging" -}} +{{- else if include "newrelic.common.region._isEULicenseKey" . -}} + {{- $region = "eu" -}} +{{- end -}} + +{{- include "newrelic.common.region.validate" (include "newrelic.common.region._fromValues" . | default $region ) -}} +{{- end -}} + + + +{{/* +Returns the region from the values if valid. 
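The `newrelic.common.region` computation above (an explicit value wins; otherwise staging mode or an `eu`-prefixed license key picks the region, defaulting to `us`, and the result is validated against the known set) can be sketched in Python. This is an illustrative model only — `compute_region` and its keyword arguments are not chart code:

```python
def compute_region(values, license_key="", nr_staging=False, uses_custom_secret=False):
    """Sketch of region resolution: values override, then staging, then EU prefix, then us."""
    g = values.get("global") or {}
    from_values = values.get("region") or g.get("region")
    if uses_custom_secret and not from_values:
        # mirrors the `fail`: the key prefix can't be inspected inside a custom secret
        raise ValueError("region must be set explicitly when the license uses a custom secret")
    if not from_values:
        if nr_staging:
            from_values = "staging"
        elif license_key.startswith("eu"):
            from_values = "eu"
        else:
            from_values = "us"
    mapping = {"us": "US", "eu": "EU", "staging": "Staging", "local": "Local"}
    region = from_values.lower()
    if region not in mapping:
        raise ValueError(f"the region provided is not valid: {from_values}")
    return mapping[region]
```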
This only returns the value from the `values.yaml`. +More intelligence should be used to compute the region. + +Usage: `include "newrelic.common.region.validate" "us"` +*/}} +{{- define "newrelic.common.region.validate" -}} +{{- /* Ref: https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21 */ -}} +{{- $region := . | lower -}} +{{- if eq $region "us" -}} + US +{{- else if eq $region "eu" -}} + EU +{{- else if eq $region "staging" -}} + Staging +{{- else if eq $region "local" -}} + Local +{{- else -}} + {{- fail (printf "the region provided is not valid: %s not in \"US\" \"EU\" \"Staging\" \"Local\"" .) -}} +{{- end -}} +{{- end -}} + + + +{{/* +Returns the region from the values. This only returns the value from the `values.yaml`. +More intelligence should be used to compute the region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._fromValues" -}} +{{- if .Values.region -}} + {{- .Values.region -}} +{{- else if .Values.global -}} + {{- if .Values.global.region -}} + {{- .Values.global.region -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the license is for EU region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._isEULicenseKey" -}} +{{- if not (include "newrelic.common.license._usesCustomSecret" .) -}} + {{- $license := include "newrelic.common.license._licenseKey" . 
-}} + {{- if hasPrefix "eu" $license -}} + true + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_security-context.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_security-context.tpl new file mode 100644 index 0000000000..9edfcabfd0 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_security-context.tpl @@ -0,0 +1,23 @@ +{{- /* Defines the container securityContext context */ -}} +{{- define "newrelic.common.securityContext.container" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.containerSecurityContext -}} + {{- toYaml .Values.containerSecurityContext -}} +{{- else if $global.containerSecurityContext -}} + {{- toYaml $global.containerSecurityContext -}} +{{- end -}} +{{- end -}} + + + +{{- /* Defines the pod securityContext context */ -}} +{{- define "newrelic.common.securityContext.pod" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.podSecurityContext -}} + {{- toYaml .Values.podSecurityContext -}} +{{- else if $global.podSecurityContext -}} + {{- toYaml $global.podSecurityContext -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_serviceaccount.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_serviceaccount.tpl new file mode 100644 index 0000000000..2d352f6ea9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_serviceaccount.tpl @@ -0,0 +1,90 @@ +{{- /* Defines if the service account has to be created or not */ -}} +{{- define "newrelic.common.serviceAccount.create" -}} +{{- $valueFound := false -}} + +{{- /* Look for a global creation of a service account */ -}} +{{- if get .Values "serviceAccount" | kindIs 
"map" -}} + {{- if (get .Values.serviceAccount "create" | kindIs "bool") -}} + {{- $valueFound = true -}} + {{- if .Values.serviceAccount.create -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.serviceAccount.name" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.serviceAccount.create -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* Look for a local creation of a service account */ -}} +{{- if not $valueFound -}} + {{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} + {{- $global := index .Values "global" | default dict -}} + {{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "create" | kindIs "bool" -}} + {{- $valueFound = true -}} + {{- if $global.serviceAccount.create -}} + {{- $global.serviceAccount.create -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* In case no serviceAccount value has been found, default to "true" */ -}} +{{- if not $valueFound -}} +true +{{- end -}} +{{- end -}} + + + +{{- /* Defines the name of the service account */ -}} +{{- define "newrelic.common.serviceAccount.name" -}} +{{- $localServiceAccount := "" -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "name" | kindIs "string") -}} + {{- $localServiceAccount = .Values.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := "" -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "name" | kindIs "string" -}} + {{- $globalServiceAccount = $global.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- if (include "newrelic.common.serviceAccount.create" .) 
-}} + {{- $localServiceAccount | default $globalServiceAccount | default (include "newrelic.common.naming.fullname" .) -}} +{{- else -}} + {{- $localServiceAccount | default $globalServiceAccount | default "default" -}} +{{- end -}} +{{- end -}} + + + +{{- /* Merge the global and local annotations for the service account */ -}} +{{- define "newrelic.common.serviceAccount.annotations" -}} +{{- $localServiceAccount := dict -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if get .Values.serviceAccount "annotations" -}} + {{- $localServiceAccount = .Values.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := dict -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "annotations" -}} + {{- $globalServiceAccount = $global.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $merged := mustMergeOverwrite $globalServiceAccount $localServiceAccount -}} + +{{- if $merged -}} + {{- toYaml $merged -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_staging.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_staging.tpl new file mode 100644 index 0000000000..bd9ad09bb9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_staging.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the nrStaging toggle. +This helper allows to override the global `.global.nrStaging` with the value of `.nrStaging`. 
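The service-account helpers above resolve the account name with a local-then-global chain: an explicit `serviceAccount.name` always wins; otherwise the chart's fullname is used when the account is created (the default), and the namespace's `default` account when creation is disabled. A Python sketch of that chain (illustrative only, not chart code):

```python
def service_account_name(values, fullname):
    """Sketch of "newrelic.common.serviceAccount.name" resolution."""
    g = values.get("global") or {}
    local_sa = values.get("serviceAccount") if isinstance(values.get("serviceAccount"), dict) else {}
    global_sa = g.get("serviceAccount") if isinstance(g.get("serviceAccount"), dict) else {}
    # "create" defaults to True when neither scope sets a boolean
    if isinstance(local_sa.get("create"), bool):
        create = local_sa["create"]
    elif isinstance(global_sa.get("create"), bool):
        create = global_sa["create"]
    else:
        create = True
    explicit = local_sa.get("name") or global_sa.get("name")
    return explicit or (fullname if create else "default")
```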
+Returns "true" if `nrStaging` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.nrStaging" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "nrStaging" | kindIs "bool") -}} + {{- if .Values.nrStaging -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.nrStaging" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.nrStaging -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "nrStaging" | kindIs "bool" -}} + {{- if $global.nrStaging -}} + {{- $global.nrStaging -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Returns "true" or "false" directly instead of an empty string (Helm falsiness) based on the output of "newrelic.common.nrStaging" +*/ -}} +{{- define "newrelic.common.nrStaging.value" -}} +{{- if include "newrelic.common.nrStaging" . 
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_tolerations.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_tolerations.tpl new file mode 100644 index 0000000000..e016b38e27 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_tolerations.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod tolerations */ -}} +{{- define "newrelic.common.tolerations" -}} + {{- if .Values.tolerations -}} + {{- toYaml .Values.tolerations -}} + {{- else if .Values.global -}} + {{- if .Values.global.tolerations -}} + {{- toYaml .Values.global.tolerations -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_userkey.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_userkey.tpl new file mode 100644 index 0000000000..982ea8e09d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_userkey.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the API Key. +*/}} +{{- define "newrelic.common.userKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "userkey" ) -}} +{{- include "newrelic.common.userKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +*/}} +{{- define "newrelic.common.userKey.secretKeyName" -}} +{{- include "newrelic.common.userKey._customSecretKey" . | default "userKey" -}} +{{- end -}} + +{{/* +Return local API Key if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.userKey._userKey" -}} +{{- if .Values.userKey -}} + {{- .Values.userKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.userKey -}} + {{- .Values.global.userKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the API Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretName" -}} +{{- if .Values.customUserKeySecretName -}} + {{- .Values.customUserKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretName -}} + {{- .Values.global.customUserKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretKey" -}} +{{- if .Values.customUserKeySecretKey -}} + {{- .Values.customUserKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretKey }} + {{- .Values.global.customUserKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_userkey_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_userkey_secret.yaml.tpl new file mode 100644 index 0000000000..b979856548 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_userkey_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the user key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.userKey.secret" }} +{{- if not (include "newrelic.common.userKey._customSecretName" .) }} +{{- /* Fail if user key is empty and required: */ -}} +{{- if not (include "newrelic.common.userKey._userKey" .) 
}} + {{- fail "You must specify a userKey or a customUserKeySecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.userKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.userKey.secretKeyName" . }}: {{ include "newrelic.common.userKey._userKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_verbose-log.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_verbose-log.tpl new file mode 100644 index 0000000000..2286d46815 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/templates/_verbose-log.tpl @@ -0,0 +1,54 @@ +{{- /* +Abstraction of the verbose toggle. +This helper allows to override the global `.global.verboseLog` with the value of `.verboseLog`. +Returns "true" if `verbose` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.verboseLog" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "verboseLog" | kindIs "bool") -}} + {{- if .Values.verboseLog -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.verboseLog" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.verboseLog -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "verboseLog" | kindIs "bool" -}} + {{- if $global.verboseLog -}} + {{- $global.verboseLog -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return true or false directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsBoolean" -}} +{{- if include "newrelic.common.verboseLog" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return 1 or 0 directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsInt" -}} +{{- if include "newrelic.common.verboseLog" . -}} +1 +{{- else -}} +0 +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/values.yaml new file mode 100644 index 0000000000..75e2d112ad --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/charts/common-library/values.yaml @@ -0,0 +1 @@ +# values are not needed for the library chart, however this file is still needed for helm lint to work. 
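The `verboseLog` helper above, like `lowDataMode` and `nrStaging`, follows one pattern: a local boolean overrides the global one, and the template emits `"true"` when enabled and an empty string (Helm falsiness) otherwise, with `.valueAsBoolean`/`.valueAsInt` variants for consumers that need a literal value. A Python sketch of the shared pattern (illustrative only, not chart code):

```python
def toggle(values, key):
    """Sketch of the shared toggle pattern: local bool beats global bool.

    Returns "true" when enabled and "" otherwise, mimicking Helm truthiness,
    where templating a literal "false" string would still count as truthy.
    """
    g = values.get("global") or {}
    local = values.get(key)
    if isinstance(local, bool):
        return "true" if local else ""
    if isinstance(g.get(key), bool):
        return "true" if g[key] else ""
    return ""

def verbose_log_as_int(values):
    """Mimics "newrelic.common.verboseLog.valueAsInt": 1 when enabled, 0 otherwise."""
    return 1 if toggle(values, "verboseLog") else 0
```

Note a local `false` silences a global `true`; only an unset local value falls through to the global scope.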
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-bare-minimum-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-bare-minimum-values.yaml new file mode 100644 index 0000000000..3fb7df0506 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-bare-minimum-values.yaml @@ -0,0 +1,3 @@ +global: + licenseKey: 1234567890abcdef1234567890abcdef12345678 + cluster: test-cluster diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-custom-attributes-as-map.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-custom-attributes-as-map.yaml new file mode 100644 index 0000000000..9fec33dc63 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-custom-attributes-as-map.yaml @@ -0,0 +1,12 @@ +global: + licenseKey: 1234567890abcdef1234567890abcdef12345678 + cluster: test-cluster + +customAttributes: + test_tag_label: test_tag_value + +image: + kubeEvents: + repository: e2e/nri-kube-events + tag: test + pullPolicy: IfNotPresent diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-custom-attributes-as-string.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-custom-attributes-as-string.yaml new file mode 100644 index 0000000000..e12cba339c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-custom-attributes-as-string.yaml @@ -0,0 +1,11 @@ +global: + licenseKey: 1234567890abcdef1234567890abcdef12345678 + cluster: test-cluster + +customAttributes: '{"test_tag_label": "test_tag_value"}' + +image: + kubeEvents: + repository: e2e/nri-kube-events + tag: test + pullPolicy: IfNotPresent diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-values.yaml new file mode 100644 index 0000000000..4e517d6667 --- /dev/null +++ 
b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/ci/test-values.yaml @@ -0,0 +1,60 @@ +global: + licenseKey: 1234567890abcdef1234567890abcdef12345678 + cluster: test-cluster + +sinks: + # Enable the stdout sink to also see all events in the logs. + stdout: true + # The newRelicInfra sink sends all events to New relic. + newRelicInfra: true + +customAttributes: + test_tag_label: test_tag_value + +config: + accountID: 111 + region: EU + +rbac: + create: true + +serviceAccount: + create: true + +podAnnotations: + annotation1: "annotation" + +nodeSelector: + kubernetes.io/os: linux + +tolerations: + - key: "key1" + effect: "NoSchedule" + operator: "Exists" + +affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: kubernetes.io/os + operator: In + values: + - linux + +hostNetwork: true + +dnsConfig: + nameservers: + - 1.2.3.4 + searches: + - my.dns.search.suffix + options: + - name: ndots + value: "1" + +image: + kubeEvents: + repository: e2e/nri-kube-events + tag: test + pullPolicy: IfNotPresent diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/NOTES.txt b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/NOTES.txt new file mode 100644 index 0000000000..3fd06b4a29 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/NOTES.txt @@ -0,0 +1,3 @@ +{{ include "nri-kube-events.compatibility.message.securityContext.runAsUser" . }} + +{{ include "nri-kube-events.compatibility.message.images" . 
}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/_helpers.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/_helpers.tpl new file mode 100644 index 0000000000..5d0b8d257d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/_helpers.tpl @@ -0,0 +1,45 @@ +{{/* vim: set filetype=mustache: */}} + +{{- define "nri-kube-events.securityContext.pod" -}} +{{- $defaults := fromYaml ( include "nriKubernetes.securityContext.podDefaults" . ) -}} +{{- $compatibilityLayer := include "nri-kube-events.compatibility.securityContext.pod" . | fromYaml -}} +{{- $commonLibrary := fromYaml ( include "newrelic.common.securityContext.pod" . ) -}} + +{{- $finalSecurityContext := dict -}} +{{- if $commonLibrary -}} + {{- $finalSecurityContext = mustMergeOverwrite $commonLibrary $compatibilityLayer -}} +{{- else -}} + {{- $finalSecurityContext = mustMergeOverwrite $defaults $compatibilityLayer -}} +{{- end -}} +{{- toYaml $finalSecurityContext -}} +{{- end -}} + + + +{{- /* These are the defaults that are used for all the containers in this chart */ -}} +{{- define "nriKubernetes.securityContext.podDefaults" -}} +runAsUser: 1000 +runAsNonRoot: true +{{- end -}} + + + +{{- define "nri-kube-events.securityContext.container" -}} +{{- if include "newrelic.common.securityContext.container" . -}} +{{- include "newrelic.common.securityContext.container" . -}} +{{- else -}} +privileged: false +allowPrivilegeEscalation: false +readOnlyRootFilesystem: true +{{- end -}} +{{- end -}} + + + +{{- /* */ -}} +{{- define "nri-kube-events.agentConfig" -}} +is_forward_only: true +http_server_enabled: true +http_server_port: 8001 +{{ include "newrelic.common.agentConfig.defaults" . 
}} +{{- end -}} \ No newline at end of file diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/_helpers_compatibility.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/_helpers_compatibility.tpl new file mode 100644 index 0000000000..059cfff12a --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/_helpers_compatibility.tpl @@ -0,0 +1,262 @@ +{{/* +Returns a dictionary with legacy runAsUser config. +We know that it only has "one line" but it is separated from the rest of the helpers because it is a temporary thing +that we should EOL. The EOL time of this will be marked when we GA the deprecation of Helm v2. +*/}} +{{- define "nri-kube-events.compatibility.securityContext.pod" -}} +{{- if .Values.runAsUser -}} +runAsUser: {{ .Values.runAsUser }} +{{- end -}} +{{- end -}} + + + +{{- /* +Functions to get values from the globals instead of the common library +We do this because it could be difficult to see what is going on under +the hood if we use the common-library here. 
So it is easy to read something +like: +{{- $registry := $oldRegistry | default $newRegistry | default $globalRegistry -}} +*/ -}} +{{- define "nri-kube-events.compatibility.global.registry" -}} + {{- if .Values.global -}} + {{- if .Values.global.images -}} + {{- if .Values.global.images.registry -}} + {{- .Values.global.images.registry -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* Functions to fetch integration image configuration from the old .Values.image */ -}} +{{- /* integration's old registry */ -}} +{{- define "nri-kube-events.compatibility.old.integration.registry" -}} + {{- if .Values.image -}} + {{- if .Values.image.kubeEvents -}} + {{- if .Values.image.kubeEvents.registry -}} + {{- .Values.image.kubeEvents.registry -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* integration's old repository */ -}} +{{- define "nri-kube-events.compatibility.old.integration.repository" -}} + {{- if .Values.image -}} + {{- if .Values.image.kubeEvents -}} + {{- if .Values.image.kubeEvents.repository -}} + {{- .Values.image.kubeEvents.repository -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* integration's old tag */ -}} +{{- define "nri-kube-events.compatibility.old.integration.tag" -}} + {{- if .Values.image -}} + {{- if .Values.image.kubeEvents -}} + {{- if .Values.image.kubeEvents.tag -}} + {{- .Values.image.kubeEvents.tag -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* integration's old imagePullPolicy */ -}} +{{- define "nri-kube-events.compatibility.old.integration.pullPolicy" -}} + {{- if .Values.image -}} + {{- if .Values.image.kubeEvents -}} + {{- if .Values.image.kubeEvents.pullPolicy -}} + {{- .Values.image.kubeEvents.pullPolicy -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* Functions to fetch agent image configuration from the old .Values.image */ -}} +{{- /* agent's old registry */ -}} +{{- define "nri-kube-events.compatibility.old.agent.registry" -}} + 
{{- if .Values.image -}} + {{- if .Values.image.infraAgent -}} + {{- if .Values.image.infraAgent.registry -}} + {{- .Values.image.infraAgent.registry -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* agent's old repository */ -}} +{{- define "nri-kube-events.compatibility.old.agent.repository" -}} + {{- if .Values.image -}} + {{- if .Values.image.infraAgent -}} + {{- if .Values.image.infraAgent.repository -}} + {{- .Values.image.infraAgent.repository -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* agent's old tag */ -}} +{{- define "nri-kube-events.compatibility.old.agent.tag" -}} + {{- if .Values.image -}} + {{- if .Values.image.infraAgent -}} + {{- if .Values.image.infraAgent.tag -}} + {{- .Values.image.infraAgent.tag -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* agent's old imagePullPolicy */ -}} +{{- define "nri-kube-events.compatibility.old.agent.pullPolicy" -}} + {{- if .Values.image -}} + {{- if .Values.image.infraAgent -}} + {{- if .Values.image.infraAgent.pullPolicy -}} + {{- .Values.image.infraAgent.pullPolicy -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + + + +{{/* +Creates the image string needed to pull the integration image respecting the breaking change we made in the values file +*/}} +{{- define "nri-kube-events.compatibility.images.integration" -}} +{{- $globalRegistry := include "nri-kube-events.compatibility.global.registry" . -}} +{{- $oldRegistry := include "nri-kube-events.compatibility.old.integration.registry" . -}} +{{- $newRegistry := .Values.images.integration.registry -}} +{{- $registry := $oldRegistry | default $newRegistry | default $globalRegistry -}} + +{{- $oldRepository := include "nri-kube-events.compatibility.old.integration.repository" . -}} +{{- $newRepository := .Values.images.integration.repository -}} +{{- $repository := $oldRepository | default $newRepository }} + +{{- $oldTag := include "nri-kube-events.compatibility.old.integration.tag" . 
-}} +{{- $newTag := .Values.images.integration.tag -}} +{{- $tag := $oldTag | default $newTag | default .Chart.AppVersion -}} + +{{- if $registry -}} + {{- printf "%s/%s:%s" $registry $repository $tag -}} +{{- else -}} + {{- printf "%s:%s" $repository $tag -}} +{{- end -}} +{{- end -}} + + + +{{/* +Creates the image string needed to pull the agent's image respecting the breaking change we made in the values file +*/}} +{{- define "nri-kube-events.compatibility.images.agent" -}} +{{- $globalRegistry := include "nri-kube-events.compatibility.global.registry" . -}} +{{- $oldRegistry := include "nri-kube-events.compatibility.old.agent.registry" . -}} +{{- $newRegistry := .Values.images.agent.registry -}} +{{- $registry := $oldRegistry | default $newRegistry | default $globalRegistry -}} + +{{- $oldRepository := include "nri-kube-events.compatibility.old.agent.repository" . -}} +{{- $newRepository := .Values.images.agent.repository -}} +{{- $repository := $oldRepository | default $newRepository }} + +{{- $oldTag := include "nri-kube-events.compatibility.old.agent.tag" . -}} +{{- $newTag := .Values.images.agent.tag -}} +{{- $tag := $oldTag | default $newTag -}} + +{{- if $registry -}} + {{- printf "%s/%s:%s" $registry $repository $tag -}} +{{- else -}} + {{- printf "%s:%s" $repository $tag -}} +{{- end -}} +{{- end -}} + + + +{{/* +Returns the pull policy for the integration image taking into account that we made a breaking change on the values path. +*/}} +{{- define "nri-kube-events.compatibility.images.pullPolicy.integration" -}} +{{- $old := include "nri-kube-events.compatibility.old.integration.pullPolicy" . -}} +{{- $new := .Values.images.integration.pullPolicy -}} + +{{- $old | default $new -}} +{{- end -}} + + + +{{/* +Returns the pull policy for the agent image taking into account that we made a breaking change on the values path. 
+*/}} +{{- define "nri-kube-events.compatibility.images.pullPolicy.agent" -}} +{{- $old := include "nri-kube-events.compatibility.old.agent.pullPolicy" . -}} +{{- $new := .Values.images.agent.pullPolicy -}} + +{{- $old | default $new -}} +{{- end -}} + + + +{{/* +Returns a merged list of pull secrets ready to be used +*/}} +{{- define "nri-kube-events.compatibility.images.renderPullSecrets" -}} +{{- $list := list -}} + +{{- if .Values.image -}} + {{- if .Values.image.pullSecrets -}} + {{- $list = append $list .Values.image.pullSecrets }} + {{- end -}} +{{- end -}} + +{{- if .Values.images.pullSecrets -}} + {{- $list = append $list .Values.images.pullSecrets -}} +{{- end -}} + +{{- include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" $list "context" .) }} +{{- end -}} + + + +{{- /* Message to show to the user saying that the image value is not supported anymore */ -}} +{{- define "nri-kube-events.compatibility.message.images" -}} +{{- $oldIntegrationRegistry := include "nri-kube-events.compatibility.old.integration.registry" . -}} +{{- $oldIntegrationRepository := include "nri-kube-events.compatibility.old.integration.repository" . -}} +{{- $oldIntegrationTag := include "nri-kube-events.compatibility.old.integration.tag" . -}} +{{- $oldIntegrationPullPolicy := include "nri-kube-events.compatibility.old.integration.pullPolicy" . -}} +{{- $oldAgentRegistry := include "nri-kube-events.compatibility.old.agent.registry" . -}} +{{- $oldAgentRepository := include "nri-kube-events.compatibility.old.agent.repository" . -}} +{{- $oldAgentTag := include "nri-kube-events.compatibility.old.agent.tag" . -}} +{{- $oldAgentPullPolicy := include "nri-kube-events.compatibility.old.agent.pullPolicy" . -}} + +{{- if or $oldIntegrationRegistry $oldIntegrationRepository $oldIntegrationTag $oldIntegrationPullPolicy $oldAgentRegistry $oldAgentRepository $oldAgentTag $oldAgentPullPolicy }} +Configuring the image repository and tag under 'image' is no longer supported. 
+This is the list of values that we no longer support: + - image.kubeEvents.registry + - image.kubeEvents.repository + - image.kubeEvents.tag + - image.kubeEvents.pullPolicy + - image.infraAgent.registry + - image.infraAgent.repository + - image.infraAgent.tag + - image.infraAgent.pullPolicy + +Please set: + - images.agent.* to configure the infrastructure-agent forwarder. + - images.integration.* to configure the image in charge of scraping k8s data. + +------ +{{- end }} +{{- end -}} + + + +{{- /* Message to show to the user saying that the `runAsUser` value is deprecated */ -}} +{{- define "nri-kube-events.compatibility.message.securityContext.runAsUser" -}} +{{- if .Values.runAsUser }} +WARNING: `runAsUser` is deprecated +================================== + +We have automatically translated your `runAsUser` setting to the new format, but this shimming will be removed in the +future. Please migrate your configs to the new format in the `securityContext` key. +{{- end }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/agent-configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/agent-configmap.yaml new file mode 100644 index 0000000000..02bf8306be --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/agent-configmap.yaml @@ -0,0 +1,12 @@ +{{- if .Values.sinks.newRelicInfra -}} +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.naming.fullname" . }}-agent-config + namespace: {{ .Release.Namespace }} +data: + newrelic-infra.yml: | + {{- include "nri-kube-events.agentConfig" . 
| nindent 4 }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/clusterrole.yaml new file mode 100644 index 0000000000..cbfd5d9cef --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/clusterrole.yaml @@ -0,0 +1,42 @@ +{{- if .Values.rbac.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.naming.fullname" . }} +rules: +- apiGroups: + - "" + resources: + - events + - namespaces + - nodes + - jobs + - persistentvolumes + - persistentvolumeclaims + - pods + - services + verbs: + - get + - watch + - list +- apiGroups: + - apps + resources: + - daemonsets + - deployments + verbs: + - get + - watch + - list +- apiGroups: + - batch + resources: + - cronjobs + - jobs + verbs: + - get + - watch + - list +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/clusterrolebinding.yaml new file mode 100644 index 0000000000..fc5dfb8dae --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/clusterrolebinding.yaml @@ -0,0 +1,16 @@ +{{- if .Values.rbac.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.naming.fullname" . }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "newrelic.common.naming.fullname" . }} +subjects: +- kind: ServiceAccount + name: {{ include "newrelic.common.serviceAccount.name" . 
}} + namespace: {{ .Release.Namespace }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/configmap.yaml new file mode 100644 index 0000000000..9e4e35f6b3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/configmap.yaml @@ -0,0 +1,23 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.naming.fullname" . }}-config + namespace: {{ .Release.Namespace }} +data: + config.yaml: |- + sinks: + {{- if .Values.sinks.stdout }} + - name: stdout + {{- end }} + {{- if .Values.sinks.newRelicInfra }} + - name: newRelicInfra + config: + agentEndpoint: http://localhost:8001/v1/data + clusterName: {{ include "newrelic.common.cluster" . }} + agentHTTPTimeout: {{ .Values.agentHTTPTimeout }} + {{- end }} + captureDescribe: {{ .Values.scrapers.descriptions.enabled }} + describeRefresh: {{ .Values.scrapers.descriptions.resyncPeriod | default "24h" }} + captureEvents: {{ .Values.scrapers.events.enabled }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/deployment.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/deployment.yaml new file mode 100644 index 0000000000..7ba9eaea9c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/deployment.yaml @@ -0,0 +1,124 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + annotations: + {{- if .Values.deployment.annotations }} + {{- toYaml .Values.deployment.annotations | nindent 4 }} + {{- end }} +spec: + selector: + matchLabels: + app.kubernetes.io/name: {{ include "newrelic.common.naming.name" . 
}} + template: + metadata: + {{- if .Values.podAnnotations }} + annotations: + {{- toYaml .Values.podAnnotations | nindent 8}} + {{- end }} + labels: + {{- include "newrelic.common.labels.podLabels" . | nindent 8 }} + spec: + {{- with include "nri-kube-events.compatibility.images.renderPullSecrets" . }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + {{- with include "nri-kube-events.securityContext.pod" . }} + securityContext: + {{- . | nindent 8 }} + {{- end }} + containers: + - name: kube-events + image: {{ include "nri-kube-events.compatibility.images.integration" . }} + imagePullPolicy: {{ include "nri-kube-events.compatibility.images.pullPolicy.integration" . }} + {{- with include "nri-kube-events.securityContext.container" . }} + securityContext: + {{- . | nindent 12 }} + {{- end }} + {{- if .Values.resources }} + resources: + {{- toYaml .Values.resources | nindent 12 }} + {{- end }} + args: ["-config", "/app/config/config.yaml", "-loglevel", "debug"] + volumeMounts: + - name: config-volume + mountPath: /app/config + {{- if .Values.sinks.newRelicInfra }} + - name: forwarder + image: {{ include "nri-kube-events.compatibility.images.agent" . }} + imagePullPolicy: {{ include "nri-kube-events.compatibility.images.pullPolicy.agent" . }} + {{- with include "nri-kube-events.securityContext.container" . }} + securityContext: + {{- . | nindent 12 }} + {{- end }} + ports: + - containerPort: {{ get (fromYaml (include "nri-kube-events.agentConfig" .)) "http_server_port" }} + env: + - name: NRIA_LICENSE_KEY + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.license.secretName" . }} + key: {{ include "newrelic.common.license.secretKeyName" . 
}} + + - name: NRIA_OVERRIDE_HOSTNAME_SHORT + valueFrom: + fieldRef: + apiVersion: v1 + fieldPath: spec.nodeName + + volumeMounts: + - mountPath: /var/db/newrelic-infra/data + name: tmpfs-data + - mountPath: /var/db/newrelic-infra/user_data + name: tmpfs-user-data + - mountPath: /tmp + name: tmpfs-tmp + - name: config + mountPath: /etc/newrelic-infra.yml + subPath: newrelic-infra.yml + {{- if ((.Values.forwarder).resources) }} + resources: + {{- toYaml .Values.forwarder.resources | nindent 12 }} + {{- end }} + {{- end }} + serviceAccountName: {{ include "newrelic.common.serviceAccount.name" . }} + volumes: + {{- if .Values.sinks.newRelicInfra }} + - name: config + configMap: + name: {{ include "newrelic.common.naming.fullname" . }}-agent-config + items: + - key: newrelic-infra.yml + path: newrelic-infra.yml + {{- end }} + - name: config-volume + configMap: + name: {{ include "newrelic.common.naming.fullname" . }}-config + - name: tmpfs-data + emptyDir: {} + - name: tmpfs-user-data + emptyDir: {} + - name: tmpfs-tmp + emptyDir: {} + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} + nodeSelector: + kubernetes.io/os: linux + {{ include "newrelic.common.nodeSelector" . | nindent 8 }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.affinity" . }} + affinity: + {{- . | nindent 8 }} + {{- end }} + hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }} + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . 
| nindent 8 }} + {{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/secret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/secret.yaml new file mode 100644 index 0000000000..f558ee86c9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/secret.yaml @@ -0,0 +1,2 @@ +{{- /* Common library will take care of creating the secret or not. */}} +{{- include "newrelic.common.license.secret" . }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/serviceaccount.yaml new file mode 100644 index 0000000000..07e818da00 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/templates/serviceaccount.yaml @@ -0,0 +1,11 @@ +{{- if include "newrelic.common.serviceAccount.create" . }} +apiVersion: v1 +kind: ServiceAccount +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} + annotations: +{{ include "newrelic.common.serviceAccount.annotations" . 
| indent 4 }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/agent_configmap_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/agent_configmap_test.yaml new file mode 100644 index 0000000000..831b0c5aa1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/agent_configmap_test.yaml @@ -0,0 +1,46 @@ +suite: test configmap for newrelic infra agent +templates: + - templates/agent-configmap.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: has the correct default values + set: + cluster: test-cluster + licenseKey: us-whatever + asserts: + - equal: + path: data["newrelic-infra.yml"] + value: | + is_forward_only: true + http_server_enabled: true + http_server_port: 8001 + + - it: integrates properly with the common library + set: + cluster: test-cluster + licenseKey: us-whatever + fedramp.enabled: true + verboseLog: true + asserts: + - equal: + path: data["newrelic-infra.yml"] + value: | + is_forward_only: true + http_server_enabled: true + http_server_port: 8001 + + log: + level: trace + fedramp: true + + - it: does not template if the http sink is disabled + set: + cluster: test-cluster + licenseKey: us-whatever + sinks: + newRelicInfra: false + asserts: + - hasDocuments: + count: 0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/configmap_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/configmap_test.yaml new file mode 100644 index 0000000000..68ad53a579 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/configmap_test.yaml @@ -0,0 +1,139 @@ +suite: test configmap for sinks +templates: + - templates/configmap.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: has the correct sinks when default values used + set: + licenseKey: us-whatever + cluster: a-cluster + asserts: + - equal: + path: data["config.yaml"] + value: |- + sinks: + - 
name: newRelicInfra + config: + agentEndpoint: http://localhost:8001/v1/data + clusterName: a-cluster + agentHTTPTimeout: 30s + captureDescribe: true + describeRefresh: 24h + captureEvents: true + + - it: honors agentHTTPTimeout + set: + licenseKey: us-whatever + cluster: a-cluster + agentHTTPTimeout: 10s + asserts: + - equal: + path: data["config.yaml"] + value: |- + sinks: + - name: newRelicInfra + config: + agentEndpoint: http://localhost:8001/v1/data + clusterName: a-cluster + agentHTTPTimeout: 10s + captureDescribe: true + describeRefresh: 24h + captureEvents: true + + - it: has the correct sinks defined in local values + set: + licenseKey: us-whatever + cluster: a-cluster + sinks: + stdout: true + newRelicInfra: false + asserts: + - equal: + path: data["config.yaml"] + value: |- + sinks: + - name: stdout + captureDescribe: true + describeRefresh: 24h + captureEvents: true + + - it: allows enabling/disabling event scraping + set: + licenseKey: us-whatever + cluster: a-cluster + scrapers: + events: + enabled: false + asserts: + - equal: + path: data["config.yaml"] + value: |- + sinks: + - name: newRelicInfra + config: + agentEndpoint: http://localhost:8001/v1/data + clusterName: a-cluster + agentHTTPTimeout: 30s + captureDescribe: true + describeRefresh: 24h + captureEvents: false + + - it: allows enabling/disabling description scraping + set: + licenseKey: us-whatever + cluster: a-cluster + scrapers: + descriptions: + enabled: false + asserts: + - equal: + path: data["config.yaml"] + value: |- + sinks: + - name: newRelicInfra + config: + agentEndpoint: http://localhost:8001/v1/data + clusterName: a-cluster + agentHTTPTimeout: 30s + captureDescribe: false + describeRefresh: 24h + captureEvents: true + + - it: allows changing description resync intervals + set: + licenseKey: us-whatever + cluster: a-cluster + scrapers: + descriptions: + resyncPeriod: 4h + asserts: + - equal: + path: data["config.yaml"] + value: |- + sinks: + - name: newRelicInfra + config: + 
agentEndpoint: http://localhost:8001/v1/data + clusterName: a-cluster + agentHTTPTimeout: 30s + captureDescribe: true + describeRefresh: 4h + captureEvents: true + + - it: has another document generated with the proper config set + set: + licenseKey: us-whatever + cluster: a-cluster + sinks: + stdout: false + newRelicInfra: false + asserts: + - equal: + path: data["config.yaml"] + value: |- + sinks: + captureDescribe: true + describeRefresh: 24h + captureEvents: true diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/deployment_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/deployment_test.yaml new file mode 100644 index 0000000000..702917bcea --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/deployment_test.yaml @@ -0,0 +1,104 @@ +suite: test deployment images +templates: + - templates/deployment.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: deployment image uses pullSecrets + set: + cluster: my-cluster + licenseKey: us-whatever + images: + pullSecrets: + - name: regsecret + asserts: + - equal: + path: spec.template.spec.imagePullSecrets + value: + - name: regsecret + + - it: deployment images use the proper image tag + set: + cluster: test-cluster + licenseKey: us-whatever + images: + integration: + repository: newrelic/nri-kube-events + tag: "latest" + agent: + repository: newrelic/k8s-events-forwarder + tag: "latest" + asserts: + - matchRegex: + path: spec.template.spec.containers[0].image + pattern: .*newrelic/nri-kube-events:latest$ + - matchRegex: + path: spec.template.spec.containers[1].image + pattern: .*newrelic/k8s-events-forwarder:latest$ + + + - it: by default the agent forwarder templates + set: + cluster: test-cluster + licenseKey: us-whatever + asserts: + - contains: + path: spec.template.spec.containers + any: true + content: + name: forwarder + - contains: + path: spec.template.spec.volumes + content: + name: config + configMap: + 
name: my-release-nri-kube-events-agent-config + items: + - key: newrelic-infra.yml + path: newrelic-infra.yml + + - it: agent does not template if the sink is disabled + set: + cluster: test-cluster + licenseKey: us-whatever + sinks: + newRelicInfra: false + asserts: + - notContains: + path: spec.template.spec.containers + any: true + content: + name: forwarder + - notContains: + path: spec.template.spec.volumes + content: + name: config + configMap: + name: my-release-nri-kube-events-agent-config + items: + - key: newrelic-infra.yml + path: newrelic-infra.yml + + - it: has a linux node selector by default + set: + cluster: my-cluster + licenseKey: us-whatever + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + + - it: has a linux node selector and additional selectors + set: + cluster: my-cluster + licenseKey: us-whatever + nodeSelector: + aCoolTestLabel: aCoolTestValue + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + aCoolTestLabel: aCoolTestValue diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/images_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/images_test.yaml new file mode 100644 index 0000000000..361be582b5 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/images_test.yaml @@ -0,0 +1,168 @@ +suite: test image compatibility layer +templates: + - templates/deployment.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: by default the tag is not nil + set: + cluster: test-cluster + licenseKey: us-whatever + asserts: + - notMatchRegex: + path: spec.template.spec.containers[0].image + pattern: ".*nil.*" + - notMatchRegex: + path: spec.template.spec.containers[1].image + pattern: ".*nil.*" + + - it: templates image correctly from the new values + set: + cluster: test-cluster + licenseKey: us-whatever + images: + integration: + registry: ireg + repository: 
irep + tag: itag + agent: + registry: areg + repository: arep + tag: atag + asserts: + - equal: + path: spec.template.spec.containers[0].image + value: ireg/irep:itag + - equal: + path: spec.template.spec.containers[1].image + value: areg/arep:atag + + - it: templates image correctly from old values + set: + cluster: test-cluster + licenseKey: us-whatever + image: + kubeEvents: + registry: ireg + repository: irep + tag: itag + infraAgent: + registry: areg + repository: arep + tag: atag + asserts: + - equal: + path: spec.template.spec.containers[0].image + value: ireg/irep:itag + - equal: + path: spec.template.spec.containers[1].image + value: areg/arep:atag + + - it: old image values take precedence + set: + cluster: test-cluster + licenseKey: us-whatever + images: + integration: + registry: inew + repository: inew + tag: inew + agent: + registry: anew + repository: anew + tag: anew + image: + kubeEvents: + registry: iold + repository: iold + tag: iold + infraAgent: + registry: aold + repository: aold + tag: aold + asserts: + - equal: + path: spec.template.spec.containers[0].image + value: iold/iold:iold + - equal: + path: spec.template.spec.containers[1].image + value: aold/aold:aold + + - it: pullImagePolicy templates correctly from the new values + set: + cluster: test-cluster + licenseKey: us-whatever + images: + integration: + pullPolicy: new + agent: + pullPolicy: new + asserts: + - equal: + path: spec.template.spec.containers[0].imagePullPolicy + value: new + - equal: + path: spec.template.spec.containers[1].imagePullPolicy + value: new + + - it: pullImagePolicy templates correctly from old values + set: + cluster: test-cluster + licenseKey: us-whatever + image: + kubeEvents: + pullPolicy: old + infraAgent: + pullPolicy: old + asserts: + - equal: + path: spec.template.spec.containers[0].imagePullPolicy + value: old + - equal: + path: spec.template.spec.containers[1].imagePullPolicy + value: old + + - it: old imagePullPolicy values take precedence + set: + 
cluster: test-cluster + licenseKey: us-whatever + images: + integration: + pullPolicy: new + agent: + pullPolicy: new + image: + kubeEvents: + pullPolicy: old + infraAgent: + pullPolicy: old + asserts: + - equal: + path: spec.template.spec.containers[0].imagePullPolicy + value: old + - equal: + path: spec.template.spec.containers[1].imagePullPolicy + value: old + + - it: imagePullSecrets merge properly + set: + cluster: test-cluster + licenseKey: us-whatever + global: + images: + pullSecrets: + - global: global + images: + pullSecrets: + - images: images + image: + pullSecrets: + - image: image + asserts: + - equal: + path: spec.template.spec.imagePullSecrets + value: + - global: global + - image: image + - images: images diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/security_context_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/security_context_test.yaml new file mode 100644 index 0000000000..b2b710331f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/tests/security_context_test.yaml @@ -0,0 +1,77 @@ +suite: test deployment security context +templates: + - templates/deployment.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: pod securityContext set to defaults when no values provided + set: + cluster: my-cluster + licenseKey: us-whatever + asserts: + - equal: + path: spec.template.spec.securityContext + value: + runAsUser: 1000 + runAsNonRoot: true + - it: pod securityContext set common-library values + set: + cluster: test-cluster + licenseKey: us-whatever + podSecurityContext: + foobar: true + asserts: + - equal: + path: spec.template.spec.securityContext.foobar + value: true + - it: pod securityContext compatibility layer overrides values from common-library + set: + cluster: test-cluster + licenseKey: us-whatever + runAsUser: 1001 + podSecurityContext: + runAsUser: 1000 + runAsNonRoot: false + asserts: + - equal: + path: 
spec.template.spec.securityContext + value: + runAsUser: 1001 + runAsNonRoot: false + - it: pod securityContext compatibility layer overrides defaults + set: + cluster: test-cluster + licenseKey: us-whatever + runAsUser: 1001 + asserts: + - equal: + path: spec.template.spec.securityContext.runAsUser + value: 1001 + - it: set to defaults when no containerSecurityContext set + set: + cluster: my-cluster + licenseKey: us-whatever + asserts: + - equal: + path: spec.template.spec.containers[0].securityContext + value: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + - equal: + path: spec.template.spec.containers[1].securityContext + value: + allowPrivilegeEscalation: false + privileged: false + readOnlyRootFilesystem: true + - it: set containerSecurityContext custom values + set: + cluster: test-cluster + licenseKey: us-whatever + containerSecurityContext: + foobar: true + asserts: + - equal: + path: spec.template.spec.containers[0].securityContext.foobar + value: true diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/values.yaml new file mode 100644 index 0000000000..fc473991df --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-kube-events/values.yaml @@ -0,0 +1,135 @@ +# -- Override the name of the chart +nameOverride: "" +# -- Override the full name of the release +fullnameOverride: "" + +# -- Name of the Kubernetes cluster monitored. Mandatory. Can be configured also with `global.cluster` +cluster: "" +# -- This sets the license key to use. Can be configured also with `global.licenseKey` +licenseKey: "" +# -- In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there. 
Can be configured also with `global.customSecretName` +customSecretName: "" +# -- In case you don't want to have the license key in your values, this allows you to point to the secret key where the license key is located. Can be configured also with `global.customSecretLicenseKey` +customSecretLicenseKey: "" + +# -- Images used by the chart for the integration and agents +# @default -- See `values.yaml` +images: + # -- Image for the New Relic Kubernetes integration + # @default -- See `values.yaml` + integration: + registry: + repository: newrelic/nri-kube-events + tag: + pullPolicy: IfNotPresent + # -- Image for the New Relic Infrastructure Agent sidecar + # @default -- See `values.yaml` + agent: + registry: + repository: newrelic/k8s-events-forwarder + tag: 1.57.2 + pullPolicy: IfNotPresent + # -- The secrets that are needed to pull images from a custom registry. + pullSecrets: [] + # - name: regsecret + +# -- Resources for the integration container +resources: {} + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + +# -- Resources for the forwarder sidecar container +forwarder: + resources: {} + # limits: + # cpu: 100m + # memory: 128Mi + # requests: + # cpu: 100m + # memory: 128Mi + +rbac: + # -- Specifies whether RBAC resources should be created + create: true + +# -- Settings controlling ServiceAccount creation +# @default -- See `values.yaml` +serviceAccount: + # serviceAccount.create -- (bool) Specifies whether a ServiceAccount should be created + # @default -- `true` + create: + # If not set and create is true, a name is generated using the fullname template + name: "" + # Specify any annotations to add to the ServiceAccount + annotations: + +# -- Annotations to add to the pod. +podAnnotations: {} +deployment: + # deployment.annotations -- Annotations to add to the Deployment.
+ annotations: {} +# -- Additional labels for chart pods +podLabels: {} +# -- Additional labels for chart objects +labels: {} + +# -- Amount of time to wait until timeout to send metrics to the metric forwarder +agentHTTPTimeout: "30s" + +# -- Configure where will the metrics be written. Mostly for debugging purposes. +# @default -- See `values.yaml` +sinks: + # -- Enable the stdout sink to also see all events in the logs. + stdout: false + # -- The newRelicInfra sink sends all events to New Relic. + newRelicInfra: true + +# -- Configure the various kinds of scrapers that should be run. +# @default -- See `values.yaml` +scrapers: + descriptions: + enabled: true + resyncPeriod: "24h" + events: + enabled: true + +# -- Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` +priorityClassName: "" +# -- (bool) Sets pod's hostNetwork. Can be configured also with `global.hostNetwork` +# @default -- `false` +hostNetwork: +# -- Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` +dnsConfig: {} +# -- Sets security context (at pod level). Can be configured also with `global.podSecurityContext` +podSecurityContext: {} +# -- Sets security context (at container level). Can be configured also with `global.containerSecurityContext` +containerSecurityContext: {} + +# -- Sets pod/node affinities. Can be configured also with `global.affinity` +affinity: {} +# -- Sets pod's node selector. Can be configured also with `global.nodeSelector` +nodeSelector: {} +# -- Sets pod's tolerations to node taints. Can be configured also with `global.tolerations` +tolerations: [] + +# -- Adds extra attributes to the cluster and all the metrics emitted to the backend. Can be configured also with `global.customAttributes` +customAttributes: {} + +# -- Configures the integration to send all HTTP/HTTPS request through the proxy in that URL. The URL should have a standard format like `https://user:password@hostname:port`. 
Can be configured also with `global.proxy` +proxy: "" + +# -- (bool) Send the metrics to the staging backend. Requires a valid staging license key. Can be configured also with `global.nrStaging` +# @default -- `false` +nrStaging: +fedramp: + # -- (bool) Enables FedRAMP. Can be configured also with `global.fedramp.enabled` + # @default -- `false` + enabled: + +# -- (bool) Sets the debug logs to this integration or all integrations if it is set globally. Can be configured also with `global.verboseLog` +# @default -- `false` +verboseLog: diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/.helmignore new file mode 100644 index 0000000000..f62b5519e5 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/.helmignore @@ -0,0 +1 @@ +templates/admission-webhooks/job-patch/README.md diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/Chart.lock b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/Chart.lock new file mode 100644 index 0000000000..d442841bfa --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +digest: sha256:2e1da613fd8a52706bde45af077779c5d69e9e1641bdf5c982eaf6d1ac67a443 +generated: "2024-08-30T00:58:04.140696675Z" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/Chart.yaml new file mode 100644 index 0000000000..5cc9d4a85b --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/Chart.yaml @@ -0,0 +1,25 @@ +apiVersion: v2 +appVersion: 1.29.2 +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +description: A Helm chart to deploy the New Relic 
metadata injection webhook. +home: https://hub.docker.com/r/newrelic/k8s-metadata-injection +icon: https://newrelic.com/assets/newrelic/source/NewRelic-logo-square.svg +keywords: +- infrastructure +- newrelic +- monitoring +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +name: nri-metadata-injection +sources: +- https://github.com/newrelic/k8s-metadata-injection +- https://github.com/newrelic/k8s-metadata-injection/tree/master/charts/nri-metadata-injection +version: 4.21.2 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/README.md new file mode 100644 index 0000000000..dd922ef13a --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/README.md @@ -0,0 +1,68 @@ +# nri-metadata-injection + +A Helm chart to deploy the New Relic metadata injection webhook. + +**Homepage:** + +# Helm installation + +You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the +[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository: + +```shell +helm repo add nri-metadata-injection https://newrelic.github.io/k8s-metadata-injection +helm upgrade --install nri-metadata-injection nri-metadata-injection/nri-metadata-injection -f your-custom-values.yaml +``` + +## Source Code + +* +* + +## Values managed globally + +This chart implements [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library) which +means that it honors a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others.
The full list can be found at +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| affinity | object | `{}` | Sets pod/node affinities. Can be configured also with `global.affinity` | +| certManager.enabled | bool | `false` | Use cert manager for webhook certs | +| certManager.rootCertificateDuration | string | `"43800h"` | Sets the root certificate duration. Defaults to 43800h (5 years). | +| certManager.webhookCertificateDuration | string | `"8760h"` | Sets certificate duration. Defaults to 8760h (1 year). | +| cluster | string | `""` | Name of the Kubernetes cluster monitored. Can be configured also with `global.cluster` | +| containerSecurityContext | object | `{}` | Sets security context (at container level). Can be configured also with `global.containerSecurityContext` | +| customTLSCertificate | bool | `false` | Use custom tls certificates for the webhook, or let the chart handle it automatically. Ref: https://docs.newrelic.com/docs/integrations/kubernetes-integration/link-your-applications/link-your-applications-kubernetes#configure-injection | +| dnsConfig | object | `{}` | Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` | +| fullnameOverride | string | `""` | Override the full name of the release | +| hostNetwork | bool | false | Sets pod's hostNetwork. Can be configured also with `global.hostNetwork` | +| image | object | See `values.yaml` | Image for the New Relic Metadata Injector | +| image.pullSecrets | list | `[]` | The secrets that are needed to pull images from a custom registry. | +| injectOnlyLabeledNamespaces | bool | `false` | Enable the metadata decoration only for pods living in namespaces labeled with 'newrelic-metadata-injection=enabled'. 
| +| jobImage | object | See `values.yaml` | Image for creating the needed certificates of this webhook to work | +| jobImage.pullSecrets | list | `[]` | The secrets that are needed to pull images from a custom registry. | +| jobImage.volumeMounts | list | `[]` | Volume mounts to add to the job; you might want to mount tmp if Pod Security Policies enforce a read-only root. | +| jobImage.volumes | list | `[]` | Volumes to add to the job container | +| labels | object | `{}` | Additional labels for chart objects. Can be configured also with `global.labels` | +| nameOverride | string | `""` | Override the name of the chart | +| nodeSelector | object | `{}` | Sets pod's node selector. Can be configured also with `global.nodeSelector` | +| podAnnotations | object | `{}` | Annotations to be added to all pods created by the integration. | +| podLabels | object | `{}` | Additional labels for chart pods. Can be configured also with `global.podLabels` | +| podSecurityContext | object | `{}` | Sets security context (at pod level). Can be configured also with `global.podSecurityContext` | +| priorityClassName | string | `""` | Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` | +| rbac.pspEnabled | bool | `false` | Whether the chart should create Pod Security Policy objects. | +| replicas | int | `1` | | +| resources | object | 100m/30M -/80M | Resources for the webhook container | +| timeoutSeconds | int | `28` | Webhook timeout. Ref: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#timeouts | +| tolerations | list | `[]` | Sets pod's tolerations to node taints.
Can be configured also with `global.tolerations` | + +## Maintainers + +* [juanjjaramillo](https://github.com/juanjjaramillo) +* [csongnr](https://github.com/csongnr) +* [dbudziwojskiNR](https://github.com/dbudziwojskiNR) diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/README.md.gotmpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/README.md.gotmpl new file mode 100644 index 0000000000..752ba8aae8 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/README.md.gotmpl @@ -0,0 +1,41 @@ +{{ template "chart.header" . }} +{{ template "chart.deprecationWarning" . }} + +{{ template "chart.description" . }} + +{{ template "chart.homepageLine" . }} + +# Helm installation + +You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the +[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository: + +```shell +helm repo add nri-metadata-injection https://newrelic.github.io/k8s-metadata-injection +helm upgrade --install nri-metadata-injection nri-metadata-injection/nri-metadata-injection -f your-custom-values.yaml +``` + +{{ template "chart.sourcesSection" . }} + +## Values managed globally + +This chart implements [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library) which +means that it honors a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found at +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). + +{{ template "chart.valuesSection" .
}} + +{{ if .Maintainers }} +## Maintainers +{{ range .Maintainers }} +{{- if .Name }} +{{- if .Url }} +* [{{ .Name }}]({{ .Url }}) +{{- else }} +* {{ .Name }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/.helmignore new file mode 100644 index 0000000000..0e8a0eb36f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/Chart.yaml new file mode 100644 index 0000000000..f2ee5497ee --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/Chart.yaml @@ -0,0 +1,17 @@ +apiVersion: v2 +description: Provides helpers to provide consistency on all the charts +keywords: +- newrelic +- chart-library +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +- name: kang-makes + url: https://github.com/kang-makes +name: common-library +type: library +version: 1.3.0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/DEVELOPERS.md 
b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/DEVELOPERS.md new file mode 100644 index 0000000000..7208c673ec --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/DEVELOPERS.md @@ -0,0 +1,747 @@ +# Functions/templates documented for chart writers +Here is some rough documentation organized by the file that contains each function, the function +name, and how to use it. We are not covering functions that start with `_` (e.g. +`newrelic.common.license._licenseKey`) because they are used internally by this library for +other helpers. Helm does not have the concept of "public" or "private" functions/templates so +this is a convention of ours. + +## _naming.tpl +These functions are used to name objects. + +### `newrelic.common.naming.name` +This is the same as the idiomatic `CHART-NAME.name` that is created when you use `helm create`. + +It honors `.Values.nameOverride`. + +Usage: +```mustache +{{ include "newrelic.common.naming.name" . }} +``` + +### `newrelic.common.naming.fullname` +This is the same as the idiomatic `CHART-NAME.fullname` that is created when you use `helm create`. + +It honors `.Values.fullnameOverride`. + +Usage: +```mustache +{{ include "newrelic.common.naming.fullname" . }} +``` + +### `newrelic.common.naming.chart` +This is the same as the idiomatic `CHART-NAME.chart` that is created when you use `helm create`. + +It is mostly useless for chart writers. It is used internally for templating the labels but there +is no reason to keep it "private". + +Usage: +```mustache +{{ include "newrelic.common.naming.chart" . }} +``` + +### `newrelic.common.naming.truncateToDNS` +This is a useful template that trims a string to 63 characters and ensures it does not end with a dash (`-`). +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
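The trimming rule can be sketched outside Helm. The following is an illustrative Python stand-in (assuming the helper behaves like Helm's idiomatic `trunc 63 | trimSuffix "-"` pipeline), not the template itself:

```python
def truncate_to_dns(name: str, limit: int = 63) -> str:
    """Trim to the DNS label limit and drop a single trailing dash,
    mirroring Helm's `trunc 63 | trimSuffix "-"` idiom."""
    return name[:limit].removesuffix("-")


long_name = ("a-really-really-really-really-REALLY-long-string-that-should-be"
             "-truncated-because-it-is-long-enough-to-break-something")
print(truncate_to_dns(long_name))
# a-really-really-really-really-REALLY-long-string-that-should-be
```

Note that the truncation happens first, so the dash is only stripped when it happens to land exactly at the cut point.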
+ +Usage: +```mustache +{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-long-enough-to-break-something" }} +{{- $truncatedName := include "newrelic.common.naming.truncateToDNS" $nameToTruncate }} +{{- $truncatedName }} +{{- /* This should print: a-really-really-really-really-REALLY-long-string-that-should-be */ -}} +``` + +### `newrelic.common.naming.truncateToDNSWithSuffix` +This template function is the same as the above but instead of receiving a string you should give a `dict` +with a `name` and a `suffix`. This function will join them with a dash (`-`) and trim the `name` so the +result of `name-suffix` is no more than 63 chars. + +Usage: +```mustache +{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-long-enough-to-break-something" }} +{{- $suffix := "A-NOT-SO-LONG-SUFFIX" }} +{{- $truncatedName := include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" $nameToTruncate "suffix" $suffix) }} +{{- $truncatedName }} +{{- /* This should print: a-really-really-really-really-REALLY-long-A-NOT-SO-LONG-SUFFIX */ -}} +``` + + + +## _labels.tpl +### `newrelic.common.labels`, `newrelic.common.labels.selectorLabels` and `newrelic.common.labels.podLabels` +These are functions that are used to label objects. They are configured by this `values.yaml` +```yaml +global: + podLabels: {} # included in all the pods of all the charts that implement this library + labels: {} # included in all the objects of all the charts that implement this library +podLabels: {} # included in all the pods of this chart +labels: {} # included in all the objects of this chart +``` + +Label maps are merged from the global to the local values. + +A chart writer should use them like this: +```mustache +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" .
| nindent 6 }} + template: + metadata: + labels: + {{- include "newrelic.common.labels.podLabels" . | nindent 8 }} +``` + +`newrelic.common.labels.podLabels` includes `newrelic.common.labels.selectorLabels` automatically. + + + +## _priority-class-name.tpl +### `newrelic.common.priorityClassName` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + priorityClassName: "" +priorityClassName: "" +``` + +Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this +library to work properly. If in your values a non-falsy `priorityClassName` is found, the global +one is going to be always ignored. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} +``` + + + +## _hostnetwork.tpl +### `newrelic.common.hostNetwork` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + hostNetwork: # Note that this is empty (nil) +hostNetwork: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your +values a `hostNetwork` is defined, the global one is going to be always ignored. + +This function returns "true" or "" (empty string) so it can be used for evaluating conditionals. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.hostNetwork" . }} + hostNetwork: {{ . }} + {{- end }} +``` + +### `newrelic.common.hostNetwork.value` +This function is an abstraction of the function above but it returns directly "true" or "false". + +Be careful when using this with an `if`, as Helm evaluates the string "false" as `true`. + +Usage (example in a pod spec): +```mustache +spec: + hostNetwork: {{ include "newrelic.common.hostNetwork.value" .
}} +``` + + + +## _dnsconfig.tpl +### `newrelic.common.dnsConfig` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + dnsConfig: {} +dnsConfig: {} +``` + +Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this +library to work properly. If in your values a non-falsy `dnsConfig` is found, the global +one is going to be always ignored. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 4 }} + {{- end }} +``` + + + +## _images.tpl +These functions help us to deal with how images are templated. This allows setting `registries` +where to fetch images globally while being flexible enough to fit in different maps of images +and deployments with one or more images. This is the example of a complex `values.yaml` that +we are going to use during the documentation of these functions: + +```yaml +global: + images: + registry: nexus-3-instance.internal.clients-domain.tld +jobImage: + registry: # defaults to "example.tld" when empty in these examples + repository: ingress-nginx/kube-webhook-certgen + tag: v1.1.1 + pullPolicy: IfNotPresent + pullSecrets: [] +images: + integration: + registry: + repository: newrelic/nri-kube-events + tag: 1.8.0 + pullPolicy: IfNotPresent + agent: + registry: + repository: newrelic/k8s-events-forwarder + tag: 1.22.0 + pullPolicy: IfNotPresent + pullSecrets: [] +``` + +### `newrelic.common.images.image` +This will return a string with the image ready to be downloaded that includes the registry, the image and the tag. +`defaultRegistry` is used to keep `registry` field empty in `values.yaml` so you can override the image using +`global.images.registry`, your local `jobImage.registry` and be able to fallback to a registry that is not `docker.io` +(Or the default repository that the client could have set in the CRI). 
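As an illustration of that precedence (not the template itself), here is a Python sketch; it assumes the local `registry` wins over `global.images.registry`, which in turn wins over `defaultRegistry`, with the CRI default applying when all are empty:

```python
def image_url(image_root: dict, global_registry: str = "",
              default_registry: str = "") -> str:
    # Assumed precedence: local registry, then the global one, then
    # defaultRegistry; if all are empty, the CRI default applies.
    registry = image_root.get("registry") or global_registry or default_registry
    ref = f"{image_root['repository']}:{image_root['tag']}"
    return f"{registry}/{ref}" if registry else ref


# With the global registry from the example values.yaml above:
print(image_url(
    {"registry": "", "repository": "newrelic/nri-kube-events", "tag": "1.8.0"},
    global_registry="nexus-3-instance.internal.clients-domain.tld",
))
# nexus-3-instance.internal.clients-domain.tld/newrelic/nri-kube-events:1.8.0
```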
+ +Usage: +```mustache +{{- /* For the integration */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.agent "context" .) }} +{{- /* For jobImage */}} +{{ include "newrelic.common.images.image" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }} +``` + +### `newrelic.common.images.registry` +It returns the registry from the global or local values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. + +Usage: +```mustache +{{- /* For the integration */}} +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.agent "context" .) }} +{{- /* For jobImage */}} +{{ include "newrelic.common.images.registry" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }} +``` + +### `newrelic.common.images.repository` +It returns the repository from the values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. + +Usage: +```mustache +{{- /* For jobImage */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.jobImage "context" .) }} +{{- /* For the integration */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.agent "context" .) }} +``` + +### `newrelic.common.images.tag` +It returns the image's tag from the values.
You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. + +Usage: +```mustache +{{- /* For jobImage */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.jobImage "context" .) }} +{{- /* For the integration */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.agent "context" .) }} +``` + +### `newrelic.common.images.renderPullSecrets` +It returns a merged list that contains the pull secrets from the global configuration and the local one. + +Usage: +```mustache +{{- /* For jobImage */}} +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.jobImage.pullSecrets "context" .) }} +{{- /* For the integration */}} +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }} +``` + + + +## _serviceaccount.tpl +These functions are used to evaluate whether the service account should be created, which name it should have, and which annotations to add to it. + +The functions that the common library has implemented for service accounts are: +* `newrelic.common.serviceAccount.create` +* `newrelic.common.serviceAccount.name` +* `newrelic.common.serviceAccount.annotations` + +Usage: +```mustache +{{- if include "newrelic.common.serviceAccount.create" . -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- with (include "newrelic.common.serviceAccount.annotations" .) }} + annotations: + {{- . | nindent 4 }} + {{- end }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.serviceAccount.name" .
}} + namespace: {{ .Release.Namespace }} +{{- end }} +``` + + + +## _affinity.tpl, _nodeselector.tpl and _tolerations.tpl +These three files are almost the same and they follow the idiomatic way of `helm create`. + +Each function also looks for a global value, like the other helpers. +```yaml +global: + affinity: {} + nodeSelector: {} + tolerations: [] +affinity: {} +nodeSelector: {} +tolerations: [] +``` + +The values here are replaced instead of merged. If a value at root level is found, the global one is ignored. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.nodeSelector" . }} + nodeSelector: + {{- . | nindent 4 }} + {{- end }} + {{- with include "newrelic.common.affinity" . }} + affinity: + {{- . | nindent 4 }} + {{- end }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . | nindent 4 }} + {{- end }} +``` + + + +## _agent-config.tpl +### `newrelic.common.agentConfig.defaults` +This returns a YAML that the agent can use directly as a config that includes other options from the values file like verbose mode, +custom attributes, FedRAMP and such. + +Usage: +```mustache +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "agent-config") }} + namespace: {{ .Release.Namespace }} +data: + newrelic-infra.yml: |- + # This is the configuration file for the infrastructure agent. See: + # https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ + {{- include "newrelic.common.agentConfig.defaults" . | nindent 4 }} +``` + + + +## _cluster.tpl +### `newrelic.common.cluster` +Returns the cluster name. + +Usage: +```mustache +{{ include "newrelic.common.cluster" .
}} + + + +## _custom-attributes.tpl +### `newrelic.common.customAttributes` +Returns custom attributes in YAML format. + +Usage: +```mustache +apiVersion: v1 +kind: ConfigMap +metadata: + name: example +data: + custom-attributes.yaml: | + {{- include "newrelic.common.customAttributes" . | nindent 4 }} + custom-attributes.json: | + {{- include "newrelic.common.customAttributes" . | fromYaml | toJson | nindent 4 }} +``` + + + +## _fedramp.tpl +### `newrelic.common.fedramp.enabled` +Returns true if FedRAMP is enabled or an empty string if not. It can be safely used in conditionals as an empty string is a Helm falsiness. + +Usage: +```mustache +{{ include "newrelic.common.fedramp.enabled" . }} +``` + +### `newrelic.common.fedramp.enabled.value` +Returns true if FedRAMP is enabled or false if not. This is to have the value of FedRAMP ready to be templated. + +Usage: +```mustache +{{ include "newrelic.common.fedramp.enabled.value" . }} +``` + + + +## _license.tpl +### `newrelic.common.license.secretName` and `newrelic.common.license.secretKeyName` +Returns the secret name and the key inside the secret from which to read the license key. + +The common library will take care of using a user-provided custom secret or creating a secret that contains the license key. + +To create the secret use `newrelic.common.license.secret`. + +Usage: +```mustache +{{- if and (.Values.controlPlane.enabled) (not (include "newrelic.fargate" .)) }} +apiVersion: v1 +kind: Pod +metadata: + name: example +spec: + containers: + - name: agent + env: + - name: "NRIA_LICENSE_KEY" + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.license.secretName" . }} + key: {{ include "newrelic.common.license.secretKeyName" . }} +``` + + + +## _license_secret.tpl +### `newrelic.common.license.secret` +This function templates the secret that is used by agents and integrations with the license key provided by the user.
It will +template nothing (empty string) if the user provides a custom pair of secret name and key. + +This template also fails in case the user has not provided any license key or custom secret, so no safety checks have to be done +by chart writers. + +You just need a template with these two lines: +```mustache +{{- /* Common library will take care of creating the secret or not. */ -}} +{{- include "newrelic.common.license.secret" . -}} +``` + + + +## _insights.tpl +### `newrelic.common.insightsKey.secretName` and `newrelic.common.insightsKey.secretKeyName` +Returns the secret name and the key inside the secret from which to read the insights key. + +The common library will take care of using a user-provided custom secret or creating a secret that contains the insights key. + +To create the secret use `newrelic.common.insightsKey.secret`. + +Usage: +```mustache +apiVersion: v1 +kind: Pod +metadata: + name: statsd +spec: + containers: + - name: statsd + env: + - name: "INSIGHTS_KEY" + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.insightsKey.secretName" . }} + key: {{ include "newrelic.common.insightsKey.secretKeyName" . }} +``` + + + +## _insights_secret.tpl +### `newrelic.common.insightsKey.secret` +This function templates the secret that is used by agents and integrations with the insights key provided by the user. It will +template nothing (empty string) if the user provides a custom pair of secret name and key. + +This template also fails in case the user has not provided any insights key or custom secret, so no safety checks have to be done +by chart writers. + +You just need a template with these two lines: +```mustache +{{- /* Common library will take care of creating the secret or not. */ -}} +{{- include "newrelic.common.insightsKey.secret" . -}} +``` + + + +## _userkey.tpl +### `newrelic.common.userKey.secretName` and `newrelic.common.userKey.secretKeyName` +Returns the secret name and the key inside the secret from which to read a user key.
+ +The common library will take care of using a user-provided custom secret or creating a secret that contains the user key. + +To create the secret, use `newrelic.common.userKey.secret`. + +Usage: +```mustache +apiVersion: v1 +kind: Pod +metadata: + name: statsd +spec: + containers: + - name: statsd + env: + - name: "API_KEY" + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.userKey.secretName" . }} + key: {{ include "newrelic.common.userKey.secretKeyName" . }} +``` + + + +## _userkey_secret.tpl +### `newrelic.common.userKey.secret` +This function templates the secret that is used by agents and integrations with a user key provided by the user. It will +template nothing (empty string) if the user provides a custom pair of secret name and key. + +This template also fails if the user has not provided any API key or custom secret, so no safety checks have to be done +by chart writers. + +You just need a template with these two lines: +```mustache +{{- /* Common library will take care of creating the secret or not. */ -}} +{{- include "newrelic.common.userKey.secret" . -}} +``` + + + +## _region.tpl +### `newrelic.common.region.validate` +Given a string, returns the normalized name of the region if it is valid. + +This function does not need the context of the chart, only the value to be validated. The region returned +honors the region [definition of the newrelic-client-go implementation](https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21), +so (as of 2024/09/14) it returns the region as "US", "EU", "Staging", or "Local". + +In case the provided region does not match any of these four, the helper calls `fail` and aborts the templating.
+ +Usage: +```mustache +{{ include "newrelic.common.region.validate" "us" }} +``` + +### `newrelic.common.region` +It reads global and local variables for `region`: +```yaml +global: + region: # Note that this can be empty (nil) or "" (empty string) +region: # Note that this can be empty (nil) or "" (empty string) +``` + +Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If in your +values a `region` is defined, the global one is always ignored. + +This function is protective: it forces users to provide the license key in their +`values.yaml` or to specify a global or local `region` value. To understand how the `region` value +works, read the documentation of `newrelic.common.region.validate`. + +The function will select the region from US, EU, or Staging based on the license key and the +`nrStaging` toggle. Whichever region is computed from the license/toggle can be overridden by +the `region` value. + +Usage: +```mustache +{{ include "newrelic.common.region" . }} +``` + + + +## _low-data-mode.tpl +### `newrelic.common.lowDataMode` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + lowDataMode: # Note that this is empty (nil) +lowDataMode: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If in your +values a `lowDataMode` is defined, the global one is always ignored. + +This function returns "true" or "" (empty string) so it can be used for evaluating conditionals. + +Usage: +```mustache +{{ include "newrelic.common.lowDataMode" . }} +``` + + + +## _privileged.tpl +### `newrelic.common.privileged` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + privileged: # Note that this is empty (nil) +privileged: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly.
If in your +values a `privileged` is defined, the global one is always ignored. + +Chart writers could override this and directly put a `true` in the `values.yaml` to override the +default of the common library. + +This function returns "true" or "" (empty string) so it can be used for evaluating conditionals. + +Usage: +```mustache +{{ include "newrelic.common.privileged" . }} +``` + +### `newrelic.common.privileged.value` +Returns true if privileged mode is enabled or false if not. This is to have the value of privileged ready to be templated. + +Usage: +```mustache +{{ include "newrelic.common.privileged.value" . }} +``` + + + +## _proxy.tpl +### `newrelic.common.proxy` +Returns the proxy URL configured by the user. + +Usage: +```mustache +{{ include "newrelic.common.proxy" . }} +``` + + + +## _security-context.tpl +Use these functions to share the security context among all charts. Useful in clusters that enforce not running as the root user (like OpenShift) or for users that have admission webhooks. + +The functions are: +* `newrelic.common.securityContext.container` +* `newrelic.common.securityContext.pod` + +Usage: +```mustache +apiVersion: v1 +kind: Pod +metadata: + name: example +spec: + {{- with include "newrelic.common.securityContext.pod" . }} + securityContext: + {{- . | nindent 4 }} + {{- end }} + + containers: + - name: example + {{- with include "newrelic.common.securityContext.container" . }} + securityContext: + {{- . | nindent 8 }} + {{- end }} +``` + + + +## _staging.tpl +### `newrelic.common.nrStaging` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + nrStaging: # Note that this is empty (nil) +nrStaging: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If in your +values a `nrStaging` is defined, the global one is always ignored.
+ +This function returns "true" or "" (empty string) so it can be used for evaluating conditionals. + +Usage: +```mustache +{{ include "newrelic.common.nrStaging" . }} +``` + +### `newrelic.common.nrStaging.value` +Returns true if staging is enabled or false if not. This is to have the staging value ready to be templated. + +Usage: +```mustache +{{ include "newrelic.common.nrStaging.value" . }} +``` + + + +## _verbose-log.tpl +### `newrelic.common.verboseLog` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + verboseLog: # Note that this is empty (nil) +verboseLog: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE here for this library to work properly. If in your +values a `verboseLog` is defined, the global one is always ignored. + +Usage: +```mustache +{{ include "newrelic.common.verboseLog" . }} +``` + +### `newrelic.common.verboseLog.valueAsBoolean` +Returns true if verbose is enabled or false if not. This is to have the verbose value ready to be templated as a boolean. + +Usage: +```mustache +{{ include "newrelic.common.verboseLog.valueAsBoolean" . }} +``` + +### `newrelic.common.verboseLog.valueAsInt` +Returns 1 if verbose is enabled or 0 if not. This is to have the verbose value ready to be templated as an integer. + +Usage: +```mustache +{{ include "newrelic.common.verboseLog.valueAsInt" . }} +``` diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/README.md new file mode 100644 index 0000000000..10f08ca677 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/README.md @@ -0,0 +1,106 @@ +# Helm Common library + +The common library is a way to unify the UX through all the Helm charts that implement it.
+ +The tooling suite that New Relic offers is huge and growing, and this library allows setting things globally +as well as locally for a single chart. + +## Documentation for chart writers + +If you are writing a chart that is going to use this library, you can check the [developers guide](/library/common-library/DEVELOPERS.md) to see all +the functions/templates that we have implemented, what they do, and how to use them. + +## Values managed globally + +We want a seamless experience across all the charts, so we created this library to try to standardize their behaviour. +Sadly, because of the complexity of all these integrations, not all the charts behave exactly as expected. + +An example is `newrelic-infrastructure`, which ignores `hostNetwork` in the control plane scraper because most users have the +control plane listening on `localhost` on the node. + +For each chart that has a special behavior (or further information about its behavior) there is a "chart particularities" section +in its README.md that explains the expected behavior. + +At the time of writing, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this +library and honor global options as described in this document.
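As an illustration, a top-level values file for the bundle might set options globally and override one of them in a single subchart. This is only a sketch: the cluster name, secret name, and the choice of subchart are hypothetical placeholders, while the keys themselves come from the table of global options in this document.

```yaml
# Hypothetical values.yaml passed to nri-bundle.
# Everything under `global:` is read by every subchart that implements
# the common library; a subchart can still override a key locally.
global:
  cluster: my-cluster                    # placeholder cluster name
  customSecretName: my-newrelic-secret   # placeholder user-created secret
  customSecretLicenseKey: licenseKey     # key inside that secret holding the license
  lowDataMode: true
  nrStaging: false

newrelic-infrastructure:                 # local override for one subchart
  verboseLog: true
```

Here `verboseLog` stays off for every chart except `newrelic-infrastructure`, where the local value takes precedence over the (unset) global one.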
+ +Here is a list of global options: + +| Global keys | Local keys | Default | Merged[1](#values-managed-globally-1) | Description | +|-------------|------------|---------|--------------------------------------------------|-------------| +| global.cluster | cluster | `""` | | Name of the Kubernetes cluster monitored | +| global.licenseKey | licenseKey | `""` | | Sets the license key to use | +| global.customSecretName | customSecretName | `""` | | In case you don't want to have the license key in your values, this lets you point to a user-created secret to get the key from there | +| global.customSecretLicenseKey | customSecretLicenseKey | `""` | | In case you don't want to have the license key in your values, this lets you point to the key inside the secret where the license key is located | +| global.podLabels | podLabels | `{}` | yes | Additional labels for chart pods | +| global.labels | labels | `{}` | yes | Additional labels for chart objects | +| global.priorityClassName | priorityClassName | `""` | | Sets pod's priorityClassName | +| global.hostNetwork | hostNetwork | `false` | | Sets pod's hostNetwork | +| global.dnsConfig | dnsConfig | `{}` | | Sets pod's dnsConfig | +| global.images.registry | See [Further information](#values-managed-globally-2) | `""` | | Changes the registry from which to get the images.
Useful when there is an internal image cache/proxy | +| global.images.pullSecrets | See [Further information](#values-managed-globally-2) | `[]` | yes | Sets secrets to be able to fetch images | +| global.podSecurityContext | podSecurityContext | `{}` | | Sets security context (at pod level) | +| global.containerSecurityContext | containerSecurityContext | `{}` | | Sets security context (at container level) | +| global.affinity | affinity | `{}` | | Sets pod/node affinities | +| global.nodeSelector | nodeSelector | `{}` | | Sets pod's node selector | +| global.tolerations | tolerations | `[]` | | Sets pod's tolerations to node taints | +| global.serviceAccount.create | serviceAccount.create | `true` | | Configures whether the service account should be created or not | +| global.serviceAccount.name | serviceAccount.name | name of the release | | Changes the name of the service account. This is honored if you disable the creation of the service account on this chart, so you can use your own. | +| global.serviceAccount.annotations | serviceAccount.annotations | `{}` | yes | Adds these annotations to the service account we create | +| global.customAttributes | customAttributes | `{}` | | Adds extra attributes to the cluster and all the metrics emitted to the backend | +| global.fedramp | fedramp | `false` | | Enables FedRAMP | +| global.lowDataMode | lowDataMode | `false` | | Reduces the number of metrics sent in order to reduce costs | +| global.privileged | privileged | Depends on the chart | | Has a different behavior in each integration. See [Further information](#values-managed-globally-3) | +| global.proxy | proxy | `""` | | Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port` | +| global.nrStaging | nrStaging | `false` | | Sends the metrics to the staging backend.
Requires a valid staging license key | +| global.verboseLog | verboseLog | `false` | | Enables debug/trace logs for this integration, or for all integrations if set globally | + +### Further information + +#### 1. Merged + +Merged means that the values from global are not replaced by the local ones. Consider this example: +```yaml +global: + labels: + global: global + hostNetwork: true + nodeSelector: + global: global + +labels: + local: local +nodeSelector: + local: local +hostNetwork: false +``` + +These values will template `hostNetwork` as `false`, a map of labels `{ "global": "global", "local": "local" }`, and a `nodeSelector` with +`{ "local": "local" }`. + +As Helm merges all maps by default, it could be confusing that we have two behaviors (merging `labels` and replacing `nodeSelector`) when going +from global to local values. This is the rationale behind it: +* `hostNetwork` is templated to `false` because the local value overrides the one defined globally. +* `labels` are merged because the user may want to label all the New Relic pods at once and label other solution pods differently for + clarity's sake. +* `nodeSelector` is not merged like `labels` because merging could make it harder to overwrite/delete a selector that comes from global, given + the logic that Helm follows when merging maps. + + +#### 2. Fine grain registries + +Some charts have only one image, while others can have two or more. The local path for the registry can change depending +on the chart itself. + +As this is mostly unique per Helm chart, you should take a look at the chart's values table (or directly at the `values.yaml` file) to see all the +images that you can change. + +This should only be needed if you have an advanced setup that requires enough granularity to set a proxy/cache registry per integration. + + + +#### 3. Privileged mode + +By default, from the common library, the privileged mode is set to false.
But most of the Helm charts require this to be true to fetch more +metrics, so you may see it set to true in some charts. The consequences of privileged mode differ from one chart to another, so each chart that +honors the privileged mode toggle should have a section in its README explaining its behavior with the mode enabled or disabled. diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_affinity.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_affinity.tpl new file mode 100644 index 0000000000..1b2636754e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_affinity.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod affinity */ -}} +{{- define "newrelic.common.affinity" -}} + {{- if .Values.affinity -}} + {{- toYaml .Values.affinity -}} + {{- else if .Values.global -}} + {{- if .Values.global.affinity -}} + {{- toYaml .Values.global.affinity -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_agent-config.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_agent-config.tpl new file mode 100644 index 0000000000..9c32861a02 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_agent-config.tpl @@ -0,0 +1,26 @@ +{{/* +This helper should return the defaults that all agents should have +*/}} +{{- define "newrelic.common.agentConfig.defaults" -}} +{{- if include "newrelic.common.verboseLog" . }} +log: + level: trace +{{- end }} + +{{- if (include "newrelic.common.nrStaging" . ) }} +staging: true +{{- end }} + +{{- with include "newrelic.common.proxy" . }} +proxy: {{ . | quote }} +{{- end }} + +{{- with include "newrelic.common.fedramp.enabled" . }} +fedramp: {{ .
}} +{{- end }} + +{{- with fromYaml ( include "newrelic.common.customAttributes" . ) }} +custom_attributes: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_cluster.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_cluster.tpl new file mode 100644 index 0000000000..0197dd35a3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_cluster.tpl @@ -0,0 +1,15 @@ +{{/* +Return the cluster +*/}} +{{- define "newrelic.common.cluster" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.cluster -}} + {{- .Values.cluster -}} +{{- else if $global.cluster -}} + {{- $global.cluster -}} +{{- else -}} + {{ fail "There is not cluster name definition set neither in `.global.cluster' nor `.cluster' in your values.yaml. Cluster name is required." }} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_custom-attributes.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_custom-attributes.tpl new file mode 100644 index 0000000000..92020719c3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_custom-attributes.tpl @@ -0,0 +1,17 @@ +{{/* +This will render custom attributes as a YAML ready to be templated or be used with `fromYaml`. 
+*/}} +{{- define "newrelic.common.customAttributes" -}} +{{- $customAttributes := dict -}} + +{{- $global := index .Values "global" | default dict -}} +{{- if $global.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes $global.customAttributes -}} +{{- end -}} + +{{- if .Values.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes .Values.customAttributes -}} +{{- end -}} + +{{- toYaml $customAttributes -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_dnsconfig.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_dnsconfig.tpl new file mode 100644 index 0000000000..d4e40aa8af --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_dnsconfig.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod dnsConfig */ -}} +{{- define "newrelic.common.dnsConfig" -}} + {{- if .Values.dnsConfig -}} + {{- toYaml .Values.dnsConfig -}} + {{- else if .Values.global -}} + {{- if .Values.global.dnsConfig -}} + {{- toYaml .Values.global.dnsConfig -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_fedramp.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_fedramp.tpl new file mode 100644 index 0000000000..9df8d6b5e9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_fedramp.tpl @@ -0,0 +1,25 @@ +{{- /* Defines the fedRAMP flag */ -}} +{{- define "newrelic.common.fedramp.enabled" -}} + {{- if .Values.fedramp -}} + {{- if .Values.fedramp.enabled -}} + {{- .Values.fedramp.enabled -}} + {{- end -}} + {{- else if .Values.global -}} + {{- if .Values.global.fedramp -}} + {{- if .Values.global.fedramp.enabled -}} + {{- 
.Values.global.fedramp.enabled -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + + + +{{- /* Return FedRAMP value directly ready to be templated */ -}} +{{- define "newrelic.common.fedramp.enabled.value" -}} +{{- if include "newrelic.common.fedramp.enabled" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_hostnetwork.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_hostnetwork.tpl new file mode 100644 index 0000000000..4cf017ef7e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_hostnetwork.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the hostNetwork toggle. +This helper allows to override the global `.global.hostNetwork` with the value of `.hostNetwork`. +Returns "true" if `hostNetwork` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.hostNetwork" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} + +{{- /* +`get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs + +We also want only to return when this is true, returning `false` here will template "false" (string) when doing +an `(include "newrelic.common.hostNetwork" .)`, which is not an "empty string" so it is `true` if it is used +as an evaluation somewhere else. +*/ -}} +{{- if get .Values "hostNetwork" | kindIs "bool" -}} + {{- if .Values.hostNetwork -}} + {{- .Values.hostNetwork -}} + {{- end -}} +{{- else if get $global "hostNetwork" | kindIs "bool" -}} + {{- if $global.hostNetwork -}} + {{- $global.hostNetwork -}} + {{- end -}} +{{- end -}} +{{- end -}} + + +{{- /* +Abstraction of the hostNetwork toggle. 
+This helper abstracts the function "newrelic.common.hostNetwork" to return true or false directly. +*/ -}} +{{- define "newrelic.common.hostNetwork.value" -}} +{{- if include "newrelic.common.hostNetwork" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_images.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_images.tpl new file mode 100644 index 0000000000..d4fb432905 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_images.tpl @@ -0,0 +1,94 @@ +{{- /* +Return the proper image name +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.image" -}} + {{- $registryName := include "newrelic.common.images.registry" ( dict "imageRoot" .imageRoot "defaultRegistry" .defaultRegistry "context" .context ) -}} + {{- $repositoryName := include "newrelic.common.images.repository" .imageRoot -}} + {{- $tag := include "newrelic.common.images.tag" ( dict "imageRoot" .imageRoot "context" .context) -}} + + {{- if $registryName -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag | quote -}} + {{- else -}} + {{- printf "%s:%s" $repositoryName $tag | quote -}} + {{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image registry +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.registry" -}} +{{- $globalRegistry := "" -}} +{{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- with .context.Values.global.images.registry -}} + {{- $globalRegistry = . 
-}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- $localRegistry := "" -}} +{{- if .imageRoot.registry -}} + {{- $localRegistry = .imageRoot.registry -}} +{{- end -}} + +{{- $registry := $localRegistry | default $globalRegistry | default .defaultRegistry -}} +{{- if $registry -}} + {{- $registry -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image repository +{{ include "newrelic.common.images.repository" .Values.path.to.the.image }} +*/ -}} +{{- define "newrelic.common.images.repository" -}} + {{- .repository -}} +{{- end -}} + + + +{{- /* +Return the proper image tag +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.path.to.the.image "context" .) }} +*/ -}} +{{- define "newrelic.common.images.tag" -}} + {{- .imageRoot.tag | default .context.Chart.AppVersion | toString -}} +{{- end -}} + + + +{{- /* +Return the proper Image Pull Registry Secret Names evaluating values as templates +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.path.to.the.images.pullSecrets1, .Values.path.to.the.images.pullSecrets2) "context" .) }} +*/ -}} +{{- define "newrelic.common.images.renderPullSecrets" -}} + {{- $flatlist := list }} + + {{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- if .context.Values.global.images.pullSecrets -}} + {{- range .context.Values.global.images.pullSecrets -}} + {{- $flatlist = append $flatlist . -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- range .pullSecrets -}} + {{- if not (empty .) -}} + {{- range . -}} + {{- $flatlist = append $flatlist . 
-}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- if $flatlist -}} + {{- toYaml $flatlist -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_insights.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_insights.tpl new file mode 100644 index 0000000000..895c377326 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_insights.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the Insights Key. +*/}} +{{- define "newrelic.common.insightsKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "insightskey" ) -}} +{{- include "newrelic.common.insightsKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +*/}} +{{- define "newrelic.common.insightsKey.secretKeyName" -}} +{{- include "newrelic.common.insightsKey._customSecretKey" . | default "insightsKey" -}} +{{- end -}} + +{{/* +Return local insightsKey if set, global otherwise. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._licenseKey" -}} +{{- if .Values.insightsKey -}} + {{- .Values.insightsKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.insightsKey -}} + {{- .Values.global.insightsKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the Insights Key. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.insightsKey._customSecretName" -}} +{{- if .Values.customInsightsKeySecretName -}} + {{- .Values.customInsightsKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretName -}} + {{- .Values.global.customInsightsKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._customSecretKey" -}} +{{- if .Values.customInsightsKeySecretKey -}} + {{- .Values.customInsightsKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretKey }} + {{- .Values.global.customInsightsKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_insights_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_insights_secret.yaml.tpl new file mode 100644 index 0000000000..556caa6ca6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_insights_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the insights key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.insightsKey.secret" }} +{{- if not (include "newrelic.common.insightsKey._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.insightsKey._licenseKey" .) }} + {{- fail "You must specify a insightsKey or a customInsightsSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.insightsKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.insightsKey.secretKeyName" . 
}}: {{ include "newrelic.common.insightsKey._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_labels.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_labels.tpl new file mode 100644 index 0000000000..b025948285 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_labels.tpl @@ -0,0 +1,54 @@ +{{/* +This will render the labels that should be used in all the manifests used by the helm chart. +*/}} +{{- define "newrelic.common.labels" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- $chart := dict "helm.sh/chart" (include "newrelic.common.naming.chart" . ) -}} +{{- $managedBy := dict "app.kubernetes.io/managed-by" .Release.Service -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $labels := mustMergeOverwrite $chart $managedBy $selectorLabels -}} +{{- if .Chart.AppVersion -}} +{{- $labels = mustMergeOverwrite $labels (dict "app.kubernetes.io/version" .Chart.AppVersion) -}} +{{- end -}} + +{{- $globalUserLabels := $global.labels | default dict -}} +{{- $localUserLabels := .Values.labels | default dict -}} + +{{- $labels = mustMergeOverwrite $labels $globalUserLabels $localUserLabels -}} + +{{- toYaml $labels -}} +{{- end -}} + + + +{{/* +This will render the labels that should be used in deployments/daemonsets template pods as a selector. +*/}} +{{- define "newrelic.common.labels.selectorLabels" -}} +{{- $name := dict "app.kubernetes.io/name" ( include "newrelic.common.naming.name" . 
) -}} +{{- $instance := dict "app.kubernetes.io/instance" .Release.Name -}} + +{{- $selectorLabels := mustMergeOverwrite $name $instance -}} + +{{- toYaml $selectorLabels -}} +{{- end }} + + + +{{/* +Pod labels +*/}} +{{- define "newrelic.common.labels.podLabels" -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $global := index .Values "global" | default dict -}} +{{- $globalPodLabels := $global.podLabels | default dict }} + +{{- $localPodLabels := .Values.podLabels | default dict }} + +{{- $podLabels := mustMergeOverwrite $selectorLabels $globalPodLabels $localPodLabels -}} + +{{- toYaml $podLabels -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_license.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_license.tpl new file mode 100644 index 0000000000..cb349f6bb6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_license.tpl @@ -0,0 +1,68 @@ +{{/* +Return the name of the secret holding the License Key. +*/}} +{{- define "newrelic.common.license.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "license" ) -}} +{{- include "newrelic.common.license._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +*/}} +{{- define "newrelic.common.license.secretKeyName" -}} +{{- include "newrelic.common.license._customSecretKey" . | default "licenseKey" -}} +{{- end -}} + +{{/* +Return local licenseKey if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.license._licenseKey" -}} +{{- if .Values.licenseKey -}} + {{- .Values.licenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.licenseKey -}} + {{- .Values.global.licenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the License Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretName" -}} +{{- if .Values.customSecretName -}} + {{- .Values.customSecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretName -}} + {{- .Values.global.customSecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretKey" -}} +{{- if .Values.customSecretLicenseKey -}} + {{- .Values.customSecretLicenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretLicenseKey }} + {{- .Values.global.customSecretLicenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the user set a custom secret for the license. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._usesCustomSecret" -}} +{{- if or (include "newrelic.common.license._customSecretName" .) (include "newrelic.common.license._customSecretKey" .) -}} +true +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_license_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_license_secret.yaml.tpl new file mode 100644 index 0000000000..610a0a3370 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_license_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the license key secret if user has not specified a custom secret. 
+*/}} +{{- define "newrelic.common.license.secret" }} +{{- if not (include "newrelic.common.license._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.license._licenseKey" .) }} + {{- fail "You must specify a licenseKey or a customSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.license.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.license.secretKeyName" . }}: {{ include "newrelic.common.license._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_low-data-mode.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_low-data-mode.tpl new file mode 100644 index 0000000000..3dd55ef2ff --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_low-data-mode.tpl @@ -0,0 +1,26 @@ +{{- /* +Abstraction of the lowDataMode toggle. +This helper allows to override the global `.global.lowDataMode` with the value of `.lowDataMode`. +Returns "true" if `lowDataMode` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.lowDataMode" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "lowDataMode" | kindIs "bool") -}} + {{- if .Values.lowDataMode -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.lowDataMode" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.lowDataMode -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "lowDataMode" | kindIs "bool" -}} + {{- if $global.lowDataMode -}} + {{- $global.lowDataMode -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_naming.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_naming.tpl new file mode 100644 index 0000000000..19fa92648c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_naming.tpl @@ -0,0 +1,73 @@ +{{/* +This is a function to be called directly with a string just to truncate strings to +63 chars because some Kubernetes name fields are limited to that. +*/}} +{{- define "newrelic.common.naming.truncateToDNS" -}} +{{- . | trunc 63 | trimSuffix "-" }} +{{- end }} + + + +{{- /* +Given a name and a suffix, returns a DNS-valid name that always includes the suffix, truncating the name if needed. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If the suffix is too long it gets truncated, but it always takes precedence over the name, so a 63-char suffix would suppress the name. 
+Usage: +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" "" "suffix" "my-suffix" ) }} +*/ -}} +{{- define "newrelic.common.naming.truncateToDNSWithSuffix" -}} +{{- $suffix := (include "newrelic.common.naming.truncateToDNS" .suffix) -}} +{{- $maxLen := (max (sub 63 (add1 (len $suffix))) 0) -}} {{- /* We prepend "-" to the suffix so an additional character is needed */ -}} + +{{- $newName := .name | trunc ($maxLen | int) | trimSuffix "-" -}} +{{- if $newName -}} +{{- printf "%s-%s" $newName $suffix -}} +{{- else -}} +{{ $suffix }} +{{- end -}} + +{{- end -}} + + + +{{/* +Expand the name of the chart. +Uses the Chart name by default if nameOverride is not set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.name" -}} +{{- $name := .Values.nameOverride | default .Chart.Name -}} +{{- include "newrelic.common.naming.truncateToDNS" $name -}} +{{- end }} + + + +{{/* +Create a default fully qualified app name. +By default the full name is the release name if it already includes the chart name; if not, +the release name and chart name are concatenated with a hyphen. This could change if fullnameOverride or +nameOverride are set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.fullname" -}} +{{- $name := include "newrelic.common.naming.name" . -}} + +{{- if .Values.fullnameOverride -}} + {{- $name = .Values.fullnameOverride -}} +{{- else if not (contains $name .Release.Name) -}} + {{- $name = printf "%s-%s" .Release.Name $name -}} +{{- end -}} + +{{- include "newrelic.common.naming.truncateToDNS" $name -}} + +{{- end -}} + + + +{{/* +Create chart name and version as used by the chart label. +This function should not be used for naming objects. Use "common.naming.{name,fullname}" instead. 
+*/}} +{{- define "newrelic.common.naming.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_nodeselector.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_nodeselector.tpl new file mode 100644 index 0000000000..d488873412 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_nodeselector.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod nodeSelector */ -}} +{{- define "newrelic.common.nodeSelector" -}} + {{- if .Values.nodeSelector -}} + {{- toYaml .Values.nodeSelector -}} + {{- else if .Values.global -}} + {{- if .Values.global.nodeSelector -}} + {{- toYaml .Values.global.nodeSelector -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_priority-class-name.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_priority-class-name.tpl new file mode 100644 index 0000000000..50182b7343 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_priority-class-name.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the pod priorityClassName */ -}} +{{- define "newrelic.common.priorityClassName" -}} + {{- if .Values.priorityClassName -}} + {{- .Values.priorityClassName -}} + {{- else if .Values.global -}} + {{- if .Values.global.priorityClassName -}} + {{- .Values.global.priorityClassName -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_privileged.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_privileged.tpl new file mode 
100644 index 0000000000..f3ae814dd9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_privileged.tpl @@ -0,0 +1,28 @@ +{{- /* +This is a helper that returns whether the chart should assume the user is fine deploying privileged pods. +*/ -}} +{{- define "newrelic.common.privileged" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist. */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if get .Values "privileged" | kindIs "bool" -}} + {{- if .Values.privileged -}} + {{- .Values.privileged -}} + {{- end -}} +{{- else if get $global "privileged" | kindIs "bool" -}} + {{- if $global.privileged -}} + {{- $global.privileged -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* Return directly "true" or "false" based on the output of "newrelic.common.privileged" */ -}} +{{- define "newrelic.common.privileged.value" -}} +{{- if include "newrelic.common.privileged" . 
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_proxy.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_proxy.tpl new file mode 100644 index 0000000000..60f34c7ec1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_proxy.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the proxy */ -}} +{{- define "newrelic.common.proxy" -}} + {{- if .Values.proxy -}} + {{- .Values.proxy -}} + {{- else if .Values.global -}} + {{- if .Values.global.proxy -}} + {{- .Values.global.proxy -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_region.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_region.tpl new file mode 100644 index 0000000000..bdcacf3235 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_region.tpl @@ -0,0 +1,74 @@ +{{/* +Return the region that is being used by the user +*/}} +{{- define "newrelic.common.region" -}} +{{- if and (include "newrelic.common.license._usesCustomSecret" .) (not (include "newrelic.common.region._fromValues" .)) -}} + {{- fail "This Helm Chart is not able to compute the region. You must specify a .global.region or .region if the license is set using a custom secret." -}} +{{- end -}} + +{{- /* Defaults */ -}} +{{- $region := "us" -}} +{{- if include "newrelic.common.nrStaging" . -}} + {{- $region = "staging" -}} +{{- else if include "newrelic.common.region._isEULicenseKey" . -}} + {{- $region = "eu" -}} +{{- end -}} + +{{- include "newrelic.common.region.validate" (include "newrelic.common.region._fromValues" . 
| default $region ) -}} +{{- end -}} + + + +{{/* +Returns the region from the values if valid. This only returns the value from the `values.yaml`. +More intelligence should be used to compute the region. + +Usage: `include "newrelic.common.region.validate" "us"` +*/}} +{{- define "newrelic.common.region.validate" -}} +{{- /* Ref: https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21 */ -}} +{{- $region := . | lower -}} +{{- if eq $region "us" -}} + US +{{- else if eq $region "eu" -}} + EU +{{- else if eq $region "staging" -}} + Staging +{{- else if eq $region "local" -}} + Local +{{- else -}} + {{- fail (printf "the region provided is not valid: %s not in \"US\" \"EU\" \"Staging\" \"Local\"" .) -}} +{{- end -}} +{{- end -}} + + + +{{/* +Returns the region from the values. This only returns the value from the `values.yaml`. +More intelligence should be used to compute the region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._fromValues" -}} +{{- if .Values.region -}} + {{- .Values.region -}} +{{- else if .Values.global -}} + {{- if .Values.global.region -}} + {{- .Values.global.region -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the license is for EU region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._isEULicenseKey" -}} +{{- if not (include "newrelic.common.license._usesCustomSecret" .) -}} + {{- $license := include "newrelic.common.license._licenseKey" . 
-}} + {{- if hasPrefix "eu" $license -}} + true + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_security-context.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_security-context.tpl new file mode 100644 index 0000000000..9edfcabfd0 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_security-context.tpl @@ -0,0 +1,23 @@ +{{- /* Defines the container securityContext */ -}} +{{- define "newrelic.common.securityContext.container" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.containerSecurityContext -}} + {{- toYaml .Values.containerSecurityContext -}} +{{- else if $global.containerSecurityContext -}} + {{- toYaml $global.containerSecurityContext -}} +{{- end -}} +{{- end -}} + + + +{{- /* Defines the pod securityContext */ -}} +{{- define "newrelic.common.securityContext.pod" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.podSecurityContext -}} + {{- toYaml .Values.podSecurityContext -}} +{{- else if $global.podSecurityContext -}} + {{- toYaml $global.podSecurityContext -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_serviceaccount.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_serviceaccount.tpl new file mode 100644 index 0000000000..2d352f6ea9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_serviceaccount.tpl @@ -0,0 +1,90 @@ +{{- /* Defines if the service account has to be created or not */ -}} +{{- define "newrelic.common.serviceAccount.create" -}} +{{- $valueFound := false -}} + +{{- /* Look for a local creation of a service account */ -}} 
+{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "create" | kindIs "bool") -}} + {{- $valueFound = true -}} + {{- if .Values.serviceAccount.create -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.serviceAccount.name" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.serviceAccount.create -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* Look for a global creation of a service account */ -}} +{{- if not $valueFound -}} + {{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} + {{- $global := index .Values "global" | default dict -}} + {{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "create" | kindIs "bool" -}} + {{- $valueFound = true -}} + {{- if $global.serviceAccount.create -}} + {{- $global.serviceAccount.create -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* In case no serviceAccount value has been found, default to "true" */ -}} +{{- if not $valueFound -}} +true +{{- end -}} +{{- end -}} + + + +{{- /* Defines the name of the service account */ -}} +{{- define "newrelic.common.serviceAccount.name" -}} +{{- $localServiceAccount := "" -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "name" | kindIs "string") -}} + {{- $localServiceAccount = .Values.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := "" -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "name" | kindIs "string" -}} + {{- $globalServiceAccount = $global.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- if (include "newrelic.common.serviceAccount.create" .) 
-}} + {{- $localServiceAccount | default $globalServiceAccount | default (include "newrelic.common.naming.fullname" .) -}} +{{- else -}} + {{- $localServiceAccount | default $globalServiceAccount | default "default" -}} +{{- end -}} +{{- end -}} + + + +{{- /* Merge the global and local annotations for the service account */ -}} +{{- define "newrelic.common.serviceAccount.annotations" -}} +{{- $localServiceAccount := dict -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if get .Values.serviceAccount "annotations" -}} + {{- $localServiceAccount = .Values.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := dict -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "annotations" -}} + {{- $globalServiceAccount = $global.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $merged := mustMergeOverwrite $globalServiceAccount $localServiceAccount -}} + +{{- if $merged -}} + {{- toYaml $merged -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_staging.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_staging.tpl new file mode 100644 index 0000000000..bd9ad09bb9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_staging.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the nrStaging toggle. +This helper allows to override the global `.global.nrStaging` with the value of `.nrStaging`. 
+Returns "true" if `nrStaging` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.nrStaging" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "nrStaging" | kindIs "bool") -}} + {{- if .Values.nrStaging -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.nrStaging" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.nrStaging -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "nrStaging" | kindIs "bool" -}} + {{- if $global.nrStaging -}} + {{- $global.nrStaging -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Returns "true" or "false" directly instead of an empty string (Helm falsiness) based on the output of "newrelic.common.nrStaging" +*/ -}} +{{- define "newrelic.common.nrStaging.value" -}} +{{- if include "newrelic.common.nrStaging" . 
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_tolerations.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_tolerations.tpl new file mode 100644 index 0000000000..e016b38e27 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_tolerations.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod tolerations */ -}} +{{- define "newrelic.common.tolerations" -}} + {{- if .Values.tolerations -}} + {{- toYaml .Values.tolerations -}} + {{- else if .Values.global -}} + {{- if .Values.global.tolerations -}} + {{- toYaml .Values.global.tolerations -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_userkey.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_userkey.tpl new file mode 100644 index 0000000000..982ea8e09d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_userkey.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the API Key. +*/}} +{{- define "newrelic.common.userKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "userkey" ) -}} +{{- include "newrelic.common.userKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +*/}} +{{- define "newrelic.common.userKey.secretKeyName" -}} +{{- include "newrelic.common.userKey._customSecretKey" . | default "userKey" -}} +{{- end -}} + +{{/* +Return local API Key if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.userKey._userKey" -}} +{{- if .Values.userKey -}} + {{- .Values.userKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.userKey -}} + {{- .Values.global.userKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the API Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretName" -}} +{{- if .Values.customUserKeySecretName -}} + {{- .Values.customUserKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretName -}} + {{- .Values.global.customUserKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretKey" -}} +{{- if .Values.customUserKeySecretKey -}} + {{- .Values.customUserKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretKey }} + {{- .Values.global.customUserKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_userkey_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_userkey_secret.yaml.tpl new file mode 100644 index 0000000000..b979856548 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_userkey_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the user key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.userKey.secret" }} +{{- if not (include "newrelic.common.userKey._customSecretName" .) }} +{{- /* Fail if user key is empty and required: */ -}} +{{- if not (include "newrelic.common.userKey._userKey" .) 
}} + {{- fail "You must specify a userKey or a customUserKeySecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.userKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.userKey.secretKeyName" . }}: {{ include "newrelic.common.userKey._userKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_verbose-log.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_verbose-log.tpl new file mode 100644 index 0000000000..2286d46815 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/templates/_verbose-log.tpl @@ -0,0 +1,54 @@ +{{- /* +Abstraction of the verbose toggle. +This helper allows to override the global `.global.verboseLog` with the value of `.verboseLog`. +Returns "true" if `verbose` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.verboseLog" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "verboseLog" | kindIs "bool") -}} + {{- if .Values.verboseLog -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.verboseLog" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.verboseLog -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "verboseLog" | kindIs "bool" -}} + {{- if $global.verboseLog -}} + {{- $global.verboseLog -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return true or false directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsBoolean" -}} +{{- if include "newrelic.common.verboseLog" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return 1 or 0 directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsInt" -}} +{{- if include "newrelic.common.verboseLog" . -}} +1 +{{- else -}} +0 +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/values.yaml new file mode 100644 index 0000000000..75e2d112ad --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/charts/common-library/values.yaml @@ -0,0 +1 @@ +# values are not needed for the library chart, however this file is still needed for helm lint to work. 
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/ci/test-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/ci/test-values.yaml new file mode 100644 index 0000000000..6f79dea930 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/ci/test-values.yaml @@ -0,0 +1,5 @@ +cluster: test-cluster + +image: + repository: e2e/metadata-injection + tag: test # Defaults to AppVersion diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/NOTES.txt b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/NOTES.txt new file mode 100644 index 0000000000..544124d11e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/NOTES.txt @@ -0,0 +1,23 @@ +Your deployment of the New Relic metadata injection webhook is complete. You can check on the progress of this by running the following command: + + kubectl get deployments -o wide -w --namespace {{ .Release.Namespace }} {{ template "newrelic.common.naming.fullname" . }} + +{{- if .Values.customTLSCertificate }} +You have configured the chart to use a custom TLS certificate; make sure to read the 'Manage custom certificates' section of the official docs for instructions on how to finish setting up the webhook. 
+ +https://docs.newrelic.com/docs/integrations/kubernetes-integration/link-your-applications/link-your-applications-kubernetes#configure-injection +{{- end }} + +To validate the injection of metadata create a dummy pod containing Busybox by running: + + kubectl create -f https://git.io/vPieo + +Check if New Relic environment variables were injected: + + kubectl exec busybox0 -- env | grep NEW_RELIC_METADATA_KUBERNETES + + NEW_RELIC_METADATA_KUBERNETES_CLUSTER_NAME=fsi + NEW_RELIC_METADATA_KUBERNETES_NODE_NAME=nodea + NEW_RELIC_METADATA_KUBERNETES_NAMESPACE_NAME=default + NEW_RELIC_METADATA_KUBERNETES_POD_NAME=busybox0 + NEW_RELIC_METADATA_KUBERNETES_CONTAINER_NAME=busybox diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/_helpers.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/_helpers.tpl new file mode 100644 index 0000000000..54a23e9811 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/_helpers.tpl @@ -0,0 +1,72 @@ +{{/* vim: set filetype=mustache: */}} + +{{- /* Allow to change pod defaults dynamically */ -}} +{{- define "nri-metadata-injection.securityContext.pod" -}} +{{- if include "newrelic.common.securityContext.pod" . -}} +{{- include "newrelic.common.securityContext.pod" . -}} +{{- else -}} +fsGroup: 1001 +runAsUser: 1001 +runAsGroup: 1001 +{{- end -}} +{{- end -}} + +{{- /* +Naming helpers +*/ -}} + +{{- define "nri-metadata-injection.name.admission" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.name" .) "suffix" "admission") }} +{{- end -}} + +{{- define "nri-metadata-injection.fullname.admission" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) 
"suffix" "admission") }} +{{- end -}} + +{{- define "nri-metadata-injection.fullname.admission.serviceAccount" -}} +{{- if include "newrelic.common.serviceAccount.create" . -}} + {{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "admission") }} +{{- else -}} + {{ include "newrelic.common.serviceAccount.name" . }} +{{- end -}} +{{- end -}} + +{{- define "nri-metadata-injection.name.admission-create" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.name" .) "suffix" "admission-create") }} +{{- end -}} + +{{- define "nri-metadata-injection.fullname.admission-create" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "admission-create") }} +{{- end -}} + +{{- define "nri-metadata-injection.name.admission-patch" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.name" .) "suffix" "admission-patch") }} +{{- end -}} + +{{- define "nri-metadata-injection.fullname.admission-patch" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "admission-patch") }} +{{- end -}} + +{{- define "nri-metadata-injection.name.self-signed-issuer" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.name" .) "suffix" "self-signed-issuer") }} +{{- end -}} + +{{- define "nri-metadata-injection.fullname.self-signed-issuer" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "self-signed-issuer") }} +{{- end -}} + +{{- define "nri-metadata-injection.name.root-issuer" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.name" .) 
"suffix" "root-issuer") }} +{{- end -}} + +{{- define "nri-metadata-injection.fullname.root-issuer" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "root-issuer") }} +{{- end -}} + +{{- define "nri-metadata-injection.name.webhook-cert" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.name" .) "suffix" "webhook-cert") }} +{{- end -}} + +{{- define "nri-metadata-injection.fullname.webhook-cert" -}} +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "webhook-cert") }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/clusterrole.yaml new file mode 100644 index 0000000000..275b597c8d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/clusterrole.yaml @@ -0,0 +1,27 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "nri-metadata-injection.fullname.admission" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "newrelic.common.naming.name" $ }}-admission + {{- include "newrelic.common.labels" . 
| nindent 4 }} +rules: + - apiGroups: + - admissionregistration.k8s.io + resources: + - mutatingwebhookconfigurations + verbs: + - get + - update +{{- if .Values.rbac.pspEnabled }} + - apiGroups: ['policy'] + resources: ['podsecuritypolicies'] + verbs: ['use'] + resourceNames: + - {{ include "nri-metadata-injection.fullname.admission" . }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/clusterrolebinding.yaml new file mode 100644 index 0000000000..cf846745ed --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/clusterrolebinding.yaml @@ -0,0 +1,20 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "nri-metadata-injection.fullname.admission" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + app: {{ include "nri-metadata-injection.name.admission" . }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "nri-metadata-injection.fullname.admission" . }} +subjects: + - kind: ServiceAccount + name: {{ include "nri-metadata-injection.fullname.admission.serviceAccount" . 
}} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/job-createSecret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/job-createSecret.yaml new file mode 100644 index 0000000000..a04f279353 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/job-createSecret.yaml @@ -0,0 +1,61 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ include "nri-metadata-injection.fullname.admission-create" . }} + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "nri-metadata-injection.name.admission-create" . }} + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + template: + metadata: + name: {{ include "nri-metadata-injection.fullname.admission-create" . }} + {{- if .Values.podAnnotations }} + annotations: + {{- toYaml .Values.podAnnotations | nindent 8 }} + {{- end }} + labels: + app: {{ include "nri-metadata-injection.name.admission-create" . }} + {{- include "newrelic.common.labels" . | nindent 8 }} + spec: + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" ( list .Values.jobImage.pullSecrets ) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 -}} + {{- end }} + containers: + - name: create + image: {{ include "newrelic.common.images.image" ( dict "defaultRegistry" "registry.k8s.io" "imageRoot" .Values.jobImage "context" .) }} + imagePullPolicy: {{ .Values.jobImage.pullPolicy }} + args: + - create + - --host={{ include "newrelic.common.naming.fullname" . }},{{ include "newrelic.common.naming.fullname" . 
}}.{{ .Release.Namespace }}.svc + - --namespace={{ .Release.Namespace }} + - --secret-name={{ include "nri-metadata-injection.fullname.admission" . }} + - --cert-name=tls.crt + - --key-name=tls.key + {{- if .Values.jobImage.volumeMounts }} + volumeMounts: + {{- .Values.jobImage.volumeMounts | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.jobImage.volumes }} + volumes: + {{- .Values.jobImage.volumes | toYaml | nindent 8 }} + {{- end }} + restartPolicy: OnFailure + serviceAccountName: {{ include "nri-metadata-injection.fullname.admission.serviceAccount" . }} + securityContext: + runAsGroup: 2000 + runAsNonRoot: true + runAsUser: 2000 + nodeSelector: + kubernetes.io/os: linux + {{ include "newrelic.common.nodeSelector" . | nindent 8 }} + {{- if .Values.tolerations }} + tolerations: + {{- toYaml .Values.tolerations | nindent 8 }} + {{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/job-patchWebhook.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/job-patchWebhook.yaml new file mode 100644 index 0000000000..99374ef355 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/job-patchWebhook.yaml @@ -0,0 +1,61 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ include "nri-metadata-injection.fullname.admission-patch" . }} + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "nri-metadata-injection.name.admission-patch" . }} + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + template: + metadata: + name: {{ include "nri-metadata-injection.fullname.admission-patch" . 
}} + {{- if .Values.podAnnotations }} + annotations: + {{- toYaml .Values.podAnnotations | nindent 8 }} + {{- end }} + labels: + app: {{ include "nri-metadata-injection.name.admission-patch" . }} + {{- include "newrelic.common.labels" . | nindent 8 }} + spec: + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" ( list .Values.jobImage.pullSecrets ) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 -}} + {{- end }} + containers: + - name: patch + image: {{ include "newrelic.common.images.image" ( dict "defaultRegistry" "registry.k8s.io" "imageRoot" .Values.jobImage "context" .) }} + imagePullPolicy: {{ .Values.jobImage.pullPolicy }} + args: + - patch + - --webhook-name={{ include "newrelic.common.naming.fullname" . }} + - --namespace={{ .Release.Namespace }} + - --secret-name={{ include "nri-metadata-injection.fullname.admission" . }} + - --patch-failure-policy=Ignore + - --patch-validating=false + {{- if .Values.jobImage.volumeMounts }} + volumeMounts: + {{- .Values.jobImage.volumeMounts | toYaml | nindent 10 }} + {{- end }} + {{- if .Values.jobImage.volumes }} + volumes: + {{- .Values.jobImage.volumes | toYaml | nindent 8 }} + {{- end }} + restartPolicy: OnFailure + serviceAccountName: {{ include "nri-metadata-injection.fullname.admission.serviceAccount" . }} + securityContext: + runAsGroup: 2000 + runAsNonRoot: true + runAsUser: 2000 + nodeSelector: + kubernetes.io/os: linux + {{ include "newrelic.common.nodeSelector" . 
| nindent 8 }} + {{- if .Values.tolerations }} + tolerations: + {{- toYaml .Values.tolerations | nindent 8 }} + {{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/psp.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/psp.yaml new file mode 100644 index 0000000000..899ac95fe1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/psp.yaml @@ -0,0 +1,50 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled) (.Values.rbac.pspEnabled) (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy")) }} +apiVersion: policy/v1beta1 +kind: PodSecurityPolicy +metadata: + name: {{ include "nri-metadata-injection.fullname.admission" . }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "nri-metadata-injection.name.admission" . }} + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + privileged: false + # Required to prevent escalations to root. + # allowPrivilegeEscalation: false + # This is redundant with non-root + disallow privilege escalation, + # but we can provide it for defense in depth. + #requiredDropCapabilities: + # - ALL + # Allow core volume types. + volumes: + - 'configMap' + - 'emptyDir' + - 'projected' + - 'secret' + - 'downwardAPI' + - 'persistentVolumeClaim' + hostNetwork: false + hostIPC: false + hostPID: false + runAsUser: + # Permits the container to run with root privileges as well. + rule: 'RunAsAny' + seLinux: + # This policy assumes the nodes are using AppArmor rather than SELinux. + rule: 'RunAsAny' + supplementalGroups: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. 
+ - min: 0 + max: 65535 + fsGroup: + rule: 'MustRunAs' + ranges: + # Forbid adding the root group. + - min: 0 + max: 65535 + readOnlyRootFilesystem: false +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/role.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/role.yaml new file mode 100644 index 0000000000..e426702579 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/role.yaml @@ -0,0 +1,21 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: {{ include "nri-metadata-injection.fullname.admission" . }} + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "nri-metadata-injection.name.admission" . }} + {{- include "newrelic.common.labels" . | nindent 4 }} +rules: + - apiGroups: + - "" + resources: + - secrets + verbs: + - get + - create +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/rolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/rolebinding.yaml new file mode 100644 index 0000000000..e73bf472cd --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/rolebinding.yaml @@ -0,0 +1,21 @@ +{{- if (and (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: {{ include "nri-metadata-injection.fullname.admission" . 
}} + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "nri-metadata-injection.name.admission" . }} + {{- include "newrelic.common.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: {{ include "nri-metadata-injection.fullname.admission" . }} +subjects: + - kind: ServiceAccount + name: {{ include "nri-metadata-injection.fullname.admission.serviceAccount" . }} + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/serviceaccount.yaml new file mode 100644 index 0000000000..027a59089a --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/job-patch/serviceaccount.yaml @@ -0,0 +1,14 @@ +{{- $createServiceAccount := include "newrelic.common.serviceAccount.create" . -}} +{{- if (and $createServiceAccount (not .Values.customTLSCertificate) (not .Values.certManager.enabled)) -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "nri-metadata-injection.fullname.admission.serviceAccount" . }} + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-install,post-upgrade + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded + labels: + app: {{ include "nri-metadata-injection.name.admission" . }} + {{- include "newrelic.common.labels" . 
| nindent 4 }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/mutatingWebhookConfiguration.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/mutatingWebhookConfiguration.yaml new file mode 100644 index 0000000000..b196d4f593 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/admission-webhooks/mutatingWebhookConfiguration.yaml @@ -0,0 +1,36 @@ +apiVersion: admissionregistration.k8s.io/v1 +kind: MutatingWebhookConfiguration +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} +{{- if .Values.certManager.enabled }} + annotations: + certmanager.k8s.io/inject-ca-from: {{ printf "%s/%s-root-cert" .Release.Namespace (include "newrelic.common.naming.fullname" .) | quote }} + cert-manager.io/inject-ca-from: {{ printf "%s/%s-root-cert" .Release.Namespace (include "newrelic.common.naming.fullname" .) | quote }} +{{- end }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +webhooks: +- name: metadata-injection.newrelic.com + clientConfig: + service: + name: {{ include "newrelic.common.naming.fullname" . 
}} + namespace: {{ .Release.Namespace }} + path: "/mutate" +{{- if not .Values.certManager.enabled }} + caBundle: "" +{{- end }} + rules: + - operations: ["CREATE"] + apiGroups: [""] + apiVersions: ["v1"] + resources: ["pods"] +{{- if .Values.injectOnlyLabeledNamespaces }} + scope: Namespaced + namespaceSelector: + matchLabels: + newrelic-metadata-injection: enabled +{{- end }} + failurePolicy: Ignore + timeoutSeconds: {{ .Values.timeoutSeconds }} + sideEffects: None + admissionReviewVersions: ["v1", "v1beta1"] diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/cert-manager.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/cert-manager.yaml new file mode 100644 index 0000000000..502fa44bb2 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/cert-manager.yaml @@ -0,0 +1,53 @@ +{{ if .Values.certManager.enabled }} +--- +# Create a selfsigned Issuer, in order to create a root CA certificate for +# signing webhook serving certificates +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: {{ include "nri-metadata-injection.fullname.self-signed-issuer" . }} + namespace: {{ .Release.Namespace }} +spec: + selfSigned: {} +--- +# Generate a CA Certificate used to sign certificates for the webhook +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: {{ include "newrelic.common.naming.fullname" . }}-root-cert + namespace: {{ .Release.Namespace }} +spec: + secretName: {{ include "newrelic.common.naming.fullname" . }}-root-cert + duration: {{ .Values.certManager.rootCertificateDuration}} + issuerRef: + name: {{ include "nri-metadata-injection.fullname.self-signed-issuer" . }} + commonName: "ca.webhook.nri" + isCA: true +--- +# Create an Issuer that uses the above generated CA certificate to issue certs +apiVersion: cert-manager.io/v1 +kind: Issuer +metadata: + name: {{ include "nri-metadata-injection.fullname.root-issuer" . 
}} + namespace: {{ .Release.Namespace }} +spec: + ca: + secretName: {{ include "newrelic.common.naming.fullname" . }}-root-cert +--- + +# Finally, generate a serving certificate for the webhook to use +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: {{ include "nri-metadata-injection.fullname.webhook-cert" . }} + namespace: {{ .Release.Namespace }} +spec: + secretName: {{ include "nri-metadata-injection.fullname.admission" . }} + duration: {{ .Values.certManager.webhookCertificateDuration }} + issuerRef: + name: {{ include "nri-metadata-injection.fullname.root-issuer" . }} + dnsNames: + - {{ include "newrelic.common.naming.fullname" . }} + - {{ include "newrelic.common.naming.fullname" . }}.{{ .Release.Namespace }} + - {{ include "newrelic.common.naming.fullname" . }}.{{ .Release.Namespace }}.svc +{{ end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/deployment.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/deployment.yaml new file mode 100644 index 0000000000..4974dbbc1c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/deployment.yaml @@ -0,0 +1,85 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + replicas: {{ .Values.replicas }} + selector: + matchLabels: + {{- /* We cannot use the common library here because of a legacy issue */}} + {{- /* `selector` is immutable and the previous chart did not have all the idiomatic labels */}} + app.kubernetes.io/name: {{ include "newrelic.common.naming.name" . }} + template: + metadata: + {{- if .Values.podAnnotations }} + annotations: + {{- toYaml .Values.podAnnotations | nindent 8 }} + {{- end }} + labels: + {{- include "newrelic.common.labels.podLabels" . 
| nindent 8 }} + spec: + {{- with include "nri-metadata-injection.securityContext.pod" . }} + securityContext: + {{- . | nindent 8 -}} + {{- end }} + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 8 }} + {{- end }} + hostNetwork: {{ include "newrelic.common.hostNetwork.value" . }} + {{- if include "newrelic.common.hostNetwork" . }} + dnsPolicy: ClusterFirstWithHostNet + {{- end }} + + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" ( list .Values.image.pullSecrets ) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 -}} + {{- end }} + containers: + - name: {{ include "newrelic.common.naming.name" . }} + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.image "context" .) }} + imagePullPolicy: {{ .Values.image.pullPolicy }} + {{- with include "newrelic.common.securityContext.container" . }} + securityContext: + {{- . | nindent 10 }} + {{- end }} + env: + - name: clusterName + value: {{ include "newrelic.common.cluster" . }} + ports: + - containerPort: 8443 + protocol: TCP + volumeMounts: + - name: tls-key-cert-pair + mountPath: /etc/tls-key-cert-pair + readinessProbe: + httpGet: + path: /health + port: 8080 + initialDelaySeconds: 1 + periodSeconds: 1 + {{- if .Values.resources }} + resources: + {{ toYaml .Values.resources | nindent 10 }} + {{- end }} + volumes: + - name: tls-key-cert-pair + secret: + secretName: {{ include "nri-metadata-injection.fullname.admission" . }} + nodeSelector: + kubernetes.io/os: linux + {{ include "newrelic.common.nodeSelector" . | nindent 8 }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . | nindent 8 -}} + {{- end }} + {{- with include "newrelic.common.affinity" . }} + affinity: + {{- . 
| nindent 8 -}} + {{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/service.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/service.yaml new file mode 100644 index 0000000000..e4a57587c6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/templates/service.yaml @@ -0,0 +1,13 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + ports: + - port: 443 + targetPort: 8443 + selector: + {{- include "newrelic.common.labels.selectorLabels" . | nindent 4 }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/cluster_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/cluster_test.yaml new file mode 100644 index 0000000000..a28487a06e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/cluster_test.yaml @@ -0,0 +1,39 @@ +suite: test cluster environment variable setup +templates: + - templates/deployment.yaml +release: + name: release + namespace: ns +tests: + - it: clusterName env is properly set + set: + cluster: my-cluster + asserts: + - contains: + path: spec.template.spec.containers[0].env + content: + name: clusterName + value: my-cluster + - it: fail when cluster is not defined + asserts: + - failedTemplate: + errorMessage: There is not cluster name definition set neither in `.global.cluster' nor `.cluster' in your values.yaml. Cluster name is required. 
+ - it: has a linux node selector by default + set: + cluster: my-cluster + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + - it: has a linux node selector and additional selectors + set: + cluster: my-cluster + nodeSelector: + aCoolTestLabel: aCoolTestValue + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + aCoolTestLabel: aCoolTestValue diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/job_serviceaccount_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/job_serviceaccount_test.yaml new file mode 100644 index 0000000000..63b6f0534b --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/job_serviceaccount_test.yaml @@ -0,0 +1,59 @@ +suite: test job' serviceAccount +templates: + - templates/admission-webhooks/job-patch/job-createSecret.yaml + - templates/admission-webhooks/job-patch/job-patchWebhook.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: RBAC points to the service account that is created by default + set: + cluster: test-cluster + rbac.create: true + serviceAccount.create: true + asserts: + - equal: + path: spec.template.spec.serviceAccountName + value: my-release-nri-metadata-injection-admission + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + cluster: test-cluster + rbac.create: true + serviceAccount.create: false + serviceAccount.name: sa-test + asserts: + - equal: + path: spec.template.spec.serviceAccountName + value: sa-test + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + cluster: test-cluster + rbac.create: true + serviceAccount.create: false + asserts: + - equal: + path: spec.template.spec.serviceAccountName + value: default + + - it: has a linux node selector by default + set: + cluster: my-cluster + asserts: + - 
equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + + - it: has a linux node selector and additional selectors + set: + cluster: my-cluster + nodeSelector: + aCoolTestLabel: aCoolTestValue + asserts: + - equal: + path: spec.template.spec.nodeSelector + value: + kubernetes.io/os: linux + aCoolTestLabel: aCoolTestValue diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/rbac_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/rbac_test.yaml new file mode 100644 index 0000000000..5a69191df9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/rbac_test.yaml @@ -0,0 +1,38 @@ +suite: test RBAC creation +templates: + - templates/admission-webhooks/job-patch/rolebinding.yaml + - templates/admission-webhooks/job-patch/clusterrolebinding.yaml +release: + name: my-release + namespace: my-namespace +tests: + - it: RBAC points to the service account that is created by default + set: + cluster: test-cluster + rbac.create: true + serviceAccount.create: true + asserts: + - equal: + path: subjects[0].name + value: my-release-nri-metadata-injection-admission + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + cluster: test-cluster + rbac.create: true + serviceAccount.create: false + serviceAccount.name: sa-test + asserts: + - equal: + path: subjects[0].name + value: sa-test + + - it: RBAC points to the service account the user supplies when serviceAccount is disabled + set: + cluster: test-cluster + rbac.create: true + serviceAccount.create: false + asserts: + - equal: + path: subjects[0].name + value: default diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/volume_mounts_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/volume_mounts_test.yaml new file mode 100644 index 0000000000..4a3c1327dd --- /dev/null +++ 
b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/tests/volume_mounts_test.yaml @@ -0,0 +1,30 @@ +suite: check volume mounts is properly set +templates: + - templates/admission-webhooks/job-patch/job-createSecret.yaml + - templates/admission-webhooks/job-patch/job-patchWebhook.yaml +release: + name: release + namespace: ns +tests: + - it: clusterName env is properly set + set: + cluster: my-cluster + jobImage: + volumeMounts: + - name: test-volume + volumePath: /test-volume + volumes: + - name: test-volume-container + emptyDir: {} + + asserts: + - contains: + path: spec.template.spec.containers[0].volumeMounts + content: + name: test-volume + volumePath: /test-volume + - contains: + path: spec.template.spec.volumes + content: + name: test-volume-container + emptyDir: {} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/values.yaml new file mode 100644 index 0000000000..849135c357 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-metadata-injection/values.yaml @@ -0,0 +1,102 @@ +# -- Override the name of the chart +nameOverride: "" +# -- Override the full name of the release +fullnameOverride: "" + +# -- Name of the Kubernetes cluster monitored. Can be configured also with `global.cluster` +cluster: "" + +# -- Image for the New Relic Metadata Injector +# @default -- See `values.yaml` +image: + registry: + repository: newrelic/k8s-metadata-injection + tag: "" # Defaults to chart's appVersion + pullPolicy: IfNotPresent + # -- The secrets that are needed to pull images from a custom registry. 
+ pullSecrets: [] + # - name: regsecret + +# -- Image for creating the needed certificates of this webhook to work +# @default -- See `values.yaml` +jobImage: + registry: # Defaults to registry.k8s.io + repository: ingress-nginx/kube-webhook-certgen + tag: v1.3.0 + pullPolicy: IfNotPresent + # -- The secrets that are needed to pull images from a custom registry. + pullSecrets: [] + # - name: regsecret + + # -- Volume mounts to add to the job; you might want to mount tmp if Pod Security Policies + # enforce a read-only root filesystem. + volumeMounts: [] + # - name: tmp + # mountPath: /tmp + + # -- Volumes to add to the job container + volumes: [] + # - name: tmp + # emptyDir: {} + +rbac: + # rbac.pspEnabled -- Whether the chart should create Pod Security Policy objects. + pspEnabled: false + +replicas: 1 + +# -- Additional labels for chart objects. Can be configured also with `global.labels` +labels: {} +# -- Annotations to be added to all pods created by the integration. +podAnnotations: {} +# -- Additional labels for chart pods. Can be configured also with `global.podLabels` +podLabels: {} + +# -- Resources for the metadata injection pod +# @default -- 100m/30M -/80M +resources: + limits: + memory: 80M + requests: + cpu: 100m + memory: 30M + +# -- Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` +priorityClassName: "" +# -- (bool) Sets pod's hostNetwork. Can be configured also with `global.hostNetwork` +# @default -- false +hostNetwork: +# -- Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` +dnsConfig: {} +# -- Sets security context (at pod level). Can be configured also with `global.podSecurityContext` +podSecurityContext: {} +# -- Sets security context (at container level).
Can be configured also with `global.containerSecurityContext` +containerSecurityContext: {} + +certManager: + # certManager.enabled -- Use cert manager for webhook certs + enabled: false + # -- Sets the root certificate duration. Defaults to 43800h (5 years). + rootCertificateDuration: 43800h + # -- Sets certificate duration. Defaults to 8760h (1 year). + webhookCertificateDuration: 8760h + +# -- Sets pod/node affinities. Can be configured also with `global.affinity` +affinity: {} +# -- Sets pod's node selector. Can be configured also with `global.nodeSelector` +nodeSelector: {} +# -- Sets pod's tolerations to node taints. Can be configured also with `global.tolerations` +tolerations: [] + +# -- Enable the metadata decoration only for pods living in namespaces labeled +# with 'newrelic-metadata-injection=enabled'. +injectOnlyLabeledNamespaces: false + +# -- Use custom tls certificates for the webhook, or let the chart handle it +# automatically. +# Ref: https://docs.newrelic.com/docs/integrations/kubernetes-integration/link-your-applications/link-your-applications-kubernetes#configure-injection +customTLSCertificate: false + +# -- Webhook timeout +# Ref: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#timeouts +timeoutSeconds: 28 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/.helmignore new file mode 100644 index 0000000000..50af031725 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/.helmignore @@ -0,0 +1,22 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/Chart.lock b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/Chart.lock new file mode 100644 index 0000000000..934f6dcbcc --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +digest: sha256:2e1da613fd8a52706bde45af077779c5d69e9e1641bdf5c982eaf6d1ac67a443 +generated: "2024-09-30T16:12:25.200183-07:00" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/Chart.yaml new file mode 100644 index 0000000000..d33f07729a --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/Chart.yaml @@ -0,0 +1,29 @@ +apiVersion: v2 +appVersion: 2.21.4 +dependencies: +- name: common-library + repository: https://helm-charts.newrelic.com + version: 1.3.0 +description: A Helm chart to deploy the New Relic Prometheus OpenMetrics integration +home: https://docs.newrelic.com/docs/infrastructure/prometheus-integrations/install-configure-openmetrics/configure-prometheus-openmetrics-integrations/ +icon: https://newrelic.com/themes/custom/curio/assets/mediakit/new_relic_logo_vertical.svg +keywords: +- prometheus +- newrelic +- monitoring +maintainers: +- name: alvarocabanas + url: https://github.com/alvarocabanas +- name: sigilioso + url: https://github.com/sigilioso +- name: gsanchezgavier + url: https://github.com/gsanchezgavier +- name: kang-makes + url: https://github.com/kang-makes +- name: paologallinaharbur + url: https://github.com/paologallinaharbur +name: nri-prometheus +sources: +- https://github.com/newrelic/nri-prometheus +- 
https://github.com/newrelic/nri-prometheus/tree/main/charts/nri-prometheus +version: 2.1.19 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/README.md new file mode 100644 index 0000000000..0287b2b2a2 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/README.md @@ -0,0 +1,116 @@ +# nri-prometheus + +A Helm chart to deploy the New Relic Prometheus OpenMetrics integration + +**Homepage:** + +# Helm installation + +You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the +[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository: + +```shell +helm repo add nri-prometheus https://newrelic.github.io/nri-prometheus +helm upgrade --install newrelic-prometheus nri-prometheus/nri-prometheus -f your-custom-values.yaml +``` + +## Source Code + +* +* + +## Scraping services and endpoints + +When a service is labeled or annotated with `scrape_enabled_label` (defaults to `prometheus.io/scrape`), +`nri-prometheus` will attempt to hit the service directly, rather than the endpoints behind it. + +This is the default behavior for compatibility reasons, but is known to cause issues if more than one endpoint +is behind the service, as metric queries will be load-balanced as well leading to inaccurate histograms. + +In order to change this behaviour set `scrape_endpoints` to `true` and `scrape_services` to `false`. +This will instruct `nri-prometheus` to scrape the underlying endpoints, as Prometheus server does. + +Existing users that are switching to this behavior should note that, depending on the number of endpoints +behind the services in the cluster the load and the metrics reported by those, data ingestion might see +an increase when flipping this option. 
Resource requirements might also be impacted, again depending on the number of new targets. + +While it is technically possible to set both `scrape_services` and `scrape_endpoints` to true, we do not recommend +doing so, as it will lead to redundant metrics being processed. + +## Values managed globally + +This chart implements [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library), which +means that it honors a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found in the +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). + +## Chart particularities + +### Low data mode +See this snippet from the `values.yaml` file: +```yaml +global: + lowDataMode: false +lowDataMode: false +``` + +To reduce the amount of metrics we send to New Relic, enabling `lowDataMode` will add [these transformations](static/lowdatamodedefaults.yaml): +```yaml +transformations: + - description: "Low data mode defaults" + ignore_metrics: + # Ignore the following metrics. + # These metrics are already collected by the New Relic Kubernetes Integration. + - prefixes: + - kube_ + - container_ + - machine_ + - cadvisor_ +``` + +## Values + +| Key | Type | Default | Description | +|-----|------|---------|-------------| +| affinity | object | `{}` | Sets pod/node affinities. Can be configured also with `global.affinity` | +| cluster | string | `""` | Name of the Kubernetes cluster monitored. Can be configured also with `global.cluster` | +| config | object | See `values.yaml` | Provides your own `config.yaml` for this integration. 
Ref: https://docs.newrelic.com/docs/infrastructure/prometheus-integrations/install-configure-openmetrics/configure-prometheus-openmetrics-integrations/#example-configuration-file | +| containerSecurityContext | object | `{}` | Sets security context (at container level). Can be configured also with `global.containerSecurityContext` | +| customSecretLicenseKey | string | `""` | In case you don't want to have the license key in your values, this allows you to point to the secret key where the license key is located. Can be configured also with `global.customSecretLicenseKey` | +| customSecretName | string | `""` | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there. Can be configured also with `global.customSecretName` | +| dnsConfig | object | `{}` | Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` | +| fedramp.enabled | bool | false | Enables FedRAMP. Can be configured also with `global.fedramp.enabled` | +| fullnameOverride | string | `""` | Override the full name of the release | +| hostNetwork | bool | `false` | Sets pod's hostNetwork. Can be configured also with `global.hostNetwork` | +| image | object | See `values.yaml` | Image for the New Relic Kubernetes integration | +| image.pullSecrets | list | `[]` | The secrets that are needed to pull images from a custom registry. | +| labels | object | `{}` | Additional labels for chart objects. Can be configured also with `global.labels` | +| licenseKey | string | `""` | Sets the license key to use. Can be configured also with `global.licenseKey` | +| lowDataMode | bool | false | Reduces number of metrics sent in order to reduce costs. Can be configured also with `global.lowDataMode` | +| nameOverride | string | `""` | Override the name of the chart | +| nodeSelector | object | `{}` | Sets pod's node selector. 
Can be configured also with `global.nodeSelector` | +| nrStaging | bool | false | Send the metrics to the staging backend. Requires a valid staging license key. Can be configured also with `global.nrStaging` | +| podAnnotations | object | `{}` | Annotations to be added to all pods created by the integration. | +| podLabels | object | `{}` | Additional labels for chart pods. Can be configured also with `global.podLabels` | +| podSecurityContext | object | `{}` | Sets security context (at pod level). Can be configured also with `global.podSecurityContext` | +| priorityClassName | string | `""` | Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` | +| proxy | string | `""` | Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port`. Can be configured also with `global.proxy` | +| rbac.create | bool | `true` | Specifies whether RBAC resources should be created | +| resources | object | `{}` | | +| serviceAccount.annotations | object | `{}` | Add these annotations to the service account we create. Can be configured also with `global.serviceAccount.annotations` | +| serviceAccount.create | bool | `true` | Configures if the service account should be created or not. Can be configured also with `global.serviceAccount.create` | +| serviceAccount.name | string | `nil` | Change the name of the service account. This is honored if you disable the creation of the service account on this chart so you can use your own. Can be configured also with `global.serviceAccount.name` | +| tolerations | list | `[]` | Sets pod's tolerations to node taints. Can be configured also with `global.tolerations` | +| verboseLog | bool | false | Sets the debug logs to this integration or all integrations if it is set globally. 
Can be configured also with `global.verboseLog` | + +## Maintainers + +* [alvarocabanas](https://github.com/alvarocabanas) +* [carlossscastro](https://github.com/carlossscastro) +* [sigilioso](https://github.com/sigilioso) +* [gsanchezgavier](https://github.com/gsanchezgavier) +* [kang-makes](https://github.com/kang-makes) +* [marcsanmi](https://github.com/marcsanmi) +* [paologallinaharbur](https://github.com/paologallinaharbur) +* [roobre](https://github.com/roobre) diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/README.md.gotmpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/README.md.gotmpl new file mode 100644 index 0000000000..5c1da45775 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/README.md.gotmpl @@ -0,0 +1,83 @@ +{{ template "chart.header" . }} +{{ template "chart.deprecationWarning" . }} + +{{ template "chart.description" . }} + +{{ template "chart.homepageLine" . }} + +# Helm installation + +You can install this chart using [`nri-bundle`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle) located in the +[helm-charts repository](https://github.com/newrelic/helm-charts) or directly from this repository by adding this Helm repository: + +```shell +helm repo add nri-prometheus https://newrelic.github.io/nri-prometheus +helm upgrade --install newrelic-prometheus nri-prometheus/nri-prometheus -f your-custom-values.yaml +``` + +{{ template "chart.sourcesSection" . }} + +## Scraping services and endpoints + +When a service is labeled or annotated with `scrape_enabled_label` (defaults to `prometheus.io/scrape`), +`nri-prometheus` will attempt to hit the service directly, rather than the endpoints behind it. + +This is the default behavior for compatibility reasons, but is known to cause issues if more than one endpoint +is behind the service, as metric queries will be load-balanced as well leading to inaccurate histograms. 
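+ +As a sketch of the switch described above (assuming these keys live under the chart's top-level `config` value, which this chart passes through to the integration's `config.yaml`), endpoint scraping would be enabled in your values like this: +```yaml +config: + # Hypothetical override: scrape the endpoints behind each service + # instead of the service itself, as described above. + scrape_services: false + scrape_endpoints: true +``` +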
+ +In order to change this behaviour, set `scrape_endpoints` to `true` and `scrape_services` to `false`. +This will instruct `nri-prometheus` to scrape the underlying endpoints, as the Prometheus server does. + +Existing users that are switching to this behavior should note that, depending on the number of endpoints +behind the services in the cluster, the load, and the metrics they report, data ingestion might see +an increase when flipping this option. Resource requirements might also be impacted, again depending on the number of new targets. + +While it is technically possible to set both `scrape_services` and `scrape_endpoints` to true, we do not recommend +doing so, as it will lead to redundant metrics being processed. + +## Values managed globally + +This chart implements [New Relic's common Helm library](https://github.com/newrelic/helm-charts/tree/master/library/common-library), which +means that it honors a wide range of defaults and globals common to most New Relic Helm charts. + +Options that can be defined globally include `affinity`, `nodeSelector`, `tolerations`, `proxy` and others. The full list can be found in the +[user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md). + +## Chart particularities + +### Low data mode +See this snippet from the `values.yaml` file: +```yaml +global: + lowDataMode: false +lowDataMode: false +``` + +To reduce the amount of metrics we send to New Relic, enabling `lowDataMode` will add [these transformations](static/lowdatamodedefaults.yaml): +```yaml +transformations: + - description: "Low data mode defaults" + ignore_metrics: + # Ignore the following metrics. + # These metrics are already collected by the New Relic Kubernetes Integration. + - prefixes: + - kube_ + - container_ + - machine_ + - cadvisor_ +``` + +{{ template "chart.valuesSection" . 
}} + +{{ if .Maintainers }} +## Maintainers +{{ range .Maintainers }} +{{- if .Name }} +{{- if .Url }} +* [{{ .Name }}]({{ .Url }}) +{{- else }} +* {{ .Name }} +{{- end }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/.helmignore b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/.helmignore new file mode 100644 index 0000000000..0e8a0eb36f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/.helmignore @@ -0,0 +1,23 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*.orig +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/Chart.yaml new file mode 100644 index 0000000000..f2ee5497ee --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/Chart.yaml @@ -0,0 +1,17 @@ +apiVersion: v2 +description: Provides helpers to provide consistency on all the charts +keywords: +- newrelic +- chart-library +maintainers: +- name: juanjjaramillo + url: https://github.com/juanjjaramillo +- name: csongnr + url: https://github.com/csongnr +- name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR +- name: kang-makes + url: https://github.com/kang-makes +name: common-library +type: library +version: 1.3.0 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/DEVELOPERS.md b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/DEVELOPERS.md new file mode 100644 index 
0000000000..7208c673ec --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/DEVELOPERS.md @@ -0,0 +1,747 @@ +# Functions/templates documented for chart writers +Here is some rough documentation separated by the file that contains the function, the function +name and how to use it. We are not covering functions that start with `_` (e.g. +`newrelic.common.license._licenseKey`) because they are used internally by this library for +other helpers. Helm does not have the concept of "public" or "private" functions/templates so +this is a convention of ours. + +## _naming.tpl +These functions are used to name objects. + +### `newrelic.common.naming.name` +This is the same as the idiomatic `CHART-NAME.name` that is created when you use `helm create`. + +It honors `.Values.nameOverride`. + +Usage: +```mustache +{{ include "newrelic.common.naming.name" . }} +``` + +### `newrelic.common.naming.fullname` +This is the same as the idiomatic `CHART-NAME.fullname` that is created when you use `helm create` + +It honors `.Values.fullnameOverride`. + +Usage: +```mustache +{{ include "newrelic.common.naming.fullname" . }} +``` + +### `newrelic.common.naming.chart` +This is the same as the idiomatic `CHART-NAME.chart` that is created when you use `helm create`. + +It is mostly useless for chart writers. It is used internally for templating the labels but there +is no reason to keep it "private". + +Usage: +```mustache +{{ include "newrelic.common.naming.chart" . }} +``` + +### `newrelic.common.naming.truncateToDNS` +This is a useful template that could be used to trim a string to 63 chars and does not end with a dash (`-`). +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). 
+ +Usage: +```mustache +{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-long-enough-to-break-something" }} +{{- $truncatedName := include "newrelic.common.naming.truncateToDNS" $nameToTruncate }} +{{- $truncatedName }} +{{- /* This should print: a-really-really-really-really-REALLY-long-string-that-should-be */ -}} +``` + +### `newrelic.common.naming.truncateToDNSWithSuffix` +This template function is the same as the one above, but instead of receiving a string you should give it a `dict` +with a `name` and a `suffix`. This function will join them with a dash (`-`) and trim the `name` so the +result of `name-suffix` is no more than 63 chars. + +Usage: +```mustache +{{ $nameToTruncate := "a-really-really-really-really-REALLY-long-string-that-should-be-truncated-because-it-is-long-enough-to-break-something" }} +{{- $suffix := "A-NOT-SO-LONG-SUFFIX" }} +{{- $truncatedName := include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" $nameToTruncate "suffix" $suffix) }} +{{- $truncatedName }} +{{- /* This should print: a-really-really-really-really-REALLY-long-A-NOT-SO-LONG-SUFFIX */ -}} +``` + + + +## _labels.tpl +### `newrelic.common.labels`, `newrelic.common.labels.selectorLabels` and `newrelic.common.labels.podLabels` +These functions are used to label objects. They are configured by this `values.yaml`: +```yaml +global: + podLabels: {} # included in all the pods of all the charts that implement this library + labels: {} # included in all the objects of all the charts that implement this library +podLabels: {} # included in all the pods of this chart +labels: {} # included in all the objects of this chart +``` + +Label maps are merged from global to local values. + +Chart writers should use them like this: +```mustache +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + selector: + matchLabels: + {{- include "newrelic.common.labels.selectorLabels" . 
| nindent 6 }} + template: + metadata: + labels: + {{- include "newrelic.common.labels.podLabels" . | nindent 8 }} +``` + +`newrelic.common.labels.podLabels` includes `newrelic.common.labels.selectorLabels` automatically. + + + +## _priority-class-name.tpl +### `newrelic.common.priorityClassName` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + priorityClassName: "" +priorityClassName: "" +``` + +Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this +library to work properly. If in your values a non-falsy `priorityClassName` is found, the global +one is always ignored. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} +``` + + + +## _hostnetwork.tpl +### `newrelic.common.hostNetwork` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + hostNetwork: # Note that this is empty (nil) +hostNetwork: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your +values a `hostNetwork` is defined, the global one is always ignored. + +This function returns "true" or "" (empty string) so it can be used for evaluating conditionals. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.hostNetwork" . }} + hostNetwork: {{ . }} + {{- end }} +``` + +### `newrelic.common.hostNetwork.value` +This function is an abstraction of the one above, but it returns "true" or "false" directly. + +Be careful when using this with an `if`, as Helm evaluates the string "false" as truthy. + +Usage (example in a pod spec): +```mustache +spec: + hostNetwork: {{ include "newrelic.common.hostNetwork.value" . 
}} +``` + + + +## _dnsconfig.tpl +### `newrelic.common.dnsConfig` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + dnsConfig: {} +dnsConfig: {} +``` + +Be careful: chart writers should put an empty string (or any kind of Helm falsiness) for this +library to work properly. If in your values a non-falsy `dnsConfig` is found, the global +one is going to be always ignored. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 4 }} + {{- end }} +``` + + + +## _images.tpl +These functions help us to deal with how images are templated. This allows setting `registries` +where to fetch images globally while being flexible enough to fit in different maps of images +and deployments with one or more images. This is the example of a complex `values.yaml` that +we are going to use during the documentation of these functions: + +```yaml +global: + images: + registry: nexus-3-instance.internal.clients-domain.tld +jobImage: + registry: # defaults to "example.tld" when empty in these examples + repository: ingress-nginx/kube-webhook-certgen + tag: v1.1.1 + pullPolicy: IfNotPresent + pullSecrets: [] +images: + integration: + registry: + repository: newrelic/nri-kube-events + tag: 1.8.0 + pullPolicy: IfNotPresent + agent: + registry: + repository: newrelic/k8s-events-forwarder + tag: 1.22.0 + pullPolicy: IfNotPresent + pullSecrets: [] +``` + +### `newrelic.common.images.image` +This will return a string with the image ready to be downloaded that includes the registry, the image and the tag. +`defaultRegistry` is used to keep `registry` field empty in `values.yaml` so you can override the image using +`global.images.registry`, your local `jobImage.registry` and be able to fallback to a registry that is not `docker.io` +(Or the default repository that the client could have set in the CRI). 
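+ +For instance, with the example `values.yaml` above, rendering the jobImage would look roughly like this (a sketch, assuming the helper's usual local-over-global-over-default registry precedence): +```mustache +{{ include "newrelic.common.images.image" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }} +{{- /* global.images.registry is set and jobImage.registry is empty, so this should render: + nexus-3-instance.internal.clients-domain.tld/ingress-nginx/kube-webhook-certgen:v1.1.1 */ -}} +``` +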
+ +Usage: +```mustache +{{- /* For the integration */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.images.agent "context" .) }} +{{- /* For jobImage */}} +{{ include "newrelic.common.images.image" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }} +``` + +### `newrelic.common.images.registry` +It returns the registry from the global or local values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. + +Usage: +```mustache +{{- /* For the integration */}} +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.images.agent "context" .) }} +{{- /* For jobImage */}} +{{ include "newrelic.common.images.registry" ( dict "defaultRegistry" "example.tld" "imageRoot" .Values.jobImage "context" .) }} +``` + +### `newrelic.common.images.repository` +It returns the image from the values. You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. + +Usage: +```mustache +{{- /* For jobImage */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.jobImage "context" .) }} +{{- /* For the integration */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.repository" ( dict "imageRoot" .Values.images.agent "context" .) }} +``` + +### `newrelic.common.images.tag` +It returns the image's tag from the values. 
You should avoid using this helper to create your image +URL and use `newrelic.common.images.image` instead, but it is there to be used in case it is needed. + +Usage: +```mustache +{{- /* For jobImage */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.jobImage "context" .) }} +{{- /* For the integration */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.integration "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.images.agent "context" .) }} +``` + +### `newrelic.common.images.renderPullSecrets` +It returns a merged map that contains the pull secrets from the global configuration and the local one. + +Usage: +```mustache +{{- /* For jobImage */}} +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.jobImage.pullSecrets "context" .) }} +{{- /* For the integration */}} +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }} +{{- /* For the agent */}} +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" .Values.images.pullSecrets "context" .) }} +``` + + + +## _serviceaccount.tpl +These functions are used to evaluate if the service account should be created, with which name, and which annotations to add to it. + +The functions that the common library has implemented for service accounts are: +* `newrelic.common.serviceAccount.create` +* `newrelic.common.serviceAccount.name` +* `newrelic.common.serviceAccount.annotations` + +Usage: +```mustache +{{- if include "newrelic.common.serviceAccount.create" . -}} +apiVersion: v1 +kind: ServiceAccount +metadata: + {{- with (include "newrelic.common.serviceAccount.annotations" .) }} + annotations: + {{- . | nindent 4 }} + {{- end }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.serviceAccount.name" . 
}} + namespace: {{ .Release.Namespace }} +{{- end }} +``` + + + +## _affinity.tpl, _nodeselector.tpl and _tolerations.tpl +These three files are almost identical and follow the idiomatic pattern of `helm create`. + +Like the other helpers, each function also looks for a global value. +```yaml +global: + affinity: {} + nodeSelector: {} + tolerations: [] +affinity: {} +nodeSelector: {} +tolerations: [] +``` + +The values here are replaced rather than merged. If a value is found at root level, the global one is ignored. + +Usage (example in a pod spec): +```mustache +spec: + {{- with include "newrelic.common.nodeSelector" . }} + nodeSelector: + {{- . | nindent 4 }} + {{- end }} + {{- with include "newrelic.common.affinity" . }} + affinity: + {{- . | nindent 4 }} + {{- end }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . | nindent 4 }} + {{- end }} +``` + + + +## _agent-config.tpl +### `newrelic.common.agentConfig.defaults` +This returns YAML that the agent can use directly as its configuration, including other options from the values file such as verbose mode, +custom attributes, and FedRAMP. + +Usage: +```mustache +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + name: {{ include "newrelic.common.naming.truncateToDNSWithSuffix" (dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "agent-config") }} + namespace: {{ .Release.Namespace }} +data: + newrelic-infra.yml: |- + # This is the configuration file for the infrastructure agent. See: + # https://docs.newrelic.com/docs/infrastructure/install-infrastructure-agent/configuration/infrastructure-agent-configuration-settings/ + {{- include "newrelic.common.agentConfig.defaults" . | nindent 4 }} +``` + + + +## _cluster.tpl +### `newrelic.common.cluster` +Returns the cluster name. + +Usage: +```mustache +{{ include "newrelic.common.cluster" . 
}} +``` + + + +## _custom-attributes.tpl +### `newrelic.common.customAttributes` +Return custom attributes in YAML format. + +Usage: +```mustache +apiVersion: v1 +kind: ConfigMap +metadata: + name: example +data: + custom-attributes.yaml: | + {{- include "newrelic.common.customAttributes" . | nindent 4 }} + custom-attributes.json: | + {{- include "newrelic.common.customAttributes" . | fromYaml | toJson | nindent 4 }} +``` + + + +## _fedramp.tpl +### `newrelic.common.fedramp.enabled` +Returns true if FedRAMP is enabled or an empty string if not. It can be safely used in conditionals as an empty string is a Helm falsiness. + +Usage: +```mustache +{{ include "newrelic.common.fedramp.enabled" . }} +``` + +### `newrelic.common.fedramp.enabled.value` +Returns true if FedRAMP is enabled or false if not. This is to have the value of FedRAMP ready to be templated. + +Usage: +```mustache +{{ include "newrelic.common.fedramp.enabled.value" . }} +``` + + + +## _license.tpl +### `newrelic.common.license.secretName` and ### `newrelic.common.license.secretKeyName` +Returns the secret and key inside the secret where to read the license key. + +The common library will take care of using a user-provided custom secret or creating a secret that contains the license key. + +To create the secret use `newrelic.common.license.secret`. + +Usage: +```mustache +{{- if and (.Values.controlPlane.enabled) (not (include "newrelic.fargate" .)) }} +apiVersion: v1 +kind: Pod +metadata: + name: example +spec: + containers: + - name: agent + env: + - name: "NRIA_LICENSE_KEY" + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.license.secretName" . }} + key: {{ include "newrelic.common.license.secretKeyName" . }} +``` + + + +## _license_secret.tpl +### `newrelic.common.license.secret` +This function templates the secret that is used by agents and integrations with the license Key provided by the user. 
It will +template nothing (empty string) if the user provides a custom pair of secret name and key. + +This template also fails in case the user has not provided any license key or custom secret so no safety checks have to be done +by chart writers. + +You just must have a template with these two lines: +```mustache +{{- /* Common library will take care of creating the secret or not. */ -}} +{{- include "newrelic.common.license.secret" . -}} +``` + + + +## _insights.tpl +### `newrelic.common.insightsKey.secretName` and ### `newrelic.common.insightsKey.secretKeyName` +Returns the secret and key inside the secret where to read the insights key. + +The common library will take care of using a user-provided custom secret or creating a secret that contains the insights key. + +To create the secret use `newrelic.common.insightsKey.secret`. + +Usage: +```mustache +apiVersion: v1 +kind: Pod +metadata: + name: statsd +spec: + containers: + - name: statsd + env: + - name: "INSIGHTS_KEY" + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.insightsKey.secretName" . }} + key: {{ include "newrelic.common.insightsKey.secretKeyName" . }} +``` + + + +## _insights_secret.tpl +### `newrelic.common.insightsKey.secret` +This function templates the secret that is used by agents and integrations with the insights key provided by the user. It will +template nothing (empty string) if the user provides a custom pair of secret name and key. + +This template also fails in case the user has not provided any insights key or custom secret so no safety checks have to be done +by chart writers. + +You just must have a template with these two lines: +```mustache +{{- /* Common library will take care of creating the secret or not. */ -}} +{{- include "newrelic.common.insightsKey.secret" . -}} +``` + + + +## _userkey.tpl +### `newrelic.common.userKey.secretName` and ### `newrelic.common.userKey.secretKeyName` +Returns the secret and key inside the secret where to read a user key. 
+ +The common library will take care of using a user-provided custom secret or creating a secret that contains the user key. + +To create the secret use `newrelic.common.userKey.secret`. + +Usage: +```mustache +apiVersion: v1 +kind: Pod +metadata: + name: statsd +spec: + containers: + - name: statsd + env: + - name: "API_KEY" + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.userKey.secretName" . }} + key: {{ include "newrelic.common.userKey.secretKeyName" . }} +``` + + + +## _userkey_secret.tpl +### `newrelic.common.userKey.secret` +This function templates the secret that is used by agents and integrations with a user key provided by the user. It will +template nothing (empty string) if the user provides a custom pair of secret name and key. + +This template also fails if the user has not provided any API key or custom secret, so no safety checks have to be done +by chart writers. + +You just need a template with these two lines: +```mustache +{{- /* Common library will take care of creating the secret or not. */ -}} +{{- include "newrelic.common.userKey.secret" . -}} +``` + + + +## _region.tpl +### `newrelic.common.region.validate` +Given a string, return a normalized name for the region if valid. + +This function does not need the context of the chart, only the value to be validated. The region returned +honors the region [definition of the newrelic-client-go implementation](https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21) +so (as of 2024/09/14) it returns the region as "US", "EU", "Staging", or "Local". + +In case the region provided does not match these 4, the helper calls `fail` and aborts the templating.
+ +Usage: +```mustache +{{ include "newrelic.common.region.validate" "us" }} +``` + +### `newrelic.common.region` +It reads global and local variables for `region`: +```yaml +global: + region: # Note that this can be empty (nil) or "" (empty string) +region: # Note that this can be empty (nil) or "" (empty string) +``` + +Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your +values a `region` is defined, the global one is always ignored. + +This function is protective: it enforces that users either set the license key as a value in their +`values.yaml` or specify a global or local `region` value. To understand how the `region` value +works, read the documentation of `newrelic.common.region.validate`. + +The function will compute the region as US, EU or Staging based on the license key and the +`nrStaging` toggle. Whichever region is computed from the license/toggle can be overridden by +the `region` value. + +Usage: +```mustache +{{ include "newrelic.common.region" . }} +``` + + + +## _low-data-mode.tpl +### `newrelic.common.lowDataMode` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + lowDataMode: # Note that this is empty (nil) +lowDataMode: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your +values a `lowDataMode` is defined, the global one is always ignored. + +This function returns "true" or "" (empty string) so it can be used for evaluating conditionals. + +Usage: +```mustache +{{ include "newrelic.common.lowDataMode" . }} +``` + + + +## _privileged.tpl +### `newrelic.common.privileged` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + privileged: # Note that this is empty (nil) +privileged: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly.
If in your +values a `privileged` is defined, the global one is always ignored. + +Chart writers can put a `true` directly in the `values.yaml` to override the +default of the common library. + +This function returns "true" or "" (empty string) so it can be used for evaluating conditionals. + +Usage: +```mustache +{{ include "newrelic.common.privileged" . }} +``` + +### `newrelic.common.privileged.value` +Returns true if privileged mode is enabled or false if not. This is to have the value of privileged ready to be templated. + +Usage: +```mustache +{{ include "newrelic.common.privileged.value" . }} +``` + + + +## _proxy.tpl +### `newrelic.common.proxy` +Returns the proxy URL configured by the user. + +Usage: +```mustache +{{ include "newrelic.common.proxy" . }} +``` + + + +## _security-context.tpl +Use these functions to share the security context among all charts. They are useful in clusters that enforce not +running as the root user (like OpenShift) or that have admission webhooks. + +The functions are: +* `newrelic.common.securityContext.container` +* `newrelic.common.securityContext.pod` + +Usage: +```mustache +apiVersion: v1 +kind: Pod +metadata: + name: example +spec: + spec: + {{- with include "newrelic.common.securityContext.pod" . }} + securityContext: + {{- . | nindent 8 }} + {{- end }} + + containers: + - name: example + {{- with include "newrelic.common.securityContext.container" . }} + securityContext: + {{- toYaml . | nindent 12 }} + {{- end }} +``` + + + +## _staging.tpl +### `newrelic.common.nrStaging` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + nrStaging: # Note that this is empty (nil) +nrStaging: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your +values a `nrStaging` is defined, the global one is always ignored.
+ +This function returns "true" or "" (empty string) so it can be used for evaluating conditionals. + +Usage: +```mustache +{{ include "newrelic.common.nrStaging" . }} +``` + +### `newrelic.common.nrStaging.value` +Returns true if staging is enabled or false if not. This is to have the staging value ready to be templated. + +Usage: +```mustache +{{ include "newrelic.common.nrStaging.value" . }} +``` + + + +## _verbose-log.tpl +### `newrelic.common.verboseLog` +Like almost everything in this library, it reads global and local variables: +```yaml +global: + verboseLog: # Note that this is empty (nil) +verboseLog: # Note that this is empty (nil) +``` + +Be careful: chart writers should NOT PUT ANY VALUE for this library to work properly. If in your +values a `verboseLog` is defined, the global one is always ignored. + +Usage: +```mustache +{{ include "newrelic.common.verboseLog" . }} +``` + +### `newrelic.common.verboseLog.valueAsBoolean` +Returns true if verbose is enabled or false if not. This is to have the verbose value ready to be templated as a boolean. + +Usage: +```mustache +{{ include "newrelic.common.verboseLog.valueAsBoolean" . }} +``` + +### `newrelic.common.verboseLog.valueAsInt` +Returns 1 if verbose is enabled or 0 if not. This is to have the verbose value ready to be templated as an integer. + +Usage: +```mustache +{{ include "newrelic.common.verboseLog.valueAsInt" . }} +``` diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/README.md b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/README.md new file mode 100644 index 0000000000..10f08ca677 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/README.md @@ -0,0 +1,106 @@ +# Helm Common library + +The common library is a way to unify the UX through all the Helm charts that implement it.
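+ +As a quick illustration of that unification, here is a hypothetical `values.yaml` fragment for an umbrella release (the chart and key names are examples of the global options this library understands; adjust them to the charts you actually install): + +```yaml +# Set once, consumed by every subchart that implements the common library: +global: + cluster: my-cluster # required cluster name + licenseKey: "xxxx" # placeholder, not a real key + labels: + team: platform # merged into every object's labels + +# A single subchart can still override a global value locally: +newrelic-infrastructure: + verboseLog: true +``` + +How each key is resolved (local wins over global, some maps are merged) is described in the sections below.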
+ +The tooling suite that New Relic offers is huge and growing, and this library allows setting things globally +and locally for a single chart. + +## Documentation for chart writers + +If you are writing a chart that is going to use this library you can check the [developers guide](/library/common-library/DEVELOPERS.md) to see all +the functions/templates that we have implemented, what they do and how to use them. + +## Values managed globally + +We want to have a seamless experience through all the charts so we created this library that tries to standardize the behaviour +of all the charts. Sadly, because of the complexity of all these integrations, not all the charts behave exactly as expected. + +An example is `newrelic-infrastructure`, which ignores `hostNetwork` in the control plane scraper because most users have the +control plane listening on `localhost` on the node. + +For each chart that has a special behavior (or further information on the behavior) there is a "chart particularities" section +in its README.md that explains the expected behavior. + +At the time of writing this, all the charts from `nri-bundle` except `newrelic-logging` and `synthetics-minion` implement this +library and honor global options as described in this document.
+ +Here is a list of global options: + +| Global keys | Local keys | Default | Merged[1](#values-managed-globally-1) | Description | +|-------------|------------|---------|--------------------------------------------------|-------------| +| global.cluster | cluster | `""` | | Name of the Kubernetes cluster monitored | +| global.licenseKey | licenseKey | `""` | | Sets the license key to use | +| global.customSecretName | customSecretName | `""` | | In case you don't want to have the license key in your values, this allows you to point to a user-created secret to get the key from there | +| global.customSecretLicenseKey | customSecretLicenseKey | `""` | | In case you don't want to have the license key in your values, this allows you to specify the key inside that secret where the license key is located | +| global.podLabels | podLabels | `{}` | yes | Additional labels for chart pods | +| global.labels | labels | `{}` | yes | Additional labels for chart objects | +| global.priorityClassName | priorityClassName | `""` | | Sets pod's priorityClassName | +| global.hostNetwork | hostNetwork | `false` | | Sets pod's hostNetwork | +| global.dnsConfig | dnsConfig | `{}` | | Sets pod's dnsConfig | +| global.images.registry | See [Further information](#values-managed-globally-2) | `""` | | Changes the registry where to get the images.
Useful when there is an internal image cache/proxy | +| global.images.pullSecrets | See [Further information](#values-managed-globally-2) | `[]` | yes | Set secrets to be able to fetch images | +| global.podSecurityContext | podSecurityContext | `{}` | | Sets security context (at pod level) | +| global.containerSecurityContext | containerSecurityContext | `{}` | | Sets security context (at container level) | +| global.affinity | affinity | `{}` | | Sets pod/node affinities | +| global.nodeSelector | nodeSelector | `{}` | | Sets pod's node selector | +| global.tolerations | tolerations | `[]` | | Sets pod's tolerations to node taints | +| global.serviceAccount.create | serviceAccount.create | `true` | | Configures if the service account should be created or not | +| global.serviceAccount.name | serviceAccount.name | name of the release | | Change the name of the service account. This is honored if you disable the creation of the service account on this chart so you can use your own. | +| global.serviceAccount.annotations | serviceAccount.annotations | `{}` | yes | Add these annotations to the service account we create | +| global.customAttributes | customAttributes | `{}` | | Adds extra attributes to the cluster and all the metrics emitted to the backend | +| global.fedramp | fedramp | `false` | | Enables FedRAMP | +| global.lowDataMode | lowDataMode | `false` | | Reduces the number of metrics sent in order to reduce costs | +| global.privileged | privileged | Depends on the chart | | Behavior differs in each integration. See [Further information](#values-managed-globally-3) | +| global.proxy | proxy | `""` | | Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port` | +| global.nrStaging | nrStaging | `false` | | Sends the metrics to the staging backend.
Requires a valid staging license key | +| global.verboseLog | verboseLog | `false` | | Enables debug/trace logs for this integration, or for all integrations if set globally | + +### Further information + +#### 1. Merged + +Merged means that the values from global are not replaced by the local ones. Think of this example: +```yaml +global: + labels: + global: global + hostNetwork: true + nodeSelector: + global: global + +labels: + local: local +nodeSelector: + local: local +hostNetwork: false +``` + +These values will template `hostNetwork` to `false`, a map of labels `{ "global": "global", "local": "local" }` and a `nodeSelector` with +`{ "local": "local" }`. + +As Helm by default merges all maps, it could be confusing that we have two behaviors (merging `labels` and replacing `nodeSelector`) +when going from global to local values. This is the rationale behind it: +* `hostNetwork` is templated to `false` because the local value overrides the one defined globally. +* `labels` are merged because the user may want to label all the New Relic pods at once and label other solution pods differently for + clarity's sake. +* `nodeSelector` is not merged like `labels` because merging could make it harder to overwrite/delete a selector that comes from global, given + the logic that Helm follows when merging maps. + + +#### 2. Fine grain registries + +Some charts have only one image while others can have two or more. The local path for the registry can change depending +on the chart itself. + +As this is mostly unique per Helm chart, you should take a look at the chart's values table (or directly at the `values.yaml` file) to see all the +images that you can change. + +This should only be needed if you have an advanced setup that requires enough granularity to force a proxy/cache registry per integration. + + + +#### 3. Privileged mode + +By default, from the common library, the privileged mode is set to false.
But most of the Helm charts require it to be true to fetch more +metrics, so you may see it set to true in some charts. The consequences of privileged mode differ from one chart to another, so each chart that +honors the privileged mode toggle should have a section in its README explaining its behavior with the mode enabled or disabled. diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_affinity.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_affinity.tpl new file mode 100644 index 0000000000..1b2636754e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_affinity.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod affinity */ -}} +{{- define "newrelic.common.affinity" -}} + {{- if .Values.affinity -}} + {{- toYaml .Values.affinity -}} + {{- else if .Values.global -}} + {{- if .Values.global.affinity -}} + {{- toYaml .Values.global.affinity -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_agent-config.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_agent-config.tpl new file mode 100644 index 0000000000..9c32861a02 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_agent-config.tpl @@ -0,0 +1,26 @@ +{{/* +This helper should return the defaults that all agents should have +*/}} +{{- define "newrelic.common.agentConfig.defaults" -}} +{{- if include "newrelic.common.verboseLog" . }} +log: + level: trace +{{- end }} + +{{- if (include "newrelic.common.nrStaging" . ) }} +staging: true +{{- end }} + +{{- with include "newrelic.common.proxy" . }} +proxy: {{ . | quote }} +{{- end }} + +{{- with include "newrelic.common.fedramp.enabled" . }} +fedramp: {{ .
}} +{{- end }} + +{{- with fromYaml ( include "newrelic.common.customAttributes" . ) }} +custom_attributes: + {{- toYaml . | nindent 2 }} +{{- end }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_cluster.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_cluster.tpl new file mode 100644 index 0000000000..0197dd35a3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_cluster.tpl @@ -0,0 +1,15 @@ +{{/* +Return the cluster +*/}} +{{- define "newrelic.common.cluster" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.cluster -}} + {{- .Values.cluster -}} +{{- else if $global.cluster -}} + {{- $global.cluster -}} +{{- else -}} + {{ fail "There is not cluster name definition set neither in `.global.cluster' nor `.cluster' in your values.yaml. Cluster name is required." }} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_custom-attributes.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_custom-attributes.tpl new file mode 100644 index 0000000000..92020719c3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_custom-attributes.tpl @@ -0,0 +1,17 @@ +{{/* +This will render custom attributes as a YAML ready to be templated or be used with `fromYaml`. 
+*/}} +{{- define "newrelic.common.customAttributes" -}} +{{- $customAttributes := dict -}} + +{{- $global := index .Values "global" | default dict -}} +{{- if $global.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes $global.customAttributes -}} +{{- end -}} + +{{- if .Values.customAttributes -}} +{{- $customAttributes = mergeOverwrite $customAttributes .Values.customAttributes -}} +{{- end -}} + +{{- toYaml $customAttributes -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_dnsconfig.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_dnsconfig.tpl new file mode 100644 index 0000000000..d4e40aa8af --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_dnsconfig.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod dnsConfig */ -}} +{{- define "newrelic.common.dnsConfig" -}} + {{- if .Values.dnsConfig -}} + {{- toYaml .Values.dnsConfig -}} + {{- else if .Values.global -}} + {{- if .Values.global.dnsConfig -}} + {{- toYaml .Values.global.dnsConfig -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_fedramp.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_fedramp.tpl new file mode 100644 index 0000000000..9df8d6b5e9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_fedramp.tpl @@ -0,0 +1,25 @@ +{{- /* Defines the fedRAMP flag */ -}} +{{- define "newrelic.common.fedramp.enabled" -}} + {{- if .Values.fedramp -}} + {{- if .Values.fedramp.enabled -}} + {{- .Values.fedramp.enabled -}} + {{- end -}} + {{- else if .Values.global -}} + {{- if .Values.global.fedramp -}} + {{- if .Values.global.fedramp.enabled -}} + {{- .Values.global.fedramp.enabled -}} + {{- end -}} + {{- end -}} + {{- end 
-}} +{{- end -}} + + + +{{- /* Return FedRAMP value directly ready to be templated */ -}} +{{- define "newrelic.common.fedramp.enabled.value" -}} +{{- if include "newrelic.common.fedramp.enabled" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_hostnetwork.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_hostnetwork.tpl new file mode 100644 index 0000000000..4cf017ef7e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_hostnetwork.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the hostNetwork toggle. +This helper allows to override the global `.global.hostNetwork` with the value of `.hostNetwork`. +Returns "true" if `hostNetwork` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.hostNetwork" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exists */ -}} +{{- $global := index .Values "global" | default dict -}} + +{{- /* +`get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs + +We also want only to return when this is true, returning `false` here will template "false" (string) when doing +an `(include "newrelic.common.hostNetwork" .)`, which is not an "empty string" so it is `true` if it is used +as an evaluation somewhere else. +*/ -}} +{{- if get .Values "hostNetwork" | kindIs "bool" -}} + {{- if .Values.hostNetwork -}} + {{- .Values.hostNetwork -}} + {{- end -}} +{{- else if get $global "hostNetwork" | kindIs "bool" -}} + {{- if $global.hostNetwork -}} + {{- $global.hostNetwork -}} + {{- end -}} +{{- end -}} +{{- end -}} + + +{{- /* +Abstraction of the hostNetwork toggle. +This helper abstracts the function "newrelic.common.hostNetwork" to return true or false directly. 
+*/ -}} +{{- define "newrelic.common.hostNetwork.value" -}} +{{- if include "newrelic.common.hostNetwork" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_images.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_images.tpl new file mode 100644 index 0000000000..d4fb432905 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_images.tpl @@ -0,0 +1,94 @@ +{{- /* +Return the proper image name +{{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.image" -}} + {{- $registryName := include "newrelic.common.images.registry" ( dict "imageRoot" .imageRoot "defaultRegistry" .defaultRegistry "context" .context ) -}} + {{- $repositoryName := include "newrelic.common.images.repository" .imageRoot -}} + {{- $tag := include "newrelic.common.images.tag" ( dict "imageRoot" .imageRoot "context" .context) -}} + + {{- if $registryName -}} + {{- printf "%s/%s:%s" $registryName $repositoryName $tag | quote -}} + {{- else -}} + {{- printf "%s:%s" $repositoryName $tag | quote -}} + {{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image registry +{{ include "newrelic.common.images.registry" ( dict "imageRoot" .Values.path.to.the.image "defaultRegistry" "your.private.registry.tld" "context" .) }} +*/ -}} +{{- define "newrelic.common.images.registry" -}} +{{- $globalRegistry := "" -}} +{{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- with .context.Values.global.images.registry -}} + {{- $globalRegistry = . 
-}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- $localRegistry := "" -}} +{{- if .imageRoot.registry -}} + {{- $localRegistry = .imageRoot.registry -}} +{{- end -}} + +{{- $registry := $localRegistry | default $globalRegistry | default .defaultRegistry -}} +{{- if $registry -}} + {{- $registry -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Return the proper image repository +{{ include "newrelic.common.images.repository" .Values.path.to.the.image }} +*/ -}} +{{- define "newrelic.common.images.repository" -}} + {{- .repository -}} +{{- end -}} + + + +{{- /* +Return the proper image tag +{{ include "newrelic.common.images.tag" ( dict "imageRoot" .Values.path.to.the.image "context" .) }} +*/ -}} +{{- define "newrelic.common.images.tag" -}} + {{- .imageRoot.tag | default .context.Chart.AppVersion | toString -}} +{{- end -}} + + + +{{- /* +Return the proper Image Pull Registry Secret Names evaluating values as templates +{{ include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.path.to.the.images.pullSecrets1, .Values.path.to.the.images.pullSecrets2) "context" .) }} +*/ -}} +{{- define "newrelic.common.images.renderPullSecrets" -}} + {{- $flatlist := list }} + + {{- if .context.Values.global -}} + {{- if .context.Values.global.images -}} + {{- if .context.Values.global.images.pullSecrets -}} + {{- range .context.Values.global.images.pullSecrets -}} + {{- $flatlist = append $flatlist . -}} + {{- end -}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- range .pullSecrets -}} + {{- if not (empty .) -}} + {{- range . -}} + {{- $flatlist = append $flatlist . 
-}} + {{- end -}} + {{- end -}} + {{- end -}} + + {{- if $flatlist -}} + {{- toYaml $flatlist -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_insights.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_insights.tpl new file mode 100644 index 0000000000..895c377326 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_insights.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the Insights Key. +*/}} +{{- define "newrelic.common.insightsKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "insightskey" ) -}} +{{- include "newrelic.common.insightsKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +*/}} +{{- define "newrelic.common.insightsKey.secretKeyName" -}} +{{- include "newrelic.common.insightsKey._customSecretKey" . | default "insightsKey" -}} +{{- end -}} + +{{/* +Return local insightsKey if set, global otherwise. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._licenseKey" -}} +{{- if .Values.insightsKey -}} + {{- .Values.insightsKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.insightsKey -}} + {{- .Values.global.insightsKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the Insights Key. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.insightsKey._customSecretName" -}} +{{- if .Values.customInsightsKeySecretName -}} + {{- .Values.customInsightsKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretName -}} + {{- .Values.global.customInsightsKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the Insights Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.insightsKey._customSecretKey" -}} +{{- if .Values.customInsightsKeySecretKey -}} + {{- .Values.customInsightsKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customInsightsKeySecretKey }} + {{- .Values.global.customInsightsKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_insights_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_insights_secret.yaml.tpl new file mode 100644 index 0000000000..556caa6ca6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_insights_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the insights key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.insightsKey.secret" }} +{{- if not (include "newrelic.common.insightsKey._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.insightsKey._licenseKey" .) }} + {{- fail "You must specify a insightsKey or a customInsightsSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.insightsKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.insightsKey.secretKeyName" . 
}}: {{ include "newrelic.common.insightsKey._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_labels.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_labels.tpl new file mode 100644 index 0000000000..b025948285 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_labels.tpl @@ -0,0 +1,54 @@ +{{/* +This will render the labels that should be used in all the manifests used by the helm chart. +*/}} +{{- define "newrelic.common.labels" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- $chart := dict "helm.sh/chart" (include "newrelic.common.naming.chart" . ) -}} +{{- $managedBy := dict "app.kubernetes.io/managed-by" .Release.Service -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $labels := mustMergeOverwrite $chart $managedBy $selectorLabels -}} +{{- if .Chart.AppVersion -}} +{{- $labels = mustMergeOverwrite $labels (dict "app.kubernetes.io/version" .Chart.AppVersion) -}} +{{- end -}} + +{{- $globalUserLabels := $global.labels | default dict -}} +{{- $localUserLabels := .Values.labels | default dict -}} + +{{- $labels = mustMergeOverwrite $labels $globalUserLabels $localUserLabels -}} + +{{- toYaml $labels -}} +{{- end -}} + + + +{{/* +This will render the labels that should be used in deployments/daemonsets template pods as a selector. +*/}} +{{- define "newrelic.common.labels.selectorLabels" -}} +{{- $name := dict "app.kubernetes.io/name" ( include "newrelic.common.naming.name" . 
) -}} +{{- $instance := dict "app.kubernetes.io/instance" .Release.Name -}} + +{{- $selectorLabels := mustMergeOverwrite $name $instance -}} + +{{- toYaml $selectorLabels -}} +{{- end }} + + + +{{/* +Pod labels +*/}} +{{- define "newrelic.common.labels.podLabels" -}} +{{- $selectorLabels := fromYaml (include "newrelic.common.labels.selectorLabels" . ) -}} + +{{- $global := index .Values "global" | default dict -}} +{{- $globalPodLabels := $global.podLabels | default dict }} + +{{- $localPodLabels := .Values.podLabels | default dict }} + +{{- $podLabels := mustMergeOverwrite $selectorLabels $globalPodLabels $localPodLabels -}} + +{{- toYaml $podLabels -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_license.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_license.tpl new file mode 100644 index 0000000000..cb349f6bb6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_license.tpl @@ -0,0 +1,68 @@ +{{/* +Return the name of the secret holding the License Key. +*/}} +{{- define "newrelic.common.license.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "license" ) -}} +{{- include "newrelic.common.license._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +*/}} +{{- define "newrelic.common.license.secretKeyName" -}} +{{- include "newrelic.common.license._customSecretKey" . | default "licenseKey" -}} +{{- end -}} + +{{/* +Return local licenseKey if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.license._licenseKey" -}} +{{- if .Values.licenseKey -}} + {{- .Values.licenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.licenseKey -}} + {{- .Values.global.licenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the License Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretName" -}} +{{- if .Values.customSecretName -}} + {{- .Values.customSecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretName -}} + {{- .Values.global.customSecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the License Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._customSecretKey" -}} +{{- if .Values.customSecretLicenseKey -}} + {{- .Values.customSecretLicenseKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customSecretLicenseKey }} + {{- .Values.global.customSecretLicenseKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the user set a custom secret for the license. +This helper is for internal use. +*/}} +{{- define "newrelic.common.license._usesCustomSecret" -}} +{{- if or (include "newrelic.common.license._customSecretName" .) (include "newrelic.common.license._customSecretKey" .) -}} +true +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_license_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_license_secret.yaml.tpl new file mode 100644 index 0000000000..610a0a3370 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_license_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the license key secret if user has not specified a custom secret. 
+*/}} +{{- define "newrelic.common.license.secret" }} +{{- if not (include "newrelic.common.license._customSecretName" .) }} +{{- /* Fail if licenseKey is empty and required: */ -}} +{{- if not (include "newrelic.common.license._licenseKey" .) }} + {{- fail "You must specify a licenseKey or a customSecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.license.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.license.secretKeyName" . }}: {{ include "newrelic.common.license._licenseKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_low-data-mode.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_low-data-mode.tpl new file mode 100644 index 0000000000..3dd55ef2ff --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_low-data-mode.tpl @@ -0,0 +1,26 @@ +{{- /* +Abstraction of the lowDataMode toggle. +This helper allows to override the global `.global.lowDataMode` with the value of `.lowDataMode`. +Returns "true" if `lowDataMode` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.lowDataMode" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "lowDataMode" | kindIs "bool") -}} + {{- if .Values.lowDataMode -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.lowDataMode" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.lowDataMode -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "lowDataMode" | kindIs "bool" -}} + {{- if $global.lowDataMode -}} + {{- $global.lowDataMode -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_naming.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_naming.tpl new file mode 100644 index 0000000000..19fa92648c --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_naming.tpl @@ -0,0 +1,73 @@ +{{/* +This is a function to be called directly with a string just to truncate it to +63 chars because some Kubernetes name fields are limited to that. +*/}} +{{- define "newrelic.common.naming.truncateToDNS" -}} +{{- . | trunc 63 | trimSuffix "-" }} +{{- end }} + + + +{{- /* +Given a name and a suffix, returns a DNS-valid name which always includes the suffix, truncating the name if needed. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If the suffix is too long it gets truncated, but it always takes precedence over the name, so a 63-char suffix would suppress the name.
+Usage: +{{ include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" "" "suffix" "my-suffix" ) }} +*/ -}} +{{- define "newrelic.common.naming.truncateToDNSWithSuffix" -}} +{{- $suffix := (include "newrelic.common.naming.truncateToDNS" .suffix) -}} +{{- $maxLen := (max (sub 63 (add1 (len $suffix))) 0) -}} {{- /* We prepend "-" to the suffix so an additional character is needed */ -}} + +{{- $newName := .name | trunc ($maxLen | int) | trimSuffix "-" -}} +{{- if $newName -}} +{{- printf "%s-%s" $newName $suffix -}} +{{- else -}} +{{ $suffix }} +{{- end -}} + +{{- end -}} + + + +{{/* +Expand the name of the chart. +Uses the Chart name by default if nameOverride is not set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.name" -}} +{{- $name := .Values.nameOverride | default .Chart.Name -}} +{{- include "newrelic.common.naming.truncateToDNS" $name -}} +{{- end }} + + + +{{/* +Create a default fully qualified app name. +By default the full name will be the chart name alone if the release name already contains it; if not, +it will be concatenated as "release_name-chart_name". This could change if fullnameOverride or +nameOverride are set. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +*/}} +{{- define "newrelic.common.naming.fullname" -}} +{{- $name := include "newrelic.common.naming.name" . -}} + +{{- if .Values.fullnameOverride -}} + {{- $name = .Values.fullnameOverride -}} +{{- else if not (contains $name .Release.Name) -}} + {{- $name = printf "%s-%s" .Release.Name $name -}} +{{- end -}} + +{{- include "newrelic.common.naming.truncateToDNS" $name -}} + +{{- end -}} + + + +{{/* +Create chart name and version as used by the chart label. +This function should not be used for naming objects. Use "common.naming.{name,fullname}" instead.
+*/}} +{{- define "newrelic.common.naming.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_nodeselector.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_nodeselector.tpl new file mode 100644 index 0000000000..d488873412 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_nodeselector.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod nodeSelector */ -}} +{{- define "newrelic.common.nodeSelector" -}} + {{- if .Values.nodeSelector -}} + {{- toYaml .Values.nodeSelector -}} + {{- else if .Values.global -}} + {{- if .Values.global.nodeSelector -}} + {{- toYaml .Values.global.nodeSelector -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_priority-class-name.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_priority-class-name.tpl new file mode 100644 index 0000000000..50182b7343 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_priority-class-name.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the pod priorityClassName */ -}} +{{- define "newrelic.common.priorityClassName" -}} + {{- if .Values.priorityClassName -}} + {{- .Values.priorityClassName -}} + {{- else if .Values.global -}} + {{- if .Values.global.priorityClassName -}} + {{- .Values.global.priorityClassName -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_privileged.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_privileged.tpl new file mode 100644 index 0000000000..f3ae814dd9 --- /dev/null +++ 
b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_privileged.tpl @@ -0,0 +1,28 @@ +{{- /* +This is a helper that returns whether the chart should assume the user is fine deploying privileged pods. +*/ -}} +{{- define "newrelic.common.privileged" -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist. */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if get .Values "privileged" | kindIs "bool" -}} + {{- if .Values.privileged -}} + {{- .Values.privileged -}} + {{- end -}} +{{- else if get $global "privileged" | kindIs "bool" -}} + {{- if $global.privileged -}} + {{- $global.privileged -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* Return "true" or "false" directly, based on the output of "newrelic.common.privileged" */ -}} +{{- define "newrelic.common.privileged.value" -}} +{{- if include "newrelic.common.privileged" .
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_proxy.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_proxy.tpl new file mode 100644 index 0000000000..60f34c7ec1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_proxy.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the proxy */ -}} +{{- define "newrelic.common.proxy" -}} + {{- if .Values.proxy -}} + {{- .Values.proxy -}} + {{- else if .Values.global -}} + {{- if .Values.global.proxy -}} + {{- .Values.global.proxy -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_region.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_region.tpl new file mode 100644 index 0000000000..bdcacf3235 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_region.tpl @@ -0,0 +1,74 @@ +{{/* +Return the region that is being used by the user +*/}} +{{- define "newrelic.common.region" -}} +{{- if and (include "newrelic.common.license._usesCustomSecret" .) (not (include "newrelic.common.region._fromValues" .)) -}} + {{- fail "This Helm Chart is not able to compute the region. You must specify a .global.region or .region if the license is set using a custom secret." -}} +{{- end -}} + +{{- /* Defaults */ -}} +{{- $region := "us" -}} +{{- if include "newrelic.common.nrStaging" . -}} + {{- $region = "staging" -}} +{{- else if include "newrelic.common.region._isEULicenseKey" . -}} + {{- $region = "eu" -}} +{{- end -}} + +{{- include "newrelic.common.region.validate" (include "newrelic.common.region._fromValues" . | default $region ) -}} +{{- end -}} + + + +{{/* +Returns the region from the values if valid. 
This only returns the value from the `values.yaml`. +More intelligence should be used to compute the region. + +Usage: `include "newrelic.common.region.validate" "us"` +*/}} +{{- define "newrelic.common.region.validate" -}} +{{- /* Ref: https://github.com/newrelic/newrelic-client-go/blob/cbe3e4cf2b95fd37095bf2ffdc5d61cffaec17e2/pkg/region/region_constants.go#L8-L21 */ -}} +{{- $region := . | lower -}} +{{- if eq $region "us" -}} + US +{{- else if eq $region "eu" -}} + EU +{{- else if eq $region "staging" -}} + Staging +{{- else if eq $region "local" -}} + Local +{{- else -}} + {{- fail (printf "the region provided is not valid: %s not in \"US\" \"EU\" \"Staging\" \"Local\"" .) -}} +{{- end -}} +{{- end -}} + + + +{{/* +Returns the region from the values. This only returns the value from the `values.yaml`. +More intelligence should be used to compute the region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._fromValues" -}} +{{- if .Values.region -}} + {{- .Values.region -}} +{{- else if .Values.global -}} + {{- if .Values.global.region -}} + {{- .Values.global.region -}} + {{- end -}} +{{- end -}} +{{- end -}} + + + +{{/* +Return empty string (falsehood) or "true" if the license is for EU region. +This helper is for internal use. +*/}} +{{- define "newrelic.common.region._isEULicenseKey" -}} +{{- if not (include "newrelic.common.license._usesCustomSecret" .) -}} + {{- $license := include "newrelic.common.license._licenseKey" .
-}} + {{- if hasPrefix "eu" $license -}} + true + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_security-context.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_security-context.tpl new file mode 100644 index 0000000000..9edfcabfd0 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_security-context.tpl @@ -0,0 +1,23 @@ +{{- /* Defines the container securityContext */ -}} +{{- define "newrelic.common.securityContext.container" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.containerSecurityContext -}} + {{- toYaml .Values.containerSecurityContext -}} +{{- else if $global.containerSecurityContext -}} + {{- toYaml $global.containerSecurityContext -}} +{{- end -}} +{{- end -}} + + + +{{- /* Defines the pod securityContext */ -}} +{{- define "newrelic.common.securityContext.pod" -}} +{{- $global := index .Values "global" | default dict -}} + +{{- if .Values.podSecurityContext -}} + {{- toYaml .Values.podSecurityContext -}} +{{- else if $global.podSecurityContext -}} + {{- toYaml $global.podSecurityContext -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_serviceaccount.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_serviceaccount.tpl new file mode 100644 index 0000000000..2d352f6ea9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_serviceaccount.tpl @@ -0,0 +1,90 @@ +{{- /* Defines if the service account has to be created or not */ -}} +{{- define "newrelic.common.serviceAccount.create" -}} +{{- $valueFound := false -}} + +{{- /* Look for a local setting for service account creation */ -}} +{{- if get .Values "serviceAccount" | kindIs "map"
-}} + {{- if (get .Values.serviceAccount "create" | kindIs "bool") -}} + {{- $valueFound = true -}} + {{- if .Values.serviceAccount.create -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.serviceAccount.create" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.serviceAccount.create -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* Look for a global setting for service account creation */ -}} +{{- if not $valueFound -}} + {{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} + {{- $global := index .Values "global" | default dict -}} + {{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "create" | kindIs "bool" -}} + {{- $valueFound = true -}} + {{- if $global.serviceAccount.create -}} + {{- $global.serviceAccount.create -}} + {{- end -}} + {{- end -}} + {{- end -}} +{{- end -}} + +{{- /* In case no serviceAccount value has been found, default to "true" */ -}} +{{- if not $valueFound -}} +true +{{- end -}} +{{- end -}} + + + +{{- /* Defines the name of the service account */ -}} +{{- define "newrelic.common.serviceAccount.name" -}} +{{- $localServiceAccount := "" -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if (get .Values.serviceAccount "name" | kindIs "string") -}} + {{- $localServiceAccount = .Values.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := "" -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "name" | kindIs "string" -}} + {{- $globalServiceAccount = $global.serviceAccount.name -}} + {{- end -}} +{{- end -}} + +{{- if (include "newrelic.common.serviceAccount.create" .)
-}} + {{- $localServiceAccount | default $globalServiceAccount | default (include "newrelic.common.naming.fullname" .) -}} +{{- else -}} + {{- $localServiceAccount | default $globalServiceAccount | default "default" -}} +{{- end -}} +{{- end -}} + + + +{{- /* Merge the global and local annotations for the service account */ -}} +{{- define "newrelic.common.serviceAccount.annotations" -}} +{{- $localServiceAccount := dict -}} +{{- if get .Values "serviceAccount" | kindIs "map" -}} + {{- if get .Values.serviceAccount "annotations" -}} + {{- $localServiceAccount = .Values.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $globalServiceAccount := dict -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "serviceAccount" | kindIs "map" -}} + {{- if get $global.serviceAccount "annotations" -}} + {{- $globalServiceAccount = $global.serviceAccount.annotations -}} + {{- end -}} +{{- end -}} + +{{- $merged := mustMergeOverwrite $globalServiceAccount $localServiceAccount -}} + +{{- if $merged -}} + {{- toYaml $merged -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_staging.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_staging.tpl new file mode 100644 index 0000000000..bd9ad09bb9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_staging.tpl @@ -0,0 +1,39 @@ +{{- /* +Abstraction of the nrStaging toggle. +This helper allows to override the global `.global.nrStaging` with the value of `.nrStaging`. 
+Returns "true" if `nrStaging` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.nrStaging" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "nrStaging" | kindIs "bool") -}} + {{- if .Values.nrStaging -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.nrStaging" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. + */ -}} + {{- .Values.nrStaging -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "nrStaging" | kindIs "bool" -}} + {{- if $global.nrStaging -}} + {{- $global.nrStaging -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Returns "true" or "false" directly instead of an empty string (Helm falsiness), based on the output of "newrelic.common.nrStaging" +*/ -}} +{{- define "newrelic.common.nrStaging.value" -}} +{{- if include "newrelic.common.nrStaging" .
-}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_tolerations.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_tolerations.tpl new file mode 100644 index 0000000000..e016b38e27 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_tolerations.tpl @@ -0,0 +1,10 @@ +{{- /* Defines the Pod tolerations */ -}} +{{- define "newrelic.common.tolerations" -}} + {{- if .Values.tolerations -}} + {{- toYaml .Values.tolerations -}} + {{- else if .Values.global -}} + {{- if .Values.global.tolerations -}} + {{- toYaml .Values.global.tolerations -}} + {{- end -}} + {{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_userkey.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_userkey.tpl new file mode 100644 index 0000000000..982ea8e09d --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_userkey.tpl @@ -0,0 +1,56 @@ +{{/* +Return the name of the secret holding the API Key. +*/}} +{{- define "newrelic.common.userKey.secretName" -}} +{{- $default := include "newrelic.common.naming.truncateToDNSWithSuffix" ( dict "name" (include "newrelic.common.naming.fullname" .) "suffix" "userkey" ) -}} +{{- include "newrelic.common.userKey._customSecretName" . | default $default -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +*/}} +{{- define "newrelic.common.userKey.secretKeyName" -}} +{{- include "newrelic.common.userKey._customSecretKey" . | default "userKey" -}} +{{- end -}} + +{{/* +Return local API Key if set, global otherwise. +This helper is for internal use. 
+*/}} +{{- define "newrelic.common.userKey._userKey" -}} +{{- if .Values.userKey -}} + {{- .Values.userKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.userKey -}} + {{- .Values.global.userKey -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name of the secret holding the API Key. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretName" -}} +{{- if .Values.customUserKeySecretName -}} + {{- .Values.customUserKeySecretName -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretName -}} + {{- .Values.global.customUserKeySecretName -}} + {{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Return the name key for the API Key inside the secret. +This helper is for internal use. +*/}} +{{- define "newrelic.common.userKey._customSecretKey" -}} +{{- if .Values.customUserKeySecretKey -}} + {{- .Values.customUserKeySecretKey -}} +{{- else if .Values.global -}} + {{- if .Values.global.customUserKeySecretKey }} + {{- .Values.global.customUserKeySecretKey -}} + {{- end -}} +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_userkey_secret.yaml.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_userkey_secret.yaml.tpl new file mode 100644 index 0000000000..b979856548 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_userkey_secret.yaml.tpl @@ -0,0 +1,21 @@ +{{/* +Renders the user key secret if user has not specified a custom secret. +*/}} +{{- define "newrelic.common.userKey.secret" }} +{{- if not (include "newrelic.common.userKey._customSecretName" .) }} +{{- /* Fail if user key is empty and required: */ -}} +{{- if not (include "newrelic.common.userKey._userKey" .) 
}} + {{- fail "You must specify a userKey or a customUserKeySecretName containing it" }} +{{- end }} +--- +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "newrelic.common.userKey.secretName" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +data: + {{ include "newrelic.common.userKey.secretKeyName" . }}: {{ include "newrelic.common.userKey._userKey" . | b64enc }} +{{- end }} +{{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_verbose-log.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_verbose-log.tpl new file mode 100644 index 0000000000..2286d46815 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/templates/_verbose-log.tpl @@ -0,0 +1,54 @@ +{{- /* +Abstraction of the verbose toggle. +This helper allows to override the global `.global.verboseLog` with the value of `.verboseLog`. +Returns "true" if `verbose` is enabled, otherwise "" (empty string) +*/ -}} +{{- define "newrelic.common.verboseLog" -}} +{{- /* `get` will return "" (empty string) if value is not found, and the value otherwise, so we can type-assert with kindIs */ -}} +{{- if (get .Values "verboseLog" | kindIs "bool") -}} + {{- if .Values.verboseLog -}} + {{- /* + We want only to return when this is true, returning `false` here will template "false" (string) when doing + an `(include "newrelic.common.verboseLog" .)`, which is not an "empty string" so it is `true` if it is used + as an evaluation somewhere else. 
+ */ -}} + {{- .Values.verboseLog -}} + {{- end -}} +{{- else -}} +{{- /* This allows us to use `$global` as an empty dict directly in case `Values.global` does not exist */ -}} +{{- $global := index .Values "global" | default dict -}} +{{- if get $global "verboseLog" | kindIs "bool" -}} + {{- if $global.verboseLog -}} + {{- $global.verboseLog -}} + {{- end -}} +{{- end -}} +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return true or false directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsBoolean" -}} +{{- if include "newrelic.common.verboseLog" . -}} +true +{{- else -}} +false +{{- end -}} +{{- end -}} + + + +{{- /* +Abstraction of the verbose toggle. +This helper abstracts the function "newrelic.common.verboseLog" to return 1 or 0 directly. +*/ -}} +{{- define "newrelic.common.verboseLog.valueAsInt" -}} +{{- if include "newrelic.common.verboseLog" . -}} +1 +{{- else -}} +0 +{{- end -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/values.yaml new file mode 100644 index 0000000000..75e2d112ad --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/charts/common-library/values.yaml @@ -0,0 +1 @@ +# values are not needed for the library chart, however this file is still needed for helm lint to work.
diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/ci/test-lowdatamode-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/ci/test-lowdatamode-values.yaml new file mode 100644 index 0000000000..57b307a2d0 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/ci/test-lowdatamode-values.yaml @@ -0,0 +1,9 @@ +global: + licenseKey: 1234567890abcdef1234567890abcdef12345678 + cluster: test-cluster + +lowDataMode: true + +image: + repository: e2e/nri-prometheus + tag: "test" # Defaults to chart's appVersion diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/ci/test-override-global-lowdatamode.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/ci/test-override-global-lowdatamode.yaml new file mode 100644 index 0000000000..7ff1a730fa --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/ci/test-override-global-lowdatamode.yaml @@ -0,0 +1,10 @@ +global: + licenseKey: 1234567890abcdef1234567890abcdef12345678 + cluster: test-cluster + lowDataMode: true + +lowDataMode: false + +image: + repository: e2e/nri-prometheus + tag: "test" # Defaults to chart's appVersion diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/ci/test-values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/ci/test-values.yaml new file mode 100644 index 0000000000..fcd07b2d3e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/ci/test-values.yaml @@ -0,0 +1,104 @@ +global: + licenseKey: 1234567890abcdef1234567890abcdef12345678 + cluster: test-cluster + +lowDataMode: true + +nameOverride: my-custom-name + +image: + registry: + repository: e2e/nri-prometheus + tag: "test" + imagePullPolicy: IfNotPresent + +resources: + limits: + cpu: 200m + memory: 512Mi + requests: + cpu: 100m + memory: 256Mi + +rbac: + create: true + +serviceAccount: + # Specifies whether a ServiceAccount should be created + create: true + # The name of the 
ServiceAccount to use. + # If not set and create is true, a name is generated using the name template + name: "" + # Specify any annotations to add to the ServiceAccount + annotations: + foo: bar + +# If you wish to provide your own config.yaml file include it under config: +# the sample config file is included here as an example. +config: + scrape_duration: "60s" + scrape_timeout: "15s" + + scrape_services: false + scrape_endpoints: true + + audit: false + + insecure_skip_verify: false + + scrape_enabled_label: "prometheus.io/scrape" + + require_scrape_enabled_label_for_nodes: true + + transformations: + - description: "Custom transformation Example" + rename_attributes: + - metric_prefix: "foo_" + attributes: + old_label: "newLabel" + ignore_metrics: + - prefixes: + - bar_ + copy_attributes: + - from_metric: "foo_info" + to_metrics: "foo_" + match_by: + - namespace + +podAnnotations: + custom-pod-annotation: test + +podSecurityContext: + runAsUser: 1000 + runAsGroup: 3000 + fsGroup: 2000 + +containerSecurityContext: + runAsUser: 2000 + +tolerations: + - key: "key1" + operator: "Exists" + effect: "NoSchedule" + +nodeSelector: + kubernetes.io/os: linux + +affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: kubernetes.io/os + operator: In + values: + - linux + +nrStaging: false + +fedramp: + enabled: true + +proxy: + +verboseLog: true diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/static/lowdatamodedefaults.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/static/lowdatamodedefaults.yaml new file mode 100644 index 0000000000..f749e28da4 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/static/lowdatamodedefaults.yaml @@ -0,0 +1,10 @@ +transformations: + - description: "Low data mode defaults" + ignore_metrics: + # Ignore the following metrics. + # These metrics are already collected by the New Relic Kubernetes Integration. 
+ - prefixes: + - kube_ + - container_ + - machine_ + - cadvisor_ diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/_helpers.tpl b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/_helpers.tpl new file mode 100644 index 0000000000..23c072bd72 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/_helpers.tpl @@ -0,0 +1,15 @@ +{{/* vim: set filetype=mustache: */}} + +{{/* +Returns mergeTransformations +Helm can't merge maps of different types. Need to manually create a `transformations` section. +*/}} +{{- define "nri-prometheus.mergeTransformations" -}} + {{/* Remove current `transformations` from config. */}} + {{- omit .Values.config "transformations" | toYaml | nindent 4 -}} + {{/* Create new `transformations` yaml section with merged configs from .Values.config.transformations and lowDataMode. */}} + transformations: + {{- .Values.config.transformations | toYaml | nindent 4 -}} + {{ $lowDataDefault := .Files.Get "static/lowdatamodedefaults.yaml" | fromYaml }} + {{- $lowDataDefault.transformations | toYaml | nindent 4 -}} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/clusterrole.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/clusterrole.yaml new file mode 100644 index 0000000000..ac4734d31e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/clusterrole.yaml @@ -0,0 +1,23 @@ +{{- if .Values.rbac.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . 
| nindent 4 }} +rules: +- apiGroups: [""] + resources: + - "nodes" + - "nodes/metrics" + - "nodes/stats" + - "nodes/proxy" + - "pods" + - "services" + - "endpoints" + verbs: ["get", "list", "watch"] +- nonResourceURLs: + - /metrics + verbs: + - get +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/clusterrolebinding.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/clusterrolebinding.yaml new file mode 100644 index 0000000000..44244653ff --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/clusterrolebinding.yaml @@ -0,0 +1,16 @@ +{{- if .Values.rbac.create }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "newrelic.common.naming.fullname" . }} +subjects: +- kind: ServiceAccount + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/configmap.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/configmap.yaml new file mode 100644 index 0000000000..5daeed64ad --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/configmap.yaml @@ -0,0 +1,21 @@ +kind: ConfigMap +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +apiVersion: v1 +data: + config.yaml: | + cluster_name: {{ include "newrelic.common.cluster" . }} +{{- if .Values.config -}} + {{- if and (.Values.config.transformations) (include "newrelic.common.lowDataMode" .) -}} + {{- include "nri-prometheus.mergeTransformations" . 
-}} + {{- else if (include "newrelic.common.lowDataMode" .) -}} + {{ $lowDataDefault := .Files.Get "static/lowdatamodedefaults.yaml" | fromYaml }} + {{- mergeOverwrite (deepCopy .Values.config) $lowDataDefault | toYaml | nindent 4 -}} + {{- else }} + {{- .Values.config | toYaml | nindent 4 -}} + {{- end -}} +{{- end -}} + diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/deployment.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/deployment.yaml new file mode 100644 index 0000000000..8529b71f40 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/deployment.yaml @@ -0,0 +1,98 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ include "newrelic.common.naming.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} +spec: + replicas: 1 + selector: + matchLabels: + {{- /* We cannot use the common library here because of a legacy issue */}} + {{- /* `selector` is inmutable and the previous chart did not have all the idiomatic labels */}} + app.kubernetes.io/name: {{ include "newrelic.common.naming.name" . }} + template: + metadata: + annotations: + checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }} + {{- with .Values.podAnnotations }} + {{- toYaml . | nindent 8 }} + {{- end }} + labels: + {{- include "newrelic.common.labels.podLabels" . | nindent 8 }} + spec: + serviceAccountName: {{ include "newrelic.common.serviceAccount.name" . }} + {{- with include "newrelic.common.securityContext.pod" . }} + securityContext: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.images.renderPullSecrets" ( dict "pullSecrets" (list .Values.image.pullSecrets) "context" .) }} + imagePullSecrets: + {{- . | nindent 8 }} + {{- end }} + containers: + - name: nri-prometheus + {{- with include "newrelic.common.securityContext.container" . }} + securityContext: + {{- . 
| nindent 10 }} + {{- end }} + image: {{ include "newrelic.common.images.image" ( dict "imageRoot" .Values.image "context" .) }} + imagePullPolicy: {{ .Values.image.pullPolicy }} + args: + - "--configfile=/etc/nri-prometheus/config.yaml" + ports: + - containerPort: 8080 + volumeMounts: + - name: config-volume + mountPath: /etc/nri-prometheus/ + env: + - name: "LICENSE_KEY" + valueFrom: + secretKeyRef: + name: {{ include "newrelic.common.license.secretName" . }} + key: {{ include "newrelic.common.license.secretKeyName" . }} + {{- if (include "newrelic.common.nrStaging" .) }} + - name: "METRIC_API_URL" + value: "https://staging-metric-api.newrelic.com/metric/v1/infra" + {{- else if (include "newrelic.common.fedramp.enabled" .) }} + - name: "METRIC_API_URL" + value: "https://gov-metric-api.newrelic.com/metric/v1" + {{- end }} + {{- with include "newrelic.common.proxy" . }} + - name: EMITTER_PROXY + value: {{ . | quote }} + {{- end }} + {{- with include "newrelic.common.verboseLog" . }} + - name: "VERBOSE" + value: {{ . | quote }} + {{- end }} + - name: "BEARER_TOKEN_FILE" + value: "/var/run/secrets/kubernetes.io/serviceaccount/token" + - name: "CA_FILE" + value: "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" + resources: + {{- toYaml .Values.resources | nindent 10 }} + volumes: + - name: config-volume + configMap: + name: {{ include "newrelic.common.naming.fullname" . }} + {{- with include "newrelic.common.priorityClassName" . }} + priorityClassName: {{ . }} + {{- end }} + {{- with include "newrelic.common.dnsConfig" . }} + dnsConfig: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.nodeSelector" . }} + nodeSelector: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.affinity" . }} + affinity: + {{- . | nindent 8 }} + {{- end }} + {{- with include "newrelic.common.tolerations" . }} + tolerations: + {{- . 
| nindent 8 }} + {{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/secret.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/secret.yaml new file mode 100644 index 0000000000..f558ee86c9 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/secret.yaml @@ -0,0 +1,2 @@ +{{- /* Common library will take care of creating the secret or not. */}} +{{- include "newrelic.common.license.secret" . }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/serviceaccount.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/serviceaccount.yaml new file mode 100644 index 0000000000..df451ec905 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/templates/serviceaccount.yaml @@ -0,0 +1,13 @@ +{{- if (include "newrelic.common.serviceAccount.create" .) }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "newrelic.common.serviceAccount.name" . }} + namespace: {{ .Release.Namespace }} + labels: + {{- include "newrelic.common.labels" . | nindent 4 }} + {{- with (include "newrelic.common.serviceAccount.annotations" .) }} + annotations: + {{- . | nindent 4 }} + {{- end }} +{{- end -}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/tests/configmap_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/tests/configmap_test.yaml new file mode 100644 index 0000000000..ae7d921fe6 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/tests/configmap_test.yaml @@ -0,0 +1,86 @@ +suite: test nri-prometheus configmap +templates: + - templates/configmap.yaml + - templates/deployment.yaml +tests: + - it: creates the config map with default config in values.yaml and cluster_name. 
+ set: + licenseKey: fakeLicense + cluster: my-cluster-name + asserts: + - equal: + path: data["config.yaml"] + value: |- + cluster_name: my-cluster-name + audit: false + insecure_skip_verify: false + require_scrape_enabled_label_for_nodes: true + scrape_enabled_label: prometheus.io/scrape + scrape_endpoints: false + scrape_services: true + transformations: [] + template: templates/configmap.yaml + + - it: creates the config map with lowDataMode. + set: + licenseKey: fakeLicense + cluster: my-cluster-name + lowDataMode: true + asserts: + - equal: + path: data["config.yaml"] + value: |- + cluster_name: my-cluster-name + audit: false + insecure_skip_verify: false + require_scrape_enabled_label_for_nodes: true + scrape_enabled_label: prometheus.io/scrape + scrape_endpoints: false + scrape_services: true + transformations: + - description: Low data mode defaults + ignore_metrics: + - prefixes: + - kube_ + - container_ + - machine_ + - cadvisor_ + template: templates/configmap.yaml + + - it: merges existing transformation with lowDataMode. 
+ set: + licenseKey: fakeLicense + cluster: my-cluster-name + lowDataMode: true + config: + transformations: + - description: Custom transformation Example + rename_attributes: + - metric_prefix: test_ + attributes: + container_name: containerName + asserts: + - equal: + path: data["config.yaml"] + value: |- + cluster_name: my-cluster-name + audit: false + insecure_skip_verify: false + require_scrape_enabled_label_for_nodes: true + scrape_enabled_label: prometheus.io/scrape + scrape_endpoints: false + scrape_services: true + transformations: + - description: Custom transformation Example + rename_attributes: + - attributes: + container_name: containerName + metric_prefix: test_ + - description: Low data mode defaults + ignore_metrics: + - prefixes: + - kube_ + - container_ + - machine_ + - cadvisor_ + template: templates/configmap.yaml diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/tests/deployment_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/tests/deployment_test.yaml new file mode 100644 index 0000000000..cb6f903405 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/tests/deployment_test.yaml @@ -0,0 +1,82 @@ +suite: test deployment +templates: + - templates/deployment.yaml + - templates/configmap.yaml + +release: + name: release + +tests: + - it: adds defaults. + set: + licenseKey: fakeLicense + cluster: test + asserts: + - equal: + path: spec.template.metadata.labels["app.kubernetes.io/instance"] + value: release + template: templates/deployment.yaml + - equal: + path: spec.template.metadata.labels["app.kubernetes.io/name"] + value: nri-prometheus + template: templates/deployment.yaml + - equal: + path: spec.selector.matchLabels + value: + app.kubernetes.io/name: nri-prometheus + template: templates/deployment.yaml + - isNotEmpty: + path: spec.template.metadata.annotations["checksum/config"] + template: templates/deployment.yaml + + - it: adds METRIC_API_URL when nrStaging is true. 
+ set: + licenseKey: fakeLicense + cluster: test + nrStaging: true + asserts: + - contains: + path: spec.template.spec.containers[0].env + content: + name: "METRIC_API_URL" + value: "https://staging-metric-api.newrelic.com/metric/v1/infra" + template: templates/deployment.yaml + + - it: adds FedRamp endpoint when FedRamp is enabled. + set: + licenseKey: fakeLicense + cluster: test + fedramp: + enabled: true + asserts: + - contains: + path: spec.template.spec.containers[0].env + content: + name: "METRIC_API_URL" + value: "https://gov-metric-api.newrelic.com/metric/v1" + template: templates/deployment.yaml + + - it: adds proxy when enabled. + set: + licenseKey: fakeLicense + cluster: test + proxy: "https://my-proxy:9999" + asserts: + - contains: + path: spec.template.spec.containers[0].env + content: + name: "EMITTER_PROXY" + value: "https://my-proxy:9999" + template: templates/deployment.yaml + + - it: set priorityClassName. + set: + licenseKey: fakeLicense + cluster: test + priorityClassName: foo + asserts: + - equal: + path: spec.template.spec.priorityClassName + value: foo + template: templates/deployment.yaml + diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/tests/labels_test.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/tests/labels_test.yaml new file mode 100644 index 0000000000..2b6cb53bb1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/tests/labels_test.yaml @@ -0,0 +1,32 @@ +suite: test object names +templates: + - templates/clusterrole.yaml + - templates/clusterrolebinding.yaml + - templates/configmap.yaml + - templates/deployment.yaml + - templates/secret.yaml + - templates/serviceaccount.yaml + +release: + name: release + revision: + +tests: + - it: adds default labels. 
+ set: + licenseKey: fakeLicense + cluster: test + asserts: + - equal: + path: metadata.labels["app.kubernetes.io/instance"] + value: release + - equal: + path: metadata.labels["app.kubernetes.io/managed-by"] + value: Helm + - equal: + path: metadata.labels["app.kubernetes.io/name"] + value: nri-prometheus + - isNotEmpty: + path: metadata.labels["app.kubernetes.io/version"] + - isNotEmpty: + path: metadata.labels["helm.sh/chart"] diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/values.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/values.yaml new file mode 100644 index 0000000000..4c562cc660 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/nri-prometheus/values.yaml @@ -0,0 +1,251 @@ +# -- Override the name of the chart +nameOverride: "" +# -- Override the full name of the release +fullnameOverride: "" + +# -- Name of the Kubernetes cluster monitored. Can be configured also with `global.cluster` +cluster: "" +# -- Sets the license key to use. Can be configured also with `global.licenseKey` +licenseKey: "" +# -- In case you don't want to have the license key in your values, this allows you to point to a user created secret to get the key from there. Can be configured also with `global.customSecretName` +customSecretName: "" +# -- In case you don't want to have the license key in your values, this allows you to point to the secret key where the license key is located. Can be configured also with `global.customSecretLicenseKey` +customSecretLicenseKey: "" + +# -- Image for the New Relic Kubernetes integration +# @default -- See `values.yaml` +image: + registry: + repository: newrelic/nri-prometheus + tag: "" # Defaults to chart's appVersion + imagePullPolicy: IfNotPresent + # -- The secrets that are needed to pull images from a custom registry.
+ pullSecrets: [] + # - name: regsecret + +resources: {} + # limits: + # cpu: 200m + # memory: 512Mi + # requests: + # cpu: 100m + # memory: 256Mi + +rbac: + # -- Specifies whether RBAC resources should be created + create: true + +serviceAccount: + # -- Add these annotations to the service account we create. Can be configured also with `global.serviceAccount.annotations` + annotations: {} + # -- Configures if the service account should be created or not. Can be configured also with `global.serviceAccount.create` + create: true + # -- Change the name of the service account. This is honored if you disable the creation of the service account on this chart so you can use your own. Can be configured also with `global.serviceAccount.name` + name: + +# -- Annotations to be added to all pods created by the integration. +podAnnotations: {} +# -- Additional labels for chart pods. Can be configured also with `global.podLabels` +podLabels: {} +# -- Additional labels for chart objects. Can be configured also with `global.labels` +labels: {} + +# -- Sets pod's priorityClassName. Can be configured also with `global.priorityClassName` +priorityClassName: "" +# -- (bool) Sets pod's hostNetwork. Can be configured also with `global.hostNetwork` +# @default -- `false` +hostNetwork: +# -- Sets pod's dnsConfig. Can be configured also with `global.dnsConfig` +dnsConfig: {} + +# -- Sets security context (at pod level). Can be configured also with `global.podSecurityContext` +podSecurityContext: {} +# -- Sets security context (at container level). Can be configured also with `global.containerSecurityContext` +containerSecurityContext: {} + +# -- Sets pod/node affinities. Can be configured also with `global.affinity` +affinity: {} +# -- Sets pod's node selector. Can be configured also with `global.nodeSelector` +nodeSelector: {} +# -- Sets pod's tolerations to node taints.
Can be configured also with `global.tolerations` +tolerations: [] + + +# -- Provides your own `config.yaml` for this integration. +# Ref: https://docs.newrelic.com/docs/infrastructure/prometheus-integrations/install-configure-openmetrics/configure-prometheus-openmetrics-integrations/#example-configuration-file +# @default -- See `values.yaml` +config: + # How often the integration should run. + # Default: "30s" + # scrape_duration: "30s" + + # The HTTP client timeout when fetching data from targets. + # Default: "5s" + # scrape_timeout: "5s" + + # scrape_services enables scraping the service itself rather than the endpoints behind it. + # When endpoints are scraped, this is no longer needed. + scrape_services: true + + # scrape_endpoints enables scraping endpoints directly instead of services, as Prometheus natively does. + # Note that depending on the number of endpoints behind a service, the load can increase considerably. + scrape_endpoints: false + + # How old must the entries used for calculating the counters delta be + # before the telemetry emitter expires them. + # Default: "5m" + # telemetry_emitter_delta_expiration_age: "5m" + + # How often must the telemetry emitter check for expired delta entries. + # Default: "5m" + # telemetry_emitter_delta_expiration_check_interval: "5m" + + # Whether the integration should run in audit mode or not. Defaults to false. + # Audit mode logs the uncompressed data sent to New Relic. Use this to log all data sent. + # It does not include verbose mode. This can lead to a high log volume, use with care. + # Default: false + audit: false + + # Whether the integration should skip TLS verification or not. + # Default: false + insecure_skip_verify: false + + # The label used to identify scrapeable targets. + # Targets can be identified using a label or annotation. + # Default: "prometheus.io/scrape" + scrape_enabled_label: "prometheus.io/scrape" + + # Whether k8s nodes need to be labelled to be scraped or not.
+ # Default: true + require_scrape_enabled_label_for_nodes: true + + # Number of worker threads used for scraping targets. + # For large clusters with many (>400) targets, slowly increase until scrape + # time falls below the desired `scrape_duration`. + # Increasing this value too much will result in huge memory consumption if too + # many metrics are being scraped. + # Default: 4 + # worker_threads: 4 + + # Maximum number of metrics to keep in memory until a report is triggered. + # Changing this value is not recommended unless instructed by the New Relic support team. + # max_stored_metrics: 10000 + + # Minimum amount of time to wait between reports. Cannot be set lower than the default, 200ms. + # Changing this value is not recommended unless instructed by the New Relic support team. + # min_emitter_harvest_period: 200ms + + # targets: + # - description: Secure etcd example + # urls: ["https://192.168.3.1:2379", "https://192.168.3.2:2379", "https://192.168.3.3:2379"] + # If true the Kubernetes Service Account token will be included as a Bearer token in the HTTP request. + # use_bearer: false + # tls_config: + # ca_file_path: "/etc/etcd/etcd-client-ca.crt" + # cert_file_path: "/etc/etcd/etcd-client.crt" + # key_file_path: "/etc/etcd/etcd-client.key" + + # Certificate to add to the root CA that the emitter will use when + # verifying server certificates. + # If left empty, TLS uses the host's root CA set. + # emitter_ca_file: "/path/to/cert/server.pem" + + # Set to true in order to stop autodiscovery in the k8s cluster. It can be useful when running the Pod with a service account + # having limited privileges. + # Default: false + # disable_autodiscovery: false + + # Whether the emitter should skip TLS verification when submitting data.
+ # Default: false + # emitter_insecure_skip_verify: false + + # Histogram support is based on New Relic's guidelines for higher + # level metrics abstractions https://github.com/newrelic/newrelic-exporter-specs/blob/master/Guidelines.md. + # To better support visualization of this data, percentiles are calculated + # based on the histogram metrics and sent to New Relic. + # By default, the following percentiles are calculated: 50, 95 and 99. + # + # percentiles: + # - 50 + # - 95 + # - 99 + + transformations: [] + # - description: "Custom transformation Example" + # rename_attributes: + # - metric_prefix: "" + # attributes: + # container_name: "containerName" + # pod_name: "podName" + # namespace: "namespaceName" + # node: "nodeName" + # container: "containerName" + # pod: "podName" + # deployment: "deploymentName" + # ignore_metrics: + # # Ignore the following metrics. + # # These metrics are already collected by the New Relic Kubernetes Integration. + # - prefixes: + # - kube_daemonset_ + # - kube_deployment_ + # - kube_endpoint_ + # - kube_namespace_ + # - kube_node_ + # - kube_persistentvolume_ + # - kube_pod_ + # - kube_replicaset_ + # - kube_service_ + # - kube_statefulset_ + # copy_attributes: + # # Copy all the labels from the timeseries with metric name + # # `kube_hpa_labels` into every timeseries with a metric name that + # # starts with `kube_hpa_` only if they share the same `namespace` + # # and `hpa` labels. 
+ # - from_metric: "kube_hpa_labels" + # to_metrics: "kube_hpa_" + # match_by: + # - namespace + # - hpa + # - from_metric: "kube_daemonset_labels" + # to_metrics: "kube_daemonset_" + # match_by: + # - namespace + # - daemonset + # - from_metric: "kube_statefulset_labels" + # to_metrics: "kube_statefulset_" + # match_by: + # - namespace + # - statefulset + # - from_metric: "kube_endpoint_labels" + # to_metrics: "kube_endpoint_" + # match_by: + # - namespace + # - endpoint + # - from_metric: "kube_service_labels" + # to_metrics: "kube_service_" + # match_by: + # - namespace + # - service + # - from_metric: "kube_node_labels" + # to_metrics: "kube_node_" + # match_by: + # - namespace + # - node + +# -- (bool) Reduces number of metrics sent in order to reduce costs. Can be configured also with `global.lowDataMode` +# @default -- false +lowDataMode: + +# -- Configures the integration to send all HTTP/HTTPS request through the proxy in that URL. The URL should have a standard format like `https://user:password@hostname:port`. Can be configured also with `global.proxy` +proxy: "" + +# -- (bool) Send the metrics to the staging backend. Requires a valid staging license key. Can be configured also with `global.nrStaging` +# @default -- false +nrStaging: +fedramp: + # fedramp.enabled -- (bool) Enables FedRAMP. Can be configured also with `global.fedramp.enabled` + # @default -- false + enabled: +# -- (bool) Sets the debug logs to this integration or all integrations if it is set globally. 
Can be configured also with `global.verboseLog` +# @default -- false +verboseLog: diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/Chart.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/Chart.yaml new file mode 100644 index 0000000000..a0ce0a388f --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/Chart.yaml @@ -0,0 +1,4 @@ +apiVersion: v2 +name: pixie-operator-chart +type: application +version: 0.1.6 diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/crds/olm_crd.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/crds/olm_crd.yaml new file mode 100644 index 0000000000..3f5429f789 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/crds/olm_crd.yaml @@ -0,0 +1,9045 @@ +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.9.0 + creationTimestamp: null + name: catalogsources.operators.coreos.com +spec: + group: operators.coreos.com + names: + categories: + - olm + kind: CatalogSource + listKind: CatalogSourceList + plural: catalogsources + shortNames: + - catsrc + singular: catalogsource + scope: Namespaced + versions: + - additionalPrinterColumns: + - description: The pretty name of the catalog + jsonPath: .spec.displayName + name: Display + type: string + - description: The type of the catalog + jsonPath: .spec.sourceType + name: Type + type: string + - description: The publisher of the catalog + jsonPath: .spec.publisher + name: Publisher + type: string + - jsonPath: .metadata.creationTimestamp + name: Age + type: date + name: v1alpha1 + schema: + openAPIV3Schema: + description: CatalogSource is a repository of CSVs, CRDs, and operator packages. + type: object + required: + - metadata + - spec + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation of an object. 
Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + type: object + required: + - sourceType + properties: + address: + description: 'Address is a host that OLM can use to connect to a pre-existing registry. Format: : Only used when SourceType = SourceTypeGrpc. Ignored when the Image field is set.' + type: string + configMap: + description: ConfigMap is the name of the ConfigMap to be used to back a configmap-server registry. Only used when SourceType = SourceTypeConfigmap or SourceTypeInternal. + type: string + description: + type: string + displayName: + description: Metadata + type: string + grpcPodConfig: + description: GrpcPodConfig exposes different overrides for the pod spec of the CatalogSource Pod. Only used when SourceType = SourceTypeGrpc and Image is set. + type: object + properties: + affinity: + description: Affinity is the catalog source's pod's affinity. + type: object + properties: + nodeAffinity: + description: Describes node affinity scheduling rules for the pod. + type: object + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. + type: array + items: + description: An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + type: object + required: + - preference + - weight + properties: + preference: + description: A node selector term, associated with the corresponding weight. + type: object + properties: + matchExpressions: + description: A list of node selector requirements by node's labels. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. + type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchFields: + description: A list of node selector requirements by node's fields. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
+ type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. + type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. + type: array + items: + type: string + weight: + description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. + type: integer + format: int32 + requiredDuringSchedulingIgnoredDuringExecution: + description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. + type: object + required: + - nodeSelectorTerms + properties: + nodeSelectorTerms: + description: Required. A list of node selector terms. The terms are ORed. + type: array + items: + description: A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + type: object + properties: + matchExpressions: + description: A list of node selector requirements by node's labels. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
+ type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. + type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchFields: + description: A list of node selector requirements by node's fields. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. + type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. + type: array + items: + type: string + podAffinity: + description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). 
+ type: object + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. + type: array + items: + description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) + type: object + required: + - podAffinityTerm + - weight + properties: + podAffinityTerm: + description: Required. A pod affinity term, associated with the corresponding weight. + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. 
If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". + type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + weight: + description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100. + type: integer + format: int32 + requiredDuringSchedulingIgnoredDuringExecution: + description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. 
+ type: array + items: + description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to.
The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
+ type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + podAntiAffinity: + description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). + type: object + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. + type: array + items: + description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) + type: object + required: + - podAffinityTerm + - weight + properties: + podAffinityTerm: + description: Required. A pod affinity term, associated with the corresponding weight. + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 
+ type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. 
+ type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". + type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + weight: + description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100. + type: integer + format: int32 + requiredDuringSchedulingIgnoredDuringExecution: + description: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. + type: array + items: + description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key <topologyKey> matches that of any node on which a pod of the set of pods is running + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs.
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". + type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + extractContent: + description: ExtractContent configures the gRPC catalog Pod to extract catalog metadata from the provided index image and use a well-known version of the `opm` server to expose it. The catalog index image that this CatalogSource is configured to use *must* be using the file-based catalogs in order to utilize this feature. + type: object + required: + - cacheDir + - catalogDir + properties: + cacheDir: + description: CacheDir is the directory storing the pre-calculated API cache. + type: string + catalogDir: + description: CatalogDir is the directory storing the file-based catalog contents. + type: string + memoryTarget: + description: "MemoryTarget configures the $GOMEMLIMIT value for the gRPC catalog Pod. This is a soft memory limit for the server, which the runtime will attempt to meet but makes no guarantees that it will do so. If this value is set, the Pod will have the following modifications made to the container running the server: - the $GOMEMLIMIT environment variable will be set to this value in bytes - the memory request will be set to this value \n This field should be set if it's desired to reduce the footprint of a catalog server as much as possible, or if a catalog being served is very large and needs more than the default allocation. 
If your index image has a file-system cache, determine a good approximation for this value by doubling the size of the package cache at /tmp/cache/cache/packages.json in the index image. \n This field is best-effort; if unset, no default will be used and no Pod memory limit or $GOMEMLIMIT value will be set." + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + nodeSelector: + description: NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node's labels for the pod to be scheduled on that node. + type: object + additionalProperties: + type: string + priorityClassName: + description: If specified, indicates the pod's priority. If not specified, the pod priority will be default or zero if there is no default. + type: string + securityContextConfig: + description: "SecurityContextConfig can be one of `legacy` or `restricted`. The CatalogSource's pod is either injected with the right pod.spec.securityContext and pod.spec.container[*].securityContext values to allow the pod to run in Pod Security Admission (PSA) `restricted` mode, or doesn't set these values at all, in which case the pod can only be run in PSA `baseline` or `privileged` namespaces. Currently if the SecurityContextConfig is unspecified, the default value of `legacy` is used. Specifying a value other than `legacy` or `restricted` results in a validation error. When using older catalog images, which could not be run in `restricted` mode, the SecurityContextConfig should be set to `legacy`. \n In a future version, the default will be set to `restricted`; catalog maintainers should rebuild their catalogs with a version of opm that supports running catalogSource pods in `restricted` mode to prepare for these changes.
\n More information about PSA can be found here: https://kubernetes.io/docs/concepts/security/pod-security-admission/" + type: string + default: legacy + enum: + - legacy + - restricted + tolerations: + description: Tolerations are the catalog source's pod's tolerations. + type: array + items: + description: The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. + type: object + properties: + effect: + description: Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. + type: string + tolerationSeconds: + description: TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. + type: integer + format: int64 + value: + description: Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + icon: + type: object + required: + - base64data + - mediatype + properties: + base64data: + type: string + mediatype: + type: string + image: + description: Image is an operator-registry container image to instantiate a registry-server with. Only used when SourceType = SourceTypeGrpc.
If present, the address field is ignored. + type: string + priority: + description: 'Priority field assigns a weight to the catalog source to prioritize them so that it can be consumed by the dependency resolver. Usage: Higher weight indicates that this catalog source is preferred over lower weighted catalog sources during dependency resolution. The range of the priority value can go from positive to negative in the range of int32. The default value to a catalog source with unassigned priority would be 0. The catalog source with the same priority values will be ranked lexicographically based on its name.' + type: integer + publisher: + type: string + secrets: + description: Secrets represent set of secrets that can be used to access the contents of the catalog. It is best to keep this list small, since each will need to be tried for every catalog entry. + type: array + items: + type: string + sourceType: + description: SourceType is the type of source + type: string + updateStrategy: + description: UpdateStrategy defines how updated catalog source images can be discovered Consists of an interval that defines polling duration and an embedded strategy type + type: object + properties: + registryPoll: + type: object + properties: + interval: + description: Interval is used to determine the time interval between checks of the latest catalog source version. The catalog operator polls to see if a new version of the catalog source is available. If available, the latest image is pulled and gRPC traffic is directed to the latest catalog source. + type: string + status: + type: object + properties: + conditions: + description: Represents the state of a CatalogSource. Note that Message and Reason represent the original status information, which may be migrated to be conditions based in the future. Any new features introduced will use conditions. + type: array + items: + description: "Condition contains details for one aspect of the current state of this API Resource. 
--- This struct is intended for direct use as an array at the field path .status.conditions. For example, \n type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }" + type: object + required: + - lastTransitionTime + - message + - reason + - status + - type + properties: + lastTransitionTime: + description: lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + type: string + format: date-time + message: + description: message is a human readable message indicating details about the transition. This may be an empty string. + type: string + maxLength: 32768 + observedGeneration: + description: observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. + type: integer + format: int64 + minimum: 0 + reason: + description: reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. + type: string + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + status: + description: status of the condition, one of True, False, Unknown. 
+ type: string + enum: + - "True" + - "False" + - Unknown + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) + type: string + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + x-kubernetes-list-map-keys: + - type + x-kubernetes-list-type: map + configMapReference: + type: object + required: + - name + - namespace + properties: + lastUpdateTime: + type: string + format: date-time + name: + type: string + namespace: + type: string + resourceVersion: + type: string + uid: + description: UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. + type: string + connectionState: + type: object + required: + - lastObservedState + properties: + address: + type: string + lastConnect: + type: string + format: date-time + lastObservedState: + type: string + latestImageRegistryPoll: + description: The last time the CatalogSource image registry has been polled to ensure the image is up-to-date + type: string + format: date-time + message: + description: A human readable message indicating details about why the CatalogSource is in this condition. + type: string + reason: + description: Reason is the reason the CatalogSource was transitioned to its current state. 
+ type: string + registryService: + type: object + properties: + createdAt: + type: string + format: date-time + port: + type: string + protocol: + type: string + serviceName: + type: string + serviceNamespace: + type: string + served: true + storage: true + subresources: + status: {} +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.9.0 + creationTimestamp: null + name: clusterserviceversions.operators.coreos.com +spec: + group: operators.coreos.com + names: + categories: + - olm + kind: ClusterServiceVersion + listKind: ClusterServiceVersionList + plural: clusterserviceversions + shortNames: + - csv + - csvs + singular: clusterserviceversion + scope: Namespaced + versions: + - additionalPrinterColumns: + - description: The name of the CSV + jsonPath: .spec.displayName + name: Display + type: string + - description: The version of the CSV + jsonPath: .spec.version + name: Version + type: string + - description: The name of a CSV that this one replaces + jsonPath: .spec.replaces + name: Replaces + type: string + - jsonPath: .status.phase + name: Phase + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + description: ClusterServiceVersion is a Custom Resource of type `ClusterServiceVersionSpec`. + type: object + required: + - metadata + - spec + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: ClusterServiceVersionSpec declarations tell OLM how to install an operator that can manage apps for a given version. + type: object + required: + - displayName + - install + properties: + annotations: + description: Annotations is an unstructured key value map stored with a resource that may be set by external tools to store and retrieve arbitrary metadata. + type: object + additionalProperties: + type: string + apiservicedefinitions: + description: APIServiceDefinitions declares all of the extension apis managed or required by an operator being ran by ClusterServiceVersion. + type: object + properties: + owned: + type: array + items: + description: APIServiceDescription provides details to OLM about apis provided via aggregation + type: object + required: + - group + - kind + - name + - version + properties: + actionDescriptors: + type: array + items: + description: ActionDescriptor describes a declarative action that can be performed on a custom resource instance + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. + type: string + format: byte + x-descriptors: + type: array + items: + type: string + containerPort: + type: integer + format: int32 + deploymentName: + type: string + description: + type: string + displayName: + type: string + group: + type: string + kind: + type: string + name: + type: string + resources: + type: array + items: + description: APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. 
+ type: object + required: + - kind + - name + - version + properties: + kind: + description: Kind of the referenced resource type. + type: string + name: + description: Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. + type: string + version: + description: API Version of the referenced resource type. + type: string + specDescriptors: + type: array + items: + description: SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. + type: string + format: byte + x-descriptors: + type: array + items: + type: string + statusDescriptors: + type: array + items: + description: StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. 
+ type: string + format: byte + x-descriptors: + type: array + items: + type: string + version: + type: string + required: + type: array + items: + description: APIServiceDescription provides details to OLM about apis provided via aggregation + type: object + required: + - group + - kind + - name + - version + properties: + actionDescriptors: + type: array + items: + description: ActionDescriptor describes a declarative action that can be performed on a custom resource instance + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. + type: string + format: byte + x-descriptors: + type: array + items: + type: string + containerPort: + type: integer + format: int32 + deploymentName: + type: string + description: + type: string + displayName: + type: string + group: + type: string + kind: + type: string + name: + type: string + resources: + type: array + items: + description: APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. + type: object + required: + - kind + - name + - version + properties: + kind: + description: Kind of the referenced resource type. + type: string + name: + description: Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. + type: string + version: + description: API Version of the referenced resource type. 
+ type: string + specDescriptors: + type: array + items: + description: SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. + type: string + format: byte + x-descriptors: + type: array + items: + type: string + statusDescriptors: + type: array + items: + description: StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. + type: string + format: byte + x-descriptors: + type: array + items: + type: string + version: + type: string + cleanup: + description: Cleanup specifies the cleanup behaviour when the CSV gets deleted + type: object + required: + - enabled + properties: + enabled: + type: boolean + customresourcedefinitions: + description: "CustomResourceDefinitions declares all of the CRDs managed or required by an operator being ran by ClusterServiceVersion. \n If the CRD is present in the Owned list, it is implicitly required." 
+ type: object + properties: + owned: + type: array + items: + description: CRDDescription provides details to OLM about the CRDs + type: object + required: + - kind + - name + - version + properties: + actionDescriptors: + type: array + items: + description: ActionDescriptor describes a declarative action that can be performed on a custom resource instance + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. + type: string + format: byte + x-descriptors: + type: array + items: + type: string + description: + type: string + displayName: + type: string + kind: + type: string + name: + type: string + resources: + type: array + items: + description: APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. + type: object + required: + - kind + - name + - version + properties: + kind: + description: Kind of the referenced resource type. + type: string + name: + description: Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. + type: string + version: + description: API Version of the referenced resource type. + type: string + specDescriptors: + type: array + items: + description: SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. 
+ type: string + format: byte + x-descriptors: + type: array + items: + type: string + statusDescriptors: + type: array + items: + description: StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. + type: string + format: byte + x-descriptors: + type: array + items: + type: string + version: + type: string + required: + type: array + items: + description: CRDDescription provides details to OLM about the CRDs + type: object + required: + - kind + - name + - version + properties: + actionDescriptors: + type: array + items: + description: ActionDescriptor describes a declarative action that can be performed on a custom resource instance + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. + type: string + format: byte + x-descriptors: + type: array + items: + type: string + description: + type: string + displayName: + type: string + kind: + type: string + name: + type: string + resources: + type: array + items: + description: APIResourceReference is a reference to a Kubernetes resource type that the referrer utilizes. + type: object + required: + - kind + - name + - version + properties: + kind: + description: Kind of the referenced resource type. + type: string + name: + description: Plural name of the referenced resource type (CustomResourceDefinition.Spec.Names[].Plural). Empty string if the referenced resource type is not a custom resource. 
+ type: string + version: + description: API Version of the referenced resource type. + type: string + specDescriptors: + type: array + items: + description: SpecDescriptor describes a field in a spec block of a CRD so that OLM can consume it + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. + type: string + format: byte + x-descriptors: + type: array + items: + type: string + statusDescriptors: + type: array + items: + description: StatusDescriptor describes a field in a status block of a CRD so that OLM can consume it + type: object + required: + - path + properties: + description: + type: string + displayName: + type: string + path: + type: string + value: + description: RawMessage is a raw encoded JSON value. It implements Marshaler and Unmarshaler and can be used to delay JSON decoding or precompute a JSON encoding. + type: string + format: byte + x-descriptors: + type: array + items: + type: string + version: + type: string + description: + description: Description of the operator. Can include the features, limitations or use-cases of the operator. + type: string + displayName: + description: The name of the operator in display format. + type: string + icon: + description: The icon for this operator. + type: array + items: + type: object + required: + - base64data + - mediatype + properties: + base64data: + type: string + mediatype: + type: string + install: + description: NamedInstallStrategy represents the block of an ClusterServiceVersion resource where the install strategy is specified. + type: object + required: + - strategy + properties: + spec: + description: StrategyDetailsDeployment represents the parsed details of a Deployment InstallStrategy. 
+ type: object + required: + - deployments + properties: + clusterPermissions: + type: array + items: + description: StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy + type: object + required: + - rules + - serviceAccountName + properties: + rules: + type: array + items: + description: PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. + type: object + required: + - verbs + properties: + apiGroups: + description: APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. + type: array + items: + type: string + nonResourceURLs: + description: NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. + type: array + items: + type: string + resourceNames: + description: ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. + type: array + items: + type: string + resources: + description: Resources is a list of resources this rule applies to. '*' represents all resources. + type: array + items: + type: string + verbs: + description: Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. '*' represents all verbs. 
+ type: array + items: + type: string + serviceAccountName: + type: string + deployments: + type: array + items: + description: StrategyDeploymentSpec contains the name, spec and labels for the deployment ALM should create + type: object + required: + - name + - spec + properties: + label: + description: Set is a map of label:value. It implements Labels. + type: object + additionalProperties: + type: string + name: + type: string + spec: + description: DeploymentSpec is the specification of the desired behavior of the Deployment. + type: object + required: + - selector + - template + properties: + minReadySeconds: + description: Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) + type: integer + format: int32 + paused: + description: Indicates that the deployment is paused. + type: boolean + progressDeadlineSeconds: + description: The maximum time in seconds for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. Defaults to 600s. + type: integer + format: int32 + replicas: + description: Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. + type: integer + format: int32 + revisionHistoryLimit: + description: The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. + type: integer + format: int32 + selector: + description: Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. 
+ type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + strategy: + description: The deployment strategy to use to replace existing pods with new ones. + type: object + properties: + rollingUpdate: + description: 'Rolling update config params. Present only if DeploymentStrategyType = RollingUpdate. --- TODO: Update this to follow our convention for oneOf, whatever we decide it to be.' + type: object + properties: + maxSurge: + description: 'The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. 
Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods.' + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + maxUnavailable: + description: 'The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods.' + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + type: + description: Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate. + type: string + template: + description: Template describes the pods that will be created. The only allowed template.spec.restartPolicy value is "Always". + type: object + properties: + metadata: + description: 'Standard object''s metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata' + type: object + x-kubernetes-preserve-unknown-fields: true + spec: + description: 'Specification of the desired behavior of the pod. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status' + type: object + required: + - containers + properties: + activeDeadlineSeconds: + description: Optional duration in seconds the pod may be active on the node relative to StartTime before the system will actively try to mark it failed and kill associated containers. Value must be a positive integer. + type: integer + format: int64 + affinity: + description: If specified, the pod's scheduling constraints + type: object + properties: + nodeAffinity: + description: Describes node affinity scheduling rules for the pod. + type: object + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. + type: array + items: + description: An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + type: object + required: + - preference + - weight + properties: + preference: + description: A node selector term, associated with the corresponding weight. + type: object + properties: + matchExpressions: + description: A list of node selector requirements by node's labels. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
+ type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. + type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchFields: + description: A list of node selector requirements by node's fields. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. + type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. + type: array + items: + type: string + weight: + description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. + type: integer + format: int32 + requiredDuringSchedulingIgnoredDuringExecution: + description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. 
If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. + type: object + required: + - nodeSelectorTerms + properties: + nodeSelectorTerms: + description: Required. A list of node selector terms. The terms are ORed. + type: array + items: + description: A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + type: object + properties: + matchExpressions: + description: A list of node selector requirements by node's labels. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. + type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchFields: + description: A list of node selector requirements by node's fields. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. 
+ type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. + type: array + items: + type: string + podAffinity: + description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). + type: object + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. + type: array + items: + description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) + type: object + required: + - podAffinityTerm + - weight + properties: + podAffinityTerm: + description: Required. A pod affinity term, associated with the corresponding weight. + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. 
+ type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
+ type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". + type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + weight: + description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 
+ type: integer + format: int32 + requiredDuringSchedulingIgnoredDuringExecution: + description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. + type: array + items: + description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
+ type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 
+ type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". + type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + podAntiAffinity: + description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). + type: object + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. + type: array + items: + description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) + type: object + required: + - podAffinityTerm + - weight + properties: + podAffinityTerm: + description: Required. 
A pod affinity term, associated with the corresponding weight. + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 
+ type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". + type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + weight: + description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100. 
+ type: integer + format: int32 + requiredDuringSchedulingIgnoredDuringExecution: + description: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. + type: array + items: + description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. 
+ type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. 
+ type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". + type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + automountServiceAccountToken: + description: AutomountServiceAccountToken indicates whether a service account token should be automatically mounted. + type: boolean + containers: + description: List of containers belonging to the pod. Containers cannot currently be added or removed. There must be at least one container in a Pod. Cannot be updated. + type: array + items: + description: A single application container that you want to run within a pod. + type: object + required: + - name + properties: + args: + description: 'Arguments to the entrypoint. The container image''s CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container''s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. 
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell' + type: array + items: + type: string + command: + description: 'Entrypoint array. Not executed within a shell. The container image''s ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container''s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell' + type: array + items: + type: string + env: + description: List of environment variables to set in the container. Cannot be updated. + type: array + items: + description: EnvVar represents an environment variable present in a Container. + type: object + required: + - name + properties: + name: + description: Name of the environment variable. Must be a C_IDENTIFIER. + type: string + value: + description: 'Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".' + type: string + valueFrom: + description: Source for the environment variable's value. Cannot be used if value is not empty. 
+ type: object + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + type: object + required: + - key + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the ConfigMap or its key must be defined + type: boolean + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['''']`, `metadata.annotations['''']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.' + type: object + required: + - fieldPath + properties: + apiVersion: + description: Version of the schema the FieldPath is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the specified API version. + type: string + resourceFieldRef: + description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.' + type: object + required: + - resource + properties: + containerName: + description: 'Container name: required for volumes, optional for env vars' + type: string + divisor: + description: Specifies the output format of the exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + secretKeyRef: + description: Selects a key of a secret in the pod's namespace + type: object + required: + - key + properties: + key: + description: The key of the secret to select from. 
Must be a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the Secret or its key must be defined + type: boolean + envFrom: + description: List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. + type: array + items: + description: EnvFromSource represents the source of a set of ConfigMaps + type: object + properties: + configMapRef: + description: The ConfigMap to select from + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the ConfigMap must be defined + type: boolean + prefix: + description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. + type: string + secretRef: + description: The Secret to select from + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the Secret must be defined + type: boolean + image: + description: 'Container image name. 
More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.' + type: string + imagePullPolicy: + description: 'Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images' + type: string + lifecycle: + description: Actions that the management system should take in response to container lifecycle events. Cannot be updated. + type: object + properties: + postStart: + description: 'PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. 
+ type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + tcpSocket: + description: Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + preStop: + description: 'PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod''s termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod''s termination grace period (unless delayed by finalizers). 
Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + tcpSocket: + description: Deprecated. 
TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + livenessProbe: + description: 'Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + failureThreshold: + description: Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. + type: integer + format: int32 + grpc: + description: GRPC specifies an action involving a GRPC port. + type: object + required: + - port + properties: + port: + description: Port number of the gRPC service. Number must be in the range 1 to 65535. + type: integer + format: int32 + service: + description: "Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). 
\n If this is not specified, the default behavior is defined by gRPC." + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + initialDelaySeconds: + description: 'Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + periodSeconds: + description: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. + type: integer + format: int32 + successThreshold: + description: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + type: integer + format: int32 + tcpSocket: + description: TCPSocket specifies an action involving a TCP port. 
+ type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + terminationGracePeriodSeconds: + description: Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + type: integer + format: int64 + timeoutSeconds: + description: 'Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + name: + description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. + type: string + ports: + description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. 
For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated. + type: array + items: + description: ContainerPort represents a network port in a single container. + type: object + required: + - containerPort + properties: + containerPort: + description: Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. + type: integer + format: int32 + hostIP: + description: What host IP to bind the external port to. + type: string + hostPort: + description: Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. + type: integer + format: int32 + name: + description: If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. + type: string + protocol: + description: Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". + type: string + default: TCP + x-kubernetes-list-map-keys: + - containerPort + - protocol + x-kubernetes-list-type: map + readinessProbe: + description: 'Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 
+ type: array + items: + type: string + failureThreshold: + description: Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. + type: integer + format: int32 + grpc: + description: GRPC specifies an action involving a GRPC port. + type: object + required: + - port + properties: + port: + description: Port number of the gRPC service. Number must be in the range 1 to 65535. + type: integer + format: int32 + service: + description: "Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). \n If this is not specified, the default behavior is defined by gRPC." + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. 
+ type: string + initialDelaySeconds: + description: 'Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + periodSeconds: + description: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. + type: integer + format: int32 + successThreshold: + description: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + type: integer + format: int32 + tcpSocket: + description: TCPSocket specifies an action involving a TCP port. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + terminationGracePeriodSeconds: + description: Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. 
+ type: integer + format: int64 + timeoutSeconds: + description: 'Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + resizePolicy: + description: Resources resize policy for the container. + type: array + items: + description: ContainerResizePolicy represents resource resize policy for the container. + type: object + required: + - resourceName + - restartPolicy + properties: + resourceName: + description: 'Name of the resource to which this resource resize policy applies. Supported values: cpu, memory.' + type: string + restartPolicy: + description: Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. + type: string + x-kubernetes-list-type: atomic + resources: + description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + properties: + claims: + description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable. It can only be set for containers." + type: array + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + type: object + required: + - name + properties: + name: + description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. + type: string + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + description: 'Limits describes the maximum amount of compute resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + requests: + description: 'Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + restartPolicy: + description: 'RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod''s restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. 
Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed.' + type: string + securityContext: + description: 'SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/' + type: object + properties: + allowPrivilegeEscalation: + description: 'AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows.' + type: boolean + capabilities: + description: The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. + type: object + properties: + add: + description: Added capabilities + type: array + items: + description: Capability represent POSIX capabilities type + type: string + drop: + description: Removed capabilities + type: array + items: + description: Capability represent POSIX capabilities type + type: string + privileged: + description: Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. + type: boolean + procMount: + description: procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. 
Note that this field cannot be set when spec.os.name is windows. + type: string + readOnlyRootFilesystem: + description: Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. + type: boolean + runAsGroup: + description: The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. + type: integer + format: int64 + runAsNonRoot: + description: Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. + type: boolean + runAsUser: + description: The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. + type: integer + format: int64 + seLinuxOptions: + description: The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. 
+ type: object + properties: + level: + description: Level is SELinux level label that applies to the container. + type: string + role: + description: Role is a SELinux role label that applies to the container. + type: string + type: + description: Type is a SELinux type label that applies to the container. + type: string + user: + description: User is a SELinux user label that applies to the container. + type: string + seccompProfile: + description: The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. + type: object + required: + - type + properties: + localhostProfile: + description: localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. + type: string + type: + description: "type indicates which kind of seccomp profile will be applied. Valid options are: \n Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied." + type: string + windowsOptions: + description: The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. + type: object + properties: + gmsaCredentialSpec: + description: GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. 
+ type: string + gmsaCredentialSpecName: + description: GMSACredentialSpecName is the name of the GMSA credential spec to use. + type: string + hostProcess: + description: HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. + type: boolean + runAsUserName: + description: The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. + type: string + startupProbe: + description: 'StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod''s lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. 
+ type: array + items: + type: string + failureThreshold: + description: Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. + type: integer + format: int32 + grpc: + description: GRPC specifies an action involving a GRPC port. + type: object + required: + - port + properties: + port: + description: Port number of the gRPC service. Number must be in the range 1 to 65535. + type: integer + format: int32 + service: + description: "Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). \n If this is not specified, the default behavior is defined by gRPC." + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. 
+ type: string + initialDelaySeconds: + description: 'Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + periodSeconds: + description: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. + type: integer + format: int32 + successThreshold: + description: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + type: integer + format: int32 + tcpSocket: + description: TCPSocket specifies an action involving a TCP port. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + terminationGracePeriodSeconds: + description: Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. 
+ type: integer + format: int64 + timeoutSeconds: + description: 'Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + stdin: + description: Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. + type: boolean + stdinOnce: + description: Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false + type: boolean + terminationMessagePath: + description: 'Optional: Path at which the file to which the container''s termination message will be written is mounted into the container''s filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.' + type: string + terminationMessagePolicy: + description: Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. 
The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. + type: string + tty: + description: Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. + type: boolean + volumeDevices: + description: volumeDevices is the list of block devices to be used by the container. + type: array + items: + description: volumeDevice describes a mapping of a raw block device within a container. + type: object + required: + - devicePath + - name + properties: + devicePath: + description: devicePath is the path inside of the container that the device will be mapped to. + type: string + name: + description: name must match the name of a persistentVolumeClaim in the pod + type: string + volumeMounts: + description: Pod volumes to mount into the container's filesystem. Cannot be updated. + type: array + items: + description: VolumeMount describes a mounting of a Volume within a container. + type: object + required: + - mountPath + - name + properties: + mountPath: + description: Path within the container at which the volume should be mounted. Must not contain ':'. + type: string + mountPropagation: + description: mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. + type: string + name: + description: This must match the Name of a Volume. + type: string + readOnly: + description: Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. + type: boolean + subPath: + description: Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). + type: string + subPathExpr: + description: Expanded path within the volume from which the container's volume should be mounted. 
Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. + type: string + workingDir: + description: Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. + type: string + dnsConfig: + description: Specifies the DNS parameters of a pod. Parameters specified here will be merged to the generated DNS configuration based on DNSPolicy. + type: object + properties: + nameservers: + description: A list of DNS name server IP addresses. This will be appended to the base nameservers generated from DNSPolicy. Duplicated nameservers will be removed. + type: array + items: + type: string + options: + description: A list of DNS resolver options. This will be merged with the base options generated from DNSPolicy. Duplicated entries will be removed. Resolution options given in Options will override those that appear in the base DNSPolicy. + type: array + items: + description: PodDNSConfigOption defines DNS resolver options of a pod. + type: object + properties: + name: + description: Required. + type: string + value: + type: string + searches: + description: A list of DNS search domains for host-name lookup. This will be appended to the base search paths generated from DNSPolicy. Duplicated search paths will be removed. + type: array + items: + type: string + dnsPolicy: + description: Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are 'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS parameters given in DNSConfig will be merged with the policy selected with DNSPolicy. To have DNS options set along with hostNetwork, you have to specify DNS policy explicitly to 'ClusterFirstWithHostNet'. 
+ type: string + enableServiceLinks: + description: 'EnableServiceLinks indicates whether information about services should be injected into pod''s environment variables, matching the syntax of Docker links. Optional: Defaults to true.' + type: boolean + ephemeralContainers: + description: List of ephemeral containers run in this pod. Ephemeral containers may be run in an existing pod to perform user-initiated actions such as debugging. This list cannot be specified when creating a pod, and it cannot be modified by updating the pod spec. In order to add an ephemeral container to an existing pod, use the pod's ephemeralcontainers subresource. + type: array + items: + description: "An EphemeralContainer is a temporary container that you may add to an existing Pod for user-initiated activities such as debugging. Ephemeral containers have no resource or scheduling guarantees, and they will not be restarted when they exit or when a Pod is removed or restarted. The kubelet may evict a Pod if an ephemeral container causes the Pod to exceed its resource allocation. \n To add an ephemeral container, use the ephemeralcontainers subresource of an existing Pod. Ephemeral containers may not be removed or restarted." + type: object + required: + - name + properties: + args: + description: 'Arguments to the entrypoint. The image''s CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container''s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. 
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell' + type: array + items: + type: string + command: + description: 'Entrypoint array. Not executed within a shell. The image''s ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container''s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell' + type: array + items: + type: string + env: + description: List of environment variables to set in the container. Cannot be updated. + type: array + items: + description: EnvVar represents an environment variable present in a Container. + type: object + required: + - name + properties: + name: + description: Name of the environment variable. Must be a C_IDENTIFIER. + type: string + value: + description: 'Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".' + type: string + valueFrom: + description: Source for the environment variable's value. Cannot be used if value is not empty. + type: object + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. 
+ type: object + required: + - key + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the ConfigMap or its key must be defined + type: boolean + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['''']`, `metadata.annotations['''']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.' + type: object + required: + - fieldPath + properties: + apiVersion: + description: Version of the schema the FieldPath is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the specified API version. + type: string + resourceFieldRef: + description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.' + type: object + required: + - resource + properties: + containerName: + description: 'Container name: required for volumes, optional for env vars' + type: string + divisor: + description: Specifies the output format of the exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + secretKeyRef: + description: Selects a key of a secret in the pod's namespace + type: object + required: + - key + properties: + key: + description: The key of the secret to select from. Must be a valid secret key. + type: string + name: + description: 'Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the Secret or its key must be defined + type: boolean + envFrom: + description: List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated. + type: array + items: + description: EnvFromSource represents the source of a set of ConfigMaps + type: object + properties: + configMapRef: + description: The ConfigMap to select from + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the ConfigMap must be defined + type: boolean + prefix: + description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. + type: string + secretRef: + description: The Secret to select from + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the Secret must be defined + type: boolean + image: + description: 'Container image name. More info: https://kubernetes.io/docs/concepts/containers/images' + type: string + imagePullPolicy: + description: 'Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. 
Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images' + type: string + lifecycle: + description: Lifecycle is not allowed for ephemeral containers. + type: object + properties: + postStart: + description: 'PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. 
+ type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + tcpSocket: + description: Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + preStop: + description: 'PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod''s termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod''s termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. 
The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + tcpSocket: + description: Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for the backward compatibility. There are no validation of this field and lifecycle hooks will fail in runtime when tcp handler is specified. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
+ anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + livenessProbe: + description: Probes are not allowed for ephemeral containers. + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + failureThreshold: + description: Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. + type: integer + format: int32 + grpc: + description: GRPC specifies an action involving a GRPC port. + type: object + required: + - port + properties: + port: + description: Port number of the gRPC service. Number must be in the range 1 to 65535. + type: integer + format: int32 + service: + description: "Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). \n If this is not specified, the default behavior is defined by gRPC." + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. 
+ type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + initialDelaySeconds: + description: 'Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + periodSeconds: + description: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. + type: integer + format: int32 + successThreshold: + description: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + type: integer + format: int32 + tcpSocket: + description: TCPSocket specifies an action involving a TCP port. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. 
+ anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + terminationGracePeriodSeconds: + description: Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + type: integer + format: int64 + timeoutSeconds: + description: 'Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + name: + description: Name of the ephemeral container specified as a DNS_LABEL. This name must be unique among all containers, init containers and ephemeral containers. + type: string + ports: + description: Ports are not allowed for ephemeral containers. + type: array + items: + description: ContainerPort represents a network port in a single container. + type: object + required: + - containerPort + properties: + containerPort: + description: Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. + type: integer + format: int32 + hostIP: + description: What host IP to bind the external port to. + type: string + hostPort: + description: Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. 
If HostNetwork is specified, this must match ContainerPort. Most containers do not need this. + type: integer + format: int32 + name: + description: If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. + type: string + protocol: + description: Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". + type: string + default: TCP + x-kubernetes-list-map-keys: + - containerPort + - protocol + x-kubernetes-list-type: map + readinessProbe: + description: Probes are not allowed for ephemeral containers. + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + failureThreshold: + description: Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. + type: integer + format: int32 + grpc: + description: GRPC specifies an action involving a GRPC port. + type: object + required: + - port + properties: + port: + description: Port number of the gRPC service. Number must be in the range 1 to 65535. + type: integer + format: int32 + service: + description: "Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). \n If this is not specified, the default behavior is defined by gRPC." + type: string + httpGet: + description: HTTPGet specifies the http request to perform. 
+ type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + initialDelaySeconds: + description: 'Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + periodSeconds: + description: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. + type: integer + format: int32 + successThreshold: + description: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + type: integer + format: int32 + tcpSocket: + description: TCPSocket specifies an action involving a TCP port. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' 
+ type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + terminationGracePeriodSeconds: + description: Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + type: integer + format: int64 + timeoutSeconds: + description: 'Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + resizePolicy: + description: Resources resize policy for the container. + type: array + items: + description: ContainerResizePolicy represents resource resize policy for the container. + type: object + required: + - resourceName + - restartPolicy + properties: + resourceName: + description: 'Name of the resource to which this resource resize policy applies. Supported values: cpu, memory.' + type: string + restartPolicy: + description: Restart policy to apply when specified resource is resized. If not specified, it defaults to NotRequired. 
+ type: string + x-kubernetes-list-type: atomic + resources: + description: Resources are not allowed for ephemeral containers. Ephemeral containers use spare resources already allocated to the pod. + type: object + properties: + claims: + description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable. It can only be set for containers." + type: array + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + type: object + required: + - name + properties: + name: + description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. + type: string + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + description: 'Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + requests: + description: 'Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + restartPolicy: + description: Restart policy for the container to manage the restart behavior of each container within a pod. This may only be set for init containers. You cannot set this field on ephemeral containers. + type: string + securityContext: + description: 'Optional: SecurityContext defines the security options the ephemeral container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext.' + type: object + properties: + allowPrivilegeEscalation: + description: 'AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows.' + type: boolean + capabilities: + description: The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. + type: object + properties: + add: + description: Added capabilities + type: array + items: + description: Capability represent POSIX capabilities type + type: string + drop: + description: Removed capabilities + type: array + items: + description: Capability represent POSIX capabilities type + type: string + privileged: + description: Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. 
Note that this field cannot be set when spec.os.name is windows. + type: boolean + procMount: + description: procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. + type: string + readOnlyRootFilesystem: + description: Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. + type: boolean + runAsGroup: + description: The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. + type: integer + format: int64 + runAsNonRoot: + description: Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. + type: boolean + runAsUser: + description: The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. + type: integer + format: int64 + seLinuxOptions: + description: The SELinux context to be applied to the container. 
If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. + type: object + properties: + level: + description: Level is SELinux level label that applies to the container. + type: string + role: + description: Role is a SELinux role label that applies to the container. + type: string + type: + description: Type is a SELinux type label that applies to the container. + type: string + user: + description: User is a SELinux user label that applies to the container. + type: string + seccompProfile: + description: The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. + type: object + required: + - type + properties: + localhostProfile: + description: localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. + type: string + type: + description: "type indicates which kind of seccomp profile will be applied. Valid options are: \n Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied." + type: string + windowsOptions: + description: The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. 
Note that this field cannot be set when spec.os.name is linux. + type: object + properties: + gmsaCredentialSpec: + description: GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. + type: string + gmsaCredentialSpecName: + description: GMSACredentialSpecName is the name of the GMSA credential spec to use. + type: string + hostProcess: + description: HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. + type: boolean + runAsUserName: + description: The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. + type: string + startupProbe: + description: Probes are not allowed for ephemeral containers. + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + failureThreshold: + description: Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. 
+ type: integer + format: int32 + grpc: + description: GRPC specifies an action involving a GRPC port. + type: object + required: + - port + properties: + port: + description: Port number of the gRPC service. Number must be in the range 1 to 65535. + type: integer + format: int32 + service: + description: "Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). \n If this is not specified, the default behavior is defined by gRPC." + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + initialDelaySeconds: + description: 'Number of seconds after the container has started before liveness probes are initiated. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + periodSeconds: + description: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. + type: integer + format: int32 + successThreshold: + description: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + type: integer + format: int32 + tcpSocket: + description: TCPSocket specifies an action involving a TCP port. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + terminationGracePeriodSeconds: + description: Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + type: integer + format: int64 + timeoutSeconds: + description: 'Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. 
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + stdin: + description: Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. + type: boolean + stdinOnce: + description: Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, a container processes that reads from stdin will never receive an EOF. Default is false + type: boolean + targetContainerName: + description: "If set, the name of the container from PodSpec that this ephemeral container targets. The ephemeral container will be run in the namespaces (IPC, PID, etc) of this container. If not set then the ephemeral container uses the namespaces configured in the Pod spec. \n The container runtime must implement support for this feature. If the runtime does not support namespace targeting then the result of setting this field is undefined." + type: string + terminationMessagePath: + description: 'Optional: Path at which the file to which the container''s termination message will be written is mounted into the container''s filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.' 
+ type: string + terminationMessagePolicy: + description: Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. + type: string + tty: + description: Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. + type: boolean + volumeDevices: + description: volumeDevices is the list of block devices to be used by the container. + type: array + items: + description: volumeDevice describes a mapping of a raw block device within a container. + type: object + required: + - devicePath + - name + properties: + devicePath: + description: devicePath is the path inside of the container that the device will be mapped to. + type: string + name: + description: name must match the name of a persistentVolumeClaim in the pod + type: string + volumeMounts: + description: Pod volumes to mount into the container's filesystem. Subpath mounts are not allowed for ephemeral containers. Cannot be updated. + type: array + items: + description: VolumeMount describes a mounting of a Volume within a container. + type: object + required: + - mountPath + - name + properties: + mountPath: + description: Path within the container at which the volume should be mounted. Must not contain ':'. + type: string + mountPropagation: + description: mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. + type: string + name: + description: This must match the Name of a Volume. 
+ type: string + readOnly: + description: Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. + type: boolean + subPath: + description: Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). + type: string + subPathExpr: + description: Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. + type: string + workingDir: + description: Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. + type: string + hostAliases: + description: HostAliases is an optional list of hosts and IPs that will be injected into the pod's hosts file if specified. This is only valid for non-hostNetwork pods. + type: array + items: + description: HostAlias holds the mapping between IP and hostnames that will be injected as an entry in the pod's hosts file. + type: object + properties: + hostnames: + description: Hostnames for the above IP address. + type: array + items: + type: string + ip: + description: IP address of the host file entry. + type: string + hostIPC: + description: 'Use the host''s ipc namespace. Optional: Default to false.' + type: boolean + hostNetwork: + description: Host networking requested for this pod. Use the host's network namespace. If this option is set, the ports that will be used must be specified. Default to false. + type: boolean + hostPID: + description: 'Use the host''s pid namespace. Optional: Default to false.' + type: boolean + hostUsers: + description: 'Use the host''s user namespace. Optional: Default to true. 
If set to true or not present, the pod will be run in the host user namespace, useful for when the pod needs a feature only available to the host user namespace, such as loading a kernel module with CAP_SYS_MODULE. When set to false, a new userns is created for the pod. Setting false is useful for mitigating container breakout vulnerabilities even allowing users to run their containers as root without actually having root privileges on the host. This field is alpha-level and is only honored by servers that enable the UserNamespacesSupport feature.' + type: boolean + hostname: + description: Specifies the hostname of the Pod If not specified, the pod's hostname will be set to a system-defined value. + type: string + imagePullSecrets: + description: 'ImagePullSecrets is an optional list of references to secrets in the same namespace to use for pulling any of the images used by this PodSpec. If specified, these secrets will be passed to individual puller implementations for them to use. More info: https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod' + type: array + items: + description: LocalObjectReference contains enough information to let you locate the referenced object inside the same namespace. + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + initContainers: + description: 'List of initialization containers belonging to the pod. Init containers are executed in order prior to containers being started. If any init container fails, the pod is considered to have failed and is handled according to its restartPolicy. The name for an init container or normal container must be unique among all containers. Init containers may not have Lifecycle actions, Readiness probes, Liveness probes, or Startup probes. 
The resourceRequirements of an init container are taken into account during scheduling by finding the highest request/limit for each resource type, and then using the max of that value or the sum of the normal containers. Limits are applied to init containers in a similar fashion. Init containers cannot currently be added or removed. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/' + type: array + items: + description: A single application container that you want to run within a pod. + type: object + required: + - name + properties: + args: + description: 'Arguments to the entrypoint. The container image''s CMD is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container''s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell' + type: array + items: + type: string + command: + description: 'Entrypoint array. Not executed within a shell. The container image''s ENTRYPOINT is used if this is not provided. Variable references $(VAR_NAME) are expanded using the container''s environment. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Cannot be updated.
More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell' + type: array + items: + type: string + env: + description: List of environment variables to set in the container. Cannot be updated. + type: array + items: + description: EnvVar represents an environment variable present in a Container. + type: object + required: + - name + properties: + name: + description: Name of the environment variable. Must be a C_IDENTIFIER. + type: string + value: + description: 'Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".' + type: string + valueFrom: + description: Source for the environment variable's value. Cannot be used if value is not empty. + type: object + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + type: object + required: + - key + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the ConfigMap or its key must be defined + type: boolean + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['''']`, `metadata.annotations['''']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.' 
+ type: object + required: + - fieldPath + properties: + apiVersion: + description: Version of the schema the FieldPath is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the specified API version. + type: string + resourceFieldRef: + description: 'Selects a resource of the container: only resource limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.' + type: object + required: + - resource + properties: + containerName: + description: 'Container name: required for volumes, optional for env vars' + type: string + divisor: + description: Specifies the output format of the exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + secretKeyRef: + description: Selects a key of a secret in the pod's namespace + type: object + required: + - key + properties: + key: + description: The key of the secret to select from. Must be a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the Secret or its key must be defined + type: boolean + envFrom: + description: List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence.
Cannot be updated. + type: array + items: + description: EnvFromSource represents the source of a set of ConfigMaps + type: object + properties: + configMapRef: + description: The ConfigMap to select from + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the ConfigMap must be defined + type: boolean + prefix: + description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. + type: string + secretRef: + description: The Secret to select from + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the Secret must be defined + type: boolean + image: + description: 'Container image name. More info: https://kubernetes.io/docs/concepts/containers/images This field is optional to allow higher level config management to default or override container images in workload controllers like Deployments and StatefulSets.' + type: string + imagePullPolicy: + description: 'Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated. More info: https://kubernetes.io/docs/concepts/containers/images#updating-images' + type: string + lifecycle: + description: Actions that the management system should take in response to container lifecycle events. Cannot be updated. + type: object + properties: + postStart: + description: 'PostStart is called immediately after a container is created. If the handler fails, the container is terminated and restarted according to its restart policy. 
Other management of the container blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + tcpSocket: + description: Deprecated. 
TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + preStop: + description: 'PreStop is called immediately before a container is terminated due to an API request or management event such as liveness/startup probe failure, preemption, resource contention, etc. The handler is not called if the container crashes or exits. The Pod''s termination grace period countdown begins before the PreStop hook is executed. Regardless of the outcome of the handler, the container will eventually terminate within the Pod''s termination grace period (unless delayed by finalizers). Other management of the container blocks until the hook completes or until the termination grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + httpGet: + description: HTTPGet specifies the http request to perform.
+ type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + tcpSocket: + description: Deprecated. TCPSocket is NOT supported as a LifecycleHandler and kept for backward compatibility. There is no validation of this field, and lifecycle hooks will fail at runtime when a tcp handler is specified. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + livenessProbe: + description: 'Periodic probe of container liveness. Container will be restarted if the probe fails. Cannot be updated.
More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + failureThreshold: + description: Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. + type: integer + format: int32 + grpc: + description: GRPC specifies an action involving a GRPC port. + type: object + required: + - port + properties: + port: + description: Port number of the gRPC service. Number must be in the range 1 to 65535. + type: integer + format: int32 + service: + description: "Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). \n If this is not specified, the default behavior is defined by gRPC." + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. 
This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + initialDelaySeconds: + description: 'Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + periodSeconds: + description: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. + type: integer + format: int32 + successThreshold: + description: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + type: integer + format: int32 + tcpSocket: + description: TCPSocket specifies an action involving a TCP port. + type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + terminationGracePeriodSeconds: + description: Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. 
Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + type: integer + format: int64 + timeoutSeconds: + description: 'Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + name: + description: Name of the container specified as a DNS_LABEL. Each container in a pod must have a unique name (DNS_LABEL). Cannot be updated. + type: string + ports: + description: List of ports to expose from the container. Not specifying a port here DOES NOT prevent that port from being exposed. Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network. Modifying this array with strategic merge patch may corrupt the data. For more information See https://github.com/kubernetes/kubernetes/issues/108255. Cannot be updated. + type: array + items: + description: ContainerPort represents a network port in a single container. + type: object + required: + - containerPort + properties: + containerPort: + description: Number of port to expose on the pod's IP address. This must be a valid port number, 0 < x < 65536. + type: integer + format: int32 + hostIP: + description: What host IP to bind the external port to. + type: string + hostPort: + description: Number of port to expose on the host. If specified, this must be a valid port number, 0 < x < 65536. If HostNetwork is specified, this must match ContainerPort. 
Most containers do not need this. + type: integer + format: int32 + name: + description: If specified, this must be an IANA_SVC_NAME and unique within the pod. Each named port in a pod must have a unique name. Name for the port that can be referred to by services. + type: string + protocol: + description: Protocol for port. Must be UDP, TCP, or SCTP. Defaults to "TCP". + type: string + default: TCP + x-kubernetes-list-map-keys: + - containerPort + - protocol + x-kubernetes-list-type: map + readinessProbe: + description: 'Periodic probe of container service readiness. Container will be removed from service endpoints if the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + failureThreshold: + description: Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. + type: integer + format: int32 + grpc: + description: GRPC specifies an action involving a GRPC port. + type: object + required: + - port + properties: + port: + description: Port number of the gRPC service. Number must be in the range 1 to 65535. + type: integer + format: int32 + service: + description: "Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). 
\n If this is not specified, the default behavior is defined by gRPC." + type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + initialDelaySeconds: + description: 'Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + periodSeconds: + description: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. + type: integer + format: int32 + successThreshold: + description: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + type: integer + format: int32 + tcpSocket: + description: TCPSocket specifies an action involving a TCP port. 
+ type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + terminationGracePeriodSeconds: + description: Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + type: integer + format: int64 + timeoutSeconds: + description: 'Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + resizePolicy: + description: Resources resize policy for the container. + type: array + items: + description: ContainerResizePolicy represents resource resize policy for the container. + type: object + required: + - resourceName + - restartPolicy + properties: + resourceName: + description: 'Name of the resource to which this resource resize policy applies. Supported values: cpu, memory.' + type: string + restartPolicy: + description: Restart policy to apply when specified resource is resized. 
If not specified, it defaults to NotRequired. + type: string + x-kubernetes-list-type: atomic + resources: + description: 'Compute Resources required by this container. Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + properties: + claims: + description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable. It can only be set for containers." + type: array + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + type: object + required: + - name + properties: + name: + description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. + type: string + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + description: 'Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + requests: + description: 'Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + restartPolicy: + description: 'RestartPolicy defines the restart behavior of individual containers in a pod. This field may only be set for init containers, and the only allowed value is "Always". For non-init containers or when this field is not specified, the restart behavior is defined by the Pod''s restart policy and the container type. Setting the RestartPolicy as "Always" for the init container will have the following effect: this init container will be continually restarted on exit until all regular containers have terminated. Once all regular containers have completed, all init containers with restartPolicy "Always" will be shut down. This lifecycle differs from normal init containers and is often referred to as a "sidecar" container. Although this init container still starts in the init container sequence, it does not wait for the container to complete before proceeding to the next init container. Instead, the next init container starts immediately after this init container is started, or after any startupProbe has successfully completed.' + type: string + securityContext: + description: 'SecurityContext defines the security options the container should be run with. If set, the fields of SecurityContext override the equivalent fields of PodSecurityContext. More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/' + type: object + properties: + allowPrivilegeEscalation: + description: 'AllowPrivilegeEscalation controls whether a process can gain more privileges than its parent process. This bool directly controls if the no_new_privs flag will be set on the container process. 
AllowPrivilegeEscalation is true always when the container is: 1) run as Privileged 2) has CAP_SYS_ADMIN Note that this field cannot be set when spec.os.name is windows.' + type: boolean + capabilities: + description: The capabilities to add/drop when running containers. Defaults to the default set of capabilities granted by the container runtime. Note that this field cannot be set when spec.os.name is windows. + type: object + properties: + add: + description: Added capabilities + type: array + items: + description: Capability represent POSIX capabilities type + type: string + drop: + description: Removed capabilities + type: array + items: + description: Capability represent POSIX capabilities type + type: string + privileged: + description: Run container in privileged mode. Processes in privileged containers are essentially equivalent to root on the host. Defaults to false. Note that this field cannot be set when spec.os.name is windows. + type: boolean + procMount: + description: procMount denotes the type of proc mount to use for the containers. The default is DefaultProcMount which uses the container runtime defaults for readonly paths and masked paths. This requires the ProcMountType feature flag to be enabled. Note that this field cannot be set when spec.os.name is windows. + type: string + readOnlyRootFilesystem: + description: Whether this container has a read-only root filesystem. Default is false. Note that this field cannot be set when spec.os.name is windows. + type: boolean + runAsGroup: + description: The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. + type: integer + format: int64 + runAsNonRoot: + description: Indicates that the container must run as a non-root user. 
If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. + type: boolean + runAsUser: + description: The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. + type: integer + format: int64 + seLinuxOptions: + description: The SELinux context to be applied to the container. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is windows. + type: object + properties: + level: + description: Level is SELinux level label that applies to the container. + type: string + role: + description: Role is a SELinux role label that applies to the container. + type: string + type: + description: Type is a SELinux type label that applies to the container. + type: string + user: + description: User is a SELinux user label that applies to the container. + type: string + seccompProfile: + description: The seccomp options to use by this container. If seccomp options are provided at both the pod & container level, the container options override the pod options. Note that this field cannot be set when spec.os.name is windows. 
+ type: object + required: + - type + properties: + localhostProfile: + description: localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. + type: string + type: + description: "type indicates which kind of seccomp profile will be applied. Valid options are: \n Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied." + type: string + windowsOptions: + description: The Windows specific settings applied to all containers. If unspecified, the options from the PodSecurityContext will be used. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. + type: object + properties: + gmsaCredentialSpec: + description: GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. + type: string + gmsaCredentialSpecName: + description: GMSACredentialSpecName is the name of the GMSA credential spec to use. + type: string + hostProcess: + description: HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. + type: boolean + runAsUserName: + description: The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. 
May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. + type: string + startupProbe: + description: 'StartupProbe indicates that the Pod has successfully initialized. If specified, no other probes are executed until this completes successfully. If this probe fails, the Pod will be restarted, just as if the livenessProbe failed. This can be used to provide different probe parameters at the beginning of a Pod''s lifecycle, when it might take a long time to load data or warm a cache, than during steady-state operation. This cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: object + properties: + exec: + description: Exec specifies the action to take. + type: object + properties: + command: + description: Command is the command line to execute inside the container, the working directory for the command is root ('/') in the container's filesystem. The command is simply exec'd, it is not run inside a shell, so traditional shell instructions ('|', etc) won't work. To use a shell, you need to explicitly call out to that shell. Exit status of 0 is treated as live/healthy and non-zero is unhealthy. + type: array + items: + type: string + failureThreshold: + description: Minimum consecutive failures for the probe to be considered failed after having succeeded. Defaults to 3. Minimum value is 1. + type: integer + format: int32 + grpc: + description: GRPC specifies an action involving a GRPC port. + type: object + required: + - port + properties: + port: + description: Port number of the gRPC service. Number must be in the range 1 to 65535. + type: integer + format: int32 + service: + description: "Service is the name of the service to place in the gRPC HealthCheckRequest (see https://github.com/grpc/grpc/blob/master/doc/health-checking.md). \n If this is not specified, the default behavior is defined by gRPC." 
+ type: string + httpGet: + description: HTTPGet specifies the http request to perform. + type: object + required: + - port + properties: + host: + description: Host name to connect to, defaults to the pod IP. You probably want to set "Host" in httpHeaders instead. + type: string + httpHeaders: + description: Custom headers to set in the request. HTTP allows repeated headers. + type: array + items: + description: HTTPHeader describes a custom header to be used in HTTP probes + type: object + required: + - name + - value + properties: + name: + description: The header field name. This will be canonicalized upon output, so case-variant names will be understood as the same header. + type: string + value: + description: The header field value + type: string + path: + description: Path to access on the HTTP server. + type: string + port: + description: Name or number of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + scheme: + description: Scheme to use for connecting to the host. Defaults to HTTP. + type: string + initialDelaySeconds: + description: 'Number of seconds after the container has started before liveness probes are initiated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + periodSeconds: + description: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1. + type: integer + format: int32 + successThreshold: + description: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup. Minimum value is 1. + type: integer + format: int32 + tcpSocket: + description: TCPSocket specifies an action involving a TCP port. 
+ type: object + required: + - port + properties: + host: + description: 'Optional: Host name to connect to, defaults to the pod IP.' + type: string + port: + description: Number or name of the port to access on the container. Number must be in the range 1 to 65535. Name must be an IANA_SVC_NAME. + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + terminationGracePeriodSeconds: + description: Optional duration in seconds the pod needs to terminate gracefully upon probe failure. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. If this value is nil, the pod's terminationGracePeriodSeconds will be used. Otherwise, this value overrides the value provided by the pod spec. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). This is a beta field and requires enabling ProbeTerminationGracePeriod feature gate. Minimum value is 1. spec.terminationGracePeriodSeconds is used if unset. + type: integer + format: int64 + timeoutSeconds: + description: 'Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes' + type: integer + format: int32 + stdin: + description: Whether this container should allocate a buffer for stdin in the container runtime. If this is not set, reads from stdin in the container will always result in EOF. Default is false. + type: boolean + stdinOnce: + description: Whether the container runtime should close the stdin channel after it has been opened by a single attach. When stdin is true the stdin stream will remain open across multiple attach sessions. 
If stdinOnce is set to true, stdin is opened on container start, is empty until the first client attaches to stdin, and then remains open and accepts data until the client disconnects, at which time stdin is closed and remains closed until the container is restarted. If this flag is false, container processes that read from stdin will never receive an EOF. Default is false + type: boolean + terminationMessagePath: + description: 'Optional: Path at which the file to which the container''s termination message will be written is mounted into the container''s filesystem. Message written is intended to be brief final status, such as an assertion failure message. Will be truncated by the node if greater than 4096 bytes. The total message length across all containers will be limited to 12kb. Defaults to /dev/termination-log. Cannot be updated.' + type: string + terminationMessagePolicy: + description: Indicate how the termination message should be populated. File will use the contents of terminationMessagePath to populate the container status message on both success and failure. FallbackToLogsOnError will use the last chunk of container log output if the termination message file is empty and the container exited with an error. The log output is limited to 2048 bytes or 80 lines, whichever is smaller. Defaults to File. Cannot be updated. + type: string + tty: + description: Whether this container should allocate a TTY for itself, also requires 'stdin' to be true. Default is false. + type: boolean + volumeDevices: + description: volumeDevices is the list of block devices to be used by the container. + type: array + items: + description: volumeDevice describes a mapping of a raw block device within a container. + type: object + required: + - devicePath + - name + properties: + devicePath: + description: devicePath is the path inside of the container that the device will be mapped to. 
+ type: string + name: + description: name must match the name of a persistentVolumeClaim in the pod + type: string + volumeMounts: + description: Pod volumes to mount into the container's filesystem. Cannot be updated. + type: array + items: + description: VolumeMount describes a mounting of a Volume within a container. + type: object + required: + - mountPath + - name + properties: + mountPath: + description: Path within the container at which the volume should be mounted. Must not contain ':'. + type: string + mountPropagation: + description: mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. + type: string + name: + description: This must match the Name of a Volume. + type: string + readOnly: + description: Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. + type: boolean + subPath: + description: Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). + type: string + subPathExpr: + description: Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. + type: string + workingDir: + description: Container's working directory. If not specified, the container runtime's default will be used, which might be configured in the container image. Cannot be updated. + type: string + nodeName: + description: NodeName is a request to schedule this pod onto a specific node. If it is non-empty, the scheduler simply schedules this pod onto that node, assuming that it fits resource requirements. + type: string + nodeSelector: + description: 'NodeSelector is a selector which must be true for the pod to fit on a node. 
Selector which must match a node''s labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/' + type: object + additionalProperties: + type: string + x-kubernetes-map-type: atomic + os: + description: "Specifies the OS of the containers in the pod. Some pod and container fields are restricted if this is set. \n If the OS field is set to linux, the following fields must be unset: -securityContext.windowsOptions \n If the OS field is set to windows, following fields must be unset: - spec.hostPID - spec.hostIPC - spec.hostUsers - spec.securityContext.seLinuxOptions - spec.securityContext.seccompProfile - spec.securityContext.fsGroup - spec.securityContext.fsGroupChangePolicy - spec.securityContext.sysctls - spec.shareProcessNamespace - spec.securityContext.runAsUser - spec.securityContext.runAsGroup - spec.securityContext.supplementalGroups - spec.containers[*].securityContext.seLinuxOptions - spec.containers[*].securityContext.seccompProfile - spec.containers[*].securityContext.capabilities - spec.containers[*].securityContext.readOnlyRootFilesystem - spec.containers[*].securityContext.privileged - spec.containers[*].securityContext.allowPrivilegeEscalation - spec.containers[*].securityContext.procMount - spec.containers[*].securityContext.runAsUser - spec.containers[*].securityContext.runAsGroup" + type: object + required: + - name + properties: + name: + description: 'Name is the name of the operating system. The currently supported values are linux and windows. Additional value may be defined in future and can be one of: https://github.com/opencontainers/runtime-spec/blob/master/config.md#platform-specific-configuration Clients should expect to handle additional values and treat unrecognized values in this field as os: null' + type: string + overhead: + description: 'Overhead represents the resource overhead associated with running a pod for a given RuntimeClass. 
This field will be autopopulated at admission time by the RuntimeClass admission controller. If the RuntimeClass admission controller is enabled, overhead must not be set in Pod create requests. The RuntimeClass admission controller will reject Pod create requests which have the overhead already set. If RuntimeClass is configured and selected in the PodSpec, Overhead will be set to the value defined in the corresponding RuntimeClass, otherwise it will remain unset and treated as zero. More info: https://git.k8s.io/enhancements/keps/sig-node/688-pod-overhead/README.md' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + preemptionPolicy: + description: PreemptionPolicy is the Policy for preempting pods with lower priority. One of Never, PreemptLowerPriority. Defaults to PreemptLowerPriority if unset. + type: string + priority: + description: The priority value. Various system components use this field to find the priority of the pod. When Priority Admission Controller is enabled, it prevents users from setting this field. The admission controller populates this field from PriorityClassName. The higher the value, the higher the priority. + type: integer + format: int32 + priorityClassName: + description: If specified, indicates the pod's priority. "system-node-critical" and "system-cluster-critical" are two special keywords which indicate the highest priorities with the former being the highest priority. Any other name must be defined by creating a PriorityClass object with that name. If not specified, the pod priority will be default or zero if there is no default. + type: string + readinessGates: + description: 'If specified, all readiness gates will be evaluated for pod readiness. 
A pod is ready when all its containers are ready AND all conditions specified in the readiness gates have status equal to "True" More info: https://git.k8s.io/enhancements/keps/sig-network/580-pod-readiness-gates' + type: array + items: + description: PodReadinessGate contains the reference to a pod condition + type: object + required: + - conditionType + properties: + conditionType: + description: ConditionType refers to a condition in the pod's condition list with matching type. + type: string + resourceClaims: + description: "ResourceClaims defines which ResourceClaims must be allocated and reserved before the Pod is allowed to start. The resources will be made available to those containers which consume them by name. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable." + type: array + items: + description: PodResourceClaim references exactly one ResourceClaim through a ClaimSource. It adds a name to it that uniquely identifies the ResourceClaim inside the Pod. Containers that need access to the ResourceClaim reference it with this name. + type: object + required: + - name + properties: + name: + description: Name uniquely identifies this resource claim inside the pod. This must be a DNS_LABEL. + type: string + source: + description: Source describes where to find the ResourceClaim. + type: object + properties: + resourceClaimName: + description: ResourceClaimName is the name of a ResourceClaim object in the same namespace as this pod. + type: string + resourceClaimTemplateName: + description: "ResourceClaimTemplateName is the name of a ResourceClaimTemplate object in the same namespace as this pod. \n The template will be used to create a new ResourceClaim, which will be bound to this pod. When this pod is deleted, the ResourceClaim will also be deleted. 
The pod name and resource name, along with a generated component, will be used to form a unique name for the ResourceClaim, which will be recorded in pod.status.resourceClaimStatuses. \n This field is immutable and no changes will be made to the corresponding ResourceClaim by the control plane after creating the ResourceClaim." + type: string + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + restartPolicy: + description: 'Restart policy for all containers within the pod. One of Always, OnFailure, Never. In some contexts, only a subset of those values may be permitted. Default to Always. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#restart-policy' + type: string + runtimeClassName: + description: 'RuntimeClassName refers to a RuntimeClass object in the node.k8s.io group, which should be used to run this pod. If no RuntimeClass resource matches the named class, the pod will not be run. If unset or empty, the "legacy" RuntimeClass will be used, which is an implicit class with an empty definition that uses the default runtime handler. More info: https://git.k8s.io/enhancements/keps/sig-node/585-runtime-class' + type: string + schedulerName: + description: If specified, the pod will be dispatched by specified scheduler. If not specified, the pod will be dispatched by default scheduler. + type: string + schedulingGates: + description: "SchedulingGates is an opaque list of values that if specified will block scheduling the pod. If schedulingGates is not empty, the pod will stay in the SchedulingGated state and the scheduler will not attempt to schedule the pod. \n SchedulingGates can only be set at pod creation time, and be removed only afterwards. \n This is a beta feature enabled by the PodSchedulingReadiness feature gate." + type: array + items: + description: PodSchedulingGate is associated to a Pod to guard its scheduling. 
+ type: object + required: + - name + properties: + name: + description: Name of the scheduling gate. Each scheduling gate must have a unique name field. + type: string + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + securityContext: + description: 'SecurityContext holds pod-level security attributes and common container settings. Optional: Defaults to empty. See type description for default values of each field.' + type: object + properties: + fsGroup: + description: "A special supplemental group that applies to all containers in a pod. Some volume types allow the Kubelet to change the ownership of that volume to be owned by the pod: \n 1. The owning GID will be the FSGroup 2. The setgid bit is set (new files created in the volume will be owned by FSGroup) 3. The permission bits are OR'd with rw-rw---- \n If unset, the Kubelet will not modify the ownership and permissions of any volume. Note that this field cannot be set when spec.os.name is windows." + type: integer + format: int64 + fsGroupChangePolicy: + description: 'fsGroupChangePolicy defines behavior of changing ownership and permission of the volume before being exposed inside Pod. This field will only apply to volume types which support fsGroup based ownership(and permissions). It will have no effect on ephemeral volume types such as: secret, configmaps and emptydir. Valid values are "OnRootMismatch" and "Always". If not specified, "Always" is used. Note that this field cannot be set when spec.os.name is windows.' + type: string + runAsGroup: + description: The GID to run the entrypoint of the container process. Uses runtime default if unset. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. 
+ type: integer + format: int64 + runAsNonRoot: + description: Indicates that the container must run as a non-root user. If true, the Kubelet will validate the image at runtime to ensure that it does not run as UID 0 (root) and fail to start the container if it does. If unset or false, no such validation will be performed. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. + type: boolean + runAsUser: + description: The UID to run the entrypoint of the container process. Defaults to user specified in image metadata if unspecified. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. + type: integer + format: int64 + seLinuxOptions: + description: The SELinux context to be applied to all containers. If unspecified, the container runtime will allocate a random SELinux context for each container. May also be set in SecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence for that container. Note that this field cannot be set when spec.os.name is windows. + type: object + properties: + level: + description: Level is SELinux level label that applies to the container. + type: string + role: + description: Role is a SELinux role label that applies to the container. + type: string + type: + description: Type is a SELinux type label that applies to the container. + type: string + user: + description: User is a SELinux user label that applies to the container. + type: string + seccompProfile: + description: The seccomp options to use by the containers in this pod. Note that this field cannot be set when spec.os.name is windows. 
+ type: object + required: + - type + properties: + localhostProfile: + description: localhostProfile indicates a profile defined in a file on the node should be used. The profile must be preconfigured on the node to work. Must be a descending path, relative to the kubelet's configured seccomp profile location. Must be set if type is "Localhost". Must NOT be set for any other type. + type: string + type: + description: "type indicates which kind of seccomp profile will be applied. Valid options are: \n Localhost - a profile defined in a file on the node should be used. RuntimeDefault - the container runtime default profile should be used. Unconfined - no profile should be applied." + type: string + supplementalGroups: + description: A list of groups applied to the first process run in each container, in addition to the container's primary GID, the fsGroup (if specified), and group memberships defined in the container image for the uid of the container process. If unspecified, no additional groups are added to any container. Note that group memberships defined in the container image for the uid of the container process are still effective, even if they are not included in this list. Note that this field cannot be set when spec.os.name is windows. + type: array + items: + type: integer + format: int64 + sysctls: + description: Sysctls hold a list of namespaced sysctls used for the pod. Pods with unsupported sysctls (by the container runtime) might fail to launch. Note that this field cannot be set when spec.os.name is windows. + type: array + items: + description: Sysctl defines a kernel parameter to be set + type: object + required: + - name + - value + properties: + name: + description: Name of a property to set + type: string + value: + description: Value of a property to set + type: string + windowsOptions: + description: The Windows specific settings applied to all containers. If unspecified, the options within a container's SecurityContext will be used. 
If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. Note that this field cannot be set when spec.os.name is linux. + type: object + properties: + gmsaCredentialSpec: + description: GMSACredentialSpec is where the GMSA admission webhook (https://github.com/kubernetes-sigs/windows-gmsa) inlines the contents of the GMSA credential spec named by the GMSACredentialSpecName field. + type: string + gmsaCredentialSpecName: + description: GMSACredentialSpecName is the name of the GMSA credential spec to use. + type: string + hostProcess: + description: HostProcess determines if a container should be run as a 'Host Process' container. All of a Pod's containers must have the same effective HostProcess value (it is not allowed to have a mix of HostProcess containers and non-HostProcess containers). In addition, if HostProcess is true then HostNetwork must also be set to true. + type: boolean + runAsUserName: + description: The UserName in Windows to run the entrypoint of the container process. Defaults to the user specified in image metadata if unspecified. May also be set in PodSecurityContext. If set in both SecurityContext and PodSecurityContext, the value specified in SecurityContext takes precedence. + type: string + serviceAccount: + description: 'DeprecatedServiceAccount is a deprecated alias for ServiceAccountName. Deprecated: Use serviceAccountName instead.' + type: string + serviceAccountName: + description: 'ServiceAccountName is the name of the ServiceAccount to use to run this pod. More info: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/' + type: string + setHostnameAsFQDN: + description: If true the pod's hostname will be configured as the pod's FQDN, rather than the leaf name (the default). In Linux containers, this means setting the FQDN in the hostname field of the kernel (the nodename field of struct utsname). 
In Windows containers, this means setting the registry value of hostname for the registry key HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\Tcpip\\Parameters to FQDN. If a pod does not have FQDN, this has no effect. Default to false. + type: boolean + shareProcessNamespace: + description: 'Share a single process namespace between all of the containers in a pod. When this is set containers will be able to view and signal processes from other containers in the same pod, and the first process in each container will not be assigned PID 1. HostPID and ShareProcessNamespace cannot both be set. Optional: Default to false.' + type: boolean + subdomain: + description: If specified, the fully qualified Pod hostname will be "...svc.". If not specified, the pod will not have a domainname at all. + type: string + terminationGracePeriodSeconds: + description: Optional duration in seconds the pod needs to terminate gracefully. May be decreased in delete request. Value must be non-negative integer. The value zero indicates stop immediately via the kill signal (no opportunity to shut down). If this value is nil, the default grace period will be used instead. The grace period is the duration in seconds after the processes running in the pod are sent a termination signal and the time when the processes are forcibly halted with a kill signal. Set this value longer than the expected cleanup time for your process. Defaults to 30 seconds. + type: integer + format: int64 + tolerations: + description: If specified, the pod's tolerations. + type: array + items: + description: The pod this Toleration is attached to tolerates any taint that matches the triple using the matching operator . + type: object + properties: + effect: + description: Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. 
+ type: string + key: + description: Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category. + type: string + tolerationSeconds: + description: TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. + type: integer + format: int64 + value: + description: Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + topologySpreadConstraints: + description: TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed. + type: array + items: + description: TopologySpreadConstraint specifies how to spread matching pods among the given topology. + type: object + required: + - maxSkew + - topologyKey + - whenUnsatisfiable + properties: + labelSelector: + description: LabelSelector is used to find matching pods. Pods that match this label selector are counted to determine the number of pods in their corresponding topology domain. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. 
+ type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + matchLabelKeys: + description: "MatchLabelKeys is a set of pod label keys to select the pods over which spreading will be calculated. The keys are used to lookup values from the incoming pod labels, those key-value labels are ANDed with labelSelector to select the group of existing pods over which spreading will be calculated for the incoming pod. The same key is forbidden to exist in both MatchLabelKeys and LabelSelector. MatchLabelKeys cannot be set when LabelSelector isn't set. Keys that don't exist in the incoming pod labels will be ignored. A null or empty list means only match against labelSelector. \n This is a beta field and requires the MatchLabelKeysInPodTopologySpread feature gate to be enabled (enabled by default)." 
+ type: array + items: + type: string + x-kubernetes-list-type: atomic + maxSkew: + description: 'MaxSkew describes the degree to which pods may be unevenly distributed. When `whenUnsatisfiable=DoNotSchedule`, it is the maximum permitted difference between the number of matching pods in the target topology and the global minimum. The global minimum is the minimum number of matching pods in an eligible domain or zero if the number of eligible domains is less than MinDomains. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 2/2/1: In this case, the global minimum is 1. | zone1 | zone2 | zone3 | | P P | P P | P | - if MaxSkew is 1, incoming pod can only be scheduled to zone3 to become 2/2/2; scheduling it onto zone1(zone2) would make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1). - if MaxSkew is 2, incoming pod can be scheduled onto any zone. When `whenUnsatisfiable=ScheduleAnyway`, it is used to give higher precedence to topologies that satisfy it. It''s a required field. Default value is 1 and 0 is not allowed.' + type: integer + format: int32 + minDomains: + description: "MinDomains indicates a minimum number of eligible domains. When the number of eligible domains with matching topology keys is less than minDomains, Pod Topology Spread treats \"global minimum\" as 0, and then the calculation of Skew is performed. And when the number of eligible domains with matching topology keys equals or greater than minDomains, this value has no effect on scheduling. As a result, when the number of eligible domains is less than minDomains, scheduler won't schedule more than maxSkew Pods to those domains. If value is nil, the constraint behaves as if MinDomains is equal to 1. Valid values are integers greater than 0. When value is not nil, WhenUnsatisfiable must be DoNotSchedule. 
\n For example, in a 3-zone cluster, MaxSkew is set to 2, MinDomains is set to 5 and pods with the same labelSelector spread as 2/2/2: | zone1 | zone2 | zone3 | | P P | P P | P P | The number of domains is less than 5(MinDomains), so \"global minimum\" is treated as 0. In this situation, new pod with the same labelSelector cannot be scheduled, because computed skew will be 3(3 - 0) if new Pod is scheduled to any of the three zones, it will violate MaxSkew. \n This is a beta field and requires the MinDomainsInPodTopologySpread feature gate to be enabled (enabled by default)." + type: integer + format: int32 + nodeAffinityPolicy: + description: "NodeAffinityPolicy indicates how we will treat Pod's nodeAffinity/nodeSelector when calculating pod topology spread skew. Options are: - Honor: only nodes matching nodeAffinity/nodeSelector are included in the calculations. - Ignore: nodeAffinity/nodeSelector are ignored. All nodes are included in the calculations. \n If this value is nil, the behavior is equivalent to the Honor policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag." + type: string + nodeTaintsPolicy: + description: "NodeTaintsPolicy indicates how we will treat node taints when calculating pod topology spread skew. Options are: - Honor: nodes without taints, along with tainted nodes for which the incoming pod has a toleration, are included. - Ignore: node taints are ignored. All nodes are included. \n If this value is nil, the behavior is equivalent to the Ignore policy. This is a beta-level feature default enabled by the NodeInclusionPolicyInPodTopologySpread feature flag." + type: string + topologyKey: + description: TopologyKey is the key of node labels. Nodes that have a label with this key and identical values are considered to be in the same topology. We consider each as a "bucket", and try to put balanced number of pods into each bucket. We define a domain as a particular instance of a topology. 
Also, we define an eligible domain as a domain whose nodes meet the requirements of nodeAffinityPolicy and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname", each Node is a domain of that topology. And, if TopologyKey is "topology.kubernetes.io/zone", each zone is a domain of that topology. It's a required field. + type: string + whenUnsatisfiable: + description: 'WhenUnsatisfiable indicates how to deal with a pod if it doesn''t satisfy the spread constraint. - DoNotSchedule (default) tells the scheduler not to schedule it. - ScheduleAnyway tells the scheduler to schedule the pod in any location, but giving higher precedence to topologies that would help reduce the skew. A constraint is considered "Unsatisfiable" for an incoming pod if and only if every possible node assignment for that pod would violate "MaxSkew" on some topology. For example, in a 3-zone cluster, MaxSkew is set to 1, and pods with the same labelSelector spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P | P | P | If WhenUnsatisfiable is set to DoNotSchedule, incoming pod can only be scheduled to zone2(zone3) to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3) satisfies MaxSkew(1). In other words, the cluster can still be imbalanced, but scheduler won''t make it *more* imbalanced. It''s a required field.' + type: string + x-kubernetes-list-map-keys: + - topologyKey + - whenUnsatisfiable + x-kubernetes-list-type: map + volumes: + description: 'List of volumes that can be mounted by containers belonging to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes' + type: array + items: + description: Volume represents a named volume in a pod that may be accessed by any container in the pod. + type: object + required: + - name + properties: + awsElasticBlockStore: + description: 'awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet''s host machine and then exposed to the pod. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore' + type: object + required: + - volumeID + properties: + fsType: + description: 'fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine' + type: string + partition: + description: 'partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty).' + type: integer + format: int32 + readOnly: + description: 'readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore' + type: boolean + volumeID: + description: 'volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore' + type: string + azureDisk: + description: azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. + type: object + required: + - diskName + - diskURI + properties: + cachingMode: + description: 'cachingMode is the Host Caching mode: None, Read Only, Read Write.' + type: string + diskName: + description: diskName is the Name of the data disk in the blob storage + type: string + diskURI: + description: diskURI is the URI of data disk in the blob storage + type: string + fsType: + description: fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". 
Implicitly inferred to be "ext4" if unspecified. + type: string + kind: + description: 'kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared' + type: string + readOnly: + description: readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. + type: boolean + azureFile: + description: azureFile represents an Azure File Service mount on the host and bind mount to the pod. + type: object + required: + - secretName + - shareName + properties: + readOnly: + description: readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. + type: boolean + secretName: + description: secretName is the name of secret that contains Azure Storage Account Name and Key + type: string + shareName: + description: shareName is the azure share Name + type: string + cephfs: + description: cephFS represents a Ceph FS mount on the host that shares a pod's lifetime + type: object + required: + - monitors + properties: + monitors: + description: 'monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it' + type: array + items: + type: string + path: + description: 'path is Optional: Used as the mounted root, rather than the full Ceph tree, default is /' + type: string + readOnly: + description: 'readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it' + type: boolean + secretFile: + description: 'secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it' + type: string + secretRef: + description: 'secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it' + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + user: + description: 'user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it' + type: string + cinder: + description: 'cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md' + type: object + required: + - volumeID + properties: + fsType: + description: 'fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md' + type: string + readOnly: + description: 'readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md' + type: boolean + secretRef: + description: 'secretRef is optional: points to a secret object containing parameters used to connect to OpenStack.' + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. 
apiVersion, kind, uid?' + type: string + volumeID: + description: 'volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md' + type: string + configMap: + description: configMap represents a configMap that should populate this volume + type: object + properties: + defaultMode: + description: 'defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + items: + description: items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. + type: array + items: + description: Maps a string key to a path within a volume. + type: object + required: + - key + - path + properties: + key: + description: key is the key to project. + type: string + mode: + description: 'mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: optional specify whether the ConfigMap or its keys must be defined + type: boolean + csi: + description: csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). + type: object + required: + - driver + properties: + driver: + description: driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. + type: string + fsType: + description: fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. + type: string + nodePublishSecretRef: + description: nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + readOnly: + description: readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). 
+ type: boolean + volumeAttributes: + description: volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. + type: object + additionalProperties: + type: string + downwardAPI: + description: downwardAPI represents downward API about the pod that should populate this volume + type: object + properties: + defaultMode: + description: 'Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + items: + description: Items is a list of downward API volume file + type: array + items: + description: DownwardAPIVolumeFile represents information to create the file containing the pod field + type: object + required: + - path + properties: + fieldRef: + description: 'Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.' + type: object + required: + - fieldPath + properties: + apiVersion: + description: Version of the schema the FieldPath is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the specified API version. + type: string + mode: + description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: 'Required: Path is the relative path name of the file to be created. Must not be absolute or contain the ''..'' path. Must be utf-8 encoded. The first item of the relative path must not start with ''..''' + type: string + resourceFieldRef: + description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.' + type: object + required: + - resource + properties: + containerName: + description: 'Container name: required for volumes, optional for env vars' + type: string + divisor: + description: Specifies the output format of the exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + emptyDir: + description: 'emptyDir represents a temporary directory that shares a pod''s lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir' + type: object + properties: + medium: + description: 'medium represents what type of storage medium should back this directory. The default is "" which means to use the node''s default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir' + type: string + sizeLimit: + description: 'sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. 
The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir' + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + ephemeral: + description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time." + type: object + properties: + volumeClaimTemplate: + description: "Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `-` where `` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long). 
\n An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to be updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. \n This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. \n Required, must not be nil." + type: object + required: + - spec + properties: + metadata: + description: May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. + type: object + spec: + description: The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. + type: object + properties: + accessModes: + description: 'accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1' + type: array + items: + type: string + dataSource: + description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.' 
+ type: object + required: + - kind + - name + properties: + apiGroup: + description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + dataSourceRef: + description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. 
(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.' + type: object + required: + - kind + - name + properties: + apiGroup: + description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + namespace: + description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + type: string + resources: + description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources' + type: object + properties: + claims: + description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable. It can only be set for containers." + type: array + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + type: object + required: + - name + properties: + name: + description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. 
It makes that resource available inside a container. + type: string + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + description: 'Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + requests: + description: 'Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + selector: + description: selector is a label query over volumes to consider for binding. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + storageClassName: + description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1' + type: string + volumeMode: + description: volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. + type: string + volumeName: + description: volumeName is the binding reference to the PersistentVolume backing this claim. + type: string + fc: + description: fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. + type: object + properties: + fsType: + description: 'fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine' + type: string + lun: + description: 'lun is Optional: FC target lun number' + type: integer + format: int32 + readOnly: + description: 'readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.' 
+ type: boolean + targetWWNs: + description: 'targetWWNs is Optional: FC target worldwide names (WWNs)' + type: array + items: + type: string + wwids: + description: 'wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.' + type: array + items: + type: string + flexVolume: + description: flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. + type: object + required: + - driver + properties: + driver: + description: driver is the name of the driver to use for this volume. + type: string + fsType: + description: fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. + type: string + options: + description: 'options is Optional: this field holds extra command options if any.' + type: object + additionalProperties: + type: string + readOnly: + description: 'readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.' + type: boolean + secretRef: + description: 'secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.' + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + flocker: + description: flocker represents a Flocker volume attached to a kubelet's host machine. 
This depends on the Flocker control service being running + type: object + properties: + datasetName: + description: datasetName is Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated + type: string + datasetUUID: + description: datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset + type: string + gcePersistentDisk: + description: 'gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet''s host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk' + type: object + required: + - pdName + properties: + fsType: + description: 'fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine' + type: string + partition: + description: 'partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk' + type: integer + format: int32 + pdName: + description: 'pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk' + type: string + readOnly: + description: 'readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk' + type: boolean + gitRepo: + description: 'gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod''s container.' + type: object + required: + - repository + properties: + directory: + description: directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. + type: string + repository: + description: repository is the URL + type: string + revision: + description: revision is the commit hash for the specified revision. + type: string + glusterfs: + description: 'glusterfs represents a Glusterfs mount on the host that shares a pod''s lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md' + type: object + required: + - endpoints + - path + properties: + endpoints: + description: 'endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod' + type: string + path: + description: 'path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod' + type: string + readOnly: + description: 'readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod' + type: boolean + hostPath: + description: 'hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write.' + type: object + required: + - path + properties: + path: + description: 'path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath' + type: string + type: + description: 'type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath' + type: string + iscsi: + description: 'iscsi represents an ISCSI Disk resource that is attached to a kubelet''s host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md' + type: object + required: + - iqn + - lun + - targetPortal + properties: + chapAuthDiscovery: + description: chapAuthDiscovery defines whether iSCSI Discovery CHAP authentication is supported + type: boolean + chapAuthSession: + description: chapAuthSession defines whether iSCSI Session CHAP authentication is supported + type: boolean + fsType: + description: 'fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine' + type: string + initiatorName: + description: initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, a new iSCSI interface <target portal>:<volume name> will be created for the connection. + type: string + iqn: + description: iqn is the target iSCSI Qualified Name. + type: string + iscsiInterface: + description: iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). 
+ type: string + lun: + description: lun represents iSCSI Target Lun number. + type: integer + format: int32 + portals: + description: portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). + type: array + items: + type: string + readOnly: + description: readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. + type: boolean + secretRef: + description: secretRef is the CHAP Secret for iSCSI target and initiator authentication + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + targetPortal: + description: targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). + type: string + name: + description: 'name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' + type: string + nfs: + description: 'nfs represents an NFS mount on the host that shares a pod''s lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs' + type: object + required: + - path + - server + properties: + path: + description: 'path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs' + type: string + readOnly: + description: 'readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs' + type: boolean + server: + description: 'server is the hostname or IP address of the NFS server. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs' + type: string + persistentVolumeClaim: + description: 'persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims' + type: object + required: + - claimName + properties: + claimName: + description: 'claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims' + type: string + readOnly: + description: readOnly Will force the ReadOnly setting in VolumeMounts. Default false. + type: boolean + photonPersistentDisk: + description: photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine + type: object + required: + - pdID + properties: + fsType: + description: fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. + type: string + pdID: + description: pdID is the ID that identifies Photon Controller persistent disk + type: string + portworxVolume: + description: portworxVolume represents a portworx volume attached and mounted on kubelets host machine + type: object + required: + - volumeID + properties: + fsType: + description: fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. + type: string + readOnly: + description: readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
+ type: boolean + volumeID: + description: volumeID uniquely identifies a Portworx volume + type: string + projected: + description: projected items for all in one resources secrets, configmaps, and downward API + type: object + properties: + defaultMode: + description: defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. + type: integer + format: int32 + sources: + description: sources is the list of volume projections + type: array + items: + description: Projection that may be projected along with other supported volume types + type: object + properties: + configMap: + description: configMap information about the configMap data to project + type: object + properties: + items: + description: items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. + type: array + items: + description: Maps a string key to a path within a volume. + type: object + required: + - key + - path + properties: + key: + description: key is the key to project. + type: string + mode: + description: 'mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. 
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: optional specify whether the ConfigMap or its keys must be defined + type: boolean + downwardAPI: + description: downwardAPI information about the downwardAPI data to project + type: object + properties: + items: + description: Items is a list of DownwardAPIVolume file + type: array + items: + description: DownwardAPIVolumeFile represents information to create the file containing the pod field + type: object + required: + - path + properties: + fieldRef: + description: 'Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.' + type: object + required: + - fieldPath + properties: + apiVersion: + description: Version of the schema the FieldPath is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the specified API version. + type: string + mode: + description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: 'Required: Path is the relative path name of the file to be created. Must not be absolute or contain the ''..'' path. Must be utf-8 encoded. The first item of the relative path must not start with ''..''' + type: string + resourceFieldRef: + description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.' + type: object + required: + - resource + properties: + containerName: + description: 'Container name: required for volumes, optional for env vars' + type: string + divisor: + description: Specifies the output format of the exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + secret: + description: secret information about the secret data to project + type: object + properties: + items: + description: items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. + type: array + items: + description: Maps a string key to a path within a volume. + type: object + required: + - key + - path + properties: + key: + description: key is the key to project. 
+ type: string + mode: + description: 'mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: optional field specifies whether the Secret or its key must be defined + type: boolean + serviceAccountToken: + description: serviceAccountToken is information about the serviceAccountToken data to project + type: object + required: + - path + properties: + audience: + description: audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. + type: string + expirationSeconds: + description: expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours. Defaults to 1 hour and must be at least 10 minutes. 
+ type: integer + format: int64 + path: + description: path is the path relative to the mount point of the file to project the token into. + type: string + quobyte: + description: quobyte represents a Quobyte mount on the host that shares a pod's lifetime + type: object + required: + - registry + - volume + properties: + group: + description: group to map volume access to. Default is no group + type: string + readOnly: + description: readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. + type: boolean + registry: + description: registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes + type: string + tenant: + description: tenant owning the given Quobyte volume in the Backend. Used with dynamically provisioned Quobyte volumes, value is set by the plugin + type: string + user: + description: user to map volume access to. Defaults to serviceaccount user + type: string + volume: + description: volume is a string that references an already created Quobyte volume by name. + type: string + rbd: + description: 'rbd represents a Rados Block Device mount on the host that shares a pod''s lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md' + type: object + required: + - image + - monitors + properties: + fsType: + description: 'fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine' + type: string + image: + description: 'image is the rados image name. 
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: string + keyring: + description: 'keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: string + monitors: + description: 'monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: array + items: + type: string + pool: + description: 'pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: string + readOnly: + description: 'readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: boolean + secretRef: + description: 'secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + user: + description: 'user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: string + scaleIO: + description: scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. + type: object + required: + - gateway + - secretRef + - system + properties: + fsType: + description: fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". + type: string + gateway: + description: gateway is the host address of the ScaleIO API Gateway. 
+ type: string + protectionDomain: + description: protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. + type: string + readOnly: + description: readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. + type: boolean + secretRef: + description: secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + sslEnabled: + description: sslEnabled Flag enable/disable SSL communication with Gateway, default false + type: boolean + storageMode: + description: storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. + type: string + storagePool: + description: storagePool is the ScaleIO Storage Pool associated with the protection domain. + type: string + system: + description: system is the name of the storage system as configured in ScaleIO. + type: string + volumeName: + description: volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. + type: string + secret: + description: 'secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret' + type: object + properties: + defaultMode: + description: 'defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + items: + description: items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. + type: array + items: + description: Maps a string key to a path within a volume. + type: object + required: + - key + - path + properties: + key: + description: key is the key to project. + type: string + mode: + description: 'mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. + type: string + optional: + description: optional field specify whether the Secret or its keys must be defined + type: boolean + secretName: + description: 'secretName is the name of the secret in the pod''s namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret' + type: string + storageos: + description: storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. 
+ type: object + properties: + fsType: + description: fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. + type: string + readOnly: + description: readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. + type: boolean + secretRef: + description: secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + volumeName: + description: volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. + type: string + volumeNamespace: + description: volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. + type: string + vsphereVolume: + description: vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine + type: object + required: + - volumePath + properties: + fsType: + description: fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. + type: string + storagePolicyID: + description: storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. 
+ type: string + storagePolicyName: + description: storagePolicyName is the storage Policy Based Management (SPBM) profile name. + type: string + volumePath: + description: volumePath is the path that identifies vSphere volume vmdk + type: string + permissions: + type: array + items: + description: StrategyDeploymentPermissions describe the rbac rules and service account needed by the install strategy + type: object + required: + - rules + - serviceAccountName + properties: + rules: + type: array + items: + description: PolicyRule holds information that describes a policy rule, but does not contain information about who the rule applies to or which namespace the rule applies to. + type: object + required: + - verbs + properties: + apiGroups: + description: APIGroups is the name of the APIGroup that contains the resources. If multiple API groups are specified, any action requested against one of the enumerated resources in any API group will be allowed. "" represents the core API group and "*" represents all API groups. + type: array + items: + type: string + nonResourceURLs: + description: NonResourceURLs is a set of partial urls that a user should have access to. *s are allowed, but only as the full, final step in the path Since non-resource URLs are not namespaced, this field is only applicable for ClusterRoles referenced from a ClusterRoleBinding. Rules can either apply to API resources (such as "pods" or "secrets") or non-resource URL paths (such as "/api"), but not both. + type: array + items: + type: string + resourceNames: + description: ResourceNames is an optional white list of names that the rule applies to. An empty set means that everything is allowed. + type: array + items: + type: string + resources: + description: Resources is a list of resources this rule applies to. '*' represents all resources. + type: array + items: + type: string + verbs: + description: Verbs is a list of Verbs that apply to ALL the ResourceKinds contained in this rule. 
'*' represents all verbs. + type: array + items: + type: string + serviceAccountName: + type: string + strategy: + type: string + installModes: + description: InstallModes specify supported installation types + type: array + items: + description: InstallMode associates an InstallModeType with a flag representing if the CSV supports it + type: object + required: + - supported + - type + properties: + supported: + type: boolean + type: + description: InstallModeType is a supported type of install mode for CSV installation + type: string + keywords: + description: A list of keywords describing the operator. + type: array + items: + type: string + labels: + description: Map of string keys and values that can be used to organize and categorize (scope and select) objects. + type: object + additionalProperties: + type: string + links: + description: A list of links related to the operator. + type: array + items: + type: object + properties: + name: + type: string + url: + type: string + maintainers: + description: A list of organizational entities maintaining the operator. + type: array + items: + type: object + properties: + email: + type: string + name: + type: string + maturity: + type: string + minKubeVersion: + type: string + nativeAPIs: + type: array + items: + description: GroupVersionKind unambiguously identifies a kind. It doesn't anonymously include GroupVersion to avoid automatic coercion. It doesn't use a GroupVersion to avoid custom marshalling + type: object + required: + - group + - kind + - version + properties: + group: + type: string + kind: + type: string + version: + type: string + provider: + description: The publishing entity behind the operator. + type: object + properties: + name: + type: string + url: + type: string + relatedImages: + description: List any related images, or other container images that your Operator might require to perform their functions. This list should also include operand images as well. 
All image references should be specified by digest (SHA) and not by tag. This field is only used during catalog creation and plays no part in cluster runtime. + type: array + items: + type: object + required: + - image + - name + properties: + image: + type: string + name: + type: string + replaces: + description: The name of a CSV this one replaces. Should match the `metadata.Name` field of the old CSV. + type: string + selector: + description: Label selector for related resources. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + skips: + description: The name(s) of one or more CSV(s) that should be skipped in the upgrade graph. Should match the `metadata.Name` field of the CSV that should be skipped. 
This field is only used during catalog creation and plays no part in cluster runtime. + type: array + items: + type: string + version: + type: string + webhookdefinitions: + type: array + items: + description: WebhookDescription provides details to OLM about required webhooks + type: object + required: + - admissionReviewVersions + - generateName + - sideEffects + - type + properties: + admissionReviewVersions: + type: array + items: + type: string + containerPort: + type: integer + format: int32 + default: 443 + maximum: 65535 + minimum: 1 + conversionCRDs: + type: array + items: + type: string + deploymentName: + type: string + failurePolicy: + description: FailurePolicyType specifies a failure policy that defines how unrecognized errors from the admission endpoint are handled. + type: string + generateName: + type: string + matchPolicy: + description: MatchPolicyType specifies the type of match policy. + type: string + objectSelector: + description: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. 
This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + reinvocationPolicy: + description: ReinvocationPolicyType specifies what type of policy the admission hook uses. + type: string + rules: + type: array + items: + description: RuleWithOperations is a tuple of Operations and Resources. It is recommended to make sure that all the tuple expansions are valid. + type: object + properties: + apiGroups: + description: APIGroups is the API groups the resources belong to. '*' is all groups. If '*' is present, the length of the slice must be one. Required. + type: array + items: + type: string + x-kubernetes-list-type: atomic + apiVersions: + description: APIVersions is the API versions the resources belong to. '*' is all versions. If '*' is present, the length of the slice must be one. Required. + type: array + items: + type: string + x-kubernetes-list-type: atomic + operations: + description: Operations is the operations the admission hook cares about - CREATE, UPDATE, DELETE, CONNECT or * for all of those operations and any future admission operations that are added. If '*' is present, the length of the slice must be one. Required. + type: array + items: + description: OperationType specifies an operation for a request. + type: string + x-kubernetes-list-type: atomic + resources: + description: "Resources is a list of resources this rule applies to. \n For example: 'pods' means pods. 'pods/log' means the log subresource of pods. '*' means all resources, but not subresources. 'pods/*' means all subresources of pods. '*/scale' means all scale subresources. 
'*/*' means all resources and their subresources. \n If wildcard is present, the validation rule will ensure resources do not overlap with each other. \n Depending on the enclosing object, subresources might not be allowed. Required." + type: array + items: + type: string + x-kubernetes-list-type: atomic + scope: + description: scope specifies the scope of this rule. Valid values are "Cluster", "Namespaced", and "*" "Cluster" means that only cluster-scoped resources will match this rule. Namespace API objects are cluster-scoped. "Namespaced" means that only namespaced resources will match this rule. "*" means that there are no scope restrictions. Subresources match the scope of their parent resource. Default is "*". + type: string + sideEffects: + description: SideEffectClass specifies the types of side effects a webhook may have. + type: string + targetPort: + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + timeoutSeconds: + type: integer + format: int32 + type: + description: WebhookAdmissionType is the type of admission webhooks supported by OLM + type: string + enum: + - ValidatingAdmissionWebhook + - MutatingAdmissionWebhook + - ConversionWebhook + webhookPath: + type: string + status: + description: ClusterServiceVersionStatus represents information about the status of a CSV. Status may trail the actual state of a system. + type: object + properties: + certsLastUpdated: + description: Last time the owned APIService certs were updated + type: string + format: date-time + certsRotateAt: + description: Time the owned APIService certs will rotate next + type: string + format: date-time + cleanup: + description: CleanupStatus represents information about the status of cleanup while a CSV is pending deletion + type: object + properties: + pendingDeletion: + description: PendingDeletion is the list of custom resource objects that are pending deletion and blocked on finalizers. 
This indicates the progress of cleanup that is blocking CSV deletion or operator uninstall. + type: array + items: + description: ResourceList represents a list of resources which are of the same Group/Kind + type: object + required: + - group + - instances + - kind + properties: + group: + type: string + instances: + type: array + items: + type: object + required: + - name + properties: + name: + type: string + namespace: + description: Namespace can be empty for cluster-scoped resources + type: string + kind: + type: string + conditions: + description: List of conditions, a history of state transitions + type: array + items: + description: Conditions appear in the status as a record of state transitions on the ClusterServiceVersion + type: object + properties: + lastTransitionTime: + description: Last time the status transitioned from one status to another. + type: string + format: date-time + lastUpdateTime: + description: Last time we updated the status + type: string + format: date-time + message: + description: A human readable message indicating details about why the ClusterServiceVersion is in this condition. + type: string + phase: + description: Condition of the ClusterServiceVersion + type: string + reason: + description: A brief CamelCase message indicating details about why the ClusterServiceVersion is in this state. e.g. 'RequirementsNotMet' + type: string + lastTransitionTime: + description: Last time the status transitioned from one status to another. + type: string + format: date-time + lastUpdateTime: + description: Last time we updated the status + type: string + format: date-time + message: + description: A human readable message indicating details about why the ClusterServiceVersion is in this condition. + type: string + phase: + description: Current condition of the ClusterServiceVersion + type: string + reason: + description: A brief CamelCase message indicating details about why the ClusterServiceVersion is in this state. e.g. 
'RequirementsNotMet' + type: string + requirementStatus: + description: The status of each requirement for this CSV + type: array + items: + type: object + required: + - group + - kind + - message + - name + - status + - version + properties: + dependents: + type: array + items: + description: DependentStatus is the status for a dependent requirement (to prevent infinite nesting) + type: object + required: + - group + - kind + - status + - version + properties: + group: + type: string + kind: + type: string + message: + type: string + status: + description: StatusReason is a camelcased reason for the status of a RequirementStatus or DependentStatus + type: string + uuid: + type: string + version: + type: string + group: + type: string + kind: + type: string + message: + type: string + name: + type: string + status: + description: StatusReason is a camelcased reason for the status of a RequirementStatus or DependentStatus + type: string + uuid: + type: string + version: + type: string + served: true + storage: true + subresources: + status: {} +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.9.0 + creationTimestamp: null + name: installplans.operators.coreos.com +spec: + group: operators.coreos.com + names: + categories: + - olm + kind: InstallPlan + listKind: InstallPlanList + plural: installplans + shortNames: + - ip + singular: installplan + scope: Namespaced + versions: + - additionalPrinterColumns: + - description: The first CSV in the list of clusterServiceVersionNames + jsonPath: .spec.clusterServiceVersionNames[0] + name: CSV + type: string + - description: The approval mode + jsonPath: .spec.approval + name: Approval + type: string + - jsonPath: .spec.approved + name: Approved + type: boolean + name: v1alpha1 + schema: + openAPIV3Schema: + description: InstallPlan defines the installation of a set of operators. 
+ type: object + required: + - metadata + - spec + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: InstallPlanSpec defines a set of Application resources to be installed + type: object + required: + - approval + - approved + - clusterServiceVersionNames + properties: + approval: + description: Approval is the user approval policy for an InstallPlan. It must be one of "Automatic" or "Manual". + type: string + approved: + type: boolean + clusterServiceVersionNames: + type: array + items: + type: string + generation: + type: integer + source: + type: string + sourceNamespace: + type: string + status: + description: "InstallPlanStatus represents the information about the status of steps required to complete installation. \n Status may trail the actual state of a system." + type: object + required: + - catalogSources + - phase + properties: + attenuatedServiceAccountRef: + description: AttenuatedServiceAccountRef references the service account that is used to do scoped operator install. + type: object + properties: + apiVersion: + description: API version of the referent. + type: string + fieldPath: + description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. 
For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.' + type: string + kind: + description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' + type: string + namespace: + description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/' + type: string + resourceVersion: + description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency' + type: string + uid: + description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids' + type: string + bundleLookups: + description: BundleLookups is the set of in-progress requests to pull and unpackage bundle content to the cluster. + type: array + items: + description: BundleLookup is a request to pull and unpackage the content of a bundle to the cluster. + type: object + required: + - catalogSourceRef + - identifier + - path + - replaces + properties: + catalogSourceRef: + description: CatalogSourceRef is a reference to the CatalogSource the bundle path was resolved from. + type: object + properties: + apiVersion: + description: API version of the referent. 
+ type: string + fieldPath: + description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.' + type: string + kind: + description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' + type: string + namespace: + description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/' + type: string + resourceVersion: + description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency' + type: string + uid: + description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids' + type: string + conditions: + description: Conditions represents the overall state of a BundleLookup. + type: array + items: + type: object + required: + - status + - type + properties: + lastTransitionTime: + description: Last time the condition transitioned from one status to another. + type: string + format: date-time + lastUpdateTime: + description: Last time the condition was probed. 
+ type: string + format: date-time + message: + description: A human readable message indicating details about the transition. + type: string + reason: + description: The reason for the condition's last transition. + type: string + status: + description: Status of the condition, one of True, False, Unknown. + type: string + type: + description: Type of condition. + type: string + identifier: + description: Identifier is the catalog-unique name of the operator (the name of the CSV for bundles that contain CSVs) + type: string + path: + description: Path refers to the location of a bundle to pull. It's typically an image reference. + type: string + properties: + description: The effective properties of the unpacked bundle. + type: string + replaces: + description: Replaces is the name of the bundle to replace with the one found at Path. + type: string + catalogSources: + type: array + items: + type: string + conditions: + type: array + items: + description: InstallPlanCondition represents the overall status of the execution of an InstallPlan. + type: object + properties: + lastTransitionTime: + type: string + format: date-time + lastUpdateTime: + type: string + format: date-time + message: + type: string + reason: + description: ConditionReason is a camelcased reason for the state transition. + type: string + status: + type: string + type: + description: InstallPlanConditionType describes the state of an InstallPlan at a certain point as a whole. + type: string + message: + description: Message is a human-readable message containing detailed information that may be important to understanding why the plan has its current status. + type: string + phase: + description: InstallPlanPhase is the current status of a InstallPlan as a whole. + type: string + plan: + type: array + items: + description: Step represents the status of an individual step in an InstallPlan. 
+ type: object + required: + - resolving + - resource + - status + properties: + optional: + type: boolean + resolving: + type: string + resource: + description: StepResource represents the status of a resource to be tracked by an InstallPlan. + type: object + required: + - group + - kind + - name + - sourceName + - sourceNamespace + - version + properties: + group: + type: string + kind: + type: string + manifest: + type: string + name: + type: string + sourceName: + type: string + sourceNamespace: + type: string + version: + type: string + status: + description: StepStatus is the current status of a particular resource an in InstallPlan + type: string + startTime: + description: StartTime is the time when the controller began applying the resources listed in the plan to the cluster. + type: string + format: date-time + served: true + storage: true + subresources: + status: {} +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.9.0 + creationTimestamp: null + name: olmconfigs.operators.coreos.com +spec: + group: operators.coreos.com + names: + categories: + - olm + kind: OLMConfig + listKind: OLMConfigList + plural: olmconfigs + singular: olmconfig + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + description: OLMConfig is a resource responsible for configuring OLM. + type: object + required: + - metadata + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. 
In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: OLMConfigSpec is the spec for an OLMConfig resource. + type: object + properties: + features: + description: Features contains the list of configurable OLM features. + type: object + properties: + disableCopiedCSVs: + description: DisableCopiedCSVs is used to disable OLM's "Copied CSV" feature for operators installed at the cluster scope, where a cluster scoped operator is one that has been installed in an OperatorGroup that targets all namespaces. When reenabled, OLM will recreate the "Copied CSVs" for each cluster scoped operator. + type: boolean + packageServerSyncInterval: + description: PackageServerSyncInterval is used to define the sync interval for packagerserver pods. Packageserver pods periodically check the status of CatalogSources; this specifies the period using duration format (e.g. "60m"). For this parameter, only hours ("h"), minutes ("m"), and seconds ("s") may be specified. When not specified, the period defaults to the value specified within the packageserver. + type: string + pattern: ^([0-9]+(\.[0-9]+)?(s|m|h))+$ + status: + description: OLMConfigStatus is the status for an OLMConfig resource. + type: object + properties: + conditions: + type: array + items: + description: "Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, \n type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }" + type: object + required: + - lastTransitionTime + - message + - reason + - status + - type + properties: + lastTransitionTime: + description: lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + type: string + format: date-time + message: + description: message is a human readable message indicating details about the transition. This may be an empty string. + type: string + maxLength: 32768 + observedGeneration: + description: observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. + type: integer + format: int64 + minimum: 0 + reason: + description: reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. + type: string + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + status: + description: status of the condition, one of True, False, Unknown. + type: string + enum: + - "True" + - "False" + - Unknown + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. 
--- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) + type: string + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + served: true + storage: true + subresources: + status: {} +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.9.0 + creationTimestamp: null + name: operatorconditions.operators.coreos.com +spec: + group: operators.coreos.com + names: + categories: + - olm + kind: OperatorCondition + listKind: OperatorConditionList + plural: operatorconditions + shortNames: + - condition + singular: operatorcondition + scope: Namespaced + versions: + - name: v1 + schema: + openAPIV3Schema: + description: OperatorCondition is a Custom Resource of type `OperatorCondition` which is used to convey information to OLM about the state of an operator. + type: object + required: + - metadata + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: OperatorConditionSpec allows a cluster admin to convey information about the state of an operator to OLM, potentially overriding state reported by the operator. + type: object + properties: + deployments: + type: array + items: + type: string + overrides: + type: array + items: + description: "Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, \n type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }" + type: object + required: + - message + - reason + - status + - type + properties: + lastTransitionTime: + description: lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + type: string + format: date-time + message: + description: message is a human readable message indicating details about the transition. This may be an empty string. + type: string + maxLength: 32768 + observedGeneration: + description: observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. 
+ type: integer + format: int64 + minimum: 0 + reason: + description: reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. + type: string + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + status: + description: status of the condition, one of True, False, Unknown. + type: string + enum: + - "True" + - "False" + - Unknown + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) + type: string + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + serviceAccounts: + type: array + items: + type: string + status: + description: OperatorConditionStatus allows an operator to convey information its state to OLM. The status may trail the actual state of a system. + type: object + properties: + conditions: + type: array + items: + description: "Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, \n type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }" + type: object + required: + - lastTransitionTime + - message + - reason + - status + - type + properties: + lastTransitionTime: + description: lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + type: string + format: date-time + message: + description: message is a human readable message indicating details about the transition. This may be an empty string. + type: string + maxLength: 32768 + observedGeneration: + description: observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. + type: integer + format: int64 + minimum: 0 + reason: + description: reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. + type: string + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + status: + description: status of the condition, one of True, False, Unknown. + type: string + enum: + - "True" + - "False" + - Unknown + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. 
--- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) + type: string + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + served: true + storage: false + subresources: + status: {} + - name: v2 + schema: + openAPIV3Schema: + description: OperatorCondition is a Custom Resource of type `OperatorCondition` which is used to convey information to OLM about the state of an operator. + type: object + required: + - metadata + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: OperatorConditionSpec allows an operator to report state to OLM and provides cluster admin with the ability to manually override state reported by the operator. + type: object + properties: + conditions: + type: array + items: + description: "Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, \n type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }" + type: object + required: + - lastTransitionTime + - message + - reason + - status + - type + properties: + lastTransitionTime: + description: lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + type: string + format: date-time + message: + description: message is a human readable message indicating details about the transition. This may be an empty string. + type: string + maxLength: 32768 + observedGeneration: + description: observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. + type: integer + format: int64 + minimum: 0 + reason: + description: reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. + type: string + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + status: + description: status of the condition, one of True, False, Unknown. + type: string + enum: + - "True" + - "False" + - Unknown + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. 
--- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) + type: string + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + deployments: + type: array + items: + type: string + overrides: + type: array + items: + description: "Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, \n type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }" + type: object + required: + - message + - reason + - status + - type + properties: + lastTransitionTime: + description: lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + type: string + format: date-time + message: + description: message is a human readable message indicating details about the transition. This may be an empty string. + type: string + maxLength: 32768 + observedGeneration: + description: observedGeneration represents the .metadata.generation that the condition was set based upon. 
For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. + type: integer + format: int64 + minimum: 0 + reason: + description: reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. + type: string + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + status: + description: status of the condition, one of True, False, Unknown. + type: string + enum: + - "True" + - "False" + - Unknown + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) + type: string + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + serviceAccounts: + type: array + items: + type: string + status: + description: OperatorConditionStatus allows OLM to convey which conditions have been observed. + type: object + properties: + conditions: + type: array + items: + description: "Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, \n type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }" + type: object + required: + - lastTransitionTime + - message + - reason + - status + - type + properties: + lastTransitionTime: + description: lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + type: string + format: date-time + message: + description: message is a human readable message indicating details about the transition. This may be an empty string. + type: string + maxLength: 32768 + observedGeneration: + description: observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. + type: integer + format: int64 + minimum: 0 + reason: + description: reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. + type: string + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + status: + description: status of the condition, one of True, False, Unknown. + type: string + enum: + - "True" + - "False" + - Unknown + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. 
--- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) + type: string + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + served: true + storage: true + subresources: + status: {} +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.9.0 + creationTimestamp: null + name: operatorgroups.operators.coreos.com +spec: + group: operators.coreos.com + names: + categories: + - olm + kind: OperatorGroup + listKind: OperatorGroupList + plural: operatorgroups + shortNames: + - og + singular: operatorgroup + scope: Namespaced + versions: + - name: v1 + schema: + openAPIV3Schema: + description: OperatorGroup is the unit of multitenancy for OLM managed operators. It constrains the installation of operators in its namespace to a specified set of target namespaces. + type: object + required: + - metadata + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: OperatorGroupSpec is the spec for an OperatorGroup resource. 
+ type: object + default: + upgradeStrategy: Default + properties: + selector: + description: Selector selects the OperatorGroup's target namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + serviceAccountName: + description: ServiceAccountName is the admin specified service account which will be used to deploy operator(s) in this operator group. + type: string + staticProvidedAPIs: + description: Static tells OLM not to update the OperatorGroup's providedAPIs annotation + type: boolean + targetNamespaces: + description: TargetNamespaces is an explicit set of namespaces to target. If it is set, Selector is ignored. 
+ type: array + items: + type: string + x-kubernetes-list-type: set + upgradeStrategy: + description: "UpgradeStrategy defines the upgrade strategy for operators in the namespace. There are currently two supported upgrade strategies: \n Default: OLM will only allow clusterServiceVersions to move to the replacing phase from the succeeded phase. This effectively means that OLM will not allow operators to move to the next version if an installation or upgrade has failed. \n TechPreviewUnsafeFailForward: OLM will allow clusterServiceVersions to move to the replacing phase from the succeeded phase or from the failed phase. Additionally, OLM will generate new installPlans when a subscription references a failed installPlan and the catalog has been updated with a new upgrade for the existing set of operators. \n WARNING: The TechPreviewUnsafeFailForward upgrade strategy is unsafe and may result in unexpected behavior or unrecoverable data loss unless you have deep understanding of the set of operators being managed in the namespace." + type: string + default: Default + enum: + - Default + - TechPreviewUnsafeFailForward + status: + description: OperatorGroupStatus is the status for an OperatorGroupResource. + type: object + required: + - lastUpdated + properties: + conditions: + description: Conditions is an array of the OperatorGroup's conditions. + type: array + items: + description: "Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, \n type FooStatus struct{ // Represents the observations of a foo's current state. 
// Known .status.conditions.type are: \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition `json:\"conditions,omitempty\" patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"` \n // other fields }" + type: object + required: + - lastTransitionTime + - message + - reason + - status + - type + properties: + lastTransitionTime: + description: lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. + type: string + format: date-time + message: + description: message is a human readable message indicating details about the transition. This may be an empty string. + type: string + maxLength: 32768 + observedGeneration: + description: observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. + type: integer + format: int64 + minimum: 0 + reason: + description: reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. + type: string + maxLength: 1024 + minLength: 1 + pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$ + status: + description: status of the condition, one of True, False, Unknown. + type: string + enum: + - "True" + - "False" + - Unknown + type: + description: type of condition in CamelCase or in foo.example.com/CamelCase. 
--- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) + type: string + maxLength: 316 + pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$ + lastUpdated: + description: LastUpdated is a timestamp of the last time the OperatorGroup's status was Updated. + type: string + format: date-time + namespaces: + description: Namespaces is the set of target namespaces for the OperatorGroup. + type: array + items: + type: string + x-kubernetes-list-type: set + serviceAccountRef: + description: ServiceAccountRef references the service account object specified. + type: object + properties: + apiVersion: + description: API version of the referent. + type: string + fieldPath: + description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.' + type: string + kind: + description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + name: + description: 'Name of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' + type: string + namespace: + description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/' + type: string + resourceVersion: + description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency' + type: string + uid: + description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids' + type: string + served: true + storage: true + subresources: + status: {} + - name: v1alpha2 + schema: + openAPIV3Schema: + description: OperatorGroup is the unit of multitenancy for OLM managed operators. It constrains the installation of operators in its namespace to a specified set of target namespaces. + type: object + required: + - metadata + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: OperatorGroupSpec is the spec for an OperatorGroup resource. + type: object + properties: + selector: + description: Selector selects the OperatorGroup's target namespaces. 
+ type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + serviceAccountName: + description: ServiceAccountName is the admin specified service account which will be used to deploy operator(s) in this operator group. + type: string + staticProvidedAPIs: + description: Static tells OLM not to update the OperatorGroup's providedAPIs annotation + type: boolean + targetNamespaces: + description: TargetNamespaces is an explicit set of namespaces to target. If it is set, Selector is ignored. + type: array + items: + type: string + status: + description: OperatorGroupStatus is the status for an OperatorGroupResource. + type: object + required: + - lastUpdated + properties: + lastUpdated: + description: LastUpdated is a timestamp of the last time the OperatorGroup's status was Updated. 
+ type: string + format: date-time + namespaces: + description: Namespaces is the set of target namespaces for the OperatorGroup. + type: array + items: + type: string + serviceAccountRef: + description: ServiceAccountRef references the service account object specified. + type: object + properties: + apiVersion: + description: API version of the referent. + type: string + fieldPath: + description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.' + type: string + kind: + description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' + type: string + namespace: + description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/' + type: string + resourceVersion: + description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency' + type: string + uid: + description: 'UID of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids' + type: string + served: true + storage: false + subresources: + status: {} +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.9.0 + creationTimestamp: null + name: operators.operators.coreos.com +spec: + group: operators.coreos.com + names: + categories: + - olm + kind: Operator + listKind: OperatorList + plural: operators + singular: operator + scope: Cluster + versions: + - name: v1 + schema: + openAPIV3Schema: + description: Operator represents a cluster operator. + type: object + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: OperatorSpec defines the desired state of Operator + type: object + status: + description: OperatorStatus defines the observed state of an Operator and its components + type: object + properties: + components: + description: Components describes resources that compose the operator. + type: object + required: + - labelSelector + properties: + labelSelector: + description: LabelSelector is a label query over a set of resources used to select the operator's components + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. 
The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + refs: + description: Refs are a set of references to the operator's component resources, selected with LabelSelector. + type: array + items: + description: RichReference is a reference to a resource, enriched with its status conditions. + type: object + properties: + apiVersion: + description: API version of the referent. + type: string + conditions: + description: Conditions represents the latest state of the component. + type: array + items: + description: Condition represent the latest available observations of an component's state. + type: object + required: + - status + - type + properties: + lastTransitionTime: + description: Last time the condition transitioned from one status to another. 
+ type: string + format: date-time + lastUpdateTime: + description: Last time the condition was probed + type: string + format: date-time + message: + description: A human readable message indicating details about the transition. + type: string + reason: + description: The reason for the condition's last transition. + type: string + status: + description: Status of the condition, one of True, False, Unknown. + type: string + type: + description: Type of condition. + type: string + fieldPath: + description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.' + type: string + kind: + description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' + type: string + namespace: + description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/' + type: string + resourceVersion: + description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency' + type: string + uid: + description: 'UID of the referent. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids' + type: string + served: true + storage: true + subresources: + status: {} +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.9.0 + creationTimestamp: null + name: subscriptions.operators.coreos.com +spec: + group: operators.coreos.com + names: + categories: + - olm + kind: Subscription + listKind: SubscriptionList + plural: subscriptions + shortNames: + - sub + - subs + singular: subscription + scope: Namespaced + versions: + - additionalPrinterColumns: + - description: The package subscribed to + jsonPath: .spec.name + name: Package + type: string + - description: The catalog source for the specified package + jsonPath: .spec.source + name: Source + type: string + - description: The channel of updates to subscribe to + jsonPath: .spec.channel + name: Channel + type: string + name: v1alpha1 + schema: + openAPIV3Schema: + description: Subscription keeps operators up to date by tracking changes to Catalogs. + type: object + required: + - metadata + - spec + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: SubscriptionSpec defines an Application that can be installed + type: object + required: + - name + - source + - sourceNamespace + properties: + channel: + type: string + config: + description: SubscriptionConfig contains configuration specified for a subscription. + type: object + properties: + affinity: + description: If specified, overrides the pod's scheduling constraints. nil sub-attributes will *not* override the original values in the pod.spec for those sub-attributes. Use empty object ({}) to erase original sub-attribute values. + type: object + properties: + nodeAffinity: + description: Describes node affinity scheduling rules for the pod. + type: object + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node matches the corresponding matchExpressions; the node(s) with the highest sum are the most preferred. + type: array + items: + description: An empty preferred scheduling term matches all objects with implicit weight 0 (i.e. it's a no-op). A null preferred scheduling term matches no objects (i.e. is also a no-op). + type: object + required: + - preference + - weight + properties: + preference: + description: A node selector term, associated with the corresponding weight. 
+ type: object + properties: + matchExpressions: + description: A list of node selector requirements by node's labels. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. + type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchFields: + description: A list of node selector requirements by node's fields. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. + type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 
+ type: array + items: + type: string + weight: + description: Weight associated with matching the corresponding nodeSelectorTerm, in the range 1-100. + type: integer + format: int32 + requiredDuringSchedulingIgnoredDuringExecution: + description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to an update), the system may or may not try to eventually evict the pod from its node. + type: object + required: + - nodeSelectorTerms + properties: + nodeSelectorTerms: + description: Required. A list of node selector terms. The terms are ORed. + type: array + items: + description: A null or empty node selector term matches no objects. The requirements of them are ANDed. The TopologySelectorTerm type implements a subset of the NodeSelectorTerm. + type: object + properties: + matchExpressions: + description: A list of node selector requirements by node's labels. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. + type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. 
+ type: array + items: + type: string + matchFields: + description: A list of node selector requirements by node's fields. + type: array + items: + description: A node selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: The label key that the selector applies to. + type: string + operator: + description: Represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists, DoesNotExist. Gt, and Lt. + type: string + values: + description: An array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. If the operator is Gt or Lt, the values array must have a single element, which will be interpreted as an integer. This array is replaced during a strategic merge patch. + type: array + items: + type: string + podAffinity: + description: Describes pod affinity scheduling rules (e.g. co-locate this pod in the same node, zone, etc. as some other pod(s)). + type: object + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: The scheduler will prefer to schedule pods to nodes that satisfy the affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. 
+ type: array + items: + description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) + type: object + required: + - podAffinityTerm + - weight + properties: + podAffinityTerm: + description: Required. A pod affinity term, associated with the corresponding weight. + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. 
null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". 
+ type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + weight: + description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100. + type: integer + format: int32 + requiredDuringSchedulingIgnoredDuringExecution: + description: If the affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. + type: array + items: + description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
+ type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". + type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + podAntiAffinity: + description: Describes pod anti-affinity scheduling rules (e.g. avoid putting this pod in the same node, zone, etc. as some other pod(s)). + type: object + properties: + preferredDuringSchedulingIgnoredDuringExecution: + description: The scheduler will prefer to schedule pods to nodes that satisfy the anti-affinity expressions specified by this field, but it may choose a node that violates one or more of the expressions. The node that is most preferred is the one with the greatest sum of weights, i.e. 
for each node that meets all of the scheduling requirements (resource request, requiredDuringScheduling anti-affinity expressions, etc.), compute a sum by iterating through the elements of this field and adding "weight" to the sum if the node has pods which matches the corresponding podAffinityTerm; the node(s) with the highest sum are the most preferred. + type: array + items: + description: The weights of all of the matched WeightedPodAffinityTerm fields are added per-node to find the most preferred node(s) + type: object + required: + - podAffinityTerm + - weight + properties: + podAffinityTerm: + description: Required. A pod affinity term, associated with the corresponding weight. + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. 
A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. 
The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". + type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + weight: + description: weight associated with matching the corresponding podAffinityTerm, in the range 1-100. + type: integer + format: int32 + requiredDuringSchedulingIgnoredDuringExecution: + description: If the anti-affinity requirements specified by this field are not met at scheduling time, the pod will not be scheduled onto the node. If the anti-affinity requirements specified by this field cease to be met at some point during pod execution (e.g. due to a pod label update), the system may or may not try to eventually evict the pod from its node. When there are multiple elements, the lists of nodes corresponding to each podAffinityTerm are intersected, i.e. all terms must be satisfied. + type: array + items: + description: Defines a set of pods (namely those matching the labelSelector relative to the given namespace(s)) that this pod should be co-located (affinity) or not co-located (anti-affinity) with, where co-located is defined as running on a node whose value of the label with key matches that of any node on which a pod of the set of pods is running + type: object + required: + - topologyKey + properties: + labelSelector: + description: A label query over a set of resources, in this case pods. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. 
The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaceSelector: + description: A label query over the set of namespaces that the term applies to. The term is applied to the union of the namespaces selected by this field and the ones listed in the namespaces field. null selector and null or empty namespaces list means "this pod's namespace". An empty selector ({}) matches all namespaces. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. 
+ type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + namespaces: + description: namespaces specifies a static list of namespace names that the term applies to. The term is applied to the union of the namespaces listed in this field and the ones selected by namespaceSelector. null or empty namespaces list and null namespaceSelector means "this pod's namespace". + type: array + items: + type: string + topologyKey: + description: This pod should be co-located (affinity) or not co-located (anti-affinity) with the pods matching the labelSelector in the specified namespaces, where co-located is defined as running on a node whose value of the label with key topologyKey matches that of any node on which any of the selected pods is running. Empty topologyKey is not allowed. + type: string + annotations: + description: Annotations is an unstructured key value map stored with each Deployment, Pod, APIService in the Operator. Typically, annotations may be set by external tools to store and retrieve arbitrary metadata. Use this field to pre-define annotations that OLM should add to each of the Subscription's deployments, pods, and apiservices. 
+ type: object + additionalProperties: + type: string + env: + description: Env is a list of environment variables to set in the container. Cannot be updated. + type: array + items: + description: EnvVar represents an environment variable present in a Container. + type: object + required: + - name + properties: + name: + description: Name of the environment variable. Must be a C_IDENTIFIER. + type: string + value: + description: 'Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables. If a variable cannot be resolved, the reference in the input string will be unchanged. Double $$ are reduced to a single $, which allows for escaping the $(VAR_NAME) syntax: i.e. "$$(VAR_NAME)" will produce the string literal "$(VAR_NAME)". Escaped references will never be expanded, regardless of whether the variable exists or not. Defaults to "".' + type: string + valueFrom: + description: Source for the environment variable's value. Cannot be used if value is not empty. + type: object + properties: + configMapKeyRef: + description: Selects a key of a ConfigMap. + type: object + required: + - key + properties: + key: + description: The key to select. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the ConfigMap or its key must be defined + type: boolean + fieldRef: + description: 'Selects a field of the pod: supports metadata.name, metadata.namespace, `metadata.labels['''']`, `metadata.annotations['''']`, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.' + type: object + required: + - fieldPath + properties: + apiVersion: + description: Version of the schema the FieldPath is written in terms of, defaults to "v1". 
+ type: string + fieldPath: + description: Path of the field to select in the specified API version. + type: string + resourceFieldRef: + description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, limits.ephemeral-storage, requests.cpu, requests.memory and requests.ephemeral-storage) are currently supported.' + type: object + required: + - resource + properties: + containerName: + description: 'Container name: required for volumes, optional for env vars' + type: string + divisor: + description: Specifies the output format of the exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + secretKeyRef: + description: Selects a key of a secret in the pod's namespace + type: object + required: + - key + properties: + key: + description: The key of the secret to select from. Must be a valid secret key. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the Secret or its key must be defined + type: boolean + envFrom: + description: EnvFrom is a list of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Immutable. 
+ type: array + items: + description: EnvFromSource represents the source of a set of ConfigMaps + type: object + properties: + configMapRef: + description: The ConfigMap to select from + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the ConfigMap must be defined + type: boolean + prefix: + description: An optional identifier to prepend to each key in the ConfigMap. Must be a C_IDENTIFIER. + type: string + secretRef: + description: The Secret to select from + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: Specify whether the Secret must be defined + type: boolean + nodeSelector: + description: 'NodeSelector is a selector which must be true for the pod to fit on a node. Selector which must match a node''s labels for the pod to be scheduled on that node. More info: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/' + type: object + additionalProperties: + type: string + resources: + description: 'Resources represents compute resources required by this container. Immutable. More info: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/' + type: object + properties: + claims: + description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable. It can only be set for containers." + type: array + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. 
+ type: object + required: + - name + properties: + name: + description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. It makes that resource available inside a container. + type: string + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + description: 'Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + requests: + description: 'Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + selector: + description: Selector is the label selector for pods to be configured. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. 
+ type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + tolerations: + description: Tolerations are the pod's tolerations. + type: array + items: + description: The pod this Toleration is attached to tolerates any taint that matches the triple <key,value,effect> using the matching operator <operator>. + type: object + properties: + effect: + description: Effect indicates the taint effect to match. Empty means match all taint effects. When specified, allowed values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: Key is the taint key that the toleration applies to. Empty means match all taint keys. If the key is empty, operator must be Exists; this combination means to match all values and all keys. + type: string + operator: + description: Operator represents a key's relationship to the value. Valid operators are Exists and Equal. Defaults to Equal. Exists is equivalent to wildcard for value, so that a pod can tolerate all taints of a particular category.
+ type: string + tolerationSeconds: + description: TolerationSeconds represents the period of time the toleration (which must be of effect NoExecute, otherwise this field is ignored) tolerates the taint. By default, it is not set, which means tolerate the taint forever (do not evict). Zero and negative values will be treated as 0 (evict immediately) by the system. + type: integer + format: int64 + value: + description: Value is the taint value the toleration matches to. If the operator is Exists, the value should be empty, otherwise just a regular string. + type: string + volumeMounts: + description: List of VolumeMounts to set in the container. + type: array + items: + description: VolumeMount describes a mounting of a Volume within a container. + type: object + required: + - mountPath + - name + properties: + mountPath: + description: Path within the container at which the volume should be mounted. Must not contain ':'. + type: string + mountPropagation: + description: mountPropagation determines how mounts are propagated from the host to container and the other way around. When not set, MountPropagationNone is used. This field is beta in 1.10. + type: string + name: + description: This must match the Name of a Volume. + type: string + readOnly: + description: Mounted read-only if true, read-write otherwise (false or unspecified). Defaults to false. + type: boolean + subPath: + description: Path within the volume from which the container's volume should be mounted. Defaults to "" (volume's root). + type: string + subPathExpr: + description: Expanded path within the volume from which the container's volume should be mounted. Behaves similarly to SubPath but environment variable references $(VAR_NAME) are expanded using the container's environment. Defaults to "" (volume's root). SubPathExpr and SubPath are mutually exclusive. + type: string + volumes: + description: List of Volumes to set in the podSpec. 
+ type: array + items: + description: Volume represents a named volume in a pod that may be accessed by any container in the pod. + type: object + required: + - name + properties: + awsElasticBlockStore: + description: 'awsElasticBlockStore represents an AWS Disk resource that is attached to a kubelet''s host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore' + type: object + required: + - volumeID + properties: + fsType: + description: 'fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore TODO: how do we prevent errors in the filesystem from compromising the machine' + type: string + partition: + description: 'partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty).' + type: integer + format: int32 + readOnly: + description: 'readOnly value true will force the readOnly setting in VolumeMounts. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore' + type: boolean + volumeID: + description: 'volumeID is unique ID of the persistent disk resource in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore' + type: string + azureDisk: + description: azureDisk represents an Azure Data Disk mount on the host and bind mount to the pod. + type: object + required: + - diskName + - diskURI + properties: + cachingMode: + description: 'cachingMode is the Host Caching mode: None, Read Only, Read Write.' 
+ type: string + diskName: + description: diskName is the Name of the data disk in the blob storage + type: string + diskURI: + description: diskURI is the URI of data disk in the blob storage + type: string + fsType: + description: fsType is Filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. + type: string + kind: + description: 'kind expected values are Shared: multiple blob disks per storage account Dedicated: single blob disk per storage account Managed: azure managed data disk (only in managed availability set). defaults to shared' + type: string + readOnly: + description: readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. + type: boolean + azureFile: + description: azureFile represents an Azure File Service mount on the host and bind mount to the pod. + type: object + required: + - secretName + - shareName + properties: + readOnly: + description: readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. + type: boolean + secretName: + description: secretName is the name of secret that contains Azure Storage Account Name and Key + type: string + shareName: + description: shareName is the azure share Name + type: string + cephfs: + description: cephFS represents a Ceph FS mount on the host that shares a pod's lifetime + type: object + required: + - monitors + properties: + monitors: + description: 'monitors is Required: Monitors is a collection of Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it' + type: array + items: + type: string + path: + description: 'path is Optional: Used as the mounted root, rather than the full Ceph tree, default is /' + type: string + readOnly: + description: 'readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it' + type: boolean + secretFile: + description: 'secretFile is Optional: SecretFile is the path to key ring for User, default is /etc/ceph/user.secret More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it' + type: string + secretRef: + description: 'secretRef is Optional: SecretRef is reference to the authentication secret for User, default is empty. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it' + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + user: + description: 'user is optional: User is the rados user name, default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it' + type: string + cinder: + description: 'cinder represents a cinder volume attached and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md' + type: object + required: + - volumeID + properties: + fsType: + description: 'fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://examples.k8s.io/mysql-cinder-pd/README.md' + type: string + readOnly: + description: 'readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md' + type: boolean + secretRef: + description: 'secretRef is optional: points to a secret object containing parameters used to connect to OpenStack.' + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. 
apiVersion, kind, uid?' + type: string + volumeID: + description: 'volumeID used to identify the volume in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md' + type: string + configMap: + description: configMap represents a configMap that should populate this volume + type: object + properties: + defaultMode: + description: 'defaultMode is optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + items: + description: items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. + type: array + items: + description: Maps a string key to a path within a volume. + type: object + required: + - key + - path + properties: + key: + description: key is the key to project. + type: string + mode: + description: 'mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: optional specify whether the ConfigMap or its keys must be defined + type: boolean + csi: + description: csi (Container Storage Interface) represents ephemeral storage that is handled by certain external CSI drivers (Beta feature). + type: object + required: + - driver + properties: + driver: + description: driver is the name of the CSI driver that handles this volume. Consult with your admin for the correct name as registered in the cluster. + type: string + fsType: + description: fsType to mount. Ex. "ext4", "xfs", "ntfs". If not provided, the empty value is passed to the associated CSI driver which will determine the default filesystem to apply. + type: string + nodePublishSecretRef: + description: nodePublishSecretRef is a reference to the secret object containing sensitive information to pass to the CSI driver to complete the CSI NodePublishVolume and NodeUnpublishVolume calls. This field is optional, and may be empty if no secret is required. If the secret object contains more than one secret, all secret references are passed. + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + readOnly: + description: readOnly specifies a read-only configuration for the volume. Defaults to false (read/write). 
+ type: boolean + volumeAttributes: + description: volumeAttributes stores driver-specific properties that are passed to the CSI driver. Consult your driver's documentation for supported values. + type: object + additionalProperties: + type: string + downwardAPI: + description: downwardAPI represents downward API about the pod that should populate this volume + type: object + properties: + defaultMode: + description: 'Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + items: + description: Items is a list of downward API volume file + type: array + items: + description: DownwardAPIVolumeFile represents information to create the file containing the pod field + type: object + required: + - path + properties: + fieldRef: + description: 'Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.' + type: object + required: + - fieldPath + properties: + apiVersion: + description: Version of the schema the FieldPath is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the specified API version. + type: string + mode: + description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used.
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: 'Required: Path is the relative path name of the file to be created. Must not be absolute or contain the ''..'' path. Must be utf-8 encoded. The first item of the relative path must not start with ''..''' + type: string + resourceFieldRef: + description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.' + type: object + required: + - resource + properties: + containerName: + description: 'Container name: required for volumes, optional for env vars' + type: string + divisor: + description: Specifies the output format of the exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + emptyDir: + description: 'emptyDir represents a temporary directory that shares a pod''s lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir' + type: object + properties: + medium: + description: 'medium represents what type of storage medium should back this directory. The default is "" which means to use the node''s default medium. Must be an empty string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir' + type: string + sizeLimit: + description: 'sizeLimit is the total amount of local storage required for this EmptyDir volume. The size limit is also applicable for memory medium. The maximum usage on memory medium EmptyDir would be the minimum value between the SizeLimit specified here and the sum of memory limits of all containers in a pod. 
The default is nil which means that the limit is undefined. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir' + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + ephemeral: + description: "ephemeral represents a volume that is handled by a cluster storage driver. The volume's lifecycle is tied to the pod that defines it - it will be created before the pod starts, and deleted when the pod is removed. \n Use this if: a) the volume is only needed while the pod runs, b) features of normal volumes like restoring from snapshot or capacity tracking are needed, c) the storage driver is specified through a storage class, and d) the storage driver supports dynamic volume provisioning through a PersistentVolumeClaim (see EphemeralVolumeSource for more information on the connection between this volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim or one of the vendor-specific APIs for volumes that persist for longer than the lifecycle of an individual pod. \n Use CSI for light-weight local ephemeral volumes if the CSI driver is meant to be used that way - see the documentation of the driver for more information. \n A pod can use both types of ephemeral volumes and persistent volumes at the same time." + type: object + properties: + volumeClaimTemplate: + description: "Will be used to create a stand-alone PVC to provision the volume. The pod in which this EphemeralVolumeSource is embedded will be the owner of the PVC, i.e. the PVC will be deleted together with the pod. The name of the PVC will be `<pod name>-<volume name>` where `<volume name>` is the name from the `PodSpec.Volumes` array entry. Pod validation will reject the pod if the concatenated name is not valid for a PVC (for example, too long).
\n An existing PVC with that name that is not owned by the pod will *not* be used for the pod to avoid using an unrelated volume by mistake. Starting the pod is then blocked until the unrelated PVC is removed. If such a pre-created PVC is meant to be used by the pod, the PVC has to updated with an owner reference to the pod once the pod exists. Normally this should not be necessary, but it may be useful when manually reconstructing a broken cluster. \n This field is read-only and no changes will be made by Kubernetes to the PVC after it has been created. \n Required, must not be nil." + type: object + required: + - spec + properties: + metadata: + description: May contain labels and annotations that will be copied into the PVC when creating it. No other fields are allowed and will be rejected during validation. + type: object + spec: + description: The specification for the PersistentVolumeClaim. The entire content is copied unchanged into the PVC that gets created from this template. The same fields as in a PersistentVolumeClaim are also valid here. + type: object + properties: + accessModes: + description: 'accessModes contains the desired access modes the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1' + type: array + items: + type: string + dataSource: + description: 'dataSource field can be used to specify either: * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot) * An existing PVC (PersistentVolumeClaim) If the provisioner or an external controller can support the specified data source, it will create a new volume based on the contents of the specified data source. When the AnyVolumeDataSource feature gate is enabled, dataSource contents will be copied to dataSourceRef, and dataSourceRef contents will be copied to dataSource when dataSourceRef.namespace is not specified. If the namespace is specified, then dataSourceRef will not be copied to dataSource.' 
+ type: object + required: + - kind + - name + properties: + apiGroup: + description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + dataSourceRef: + description: 'dataSourceRef specifies the object from which to populate the volume with data, if a non-empty volume is desired. This may be any object from a non-empty API group (non core object) or a PersistentVolumeClaim object. When this field is specified, volume binding will only succeed if the type of the specified object matches some installed volume populator or dynamic provisioner. This field will replace the functionality of the dataSource field and as such if both fields are non-empty, they must have the same value. For backwards compatibility, when namespace isn''t specified in dataSourceRef, both fields (dataSource and dataSourceRef) will be set to the same value automatically if one of them is empty and the other is non-empty. When namespace is specified in dataSourceRef, dataSource isn''t set to the same value and must be empty. There are three important differences between dataSource and dataSourceRef: * While dataSource only allows two specific types of objects, dataSourceRef allows any non-core object, as well as PersistentVolumeClaim objects. * While dataSource ignores disallowed values (dropping them), dataSourceRef preserves all values, and generates an error if a disallowed value is specified. * While dataSource only allows local objects, dataSourceRef allows objects in any namespaces. (Beta) Using this field requires the AnyVolumeDataSource feature gate to be enabled. 
(Alpha) Using the namespace field of dataSourceRef requires the CrossNamespaceVolumeDataSource feature gate to be enabled.' + type: object + required: + - kind + - name + properties: + apiGroup: + description: APIGroup is the group for the resource being referenced. If APIGroup is not specified, the specified Kind must be in the core API group. For any other third-party types, APIGroup is required. + type: string + kind: + description: Kind is the type of resource being referenced + type: string + name: + description: Name is the name of resource being referenced + type: string + namespace: + description: Namespace is the namespace of resource being referenced Note that when a namespace is specified, a gateway.networking.k8s.io/ReferenceGrant object is required in the referent namespace to allow that namespace's owner to accept the reference. See the ReferenceGrant documentation for details. (Alpha) This field requires the CrossNamespaceVolumeDataSource feature gate to be enabled. + type: string + resources: + description: 'resources represents the minimum resources the volume should have. If RecoverVolumeExpansionFailure feature is enabled users are allowed to specify resource requirements that are lower than previous value but must still be higher than capacity recorded in the status field of the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources' + type: object + properties: + claims: + description: "Claims lists the names of resources, defined in spec.resourceClaims, that are used by this container. \n This is an alpha field and requires enabling the DynamicResourceAllocation feature gate. \n This field is immutable. It can only be set for containers." + type: array + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + type: object + required: + - name + properties: + name: + description: Name must match the name of one entry in pod.spec.resourceClaims of the Pod where this field is used. 
It makes that resource available inside a container. + type: string + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + description: 'Limits describes the maximum amount of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + requests: + description: 'Requests describes the minimum amount of compute resources required. If Requests is omitted for a container, it defaults to Limits if that is explicitly specified, otherwise to an implementation-defined value. Requests cannot exceed Limits. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + additionalProperties: + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + selector: + description: selector is a label query over volumes to consider for binding. + type: object + properties: + matchExpressions: + description: matchExpressions is a list of label selector requirements. The requirements are ANDed. + type: array + items: + description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values. + type: object + required: + - key + - operator + properties: + key: + description: key is the label key that the selector applies to. + type: string + operator: + description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist. + type: string + values: + description: values is an array of string values. 
If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch. + type: array + items: + type: string + matchLabels: + description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed. + type: object + additionalProperties: + type: string + storageClassName: + description: 'storageClassName is the name of the StorageClass required by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1' + type: string + volumeMode: + description: volumeMode defines what type of volume is required by the claim. Value of Filesystem is implied when not included in claim spec. + type: string + volumeName: + description: volumeName is the binding reference to the PersistentVolume backing this claim. + type: string + fc: + description: fc represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod. + type: object + properties: + fsType: + description: 'fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. TODO: how do we prevent errors in the filesystem from compromising the machine' + type: string + lun: + description: 'lun is Optional: FC target lun number' + type: integer + format: int32 + readOnly: + description: 'readOnly is Optional: Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.' 
+ type: boolean + targetWWNs: + description: 'targetWWNs is Optional: FC target worldwide names (WWNs)' + type: array + items: + type: string + wwids: + description: 'wwids Optional: FC volume world wide identifiers (wwids) Either wwids or combination of targetWWNs and lun must be set, but not both simultaneously.' + type: array + items: + type: string + flexVolume: + description: flexVolume represents a generic volume resource that is provisioned/attached using an exec based plugin. + type: object + required: + - driver + properties: + driver: + description: driver is the name of the driver to use for this volume. + type: string + fsType: + description: fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". The default filesystem depends on FlexVolume script. + type: string + options: + description: 'options is Optional: this field holds extra command options if any.' + type: object + additionalProperties: + type: string + readOnly: + description: 'readOnly is Optional: defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts.' + type: boolean + secretRef: + description: 'secretRef is Optional: secretRef is reference to the secret object containing sensitive information to pass to the plugin scripts. This may be empty if no secret object is specified. If the secret object contains more than one secret, all secrets are passed to the plugin scripts.' + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + flocker: + description: flocker represents a Flocker volume attached to a kubelet's host machine. 
This depends on the Flocker control service being running + type: object + properties: + datasetName: + description: datasetName is Name of the dataset stored as metadata -> name on the dataset for Flocker should be considered as deprecated + type: string + datasetUUID: + description: datasetUUID is the UUID of the dataset. This is unique identifier of a Flocker dataset + type: string + gcePersistentDisk: + description: 'gcePersistentDisk represents a GCE Disk resource that is attached to a kubelet''s host machine and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk' + type: object + required: + - pdName + properties: + fsType: + description: 'fsType is filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk TODO: how do we prevent errors in the filesystem from compromising the machine' + type: string + partition: + description: 'partition is the partition in the volume that you want to mount. If omitted, the default is to mount by volume name. Examples: For volume /dev/sda1, you specify the partition as "1". Similarly, the volume partition for /dev/sda is "0" (or you can leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk' + type: integer + format: int32 + pdName: + description: 'pdName is unique name of the PD resource in GCE. Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk' + type: string + readOnly: + description: 'readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk' + type: boolean + gitRepo: + description: 'gitRepo represents a git repository at a particular revision. DEPRECATED: GitRepo is deprecated. To provision a container with a git repo, mount an EmptyDir into an InitContainer that clones the repo using git, then mount the EmptyDir into the Pod''s container.' + type: object + required: + - repository + properties: + directory: + description: directory is the target directory name. Must not contain or start with '..'. If '.' is supplied, the volume directory will be the git repository. Otherwise, if specified, the volume will contain the git repository in the subdirectory with the given name. + type: string + repository: + description: repository is the URL + type: string + revision: + description: revision is the commit hash for the specified revision. + type: string + glusterfs: + description: 'glusterfs represents a Glusterfs mount on the host that shares a pod''s lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md' + type: object + required: + - endpoints + - path + properties: + endpoints: + description: 'endpoints is the endpoint name that details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod' + type: string + path: + description: 'path is the Glusterfs volume path. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod' + type: string + readOnly: + description: 'readOnly here will force the Glusterfs volume to be mounted with read-only permissions. Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod' + type: boolean + hostPath: + description: 'hostPath represents a pre-existing file or directory on the host machine that is directly exposed to the container. This is generally used for system agents or other privileged things that are allowed to see the host machine. Most containers will NOT need this. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath --- TODO(jonesdl) We need to restrict who can use host directory mounts and who can/can not mount host directories as read/write.' + type: object + required: + - path + properties: + path: + description: 'path of the directory on the host. If the path is a symlink, it will follow the link to the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath' + type: string + type: + description: 'type for HostPath Volume Defaults to "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath' + type: string + iscsi: + description: 'iscsi represents an ISCSI Disk resource that is attached to a kubelet''s host machine and then exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md' + type: object + required: + - iqn + - lun + - targetPortal + properties: + chapAuthDiscovery: + description: chapAuthDiscovery defines whether support iSCSI Discovery CHAP authentication + type: boolean + chapAuthSession: + description: chapAuthSession defines whether support iSCSI Session CHAP authentication + type: boolean + fsType: + description: 'fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi TODO: how do we prevent errors in the filesystem from compromising the machine' + type: string + initiatorName: + description: initiatorName is the custom iSCSI Initiator Name. If initiatorName is specified with iscsiInterface simultaneously, new iSCSI interface : will be created for the connection. + type: string + iqn: + description: iqn is the target iSCSI Qualified Name. + type: string + iscsiInterface: + description: iscsiInterface is the interface Name that uses an iSCSI transport. Defaults to 'default' (tcp). 
+ type: string + lun: + description: lun represents iSCSI Target Lun number. + type: integer + format: int32 + portals: + description: portals is the iSCSI Target Portal List. The portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). + type: array + items: + type: string + readOnly: + description: readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. + type: boolean + secretRef: + description: secretRef is the CHAP Secret for iSCSI target and initiator authentication + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + targetPortal: + description: targetPortal is iSCSI Target Portal. The Portal is either an IP or ip_addr:port if the port is other than default (typically TCP ports 860 and 3260). + type: string + name: + description: 'name of the volume. Must be a DNS_LABEL and unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' + type: string + nfs: + description: 'nfs represents an NFS mount on the host that shares a pod''s lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs' + type: object + required: + - path + - server + properties: + path: + description: 'path that is exported by the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs' + type: string + readOnly: + description: 'readOnly here will force the NFS export to be mounted with read-only permissions. Defaults to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs' + type: boolean + server: + description: 'server is the hostname or IP address of the NFS server. 
More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs' + type: string + persistentVolumeClaim: + description: 'persistentVolumeClaimVolumeSource represents a reference to a PersistentVolumeClaim in the same namespace. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims' + type: object + required: + - claimName + properties: + claimName: + description: 'claimName is the name of a PersistentVolumeClaim in the same namespace as the pod using this volume. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims' + type: string + readOnly: + description: readOnly Will force the ReadOnly setting in VolumeMounts. Default false. + type: boolean + photonPersistentDisk: + description: photonPersistentDisk represents a PhotonController persistent disk attached and mounted on kubelets host machine + type: object + required: + - pdID + properties: + fsType: + description: fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. + type: string + pdID: + description: pdID is the ID that identifies Photon Controller persistent disk + type: string + portworxVolume: + description: portworxVolume represents a portworx volume attached and mounted on kubelets host machine + type: object + required: + - volumeID + properties: + fsType: + description: fSType represents the filesystem type to mount Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs". Implicitly inferred to be "ext4" if unspecified. + type: string + readOnly: + description: readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. 
+ type: boolean + volumeID: + description: volumeID uniquely identifies a Portworx volume + type: string + projected: + description: projected items for all in one resources secrets, configmaps, and downward API + type: object + properties: + defaultMode: + description: defaultMode are the mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Directories within the path are not affected by this setting. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set. + type: integer + format: int32 + sources: + description: sources is the list of volume projections + type: array + items: + description: Projection that may be projected along with other supported volume types + type: object + properties: + configMap: + description: configMap information about the configMap data to project + type: object + properties: + items: + description: items if unspecified, each key-value pair in the Data field of the referenced ConfigMap will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the ConfigMap, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. + type: array + items: + description: Maps a string key to a path within a volume. + type: object + required: + - key + - path + properties: + key: + description: key is the key to project. + type: string + mode: + description: 'mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. 
YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: optional specify whether the ConfigMap or its keys must be defined + type: boolean + downwardAPI: + description: downwardAPI information about the downwardAPI data to project + type: object + properties: + items: + description: Items is a list of DownwardAPIVolume file + type: array + items: + description: DownwardAPIVolumeFile represents information to create the file containing the pod field + type: object + required: + - path + properties: + fieldRef: + description: 'Required: Selects a field of the pod: only annotations, labels, name and namespace are supported.' + type: object + required: + - fieldPath + properties: + apiVersion: + description: Version of the schema the FieldPath is written in terms of, defaults to "v1". + type: string + fieldPath: + description: Path of the field to select in the specified API version. + type: string + mode: + description: 'Optional: mode bits used to set permissions on this file, must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: 'Required: Path is the relative path name of the file to be created. Must not be absolute or contain the ''..'' path. Must be utf-8 encoded. The first item of the relative path must not start with ''..''' + type: string + resourceFieldRef: + description: 'Selects a resource of the container: only resources limits and requests (limits.cpu, limits.memory, requests.cpu and requests.memory) are currently supported.' + type: object + required: + - resource + properties: + containerName: + description: 'Container name: required for volumes, optional for env vars' + type: string + divisor: + description: Specifies the output format of the exposed resources, defaults to "1" + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + anyOf: + - type: integer + - type: string + x-kubernetes-int-or-string: true + resource: + description: 'Required: resource to select' + type: string + secret: + description: secret information about the secret data to project + type: object + properties: + items: + description: items if unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. + type: array + items: + description: Maps a string key to a path within a volume. + type: object + required: + - key + - path + properties: + key: + description: key is the key to project. 
+ type: string + mode: + description: 'mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + optional: + description: optional field specify whether the Secret or its key must be defined + type: boolean + serviceAccountToken: + description: serviceAccountToken is information about the serviceAccountToken data to project + type: object + required: + - path + properties: + audience: + description: audience is the intended audience of the token. A recipient of a token must identify itself with an identifier specified in the audience of the token, and otherwise should reject the token. The audience defaults to the identifier of the apiserver. + type: string + expirationSeconds: + description: expirationSeconds is the requested duration of validity of the service account token. As the token approaches expiration, the kubelet volume plugin will proactively rotate the service account token. The kubelet will start trying to rotate the token if the token is older than 80 percent of its time to live or if the token is older than 24 hours.Defaults to 1 hour and must be at least 10 minutes. 
+ type: integer + format: int64 + path: + description: path is the path relative to the mount point of the file to project the token into. + type: string + quobyte: + description: quobyte represents a Quobyte mount on the host that shares a pod's lifetime + type: object + required: + - registry + - volume + properties: + group: + description: group to map volume access to Default is no group + type: string + readOnly: + description: readOnly here will force the Quobyte volume to be mounted with read-only permissions. Defaults to false. + type: boolean + registry: + description: registry represents a single or multiple Quobyte Registry services specified as a string as host:port pair (multiple entries are separated with commas) which acts as the central registry for volumes + type: string + tenant: + description: tenant owning the given Quobyte volume in the Backend Used with dynamically provisioned Quobyte volumes, value is set by the plugin + type: string + user: + description: user to map volume access to Defaults to serivceaccount user + type: string + volume: + description: volume is a string that references an already created Quobyte volume by name. + type: string + rbd: + description: 'rbd represents a Rados Block Device mount on the host that shares a pod''s lifetime. More info: https://examples.k8s.io/volumes/rbd/README.md' + type: object + required: + - image + - monitors + properties: + fsType: + description: 'fsType is the filesystem type of the volume that you want to mount. Tip: Ensure that the filesystem type is supported by the host operating system. Examples: "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd TODO: how do we prevent errors in the filesystem from compromising the machine' + type: string + image: + description: 'image is the rados image name. 
More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: string + keyring: + description: 'keyring is the path to key ring for RBDUser. Default is /etc/ceph/keyring. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: string + monitors: + description: 'monitors is a collection of Ceph monitors. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: array + items: + type: string + pool: + description: 'pool is the rados pool name. Default is rbd. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: string + readOnly: + description: 'readOnly here will force the ReadOnly setting in VolumeMounts. Defaults to false. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: boolean + secretRef: + description: 'secretRef is name of the authentication secret for RBDUser. If provided overrides keyring. Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + user: + description: 'user is the rados user name. Default is admin. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it' + type: string + scaleIO: + description: scaleIO represents a ScaleIO persistent volume attached and mounted on Kubernetes nodes. + type: object + required: + - gateway + - secretRef + - system + properties: + fsType: + description: fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Default is "xfs". + type: string + gateway: + description: gateway is the host address of the ScaleIO API Gateway. 
+ type: string + protectionDomain: + description: protectionDomain is the name of the ScaleIO Protection Domain for the configured storage. + type: string + readOnly: + description: readOnly Defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. + type: boolean + secretRef: + description: secretRef references to the secret for ScaleIO user and other sensitive information. If this is not provided, Login operation will fail. + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + sslEnabled: + description: sslEnabled Flag enable/disable SSL communication with Gateway, default false + type: boolean + storageMode: + description: storageMode indicates whether the storage for a volume should be ThickProvisioned or ThinProvisioned. Default is ThinProvisioned. + type: string + storagePool: + description: storagePool is the ScaleIO Storage Pool associated with the protection domain. + type: string + system: + description: system is the name of the storage system as configured in ScaleIO. + type: string + volumeName: + description: volumeName is the name of a volume already created in the ScaleIO system that is associated with this volume source. + type: string + secret: + description: 'secret represents a secret that should populate this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret' + type: object + properties: + defaultMode: + description: 'defaultMode is Optional: mode bits used to set permissions on created files by default. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. Defaults to 0644. Directories within the path are not affected by this setting. 
This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + items: + description: items If unspecified, each key-value pair in the Data field of the referenced Secret will be projected into the volume as a file whose name is the key and content is the value. If specified, the listed keys will be projected into the specified paths, and unlisted keys will not be present. If a key is specified which is not present in the Secret, the volume setup will error unless it is marked optional. Paths must be relative and may not contain the '..' path or start with '..'. + type: array + items: + description: Maps a string key to a path within a volume. + type: object + required: + - key + - path + properties: + key: + description: key is the key to project. + type: string + mode: + description: 'mode is Optional: mode bits used to set permissions on this file. Must be an octal value between 0000 and 0777 or a decimal value between 0 and 511. YAML accepts both octal and decimal values, JSON requires decimal values for mode bits. If not specified, the volume defaultMode will be used. This might be in conflict with other options that affect the file mode, like fsGroup, and the result can be other mode bits set.' + type: integer + format: int32 + path: + description: path is the relative path of the file to map the key to. May not be an absolute path. May not contain the path element '..'. May not start with the string '..'. + type: string + optional: + description: optional field specify whether the Secret or its keys must be defined + type: boolean + secretName: + description: 'secretName is the name of the secret in the pod''s namespace to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret' + type: string + storageos: + description: storageOS represents a StorageOS volume attached and mounted on Kubernetes nodes. 
+ type: object + properties: + fsType: + description: fsType is the filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. + type: string + readOnly: + description: readOnly defaults to false (read/write). ReadOnly here will force the ReadOnly setting in VolumeMounts. + type: boolean + secretRef: + description: secretRef specifies the secret to use for obtaining the StorageOS API credentials. If not specified, default values will be attempted. + type: object + properties: + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names TODO: Add other useful fields. apiVersion, kind, uid?' + type: string + volumeName: + description: volumeName is the human-readable name of the StorageOS volume. Volume names are only unique within a namespace. + type: string + volumeNamespace: + description: volumeNamespace specifies the scope of the volume within StorageOS. If no namespace is specified then the Pod's namespace will be used. This allows the Kubernetes name scoping to be mirrored within StorageOS for tighter integration. Set VolumeName to any name to override the default behaviour. Set to "default" if you are not using namespaces within StorageOS. Namespaces that do not pre-exist within StorageOS will be created. + type: string + vsphereVolume: + description: vsphereVolume represents a vSphere volume attached and mounted on kubelets host machine + type: object + required: + - volumePath + properties: + fsType: + description: fsType is filesystem type to mount. Must be a filesystem type supported by the host operating system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred to be "ext4" if unspecified. + type: string + storagePolicyID: + description: storagePolicyID is the storage Policy Based Management (SPBM) profile ID associated with the StoragePolicyName. 
+ type: string + storagePolicyName: + description: storagePolicyName is the storage Policy Based Management (SPBM) profile name. + type: string + volumePath: + description: volumePath is the path that identifies vSphere volume vmdk + type: string + installPlanApproval: + description: Approval is the user approval policy for an InstallPlan. It must be one of "Automatic" or "Manual". + type: string + name: + type: string + source: + type: string + sourceNamespace: + type: string + startingCSV: + type: string + status: + type: object + required: + - lastUpdated + properties: + catalogHealth: + description: CatalogHealth contains the Subscription's view of its relevant CatalogSources' status. It is used to determine SubscriptionStatusConditions related to CatalogSources. + type: array + items: + description: SubscriptionCatalogHealth describes the health of a CatalogSource the Subscription knows about. + type: object + required: + - catalogSourceRef + - healthy + - lastUpdated + properties: + catalogSourceRef: + description: CatalogSourceRef is a reference to a CatalogSource. + type: object + properties: + apiVersion: + description: API version of the referent. + type: string + fieldPath: + description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.' + type: string + kind: + description: 'Kind of the referent. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' + type: string + namespace: + description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/' + type: string + resourceVersion: + description: 'Specific resourceVersion to which this reference is made, if any. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency' + type: string + uid: + description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids' + type: string + healthy: + description: Healthy is true if the CatalogSource is healthy; false otherwise. + type: boolean + lastUpdated: + description: LastUpdated represents the last time that the CatalogSourceHealth changed + type: string + format: date-time + conditions: + description: Conditions is a list of the latest available observations about a Subscription's current state. + type: array + items: + description: SubscriptionCondition represents the latest available observations of a Subscription's state. + type: object + required: + - status + - type + properties: + lastHeartbeatTime: + description: LastHeartbeatTime is the last time we got an update on a given condition + type: string + format: date-time + lastTransitionTime: + description: LastTransitionTime is the last time the condition transitioned from one status to another + type: string + format: date-time + message: + description: Message is a human-readable message indicating details about last transition. + type: string + reason: + description: Reason is a one-word CamelCase reason for the condition's last transition. 
+ type: string + status: + description: Status is the status of the condition, one of True, False, Unknown. + type: string + type: + description: Type is the type of Subscription condition. + type: string + currentCSV: + description: CurrentCSV is the CSV the Subscription is progressing to. + type: string + installPlanGeneration: + description: InstallPlanGeneration is the current generation of the installplan + type: integer + installPlanRef: + description: InstallPlanRef is a reference to the latest InstallPlan that contains the Subscription's current CSV. + type: object + properties: + apiVersion: + description: API version of the referent. + type: string + fieldPath: + description: 'If referring to a piece of an object instead of an entire object, this string should contain a valid JSON/Go field access statement, such as desiredState.manifest.containers[2]. For example, if the object reference is to a container within a pod, this would take on a value like: "spec.containers{name}" (where "name" refers to the name of the container that triggered the event) or if no container name is specified "spec.containers[2]" (container with index 2 in this pod). This syntax is chosen only to have some well-defined way of referencing a part of an object. TODO: this design is not final and this field is subject to change in the future.' + type: string + kind: + description: 'Kind of the referent. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + name: + description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names' + type: string + namespace: + description: 'Namespace of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/' + type: string + resourceVersion: + description: 'Specific resourceVersion to which this reference is made, if any. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency' + type: string + uid: + description: 'UID of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#uids' + type: string + installedCSV: + description: InstalledCSV is the CSV currently installed by the Subscription. + type: string + installplan: + description: 'Install is a reference to the latest InstallPlan generated for the Subscription. DEPRECATED: InstallPlanRef' + type: object + required: + - apiVersion + - kind + - name + - uuid + properties: + apiVersion: + type: string + kind: + type: string + name: + type: string + uuid: + description: UID is a type that holds unique ID values, including UUIDs. Because we don't ONLY use UUIDs, this is an alias to string. Being a type captures intent and helps make sure that UIDs and names do not get conflated. + type: string + lastUpdated: + description: LastUpdated represents the last time that the Subscription status was updated. + type: string + format: date-time + reason: + description: Reason is the reason the Subscription was transitioned to its current state. 
+ type: string + state: + description: State represents the current state of the Subscription + type: string + served: true + storage: true + subresources: + status: {} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/crds/vizier_crd.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/crds/vizier_crd.yaml new file mode 100644 index 0000000000..b25d7b5924 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/crds/vizier_crd.yaml @@ -0,0 +1,347 @@ + +--- +apiVersion: apiextensions.k8s.io/v1 +kind: CustomResourceDefinition +metadata: + annotations: + controller-gen.kubebuilder.io/version: v0.4.1 + creationTimestamp: null + name: viziers.px.dev +spec: + group: px.dev + names: + kind: Vizier + listKind: VizierList + plural: viziers + singular: vizier + scope: Namespaced + versions: + - name: v1alpha1 + schema: + openAPIV3Schema: + description: Vizier is the Schema for the viziers API + properties: + apiVersion: + description: 'APIVersion defines the versioned schema of this representation + of an object. Servers should convert recognized schemas to the latest + internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources' + type: string + kind: + description: 'Kind is a string value representing the REST resource this + object represents. Servers may infer this from the endpoint the client + submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds' + type: string + metadata: + type: object + spec: + description: VizierSpec defines the desired state of Vizier + properties: + autopilot: + description: Autopilot should be set if running Pixie on GKE Autopilot. 
+ type: boolean + clockConverter: + description: ClockConverter specifies which routine to use for converting + timestamps to a synced reference time. + enum: + - default + - grpc + type: string + cloudAddr: + description: CloudAddr is the address of the cloud instance that the + Vizier should be pointing to. + type: string + clusterName: + description: ClusterName is a name for the Vizier instance, usually + specifying which cluster the Vizier is deployed to. If not specified, + a random name will be generated. + type: string + customDeployKeySecret: + description: CustomDeployKeySecret is the name of the secret where + the deploy key is stored. + type: string + dataAccess: + description: DataAccess defines the level of data that may be accessed + when executing a script on the cluster. If none specified, assumes + full data access. + enum: + - Full + - Restricted + type: string + dataCollectorParams: + description: DataCollectorParams specifies the set of params for configuring + the dataCollector. If no params are specified, defaults are used. + properties: + customPEMFlags: + additionalProperties: + type: string + description: This contains custom flags that should be passed + to the PEM via environment variables. + type: object + datastreamBufferSize: + description: DatastreamBufferSize is the data buffer size per + connection. Default size is 1 Mbyte. For high-throughput applications, + try increasing this number if experiencing data loss. + format: int32 + type: integer + datastreamBufferSpikeSize: + description: DatastreamBufferSpikeSize is the maximum temporary + size of a data stream buffer before processing. + format: int32 + type: integer + type: object + deployKey: + description: DeployKey is the deploy key associated with the Vizier + instance. This is used to link the Vizier to a specific user/org. + This is required unless specifying a CustomDeployKeySecret. 
+ type: string + devCloudNamespace: + description: 'DevCloudNamespace should be specified only for dev versions + of Pixie cloud which have no ingress to help redirect traffic to + the correct service. The DevCloudNamespace is the namespace that + the dev Pixie cloud is running on, for example: "plc-dev".' + type: string + disableAutoUpdate: + description: DisableAutoUpdate specifies whether auto update should + be enabled for the Vizier instance. + type: boolean + leadershipElectionParams: + description: LeadershipElectionParams specifies configurable values + for the K8s leadership elections which Vizier uses to manage pod leadership. + properties: + electionPeriodMs: + description: ElectionPeriodMs defines how frequently Vizier attempts + to run a K8s leader election, in milliseconds. The period also + determines how long Vizier waits for a leader election response + back from the K8s API. If the K8s API is slow to respond, consider + increasing this number. + format: int64 + type: integer + type: object + patches: + additionalProperties: + type: string + description: Patches defines patches that should be applied to Vizier + resources. The key of the patch should be the name of the resource + that is patched. The value of the patch is the patch, encoded as + a string which follows the "strategic merge patch" rules for K8s. + type: object + pemMemoryLimit: + description: PemMemoryLimit is a memory limit applied specifically + to PEM pods. + type: string + pemMemoryRequest: + description: PemMemoryRequest is a memory request applied specifically + to PEM pods. It will automatically use the value of pemMemoryLimit + if not specified. + type: string + pod: + description: Pod defines the policy for creating Vizier pods. + properties: + annotations: + additionalProperties: + type: string + description: Annotations specifies the annotations to attach to + pods the operator creates. 
+ type: object + labels: + additionalProperties: + type: string + description: Labels specifies the labels to attach to pods the + operator creates. + type: object + nodeSelector: + additionalProperties: + type: string + description: 'NodeSelector is a selector which must be true for + the pod to fit on a node. Selector which must match a node''s + labels for the pod to be scheduled on that node. More info: + https://kubernetes.io/docs/concepts/configuration/assign-pod-node/ + This field cannot be updated once the cluster is created.' + type: object + resources: + description: Resources is the resource requirements for a container. + This field cannot be updated once the cluster is created. + properties: + claims: + description: "Claims lists the names of resources, defined + in spec.resourceClaims, that are used by this container. + \n This is an alpha field and requires enabling the DynamicResourceAllocation + feature gate. \n This field is immutable." + items: + description: ResourceClaim references one entry in PodSpec.ResourceClaims. + properties: + name: + description: Name must match the name of one entry in + pod.spec.resourceClaims of the Pod where this field + is used. It makes that resource available inside a + container. + type: string + required: + - name + type: object + type: array + x-kubernetes-list-map-keys: + - name + x-kubernetes-list-type: map + limits: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: 'Limits describes the maximum amount of compute + resources allowed. 
More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + requests: + additionalProperties: + anyOf: + - type: integer + - type: string + pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$ + x-kubernetes-int-or-string: true + description: 'Requests describes the minimum amount of compute + resources required. If Requests is omitted for a container, + it defaults to Limits if that is explicitly specified, otherwise + to an implementation-defined value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/' + type: object + type: object + securityContext: + description: The securityContext which should be set on non-privileged + pods. All pods which require privileged permissions will still + require a privileged securityContext. + properties: + enabled: + description: Whether a securityContext should be set on the + pod. In cases where no PSPs are applied to the cluster, + this is not necessary. + type: boolean + fsGroup: + description: A special supplemental group that applies to + all containers in a pod. + format: int64 + type: integer + runAsGroup: + description: The GID to run the entrypoint of the container + process. + format: int64 + type: integer + runAsUser: + description: The UID to run the entrypoint of the container + process. + format: int64 + type: integer + type: object + tolerations: + description: 'Tolerations allows scheduling pods on nodes with + matching taints. More info: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/ + This field cannot be updated once the cluster is created.' + items: + description: The pod this Toleration is attached to tolerates + any taint that matches the triple <key,value,effect> using + the matching operator <operator>. + properties: + effect: + description: Effect indicates the taint effect to match. + Empty means match all taint effects. 
When specified, allowed + values are NoSchedule, PreferNoSchedule and NoExecute. + type: string + key: + description: Key is the taint key that the toleration applies + to. Empty means match all taint keys. If the key is empty, + operator must be Exists; this combination means to match + all values and all keys. + type: string + operator: + description: Operator represents a key's relationship to + the value. Valid operators are Exists and Equal. Defaults + to Equal. Exists is equivalent to wildcard for value, + so that a pod can tolerate all taints of a particular + category. + type: string + tolerationSeconds: + description: TolerationSeconds represents the period of + time the toleration (which must be of effect NoExecute, + otherwise this field is ignored) tolerates the taint. + By default, it is not set, which means tolerate the taint + forever (do not evict). Zero and negative values will + be treated as 0 (evict immediately) by the system. + format: int64 + type: integer + value: + description: Value is the taint value the toleration matches + to. If the operator is Exists, the value should be empty, + otherwise just a regular string. + type: string + type: object + type: array + type: object + registry: + description: 'Registry specifies the image registry to use rather + than Pixie''s default registry (gcr.io). We expect any forward slashes + in Pixie''s image paths are replaced with a "-". For example: "gcr.io/pixie-oss/pixie-dev/vizier/metadata_server_image:latest" + should be pushed to "$registry/gcr.io-pixie-oss-pixie-dev-vizier-metadata_server_image:latest".' + type: string + useEtcdOperator: + description: UseEtcdOperator specifies whether the metadata service + should use etcd for storage. + type: boolean + version: + description: Version is the desired version of the Vizier instance. 
+ type: string + type: object + status: + description: VizierStatus defines the observed state of Vizier + properties: + checksum: + description: A checksum of the last reconciled Vizier spec. If this + checksum does not match the checksum of the current vizier spec, + reconciliation should be performed. + format: byte + type: string + lastReconciliationPhaseTime: + description: LastReconciliationPhaseTime is the last time that the + ReconciliationPhase changed. + format: date-time + type: string + message: + description: Message is a human-readable message with details about + why the Vizier is in this condition. + type: string + operatorVersion: + description: OperatorVersion is the actual version of the Operator + instance. + type: string + reconciliationPhase: + description: ReconciliationPhase describes the state the Reconciler + is in for this Vizier. See the documentation above the ReconciliationPhase + type for more information. + type: string + sentryDSN: + description: SentryDSN is key for Viziers that is used to send errors + and stacktraces to Sentry. + type: string + version: + description: Version is the actual version of the Vizier instance. + type: string + vizierPhase: + description: VizierPhase is a high-level summary of where the Vizier + is in its lifecycle. + type: string + vizierReason: + description: VizierReason is a short, machine understandable string + that gives the reason for the transition into the Vizier's current + status. 
+ type: string + type: object + type: object + served: true + storage: true + subresources: + status: {} +status: + acceptedNames: + kind: "" + plural: "" + conditions: [] + storedVersions: [] diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/00_olm.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/00_olm.yaml new file mode 100644 index 0000000000..fe058140f1 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/00_olm.yaml @@ -0,0 +1,232 @@ +{{- $olmCRDFound := false }} +{{- $nsLookup := len (lookup "v1" "Namespace" "" "") }} +{{- range $index, $crdLookup := (lookup "apiextensions.k8s.io/v1" "CustomResourceDefinition" "" "").items -}}{{ if eq $crdLookup.metadata.name "operators.operators.coreos.com"}}{{ $olmCRDFound = true }}{{ end }}{{end}} +{{ if and (not $olmCRDFound) (not (eq $nsLookup 0))}}{{ fail "CRDs missing! Please deploy CRDs from https://github.com/pixie-io/pixie/tree/main/k8s/operator/helm/crds to continue with deploy." 
}}{{end}} +{{- $lookupLen := 0 -}}{{- $opLookup := (lookup "operators.coreos.com/v1" "OperatorGroup" "" "").items -}}{{if $opLookup }}{{ $lookupLen = len $opLookup }}{{ end }} +{{ if (or (eq (.Values.deployOLM | toString) "true") (and (not (eq (.Values.deployOLM | toString) "false")) (eq $lookupLen 0))) }} +{{ if not (eq .Values.olmNamespace .Release.Namespace) }} +--- +apiVersion: v1 +kind: Namespace +metadata: + name: {{ .Values.olmNamespace }} +{{ end }} +--- +kind: ServiceAccount +apiVersion: v1 +metadata: + name: olm-operator-serviceaccount + namespace: {{ .Values.olmNamespace }} +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: system:controller:operator-lifecycle-manager +rules: +- apiGroups: ["*"] + resources: ["*"] + verbs: ["*"] +- nonResourceURLs: ["*"] + verbs: ["*"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: olm-operator-cluster-binding-olm +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: system:controller:operator-lifecycle-manager +subjects: +- kind: ServiceAccount + name: olm-operator-serviceaccount + namespace: {{ .Values.olmNamespace }} +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: olm-operator + namespace: {{ .Values.olmNamespace }} + labels: + app: olm-operator +spec: + strategy: + type: RollingUpdate + replicas: 1 + selector: + matchLabels: + app: olm-operator + template: + metadata: + labels: + app: olm-operator + spec: + serviceAccountName: olm-operator-serviceaccount + containers: + - name: olm-operator + command: + - /bin/olm + args: + - --namespace + - $(OPERATOR_NAMESPACE) + - --writeStatusName + - "" + image: {{ if .Values.registry }}{{ .Values.registry }}/quay.io-operator-framework-{{ else }}quay.io/operator-framework/{{ end }}olm@sha256:1b6002156f568d722c29138575733591037c24b4bfabc67946f268ce4752c3e6 + ports: + - containerPort: 8080 + - containerPort: 8081 + name: metrics + protocol: TCP + livenessProbe: + 
httpGet: + path: /healthz + port: 8080 + readinessProbe: + httpGet: + path: /healthz + port: 8080 + terminationMessagePolicy: FallbackToLogsOnError + env: + - name: OPERATOR_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: OPERATOR_NAME + value: olm-operator + resources: + requests: + cpu: 10m + memory: 160Mi + nodeSelector: + kubernetes.io/os: linux + tolerations: + - key: "kubernetes.io/arch" + operator: "Equal" + value: "amd64" + effect: "NoSchedule" + - key: "kubernetes.io/arch" + operator: "Equal" + value: "amd64" + effect: "NoExecute" + - key: "kubernetes.io/arch" + operator: "Equal" + value: "arm64" + effect: "NoSchedule" + - key: "kubernetes.io/arch" + operator: "Equal" + value: "arm64" + effect: "NoExecute" +--- +apiVersion: apps/v1 +kind: Deployment +metadata: + name: catalog-operator + namespace: {{ .Values.olmNamespace }} + labels: + app: catalog-operator +spec: + strategy: + type: RollingUpdate + replicas: 1 + selector: + matchLabels: + app: catalog-operator + template: + metadata: + labels: + app: catalog-operator + spec: + serviceAccountName: olm-operator-serviceaccount + containers: + - name: catalog-operator + command: + - /bin/catalog + args: + - '--namespace' + - {{ .Values.olmNamespace }} + - --configmapServerImage={{ if .Values.registry }}{{ .Values.registry }}/quay.io-operator-framework-{{ else }}quay.io/operator-framework/{{ end }}configmap-operator-registry:latest + - --util-image + - {{ if .Values.registry }}{{ .Values.registry }}/quay.io-operator-framework-{{ else }}quay.io/operator-framework/{{ end }}olm@sha256:1b6002156f568d722c29138575733591037c24b4bfabc67946f268ce4752c3e6 + - --opmImage + - {{ if .Values.registry }}{{ .Values.registry }}/quay.io-operator-framework-{{ else }}quay.io/operator-framework/{{ end }}opm@sha256:d999588bd4e9509ec9e75e49adfb6582d256e9421e454c7fb5e9fe57e7b1aada + image: {{ if .Values.registry }}{{ .Values.registry }}/quay.io-operator-framework-{{ else }}quay.io/operator-framework/{{ 
end }}olm@sha256:1b6002156f568d722c29138575733591037c24b4bfabc67946f268ce4752c3e6 + imagePullPolicy: IfNotPresent + ports: + - containerPort: 8080 + - containerPort: 8081 + name: metrics + protocol: TCP + livenessProbe: + httpGet: + path: /healthz + port: 8080 + readinessProbe: + httpGet: + path: /healthz + port: 8080 + terminationMessagePolicy: FallbackToLogsOnError + env: + resources: + requests: + cpu: 10m + memory: 80Mi + nodeSelector: + kubernetes.io/os: linux + tolerations: + - key: "kubernetes.io/arch" + operator: "Equal" + value: "amd64" + effect: "NoSchedule" + - key: "kubernetes.io/arch" + operator: "Equal" + value: "amd64" + effect: "NoExecute" + - key: "kubernetes.io/arch" + operator: "Equal" + value: "arm64" + effect: "NoSchedule" + - key: "kubernetes.io/arch" + operator: "Equal" + value: "arm64" + effect: "NoExecute" +--- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: aggregate-olm-edit + labels: + rbac.authorization.k8s.io/aggregate-to-admin: "true" + rbac.authorization.k8s.io/aggregate-to-edit: "true" +rules: +- apiGroups: ["operators.coreos.com"] + resources: ["subscriptions"] + verbs: ["create", "update", "patch", "delete"] +- apiGroups: ["operators.coreos.com"] + resources: ["clusterserviceversions", "catalogsources", "installplans", "subscriptions"] + verbs: ["delete"] +--- +kind: ClusterRole +apiVersion: rbac.authorization.k8s.io/v1 +metadata: + name: aggregate-olm-view + labels: + rbac.authorization.k8s.io/aggregate-to-admin: "true" + rbac.authorization.k8s.io/aggregate-to-edit: "true" + rbac.authorization.k8s.io/aggregate-to-view: "true" +rules: +- apiGroups: ["operators.coreos.com"] + resources: ["clusterserviceversions", "catalogsources", "installplans", "subscriptions", "operatorgroups"] + verbs: ["get", "list", "watch"] +- apiGroups: ["packages.operators.coreos.com"] + resources: ["packagemanifests", "packagemanifests/icon"] + verbs: ["get", "list", "watch"] +--- +apiVersion: operators.coreos.com/v1 +kind: 
OperatorGroup +metadata: + name: olm-operators + namespace: {{ .Values.olmNamespace }} +spec: + targetNamespaces: + - {{ .Values.olmNamespace }} +{{- end}} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/01_px_olm.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/01_px_olm.yaml new file mode 100644 index 0000000000..2c2921958e --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/01_px_olm.yaml @@ -0,0 +1,13 @@ +{{ if not (eq .Values.olmOperatorNamespace .Release.Namespace) }} +--- +apiVersion: v1 +kind: Namespace +metadata: + name: {{ .Values.olmOperatorNamespace }} +{{ end }} +--- +apiVersion: operators.coreos.com/v1 +kind: OperatorGroup +metadata: + name: global-operators + namespace: {{ .Values.olmOperatorNamespace }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/02_catalog.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/02_catalog.yaml new file mode 100644 index 0000000000..e7f68804a0 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/02_catalog.yaml @@ -0,0 +1,37 @@ +apiVersion: operators.coreos.com/v1alpha1 +kind: CatalogSource +metadata: + name: pixie-operator-index + namespace: {{ .Values.olmOperatorNamespace }} + {{- if .Values.olmCatalogSource.annotations }} + annotations: {{ .Values.olmCatalogSource.annotations | toYaml | nindent 4 }} + {{- end }} + {{- if .Values.olmCatalogSource.labels }} + labels: {{ .Values.olmCatalogSource.labels | toYaml | nindent 4 }} + {{- end }} +spec: + sourceType: grpc + image: {{ if .Values.registry }}{{ .Values.registry }}/gcr.io-pixie-oss-pixie-prod-operator-bundle_index:0.0.1{{ else }}gcr.io/pixie-oss/pixie-prod/operator/bundle_index:0.0.1{{ end }} + displayName: Pixie Vizier Operator + publisher: px.dev + updateStrategy: + registryPoll: + interval: 10m + grpcPodConfig: + tolerations: + - key: 
"kubernetes.io/arch" + operator: "Equal" + value: "amd64" + effect: "NoSchedule" + - key: "kubernetes.io/arch" + operator: "Equal" + value: "amd64" + effect: "NoExecute" + - key: "kubernetes.io/arch" + operator: "Equal" + value: "arm64" + effect: "NoSchedule" + - key: "kubernetes.io/arch" + operator: "Equal" + value: "arm64" + effect: "NoExecute" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/03_subscription.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/03_subscription.yaml new file mode 100644 index 0000000000..78223cc9e3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/03_subscription.yaml @@ -0,0 +1,11 @@ +apiVersion: operators.coreos.com/v1alpha1 +kind: Subscription +metadata: + name: pixie-operator-subscription + namespace: {{ .Values.olmOperatorNamespace }} +spec: + channel: {{ .Values.olmBundleChannel }} + name: pixie-operator + source: pixie-operator-index + sourceNamespace: {{ .Values.olmOperatorNamespace }} + installPlanApproval: Automatic diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/04_vizier.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/04_vizier.yaml new file mode 100644 index 0000000000..7c8ca65ad2 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/04_vizier.yaml @@ -0,0 +1,100 @@ +apiVersion: px.dev/v1alpha1 +kind: Vizier +metadata: + name: {{ .Values.name }} + namespace: {{ .Release.Namespace }} +spec: + {{- if .Values.version }} + version: {{ .Values.version }} + {{- end }} + {{- if .Values.deployKey }} + deployKey: {{ .Values.deployKey }} + {{- end }} + {{- if .Values.customDeployKeySecret }} + customDeployKeySecret: {{ .Values.customDeployKeySecret }} + {{- end }} + cloudAddr: {{ .Values.cloudAddr }} + disableAutoUpdate: {{ .Values.disableAutoUpdate }} + useEtcdOperator: {{ .Values.useEtcdOperator }} + {{- if 
(.Values.global).cluster }} + clusterName: {{ .Values.global.cluster }} + {{- else if .Values.clusterName }} + clusterName: {{ .Values.clusterName }} + {{- end }} + {{- if .Values.devCloudNamespace }} + devCloudNamespace: {{ .Values.devCloudNamespace }} + {{- end }} + {{- if .Values.pemMemoryLimit }} + pemMemoryLimit: {{ .Values.pemMemoryLimit }} + {{- end }} + {{- if .Values.pemMemoryRequest }} + pemMemoryRequest: {{ .Values.pemMemoryRequest }} + {{- end }} + {{- if .Values.dataAccess }} + dataAccess: {{ .Values.dataAccess }} + {{- end }} + {{- if .Values.patches }} + patches: {{ .Values.patches | toYaml | nindent 4 }} + {{- end }} + {{- if ((.Values.global).images).registry }} + registry: {{ .Values.global.images.registry }} + {{- else if .Values.registry }} + registry: {{ .Values.registry }} + {{- end}} + {{- if .Values.autopilot }} + autopilot: {{ .Values.autopilot }} + {{- end}} + {{- if .Values.dataCollectorParams }} + dataCollectorParams: + {{- if .Values.dataCollectorParams.datastreamBufferSize }} + datastreamBufferSize: {{ .Values.dataCollectorParams.datastreamBufferSize }} + {{- end }} + {{- if .Values.dataCollectorParams.datastreamBufferSpikeSize }} + datastreamBufferSpikeSize: {{ .Values.dataCollectorParams.datastreamBufferSpikeSize }} + {{- end }} + {{- if .Values.dataCollectorParams.customPEMFlags }} + customPEMFlags: + {{- range $key, $value := .Values.dataCollectorParams.customPEMFlags}} + {{$key}}: "{{$value}}" + {{- end}} + {{- end }} + {{- end}} + {{- if .Values.leadershipElectionParams }} + leadershipElectionParams: + {{- if .Values.leadershipElectionParams.electionPeriodMs }} + electionPeriodMs: {{ .Values.leadershipElectionParams.electionPeriodMs }} + {{- end }} + {{- end }} + {{- if or .Values.pod.securityContext (or .Values.pod.nodeSelector (or .Values.pod.tolerations (or .Values.pod.annotations (or .Values.pod.labels .Values.pod.resources)))) }} + pod: + {{- if .Values.pod.annotations }} + annotations: {{ .Values.pod.annotations | toYaml | 
nindent 6 }} + {{- end }} + {{- if .Values.pod.labels }} + labels: {{ .Values.pod.labels | toYaml | nindent 6 }} + {{- end }} + {{- if .Values.pod.resources }} + resources: {{ .Values.pod.resources | toYaml | nindent 6 }} + {{- end }} + {{- if .Values.pod.nodeSelector }} + nodeSelector: {{ .Values.pod.nodeSelector | toYaml | nindent 6 }} + {{- end }} + {{- if .Values.pod.tolerations }} + tolerations: {{ .Values.pod.tolerations | toYaml | nindent 4 }} + {{- end }} + {{- if .Values.pod.securityContext }} + securityContext: + enabled: {{ .Values.pod.securityContext.enabled }} + {{- if .Values.pod.securityContext.enabled }} + {{- if .Values.pod.securityContext.fsGroup }} + fsGroup: {{ .Values.pod.securityContext.fsGroup }} + {{- end }} + {{- if .Values.pod.securityContext.runAsUser }} + runAsUser: {{ .Values.pod.securityContext.runAsUser }} + {{- end }} + {{- if .Values.pod.securityContext.runAsGroup }} + runAsGroup: {{ .Values.pod.securityContext.runAsGroup }} + {{- end }} + {{- end }} + {{- end }} + {{- end }} diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/deleter.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/deleter.yaml new file mode 100644 index 0000000000..b1cde0c927 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/deleter.yaml @@ -0,0 +1,25 @@ +apiVersion: batch/v1 +kind: Job +metadata: + annotations: + helm.sh/hook: pre-delete + helm.sh/hook-delete-policy: hook-succeeded + name: vizier-deleter + namespace: '{{ .Release.Namespace }}' +spec: + template: + metadata: + name: vizier-deleter + spec: + containers: + - env: + - name: PL_NAMESPACE + valueFrom: + fieldRef: + fieldPath: metadata.namespace + - name: PL_VIZIER_NAME + value: '{{ .Values.name }}' + image: gcr.io/pixie-oss/pixie-prod/operator-vizier_deleter:0.1.6 + name: delete-job + restartPolicy: Never + serviceAccountName: pl-deleter-service-account diff --git 
a/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/deleter_role.yaml b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/deleter_role.yaml new file mode 100644 index 0000000000..73e5ec7e48 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/templates/deleter_role.yaml @@ -0,0 +1,77 @@ +--- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: pl-deleter-service-account +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: pl-deleter-cluster-binding +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: pl-deleter-cluster-role +subjects: +- kind: ServiceAccount + name: pl-deleter-service-account + namespace: "{{ .Release.Namespace }}" +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: pl-deleter-cluster-role +rules: +# Allow actions on Kubernetes objects +- apiGroups: + - rbac.authorization.k8s.io + - etcd.database.coreos.com + - nats.io + resources: + - clusterroles + - clusterrolebindings + - persistentvolumes + - etcdclusters + - natsclusters + verbs: ["*"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: pl-deleter-role +rules: +- apiGroups: + - "" + - apps + - rbac.authorization.k8s.io + - extensions + - batch + - policy + resources: + - configmaps + - secrets + - pods + - services + - deployments + - daemonsets + - persistentvolumes + - roles + - rolebindings + - serviceaccounts + - statefulsets + - cronjobs + - jobs + verbs: ["*"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: pl-deleter-binding +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: pl-deleter-role +subjects: +- kind: ServiceAccount + name: pl-deleter-service-account + namespace: "{{ .Release.Namespace }}" diff --git a/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/values.yaml
b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/values.yaml new file mode 100644 index 0000000000..a3ffe7c9d3 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/charts/pixie-operator-chart/values.yaml @@ -0,0 +1,75 @@ +## OLM configuration +# OLM is used for deploying and ensuring the operator is up-to-date. +# deployOLM indicates whether OLM should be deployed. This should only be +# disabled if an instance of OLM is already configured on the cluster. +# Should be string "true" if true, but "false" otherwise. If empty, defaults +# to whether OLM is present in the cluster. +deployOLM: "" +# The namespace that olm should run in. If olm has already been deployed +# to the cluster, this should be the namespace that olm is already running in. +olmNamespace: "olm" +# The namespace which olm operators should run in. If olm has already +# been deployed to the cluster, this should be the namespace that the olm operators +# are running in. +olmOperatorNamespace: "px-operator" +# The bundle channel which OLM should listen to for the Vizier operator bundles. +# Should be "stable" for production-versions of the operator, and "test" for release candidates. +olmBundleChannel: "stable" +# Optional annotations and labels for CatalogSource. +olmCatalogSource: + # Optional custom annotations to add to deployed pods managed by CatalogSource object. + annotations: {} + # Optional custom labels to add to deployed pods managed by CatalogSource object. + labels: {} +## Vizier configuration +# The name of the Vizier instance deployed to the cluster. +name: "pixie" +# The name of the cluster that the Vizier is monitoring. If empty, +# a random name will be generated. +clusterName: "" +# The version of the Vizier instance deployed to the cluster. If empty, +# the operator will automatically deploy the latest version. +version: "" +# The deploy key is used to link the deployed Vizier to a specific user/project. 
+# This is required if not specifying a customDeployKeySecret, and can be generated through the UI or CLI. +deployKey: "" +# The deploy key may be read from a custom secret in the Pixie namespace. This secret should be formatted where the +# key of the deploy key is "deploy-key". +customDeployKeySecret: "" +# Whether auto-update should be disabled. +disableAutoUpdate: false +# Whether the metadata service should use etcd for in-memory storage. Recommended +# only for clusters which do not have persistent volumes configured. +useEtcdOperator: false +# The address of the Pixie cloud instance that the Vizier should be connected to. +# This should only be updated when using a self-hosted version of Pixie Cloud. +cloudAddr: "withpixie.ai:443" +# DevCloudNamespace should be specified only for self-hosted versions of Pixie cloud which have no ingress to help +# redirect traffic to the correct service. The DevCloudNamespace is the namespace that the dev Pixie cloud is +# running on, for example: "plc-dev". +devCloudNamespace: "" +# A memory limit applied specifically to PEM pods. If none is specified, a default limit of 2Gi is set. +pemMemoryLimit: "" +# A memory request applied specifically to PEM pods. If none is specified, it will default to pemMemoryLimit. +pemMemoryRequest: "" +# DataAccess defines the level of data that may be accessed when executing a script on the cluster. +dataAccess: "Full" +pod: + # Optional custom annotations to add to deployed pods. + annotations: {} + # Optional custom labels to add to deployed pods. + labels: {} + resources: {} + # limits: + # cpu: 500m + # memory: 7Gi + # requests: + # cpu: 100m + # memory: 5Gi + nodeSelector: {} + tolerations: [] +# A set of custom patches to apply to the deployed Vizier resources. +# The key should be the name of the resource to apply the patch to, and the value is the patch to apply.
+# Currently, only a JSON format is accepted, such as: +# `{"spec": {"template": {"spec": { "tolerations": [{"key": "test", "operator": "Exists", "effect": "NoExecute" }]}}}}` +patches: {} diff --git a/charts/new-relic/nri-bundle/5.0.94/ci/test-values.yaml b/charts/new-relic/nri-bundle/5.0.94/ci/test-values.yaml new file mode 100644 index 0000000000..7ba6c8c329 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/ci/test-values.yaml @@ -0,0 +1,21 @@ +global: + licenseKey: 1234567890abcdef1234567890abcdef12345678 + cluster: test-cluster + +infrastructure: + enabled: true + +prometheus: + enabled: true + +webhook: + enabled: true + +ksm: + enabled: true + +kubeEvents: + enabled: true + +logging: + enabled: true diff --git a/charts/new-relic/nri-bundle/5.0.94/questions.yaml b/charts/new-relic/nri-bundle/5.0.94/questions.yaml new file mode 100644 index 0000000000..de3fa9fea2 --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/questions.yaml @@ -0,0 +1,113 @@ +questions: +- variable: infrastructure.enabled + default: true + required: false + type: boolean + label: Enable Infrastructure + group: "Select Components" +- variable: prometheus.enabled + default: false + required: false + type: boolean + label: Enable Prometheus + group: "Select Components" +- variable: ksm.enabled + default: false + required: false + type: boolean + label: Enable KSM + group: "Select Components" + description: "This is mandatory if `Enable Infrastructure` is set to `true` and the user does not provide its own instance of KSM version >=1.8 and <=2.0" +- variable: webhook.enabled + default: true + required: false + type: boolean + label: Enable webhook + group: "Select Components" +- variable: kubeEvents.enabled + default: false + required: false + type: boolean + label: Enable Kube Events + group: "Select Components" +- variable: logging.enabled + default: false + required: false + type: boolean + label: Enable Logging + group: "Select Components" +- variable: newrelic-pixie.enabled + 
default: false + required: false + type: boolean + label: Enable New Relic Pixie Integration + group: "Select Components" + show_subquestion_if: true + subquestions: + - variable: newrelic-pixie.apiKey + default: "" + required: false + type: string + label: New Relic Pixie API Key + group: "Select Components" + description: "Required if deploying Pixie." +- variable: pixie-chart.enabled + default: false + required: false + type: boolean + label: Enable Pixie Chart + group: "Select Components" + show_subquestion_if: true + subquestions: + - variable: pixie-chart.deployKey + default: "" + required: false + type: string + label: Pixie Deploy Key + group: "Select Components" + description: "Required if deploying Pixie." + - variable: pixie-chart.clusterName + default: "" + required: false + type: string + label: Kubernetes Cluster Name for Pixie + group: "Select Components" + description: "Required if deploying Pixie." +- variable: newrelic-infra-operator.enabled + default: false + required: false + type: boolean + label: Enable New Relic Infra Operator + group: "Select Components" +- variable: metrics-adapter.enabled + default: false + required: false + type: boolean + label: Enable Metrics Adapter + group: "Select Components" +- variable: global.licenseKey + default: "xxxx" + required: true + type: string + label: New Relic License Key + group: "Global Settings" +- variable: global.cluster + default: "xxxx" + required: true + type: string + label: Name of Kubernetes Cluster for New Relic + group: "Global Settings" +- variable: global.lowDataMode + default: false + required: false + type: boolean + label: Enable Low Data Mode + description: "Reduces amount of data ingest by New Relic." + group: "Global Settings" +- variable: global.privileged + default: false + required: false + type: boolean + label: Enable Privileged Mode + description: "Allows for access to underlying node from container." 
+ group: "Global Settings" diff --git a/charts/new-relic/nri-bundle/5.0.94/values.yaml b/charts/new-relic/nri-bundle/5.0.94/values.yaml new file mode 100644 index 0000000000..47c58df8ec --- /dev/null +++ b/charts/new-relic/nri-bundle/5.0.94/values.yaml @@ -0,0 +1,169 @@ +newrelic-infrastructure: + # newrelic-infrastructure.enabled -- Install the [`newrelic-infrastructure` chart](https://github.com/newrelic/nri-kubernetes/tree/main/charts/newrelic-infrastructure) + enabled: true + +nri-prometheus: + # nri-prometheus.enabled -- Install the [`nri-prometheus` chart](https://github.com/newrelic/nri-prometheus/tree/main/charts/nri-prometheus) + enabled: false + +nri-metadata-injection: + # nri-metadata-injection.enabled -- Install the [`nri-metadata-injection` chart](https://github.com/newrelic/k8s-metadata-injection/tree/main/charts/nri-metadata-injection) + enabled: true + +kube-state-metrics: + # kube-state-metrics.enabled -- Install the [`kube-state-metrics` chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-state-metrics) from the stable helm charts repository. + # This is mandatory if `infrastructure.enabled` is set to `true` and the user does not provide its own instance of KSM version >=1.8 and <=2.0. Note, kube-state-metrics v2+ disables labels/annotations + # metrics by default. You can enable the target labels/annotations metrics to be monitored by using the metricLabelsAllowlist/metricAnnotationsAllowList options described [here](https://github.com/prometheus-community/helm-charts/blob/159cd8e4fb89b8b107dcc100287504bb91bf30e0/charts/kube-state-metrics/values.yaml#L274) in + # your Kubernetes clusters. 
+ enabled: false + +nri-kube-events: + # nri-kube-events.enabled -- Install the [`nri-kube-events` chart](https://github.com/newrelic/nri-kube-events/tree/main/charts/nri-kube-events) + enabled: false + +newrelic-logging: + # newrelic-logging.enabled -- Install the [`newrelic-logging` chart](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging) + enabled: false + +newrelic-pixie: + # newrelic-pixie.enabled -- Install the [`newrelic-pixie`](https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-pixie) + enabled: false + +k8s-agents-operator: + # k8s-agents-operator.enabled -- Install the [`k8s-agents-operator` chart](https://github.com/newrelic/k8s-agents-operator/tree/main/charts/k8s-agents-operator) + enabled: false + +pixie-chart: + # pixie-chart.enabled -- Install the [`pixie-chart` chart](https://docs.pixielabs.ai/installing-pixie/install-schemes/helm/#3.-deploy) + enabled: false + +newrelic-infra-operator: + # newrelic-infra-operator.enabled -- Install the [`newrelic-infra-operator` chart](https://github.com/newrelic/newrelic-infra-operator/tree/main/charts/newrelic-infra-operator) (Beta) + enabled: false + +newrelic-prometheus-agent: + # newrelic-prometheus-agent.enabled -- Install the [`newrelic-prometheus-agent` chart](https://github.com/newrelic/newrelic-prometheus-configurator/tree/main/charts/newrelic-prometheus-agent) + enabled: false + +newrelic-k8s-metrics-adapter: + # newrelic-k8s-metrics-adapter.enabled -- Install the [`newrelic-k8s-metrics-adapter.` chart](https://github.com/newrelic/newrelic-k8s-metrics-adapter/tree/main/charts/newrelic-k8s-metrics-adapter) (Beta) + enabled: false + + +# -- change the behaviour globally to all the supported helm charts. +# See [user's guide of the common library](https://github.com/newrelic/helm-charts/blob/master/library/common-library/README.md) for further information. +# @default -- See [`values.yaml`](values.yaml) +global: + # -- The cluster name for the Kubernetes cluster. 
+ cluster: "" + + # -- The license key for your New Relic Account. This will be the preferred configuration option if both `licenseKey` and `customSecret` are specified. + licenseKey: "" + # -- The insights key for your New Relic Account. This will be the preferred configuration option if both `insightsKey` and `customSecret` are specified. + insightsKey: "" + # -- Name of the Secret object where the license key is stored + customSecretName: "" + # -- Key in the Secret object where the license key is stored + customSecretLicenseKey: "" + + # -- Additional labels for chart objects + labels: {} + # -- Additional labels for chart pods + podLabels: {} + + images: + # -- Changes the registry from which the images are pulled. Useful when there is an internal image cache/proxy + registry: "" + # -- Set secrets to be able to fetch images + pullSecrets: [] + + serviceAccount: + # -- Add these annotations to the service account we create + annotations: {} + # -- Configures if the service account should be created or not + create: + # -- Change the name of the service account. This is honored if you disable the creation of the service account in this chart, so you can use your own + name: + + # -- (bool) Sets pod's hostNetwork + # @default -- false + hostNetwork: + # -- Sets pod's dnsConfig + dnsConfig: {} + + # -- Sets pod's priorityClassName + priorityClassName: "" + # -- Sets security context (at pod level) + podSecurityContext: {} + # -- Sets security context (at container level) + containerSecurityContext: {} + + # -- Sets pod/node affinities + affinity: {} + # -- Sets pod's node selector + nodeSelector: {} + # -- Sets pod's tolerations to node taints + tolerations: [] + + # -- Adds extra attributes to the cluster and all the metrics emitted to the backend + customAttributes: {} + + # -- (bool) Reduces number of metrics sent in order to reduce costs + # @default -- false + lowDataMode: + + # -- (bool) In each integration it has different behavior.
See [Further information](#values-managed-globally-3); all of these aim to send fewer metrics to the backend to save costs + # @default -- false + privileged: + + # -- (bool) Must be set to `true` when deploying in an EKS Fargate environment + # @default -- false + fargate: + + # -- Configures the integration to send all HTTP/HTTPS requests through the proxy at that URL. The URL should have a standard format like `https://user:password@hostname:port` + proxy: "" + + # -- (bool) Send the metrics to the staging backend. Requires a valid staging license key + # @default -- false + nrStaging: + fedramp: + # fedramp.enabled -- (bool) Enables FedRAMP + # @default -- false + enabled: + + # -- (bool) Enables debug logs for this integration, or for all integrations if it is set globally + # @default -- false + verboseLog: + + + # To add values to the subcharts. Follow Helm's guide: https://helm.sh/docs/chart_template_guide/subcharts_and_globals + + # If you wish to monitor services running on Kubernetes you can provide integrations + # configuration under `integrations_config` that will be passed down to the `newrelic-infrastructure` chart. + # + # You just need to create a new entry where the "name" is the filename of the configuration file and the data is the content of + # the integration configuration. The name must end in ".yaml" as this will be the
+# +# The data part is the actual integration configuration as described in the spec here: +# https://docs.newrelic.com/docs/integrations/integrations-sdk/file-specifications/integration-configuration-file-specifications-agent-v180 +# +# In the following example you can see how to monitor a Redis integration with autodiscovery +# +# +# newrelic-infrastructure: +# integrations: +# nri-redis-sampleapp: +# discovery: +# command: +# exec: /var/db/newrelic-infra/nri-discovery-kubernetes --tls --port 10250 +# match: +# label.app: sampleapp +# integrations: +# - name: nri-redis +# env: +# # using the discovered IP as the hostname address +# HOSTNAME: ${discovery.ip} +# PORT: 6379 +# labels: +# env: test diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/.helmignore b/charts/stackstate/stackstate-k8s-agent/1.0.98/.helmignore new file mode 100644 index 0000000000..15a5c12775 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/.helmignore @@ -0,0 +1,26 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. 
+.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ +linter_values.yaml +ci/ +installation/ +logo.svg diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/Chart.lock b/charts/stackstate/stackstate-k8s-agent/1.0.98/Chart.lock new file mode 100644 index 0000000000..a8df83f136 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/Chart.lock @@ -0,0 +1,6 @@ +dependencies: +- name: http-header-injector + repository: https://helm.stackstate.io + version: 0.0.11 +digest: sha256:ae5ad7c3176f89b71aabef7cd75f99394750f4fffb9905b86fb45c345595c24c +generated: "2024-05-30T13:30:45.346757+02:00" diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/Chart.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/Chart.yaml new file mode 100644 index 0000000000..b5973dd895 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/Chart.yaml @@ -0,0 +1,25 @@ +annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: StackState Agent + catalog.cattle.io/kube-version: '>=1.19.0-0' + catalog.cattle.io/release-name: stackstate-k8s-agent +apiVersion: v2 +appVersion: 3.0.0 +dependencies: +- alias: httpHeaderInjectorWebhook + name: http-header-injector + repository: https://helm.stackstate.io + version: 0.0.11 +description: Helm chart for the StackState Agent. 
+home: https://github.com/StackVista/stackstate-agent +icon: file://assets/icons/stackstate-k8s-agent.svg +keywords: +- monitoring +- observability +- stackstate +kubeVersion: '>=1.19.0-0' +maintainers: +- email: ops@stackstate.com + name: Stackstate +name: stackstate-k8s-agent +version: 1.0.98 diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/README.md b/charts/stackstate/stackstate-k8s-agent/1.0.98/README.md new file mode 100644 index 0000000000..a8d6620350 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/README.md @@ -0,0 +1,263 @@ +# stackstate-k8s-agent + +Helm chart for the StackState Agent. + +Current chart version is `1.0.98` + +**Homepage:** + +## Requirements + +| Repository | Name | Version | |------------|------|---------| | https://helm.stackstate.io | httpHeaderInjectorWebhook(http-header-injector) | 0.0.11 | + +## Required Values + +In order to successfully install this chart, you **must** provide the following variables: + +* `stackstate.apiKey` +* `stackstate.cluster.name` +* `stackstate.url` + +The parameter `stackstate.cluster.name` is entered when installing the Cluster Agent StackPack. + +Set them on the command line when installing with Helm: + +```shell +helm install \ +--set-string 'stackstate.apiKey'='<your-api-key>' \ +--set-string 'stackstate.cluster.name'='<your-cluster-name>' \ +--set-string 'stackstate.url'='<your-stackstate-url>' \ +stackstate/stackstate-k8s-agent
``` + +## Recommended Values + +It is also recommended that you set a value for `stackstate.cluster.authToken`. If it is not provided, a value will be generated for you, but the value will change each time an upgrade is performed.
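Because an auto-generated `stackstate.cluster.authToken` changes on every upgrade, one way to pre-generate a stable token is sketched below (an illustration only: the chart accepts any random string you supply, and the 32-character hex length used here is an assumption, not a chart requirement):

```python
import secrets

# Pre-generate a random token to pass as stackstate.cluster.authToken,
# so the value stays stable across `helm upgrade` runs.
# Length of 32 hex characters is an assumption, not a chart requirement.
auth_token = secrets.token_hex(16)  # 32 hexadecimal characters
print(auth_token)
```

The printed value can then be passed via `--set-string 'stackstate.cluster.authToken'=...` as shown in the command below.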
+ + The command for **also** setting the token when installing would be: + +```shell +helm install \ +--set-string 'stackstate.apiKey'='<your-api-key>' \ +--set-string 'stackstate.cluster.name'='<your-cluster-name>' \ +--set-string 'stackstate.cluster.authToken'='<your-cluster-auth-token>' \ +--set-string 'stackstate.url'='<your-stackstate-url>' \ +stackstate/stackstate-k8s-agent +``` + +## Values + +| Key | Type | Default | Description | |-----|------|---------|-------------| | all.hardening.enabled | bool | `false` | An indication of whether the containers will be evaluated for hardening at runtime | | all.image.registry | string | `"quay.io"` | The image registry to use. | | checksAgent.affinity | object | `{}` | Affinity settings for pod assignment. | | checksAgent.apm.enabled | bool | `true` | Enable / disable the agent APM module. | | checksAgent.checksTagCardinality | string | `"orchestrator"` | | | checksAgent.config | object | `{"override":[]}` | | | checksAgent.config.override | list | `[]` | A list of objects containing three keys `name`, `path` and `data`, specifying filenames at specific paths which need to be (potentially) overridden using a mounted configmap | | checksAgent.enabled | bool | `true` | Enable / disable running cluster checks in a separately deployed pod | | checksAgent.image.pullPolicy | string | `"IfNotPresent"` | Default container image pull policy. | | checksAgent.image.repository | string | `"stackstate/stackstate-k8s-agent"` | Base container image repository. | | checksAgent.image.tag | string | `"c4caacef"` | Default container image tag. | | checksAgent.livenessProbe.enabled | bool | `true` | Enable use of livenessProbe check. | | checksAgent.livenessProbe.failureThreshold | int | `3` | `failureThreshold` for the liveness probe. | | checksAgent.livenessProbe.initialDelaySeconds | int | `15` | `initialDelaySeconds` for the liveness probe. | | checksAgent.livenessProbe.periodSeconds | int | `15` | `periodSeconds` for the liveness probe.
| +| checksAgent.livenessProbe.successThreshold | int | `1` | `successThreshold` for the liveness probe. | +| checksAgent.livenessProbe.timeoutSeconds | int | `5` | `timeoutSeconds` for the liveness probe. | +| checksAgent.logLevel | string | `"INFO"` | Logging level for clusterchecks agent processes. | +| checksAgent.networkTracing.enabled | bool | `true` | Enable / disable the agent network tracing module. | +| checksAgent.nodeSelector | object | `{}` | Node labels for pod assignment. | +| checksAgent.priorityClassName | string | `""` | Priority class for clusterchecks agent pods. | +| checksAgent.processAgent.enabled | bool | `true` | Enable / disable the agent process agent module. | +| checksAgent.readinessProbe.enabled | bool | `true` | Enable use of readinessProbe check. | +| checksAgent.readinessProbe.failureThreshold | int | `3` | `failureThreshold` for the readiness probe. | +| checksAgent.readinessProbe.initialDelaySeconds | int | `15` | `initialDelaySeconds` for the readiness probe. | +| checksAgent.readinessProbe.periodSeconds | int | `15` | `periodSeconds` for the readiness probe. | +| checksAgent.readinessProbe.successThreshold | int | `1` | `successThreshold` for the readiness probe. | +| checksAgent.readinessProbe.timeoutSeconds | int | `5` | `timeoutSeconds` for the readiness probe. | +| checksAgent.replicas | int | `1` | Number of clusterchecks agent pods to schedule | +| checksAgent.resources.limits.cpu | string | `"400m"` | CPU resource limits. | +| checksAgent.resources.limits.memory | string | `"600Mi"` | Memory resource limits. | +| checksAgent.resources.requests.cpu | string | `"20m"` | CPU resource requests. | +| checksAgent.resources.requests.memory | string | `"512Mi"` | Memory resource requests. 
| +| checksAgent.scc.enabled | bool | `false` | Enable / disable the installation of the SecurityContextConstraints needed for installation on OpenShift | +| checksAgent.serviceaccount.annotations | object | `{}` | Annotations for the service account for the cluster checks pods | +| checksAgent.skipSslValidation | bool | `false` | Set to true if self-signed certificates are used. | +| checksAgent.strategy | object | `{"type":"RollingUpdate"}` | The strategy for the Deployment object. | +| checksAgent.tolerations | list | `[]` | Toleration labels for pod assignment. | +| clusterAgent.affinity | object | `{}` | Affinity settings for pod assignment. | +| clusterAgent.collection.kubeStateMetrics.annotationsAsTags | object | `{}` | Extra annotations to collect from resources and to turn into StackState tags. | +| clusterAgent.collection.kubeStateMetrics.clusterCheck | bool | `false` | For large clusters where the Kubernetes State Metrics Check Core needs to be distributed on dedicated workers. | +| clusterAgent.collection.kubeStateMetrics.enabled | bool | `true` | Enable / disable the cluster agent kube-state-metrics collection. | +| clusterAgent.collection.kubeStateMetrics.labelsAsTags | object | `{}` | Extra labels to collect from resources and to turn into StackState tags. # It has the following structure: # labelsAsTags: # <resource-type>: # can be pod, deployment, node, etc. # <label>: <tag> # where <label> is the kubernetes label and <tag> is the StackState tag # <label>: <tag> # <label>: <tag> # # Warning: the label must match the transformation done by kube-state-metrics, # for example tags.stackstate/version becomes tags_stackstate_version. | +| clusterAgent.collection.kubernetesEvents | bool | `true` | Enable / disable the cluster agent events collection. | +| clusterAgent.collection.kubernetesMetrics | bool | `true` | Enable / disable the cluster agent metrics collection. | +| clusterAgent.collection.kubernetesResources.configmaps | bool | `true` | Enable / disable collection of ConfigMaps.
| +| clusterAgent.collection.kubernetesResources.cronjobs | bool | `true` | Enable / disable collection of CronJobs. | +| clusterAgent.collection.kubernetesResources.daemonsets | bool | `true` | Enable / disable collection of DaemonSets. | +| clusterAgent.collection.kubernetesResources.deployments | bool | `true` | Enable / disable collection of Deployments. | +| clusterAgent.collection.kubernetesResources.endpoints | bool | `true` | Enable / disable collection of Endpoints. If endpoints are disabled then StackState won't be able to connect a Service to the Pods that are serving it | +| clusterAgent.collection.kubernetesResources.horizontalpodautoscalers | bool | `true` | Enable / disable collection of HorizontalPodAutoscalers. | +| clusterAgent.collection.kubernetesResources.ingresses | bool | `true` | Enable / disable collection of Ingresses. | +| clusterAgent.collection.kubernetesResources.jobs | bool | `true` | Enable / disable collection of Jobs. | +| clusterAgent.collection.kubernetesResources.limitranges | bool | `true` | Enable / disable collection of LimitRanges. | +| clusterAgent.collection.kubernetesResources.namespaces | bool | `true` | Enable / disable collection of Namespaces. | +| clusterAgent.collection.kubernetesResources.persistentvolumeclaims | bool | `true` | Enable / disable collection of PersistentVolumeClaims. Disabling this will prevent StackState from connecting PersistentVolumes to the pods they are attached to | +| clusterAgent.collection.kubernetesResources.persistentvolumes | bool | `true` | Enable / disable collection of PersistentVolumes. | +| clusterAgent.collection.kubernetesResources.poddisruptionbudgets | bool | `true` | Enable / disable collection of PodDisruptionBudgets. | +| clusterAgent.collection.kubernetesResources.replicasets | bool | `true` | Enable / disable collection of ReplicaSets. | +| clusterAgent.collection.kubernetesResources.replicationcontrollers | bool | `true` | Enable / disable collection of ReplicationControllers.
| +| clusterAgent.collection.kubernetesResources.resourcequotas | bool | `true` | Enable / disable collection of ResourceQuotas. | +| clusterAgent.collection.kubernetesResources.secrets | bool | `true` | Enable / disable collection of Secrets. | +| clusterAgent.collection.kubernetesResources.statefulsets | bool | `true` | Enable / disable collection of StatefulSets. | +| clusterAgent.collection.kubernetesResources.storageclasses | bool | `true` | Enable / disable collection of StorageClasses. | +| clusterAgent.collection.kubernetesResources.volumeattachments | bool | `true` | Enable / disable collection of Volume Attachments. Used to bind Nodes to Persistent Volumes. | +| clusterAgent.collection.kubernetesTimeout | int | `10` | Default timeout (in seconds) when obtaining information from the Kubernetes API. | +| clusterAgent.collection.kubernetesTopology | bool | `true` | Enable / disable the cluster agent topology collection. | +| clusterAgent.config | object | `{"configMap":{"maxDataSize":null},"events":{"categories":{}},"override":[],"topology":{"collectionInterval":90}}` | | +| clusterAgent.config.configMap.maxDataSize | string | `nil` | Maximum amount of characters for the data property of a ConfigMap collected by the kubernetes topology check | +| clusterAgent.config.events.categories | object | `{}` | Custom mapping from Kubernetes event reason to StackState event category. Categories allowed: Alerts, Activities, Changes, Others | +| clusterAgent.config.override | list | `[]` | A list of objects containing three keys `name`, `path` and `data`, specifying filenames at specific paths which need to be (potentially) overridden using a mounted configmap | +| clusterAgent.config.topology.collectionInterval | int | `90` | Interval for running topology collection, in seconds | +| clusterAgent.enabled | bool | `true` | Enable / disable the cluster agent. | +| clusterAgent.image.pullPolicy | string | `"IfNotPresent"` | Default container image pull policy. 
| +| clusterAgent.image.repository | string | `"stackstate/stackstate-k8s-cluster-agent"` | Base container image repository. | +| clusterAgent.image.tag | string | `"c4caacef"` | Default container image tag. | +| clusterAgent.livenessProbe.enabled | bool | `true` | Enable use of livenessProbe check. | +| clusterAgent.livenessProbe.failureThreshold | int | `3` | `failureThreshold` for the liveness probe. | +| clusterAgent.livenessProbe.initialDelaySeconds | int | `15` | `initialDelaySeconds` for the liveness probe. | +| clusterAgent.livenessProbe.periodSeconds | int | `15` | `periodSeconds` for the liveness probe. | +| clusterAgent.livenessProbe.successThreshold | int | `1` | `successThreshold` for the liveness probe. | +| clusterAgent.livenessProbe.timeoutSeconds | int | `5` | `timeoutSeconds` for the liveness probe. | +| clusterAgent.logLevel | string | `"INFO"` | Logging level for stackstate-k8s-agent processes. | +| clusterAgent.nodeSelector | object | `{}` | Node labels for pod assignment. | +| clusterAgent.priorityClassName | string | `""` | Priority class for stackstate-k8s-agent pods. | +| clusterAgent.readinessProbe.enabled | bool | `true` | Enable use of readinessProbe check. | +| clusterAgent.readinessProbe.failureThreshold | int | `3` | `failureThreshold` for the readiness probe. | +| clusterAgent.readinessProbe.initialDelaySeconds | int | `15` | `initialDelaySeconds` for the readiness probe. | +| clusterAgent.readinessProbe.periodSeconds | int | `15` | `periodSeconds` for the readiness probe. | +| clusterAgent.readinessProbe.successThreshold | int | `1` | `successThreshold` for the readiness probe. | +| clusterAgent.readinessProbe.timeoutSeconds | int | `5` | `timeoutSeconds` for the readiness probe. | +| clusterAgent.replicaCount | int | `1` | Number of replicas of the cluster agent to deploy. | +| clusterAgent.resources.limits.cpu | string | `"400m"` | CPU resource limits. 
| +| clusterAgent.resources.limits.memory | string | `"800Mi"` | Memory resource limits. | +| clusterAgent.resources.requests.cpu | string | `"70m"` | CPU resource requests. | +| clusterAgent.resources.requests.memory | string | `"512Mi"` | Memory resource requests. | +| clusterAgent.service.port | int | `5005` | Change the Cluster Agent service port | +| clusterAgent.service.targetPort | int | `5005` | Change the Cluster Agent service targetPort | +| clusterAgent.serviceaccount.annotations | object | `{}` | Annotations for the service account for the cluster agent pods | +| clusterAgent.skipSslValidation | bool | `false` | If true, ignores the server certificate being signed by an unknown authority. | +| clusterAgent.strategy | object | `{"type":"RollingUpdate"}` | The strategy for the Deployment object. | +| clusterAgent.tolerations | list | `[]` | Toleration labels for pod assignment. | +| fullnameOverride | string | `""` | Override the fullname of the chart. | +| global.extraAnnotations | object | `{}` | Extra annotations added to all resources created by the helm chart | +| global.extraEnv.open | object | `{}` | Extra open environment variables to inject into pods. | +| global.extraEnv.secret | object | `{}` | Extra secret environment variables to inject into pods via a `Secret` object. | +| global.extraLabels | object | `{}` | Extra labels added to all resources created by the helm chart | +| global.imagePullCredentials | object | `{}` | Globally define credentials for pulling images. | +| global.imagePullSecrets | list | `[]` | Secrets / credentials needed for container image registry. | +| global.proxy.url | string | `""` | Proxy for all traffic to StackState | +| global.skipSslValidation | bool | `false` | If true, skip TLS certificate validation. | +| httpHeaderInjectorWebhook.enabled | bool | `false` | Enable the webhook for injecting the http header injection sidecar proxy | +| logsAgent.affinity | object | `{}` | Affinity settings for pod assignment. 
| +| logsAgent.enabled | bool | `true` | Enable / disable k8s pod log collection | +| logsAgent.image.pullPolicy | string | `"IfNotPresent"` | Default container image pull policy. | +| logsAgent.image.repository | string | `"stackstate/promtail"` | Base container image repository. | +| logsAgent.image.tag | string | `"2.9.8-5b179aee"` | Default container image tag. | +| logsAgent.nodeSelector | object | `{}` | Node labels for pod assignment. | +| logsAgent.priorityClassName | string | `""` | Priority class for logsAgent pods. | +| logsAgent.resources.limits.cpu | string | `"1300m"` | CPU resource limits. | +| logsAgent.resources.limits.memory | string | `"192Mi"` | Memory resource limits. | +| logsAgent.resources.requests.cpu | string | `"20m"` | CPU resource requests. | +| logsAgent.resources.requests.memory | string | `"100Mi"` | Memory resource requests. | +| logsAgent.serviceaccount.annotations | object | `{}` | Annotations for the service account for the daemonset pods | +| logsAgent.skipSslValidation | bool | `false` | If true, ignores the server certificate being signed by an unknown authority. | +| logsAgent.tolerations | list | `[]` | Toleration labels for pod assignment. | +| logsAgent.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":100},"type":"RollingUpdate"}` | The update strategy for the DaemonSet object. | +| nameOverride | string | `""` | Override the name of the chart. | +| nodeAgent.affinity | object | `{}` | Affinity settings for pod assignment. | +| nodeAgent.apm.enabled | bool | `true` | Enable / disable the nodeAgent APM module. | +| nodeAgent.autoScalingEnabled | bool | `false` | Enable / disable autoscaling for the node agent pods. | +| nodeAgent.checksTagCardinality | string | `"orchestrator"` | low, orchestrator or high. 
Orchestrator level adds pod_name, high adds display_container_name | +| nodeAgent.config | object | `{"override":[]}` | | +| nodeAgent.config.override | list | `[]` | A list of objects containing three keys `name`, `path` and `data`, specifying filenames at specific paths which need to be (potentially) overridden using a mounted configmap | +| nodeAgent.containerRuntime.customSocketPath | string | `""` | If the container socket path does not match the default for CRI-O, Containerd or Docker, supply a custom socket path. | +| nodeAgent.containerRuntime.hostProc | string | `"/proc"` | | +| nodeAgent.containers.agent.env | object | `{}` | Additional environment variables for the agent container | +| nodeAgent.containers.agent.image.pullPolicy | string | `"IfNotPresent"` | Default container image pull policy. | +| nodeAgent.containers.agent.image.repository | string | `"stackstate/stackstate-k8s-agent"` | Base container image repository. | +| nodeAgent.containers.agent.image.tag | string | `"c4caacef"` | Default container image tag. | +| nodeAgent.containers.agent.livenessProbe.enabled | bool | `true` | Enable use of livenessProbe check. | +| nodeAgent.containers.agent.livenessProbe.failureThreshold | int | `3` | `failureThreshold` for the liveness probe. | +| nodeAgent.containers.agent.livenessProbe.initialDelaySeconds | int | `15` | `initialDelaySeconds` for the liveness probe. | +| nodeAgent.containers.agent.livenessProbe.periodSeconds | int | `15` | `periodSeconds` for the liveness probe. | +| nodeAgent.containers.agent.livenessProbe.successThreshold | int | `1` | `successThreshold` for the liveness probe. | +| nodeAgent.containers.agent.livenessProbe.timeoutSeconds | int | `5` | `timeoutSeconds` for the liveness probe. | +| nodeAgent.containers.agent.logLevel | string | `nil` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off # If not set, fall back to the value of agent.logLevel. 
| +| nodeAgent.containers.agent.processAgent.enabled | bool | `false` | Enable / disable the agent process agent module. - deprecated | +| nodeAgent.containers.agent.readinessProbe.enabled | bool | `true` | Enable use of readinessProbe check. | +| nodeAgent.containers.agent.readinessProbe.failureThreshold | int | `3` | `failureThreshold` for the readiness probe. | +| nodeAgent.containers.agent.readinessProbe.initialDelaySeconds | int | `15` | `initialDelaySeconds` for the readiness probe. | +| nodeAgent.containers.agent.readinessProbe.periodSeconds | int | `15` | `periodSeconds` for the readiness probe. | +| nodeAgent.containers.agent.readinessProbe.successThreshold | int | `1` | `successThreshold` for the readiness probe. | +| nodeAgent.containers.agent.readinessProbe.timeoutSeconds | int | `5` | `timeoutSeconds` for the readiness probe. | +| nodeAgent.containers.agent.resources.limits.cpu | string | `"270m"` | CPU resource limits. | +| nodeAgent.containers.agent.resources.limits.memory | string | `"420Mi"` | Memory resource limits. | +| nodeAgent.containers.agent.resources.requests.cpu | string | `"20m"` | CPU resource requests. | +| nodeAgent.containers.agent.resources.requests.memory | string | `"180Mi"` | Memory resource requests. | +| nodeAgent.containers.processAgent.enabled | bool | `true` | Enable / disable the process agent container. | +| nodeAgent.containers.processAgent.env | object | `{}` | Additional environment variables for the process-agent container | +| nodeAgent.containers.processAgent.image.pullPolicy | string | `"IfNotPresent"` | Process-agent container image pull policy. | +| nodeAgent.containers.processAgent.image.registry | string | `nil` | | +| nodeAgent.containers.processAgent.image.repository | string | `"stackstate/stackstate-k8s-process-agent"` | Process-agent container image repository. | +| nodeAgent.containers.processAgent.image.tag | string | `"cae7a4fa"` | Default process-agent container image tag. 
| +| nodeAgent.containers.processAgent.logLevel | string | `nil` | Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off # If not set, fall back to the value of agent.logLevel. | +| nodeAgent.containers.processAgent.procVolumeReadOnly | bool | `true` | Configure whether /host/proc is read only for the process agent container | +| nodeAgent.containers.processAgent.resources.limits.cpu | string | `"125m"` | CPU resource limits. | +| nodeAgent.containers.processAgent.resources.limits.memory | string | `"400Mi"` | Memory resource limits. | +| nodeAgent.containers.processAgent.resources.requests.cpu | string | `"25m"` | CPU resource requests. | +| nodeAgent.containers.processAgent.resources.requests.memory | string | `"128Mi"` | Memory resource requests. | +| nodeAgent.httpTracing.enabled | bool | `true` | | +| nodeAgent.logLevel | string | `"INFO"` | Logging level for agent processes. | +| nodeAgent.networkTracing.enabled | bool | `true` | Enable / disable the nodeAgent network tracing module. | +| nodeAgent.nodeSelector | object | `{}` | Node labels for pod assignment. | +| nodeAgent.priorityClassName | string | `""` | Priority class for nodeAgent pods. | +| nodeAgent.protocolInspection.enabled | bool | `true` | Enable / disable the nodeAgent protocol inspection. | +| nodeAgent.scaling.autoscalerLimits.agent.maximum.cpu | string | `"200m"` | Maximum CPU resource limits for main agent. | +| nodeAgent.scaling.autoscalerLimits.agent.maximum.memory | string | `"450Mi"` | Maximum memory resource limits for main agent. | +| nodeAgent.scaling.autoscalerLimits.agent.minimum.cpu | string | `"20m"` | Minimum CPU resource limits for main agent. | +| nodeAgent.scaling.autoscalerLimits.agent.minimum.memory | string | `"180Mi"` | Minimum memory resource limits for main agent. | +| nodeAgent.scaling.autoscalerLimits.processAgent.maximum.cpu | string | `"200m"` | Maximum CPU resource limits for process agent. 
| +| nodeAgent.scaling.autoscalerLimits.processAgent.maximum.memory | string | `"500Mi"` | Maximum memory resource limits for process agent. | +| nodeAgent.scaling.autoscalerLimits.processAgent.minimum.cpu | string | `"25m"` | Minimum CPU resource limits for process agent. | +| nodeAgent.scaling.autoscalerLimits.processAgent.minimum.memory | string | `"100Mi"` | Minimum memory resource limits for process agent. | +| nodeAgent.scc.enabled | bool | `false` | Enable / disable the installation of the SecurityContextConstraints needed for installation on OpenShift. | +| nodeAgent.service | object | `{"annotations":{},"loadBalancerSourceRanges":["10.0.0.0/8"],"type":"ClusterIP"}` | The Kubernetes service for the agent | +| nodeAgent.service.annotations | object | `{}` | Annotations for the service | +| nodeAgent.service.loadBalancerSourceRanges | list | `["10.0.0.0/8"]` | The IPv4 CIDR allowed to reach the LoadBalancer for the service. For LoadBalancer type of service only. | +| nodeAgent.service.type | string | `"ClusterIP"` | Type of Kubernetes service: ClusterIP, LoadBalancer, NodePort | +| nodeAgent.serviceaccount.annotations | object | `{}` | Annotations for the service account for the agent daemonset pods | +| nodeAgent.skipKubeletTLSVerify | bool | `false` | Set to true if you want to skip kubelet TLS verification. | +| nodeAgent.skipSslValidation | bool | `false` | Set to true if self-signed certificates are used. | +| nodeAgent.tolerations | list | `[]` | Toleration labels for pod assignment. | +| nodeAgent.updateStrategy | object | `{"rollingUpdate":{"maxUnavailable":100},"type":"RollingUpdate"}` | The update strategy for the DaemonSet object. | +| openShiftLogging.installSecret | bool | `false` | Install a secret for logging on OpenShift | +| processAgent.checkIntervals.connections | int | `30` | Override the default value of the connections check interval in seconds. 
| +| processAgent.checkIntervals.container | int | `28` | Override the default value of the container check interval in seconds. | +| processAgent.checkIntervals.process | int | `32` | Override the default value of the process check interval in seconds. | +| processAgent.softMemoryLimit.goMemLimit | string | `"340MiB"` | Soft limit for golang heap allocation; as a rule of thumb, this should be around 85% of nodeAgent.containers.processAgent.resources.limits.memory. | +| processAgent.softMemoryLimit.httpObservationsBufferSize | int | `40000` | Sets a maximum for the number of http observations to keep in memory between check runs; using 40k requires around 400MiB of memory. | +| processAgent.softMemoryLimit.httpStatsBufferSize | int | `40000` | Sets a maximum for the number of http stats to keep in memory between check runs; using 40k requires around 400MiB of memory. | +| stackstate.apiKey | string | `nil` | **PROVIDE YOUR API KEY HERE** API key to be used by the StackState agent. | +| stackstate.cluster.authToken | string | `""` | Provide a token to enable secure communication between the agent and the cluster agent. | +| stackstate.cluster.name | string | `nil` | **PROVIDE KUBERNETES CLUSTER NAME HERE** Name of the Kubernetes cluster where the agent will be installed. | +| stackstate.customApiKeySecretKey | string | `"sts-api-key"` | Key in the secret containing the receiver API key. | +| stackstate.customClusterAuthTokenSecretKey | string | `"sts-cluster-auth-token"` | Key in the secret containing the cluster auth token. | +| stackstate.customSecretName | string | `""` | Name of the secret containing the receiver API key. | +| stackstate.manageOwnSecrets | bool | `false` | Set to true if you don't want this helm chart to create secrets for you. | +| stackstate.url | string | `nil` | **PROVIDE STACKSTATE URL HERE** URL of the StackState installation to receive data from the agent. 
| +| targetSystem | string | `"linux"` | Target OS for this deployment (possible values: linux) | diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/README.md.gotmpl b/charts/stackstate/stackstate-k8s-agent/1.0.98/README.md.gotmpl new file mode 100644 index 0000000000..7909e6f0d7 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/README.md.gotmpl @@ -0,0 +1,45 @@ +{{ template "chart.header" . }} +{{ template "chart.description" . }} + +Current chart version is `{{ template "chart.version" . }}` + +{{ template "chart.homepageLine" . }} + +{{ template "chart.requirementsSection" . }} + +## Required Values + +In order to successfully install this chart, you **must** provide the following variables: + +* `stackstate.apiKey` +* `stackstate.cluster.name` +* `stackstate.url` + +The parameter `stackstate.cluster.name` is entered when installing the Cluster Agent StackPack. + +Install them on the command line on Helm with the following command: + +```shell +helm install \ +--set-string 'stackstate.apiKey'='' \ +--set-string 'stackstate.cluster.name'='' \ +--set-string 'stackstate.url'='' \ +stackstate/stackstate-k8s-agent +``` + +## Recommended Values + +It is also recommended that you set a value for `stackstate.cluster.authToken`. If it is not provided, a value will be generated for you, but the value will change each time an upgrade is performed. + +The command for **also** installing with a set token would be: + +```shell +helm install \ +--set-string 'stackstate.apiKey'='' \ +--set-string 'stackstate.cluster.name'='' \ +--set-string 'stackstate.cluster.authToken'='' \ +--set-string 'stackstate.url'='' \ +stackstate/stackstate-k8s-agent +``` + +{{ template "chart.valuesSection" . 
}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/Releasing.md b/charts/stackstate/stackstate-k8s-agent/1.0.98/Releasing.md new file mode 100644 index 0000000000..bab6c2b944 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/Releasing.md @@ -0,0 +1,15 @@ +To make a new release of this helm chart, follow these steps: + + +- Create a branch from master +- Set the latest tags for the docker images, based on the dev settings (while we do not promote to prod, the moment we promote to prod we should take those tags) from https://gitlab.com/stackvista/devops/agent-promoter/-/blob/master/config.yml. Set the value to the following keys: + * stackstate-k8s-cluster-agent: + * [clusterAgent.image.tag] + * stackstate-k8s-agent: + * [nodeAgent.containers.agent.image.tag] + * [checksAgent.image.tag] + * stackstate-k8s-process-agent: + * [nodeAgent.containers.processAgent.image.tag] +- Bump the version of the chart +- Merge the MR and hit the public release button on the CI pipeline +- Manually smoke-test (deploy) the newly released stackstate/stackstate-k8s-agent chart to make sure it runs diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/app-readme.md b/charts/stackstate/stackstate-k8s-agent/1.0.98/app-readme.md new file mode 100644 index 0000000000..8025fe1d36 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/app-readme.md @@ -0,0 +1,5 @@ +## Introduction + +StackState is a modern Application Troubleshooting and Observability solution designed for the rapidly evolving engineering landscape. With specific enhancements for Kubernetes environments, it empowers engineers, allowing them to remediate application issues independently in production. + +The StackState Agent auto-discovers your entire environment in minutes, assimilating topology, logs, metrics, and events, and sends this off to the StackState server. By using StackState you're able to track all activity in your environment in real time and over time. 
StackState provides instant understanding of the business impact of an issue, offering end-to-end chain observability and ensuring that you can quickly correlate any product or environmental changes to the overall health of your cloud-native implementation. diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/.helmignore b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/.helmignore new file mode 100644 index 0000000000..69790771c9 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/.helmignore @@ -0,0 +1,25 @@ +# Patterns to ignore when building packages. +# This supports shell glob matching, relative path matching, and +# negation (prefixed with !). Only one pattern per line. +.DS_Store +# Common VCS dirs +.git/ +.gitignore +.bzr/ +.bzrignore +.hg/ +.hgignore +.svn/ +# Common backup files +*.swp +*.bak +*.tmp +*~ +# Various IDEs +.project +.idea/ +*.tmproj +.vscode/ +linter_values.yaml +ci/ +installation/ diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/Chart.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/Chart.yaml new file mode 100644 index 0000000000..ff28ae8dab --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/Chart.yaml @@ -0,0 +1,15 @@ +apiVersion: v2 +appVersion: 0.0.1 +description: 'Helm chart for deploying the http-header-injector sidecar, which automatically + injects x-request-id into http traffic going through the cluster for pods which + have the annotation `http-header-injector.stackstate.io/inject: enabled` set. 
' +home: https://github.com/StackVista/http-header-injector +icon: https://www.stackstate.com/wp-content/uploads/2019/02/152x152-favicon.png +keywords: +- monitoring +- stackstate +maintainers: +- email: ops@stackstate.com + name: Stackstate Lupulus Team +name: http-header-injector +version: 0.0.11 diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/README.md b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/README.md new file mode 100644 index 0000000000..840ff52404 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/README.md @@ -0,0 +1,56 @@ +# http-header-injector + +![Version: 0.0.11](https://img.shields.io/badge/Version-0.0.11-informational?style=flat-square) ![AppVersion: 0.0.1](https://img.shields.io/badge/AppVersion-0.0.1-informational?style=flat-square) + +Helm chart for deploying the http-header-injector sidecar, which automatically injects x-request-id into http traffic +going through the cluster for pods which have the annotation `http-header-injector.stackstate.io/inject: enabled` set. + +**Homepage:** + +## Maintainers + +| Name | Email | Url | +| ---- | ------ | --- | +| Stackstate Lupulus Team | | | + +## Values + +| Key | Type | Default | Description |
|-----|------|---------|-------------| +| certificatePrehook | object | `{"image":{"pullPolicy":"IfNotPresent","registry":null,"repository":"stackstate/container-tools","tag":"1.4.0"},"resources":{"limits":{"cpu":"100m","memory":"200Mi"},"requests":{"cpu":"100m","memory":"200Mi"}}}` | Helm prehook to set up / remove a certificate for the sidecarInjector mutation webhook | +| certificatePrehook.image.pullPolicy | string | `"IfNotPresent"` | Policy when pulling an image | +| certificatePrehook.image.registry | string | `nil` | Registry for the docker image. | +| certificatePrehook.image.tag | string | `"1.4.0"` | The tag for the docker image | +| debug | bool | `false` | Enable debugging. 
This will leave artifacts around, like the prehook jobs, for further inspection | +| enabled | bool | `true` | Enable/disable the mutation webhook | +| global.extraAnnotations | object | `{}` | Extra annotations added to all resources created by the helm chart | +| global.extraLabels | object | `{}` | Extra labels added to all resources created by the helm chart | +| global.imagePullCredentials | object | `{}` | Globally define credentials for pulling images. | +| global.imagePullSecrets | list | `[]` | Globally add image pull secrets that are used. | +| global.imageRegistry | string | `nil` | Globally override the image registry that is used. Can be overridden by specific containers. Defaults to quay.io | +| images.pullSecretName | string | `nil` | | +| proxy | object | `{"image":{"pullPolicy":"IfNotPresent","registry":null,"repository":"stackstate/http-header-injector-proxy","tag":"sha-5ff79451"},"resources":{"limits":{"memory":"40Mi"},"requests":{"memory":"25Mi"}}}` | Proxy being injected into pods for rewriting http headers | +| proxy.image.pullPolicy | string | `"IfNotPresent"` | Policy when pulling an image | +| proxy.image.registry | string | `nil` | Registry for the docker image. | +| proxy.image.tag | string | `"sha-5ff79451"` | The tag for the docker image | +| proxy.resources.limits.memory | string | `"40Mi"` | Memory resource limits. | +| proxy.resources.requests.memory | string | `"25Mi"` | Memory resource requests. | +| proxyInit | object | `{"image":{"pullPolicy":"IfNotPresent","registry":null,"repository":"stackstate/http-header-injector-proxy-init","tag":"sha-5ff79451"}}` | InitContainer within pod which redirects traffic to the proxy container. 
| +| proxyInit.image.pullPolicy | string | `"IfNotPresent"` | Policy when pulling an image | +| proxyInit.image.registry | string | `nil` | Registry for the docker image | +| proxyInit.image.tag | string | `"sha-5ff79451"` | The tag for the docker image | +| sidecarInjector | object | `{"image":{"pullPolicy":"IfNotPresent","registry":null,"repository":"stackstate/generic-sidecar-injector","tag":"sha-9c852245"}}` | Service for injecting the proxy sidecar into pods | +| sidecarInjector.image.pullPolicy | string | `"IfNotPresent"` | Policy when pulling an image | +| sidecarInjector.image.registry | string | `nil` | Registry for the docker image. | +| sidecarInjector.image.tag | string | `"sha-9c852245"` | The tag for the docker image | +| webhook | object | `{"failurePolicy":"Ignore","tls":{"certManager":{"issuer":"","issuerKind":"ClusterIssuer","issuerNamespace":""},"mode":"generated","provided":{"caBundle":"","crt":"","key":""},"secret":{"name":""}}}` | MutationWebhook that will be installed to inject a sidecar into pods | +| webhook.failurePolicy | string | `"Ignore"` | How should the webhook fail? It is best to use Ignore, because there is a brief moment at initialization when the hook exists but the service does not. Also, setting this to Fail can cause the control plane to become unresponsive. | +| webhook.tls.certManager.issuer | string | `""` | The issuer that is used for the webhook. Only used if you set webhook.tls.mode to "cert-manager". | +| webhook.tls.certManager.issuerKind | string | `"ClusterIssuer"` | The issuer kind that is used for the webhook, valid values are "Issuer" or "ClusterIssuer". Only used if you set webhook.tls.mode to "cert-manager". | +| webhook.tls.certManager.issuerNamespace | string | `""` | The namespace the cert-manager issuer is located in. If left empty, defaults to the release's namespace that is used for the webhook. Only used if you set webhook.tls.mode to "cert-manager". 
| +| webhook.tls.mode | string | `"generated"` | The mode for the webhook. Can be "provided", "generated", "secret" or "cert-manager". If you want to use cert-manager, you need to install it first. NOTE: If you choose "generated", additional privileges are required to create the certificate and webhook at runtime. | +| webhook.tls.provided.caBundle | string | `""` | The caBundle that is used for the webhook. This is the certificate that is used to sign the webhook. Only used if you set webhook.tls.mode to "provided". | +| webhook.tls.provided.crt | string | `""` | The certificate that is used for the webhook. Only used if you set webhook.tls.mode to "provided". | +| webhook.tls.provided.key | string | `""` | The key that is used for the webhook. Only used if you set webhook.tls.mode to "provided". | +| webhook.tls.secret.name | string | `""` | The name of the secret containing the pre-provisioned certificate data that is used for the webhook. Only used if you set webhook.tls.mode to "secret". | + diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/Readme.md.gotpl b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/Readme.md.gotpl new file mode 100644 index 0000000000..225032aa2a --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/Readme.md.gotpl @@ -0,0 +1,26 @@ +{{ template "chart.header" . }} +{{ template "chart.description" . }} + +Current chart version is `{{ template "chart.version" . }}` + +{{ template "chart.homepageLine" . }} + +{{ template "chart.requirementsSection" . }} + +## Required Values + +No values have to be included to install this chart. After installing this chart, it becomes possible to annotate pods with +the `http-header-injector.stackstate.io/inject: enabled` annotation to make sure the sidecar provided by this chart is +activated on a pod. + +## Recommended Values + +{{ template "chart.valuesSection" . 
-}} + +## Install + +Install from the command line on Helm with the following command: + +```shell +helm install stackstate/http-header-injector +``` diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/_defines.tpl b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/_defines.tpl new file mode 100644 index 0000000000..ee6b7320ef --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/_defines.tpl @@ -0,0 +1,131 @@ +{{- define "http-header-injector.app.name" -}} +{{ .Release.Name }}-http-header-injector +{{- end -}} + +{{- define "http-header-injector.webhook-service.name" -}} +{{ .Release.Name }}-http-header-injector +{{- end -}} + +{{- define "http-header-injector.webhook-service.fqname" -}} +{{ .Release.Name }}-http-header-injector.{{ .Release.Namespace }}.svc +{{- end -}} + +{{- define "http-header-injector.cert-secret.name" -}} +{{- if eq .Values.webhook.tls.mode "secret" -}} +{{ .Values.webhook.tls.secret.name }} +{{- else -}} +{{ .Release.Name }}-http-injector-cert +{{- end -}} +{{- end -}} + +{{- define "http-header-injector.cert-clusterrole.name" -}} +{{ .Release.Name }}-http-injector-cert-cluster-role +{{- end -}} + +{{- define "http-header-injector.cert-serviceaccount.name" -}} +{{ .Release.Name }}-http-injector-cert-sa +{{- end -}} + +{{- define "http-header-injector.cert-config.name" -}} +{{ .Release.Name }}-cert-config +{{- end -}} + +{{- define "http-header-injector.mutatingwebhookconfiguration.name" -}} +{{ .Release.Name }}-http-header-injector-webhook.stackstate.io +{{- end -}} + +{{- define "http-header-injector.webhook-config.name" -}} +{{ .Release.Name }}-http-header-injector-config +{{- end -}} + +{{- define "http-header-injector.mutating-webhook.name" -}} +{{ .Release.Name }}-http-header-injector-webhook +{{- end -}} + +{{- define "http-header-injector.pull-secret.name" -}} +{{ include "http-header-injector.app.name" . 
}}-pull-secret +{{- end -}} + +{{/* If the issuer is located in a different namespace, it is possible to set that, else default to the release namespace */}} +{{- define "cert-manager.certificate.namespace" -}} +{{ .Values.webhook.tls.certManager.issuerNamespace | default .Release.Namespace }} +{{- end -}} + +{{- define "http-header-injector.image.registry.global" -}} + {{- if .Values.global }} + {{- .Values.global.imageRegistry | default "quay.io" -}} + {{- else -}} + quay.io + {{- end -}} +{{- end -}} + +{{- define "http-header-injector.image.registry" -}} + {{- if ((.ContainerConfig).image).registry -}} + {{- tpl .ContainerConfig.image.registry . -}} + {{- else -}} + {{- include "http-header-injector.image.registry.global" . }} + {{- end -}} +{{- end -}} + +{{- define "http-header-injector.image.pullSecrets" -}} + {{- $pullSecrets := list }} + {{- $pullSecrets = append $pullSecrets (include "http-header-injector.pull-secret.name" .) }} + {{- range .Values.global.imagePullSecrets -}} + {{- $pullSecrets = append $pullSecrets . -}} + {{- end -}} + {{- if (not (empty $pullSecrets)) -}} +imagePullSecrets: + {{- range $pullSecrets | uniq }} + - name: {{ . }} + {{- end }} + {{- end -}} +{{- end -}} + +{{- define "http-header-injector.cert-setup.container.main" }} +{{- $containerConfig := dict "ContainerConfig" .Values.certificatePrehook -}} +name: webhook-cert-setup +image: "{{ include "http-header-injector.image.registry" (merge $containerConfig .) }}/{{ .Values.certificatePrehook.image.repository }}:{{ .Values.certificatePrehook.image.tag }}" +imagePullPolicy: {{ .Values.certificatePrehook.image.pullPolicy }} +{{- with .Values.certificatePrehook.resources }} +resources: + {{- toYaml . | nindent 2 }} +{{- end }} +volumeMounts: + - name: "{{ include "http-header-injector.cert-config.name" . 
}}" + mountPath: /scripts + readOnly: true +command: ["/scripts/generate-cert.sh"] +{{- end }} + +{{- define "http-header-injector.cert-delete.container.main" }} +{{- $containerConfig := dict "ContainerConfig" .Values.certificatePrehook -}} +name: webhook-cert-delete +image: "{{ include "http-header-injector.image.registry" (merge $containerConfig .) }}/{{ .Values.certificatePrehook.image.repository }}:{{ .Values.certificatePrehook.image.tag }}" +imagePullPolicy: {{ .Values.certificatePrehook.image.pullPolicy }} +{{- with .Values.certificatePrehook.resources }} +resources: + {{- toYaml . | nindent 2 }} +{{- end }} +volumeMounts: + - name: "{{ include "http-header-injector.cert-config.name" . }}" + mountPath: /scripts +command: [ "/scripts/delete-cert.sh" ] +{{- end }} + +{{/* +Returns a YAML with extra annotations. +*/}} +{{- define "http-header-injector.global.extraAnnotations" -}} +{{- with .Values.global.extraAnnotations }} +{{- toYaml . }} +{{- end }} +{{- end -}} + +{{/* +Returns a YAML with extra labels. +*/}} +{{- define "http-header-injector.global.extraLabels" -}} +{{- with .Values.global.extraLabels }} +{{- toYaml . }} +{{- end }} +{{- end -}} \ No newline at end of file diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-clusterrolbinding.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-clusterrolbinding.yaml new file mode 100644 index 0000000000..fc0c012585 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-clusterrolbinding.yaml @@ -0,0 +1,24 @@ +{{- if eq .Values.webhook.tls.mode "generated" }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: "{{ include "http-header-injector.cert-serviceaccount.name" . 
}}" + labels: + app.kubernetes.io/component: http-header-injector-cert-hook + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . | indent 4 }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-delete,post-upgrade + "helm.sh/hook-weight": "-3" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded +{{ include "http-header-injector.global.extraAnnotations" . | indent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: "{{ include "http-header-injector.cert-clusterrole.name" . }}" +subjects: + - kind: ServiceAccount + name: "{{ include "http-header-injector.cert-serviceaccount.name" . }}" + namespace: {{ .Release.Namespace }} +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-clusterrole.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-clusterrole.yaml new file mode 100644 index 0000000000..afab838b3c --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-clusterrole.yaml @@ -0,0 +1,26 @@ +{{- if eq .Values.webhook.tls.mode "generated" }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: "{{ include "http-header-injector.cert-clusterrole.name" . }}" + labels: + app.kubernetes.io/component: http-header-injector-cert-hook + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . | indent 4 }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-delete,post-upgrade + "helm.sh/hook-weight": "-4" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded +{{ include "http-header-injector.global.extraAnnotations" . 
| indent 4 }} +rules: + - apiGroups: [ "admissionregistration.k8s.io" ] + resources: [ "mutatingwebhookconfigurations" ] + verbs: [ "get", "create", "patch","update","delete" ] + - apiGroups: [ "" ] + resources: [ "secrets" ] + verbs: [ "create", "get", "patch","update","delete" ] + - apiGroups: [ "apps" ] + resources: [ "deployments" ] + verbs: [ "get" ] +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-config.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-config.yaml new file mode 100644 index 0000000000..a22bdf4fb9 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-config.yaml @@ -0,0 +1,158 @@ +{{- if eq .Values.webhook.tls.mode "generated" }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: "{{ include "http-header-injector.cert-config.name" . }}" + labels: + app.kubernetes.io/component: http-header-injector-cert-hook + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . | indent 4 }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-delete,post-upgrade + "helm.sh/hook-weight": "-3" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded +{{ include "http-header-injector.global.extraAnnotations" . | indent 4 }} +data: + generate-cert.sh: | + #!/bin/bash + + # We are going for a self-signed certificate here. We would like to use k8s CertificateSigningRequest, however, + # currently there are no out of the box signers that can sign a 'server auth' certificate, which is required for mutation webhooks. 
+ set -ex + + SCRIPTDIR="${BASH_SOURCE%/*}" + + DIR=`mktemp -d` + + cd "$DIR" + + {{ if .Values.enabled }} + echo "Chart enabled, creating secret and webhook" + + openssl genrsa -out ca.key 2048 + + openssl req -x509 -new -nodes -key ca.key -subj "/CN={{ include "http-header-injector.webhook-service.fqname" . }}" -days 10000 -out ca.crt + + openssl genrsa -out tls.key 2048 + + openssl req -new -key tls.key -out tls.csr -config "$SCRIPTDIR/csr.conf" + + openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key \ + -CAcreateserial -out tls.crt -days 10000 \ + -extensions v3_ext -extfile "$SCRIPTDIR/csr.conf" -sha256 + + # Create or update the secret + echo "Applying secret" + kubectl create secret tls "{{ include "http-header-injector.cert-secret.name" . }}" \ + -n "{{ .Release.Namespace }}" \ + --cert=./tls.crt \ + --key=./tls.key \ + --dry-run=client \ + -o yaml | kubectl apply -f - + + echo "Applying mutationwebhook" + caBundle=`base64 -w 0 ca.crt` + cat "$SCRIPTDIR/mutatingwebhookconfiguration.yaml" | sed "s/\\\$CA_BUNDLE/$caBundle/g" | kubectl apply -f - + {{ else }} + echo "Chart disabled, not creating secret and webhook" + {{ end }} + delete-cert.sh: | + #!/bin/bash + + set -x + + DIR="${BASH_SOURCE%/*}" + if [[ ! -d "$DIR" ]]; then DIR="$PWD"; fi + if [[ "$DIR" = "." ]]; then DIR="$PWD"; fi + + cd "$DIR" + + # Using detection of the deployment here to also make this work in post-delete. + if kubectl get deployments "{{ include "http-header-injector.app.name" . }}" -n "{{ .Release.Namespace }}"; then + echo "Chart enabled, not removing secret and mutationwebhook" + exit 0 + else + echo "Chart disabled, removing secret and mutationwebhook" + fi + + # Delete the secret + echo "Deleting secret" + kubectl delete secret "{{ include "http-header-injector.cert-secret.name" . }}" -n "{{ .Release.Namespace }}" + + echo "Deleting mutatingwebhook" + kubectl delete MutatingWebhookConfiguration "{{ include "http-header-injector.mutating-webhook.name" . 
}}" -n "{{ .Release.Namespace }}" + + exit 0 + + csr.conf: | + [ req ] + default_bits = 2048 + prompt = no + default_md = sha256 + req_extensions = req_ext + distinguished_name = dn + + [ dn ] + C = NL + ST = Utrecht + L = Hilversum + O = StackState + OU = Dev + CN = {{ include "http-header-injector.webhook-service.fqname" . }} + + [ req_ext ] + subjectAltName = @alt_names + + [ alt_names ] + DNS.1 = {{ include "http-header-injector.webhook-service.fqname" . }} + + [ v3_ext ] + authorityKeyIdentifier=keyid,issuer:always + basicConstraints=CA:FALSE + keyUsage=keyEncipherment,dataEncipherment + extendedKeyUsage=serverAuth + subjectAltName=@alt_names + + mutatingwebhookconfiguration.yaml: | + apiVersion: admissionregistration.k8s.io/v1 + kind: MutatingWebhookConfiguration + metadata: + name: "{{ include "http-header-injector.mutating-webhook.name" . }}" + namespace: "{{ .Release.Namespace }}" + labels: +{{ include "http-header-injector.global.extraLabels" . | indent 8 }} + annotations: +{{ include "http-header-injector.global.extraAnnotations" . | indent 8 }} + webhooks: + - clientConfig: + caBundle: "$CA_BUNDLE" + service: + name: "{{ include "http-header-injector.webhook-service.name" . }}" + path: /mutate + namespace: {{ .Release.Namespace }} + port: 8443 + # Putting failure on ignore, not doing so can crash the entire control plane if something goes wrong with the service. + failurePolicy: "{{ .Values.webhook.failurePolicy }}" + name: "{{ include "http-header-injector.mutatingwebhookconfiguration.name" . 
}}" + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: NotIn + values: + - kube-system + - cert-manager + - {{ .Release.Namespace }} + rules: + - apiGroups: + - "" + apiVersions: + - v1 + operations: + - CREATE + resources: + - pods + sideEffects: None + admissionReviewVersions: + - v1 +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-job-delete.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-job-delete.yaml new file mode 100644 index 0000000000..6f72ce2478 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-job-delete.yaml @@ -0,0 +1,39 @@ +{{- if eq .Values.webhook.tls.mode "generated" }} +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ .Release.Name }}-header-injector-cert-delete + labels: + app.kubernetes.io/component: http-header-injector-cert-hook-delete + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . | indent 4 }} + annotations: + "helm.sh/hook": post-delete,post-upgrade + "helm.sh/hook-weight": "-2" + "helm.sh/hook-delete-policy": before-hook-creation{{- if not .Values.debug -}},hook-succeeded{{- end }} +{{ include "http-header-injector.global.extraAnnotations" . | indent 4 }} +spec: + template: + metadata: + labels: + app.kubernetes.io/component: http-header-injector-delete + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . | indent 8 }} + annotations: + checksum/config: {{ include (print $.Template.BasePath "/cert-hook-config.yaml") . | sha256sum }} +{{ include "http-header-injector.global.extraAnnotations" . 
| indent 8 }} + spec: + serviceAccountName: "{{ include "http-header-injector.cert-serviceaccount.name" . }}" + {{- include "http-header-injector.image.pullSecrets" . | nindent 6 }} + volumes: + - name: "{{ include "http-header-injector.cert-config.name" . }}" + configMap: + name: "{{ include "http-header-injector.cert-config.name" . }}" + defaultMode: 0777 + containers: + - {{ include "http-header-injector.cert-delete.container.main" . | nindent 8 }} + restartPolicy: Never + backoffLimit: 0 +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-job-setup.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-job-setup.yaml new file mode 100644 index 0000000000..cc1c89631f --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-job-setup.yaml @@ -0,0 +1,39 @@ +{{- if eq .Values.webhook.tls.mode "generated" }} +apiVersion: batch/v1 +kind: Job +metadata: + name: {{ .Release.Name }}-header-injector-cert-setup + labels: + app.kubernetes.io/component: http-header-injector-cert-hook-setup + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . | indent 4 }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade + "helm.sh/hook-weight": "-2" + "helm.sh/hook-delete-policy": before-hook-creation{{- if not .Values.debug -}},hook-succeeded{{- end }} +{{ include "http-header-injector.global.extraAnnotations" . | indent 4 }} +spec: + template: + metadata: + labels: + app.kubernetes.io/component: http-header-injector-setup + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . 
| indent 8 }} + annotations: + checksum/config: {{ include (print $.Template.BasePath "/cert-hook-config.yaml") . | sha256sum }} +{{ include "http-header-injector.global.extraAnnotations" . | indent 8 }} + spec: + serviceAccountName: "{{ include "http-header-injector.cert-serviceaccount.name" . }}" + {{- include "http-header-injector.image.pullSecrets" . | nindent 6 }} + volumes: + - name: "{{ include "http-header-injector.cert-config.name" . }}" + configMap: + name: "{{ include "http-header-injector.cert-config.name" . }}" + defaultMode: 0777 + containers: + - {{ include "http-header-injector.cert-setup.container.main" . | nindent 8 }} + restartPolicy: Never + backoffLimit: 0 +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-serviceaccount.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-serviceaccount.yaml new file mode 100644 index 0000000000..29b26df95b --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/cert-hook-serviceaccount.yaml @@ -0,0 +1,18 @@ +{{- if eq .Values.webhook.tls.mode "generated" }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: "{{ include "http-header-injector.cert-serviceaccount.name" . }}" + namespace: {{ .Release.Namespace }} + annotations: + "helm.sh/hook": pre-install,pre-upgrade,post-delete,post-upgrade + "helm.sh/hook-weight": "-4" + "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded +{{ include "http-header-injector.global.extraAnnotations" . | indent 4 }} + labels: + app.kubernetes.io/component: http-header-injector-cert-hook + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} + app: "{{ include "http-header-injector.app.name" . }}" +{{ include "http-header-injector.global.extraLabels" . 
| indent 4 }} +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/pull-secret.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/pull-secret.yaml new file mode 100644 index 0000000000..80b4ee404b --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/pull-secret.yaml @@ -0,0 +1,32 @@ +{{- $defaultRegistry := .Values.global.imageRegistry }} +{{- $top := . }} +{{- $registryAuthMap := dict }} + +{{- range $registry, $credentials := .Values.global.imagePullCredentials }} + {{- $registryAuthDocument := dict -}} + {{- $_ := set $registryAuthDocument "username" $credentials.username }} + {{- $_ := set $registryAuthDocument "password" $credentials.password }} + {{- $authMessage := printf "%s:%s" $registryAuthDocument.username $registryAuthDocument.password | b64enc }} + {{- $_ := set $registryAuthDocument "auth" $authMessage }} + {{- if eq $registry "default" }} + {{- $registryAuthMap := set $registryAuthMap (include "http-header-injector.image.registry.global" $top) $registryAuthDocument }} + {{ else }} + {{- $registryAuthMap := set $registryAuthMap $registry $registryAuthDocument }} + {{- end }} +{{- end }} +{{- $dockerAuthsDocuments := dict "auths" $registryAuthMap }} + +apiVersion: v1 +kind: Secret +metadata: + labels: + app.kubernetes.io/component: http-header-injector + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . | indent 4 }} + annotations: +{{ include "http-header-injector.global.extraAnnotations" . | indent 4 }} + name: {{ include "http-header-injector.pull-secret.name" . 
}} +data: + .dockerconfigjson: {{ $dockerAuthsDocuments | toJson | b64enc | quote }} +type: kubernetes.io/dockerconfigjson \ No newline at end of file diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-cert-secret.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-cert-secret.yaml new file mode 100644 index 0000000000..f571ca86bb --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-cert-secret.yaml @@ -0,0 +1,18 @@ +{{- if eq .Values.webhook.tls.mode "provided" }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "http-header-injector.cert-secret.name" . }} + namespace: {{ .Release.Namespace }} + labels: + app.kubernetes.io/component: http-header-injector + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . | indent 4 }} + annotations: +{{ include "http-header-injector.global.extraAnnotations" . | indent 4 }} +type: kubernetes.io/tls +data: + tls.crt: {{ .Values.webhook.tls.provided.crt | b64enc }} + tls.key: {{ .Values.webhook.tls.provided.key | b64enc }} +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-certificate.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-certificate.yaml new file mode 100644 index 0000000000..a68c7c5f62 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-certificate.yaml @@ -0,0 +1,23 @@ +{{- if eq .Values.webhook.tls.mode "cert-manager" }} +apiVersion: cert-manager.io/v1 +kind: Certificate +metadata: + name: {{ include "http-header-injector.webhook-service.name" . }} + namespace: {{ include "cert-manager.certificate.namespace" . 
}} + labels: + app.kubernetes.io/component: http-header-injector + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . | indent 4 }} + annotations: +{{ include "http-header-injector.global.extraAnnotations" . | indent 4 }} +spec: + secretName: {{ include "http-header-injector.cert-secret.name" . }} + issuerRef: + name: {{ .Values.webhook.tls.certManager.issuer }} + kind: {{ .Values.webhook.tls.certManager.issuerKind }} + dnsNames: + - "{{ include "http-header-injector.webhook-service.name" . }}" + - "{{ include "http-header-injector.webhook-service.name" . }}.{{ .Release.Namespace }}" + - "{{ include "http-header-injector.webhook-service.name" . }}.{{ .Release.Namespace }}.svc" +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-config.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-config.yaml new file mode 100644 index 0000000000..20b38ce963 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-config.yaml @@ -0,0 +1,128 @@ +{{- if .Values.enabled -}} +{{- $proxyContainerConfig := dict "ContainerConfig" .Values.proxy -}} +{{- $proxyInitContainerConfig := dict "ContainerConfig" .Values.proxyInit -}} +apiVersion: v1 +kind: ConfigMap +metadata: + labels: + app.kubernetes.io/component: http-header-injector + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . | indent 4 }} + annotations: +{{ include "http-header-injector.global.extraAnnotations" . 
| indent 4 }} + name: {{ .Release.Name }}-http-header-injector-config +data: + sidecarconfig.yaml: | + initContainers: + - name: http-header-proxy-init + image: "{{ include "http-header-injector.image.registry" (merge $proxyInitContainerConfig .) }}/{{ .Values.proxyInit.image.repository }}:{{ .Values.proxyInit.image.tag }}" + imagePullPolicy: {{ .Values.proxyInit.image.pullPolicy }} + command: ["/init-iptables.sh"] + env: + - name: CHART_VERSION + value: "{{ .Chart.Version }}" + - name: PROXY_PORT + value: {% if index .Annotations "config.http-header-injector.stackstate.io/proxy-port" %}"{% index .Annotations "config.http-header-injector.stackstate.io/proxy-port" %}"{% else %}"7060"{% end %} + - name: PROXY_UID + value: {% if index .Annotations "config.http-header-injector.stackstate.io/proxy-uid" %}"{% index .Annotations "config.http-header-injector.stackstate.io/proxy-uid" %}"{% else %}"2103"{% end %} + - name: POD_HOST_NETWORK + value: {% .Spec.HostNetwork %} + {% if eq (index .Annotations "linkerd.io/inject") "enabled" %} + - name: LINKERD + value: true + # Reference: https://linkerd.io/2.13/reference/proxy-configuration/ + - name: LINKERD_PROXY_UID + value: {% if index .Annotations "config.linkerd.io/proxy-uid" %}"{% index .Annotations "config.linkerd.io/proxy-uid" %}"{% else %}"2102"{% end %} + # Due to https://github.com/linkerd/linkerd2/issues/10981 this is currently not really possible, still bringing in the code for future reference + - name: LINKERD_ADMIN_PORT + value: {% if index .Annotations "config.linkerd.io/admin-port" %}"{% index .Annotations "config.linkerd.io/admin-port" %}"{% else %}"4191"{% end %} + {% end %} + securityContext: + allowPrivilegeEscalation: false + capabilities: + add: + - NET_ADMIN + - NET_RAW + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: false + runAsUser: 0 + seccompProfile: + type: RuntimeDefault + volumeMounts: + # This is required for iptables to be able to run + - mountPath: /run + name: 
http-header-proxy-init-xtables-lock + + containers: + - name: http-header-proxy + image: "{{ include "http-header-injector.image.registry" (merge $proxyContainerConfig .) }}/{{ .Values.proxy.image.repository }}:{{ .Values.proxy.image.tag }}" + imagePullPolicy: {{ .Values.proxy.image.pullPolicy }} + env: + - name: CHART_VERSION + value: "{{ .Chart.Version }}" + - name: PORT + value: {% if index .Annotations "config.http-header-injector.stackstate.io/proxy-port" %}"{% index .Annotations "config.http-header-injector.stackstate.io/proxy-port" %}"{% else %}"7060"{% end %} + - name: DEBUG + value: {% if index .Annotations "config.http-header-injector.stackstate.io/debug" %}"{% index .Annotations "config.http-header-injector.stackstate.io/debug" %}"{% else %}"disabled"{% end %} + securityContext: + runAsUser: {% if index .Annotations "config.http-header-injector.stackstate.io/proxy-uid" %}{% index .Annotations "config.http-header-injector.stackstate.io/proxy-uid" %}{% else %}2103{% end %} + seccompProfile: + type: RuntimeDefault + {{- with .Values.proxy.resources }} + resources: + {{- toYaml . | nindent 12 }} + {{- end }} + - name: http-header-inject-debug + image: "{{ include "http-header-injector.image.registry" (merge $proxyContainerConfig .) 
}}/{{ .Values.proxyInit.image.repository }}:{{ .Values.proxyInit.image.tag }}" + imagePullPolicy: {{ .Values.proxyInit.image.pullPolicy }} + command: ["/bin/sh", "-c", "while echo \"Running\"; do sleep 1; done"] + securityContext: + allowPrivilegeEscalation: false + capabilities: + add: + - NET_ADMIN + - NET_RAW + privileged: false + readOnlyRootFilesystem: true + runAsNonRoot: false + runAsUser: 0 + seccompProfile: + type: RuntimeDefault + volumeMounts: + # This is required for iptables to be able to run + - mountPath: /run + name: http-header-proxy-init-xtables-lock + + volumes: + - emptyDir: {} + name: http-header-proxy-init-xtables-lock + + mutationconfig.yaml: | + mutationConfigs: + - name: "http-header-injector" + annotationNamespace: "http-header-injector.stackstate.io" + annotationTrigger: "inject" + annotationConfig: + volumeMounts: [] + initContainersBeforePodInitContainers: [ "http-header-proxy-init" ] + initContainers: [ "http-header-proxy-init" ] + containers: [ "http-header-proxy" ] + volumes: [ "http-header-proxy-init-xtables-lock" ] + volumeMounts: [ ] + # Namespaces are ignored by the mutatingwebhook + ignoreNamespaces: [ ] + - name: "http-header-injector-debug" + annotationNamespace: "http-header-injector-debug.stackstate.io" + annotationTrigger: "inject" + annotationConfig: + volumeMounts: [] + initContainersBeforePodInitContainers: [ ] + initContainers: [ ] + containers: [ "http-header-inject-debug" ] + volumes: [ "http-header-proxy-init-xtables-lock" ] + volumeMounts: [ ] + # Namespaces are ignored by the mutatingwebhook + ignoreNamespaces: [ ] + {{- end -}} \ No newline at end of file diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-deployment.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-deployment.yaml new file mode 100644 index 0000000000..8af6ff51a2 --- /dev/null +++ 
b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-deployment.yaml @@ -0,0 +1,61 @@ +{{- if .Values.enabled -}} +{{- $containerConfig := dict "ContainerConfig" .Values.sidecarInjector -}} +apiVersion: apps/v1 +kind: Deployment +metadata: + labels: + app.kubernetes.io/component: http-header-injector + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} + app: "{{ include "http-header-injector.app.name" . }}" +{{ include "http-header-injector.global.extraLabels" . | indent 4 }} + annotations: +{{ include "http-header-injector.global.extraAnnotations" . | indent 4 }} + name: "{{ include "http-header-injector.app.name" . }}" +spec: + replicas: 1 + selector: + matchLabels: + app: "{{ include "http-header-injector.app.name" . }}" + template: + metadata: + labels: + app.kubernetes.io/component: http-header-injector + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} + app: "{{ include "http-header-injector.app.name" . }}" +{{ include "http-header-injector.global.extraLabels" . | indent 8 }} + annotations: + checksum/config: {{ include (print $.Template.BasePath "/webhook-config.yaml") . | sha256sum }} + # This is here to make sure the generic injector gets restarted and picks up a new secret that may have been generated upon upgrade. + revision: "{{ .Release.Revision }}" +{{ include "http-header-injector.global.extraAnnotations" . | indent 8 }} + name: "{{ include "http-header-injector.app.name" . }}" + spec: + {{- include "http-header-injector.image.pullSecrets" . | nindent 6 }} + volumes: + - name: "{{ include "http-header-injector.webhook-config.name" . }}" + configMap: + name: "{{ include "http-header-injector.webhook-config.name" . }}" + - name: "{{ include "http-header-injector.cert-secret.name" . }}" + secret: + secretName: "{{ include "http-header-injector.cert-secret.name" . 
}}" + containers: + - image: "{{ include "http-header-injector.image.registry" (merge $containerConfig .) }}/{{ .Values.sidecarInjector.image.repository }}:{{ .Values.sidecarInjector.image.tag }}" + imagePullPolicy: {{ .Values.sidecarInjector.image.pullPolicy }} + name: http-header-injector + volumeMounts: + - name: "{{ include "http-header-injector.webhook-config.name" . }}" + mountPath: /etc/webhook/config + readOnly: true + - name: "{{ include "http-header-injector.cert-secret.name" . }}" + mountPath: /etc/webhook/certs + readOnly: true + command: [ "/sidecarinjector" ] + args: + - --port=8443 + - --sidecar-config-file=/etc/webhook/config/sidecarconfig.yaml + - --mutation-config-file=/etc/webhook/config/mutationconfig.yaml + - --cert-file-path=/etc/webhook/certs/tls.crt + - --key-file-path=/etc/webhook/certs/tls.key +{{- end -}} \ No newline at end of file diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-mutatingwebhookconfiguration.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-mutatingwebhookconfiguration.yaml new file mode 100644 index 0000000000..de0acc1dff --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-mutatingwebhookconfiguration.yaml @@ -0,0 +1,54 @@ +{{- if not (eq .Values.webhook.tls.mode "generated") }} +apiVersion: admissionregistration.k8s.io/v1 +kind: MutatingWebhookConfiguration +metadata: + name: "{{ include "http-header-injector.mutating-webhook.name" . }}" + namespace: "{{ .Release.Namespace }}" + labels: + app.kubernetes.io/component: http-header-injector + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . 
| indent 4 }} + annotations: + {{- if eq .Values.webhook.tls.mode "cert-manager" }} + cert-manager.io/inject-ca-from: {{ include "cert-manager.certificate.namespace" . }}/{{ include "http-header-injector.webhook-service.name" . }} + {{- else if eq .Values.webhook.tls.mode "secret" }} + cert-manager.io/inject-ca-from-secret: {{ .Release.Namespace }}/{{ .Values.webhook.tls.secret.name | required "'webhook.tls.secret.name' is required when webhook.tls.mode is 'secret'" }} + {{- end }} +{{ include "http-header-injector.global.extraAnnotations" . | indent 4 }} +webhooks: + - clientConfig: + {{- if eq .Values.webhook.tls.mode "provided" }} + caBundle: "{{ .Values.webhook.tls.provided.caBundle | b64enc }}" + {{- else if or (eq .Values.webhook.tls.mode "cert-manager") (eq .Values.webhook.tls.mode "secret") }} + caBundle: "" + {{- end }} + service: + name: "{{ include "http-header-injector.webhook-service.name" . }}" + path: /mutate + namespace: {{ .Release.Namespace }} + port: 8443 + # Setting failurePolicy to Ignore; not doing so can crash the entire control plane if something goes wrong with the service. + failurePolicy: "{{ .Values.webhook.failurePolicy }}" + name: "{{ include "http-header-injector.mutatingwebhookconfiguration.name" . 
}}" + namespaceSelector: + matchExpressions: + - key: kubernetes.io/metadata.name + operator: NotIn + values: + - kube-system + - cert-manager + - {{ .Release.Namespace }} + rules: + - apiGroups: + - "" + apiVersions: + - v1 + operations: + - CREATE + resources: + - pods + sideEffects: None + admissionReviewVersions: + - v1 +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-service.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-service.yaml new file mode 100644 index 0000000000..55abdb022c --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/templates/webhook-service.yaml @@ -0,0 +1,20 @@ +{{- if .Values.enabled -}} +apiVersion: v1 +kind: Service +metadata: + labels: + app.kubernetes.io/component: http-header-injector + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "http-header-injector.app.name" . }} +{{ include "http-header-injector.global.extraLabels" . | indent 4 }} + annotations: +{{ include "http-header-injector.global.extraAnnotations" . | indent 4 }} + name: "{{ include "http-header-injector.webhook-service.name" . }}" +spec: + ports: + - port: 8443 + protocol: TCP + targetPort: 8443 + selector: + app: "{{ include "http-header-injector.app.name" . }}" +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/values.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/values.yaml new file mode 100644 index 0000000000..a1b4be2fcc --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/charts/http-header-injector/values.yaml @@ -0,0 +1,110 @@ +# enabled -- Enable/disable the mutationwebhook +enabled: true + +# debug -- Enable debugging. 
This will leave artifacts around, like the prehook jobs, for further inspection +debug: false + +global: + # global.imageRegistry -- Globally override the image registry that is used. Can be overridden by specific containers. Defaults to quay.io + imageRegistry: null + # global.imagePullSecrets -- Globally add image pull secrets that are used. + imagePullSecrets: [] + # global.imagePullCredentials -- Globally define credentials for pulling images. + imagePullCredentials: {} + + # global.extraLabels -- Extra labels added to all resources created by the helm chart + extraLabels: {} + # global.extraAnnotations -- Extra annotations added to all resources created by the helm chart + extraAnnotations: {} + +images: + pullSecretName: + +# proxy -- Proxy being injected into pods for rewriting http headers +proxy: + image: + # proxy.image.registry -- Registry for the docker image. + registry: + # proxy.image.repository -- Repository for the docker image + repository: "stackstate/http-header-injector-proxy" + # proxy.image.pullPolicy -- Policy when pulling an image + pullPolicy: IfNotPresent + # proxy.image.tag -- The tag for the docker image + tag: sha-5ff79451 + + # proxy.resources -- Resources for the proxy container + resources: + requests: + # proxy.resources.requests.memory -- Memory resource requests. + memory: "25Mi" + limits: + # proxy.resources.limits.memory -- Memory resource limits. + memory: "40Mi" + +# proxyInit -- InitContainer within pod which redirects traffic to the proxy container. 
+proxyInit: + image: + # proxyInit.image.registry -- Registry for the docker image + registry: + # proxyInit.image.repository -- Repository for the docker image + repository: "stackstate/http-header-injector-proxy-init" + # proxyInit.image.pullPolicy -- Policy when pulling an image + pullPolicy: IfNotPresent + # proxyInit.image.tag -- The tag for the docker image + tag: sha-5ff79451 + +# sidecarInjector -- Service for injecting the proxy sidecar into pods +sidecarInjector: + image: + # sidecarInjector.image.registry -- Registry for the docker image. + registry: + # sidecarInjector.image.repository -- Repository for the docker image + repository: "stackstate/generic-sidecar-injector" + # sidecarInjector.image.pullPolicy -- Policy when pulling an image + pullPolicy: IfNotPresent + # sidecarInjector.image.tag -- The tag for the docker image + tag: sha-9c852245 + +# certificatePrehook -- Helm prehook to set up/remove a certificate for the sidecarInjector mutationwebhook +certificatePrehook: + image: + # certificatePrehook.image.registry -- Registry for the docker image. + registry: + # certificatePrehook.image.repository -- Repository for the docker image. + repository: stackstate/container-tools + # certificatePrehook.image.pullPolicy -- Policy when pulling an image + pullPolicy: IfNotPresent + # certificatePrehook.image.tag -- The tag for the docker image + tag: 1.4.0 + resources: + limits: + cpu: "100m" + memory: "200Mi" + requests: + cpu: "100m" + memory: "200Mi" + +# webhook -- MutationWebhook that will be installed to inject a sidecar into pods +webhook: + # webhook.failurePolicy -- How should the webhook fail? Ignore is the safest choice, because there is a brief moment at initialization when the hook is registered but the service is not yet available. Setting this to Fail can also cause the control plane to become unresponsive. + failurePolicy: Ignore + tls: + # webhook.tls.mode -- The mode for the webhook. Can be "provided", "generated", "secret" or "cert-manager".
If you want to use cert-manager, you need to install it first. NOTE: If you choose "generated", additional privileges are required to create the certificate and webhook at runtime. + mode: "generated" + provided: + # webhook.tls.provided.caBundle -- The caBundle that is used for the webhook. This is the CA certificate used to validate the webhook's serving certificate. Only used if you set webhook.tls.mode to "provided". + caBundle: "" + # webhook.tls.provided.crt -- The certificate that is used for the webhook. Only used if you set webhook.tls.mode to "provided". + crt: "" + # webhook.tls.provided.key -- The key that is used for the webhook. Only used if you set webhook.tls.mode to "provided". + key: "" + certManager: + # webhook.tls.certManager.issuer -- The issuer that is used for the webhook. Only used if you set webhook.tls.mode to "cert-manager". + issuer: "" + # webhook.tls.certManager.issuerKind -- The issuer kind that is used for the webhook, valid values are "Issuer" or "ClusterIssuer". Only used if you set webhook.tls.mode to "cert-manager". + issuerKind: "ClusterIssuer" + # webhook.tls.certManager.issuerNamespace -- The namespace the cert-manager issuer is located in. If left empty, it defaults to the release's namespace. Only used if you set webhook.tls.mode to "cert-manager". + issuerNamespace: "" + secret: + # webhook.tls.secret.name -- The name of the secret containing the pre-provisioned certificate data that is used for the webhook. Only used if you set webhook.tls.mode to "secret". + name: "" diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/questions.yml b/charts/stackstate/stackstate-k8s-agent/1.0.98/questions.yml new file mode 100644 index 0000000000..5d6e6a0112 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/questions.yml @@ -0,0 +1,184 @@ +questions: + - variable: stackstate.apiKey + label: "StackState API Key" + type: string + description: "The API key for StackState."
+ required: true + group: General + - variable: stackstate.url + label: "StackState URL" + type: string + description: "The URL where StackState is running." + required: true + group: General + - variable: stackstate.cluster.name + label: "StackState Cluster Name" + type: string + description: "The StackState Cluster Name given when installing the instance of the Kubernetes StackPack in StackState. This is used to identify the cluster in StackState." + required: true + group: General + - variable: all.registry.override + label: "Override Default Image Registry" + type: boolean + description: "Whether or not to override the default image registry." + default: false + group: "General" + show_subquestions_if: true + subquestions: + - variable: all.image.registry + label: "Docker Image Registry" + type: string + description: "The registry to pull the StackState Agent images from." + default: "quay.io" + - variable: global.imagePullCredentials.username + label: "Docker Image Pull Username" + type: string + description: "The username to use when pulling the StackState Agent images." + - variable: global.imagePullCredentials.password + label: "Docker Image Pull Password" + type: secret + description: "The password to use when pulling the StackState Agent images." + - variable: nodeAgent.containers.agent.resources.override + label: "Override Node Agent Resource Allocation" + type: boolean + description: "Whether or not to override the default resources." + default: "false" + group: "Node Agent" + show_subquestions_if: true + subquestions: + - variable: nodeAgent.containers.agent.resources.requests.cpu + label: "CPU Requests" + type: string + description: "The requested CPU for the Node Agent." + default: "20m" + - variable: nodeAgent.containers.agent.resources.requests.memory + label: "Memory Requests" + type: string + description: "The requested memory for the Node Agent." 
+ default: "180Mi" + - variable: nodeAgent.containers.agent.resources.limits.cpu + label: "CPU Limit" + type: string + description: "The CPU limit for the Node Agent." + default: "270m" + - variable: nodeAgent.containers.agent.resources.limits.memory + label: "Memory Limit" + type: string + description: "The memory limit for the Node Agent." + default: "420Mi" + - variable: nodeAgent.containers.processAgent.enabled + label: "Enable Process Agent" + type: boolean + description: "Whether or not to enable the Process Agent." + default: "true" + group: "Process Agent" + - variable: nodeAgent.skipKubeletTLSVerify + label: "Skip Kubelet TLS Verify" + type: boolean + description: "Whether or not to skip TLS verification when connecting to the kubelet API." + default: "true" + group: "Process Agent" + - variable: nodeAgent.containers.processAgent.resources.override + label: "Override Process Agent Resource Allocation" + type: boolean + description: "Whether or not to override the default resources." + default: "false" + group: "Process Agent" + show_subquestions_if: true + subquestions: + - variable: nodeAgent.containers.processAgent.resources.requests.cpu + label: "CPU Requests" + type: string + description: "The requested CPU for the Process Agent." + default: "25m" + - variable: nodeAgent.containers.processAgent.resources.requests.memory + label: "Memory Requests" + type: string + description: "The requested memory for the Process Agent." + default: "128Mi" + - variable: nodeAgent.containers.processAgent.resources.limits.cpu + label: "CPU Limit" + type: string + description: "The CPU limit for the Process Agent." + default: "125m" + - variable: nodeAgent.containers.processAgent.resources.limits.memory + label: "Memory Limit" + type: string + description: "The memory limit for the Process Agent." + default: "400Mi" + - variable: clusterAgent.enabled + label: "Enable Cluster Agent" + type: boolean + description: "Whether or not to enable the Cluster Agent." 
+ default: "true" + group: "Cluster Agent" + - variable: clusterAgent.collection.kubernetesResources.secrets + label: "Collect Secret Resources" + type: boolean + description: | + Whether or not to collect Kubernetes Secrets. + NOTE: StackState will not send the actual data of the secrets, only the metadata and a secure hash of the data. + default: "true" + group: "Cluster Agent" + - variable: clusterAgent.resources.override + label: "Override Cluster Agent Resource Allocation" + type: boolean + description: "Whether or not to override the default resources." + default: "false" + group: "Cluster Agent" + show_subquestions_if: true + subquestions: + - variable: clusterAgent.resources.requests.cpu + label: "CPU Requests" + type: string + description: "The requested CPU for the Cluster Agent." + default: "70m" + - variable: clusterAgent.resources.requests.memory + label: "Memory Requests" + type: string + description: "The requested memory for the Cluster Agent." + default: "512Mi" + - variable: clusterAgent.resources.limits.cpu + label: "CPU Limit" + type: string + description: "The CPU limit for the Cluster Agent." + default: "400m" + - variable: clusterAgent.resources.limits.memory + label: "Memory Limit" + type: string + description: "The memory limit for the Cluster Agent." + default: "800Mi" + - variable: logsAgent.enabled + label: "Enable Logs Agent" + type: boolean + description: "Whether or not to enable the Logs Agent." + default: "true" + group: "Logs Agent" + - variable: logsAgent.resources.override + label: "Override Logs Agent Resource Allocation" + type: boolean + description: "Whether or not to override the default resources." + default: "false" + group: "Logs Agent" + show_subquestions_if: true + subquestions: + - variable: logsAgent.resources.requests.cpu + label: "CPU Requests" + type: string + description: "The requested CPU for the Logs Agent." 
+ default: "20m" + - variable: logsAgent.resources.requests.memory + label: "Memory Requests" + type: string + description: "The requested memory for the Logs Agent." + default: "100Mi" + - variable: logsAgent.resources.limits.cpu + label: "CPU Limit" + type: string + description: "The CPU limit for the Logs Agent." + default: "1300m" + - variable: logsAgent.resources.limits.memory + label: "Memory Limit" + type: string + description: "The memory limit for the Logs Agent." + default: "192Mi" diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_cluster-agent-kube-state-metrics.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_cluster-agent-kube-state-metrics.yaml new file mode 100644 index 0000000000..f99fbf6187 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_cluster-agent-kube-state-metrics.yaml @@ -0,0 +1,62 @@ +{{- define "cluster-agent-kube-state-metrics" -}} +{{- $kubeRes := .Values.clusterAgent.collection.kubernetesResources }} +{{- if .Values.clusterAgent.collection.kubeStateMetrics.clusterCheck }} +cluster_check: true +{{- end }} +init_config: +instances: + - collectors: + - nodes + - pods + - services + {{- if $kubeRes.persistentvolumeclaims }} + - persistentvolumeclaims + {{- end }} + {{- if $kubeRes.persistentvolumes }} + - persistentvolumes + {{- end }} + {{- if $kubeRes.namespaces }} + - namespaces + {{- end }} + {{- if $kubeRes.endpoints }} + - endpoints + {{- end }} + {{- if $kubeRes.daemonsets }} + - daemonsets + {{- end }} + {{- if $kubeRes.deployments }} + - deployments + {{- end }} + {{- if $kubeRes.replicasets }} + - replicasets + {{- end }} + {{- if $kubeRes.statefulsets }} + - statefulsets + {{- end }} + {{- if $kubeRes.cronjobs }} + - cronjobs + {{- end }} + {{- if $kubeRes.jobs }} + - jobs + {{- end }} + {{- if $kubeRes.ingresses }} + - ingresses + {{- end }} + {{- if $kubeRes.secrets }} + - secrets + {{- end }} + - resourcequotas + - replicationcontrollers + - limitranges + - 
horizontalpodautoscalers + - poddisruptionbudgets + - storageclasses + - volumeattachments + {{- if .Values.clusterAgent.collection.kubeStateMetrics.clusterCheck }} + skip_leader_election: true + {{- end }} + labels_as_tags: + {{ .Values.clusterAgent.collection.kubeStateMetrics.labelsAsTags | toYaml | indent 8 }} + annotations_as_tags: + {{ .Values.clusterAgent.collection.kubeStateMetrics.annotationsAsTags | toYaml | indent 8 }} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_container-agent.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_container-agent.yaml new file mode 100644 index 0000000000..09f9591c63 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_container-agent.yaml @@ -0,0 +1,191 @@ +{{- define "container-agent" -}} +- name: node-agent +{{- if .Values.all.hardening.enabled}} + lifecycle: + preStop: + exec: + command: [ "/bin/sh", "-c", "echo 'Giving slim.ai monitor time to submit data...'; sleep 120" ] +{{- end }} + image: "{{ include "stackstate-k8s-agent.imageRegistry" . }}/{{ .Values.nodeAgent.containers.agent.image.repository }}:{{ .Values.nodeAgent.containers.agent.image.tag }}" + imagePullPolicy: "{{ .Values.nodeAgent.containers.agent.image.pullPolicy }}" + env: + {{ include "stackstate-k8s-agent.apiKeyEnv" . | nindent 4 }} + - name: STS_KUBERNETES_KUBELET_HOST + valueFrom: + fieldRef: + fieldPath: status.hostIP + - name: KUBERNETES_HOSTNAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: STS_HOSTNAME + value: "$(KUBERNETES_HOSTNAME)-{{ .Values.stackstate.cluster.name}}" + - name: AGENT_VERSION + value: {{ .Values.nodeAgent.containers.agent.image.tag | quote }} + - name: HOST_PROC + value: "/host/proc" + - name: HOST_SYS + value: "/host/sys" + - name: KUBERNETES + value: "true" + - name: STS_APM_ENABLED + value: {{ .Values.nodeAgent.apm.enabled | quote }} + - name: STS_APM_URL + value: {{ include "stackstate-k8s-agent.stackstate.url" . 
}} + - name: STS_CLUSTER_AGENT_ENABLED + value: {{ .Values.clusterAgent.enabled | quote }} + {{- if .Values.clusterAgent.enabled }} + - name: STS_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME + value: {{ .Release.Name }}-cluster-agent + {{ include "stackstate-k8s-agent.clusterAgentAuthTokenEnv" . | nindent 4 }} + {{- end }} + - name: STS_CLUSTER_NAME + value: {{ .Values.stackstate.cluster.name | quote }} + - name: STS_SKIP_VALIDATE_CLUSTERNAME + value: "true" + - name: STS_CHECKS_TAG_CARDINALITY + value: {{ .Values.nodeAgent.checksTagCardinality | quote }} + {{- if .Values.checksAgent.enabled }} + - name: STS_EXTRA_CONFIG_PROVIDERS + value: "endpointschecks" + {{- end }} + - name: STS_HEALTH_PORT + value: "5555" + - name: STS_LEADER_ELECTION + value: "false" + - name: LOG_LEVEL + value: {{ .Values.nodeAgent.containers.agent.logLevel | default .Values.nodeAgent.logLevel | quote }} + - name: STS_LOG_LEVEL + value: {{ .Values.nodeAgent.containers.agent.logLevel | default .Values.nodeAgent.logLevel | quote }} + - name: STS_NETWORK_TRACING_ENABLED + value: {{ .Values.nodeAgent.networkTracing.enabled | quote }} + - name: STS_PROTOCOL_INSPECTION_ENABLED + value: {{ .Values.nodeAgent.protocolInspection.enabled | quote }} + - name: STS_PROCESS_AGENT_ENABLED + value: {{ .Values.nodeAgent.containers.agent.processAgent.enabled | quote }} + - name: STS_CONTAINER_CHECK_INTERVAL + value: {{ .Values.processAgent.checkIntervals.container | quote }} + - name: STS_CONNECTION_CHECK_INTERVAL + value: {{ .Values.processAgent.checkIntervals.connections | quote }} + - name: STS_PROCESS_CHECK_INTERVAL + value: {{ .Values.processAgent.checkIntervals.process | quote }} + - name: STS_PROCESS_AGENT_URL + value: {{ include "stackstate-k8s-agent.stackstate.url" . 
}} + + - name: STS_SKIP_SSL_VALIDATION + value: {{ or .Values.global.skipSslValidation .Values.nodeAgent.skipSslValidation | quote }} + - name: STS_SKIP_KUBELET_TLS_VERIFY + value: {{ .Values.nodeAgent.skipKubeletTLSVerify | quote }} + - name: STS_STS_URL + value: {{ include "stackstate-k8s-agent.stackstate.url" . }} + {{- if .Values.nodeAgent.containerRuntime.customSocketPath }} + - name: STS_CRI_SOCKET_PATH + value: {{ .Values.nodeAgent.containerRuntime.customSocketPath }} + {{- end }} + {{- if .Values.global.proxy.url }} + - name: STS_PROXY_HTTPS + value: {{ .Values.global.proxy.url | quote }} + - name: STS_PROXY_HTTP + value: {{ .Values.global.proxy.url | quote }} + {{- end }} + {{- range $key, $value := .Values.global.extraEnv.open }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + {{- range $key, $value := .Values.global.extraEnv.secret }} + - name: {{ $key }} + valueFrom: + secretKeyRef: + name: {{ include "stackstate-k8s-agent.fullname" . }} + key: {{ $key }} + {{- end }} + {{- range $key, $value := .Values.nodeAgent.containers.agent.env }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + {{- if .Values.nodeAgent.containers.agent.livenessProbe.enabled }} + livenessProbe: + httpGet: + path: /health + port: healthport + failureThreshold: {{ .Values.nodeAgent.containers.agent.livenessProbe.failureThreshold }} + initialDelaySeconds: {{ .Values.nodeAgent.containers.agent.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.nodeAgent.containers.agent.livenessProbe.periodSeconds }} + successThreshold: {{ .Values.nodeAgent.containers.agent.livenessProbe.successThreshold }} + timeoutSeconds: {{ .Values.nodeAgent.containers.agent.livenessProbe.timeoutSeconds }} + {{- end }} + {{- if .Values.nodeAgent.containers.agent.readinessProbe.enabled }} + readinessProbe: + httpGet: + path: /health + port: healthport + failureThreshold: {{ .Values.nodeAgent.containers.agent.readinessProbe.failureThreshold }} + initialDelaySeconds: 
{{ .Values.nodeAgent.containers.agent.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.nodeAgent.containers.agent.readinessProbe.periodSeconds }} + successThreshold: {{ .Values.nodeAgent.containers.agent.readinessProbe.successThreshold }} + timeoutSeconds: {{ .Values.nodeAgent.containers.agent.readinessProbe.timeoutSeconds }} + {{- end }} + ports: + - containerPort: 8126 + name: traceport + protocol: TCP + - containerPort: 5555 + name: healthport + protocol: TCP + {{- with .Values.nodeAgent.containers.agent.resources }} + resources: + {{- toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + {{- if .Values.nodeAgent.containerRuntime.customSocketPath }} + - name: customcrisocket + mountPath: {{ .Values.nodeAgent.containerRuntime.customSocketPath }} + readOnly: true + {{- end }} + - name: crisocket + mountPath: /var/run/crio/crio.sock + readOnly: true + - name: containerdsocket + mountPath: /var/run/containerd/containerd.sock + readOnly: true + - name: kubelet + mountPath: /var/lib/kubelet + readOnly: true + - name: nfs + mountPath: /var/lib/nfs + readOnly: true + - name: dockersocket + mountPath: /var/run/docker.sock + readOnly: true + - name: dockernetns + mountPath: /run/docker/netns + readOnly: true + - name: dockeroverlay2 + mountPath: /var/lib/docker/overlay2 + readOnly: true + - name: procdir + mountPath: /host/proc + readOnly: true + - name: cgroups + mountPath: /host/sys/fs/cgroup + readOnly: true + {{- if .Values.nodeAgent.config.override }} + {{- range .Values.nodeAgent.config.override }} + - name: config-override-volume + mountPath: {{ .path }}/{{ .name }} + subPath: {{ .path | replace "/" "_"}}_{{ .name }} + readOnly: true + {{- end }} + {{- end }} +{{- if .Values.all.hardening.enabled}} + securityContext: + privileged: true + runAsUser: 0 # root + capabilities: + add: [ "ALL" ] + readOnlyRootFilesystem: false +{{- else }} + securityContext: + privileged: false +{{- end }} +{{- end -}} diff --git 
a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_container-process-agent.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_container-process-agent.yaml new file mode 100644 index 0000000000..893f11581a --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_container-process-agent.yaml @@ -0,0 +1,160 @@ +{{- define "container-process-agent" -}} +- name: process-agent +{{ if .Values.nodeAgent.containers.processAgent.image.registry }} + image: "{{ .Values.nodeAgent.containers.processAgent.image.registry }}/{{ .Values.nodeAgent.containers.processAgent.image.repository }}:{{ .Values.nodeAgent.containers.processAgent.image.tag }}" +{{ else }} + image: "{{ include "stackstate-k8s-agent.imageRegistry" . }}/{{ .Values.nodeAgent.containers.processAgent.image.repository }}:{{ .Values.nodeAgent.containers.processAgent.image.tag }}" +{{- end }} + imagePullPolicy: "{{ .Values.nodeAgent.containers.processAgent.image.pullPolicy }}" + ports: + - containerPort: 6063 + env: + {{ include "stackstate-k8s-agent.apiKeyEnv" . | nindent 4 }} + - name: STS_KUBERNETES_KUBELET_HOST + valueFrom: + fieldRef: + fieldPath: status.hostIP + - name: KUBERNETES_HOSTNAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: STS_HOSTNAME + value: "$(KUBERNETES_HOSTNAME)-{{ .Values.stackstate.cluster.name}}" + - name: AGENT_VERSION + value: {{ .Values.nodeAgent.containers.processAgent.image.tag | quote }} + - name: STS_LOG_TO_CONSOLE + value: "true" + - name: HOST_PROC + value: "/host/proc" + - name: HOST_SYS + value: "/host/sys" + - name: HOST_ETC + value: "/host/etc" + - name: KUBERNETES + value: "true" + - name: STS_CLUSTER_AGENT_ENABLED + value: {{ .Values.clusterAgent.enabled | quote }} + {{- if .Values.clusterAgent.enabled }} + - name: STS_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME + value: {{ .Release.Name }}-cluster-agent + {{ include "stackstate-k8s-agent.clusterAgentAuthTokenEnv" . 
| nindent 4 }} + {{- end }} + - name: STS_CLUSTER_NAME + value: {{ .Values.stackstate.cluster.name | quote }} + - name: STS_SKIP_VALIDATE_CLUSTERNAME + value: "true" + - name: LOG_LEVEL + value: {{ .Values.nodeAgent.containers.processAgent.logLevel | default .Values.nodeAgent.logLevel | quote }} + - name: STS_LOG_LEVEL + value: {{ .Values.nodeAgent.containers.processAgent.logLevel | default .Values.nodeAgent.logLevel | quote }} + - name: STS_NETWORK_TRACING_ENABLED + value: {{ .Values.nodeAgent.networkTracing.enabled | quote }} + - name: STS_PROTOCOL_INSPECTION_ENABLED + value: {{ .Values.nodeAgent.protocolInspection.enabled | quote }} + - name: STS_PROCESS_AGENT_ENABLED + value: {{ .Values.nodeAgent.containers.processAgent.enabled | quote }} + - name: STS_CONTAINER_CHECK_INTERVAL + value: {{ .Values.processAgent.checkIntervals.container | quote }} + - name: STS_CONNECTION_CHECK_INTERVAL + value: {{ .Values.processAgent.checkIntervals.connections | quote }} + - name: STS_PROCESS_CHECK_INTERVAL + value: {{ .Values.processAgent.checkIntervals.process | quote }} + - name: GOMEMLIMIT + value: {{ .Values.processAgent.softMemoryLimit.goMemLimit | quote }} + - name: STS_HTTP_STATS_BUFFER_SIZE + value: {{ .Values.processAgent.softMemoryLimit.httpStatsBufferSize | quote }} + - name: STS_HTTP_OBSERVATIONS_BUFFER_SIZE + value: {{ .Values.processAgent.softMemoryLimit.httpObservationsBufferSize | quote }} + - name: STS_PROCESS_AGENT_URL + value: {{ include "stackstate-k8s-agent.stackstate.url" . }} + - name: STS_SKIP_SSL_VALIDATION + value: {{ or .Values.global.skipSslValidation .Values.nodeAgent.skipSslValidation | quote }} + - name: STS_SKIP_KUBELET_TLS_VERIFY + value: {{ .Values.nodeAgent.skipKubeletTLSVerify | quote }} + - name: STS_STS_URL + value: {{ include "stackstate-k8s-agent.stackstate.url" . 
}} + - name: STS_HTTP_TRACING_ENABLED + value: {{ .Values.nodeAgent.httpTracing.enabled | quote }} + {{- if .Values.nodeAgent.containerRuntime.customSocketPath }} + - name: STS_CRI_SOCKET_PATH + value: {{ .Values.nodeAgent.containerRuntime.customSocketPath }} + {{- end }} + {{- if .Values.global.proxy.url }} + - name: STS_PROXY_HTTPS + value: {{ .Values.global.proxy.url | quote }} + - name: STS_PROXY_HTTP + value: {{ .Values.global.proxy.url | quote }} + {{- end }} + {{- range $key, $value := .Values.global.extraEnv.open }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + {{- range $key, $value := .Values.global.extraEnv.secret }} + - name: {{ $key }} + valueFrom: + secretKeyRef: + name: {{ include "stackstate-k8s-agent.fullname" . }} + key: {{ $key }} + {{- end }} + {{- range $key, $value := .Values.nodeAgent.containers.processAgent.env }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + {{- with .Values.nodeAgent.containers.processAgent.resources }} + resources: + {{- toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + {{- if .Values.nodeAgent.containerRuntime.customSocketPath }} + - name: customcrisocket + mountPath: {{ .Values.nodeAgent.containerRuntime.customSocketPath }} + readOnly: true + {{- end }} + - name: crisocket + mountPath: /var/run/crio/crio.sock + readOnly: true + - name: containerdsocket + mountPath: /var/run/containerd/containerd.sock + readOnly: true + - name: sys-kernel-debug + mountPath: /sys/kernel/debug + # Having sys-kernel-debug as read only breaks specific monitors from receiving metrics + # readOnly: true + - name: dockersocket + mountPath: /var/run/docker.sock + readOnly: true + # The agent needs access to /etc to figure out what os it is running on. + - name: etcdir + mountPath: /host/etc + readOnly: true + - name: procdir + mountPath: /host/proc + # We have an agent option STS_DISABLE_BPF_JIT_HARDEN that write to /proc. 
This is a debug setting, but if we want to use + # it, we have the option to make /proc writable. + readOnly: {{ .Values.nodeAgent.containers.processAgent.procVolumeReadOnly }} + - name: passwd + mountPath: /etc/passwd + readOnly: true + - name: cgroups + mountPath: /host/sys/fs/cgroup + readOnly: true + {{- if .Values.nodeAgent.config.override }} + {{- range .Values.nodeAgent.config.override }} + - name: config-override-volume + mountPath: {{ .path }}/{{ .name }} + subPath: {{ .path | replace "/" "_"}}_{{ .name }} + readOnly: true + {{- end }} + {{- end }} +{{- if .Values.all.hardening.enabled}} + securityContext: + privileged: true + runAsUser: 0 # root + capabilities: + add: [ "ALL" ] + readOnlyRootFilesystem: false +{{- else }} + securityContext: + privileged: true +{{- end }} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_helpers.tpl b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_helpers.tpl new file mode 100644 index 0000000000..3c51bc3087 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/_helpers.tpl @@ -0,0 +1,219 @@ +{{/* vim: set filetype=mustache: */}} +{{/* +Expand the name of the chart. +*/}} +{{- define "stackstate-k8s-agent.name" -}} +{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Create a default fully qualified app name. +We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec). +If release name contains chart name it will be used as a full name.
+*/}} +{{- define "stackstate-k8s-agent.fullname" -}} +{{- if .Values.fullnameOverride -}} +{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- $name := default .Chart.Name .Values.nameOverride -}} +{{- if contains $name .Release.Name -}} +{{- .Release.Name | trunc 63 | trimSuffix "-" -}} +{{- else -}} +{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}} +{{- end -}} +{{- end -}} +{{- end -}} + +{{/* +Create chart name and version as used by the chart label. +*/}} +{{- define "stackstate-k8s-agent.chart" -}} +{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}} +{{- end -}} + +{{/* +Common labels +*/}} +{{- define "stackstate-k8s-agent.labels" -}} +app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} +helm.sh/chart: {{ include "stackstate-k8s-agent.chart" . }} +app.kubernetes.io/instance: {{ .Release.Name }} +{{- if .Chart.AppVersion }} +app.kubernetes.io/version: {{ .Chart.AppVersion | quote }} +{{- end }} +app.kubernetes.io/managed-by: {{ .Release.Service }} +{{- end -}} + +{{/* +Cluster agent checksum annotations +*/}} +{{- define "stackstate-k8s-agent.checksum-configs" }} +checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }} +{{- end }} + +{{/* +StackState URL function +*/}} +{{- define "stackstate-k8s-agent.stackstate.url" -}} +{{ tpl .Values.stackstate.url . | quote }} +{{- end }} + +{{- define "stackstate-k8s-agent.configmap.override.checksum" -}} +{{- if .Values.clusterAgent.config.override }} +checksum/override-configmap: {{ include (print $.Template.BasePath "/cluster-agent-configmap.yaml") . | sha256sum }} +{{- end }} +{{- end }} + +{{- define "stackstate-k8s-agent.nodeAgent.configmap.override.checksum" -}} +{{- if .Values.nodeAgent.config.override }} +checksum/override-configmap: {{ include (print $.Template.BasePath "/node-agent-configmap.yaml") . 
| sha256sum }} +{{- end }} +{{- end }} + +{{- define "stackstate-k8s-agent.logsAgent.configmap.override.checksum" -}} +checksum/override-configmap: {{ include (print $.Template.BasePath "/logs-agent-configmap.yaml") . | sha256sum }} +{{- end }} + +{{- define "stackstate-k8s-agent.checksAgent.configmap.override.checksum" -}} +{{- if .Values.checksAgent.config.override }} +checksum/override-configmap: {{ include (print $.Template.BasePath "/checks-agent-configmap.yaml") . | sha256sum }} +{{- end }} +{{- end }} + + +{{/* +Return the image registry +*/}} +{{- define "stackstate-k8s-agent.imageRegistry" -}} + {{- if .Values.global }} + {{- .Values.global.imageRegistry | default .Values.all.image.registry -}} + {{- else -}} + {{- .Values.all.image.registry -}} + {{- end -}} +{{- end -}} + +{{/* +Renders a value that contains a template. +Usage: +{{ include "stackstate-k8s-agent.tplvalue.render" ( dict "value" .Values.path.to.the.Value "context" $) }} +*/}} +{{- define "stackstate-k8s-agent.tplvalue.render" -}} + {{- if typeIs "string" .value }} + {{- tpl .value .context }} + {{- else }} + {{- tpl (.value | toYaml) .context }} + {{- end }} +{{- end -}} + +{{- define "stackstate-k8s-agent.pull-secret.name" -}} +{{ include "stackstate-k8s-agent.fullname" . }}-pull-secret +{{- end -}} + +{{/* +Return the proper Docker Image Registry Secret Names evaluating values as templates +{{ include "stackstate-k8s-agent.image.pullSecrets" ( dict "images" (list .Values.path.to.the.image1, .Values.path.to.the.image2) "context" $) }} +*/}} +{{- define "stackstate-k8s-agent.image.pullSecrets" -}} + {{- $pullSecrets := list }} + {{- $context := .context }} + {{- if $context.Values.global }} + {{- range $context.Values.global.imagePullSecrets -}} + {{/* Is plain array of strings, compatible with all bitnami charts */}} + {{- $pullSecrets = append $pullSecrets (include "stackstate-k8s-agent.tplvalue.render" (dict "value" . 
"context" $context)) -}} + {{- end -}} + {{- end -}} + {{- range $context.Values.imagePullSecrets -}} + {{- $pullSecrets = append $pullSecrets (include "stackstate-k8s-agent.tplvalue.render" (dict "value" .name "context" $context)) -}} + {{- end -}} + {{- range .images -}} + {{- if .pullSecretName -}} + {{- $pullSecrets = append $pullSecrets (include "stackstate-k8s-agent.tplvalue.render" (dict "value" .pullSecretName "context" $context)) -}} + {{- end -}} + {{- end -}} + {{- $pullSecrets = append $pullSecrets (include "stackstate-k8s-agent.pull-secret.name" $context) -}} + {{- if (not (empty $pullSecrets)) -}} +imagePullSecrets: + {{- range $pullSecrets | uniq }} + - name: {{ . }} + {{- end }} + {{- end }} +{{- end -}} + +{{/* +Check whether the kubernetes-state-metrics configuration is overridden. If so, return 'true' else return nothing (which is false). +{{ include "stackstate-k8s-agent.kube-state-metrics.overridden" $ }} +*/}} +{{- define "stackstate-k8s-agent.kube-state-metrics.overridden" -}} +{{- if .Values.clusterAgent.config.override }} + {{- range $i, $val := .Values.clusterAgent.config.override }} + {{- if and (eq $val.name "conf.yaml") (eq $val.path "/etc/stackstate-agent/conf.d/kubernetes_state.d") }} +true + {{- end }} + {{- end }} +{{- end }} +{{- end -}} + +{{- define "stackstate-k8s-agent.nodeAgent.kube-state-metrics.overridden" -}} +{{- if .Values.nodeAgent.config.override }} + {{- range $i, $val := .Values.nodeAgent.config.override }} + {{- if and (eq $val.name "auto_conf.yaml") (eq $val.path "/etc/stackstate-agent/conf.d/kubernetes_state.d") }} +true + {{- end }} + {{- end }} +{{- end }} +{{- end -}} + +{{/* +Return the appropriate os label +*/}} +{{- define "label.os" -}} +{{- if semverCompare "^1.14-0" .Capabilities.KubeVersion.GitVersion -}} +kubernetes.io/os +{{- else -}} +beta.kubernetes.io/os +{{- end -}} +{{- end -}} + +{{/* +Returns a YAML with extra annotations +*/}} +{{- define "stackstate-k8s-agent.global.extraAnnotations" -}} +{{- 
with .Values.global.extraAnnotations }} +{{- toYaml . }} +{{- end }} +{{- end -}} + +{{/* +Returns a YAML with extra labels +*/}} +{{- define "stackstate-k8s-agent.global.extraLabels" -}} +{{- with .Values.global.extraLabels }} +{{- toYaml . }} +{{- end }} +{{- end -}} + +{{- define "stackstate-k8s-agent.apiKeyEnv" -}} +- name: STS_API_KEY + valueFrom: + secretKeyRef: +{{- if not .Values.stackstate.manageOwnSecrets }} + name: {{ include "stackstate-k8s-agent.fullname" . }} + key: sts-api-key +{{- else }} + name: {{ .Values.stackstate.customSecretName | quote }} + key: {{ .Values.stackstate.customApiKeySecretKey | quote }} +{{- end }} +{{- end -}} + +{{- define "stackstate-k8s-agent.clusterAgentAuthTokenEnv" -}} +- name: STS_CLUSTER_AGENT_AUTH_TOKEN + valueFrom: + secretKeyRef: +{{- if not .Values.stackstate.manageOwnSecrets }} + name: {{ include "stackstate-k8s-agent.fullname" . }} + key: sts-cluster-auth-token +{{- else }} + name: {{ .Values.stackstate.customSecretName | quote }} + key: {{ .Values.stackstate.customClusterAuthTokenSecretKey | quote }} +{{- end }} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-clusterrolebinding.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-clusterrolebinding.yaml new file mode 100644 index 0000000000..4fd0eadbcd --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-clusterrolebinding.yaml @@ -0,0 +1,21 @@ +{{- if .Values.checksAgent.enabled }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ .Release.Name }}-checks-agent + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: checks-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . 
| indent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ .Release.Name }}-node-agent +subjects: +- apiGroup: "" + kind: ServiceAccount + name: {{ .Release.Name }}-checks-agent + namespace: {{ .Release.Namespace }} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-configmap.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-configmap.yaml new file mode 100644 index 0000000000..54a1abf2fe --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-configmap.yaml @@ -0,0 +1,17 @@ +{{- if and .Values.checksAgent.enabled .Values.checksAgent.config.override }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ .Release.Name }}-checks-agent + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: checks-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +data: +{{- range .Values.checksAgent.config.override }} + {{ .path | replace "/" "_"}}_{{ .name }}: | +{{ .data | indent 4 -}} +{{- end -}} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-deployment.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-deployment.yaml new file mode 100644 index 0000000000..37a0b1a1db --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-deployment.yaml @@ -0,0 +1,185 @@ +{{- if .Values.checksAgent.enabled }} +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ .Release.Name }}-checks-agent + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: checks-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . 
| indent 4 }} +spec: + selector: + matchLabels: + app.kubernetes.io/component: checks-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} + replicas: {{ .Values.checksAgent.replicas }} +{{- with .Values.checksAgent.strategy }} + strategy: + {{- toYaml . | nindent 4 }} +{{- end }} + template: + metadata: + annotations: + {{- include "stackstate-k8s-agent.checksum-configs" . | nindent 8 }} + {{- include "stackstate-k8s-agent.checksAgent.configmap.override.checksum" . | nindent 8 }} +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 8 }} + labels: + app.kubernetes.io/component: checks-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 8 }} + spec: + {{- include "stackstate-k8s-agent.image.pullSecrets" (dict "images" (list .Values.checksAgent.image .Values.all.image) "context" $) | nindent 6 }} + {{- if .Values.all.hardening.enabled}} + terminationGracePeriodSeconds: 240 + {{- end }} + containers: + - name: {{ .Chart.Name }} + image: "{{ include "stackstate-k8s-agent.imageRegistry" . }}/{{ .Values.checksAgent.image.repository }}:{{ .Values.checksAgent.image.tag }}" + imagePullPolicy: "{{ .Values.checksAgent.image.pullPolicy }}" + {{- if .Values.all.hardening.enabled}} + lifecycle: + preStop: + exec: + command: [ "/bin/sh", "-c", "echo 'Giving slim.ai monitor time to submit data...'; sleep 120" ] + {{- end }} + env: + {{ include "stackstate-k8s-agent.apiKeyEnv" . 
| nindent 10 }} + - name: KUBERNETES_HOSTNAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: STS_HOSTNAME + value: "$(KUBERNETES_HOSTNAME)-{{ .Values.stackstate.cluster.name}}" + - name: AGENT_VERSION + value: {{ .Values.checksAgent.image.tag | quote }} + - name: LOG_LEVEL + value: {{ .Values.checksAgent.logLevel | quote }} + - name: STS_APM_ENABLED + value: "false" + - name: STS_CLUSTER_AGENT_ENABLED + value: {{ .Values.clusterAgent.enabled | quote }} + {{- if .Values.clusterAgent.enabled }} + - name: STS_CLUSTER_AGENT_KUBERNETES_SERVICE_NAME + value: {{ .Release.Name }}-cluster-agent + {{ include "stackstate-k8s-agent.clusterAgentAuthTokenEnv" . | nindent 10 }} + {{- end }} + - name: STS_CLUSTER_NAME + value: {{ .Values.stackstate.cluster.name | quote }} + - name: STS_SKIP_VALIDATE_CLUSTERNAME + value: "true" + - name: STS_CHECKS_TAG_CARDINALITY + value: {{ .Values.checksAgent.checksTagCardinality | quote }} + - name: STS_EXTRA_CONFIG_PROVIDERS + value: "clusterchecks" + - name: STS_HEALTH_PORT + value: "5555" + - name: STS_LEADER_ELECTION + value: "false" + - name: STS_LOG_LEVEL + value: {{ .Values.checksAgent.logLevel | quote }} + - name: STS_NETWORK_TRACING_ENABLED + value: "false" + - name: STS_PROCESS_AGENT_ENABLED + value: "false" + - name: STS_SKIP_SSL_VALIDATION + value: {{ or .Values.global.skipSslValidation .Values.checksAgent.skipSslValidation | quote }} + - name: STS_STS_URL + value: {{ include "stackstate-k8s-agent.stackstate.url" . }} + {{- if .Values.global.proxy.url }} + - name: STS_PROXY_HTTPS + value: {{ .Values.global.proxy.url | quote }} + - name: STS_PROXY_HTTP + value: {{ .Values.global.proxy.url | quote }} + {{- end }} + {{- range $key, $value := .Values.global.extraEnv.open }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + {{- range $key, $value := .Values.global.extraEnv.secret }} + - name: {{ $key }} + valueFrom: + secretKeyRef: + name: {{ include "stackstate-k8s-agent.fullname" . 
}} + key: {{ $key }} + {{- end }} + livenessProbe: + httpGet: + path: /health + port: healthport + failureThreshold: {{ .Values.checksAgent.livenessProbe.failureThreshold }} + initialDelaySeconds: {{ .Values.checksAgent.livenessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.checksAgent.livenessProbe.periodSeconds }} + successThreshold: {{ .Values.checksAgent.livenessProbe.successThreshold }} + timeoutSeconds: {{ .Values.checksAgent.livenessProbe.timeoutSeconds }} + readinessProbe: + httpGet: + path: /health + port: healthport + failureThreshold: {{ .Values.checksAgent.readinessProbe.failureThreshold }} + initialDelaySeconds: {{ .Values.checksAgent.readinessProbe.initialDelaySeconds }} + periodSeconds: {{ .Values.checksAgent.readinessProbe.periodSeconds }} + successThreshold: {{ .Values.checksAgent.readinessProbe.successThreshold }} + timeoutSeconds: {{ .Values.checksAgent.readinessProbe.timeoutSeconds }} + ports: + - containerPort: 5555 + name: healthport + protocol: TCP + {{- if .Values.all.hardening.enabled}} + securityContext: + privileged: true + runAsUser: 0 # root + capabilities: + add: [ "ALL" ] + readOnlyRootFilesystem: false + {{- else }} + securityContext: + privileged: false + {{- end }} + {{- with .Values.checksAgent.resources }} + resources: + {{- toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: confd-empty-volume + mountPath: /etc/stackstate-agent/conf.d +# readOnly is set to false because the agent needs write access to /etc/stackstate-agent/conf.d in order to enable checks at runtime. 
+ readOnly: false + {{- if .Values.checksAgent.config.override }} + {{- range .Values.checksAgent.config.override }} + - name: config-override-volume + mountPath: {{ .path }}/{{ .name }} + subPath: {{ .path | replace "/" "_"}}_{{ .name }} + readOnly: true + {{- end }} + {{- end }} + {{- if .Values.checksAgent.priorityClassName }} + priorityClassName: {{ .Values.checksAgent.priorityClassName }} + {{- end }} + serviceAccountName: {{ .Release.Name }}-checks-agent + nodeSelector: + {{ template "label.os" . }}: {{ .Values.targetSystem }} + {{- with .Values.checksAgent.nodeSelector }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.checksAgent.affinity }} + affinity: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.checksAgent.tolerations }} + tolerations: + {{- toYaml . | nindent 8 }} + {{- end }} + volumes: + - name: confd-empty-volume + emptyDir: {} + {{- if .Values.checksAgent.config.override }} + - name: config-override-volume + configMap: + name: {{ .Release.Name }}-checks-agent + {{- end }} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-poddisruptionbudget.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-poddisruptionbudget.yaml new file mode 100644 index 0000000000..19d3924ea3 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-poddisruptionbudget.yaml @@ -0,0 +1,23 @@ +{{- if .Values.checksAgent.enabled }} +{{- if .Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" }} +apiVersion: policy/v1 +{{- else }} +apiVersion: policy/v1beta1 +{{- end }} +kind: PodDisruptionBudget +metadata: + name: {{ .Release.Name }}-checks-agent + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: checks-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . 
| indent 4 }} +spec: + maxUnavailable: 1 + selector: + matchLabels: + app.kubernetes.io/component: checks-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-serviceaccount.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-serviceaccount.yaml new file mode 100644 index 0000000000..a90a43589d --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/checks-agent-serviceaccount.yaml @@ -0,0 +1,16 @@ +{{- if .Values.checksAgent.enabled }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ .Release.Name }}-checks-agent + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: checks-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +{{- with .Values.checksAgent.serviceaccount.annotations }} + {{- toYaml . | nindent 4 }} +{{- end }} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-clusterrole.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-clusterrole.yaml new file mode 100644 index 0000000000..021a43ebdf --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-clusterrole.yaml @@ -0,0 +1,152 @@ +{{- $kubeRes := .Values.clusterAgent.collection.kubernetesResources }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ include "stackstate-k8s-agent.fullname" . }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: cluster-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . 
| indent 4 }} +rules: +- apiGroups: + - "" + resources: + - events + - nodes + - pods + - services + {{- if $kubeRes.namespaces }} + - namespaces + {{- end }} + {{- if .Values.clusterAgent.collection.kubernetesMetrics }} + - componentstatuses + {{- end }} + {{- if $kubeRes.configmaps }} + - configmaps + {{- end }} + {{- if $kubeRes.endpoints }} + - endpoints + {{- end }} + {{- if $kubeRes.persistentvolumeclaims }} + - persistentvolumeclaims + {{- end }} + {{- if $kubeRes.persistentvolumes }} + - persistentvolumes + {{- end }} + {{- if $kubeRes.secrets }} + - secrets + {{- end }} + {{- if $kubeRes.resourcequotas }} + - resourcequotas + {{- end }} + verbs: + - get + - list + - watch +{{- if or $kubeRes.daemonsets $kubeRes.deployments $kubeRes.replicasets $kubeRes.statefulsets }} +- apiGroups: + - "apps" + resources: + {{- if $kubeRes.daemonsets }} + - daemonsets + {{- end }} + {{- if $kubeRes.deployments }} + - deployments + {{- end }} + {{- if $kubeRes.replicasets }} + - replicasets + {{- end }} + {{- if $kubeRes.statefulsets }} + - statefulsets + {{- end }} + verbs: + - get + - list + - watch +{{- end}} +{{- if $kubeRes.ingresses }} +- apiGroups: + - "extensions" + - "networking.k8s.io" + resources: + - ingresses + verbs: + - get + - list + - watch +{{- end}} +{{- if or $kubeRes.cronjobs $kubeRes.jobs }} +- apiGroups: + - "batch" + resources: + {{- if $kubeRes.cronjobs }} + - cronjobs + {{- end }} + {{- if $kubeRes.jobs }} + - jobs + {{- end }} + verbs: + - get + - list + - watch +{{- end}} +- nonResourceURLs: + - "/healthz" + - "/version" + verbs: + - get +- apiGroups: + - "storage.k8s.io" + resources: + {{- if $kubeRes.volumeattachments }} + - volumeattachments + {{- end }} + {{- if $kubeRes.storageclasses }} + - storageclasses + {{- end }} + verbs: + - get + - list + - watch +- apiGroups: + - "policy" + resources: + {{- if $kubeRes.poddisruptionbudgets }} + - poddisruptionbudgets + {{- end }} + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: 
+ {{- if $kubeRes.replicationcontrollers }} + - replicationcontrollers + {{- end }} + verbs: + - get + - list + - watch +- apiGroups: + - "autoscaling" + resources: + {{- if $kubeRes.horizontalpodautoscalers }} + - horizontalpodautoscalers + {{- end }} + verbs: + - get + - list + - watch +- apiGroups: + - "" + resources: + {{- if $kubeRes.limitranges }} + - limitranges + {{- end }} + verbs: + - get + - list + - watch diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-clusterrolebinding.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-clusterrolebinding.yaml new file mode 100644 index 0000000000..207613dd9e --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-clusterrolebinding.yaml @@ -0,0 +1,19 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ include "stackstate-k8s-agent.fullname" . }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: cluster-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ include "stackstate-k8s-agent.fullname" . }} +subjects: +- apiGroup: "" + kind: ServiceAccount + name: {{ include "stackstate-k8s-agent.fullname" . }} + namespace: {{ .Release.Namespace }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-configmap.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-configmap.yaml new file mode 100644 index 0000000000..37d10217f8 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-configmap.yaml @@ -0,0 +1,31 @@ +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ .Release.Name }}-cluster-agent + labels: +{{ include "stackstate-k8s-agent.labels" . 
| indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: cluster-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +data: + kubernetes_api_events_conf: | + init_config: + instances: + - collect_events: {{ .Values.clusterAgent.collection.kubernetesEvents }} + event_categories:{{ .Values.clusterAgent.config.events.categories | toYaml | nindent 10 }} + kubernetes_api_topology_conf: | + init_config: + instances: + - collection_interval: {{ .Values.clusterAgent.config.topology.collectionInterval }} + resources:{{ .Values.clusterAgent.collection.kubernetesResources | toYaml | nindent 10 }} + {{- if .Values.clusterAgent.collection.kubeStateMetrics.enabled }} + kube_state_metrics_core_conf: | + {{- include "cluster-agent-kube-state-metrics" . | nindent 6 }} + {{- end }} +{{- if .Values.clusterAgent.config.override }} +{{- range .Values.clusterAgent.config.override }} + {{ .path | replace "/" "_"}}_{{ .name }}: | +{{ .data | indent 4 -}} +{{- end -}} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-deployment.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-deployment.yaml new file mode 100644 index 0000000000..51025a6703 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-deployment.yaml @@ -0,0 +1,169 @@ +apiVersion: apps/v1 +kind: Deployment +metadata: + name: {{ .Release.Name }}-cluster-agent + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: cluster-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . 
| indent 4 }} +spec: + replicas: {{ .Values.clusterAgent.replicaCount }} + selector: + matchLabels: + app.kubernetes.io/component: cluster-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} +{{- with .Values.clusterAgent.strategy }} + strategy: + {{- toYaml . | nindent 4 }} +{{- end }} + template: + metadata: + annotations: + {{- include "stackstate-k8s-agent.checksum-configs" . | nindent 8 }} + {{- include "stackstate-k8s-agent.configmap.override.checksum" . | nindent 8 }} +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 8 }} + labels: + app.kubernetes.io/component: cluster-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 8 }} + spec: + {{- include "stackstate-k8s-agent.image.pullSecrets" (dict "images" (list .Values.clusterAgent.image .Values.all.image) "context" $) | nindent 6 }} + {{- if .Values.clusterAgent.priorityClassName }} + priorityClassName: {{ .Values.clusterAgent.priorityClassName }} + {{- end }} + serviceAccountName: {{ include "stackstate-k8s-agent.fullname" . }} + {{- if .Values.all.hardening.enabled}} + terminationGracePeriodSeconds: 240 + {{- end }} + containers: + - name: cluster-agent + image: "{{ include "stackstate-k8s-agent.imageRegistry" . }}/{{ .Values.clusterAgent.image.repository }}:{{ .Values.clusterAgent.image.tag }}" + imagePullPolicy: "{{ .Values.clusterAgent.image.pullPolicy }}" + {{- if .Values.all.hardening.enabled}} + lifecycle: + preStop: + exec: + command: [ "/bin/sh", "-c", "echo 'Giving slim.ai monitor time to submit data...'; sleep 120" ] + {{- end }} + env: + {{ include "stackstate-k8s-agent.apiKeyEnv" . | nindent 10 }} + {{ include "stackstate-k8s-agent.clusterAgentAuthTokenEnv" . 
| nindent 10 }} + - name: KUBERNETES_HOSTNAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + - name: STS_HOSTNAME + value: "$(KUBERNETES_HOSTNAME)-{{ .Values.stackstate.cluster.name}}" + - name: LOG_LEVEL + value: {{ .Values.clusterAgent.logLevel | quote }} + {{- if .Values.checksAgent.enabled }} + - name: STS_CLUSTER_CHECKS_ENABLED + value: "true" + - name: STS_EXTRA_CONFIG_PROVIDERS + value: "kube_endpoints kube_services" + - name: STS_EXTRA_LISTENERS + value: "kube_endpoints kube_services" + {{- end }} + - name: STS_CLUSTER_NAME + value: {{.Values.stackstate.cluster.name | quote }} + - name: STS_SKIP_VALIDATE_CLUSTERNAME + value: "true" + - name: STS_SKIP_SSL_VALIDATION + value: {{ or .Values.global.skipSslValidation .Values.clusterAgent.skipSslValidation | quote }} + - name: STS_COLLECT_KUBERNETES_METRICS + value: {{ .Values.clusterAgent.collection.kubernetesMetrics | quote }} + - name: STS_COLLECT_KUBERNETES_TIMEOUT + value: {{ .Values.clusterAgent.collection.kubernetesTimeout | quote }} + - name: STS_COLLECT_KUBERNETES_TOPOLOGY + value: {{ .Values.clusterAgent.collection.kubernetesTopology | quote }} + - name: STS_LEADER_ELECTION + value: "true" + - name: STS_LOG_LEVEL + value: {{ .Values.clusterAgent.logLevel | quote }} + - name: STS_CLUSTER_AGENT_CMD_PORT + value: {{ .Values.clusterAgent.service.targetPort | quote }} + - name: STS_STS_URL + value: {{ include "stackstate-k8s-agent.stackstate.url" . 
}} + {{- if .Values.clusterAgent.config.configMap.maxDataSize }} + - name: STS_CONFIGMAP_MAX_DATASIZE + value: {{ .Values.clusterAgent.config.configMap.maxDataSize | quote }} + {{- end}} + {{- if .Values.global.proxy.url }} + - name: STS_PROXY_HTTPS + value: {{ .Values.global.proxy.url | quote }} + - name: STS_PROXY_HTTP + value: {{ .Values.global.proxy.url | quote }} + {{- end }} + {{- range $key, $value := .Values.global.extraEnv.open }} + - name: {{ $key }} + value: {{ $value | quote }} + {{- end }} + {{- range $key, $value := .Values.global.extraEnv.secret }} + - name: {{ $key }} + valueFrom: + secretKeyRef: + name: {{ include "stackstate-k8s-agent.fullname" . }} + key: {{ $key }} + {{- end }} + {{- if .Values.all.hardening.enabled}} + securityContext: + privileged: true + runAsUser: 0 # root + capabilities: + add: [ "ALL" ] + readOnlyRootFilesystem: false + {{- else }} + securityContext: + privileged: false + {{- end }} + {{- with .Values.clusterAgent.resources }} + resources: + {{- toYaml . | nindent 12 }} + {{- end }} + volumeMounts: + - name: logs + mountPath: /var/log/stackstate-agent + - name: config-override-volume + mountPath: /etc/stackstate-agent/conf.d/kubernetes_api_events.d/conf.yaml + subPath: kubernetes_api_events_conf + - name: config-override-volume + mountPath: /etc/stackstate-agent/conf.d/kubernetes_api_topology.d/conf.yaml + subPath: kubernetes_api_topology_conf + readOnly: true + {{- if .Values.clusterAgent.collection.kubeStateMetrics.enabled }} + - name: config-override-volume + mountPath: /etc/stackstate-agent/conf.d/kubernetes_state_core.d/conf.yaml + subPath: kube_state_metrics_core_conf + readOnly: true + {{- end }} + {{- if .Values.clusterAgent.config.override }} + {{- range .Values.clusterAgent.config.override }} + - name: config-override-volume + mountPath: {{ .path }}/{{ .name }} + subPath: {{ .path | replace "/" "_"}}_{{ .name }} + readOnly: true + {{- end }} + {{- end }} + nodeSelector: + {{ template "label.os" . 
}}: {{ .Values.targetSystem }} + {{- with .Values.clusterAgent.nodeSelector }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.clusterAgent.affinity }} + affinity: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.clusterAgent.tolerations }} + tolerations: + {{- toYaml . | nindent 8 }} + {{- end }} + volumes: + - name: logs + emptyDir: {} + - name: config-override-volume + configMap: + name: {{ .Release.Name }}-cluster-agent diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-poddisruptionbudget.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-poddisruptionbudget.yaml new file mode 100644 index 0000000000..64a265b7db --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-poddisruptionbudget.yaml @@ -0,0 +1,21 @@ +{{- if .Capabilities.APIVersions.Has "policy/v1/PodDisruptionBudget" }} +apiVersion: policy/v1 +{{- else }} +apiVersion: policy/v1beta1 +{{- end }} +kind: PodDisruptionBudget +metadata: + name: {{ include "stackstate-k8s-agent.fullname" . }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: cluster-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +spec: + maxUnavailable: 1 + selector: + matchLabels: + app.kubernetes.io/component: cluster-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . 
}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-role.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-role.yaml new file mode 100644 index 0000000000..eabc5bde36 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-role.yaml @@ -0,0 +1,21 @@ +{{- $kubeRes := .Values.clusterAgent.collection.kubernetesResources }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: Role +metadata: + name: {{ include "stackstate-k8s-agent.fullname" . }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: cluster-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +rules: +- apiGroups: + - "" + resources: + - configmaps + verbs: + - create + - get + - patch + - update diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-rolebinding.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-rolebinding.yaml new file mode 100644 index 0000000000..adabad45e0 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-rolebinding.yaml @@ -0,0 +1,18 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: RoleBinding +metadata: + name: {{ include "stackstate-k8s-agent.fullname" . }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: cluster-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: Role + name: {{ include "stackstate-k8s-agent.fullname" . }} +subjects: +- apiGroup: "" + kind: ServiceAccount + name: {{ include "stackstate-k8s-agent.fullname" . 
}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-service.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-service.yaml new file mode 100644 index 0000000000..8b687e8f76 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-service.yaml @@ -0,0 +1,21 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ .Release.Name }}-cluster-agent + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: cluster-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +spec: + ports: + - name: clusteragent + port: {{int .Values.clusterAgent.service.port }} + protocol: TCP + targetPort: {{int .Values.clusterAgent.service.targetPort }} + selector: + app.kubernetes.io/component: cluster-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-serviceaccount.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-serviceaccount.yaml new file mode 100644 index 0000000000..6cbc89699e --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/cluster-agent-serviceaccount.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ include "stackstate-k8s-agent.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: cluster-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +{{- with .Values.clusterAgent.serviceaccount.annotations }} + {{- toYaml . 
| nindent 4 }} +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-clusterrole.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-clusterrole.yaml new file mode 100644 index 0000000000..da6cd59dd5 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-clusterrole.yaml @@ -0,0 +1,23 @@ +{{- if .Values.logsAgent.enabled }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ .Release.Name }}-logs-agent + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: logs-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +rules: +- apiGroups: # Kubelet connectivity + - "" + resources: + - nodes + - services + - pods + verbs: + - get + - watch + - list +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-clusterrolebinding.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-clusterrolebinding.yaml new file mode 100644 index 0000000000..1f6e7cfcff --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-clusterrolebinding.yaml @@ -0,0 +1,21 @@ +{{- if .Values.logsAgent.enabled }} +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ .Release.Name }}-logs-agent + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: logs-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . 
| indent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ .Release.Name }}-logs-agent +subjects: +- apiGroup: "" + kind: ServiceAccount + name: {{ .Release.Name }}-logs-agent + namespace: {{ .Release.Namespace }} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-configmap.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-configmap.yaml new file mode 100644 index 0000000000..ff9440a4b7 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-configmap.yaml @@ -0,0 +1,63 @@ +{{- if .Values.logsAgent.enabled }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ .Release.Name }}-logs-agent + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: logs-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +data: + promtail.yaml: | + server: + http_listen_port: 9080 + grpc_listen_port: 0 + + clients: + - url: {{ tpl .Values.stackstate.url . 
}}/logs/k8s?api_key=${STS_API_KEY} + external_labels: + sts_cluster_name: {{ .Values.stackstate.cluster.name | quote }} + {{- if .Values.global.proxy.url }} + proxy_url: {{ .Values.global.proxy.url | quote }} + {{- end }} + tls_config: + insecure_skip_verify: {{ or .Values.global.skipSslValidation .Values.logsAgent.skipSslValidation }} + + + positions: + filename: /tmp/positions.yaml + target_config: + sync_period: 10s + scrape_configs: + - job_name: pod-logs + kubernetes_sd_configs: + - role: pod + pipeline_stages: + - docker: {} + - cri: {} + relabel_configs: + - action: replace + source_labels: + - __meta_kubernetes_pod_name + target_label: pod_name + - action: replace + source_labels: + - __meta_kubernetes_pod_uid + target_label: pod_uid + - action: replace + source_labels: + - __meta_kubernetes_pod_container_name + target_label: container_name + # The __path__ is required by the promtail client + - replacement: /var/log/pods/*$1/*.log + separator: / + source_labels: + - __meta_kubernetes_pod_uid + - __meta_kubernetes_pod_container_name + target_label: __path__ + # Drop all remaining labels, we do not need those + - action: drop + regex: __meta_(.*) +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-daemonset.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-daemonset.yaml new file mode 100644 index 0000000000..23cfce31f0 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-daemonset.yaml @@ -0,0 +1,91 @@ +{{- if .Values.logsAgent.enabled }} +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: {{ .Release.Name }}-logs-agent + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: logs-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . 
| indent 4 }} +spec: + selector: + matchLabels: + app.kubernetes.io/component: logs-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} +{{- with .Values.logsAgent.updateStrategy }} + updateStrategy: + {{- toYaml . | nindent 4 }} +{{- end }} + template: + metadata: + annotations: + {{- include "stackstate-k8s-agent.checksum-configs" . | nindent 8 }} + {{- include "stackstate-k8s-agent.logsAgent.configmap.override.checksum" . | nindent 8 }} +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 8 }} + labels: + app.kubernetes.io/component: logs-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 8 }} + spec: + {{- include "stackstate-k8s-agent.image.pullSecrets" (dict "images" (list .Values.logsAgent.image .Values.all.image) "context" $) | nindent 6 }} + containers: + - name: logs-agent + image: "{{ include "stackstate-k8s-agent.imageRegistry" . }}/{{ .Values.logsAgent.image.repository }}:{{ .Values.logsAgent.image.tag }}" + args: + - -config.expand-env=true + - -config.file=/etc/promtail/promtail.yaml + imagePullPolicy: "{{ .Values.logsAgent.image.pullPolicy }}" + env: + {{ include "stackstate-k8s-agent.apiKeyEnv" . | nindent 10 }} + - name: "HOSTNAME" # needed when using kubernetes_sd_configs + valueFrom: + fieldRef: + fieldPath: "spec.nodeName" + securityContext: + privileged: false + {{- with .Values.logsAgent.resources }} + resources: + {{- toYaml . 
| nindent 12 }} + {{- end }} + volumeMounts: + - name: logs + mountPath: /var/log + readOnly: true + - name: logs-agent-config + mountPath: /etc/promtail + readOnly: true + - name: varlibdockercontainers + mountPath: /var/lib/docker/containers + readOnly: true + {{- if .Values.logsAgent.priorityClassName }} + priorityClassName: {{ .Values.logsAgent.priorityClassName }} + {{- end }} + serviceAccountName: {{ .Release.Name }}-logs-agent + {{- with .Values.logsAgent.nodeSelector }} + nodeSelector: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.logsAgent.affinity }} + affinity: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.logsAgent.tolerations }} + tolerations: + {{- toYaml . | nindent 8 }} + {{- end }} + volumes: + - name: logs + hostPath: + path: /var/log + - name: varlibdockercontainers + hostPath: + path: /var/lib/docker/containers + - name: logs-agent-config + configMap: + name: {{ .Release.Name }}-logs-agent +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-serviceaccount.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-serviceaccount.yaml new file mode 100644 index 0000000000..91cfdb137d --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/logs-agent-serviceaccount.yaml @@ -0,0 +1,16 @@ +{{- if .Values.logsAgent.enabled }} +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ .Release.Name }}-logs-agent + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: logs-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +{{- with .Values.logsAgent.serviceaccount.annotations }} + {{- toYaml . 
| nindent 4 }} +{{- end }} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-clusterrole.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-clusterrole.yaml new file mode 100644 index 0000000000..1ded16cc29 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-clusterrole.yaml @@ -0,0 +1,21 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: {{ .Release.Name }}-node-agent + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: node-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +rules: +- apiGroups: # Kubelet connectivity + - "" + resources: + - nodes/metrics + - nodes/proxy + - nodes/spec + - endpoints + verbs: + - get + - list diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-clusterrolebinding.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-clusterrolebinding.yaml new file mode 100644 index 0000000000..b3f033ebbf --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-clusterrolebinding.yaml @@ -0,0 +1,19 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: {{ .Release.Name }}-node-agent + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: node-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . 
| indent 4 }} +roleRef: + apiGroup: rbac.authorization.k8s.io + kind: ClusterRole + name: {{ .Release.Name }}-node-agent +subjects: +- apiGroup: "" + kind: ServiceAccount + name: {{ .Release.Name }}-node-agent + namespace: {{ .Release.Namespace }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-configmap.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-configmap.yaml new file mode 100644 index 0000000000..8f6b2ed3ac --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-configmap.yaml @@ -0,0 +1,17 @@ +{{- if .Values.nodeAgent.config.override }} +apiVersion: v1 +kind: ConfigMap +metadata: + name: {{ .Release.Name }}-node-agent + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: node-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +data: +{{- range .Values.nodeAgent.config.override }} + {{ .path | replace "/" "_"}}_{{ .name }}: | +{{ .data | indent 4 -}} +{{- end -}} +{{- end -}} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-daemonset.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-daemonset.yaml new file mode 100644 index 0000000000..4c8dbdd5a9 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-daemonset.yaml @@ -0,0 +1,110 @@ +--- +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: {{ .Release.Name }}-node-agent + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: node-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . 
| indent 4 }} +spec: + selector: + matchLabels: + app.kubernetes.io/component: node-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} +{{- with .Values.nodeAgent.updateStrategy }} + updateStrategy: + {{- toYaml . | nindent 4 }} +{{- end }} + template: + metadata: + annotations: + {{- include "stackstate-k8s-agent.checksum-configs" . | nindent 8 }} + {{- include "stackstate-k8s-agent.nodeAgent.configmap.override.checksum" . | nindent 8 }} +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 8 }} + labels: + app.kubernetes.io/component: node-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 8 }} + spec: + {{- include "stackstate-k8s-agent.image.pullSecrets" (dict "images" (list .Values.nodeAgent.containers.agent.image .Values.all.image) "context" $) | nindent 6 }} + {{- if .Values.all.hardening.enabled}} + terminationGracePeriodSeconds: 240 + {{- end }} + containers: + {{- include "container-agent" . | nindent 6 }} + {{- if .Values.nodeAgent.containers.processAgent.enabled }} + {{- include "container-process-agent" . | nindent 6 }} + {{- end }} + dnsPolicy: ClusterFirstWithHostNet + hostNetwork: true + hostPID: true + {{- if .Values.nodeAgent.priorityClassName }} + priorityClassName: {{ .Values.nodeAgent.priorityClassName }} + {{- end }} + serviceAccountName: {{ .Release.Name }}-node-agent + nodeSelector: + {{ template "label.os" . }}: {{ .Values.targetSystem }} + {{- with .Values.nodeAgent.nodeSelector }} + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.nodeAgent.affinity }} + affinity: + {{- toYaml . | nindent 8 }} + {{- end }} + {{- with .Values.nodeAgent.tolerations }} + tolerations: + {{- toYaml . 
| nindent 8 }} + {{- end }} + volumes: + {{- if .Values.nodeAgent.containerRuntime.customSocketPath }} + - hostPath: + path: {{ .Values.nodeAgent.containerRuntime.customSocketPath }} + name: customcrisocket + {{- end }} + - hostPath: + path: /var/lib/kubelet + name: kubelet + - hostPath: + path: /var/lib/nfs + name: nfs + - hostPath: + path: /var/lib/docker/overlay2 + name: dockeroverlay2 + - hostPath: + path: /run/docker/netns + name: dockernetns + - hostPath: + path: /var/run/crio/crio.sock + name: crisocket + - hostPath: + path: /var/run/containerd/containerd.sock + name: containerdsocket + - hostPath: + path: /sys/kernel/debug + name: sys-kernel-debug + - hostPath: + path: /var/run/docker.sock + name: dockersocket + - hostPath: + path: {{ .Values.nodeAgent.containerRuntime.hostProc }} + name: procdir + - hostPath: + path: /etc + name: etcdir + - hostPath: + path: /etc/passwd + name: passwd + - hostPath: + path: /sys/fs/cgroup + name: cgroups + {{- if .Values.nodeAgent.config.override }} + - name: config-override-volume + configMap: + name: {{ .Release.Name }}-node-agent + {{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-podautoscaler.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-podautoscaler.yaml new file mode 100644 index 0000000000..38298d4147 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-podautoscaler.yaml @@ -0,0 +1,39 @@ +--- +{{- if .Values.nodeAgent.autoScalingEnabled }} +apiVersion: "autoscaling.k8s.io/v1" +kind: VerticalPodAutoscaler +metadata: + name: {{ .Release.Name }}-node-agent-vpa + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . 
| indent 4 }} +spec: + targetRef: + apiVersion: "apps/v1" + kind: DaemonSet + name: {{ .Release.Name }}-node-agent + resourcePolicy: + containerPolicies: + - containerName: 'node-agent' + minAllowed: + cpu: {{ .Values.nodeAgent.scaling.autoscalerLimits.agent.minimum.cpu }} + memory: {{ .Values.nodeAgent.scaling.autoscalerLimits.agent.minimum.memory }} + maxAllowed: + cpu: {{ .Values.nodeAgent.scaling.autoscalerLimits.agent.maximum.cpu }} + memory: {{ .Values.nodeAgent.scaling.autoscalerLimits.agent.maximum.memory }} + controlledResources: ["cpu", "memory"] + controlledValues: RequestsAndLimits + - containerName: 'process-agent' + minAllowed: + cpu: {{ .Values.nodeAgent.scaling.autoscalerLimits.processAgent.minimum.cpu }} + memory: {{ .Values.nodeAgent.scaling.autoscalerLimits.processAgent.minimum.memory }} + maxAllowed: + cpu: {{ .Values.nodeAgent.scaling.autoscalerLimits.processAgent.maximum.cpu }} + memory: {{ .Values.nodeAgent.scaling.autoscalerLimits.processAgent.maximum.memory }} + controlledResources: ["cpu", "memory"] + controlledValues: RequestsAndLimits + updatePolicy: + updateMode: "Auto" +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-scc.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-scc.yaml new file mode 100644 index 0000000000..a09da78c17 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-scc.yaml @@ -0,0 +1,60 @@ +{{- if .Values.nodeAgent.scc.enabled }} +allowHostDirVolumePlugin: true +# was true +allowHostIPC: true +# was true +allowHostNetwork: true +# Allow host PID for dogstatsd origin detection +allowHostPID: true +# Allow host ports for dsd / trace / logs intake +allowHostPorts: true +allowPrivilegeEscalation: true +# was true +allowPrivilegedContainer: true +# was - '*' +allowedCapabilities: [] +allowedUnsafeSysctls: +- '*' +apiVersion: security.openshift.io/v1 +defaultAddCapabilities: null +fsGroup: +# was RunAsAny + type: MustRunAs 
+groups: [] +kind: SecurityContextConstraints +metadata: + name: {{ .Release.Name }}-node-agent + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +priority: null +readOnlyRootFilesystem: false +requiredDropCapabilities: null +# was RunAsAny +runAsUser: + type: MustRunAsRange +# Use the `spc_t` selinux type to access the +# docker socket + proc and cgroup stats +seLinuxContext: + type: RunAsAny + seLinuxOptions: + user: "system_u" + role: "system_r" + type: "spc_t" + level: "s0" +# was - '*' +seccompProfiles: [] +supplementalGroups: + type: RunAsAny +users: +- system:serviceaccount:{{ .Release.Namespace }}:{{ .Release.Name }}-node-agent +# Allow hostPath for docker / process metrics +volumes: + - configMap + - downwardAPI + - emptyDir + - hostPath + - secret +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-service.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-service.yaml new file mode 100644 index 0000000000..0b6cd6ec03 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-service.yaml @@ -0,0 +1,28 @@ +apiVersion: v1 +kind: Service +metadata: + name: {{ .Release.Name }}-node-agent + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: node-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +{{- with .Values.nodeAgent.service.annotations }} + {{- toYaml . 
| nindent 4 }} +{{- end }} +spec: + type: {{ .Values.nodeAgent.service.type }} +{{- if eq .Values.nodeAgent.service.type "LoadBalancer" }} + loadBalancerSourceRanges: {{ toYaml .Values.nodeAgent.service.loadBalancerSourceRanges | nindent 4}} +{{- end }} + ports: + - name: traceport + port: 8126 + protocol: TCP + targetPort: 8126 + selector: + app.kubernetes.io/component: node-agent + app.kubernetes.io/instance: {{ .Release.Name }} + app.kubernetes.io/name: {{ include "stackstate-k8s-agent.name" . }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-serviceaccount.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-serviceaccount.yaml new file mode 100644 index 0000000000..803d184ef7 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/node-agent-serviceaccount.yaml @@ -0,0 +1,14 @@ +apiVersion: v1 +kind: ServiceAccount +metadata: + name: {{ .Release.Name }}-node-agent + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + app.kubernetes.io/component: node-agent + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +{{- with .Values.nodeAgent.serviceaccount.annotations }} + {{- toYaml . | nindent 4 }} +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/openshift-logging-secret.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/openshift-logging-secret.yaml new file mode 100644 index 0000000000..ed0707f1fa --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/openshift-logging-secret.yaml @@ -0,0 +1,22 @@ +{{- if not .Values.stackstate.manageOwnSecrets }} +{{- if .Values.openShiftLogging.installSecret }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "stackstate-k8s-agent.fullname" . 
}}-logging-secret + namespace: openshift-logging + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +type: Opaque +data: + username: {{ "apikey" | b64enc | quote }} +{{- if .Values.global.receiverApiKey }} + password: {{ .Values.global.receiverApiKey | b64enc | quote }} +{{- else }} + password: {{ .Values.stackstate.apiKey | b64enc | quote }} +{{- end }} +{{- end }} +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/pull-secret.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/pull-secret.yaml new file mode 100644 index 0000000000..9169416657 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/pull-secret.yaml @@ -0,0 +1,39 @@ +{{- $defaultRegistry := .Values.global.imageRegistry }} +{{- $top := . }} +{{- $registryAuthMap := dict }} + +{{- range $registry, $credentials := .Values.global.imagePullCredentials }} + {{- $registryAuthDocument := dict -}} + {{- $_ := set $registryAuthDocument "username" $credentials.username }} + {{- $_ := set $registryAuthDocument "password" $credentials.password }} + {{- $authMessage := printf "%s:%s" $registryAuthDocument.username $registryAuthDocument.password | b64enc }} + {{- $_ := set $registryAuthDocument "auth" $authMessage }} + {{- if eq $registry "default" }} + {{- $registryAuthMap := set $registryAuthMap (include "stackstate-k8s-agent.imageRegistry" $top) $registryAuthDocument }} + {{ else }} + {{- $registryAuthMap := set $registryAuthMap $registry $registryAuthDocument }} + {{- end }} +{{- end }} + +{{- if .Values.all.image.pullSecretUsername }} + {{- $registryAuthDocument := dict -}} + {{- $_ := set $registryAuthDocument "username" .Values.all.image.pullSecretUsername }} + {{- $_ := set $registryAuthDocument "password" .Values.all.image.pullSecretPassword }} + {{- $authMessage := 
printf "%s:%s" $registryAuthDocument.username $registryAuthDocument.password | b64enc }} + {{- $_ := set $registryAuthDocument "auth" $authMessage }} + {{- $registryAuthMap := set $registryAuthMap (include "stackstate-k8s-agent.imageRegistry" $top) $registryAuthDocument }} +{{- end }} + +{{- $dockerAuthsDocuments := dict "auths" $registryAuthMap }} + +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "stackstate-k8s-agent.pull-secret.name" . }} + labels: +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . | indent 4 }} +data: + .dockerconfigjson: {{ $dockerAuthsDocuments | toJson | b64enc | quote }} +type: kubernetes.io/dockerconfigjson diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/secret.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/secret.yaml new file mode 100644 index 0000000000..5e0f5f74cd --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/templates/secret.yaml @@ -0,0 +1,27 @@ +{{- if not .Values.stackstate.manageOwnSecrets }} +apiVersion: v1 +kind: Secret +metadata: + name: {{ include "stackstate-k8s-agent.fullname" . }} + namespace: {{ .Release.Namespace }} + labels: +{{ include "stackstate-k8s-agent.labels" . | indent 4 }} +{{ include "stackstate-k8s-agent.global.extraLabels" . | indent 4 }} + annotations: +{{ include "stackstate-k8s-agent.global.extraAnnotations" . 
| indent 4 }} +type: Opaque +data: +{{- if .Values.global.receiverApiKey }} + sts-api-key: {{ .Values.global.receiverApiKey | b64enc | quote }} +{{- else }} + sts-api-key: {{ .Values.stackstate.apiKey | b64enc | quote }} +{{- end }} +{{- if .Values.stackstate.cluster.authToken }} + sts-cluster-auth-token: {{ .Values.stackstate.cluster.authToken | b64enc | quote }} +{{- else }} + sts-cluster-auth-token: {{ randAlphaNum 32 | b64enc | quote }} +{{- end }} +{{- range $key, $value := .Values.global.extraEnv.secret }} + {{ $key }}: {{ $value | b64enc | quote }} +{{- end }} +{{- end }} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/test/clusteragent_resources_test.go b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/clusteragent_resources_test.go new file mode 100644 index 0000000000..25875e871a --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/clusteragent_resources_test.go @@ -0,0 +1,145 @@ +package test + +import ( + "regexp" + "strings" + "testing" + + v1 "k8s.io/api/rbac/v1" + + "github.com/stretchr/testify/assert" + "gitlab.com/StackVista/DevOps/helm-charts/helmtestutil" +) + +var requiredRules = []string{ + "events+get,list,watch", + "nodes+get,list,watch", + "pods+get,list,watch", + "services+get,list,watch", + "configmaps+create,get,patch,update", +} + +var optionalRules = []string{ + "namespaces+get,list,watch", + "componentstatuses+get,list,watch", + "configmaps+list,watch", // get is already required + "endpoints+get,list,watch", + "persistentvolumeclaims+get,list,watch", + "persistentvolumes+get,list,watch", + "secrets+get,list,watch", + "apps/daemonsets+get,list,watch", + "apps/deployments+get,list,watch", + "apps/replicasets+get,list,watch", + "apps/statefulsets+get,list,watch", + "extensions/ingresses+get,list,watch", + "batch/cronjobs+get,list,watch", + "batch/jobs+get,list,watch", +} + +var roleDescriptionRegexp = regexp.MustCompile(`^((?P<group>\w+)/)?(?P<name>\w+)\+(?P<verbs>[\w,]+)`) + +type Rule struct { + Group string + ResourceName
string + Verb string +} + +func assertRuleExistence(t *testing.T, rules []v1.PolicyRule, roleDescription string, shouldBePresent bool) { + match := roleDescriptionRegexp.FindStringSubmatch(roleDescription) + assert.NotNil(t, match) + + var roleRules []Rule + for _, rule := range rules { + for _, group := range rule.APIGroups { + for _, resource := range rule.Resources { + for _, verb := range rule.Verbs { + roleRules = append(roleRules, Rule{group, resource, verb}) + } + } + } + } + + resGroup := match[roleDescriptionRegexp.SubexpIndex("group")] + resName := match[roleDescriptionRegexp.SubexpIndex("name")] + verbs := strings.Split(match[roleDescriptionRegexp.SubexpIndex("verbs")], ",") + + for _, verb := range verbs { + requiredRule := Rule{resGroup, resName, verb} + found := false + for _, rule := range roleRules { + if rule == requiredRule { + found = true + break + } + } + if shouldBePresent { + assert.Truef(t, found, "Rule %v has not been found", requiredRule) + } else { + assert.Falsef(t, found, "Rule %v should not be present", requiredRule) + } + } +} + +func TestAllResourcesAreEnabled(t *testing.T) { + output := helmtestutil.RenderHelmTemplate(t, "stackstate-k8s-agent", "values/minimal.yaml") + resources := helmtestutil.NewKubernetesResources(t, output) + + assert.Contains(t, resources.ClusterRoles, "stackstate-k8s-agent") + assert.Contains(t, resources.Roles, "stackstate-k8s-agent") + rules := resources.ClusterRoles["stackstate-k8s-agent"].Rules + rules = append(rules, resources.Roles["stackstate-k8s-agent"].Rules...) 
+ + for _, requiredRole := range requiredRules { + assertRuleExistence(t, rules, requiredRole, true) + } + // by default, everything is enabled, so all the optional rules should be present as well + for _, optionalRule := range optionalRules { + assertRuleExistence(t, rules, optionalRule, true) + } +} + +func TestMostOfResourcesAreDisabled(t *testing.T) { + output := helmtestutil.RenderHelmTemplate(t, "stackstate-k8s-agent", "values/minimal.yaml", "values/disable-all-resource.yaml") + resources := helmtestutil.NewKubernetesResources(t, output) + + assert.Contains(t, resources.ClusterRoles, "stackstate-k8s-agent") + assert.Contains(t, resources.Roles, "stackstate-k8s-agent") + rules := resources.ClusterRoles["stackstate-k8s-agent"].Rules + rules = append(rules, resources.Roles["stackstate-k8s-agent"].Rules...) + + for _, requiredRole := range requiredRules { + assertRuleExistence(t, rules, requiredRole, true) + } + + // we expect all optional resources to be removed from ClusterRole with the given values + for _, optionalRule := range optionalRules { + assertRuleExistence(t, rules, optionalRule, false) + } +} + +func TestNoClusterWideModificationRights(t *testing.T) { + output := helmtestutil.RenderHelmTemplate(t, "stackstate-k8s-agent", "values/minimal.yaml", "values/http-header-injector.yaml") + resources := helmtestutil.NewKubernetesResources(t, output) + assert.Contains(t, resources.ClusterRoles, "stackstate-k8s-agent") + illegalVerbs := []string{"create", "patch", "update", "delete"} + + for _, clusterRole := range resources.ClusterRoles { + for _, rule := range clusterRole.Rules { + for _, verb := range rule.Verbs { + assert.NotContains(t, illegalVerbs, verb, "ClusterRole %s should not have %s verb for %s resource", clusterRole.Name, verb, rule.Resources) + } + } + } +} + +func TestServicePortChange(t *testing.T) { + output := helmtestutil.RenderHelmTemplate(t, "stackstate-k8s-agent", "values/minimal.yaml", "values/clustercheck_service_port_override.yaml") + 
resources := helmtestutil.NewKubernetesResources(t, output) + + cluster_agent_service := resources.Services["stackstate-k8s-agent-cluster-agent"] + + port := cluster_agent_service.Spec.Ports[0] + assert.Equal(t, port.Name, "clusteragent") + assert.Equal(t, port.Port, int32(8008)) + assert.Equal(t, port.TargetPort.IntVal, int32(9009)) +} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/test/clustername_test.go b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/clustername_test.go new file mode 100644 index 0000000000..55090b9956 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/clustername_test.go @@ -0,0 +1,54 @@ +package test + +import ( + "testing" + + "github.com/gruntwork-io/terratest/modules/helm" + "github.com/stretchr/testify/assert" + + "gitlab.com/StackVista/DevOps/helm-charts/helmtestutil" +) + +func TestHelmBasicRender(t *testing.T) { + output := helmtestutil.RenderHelmTemplate(t, "stackstate-k8s-agent", "values/minimal.yaml") + + // Parse all resources into their corresponding types for validation and further inspection + helmtestutil.NewKubernetesResources(t, output) +} + +func TestClusterNameValidation(t *testing.T) { + testCases := []struct { + Name string + ClusterName string + IsValid bool + }{ + {"not allowed end with special character [.]", "name.", false}, + {"not allowed end with special character [-]", "name-", false}, + {"not allowed start with special character [-]", "-name", false}, + {"not allowed start with special character [.]", ".name", false}, + {"upper case is not allowed", "Euwest1-prod.cool-company.com", false}, + {"upper case is not allowed", "euwest1-PROD.cool-company.com", false}, + {"upper case is not allowed", "euwest1-prod.cool-company.coM", false}, + {"dots and dashes are allowed in the middle", "euwest1-prod.cool-company.com", true}, + {"underscore is not allowed", "why_7", false}, + } + + for _, testCase := range testCases { + t.Run(testCase.Name, func(t *testing.T) { + output, err := 
helmtestutil.RenderHelmTemplateOpts( + t, "cluster-agent", + &helm.Options{ + ValuesFiles: []string{"values/minimal.yaml"}, + SetStrValues: map[string]string{ + "stackstate.cluster.name": testCase.ClusterName, + }, + }) + if testCase.IsValid { + assert.Nil(t, err) + } else { + assert.NotNil(t, err) + assert.Contains(t, output, "stackstate.cluster.name: Does not match pattern") + } + }) + } +} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_ksm_custom_url.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_ksm_custom_url.yaml new file mode 100644 index 0000000000..57b973eed9 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_ksm_custom_url.yaml @@ -0,0 +1,7 @@ +checksAgent: + enabled: true + kubeStateMetrics: + url: http://my-custom-ksm-url.monitoring.svc.local:8080/metrics +dependencies: + kubeStateMetrics: + enabled: true diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_ksm_no_override.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_ksm_no_override.yaml new file mode 100644 index 0000000000..b6c817d473 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_ksm_no_override.yaml @@ -0,0 +1,5 @@ +checksAgent: + enabled: true +dependencies: + kubeStateMetrics: + enabled: true diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_ksm_override.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_ksm_override.yaml new file mode 100644 index 0000000000..9ca201345a --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_ksm_override.yaml @@ -0,0 +1,26 @@ +checksAgent: + enabled: true +dependencies: + kubeStateMetrics: + enabled: true +agent: + config: + override: +# agent.config.override -- Disables kubernetes_state check on regular agent pods. 
+ - name: auto_conf.yaml + path: /etc/stackstate-agent/conf.d/kubernetes_state.d + data: | +clusterAgent: + config: + override: +# clusterAgent.config.override -- Defines kubernetes_state check for clusterchecks agents. Auto-discovery +# with ad_identifiers does not work here. Use a specific URL instead. + - name: conf.yaml + path: /etc/stackstate-agent/conf.d/kubernetes_state.d + data: | + cluster_check: true + + init_config: + + instances: + - kube_state_url: http://YOUR_KUBE_STATE_METRICS_SERVICE_NAME:8080/metrics diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_no_ksm_custom_url.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_no_ksm_custom_url.yaml new file mode 100644 index 0000000000..a62691878c --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_no_ksm_custom_url.yaml @@ -0,0 +1,7 @@ +checksAgent: + enabled: true + kubeStateMetrics: + url: http://my-custom-ksm-url.monitoring.svc.local:8080/metrics +dependencies: + kubeStateMetrics: + enabled: false diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_service_port_override.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_service_port_override.yaml new file mode 100644 index 0000000000..c01a98fcb4 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/clustercheck_service_port_override.yaml @@ -0,0 +1,4 @@ +clusterAgent: + service: + port: 8008 + targetPort: 9009 diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/disable-all-resource.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/disable-all-resource.yaml new file mode 100644 index 0000000000..cd33e843ea --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/disable-all-resource.yaml @@ -0,0 +1,17 @@ +clusterAgent: + collection: + kubernetesMetrics: false + kubernetesResources: + namespaces: false + configmaps: false 
+ endpoints: false + persistentvolumes: false + persistentvolumeclaims: false + secrets: false + daemonsets: false + deployments: false + replicasets: false + statefulsets: false + ingresses: false + cronjobs: false + jobs: false diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/http-header-injector.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/http-header-injector.yaml new file mode 100644 index 0000000000..c9392ce2dd --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/http-header-injector.yaml @@ -0,0 +1,8 @@ +httpHeaderInjectorWebhook: + webhook: + tls: + mode: "provided" + provided: + caBundle: insert-ca-here + crt: insert-cert-here + key: insert-key-here diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/minimal.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/minimal.yaml new file mode 100644 index 0000000000..b310c9a093 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/test/values/minimal.yaml @@ -0,0 +1,7 @@ +stackstate: + apiKey: foobar + cluster: + name: some-k8s-cluster + token: some-token + + url: https://stackstate:7000/receiver diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/values.schema.json b/charts/stackstate/stackstate-k8s-agent/1.0.98/values.schema.json new file mode 100644 index 0000000000..57d36a9f34 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/values.schema.json @@ -0,0 +1,78 @@ +{ + "$schema": "https://json-schema.org/draft/2019-09/schema", + "$id": "https://stackstate.io/example.json", + "type": "object", + "default": {}, + "title": "StackState Agent Helm chart values", + "required": [ + "stackstate", + "clusterAgent" + ], + "properties": { + "stackstate": { + "type": "object", + "required": [ + "cluster", + "url" + ], + "properties": { + "apiKey": { + "type": "string" + }, + "cluster": { + "type": "object", + "required": ["name"], + "properties": { + "name": { + "type": "string", + "pattern": 
"^[a-z0-9]([a-z0-9\\-\\.]*[a-z0-9])$" + }, + "authToken": { + "type": "string" + } + } + }, + "url": { + "type": "string" + } + } + }, + "clusterAgent": { + "type": "object", + "required": [ + "config" + ], + "properties": { + "config": { + "type": "object", + "required": [ + "events" + ], + "properties": { + "events": { + "type": "object", + "properties": { + "categories": { + "type": "object", + "patternProperties": { + ".*": { + "type": [ + "string" + ], + "enum": [ + "Alerts", + "Activities", + "Changes", + "Others" + ] + } + } + } + } + } + } + } + } + } + } +} diff --git a/charts/stackstate/stackstate-k8s-agent/1.0.98/values.yaml b/charts/stackstate/stackstate-k8s-agent/1.0.98/values.yaml new file mode 100644 index 0000000000..c58846a7c0 --- /dev/null +++ b/charts/stackstate/stackstate-k8s-agent/1.0.98/values.yaml @@ -0,0 +1,616 @@ +##################### +# General variables # +##################### + +global: + extraEnv: + # global.extraEnv.open -- Extra open environment variables to inject into pods. + open: {} + # global.extraEnv.secret -- Extra secret environment variables to inject into pods via a `Secret` object. + secret: {} + # global.imagePullSecrets -- Secrets / credentials needed for container image registry. + imagePullSecrets: [] + # global.imagePullCredentials -- Globally define credentials for pulling images. + imagePullCredentials: {} + proxy: + # global.proxy.url -- Proxy for all traffic to StackState + url: "" + # global.skipSslValidation -- Skip TLS certificate validation on client connections to StackState + skipSslValidation: false + + # global.extraLabels -- Extra labels added to all resources created by the helm chart + extraLabels: {} + # global.extraAnnotations -- Extra annotations added to all resources created by the helm chart + extraAnnotations: {} + +# nameOverride -- Override the name of the chart. +nameOverride: "" +# fullnameOverride -- Override the fullname of the chart.
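The values.schema.json above constrains `stackstate.cluster.name` with the pattern `^[a-z0-9]([a-z0-9\-\.]*[a-z0-9])$`. As a quick sanity check outside of Helm (an illustrative sketch, not part of the chart), the pattern can be exercised with Python's `re` module:

```python
import re

# Regex copied from values.schema.json for stackstate.cluster.name:
# lowercase alphanumerics, with dots and dashes allowed only in the middle.
CLUSTER_NAME_RE = re.compile(r"^[a-z0-9]([a-z0-9\-\.]*[a-z0-9])$")

def is_valid_cluster_name(name: str) -> bool:
    """Return True if the name satisfies the chart's schema pattern."""
    return CLUSTER_NAME_RE.fullmatch(name) is not None

print(is_valid_cluster_name("some-k8s-cluster"))  # True
print(is_valid_cluster_name("Bad_Cluster"))       # False: uppercase and underscore
print(is_valid_cluster_name("a"))                 # False: the pattern requires at least two characters
```

Note the pattern also rejects single-character names, since the leading character class and the trailing `[a-z0-9]` inside the group are both mandatory.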
+fullnameOverride: "" + +# targetSystem -- Target OS for this deployment (possible values: linux) +targetSystem: "linux" + +all: + image: + # all.image.registry -- The image registry to use. + registry: "quay.io" + hardening: + # all.hardening.enabled -- An indication of whether the containers will be evaluated for hardening at runtime + enabled: false + +nodeAgent: + # nodeAgent.autoScalingEnabled -- Enable / disable autoscaling for the node agent pods. + autoScalingEnabled: false + containerRuntime: + # nodeAgent.containerRuntime.customSocketPath -- If the container socket path does not match the default for CRI-O, Containerd or Docker, supply a custom socket path. + customSocketPath: "" + # nodeAgent.containerRuntime.customHostProc -- If the container is launched from a place where /proc is mounted differently, /proc can be changed + hostProc: /proc + + scc: + # nodeAgent.scc.enabled -- Enable / disable the installation of the SecurityContextConfiguration needed for installation on OpenShift. + enabled: false + apm: + # nodeAgent.apm.enabled -- Enable / disable the nodeAgent APM module. + enabled: true + networkTracing: + # nodeAgent.networkTracing.enabled -- Enable / disable the nodeAgent network tracing module. + enabled: true + protocolInspection: + # nodeAgent.protocolInspection.enabled -- Enable / disable the nodeAgent protocol inspection. + enabled: true + httpTracing: + enabled: true + # nodeAgent.skipSslValidation -- Set to true if self signed certificates are used. + skipSslValidation: false + # nodeAgent.skipKubeletTLSVerify -- Set to true if you want to skip kubelet tls verification. + skipKubeletTLSVerify: false + + # nodeAgent.checksTagCardinality -- low, orchestrator or high. 
Orchestrator level adds pod_name, high adds display_container_name + checksTagCardinality: orchestrator + + # nodeAgent.config -- + config: + # nodeAgent.config.override -- A list of objects containing three keys `name`, `path` and `data`, specifying filenames at specific paths which need to be (potentially) overridden using a mounted configmap + override: [] + + # nodeAgent.priorityClassName -- Priority class for nodeAgent pods. + priorityClassName: "" + + scaling: + autoscalerLimits: + agent: + minimum: + # nodeAgent.scaling.autoscalerLimits.agent.minimum.cpu -- Minimum CPU resource limits for main agent. + cpu: "20m" + # nodeAgent.scaling.autoscalerLimits.agent.minimum.memory -- Minimum memory resource limits for main agent. + memory: "180Mi" + maximum: + # nodeAgent.scaling.autoscalerLimits.agent.maximum.cpu -- Maximum CPU resource limits for main agent. + cpu: "200m" + # nodeAgent.scaling.autoscalerLimits.agent.maximum.memory -- Maximum memory resource limits for main agent. + memory: "450Mi" + processAgent: + minimum: + # nodeAgent.scaling.autoscalerLimits.processAgent.minimum.cpu -- Minimum CPU resource limits for process agent. + cpu: "25m" + # nodeAgent.scaling.autoscalerLimits.processAgent.minimum.memory -- Minimum memory resource limits for process agent. + memory: "100Mi" + maximum: + # nodeAgent.scaling.autoscalerLimits.processAgent.maximum.cpu -- Maximum CPU resource limits for process agent. + cpu: "200m" + # nodeAgent.scaling.autoscalerLimits.processAgent.maximum.memory -- Maximum memory resource limits for process agent. + memory: "500Mi" + + containers: + agent: + image: + # nodeAgent.containers.agent.image.repository -- Base container image repository. + repository: stackstate/stackstate-k8s-agent + # nodeAgent.containers.agent.image.tag -- Default container image tag. + tag: "c4caacef" + # nodeAgent.containers.agent.image.pullPolicy -- Default container image pull policy. 
+ pullPolicy: IfNotPresent + processAgent: + # nodeAgent.containers.agent.processAgent.enabled -- Enable / disable the agent process agent module. - deprecated + enabled: false + # nodeAgent.containers.agent.env -- Additional environment variables for the agent container + env: {} + # nodeAgent.containers.agent.logLevel -- Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off + ## If not set, fall back to the value of agent.logLevel. + logLevel: # INFO + + resources: + limits: + # nodeAgent.containers.agent.resources.limits.cpu -- CPU resource limits. + cpu: "270m" + # nodeAgent.containers.agent.resources.limits.memory -- Memory resource limits. + memory: "420Mi" + requests: + # nodeAgent.containers.agent.resources.requests.cpu -- CPU resource requests. + cpu: "20m" + # nodeAgent.containers.agent.resources.requests.memory -- Memory resource requests. + memory: "180Mi" + livenessProbe: + # nodeAgent.containers.agent.livenessProbe.enabled -- Enable use of livenessProbe check. + enabled: true + # nodeAgent.containers.agent.livenessProbe.failureThreshold -- `failureThreshold` for the liveness probe. + failureThreshold: 3 + # nodeAgent.containers.agent.livenessProbe.initialDelaySeconds -- `initialDelaySeconds` for the liveness probe. + initialDelaySeconds: 15 + # nodeAgent.containers.agent.livenessProbe.periodSeconds -- `periodSeconds` for the liveness probe. + periodSeconds: 15 + # nodeAgent.containers.agent.livenessProbe.successThreshold -- `successThreshold` for the liveness probe. + successThreshold: 1 + # nodeAgent.containers.agent.livenessProbe.timeoutSeconds -- `timeoutSeconds` for the liveness probe. + timeoutSeconds: 5 + readinessProbe: + # nodeAgent.containers.agent.readinessProbe.enabled -- Enable use of readinessProbe check. + enabled: true + # nodeAgent.containers.agent.readinessProbe.failureThreshold -- `failureThreshold` for the readiness probe. 
+ failureThreshold: 3 + # nodeAgent.containers.agent.readinessProbe.initialDelaySeconds -- `initialDelaySeconds` for the readiness probe. + initialDelaySeconds: 15 + # nodeAgent.containers.agent.readinessProbe.periodSeconds -- `periodSeconds` for the readiness probe. + periodSeconds: 15 + # nodeAgent.containers.agent.readinessProbe.successThreshold -- `successThreshold` for the readiness probe. + successThreshold: 1 + # nodeAgent.containers.agent.readinessProbe.timeoutSeconds -- `timeoutSeconds` for the readiness probe. + timeoutSeconds: 5 + + processAgent: + # nodeAgent.containers.processAgent.enabled -- Enable / disable the process agent container. + enabled: true + image: + # Override to pull the image from an alternate registry + registry: + # nodeAgent.containers.processAgent.image.repository -- Process-agent container image repository. + repository: stackstate/stackstate-k8s-process-agent + # nodeAgent.containers.processAgent.image.tag -- Default process-agent container image tag. + tag: "cae7a4fa" + # nodeAgent.containers.processAgent.image.pullPolicy -- Process-agent container image pull policy. + pullPolicy: IfNotPresent + # nodeAgent.containers.processAgent.env -- Additional environment variables for the process-agent container + env: {} + # nodeAgent.containers.processAgent.logLevel -- Set logging verbosity, valid log levels are: trace, debug, info, warn, error, critical, and off + ## If not set, fall back to the value of agent.logLevel. + logLevel: # INFO + + # nodeAgent.containers.processAgent.procVolumeReadOnly -- Configure whether /host/proc is read only for the process agent container + procVolumeReadOnly: true + + resources: + limits: + # nodeAgent.containers.processAgent.resources.limits.cpu -- CPU resource limits. + cpu: "125m" + # nodeAgent.containers.processAgent.resources.limits.memory -- Memory resource limits. + memory: "400Mi" + requests: + # nodeAgent.containers.processAgent.resources.requests.cpu -- CPU resource requests. 
+ cpu: "25m" + # nodeAgent.containers.processAgent.resources.requests.memory -- Memory resource requests. + memory: "128Mi" + # nodeAgent.service -- The Kubernetes service for the agent + service: + # nodeAgent.service.type -- Type of Kubernetes service: ClusterIP, LoadBalancer, NodePort + type: ClusterIP + # nodeAgent.service.annotations -- Annotations for the service + annotations: {} + # nodeAgent.service.loadBalancerSourceRanges -- The IPv4 CIDR allowed to reach the LoadBalancer for the service. For the LoadBalancer type of service only. + loadBalancerSourceRanges: ["10.0.0.0/8"] + + # nodeAgent.logLevel -- Logging level for agent processes. + logLevel: INFO + + # nodeAgent.updateStrategy -- The update strategy for the DaemonSet object. + updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 100 + + # nodeAgent.nodeSelector -- Node labels for pod assignment. + nodeSelector: {} + + # nodeAgent.tolerations -- Toleration labels for pod assignment. + tolerations: [] + + # nodeAgent.affinity -- Affinity settings for pod assignment. + affinity: {} + + serviceaccount: + # nodeAgent.serviceaccount.annotations -- Annotations for the service account for the agent daemonset pods + annotations: {} + +processAgent: + softMemoryLimit: + # processAgent.softMemoryLimit.goMemLimit -- Soft limit for Golang heap allocation; for sanity, should be around 85% of nodeAgent.containers.processAgent.resources.limits.memory. + goMemLimit: 340MiB + # processAgent.softMemoryLimit.httpStatsBufferSize -- Sets a maximum for the number of http stats to keep in memory between check runs; using 40k requires around ~400MiB of memory. + httpStatsBufferSize: 40000 + # processAgent.softMemoryLimit.httpObservationsBufferSize -- Sets a maximum for the number of http observations to keep in memory between check runs; using 40k requires around ~400MiB of memory.
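The 85% guidance above ties `processAgent.softMemoryLimit.goMemLimit` to the process-agent container memory limit, leaving headroom for non-heap memory. A small sketch of that sizing rule (the helper name is illustrative, not chart code):

```python
def go_mem_limit(container_limit_mi: int, fraction: float = 0.85) -> str:
    """Derive a soft Go heap limit (in MiB) from a container memory limit
    expressed in Mi, keeping ~15% headroom for non-heap usage."""
    return f"{int(container_limit_mi * fraction)}MiB"

# The chart's defaults line up: the process-agent memory limit of 400Mi
# yields the shipped goMemLimit of 340MiB.
print(go_mem_limit(400))  # 340MiB
```

If you raise `nodeAgent.containers.processAgent.resources.limits.memory`, the same fraction gives a matching `goMemLimit` to set alongside it.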
+ httpObservationsBufferSize: 40000 + + checkIntervals: + # processAgent.checkIntervals.container -- Override the default value of the container check interval in seconds. + container: 28 + # processAgent.checkIntervals.connections -- Override the default value of the connections check interval in seconds. + connections: 30 + # processAgent.checkIntervals.process -- Override the default value of the process check interval in seconds. + process: 32 + +clusterAgent: + collection: + # clusterAgent.collection.kubernetesEvents -- Enable / disable the cluster agent events collection. + kubernetesEvents: true + # clusterAgent.collection.kubernetesMetrics -- Enable / disable the cluster agent metrics collection. + kubernetesMetrics: true + # clusterAgent.collection.kubernetesTimeout -- Default timeout (in seconds) when obtaining information from the Kubernetes API. + kubernetesTimeout: 10 + # clusterAgent.collection.kubernetesTopology -- Enable / disable the cluster agent topology collection. + kubernetesTopology: true + kubeStateMetrics: + # clusterAgent.collection.kubeStateMetrics.enabled -- Enable / disable the cluster agent kube-state-metrics collection. + enabled: true + # clusterAgent.collection.kubeStateMetrics.clusterCheck -- For large clusters where the Kubernetes State Metrics Check Core needs to be distributed on dedicated workers. + clusterCheck: false + # clusterAgent.collection.kubeStateMetrics.labelsAsTags -- Extra labels to collect from resources and to turn into StackState tags. + ## It has the following structure: + ## labelsAsTags: + ## <resource_type>: # can be pod, deployment, node, etc. + ## <label>: <tag> # where <label> is the kubernetes label and <tag> is the StackState tag + ## <label>: <tag> + ## <resource_type>: + ## <label>: <tag> + ## + ## Warning: the label must match the transformation done by kube-state-metrics, + ## for example tags.stackstate/version becomes tags_stackstate_version.
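The warning above refers to how kube-state-metrics sanitizes label names before exposing them as metric labels. A rough approximation of that transformation (an illustrative helper, not chart or kube-state-metrics code) replaces every character outside `[a-zA-Z0-9_]` with an underscore:

```python
import re

def ksm_label_key(label: str) -> str:
    """Approximate kube-state-metrics label sanitization: any character
    that is not alphanumeric or an underscore becomes '_'."""
    return re.sub(r"[^a-zA-Z0-9_]", "_", label)

# Matches the example given in the comment above.
print(ksm_label_key("tags.stackstate/version"))  # tags_stackstate_version
```

So an entry under `labelsAsTags` must use the sanitized form (`tags_stackstate_version`), not the raw Kubernetes label name.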
+ labelsAsTags: {} + # pod: + # app: app + # node: + # zone: zone + # team: team + + # clusterAgent.collection.kubeStateMetrics.annotationsAsTags -- Extra annotations to collect from resources and to turn into StackState tags. + + ## It has the following structure: + ## annotationsAsTags: + ## <resource_type>: # can be pod, deployment, node, etc. + ## <annotation>: <tag> # where <annotation> is the kubernetes annotation and <tag> is the StackState tag + ## <annotation>: <tag> + ## <resource_type>: + ## <annotation>: <tag> + ## + ## Warning: the annotation must match the transformation done by kube-state-metrics, + ## for example tags.stackstate/version becomes tags_stackstate_version. + annotationsAsTags: {} + kubernetesResources: + # clusterAgent.collection.kubernetesResources.limitranges -- Enable / disable collection of LimitRanges. + limitranges: true + # clusterAgent.collection.kubernetesResources.horizontalpodautoscalers -- Enable / disable collection of HorizontalPodAutoscalers. + horizontalpodautoscalers: true + # clusterAgent.collection.kubernetesResources.replicationcontrollers -- Enable / disable collection of ReplicationControllers. + replicationcontrollers: true + # clusterAgent.collection.kubernetesResources.poddisruptionbudgets -- Enable / disable collection of PodDisruptionBudgets. + poddisruptionbudgets: true + # clusterAgent.collection.kubernetesResources.storageclasses -- Enable / disable collection of StorageClasses. + storageclasses: true + # clusterAgent.collection.kubernetesResources.volumeattachments -- Enable / disable collection of Volume Attachments. Used to bind Nodes to Persistent Volumes. + volumeattachments: true + # clusterAgent.collection.kubernetesResources.namespaces -- Enable / disable collection of Namespaces. + namespaces: true + # clusterAgent.collection.kubernetesResources.configmaps -- Enable / disable collection of ConfigMaps. + configmaps: true + # clusterAgent.collection.kubernetesResources.endpoints -- Enable / disable collection of Endpoints.
If endpoints are disabled then StackState won't be able to connect a Service to the Pods that are serving it + endpoints: true + # clusterAgent.collection.kubernetesResources.persistentvolumes -- Enable / disable collection of PersistentVolumes. + persistentvolumes: true + # clusterAgent.collection.kubernetesResources.persistentvolumeclaims -- Enable / disable collection of PersistentVolumeClaims. Disabling these will not let StackState connect PersistentVolumes to the Pods they are attached to + persistentvolumeclaims: true + # clusterAgent.collection.kubernetesResources.secrets -- Enable / disable collection of Secrets. + secrets: true + # clusterAgent.collection.kubernetesResources.daemonsets -- Enable / disable collection of DaemonSets. + daemonsets: true + # clusterAgent.collection.kubernetesResources.deployments -- Enable / disable collection of Deployments. + deployments: true + # clusterAgent.collection.kubernetesResources.replicasets -- Enable / disable collection of ReplicaSets. + replicasets: true + # clusterAgent.collection.kubernetesResources.statefulsets -- Enable / disable collection of StatefulSets. + statefulsets: true + # clusterAgent.collection.kubernetesResources.ingresses -- Enable / disable collection of Ingresses. + ingresses: true + # clusterAgent.collection.kubernetesResources.cronjobs -- Enable / disable collection of CronJobs. + cronjobs: true + # clusterAgent.collection.kubernetesResources.jobs -- Enable / disable collection of Jobs. + jobs: true + # clusterAgent.collection.kubernetesResources.resourcequotas -- Enable / disable collection of ResourceQuotas. + resourcequotas: true + + # clusterAgent.config -- + config: + events: + # clusterAgent.config.events.categories -- Custom mapping from Kubernetes event reason to StackState event category.
Categories allowed: Alerts, Activities, Changes, Others + categories: {} + topology: + # clusterAgent.config.topology.collectionInterval -- Interval for running topology collection, in seconds + collectionInterval: 90 + configMap: + # clusterAgent.config.configMap.maxDataSize -- Maximum amount of characters for the data property of a ConfigMap collected by the kubernetes topology check + maxDataSize: + # clusterAgent.config.override -- A list of objects containing three keys `name`, `path` and `data`, specifying filenames at specific paths which need to be (potentially) overridden using a mounted configmap + override: [] + + service: + # clusterAgent.service.port -- Change the Cluster Agent service port + port: 5005 + # clusterAgent.service.targetPort -- Change the Cluster Agent service targetPort + targetPort: 5005 + + # clusterAgent.enabled -- Enable / disable the cluster agent. + enabled: true + + # clusterAgent.skipSslValidation -- If true, ignores the server certificate being signed by an unknown authority. + skipSslValidation: false + + image: + # clusterAgent.image.repository -- Base container image repository. + repository: stackstate/stackstate-k8s-cluster-agent + # clusterAgent.image.tag -- Default container image tag. + tag: "c4caacef" + # clusterAgent.image.pullPolicy -- Default container image pull policy. + pullPolicy: IfNotPresent + + livenessProbe: + # clusterAgent.livenessProbe.enabled -- Enable use of livenessProbe check. + enabled: true + # clusterAgent.livenessProbe.failureThreshold -- `failureThreshold` for the liveness probe. + failureThreshold: 3 + # clusterAgent.livenessProbe.initialDelaySeconds -- `initialDelaySeconds` for the liveness probe. + initialDelaySeconds: 15 + # clusterAgent.livenessProbe.periodSeconds -- `periodSeconds` for the liveness probe. + periodSeconds: 15 + # clusterAgent.livenessProbe.successThreshold -- `successThreshold` for the liveness probe. 
+ successThreshold: 1 + # clusterAgent.livenessProbe.timeoutSeconds -- `timeoutSeconds` for the liveness probe. + timeoutSeconds: 5 + + # clusterAgent.logLevel -- Logging level for stackstate-k8s-agent processes. + logLevel: INFO + + # clusterAgent.priorityClassName -- Priority class for stackstate-k8s-agent pods. + priorityClassName: "" + + readinessProbe: + # clusterAgent.readinessProbe.enabled -- Enable use of readinessProbe check. + enabled: true + # clusterAgent.readinessProbe.failureThreshold -- `failureThreshold` for the readiness probe. + failureThreshold: 3 + # clusterAgent.readinessProbe.initialDelaySeconds -- `initialDelaySeconds` for the readiness probe. + initialDelaySeconds: 15 + # clusterAgent.readinessProbe.periodSeconds -- `periodSeconds` for the readiness probe. + periodSeconds: 15 + # clusterAgent.readinessProbe.successThreshold -- `successThreshold` for the readiness probe. + successThreshold: 1 + # clusterAgent.readinessProbe.timeoutSeconds -- `timeoutSeconds` for the readiness probe. + timeoutSeconds: 5 + + # clusterAgent.replicaCount -- Number of replicas of the cluster agent to deploy. + replicaCount: 1 + + serviceaccount: + # clusterAgent.serviceaccount.annotations -- Annotations for the service account for the cluster agent pods + annotations: {} + + # clusterAgent.strategy -- The strategy for the Deployment object. + strategy: + type: RollingUpdate + # rollingUpdate: + # maxUnavailable: 1 + + resources: + limits: + # clusterAgent.resources.limits.cpu -- CPU resource limits. + cpu: "400m" + # clusterAgent.resources.limits.memory -- Memory resource limits. + memory: "800Mi" + requests: + # clusterAgent.resources.requests.cpu -- CPU resource requests. + cpu: "70m" + # clusterAgent.resources.requests.memory -- Memory resource requests. + memory: "512Mi" + + # clusterAgent.nodeSelector -- Node labels for pod assignment. + nodeSelector: {} + + # clusterAgent.tolerations -- Toleration labels for pod assignment. 
+ tolerations: [] + + # clusterAgent.affinity -- Affinity settings for pod assignment. + affinity: {} + +openShiftLogging: + # openShiftLogging.installSecret -- Install a secret for logging on openshift + installSecret: false + +logsAgent: + # logsAgent.enabled -- Enable / disable k8s pod log collection + enabled: true + + # logsAgent.skipSslValidation -- If true, ignores the server certificate being signed by an unknown authority. + skipSslValidation: false + + # logsAgent.priorityClassName -- Priority class for logsAgent pods. + priorityClassName: "" + + image: + # logsAgent.image.repository -- Base container image repository. + repository: stackstate/promtail + # logsAgent.image.tag -- Default container image tag. + tag: 2.9.8-5b179aee + # logsAgent.image.pullPolicy -- Default container image pull policy. + pullPolicy: IfNotPresent + + resources: + limits: + # logsAgent.resources.limits.cpu -- CPU resource limits. + cpu: "1300m" + # logsAgent.resources.limits.memory -- Memory resource limits. + memory: "192Mi" + requests: + # logsAgent.resources.requests.cpu -- CPU resource requests. + cpu: "20m" + # logsAgent.resources.requests.memory -- Memory resource requests. + memory: "100Mi" + + # logsAgent.updateStrategy -- The update strategy for the DaemonSet object. + updateStrategy: + type: RollingUpdate + rollingUpdate: + maxUnavailable: 100 + + # logsAgent.nodeSelector -- Node labels for pod assignment. + nodeSelector: {} + + # logsAgent.tolerations -- Toleration labels for pod assignment. + tolerations: [] + + # logsAgent.affinity -- Affinity settings for pod assignment. 
+ affinity: {} + + serviceaccount: + # logsAgent.serviceaccount.annotations -- Annotations for the service account for the daemonset pods + annotations: {} + +checksAgent: + # checksAgent.enabled -- Enable / disable running cluster checks in a separately deployed pod + enabled: true + scc: + # checksAgent.scc.enabled -- Enable / disable the installation of the SecurityContextConfiguration needed for installation on OpenShift + enabled: false + apm: + # checksAgent.apm.enabled -- Enable / disable the agent APM module. + enabled: true + networkTracing: + # checksAgent.networkTracing.enabled -- Enable / disable the agent network tracing module. + enabled: true + processAgent: + # checksAgent.processAgent.enabled -- Enable / disable the agent process agent module. + enabled: true + # checksAgent.skipSslValidation -- Set to true if self-signed certificates are used. + skipSslValidation: false + + # checksAgent.checksTagCardinality -- low, orchestrator or high. Orchestrator level adds pod_name, high adds display_container_name + checksTagCardinality: orchestrator + + # checksAgent.config -- + config: + # checksAgent.config.override -- A list of objects containing three keys `name`, `path` and `data`, specifying filenames at specific paths which need to be (potentially) overridden using a mounted configmap + override: [] + + image: + # checksAgent.image.repository -- Base container image repository. + repository: stackstate/stackstate-k8s-agent + # checksAgent.image.tag -- Default container image tag. + tag: "c4caacef" + # checksAgent.image.pullPolicy -- Default container image pull policy. + pullPolicy: IfNotPresent + + livenessProbe: + # checksAgent.livenessProbe.enabled -- Enable use of livenessProbe check. + enabled: true + # checksAgent.livenessProbe.failureThreshold -- `failureThreshold` for the liveness probe. + failureThreshold: 3 + # checksAgent.livenessProbe.initialDelaySeconds -- `initialDelaySeconds` for the liveness probe.
+ initialDelaySeconds: 15 + # checksAgent.livenessProbe.periodSeconds -- `periodSeconds` for the liveness probe. + periodSeconds: 15 + # checksAgent.livenessProbe.successThreshold -- `successThreshold` for the liveness probe. + successThreshold: 1 + # checksAgent.livenessProbe.timeoutSeconds -- `timeoutSeconds` for the liveness probe. + timeoutSeconds: 5 + + # checksAgent.logLevel -- Logging level for clusterchecks agent processes. + logLevel: INFO + + # checksAgent.priorityClassName -- Priority class for clusterchecks agent pods. + priorityClassName: "" + + readinessProbe: + # checksAgent.readinessProbe.enabled -- Enable use of readinessProbe check. + enabled: true + # checksAgent.readinessProbe.failureThreshold -- `failureThreshold` for the readiness probe. + failureThreshold: 3 + # checksAgent.readinessProbe.initialDelaySeconds -- `initialDelaySeconds` for the readiness probe. + initialDelaySeconds: 15 + # checksAgent.readinessProbe.periodSeconds -- `periodSeconds` for the readiness probe. + periodSeconds: 15 + # checksAgent.readinessProbe.successThreshold -- `successThreshold` for the readiness probe. + successThreshold: 1 + # checksAgent.readinessProbe.timeoutSeconds -- `timeoutSeconds` for the readiness probe. + timeoutSeconds: 5 + + # checksAgent.replicas -- Number of clusterchecks agent pods to schedule + replicas: 1 + + resources: + limits: + # checksAgent.resources.limits.cpu -- CPU resource limits. + cpu: "400m" + # checksAgent.resources.limits.memory -- Memory resource limits. + memory: "600Mi" + requests: + # checksAgent.resources.requests.cpu -- CPU resource requests. + cpu: "20m" + # checksAgent.resources.requests.memory -- Memory resource requests. + memory: "512Mi" + + serviceaccount: + # checksAgent.serviceaccount.annotations -- Annotations for the service account for the cluster checks pods + annotations: {} + + # checksAgent.strategy -- The strategy for the Deployment object. 
+ strategy: + type: RollingUpdate + # rollingUpdate: + # maxUnavailable: 1 + + # checksAgent.nodeSelector -- Node labels for pod assignment. + nodeSelector: {} + + # checksAgent.tolerations -- Toleration labels for pod assignment. + tolerations: [] + + # checksAgent.affinity -- Affinity settings for pod assignment. + affinity: {} + +################################## +# http-header-injector variables # +################################## + +httpHeaderInjectorWebhook: + # httpHeaderInjectorWebhook.enabled -- Enable the webhook that injects the http-header-injection sidecar proxy + enabled: false + +######################## +# StackState variables # +######################## + +stackstate: + # stackstate.manageOwnSecrets -- Set to true if you don't want this helm chart to create secrets for you. + manageOwnSecrets: false + # stackstate.customSecretName -- Name of the secret containing the receiver API key. + customSecretName: "" + # stackstate.customApiKeySecretKey -- Key in the secret containing the receiver API key. + customApiKeySecretKey: "sts-api-key" + # stackstate.customClusterAuthTokenSecretKey -- Key in the secret containing the cluster auth token. + customClusterAuthTokenSecretKey: "sts-cluster-auth-token" + # stackstate.apiKey -- (string) **PROVIDE YOUR API KEY HERE** API key to be used by the StackState agent. + apiKey: + cluster: + # stackstate.cluster.name -- (string) **PROVIDE KUBERNETES CLUSTER NAME HERE** Name of the Kubernetes cluster where the agent will be installed. + name: + # stackstate.cluster.authToken -- Provide a token to enable secure communication between the agent and the cluster agent. + authToken: "" + # stackstate.url -- (string) **PROVIDE STACKSTATE URL HERE** URL of the StackState installation to receive data from the agent.
+ url: diff --git a/index.yaml b/index.yaml index 8120aeb953..48caaec708 100644 --- a/index.yaml +++ b/index.yaml @@ -241,6 +241,40 @@ entries: - assets/amd/amd-gpu-0.9.0.tgz version: 0.9.0 artifactory-ha: + - annotations: + artifactoryServiceVersion: 7.90.20 + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: JFrog Artifactory HA + catalog.cattle.io/kube-version: '>= 1.19.0-0' + catalog.cattle.io/release-name: artifactory-ha + apiVersion: v2 + appVersion: 7.90.14 + created: "2024-10-09T00:35:07.012228604Z" + dependencies: + - condition: postgresql.enabled + name: postgresql + repository: https://charts.jfrog.io/ + version: 10.3.18 + description: Universal Repository Manager supporting all major packaging formats, + build tools and CI servers. + digest: dc098ed79093e4ae894fdff48aaa382ca9764be14c0f2d6e98ad6953e4890ee5 + home: https://www.jfrog.com/artifactory/ + icon: file://assets/icons/artifactory-ha.png + keywords: + - artifactory + - jfrog + - devops + kubeVersion: '>= 1.19.0-0' + maintainers: + - email: installers@jfrog.com + name: Chart Maintainers at JFrog + name: artifactory-ha + sources: + - https://github.com/jfrog/charts + type: application + urls: + - assets/jfrog/artifactory-ha-107.90.14.tgz + version: 107.90.14 - annotations: artifactoryServiceVersion: 7.90.18 catalog.cattle.io/certified: partner @@ -1642,6 +1676,40 @@ entries: - assets/jfrog/artifactory-ha-107.55.14.tgz version: 107.55.14 artifactory-jcr: + - annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: JFrog Container Registry + catalog.cattle.io/kube-version: '>= 1.19.0-0' + catalog.cattle.io/release-name: artifactory-jcr + apiVersion: v2 + appVersion: 7.90.14 + created: "2024-10-09T00:35:07.404179085Z" + dependencies: + - name: artifactory + repository: file://charts/artifactory + version: 107.90.14 + description: JFrog Container Registry + digest: b629ddb62d78b9ee327db9067b1a5fd5bb71ddf7c43850929fd63de16ac4e581 + home: 
https://jfrog.com/container-registry/ + icon: file://assets/icons/artifactory-jcr.png + keywords: + - artifactory + - jfrog + - container + - registry + - devops + - jfrog-container-registry + kubeVersion: '>= 1.19.0-0' + maintainers: + - email: helm@jfrog.com + name: Chart Maintainers at JFrog + name: artifactory-jcr + sources: + - https://github.com/jfrog/charts + type: application + urls: + - assets/jfrog/artifactory-jcr-107.90.14.tgz + version: 107.90.14 - annotations: catalog.cattle.io/certified: partner catalog.cattle.io/display-name: JFrog Container Registry @@ -20578,6 +20646,35 @@ entries: - assets/trilio/k8s-triliovault-operator-3.1.1.tgz version: 3.1.1 k10: + - annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: K10 + catalog.cattle.io/kube-version: '>= 1.17.0-0' + catalog.cattle.io/release-name: k10 + apiVersion: v2 + appVersion: 7.0.11 + created: "2024-10-09T00:35:07.82628804Z" + dependencies: + - condition: grafana.enabled + name: grafana + repository: "" + version: 8.5.0 + - condition: prometheus.server.enabled + name: prometheus + repository: "" + version: 25.24.1 + description: Kasten’s K10 Data Management Platform + digest: 18e3b1b58d07b8acdb8b9fb76b7b63b02b38ac8207872db83f76b12814f77692 + home: https://kasten.io/ + icon: file://assets/icons/k10.png + kubeVersion: '>= 1.17.0-0' + maintainers: + - email: contact@kasten.io + name: kastenIO + name: k10 + urls: + - assets/kasten/k10-7.0.1101.tgz + version: 7.0.1101 - annotations: catalog.cattle.io/certified: partner catalog.cattle.io/display-name: K10 @@ -28633,6 +28730,95 @@ entries: - assets/f5/nginx-ingress-1.0.2.tgz version: 1.0.2 nri-bundle: + - annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: New Relic + catalog.cattle.io/release-name: nri-bundle + apiVersion: v2 + created: "2024-10-09T00:35:09.545137647Z" + dependencies: + - condition: infrastructure.enabled,newrelic-infrastructure.enabled + name: newrelic-infrastructure + 
repository: https://newrelic.github.io/nri-kubernetes + version: 3.34.6 + - condition: prometheus.enabled,nri-prometheus.enabled + name: nri-prometheus + repository: https://newrelic.github.io/nri-prometheus + version: 2.1.19 + - condition: newrelic-prometheus-agent.enabled + name: newrelic-prometheus-agent + repository: https://newrelic.github.io/newrelic-prometheus-configurator + version: 1.14.4 + - condition: webhook.enabled,nri-metadata-injection.enabled + name: nri-metadata-injection + repository: https://newrelic.github.io/k8s-metadata-injection + version: 4.21.2 + - condition: metrics-adapter.enabled,newrelic-k8s-metrics-adapter.enabled + name: newrelic-k8s-metrics-adapter + repository: https://newrelic.github.io/newrelic-k8s-metrics-adapter + version: 1.11.4 + - condition: ksm.enabled,kube-state-metrics.enabled + name: kube-state-metrics + repository: https://prometheus-community.github.io/helm-charts + version: 5.12.1 + - condition: kubeEvents.enabled,nri-kube-events.enabled + name: nri-kube-events + repository: https://newrelic.github.io/nri-kube-events + version: 3.10.8 + - condition: logging.enabled,newrelic-logging.enabled + name: newrelic-logging + repository: https://newrelic.github.io/helm-charts + version: 1.23.0 + - condition: newrelic-pixie.enabled + name: newrelic-pixie + repository: https://newrelic.github.io/helm-charts + version: 2.1.4 + - condition: k8s-agents-operator.enabled + name: k8s-agents-operator + repository: https://newrelic.github.io/k8s-agents-operator + version: 0.13.0 + - alias: pixie-chart + condition: pixie-chart.enabled + name: pixie-operator-chart + repository: https://pixie-operator-charts.storage.googleapis.com + version: 0.1.6 + - condition: newrelic-infra-operator.enabled + name: newrelic-infra-operator + repository: https://newrelic.github.io/newrelic-infra-operator + version: 2.11.4 + description: Groups together the individual charts for the New Relic Kubernetes + solution for a more comfortable deployment. 
+ digest: a067cc71a21b7f8be82f3f12171751746741c898bbf09b4b5c5f054e4d4fc286 + home: https://github.com/newrelic/helm-charts + icon: file://assets/icons/nri-bundle.svg + keywords: + - infrastructure + - newrelic + - monitoring + maintainers: + - name: juanjjaramillo + url: https://github.com/juanjjaramillo + - name: csongnr + url: https://github.com/csongnr + - name: dbudziwojskiNR + url: https://github.com/dbudziwojskiNR + name: nri-bundle + sources: + - https://github.com/newrelic/nri-bundle/ + - https://github.com/newrelic/nri-bundle/tree/master/charts/nri-bundle + - https://github.com/newrelic/nri-kubernetes/tree/master/charts/newrelic-infrastructure + - https://github.com/newrelic/nri-prometheus/tree/master/charts/nri-prometheus + - https://github.com/newrelic/newrelic-prometheus-configurator/tree/master/charts/newrelic-prometheus-agent + - https://github.com/newrelic/k8s-metadata-injection/tree/master/charts/nri-metadata-injection + - https://github.com/newrelic/newrelic-k8s-metrics-adapter/tree/master/charts/newrelic-k8s-metrics-adapter + - https://github.com/newrelic/nri-kube-events/tree/master/charts/nri-kube-events + - https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-logging + - https://github.com/newrelic/helm-charts/tree/master/charts/newrelic-pixie + - https://github.com/newrelic/newrelic-infra-operator/tree/master/charts/newrelic-infra-operator + - https://github.com/newrelic/k8s-agents-operator/tree/master/charts/k8s-agents-operator + urls: + - assets/new-relic/nri-bundle-5.0.94.tgz + version: 5.0.94 - annotations: catalog.cattle.io/certified: partner catalog.cattle.io/display-name: New Relic @@ -40451,6 +40637,35 @@ entries: - assets/speedscale/speedscale-operator-1.3.10.tgz version: 1.3.10 stackstate-k8s-agent: + - annotations: + catalog.cattle.io/certified: partner + catalog.cattle.io/display-name: StackState Agent + catalog.cattle.io/kube-version: '>=1.19.0-0' + catalog.cattle.io/release-name: stackstate-k8s-agent + apiVersion: 
v2 + appVersion: 3.0.0 + created: "2024-10-09T00:35:10.293415083Z" + dependencies: + - alias: httpHeaderInjectorWebhook + name: http-header-injector + repository: https://helm.stackstate.io + version: 0.0.11 + description: Helm chart for the StackState Agent. + digest: bb89a8c3005bac496a1a9076b936b314815ddc3fccbbd0767f99064fa6dc5955 + home: https://github.com/StackVista/stackstate-agent + icon: file://assets/icons/stackstate-k8s-agent.svg + keywords: + - monitoring + - observability + - stackstate + kubeVersion: '>=1.19.0-0' + maintainers: + - email: ops@stackstate.com + name: Stackstate + name: stackstate-k8s-agent + urls: + - assets/stackstate/stackstate-k8s-agent-1.0.98.tgz + version: 1.0.98 - annotations: catalog.cattle.io/certified: partner catalog.cattle.io/display-name: StackState Agent @@ -44399,4 +44614,4 @@ entries: urls: - assets/netfoundry/ziti-host-1.5.1.tgz version: 1.5.1 -generated: "2024-10-08T00:34:54.185236969Z" +generated: "2024-10-09T00:35:05.224730742Z"