Cronjob Deployment Configs (#1433)
- Telemetry cronjob
MacQSL authored Nov 27, 2024
1 parent 085199b commit 5bf290e
Showing 6 changed files with 144 additions and 2 deletions.
8 changes: 8 additions & 0 deletions api/.pipeline/config.js
@@ -71,6 +71,8 @@ const phases = {
dbName: `${dbName}`,
phase: 'dev',
changeId: deployChangeId,
telemetryCronjobSchedule: '0 0 * * *', // Daily at midnight
telemetryCronjobDisabled: !isStaticDeployment,
suffix: `-dev-${deployChangeId}`,
instance: `${name}-dev-${deployChangeId}`,
version: `${deployChangeId}-${changeId}`,
@@ -114,6 +116,8 @@ const phases = {
dbName: `${dbName}`,
phase: 'test',
changeId: deployChangeId,
telemetryCronjobSchedule: '0 0 * * *', // Daily at midnight
telemetryCronjobDisabled: !isStaticDeployment,
suffix: `-test`,
instance: `${name}-test`,
version: `${version}`,
@@ -157,6 +161,8 @@ const phases = {
dbName: `${dbName}-spi`,
phase: 'test-spi',
changeId: deployChangeId,
telemetryCronjobSchedule: '0 0 * * *', // Daily at midnight
telemetryCronjobDisabled: !isStaticDeployment,
suffix: `-test-spi`,
instance: `${name}-spi-test-spi`,
version: `${version}`,
@@ -200,6 +206,8 @@ const phases = {
dbName: `${dbName}`,
phase: 'prod',
changeId: deployChangeId,
telemetryCronjobSchedule: '0 0 * * *', // Daily at midnight
telemetryCronjobDisabled: !isStaticDeployment,
suffix: `-prod`,
instance: `${name}-prod`,
version: `${version}`,
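Note: `telemetryCronjobDisabled` feeds the CronJob's `suspend` field, so PR-based (non-static) deployments get the cronjob created but suspended; only static deployments run on the schedule. A quick way to check which deployments are suspended, assuming the `role=telemetry-cronjob` label from the template in this commit (namespace is a placeholder):

oc get cronjob -l role=telemetry-cronjob -n <namespace> \
  -o custom-columns=NAME:.metadata.name,SCHEDULE:.spec.schedule,SUSPENDED:.spec.suspend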
4 changes: 4 additions & 0 deletions api/.pipeline/lib/api.deploy.js
@@ -25,12 +25,16 @@ const apiDeploy = async (settings) => {
objects.push(
...oc.processDeploymentTemplate(`${templatesLocalBaseUrl}/api.dc.yaml`, {
param: {
NAMESPACE: phases[phase].namespace,
NAME: phases[phase].name,
SUFFIX: phases[phase].suffix,
VERSION: phases[phase].tag,
HOST: phases[phase].host,
APP_HOST: phases[phase].appHost,
CHANGE_ID: phases.build.changeId || changeId,
// Cronjobs
TELEMETRY_CRONJOB_SCHEDULE: phases[phase].telemetryCronjobSchedule,
TELEMETRY_CRONJOB_DISABLED: phases[phase].telemetryCronjobDisabled,
// Node
NODE_ENV: phases[phase].nodeEnv,
NODE_OPTIONS: phases[phase].nodeOptions,
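Note: these parameters are consumed by `oc.processDeploymentTemplate`, which processes the same OpenShift template. A rough manual equivalent, for illustration only (the `-dev-123` suffix is hypothetical and the template's other required parameters, e.g. DB_SERVICE_NAME, are omitted here):

oc process -f api/.pipeline/templates/api.dc.yaml \
  -p SUFFIX=-dev-123 \
  -p TELEMETRY_CRONJOB_SCHEDULE='0 0 * * *' \
  -p TELEMETRY_CRONJOB_DISABLED=false \
  | oc apply -f -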
2 changes: 1 addition & 1 deletion api/.pipeline/lib/clean.js
@@ -67,7 +67,7 @@ const clean = (settings) => {
namespace: phaseObj.namespace
});

oc.raw('delete', ['all,pvc,secrets,Secrets,secret,configmap,endpoints,Endpoints'], {
oc.raw('delete', ['all,pvc,secrets,Secrets,secret,configmap,endpoints,Endpoints,cronjobs,Cronjobs'], {
selector: `app=${phaseObj.instance},env-id=${phaseObj.changeId},!shared,github-repo=${oc.git.repository},github-owner=${oc.git.owner}`,
wait: 'true',
namespace: phaseObj.namespace
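Note: cronjobs are now included in the resource types removed when a PR deployment is cleaned up. The equivalent CLI call looks roughly like the following (instance, change id, and namespace are placeholders; the selector mirrors the one in the code above):

oc delete all,pvc,secret,configmap,endpoints,cronjob \
  --selector 'app=<instance>,env-id=<change-id>,!shared' \
  --wait=true -n <namespace>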
6 changes: 6 additions & 0 deletions api/.pipeline/templates/README.md
@@ -10,3 +10,9 @@ The pipeline code builds and deploys all pods/images/storage/etc needed to deplo
- Create ObjectStore Secret

The included templates under `prereqs` can be imported via the "Import YAML" page in OpenShift.

## Telemetry Cronjob

To manually trigger the cronjob:

- `oc create job --from=cronjob/biohubbc-telemetry-cronjob-<suffix> <name of job>`
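- For example, with a hypothetical `-dev-123` suffix and an arbitrary job name: `oc create job --from=cronjob/biohubbc-telemetry-cronjob-dev-123 telemetry-manual-run -n <namespace>`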
118 changes: 117 additions & 1 deletion api/.pipeline/templates/api.dc.yaml
@@ -5,6 +5,12 @@ metadata:
labels:
build: biohubbc-api
parameters:
- name: NAMESPACE
description: OpenShift namespace name
value: ''
- name: BASE_IMAGE_REGISTRY_URL
description: The base image registry URL
value: image-registry.openshift-image-registry.svc:5000
- name: NAME
value: biohubbc-api
- name: SUFFIX
@@ -76,6 +82,10 @@ parameters:
- name: DB_SERVICE_NAME
description: 'Database service name associated with deployment'
required: true
- name: DB_PORT
description: 'Database port'
required: true
value: '5432'
# Keycloak
- name: KEYCLOAK_HOST
description: Keycloak login URL
@@ -195,6 +205,22 @@ parameters:
value: '1'
- name: REPLICAS_MAX
value: '1'
# Telemetry
- name: TELEMETRY_CRONJOB_SCHEDULE
description: The schedule for the telemetry cronjob
value: '0 0 * * *' # Daily at midnight
- name: TELEMETRY_CRONJOB_DISABLED
description: Boolean flag to suspend the cronjob; only static deployments should run on a schedule.
value: 'true'
- name: TELEMETRY_SECRET
description: The name of the OpenShift biohubbc telemetry secret
value: biohubbc-telemetry
- name: LOTEK_API_HOST
description: The host URL for Lotek webservice API
value: https://webservice.lotek.com
- name: VECTRONIC_API_HOST
description: The host URL for Vectronic webservice API
value: https://api.vectronic-wildlife.com
objects:
- kind: ImageStream
apiVersion: image.openshift.io/v1
@@ -316,7 +342,7 @@ objects:
key: database-name
name: ${DB_SERVICE_NAME}
- name: DB_PORT
value: '5432'
value: ${DB_PORT}
# Keycloak
- name: KEYCLOAK_HOST
value: ${KEYCLOAK_HOST}
@@ -537,6 +563,96 @@ objects:
status:
ingress: null

- kind: CronJob
apiVersion: batch/v1
metadata:
name: biohubbc-telemetry-cronjob${SUFFIX}
labels:
role: telemetry-cronjob
spec:
schedule: ${TELEMETRY_CRONJOB_SCHEDULE}
suspend: ${{TELEMETRY_CRONJOB_DISABLED}}
concurrencyPolicy: 'Forbid'
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
jobTemplate:
spec:
backoffLimit: 0
template:
spec:
containers:
- name: api
image: ${BASE_IMAGE_REGISTRY_URL}/${NAMESPACE}/${NAME}:${VERSION}
imagePullPolicy: Always
restartPolicy: 'Never'
terminationGracePeriodSeconds: 30
activeDeadlineSeconds: 220
env:
- name: NODE_ENV
value: ${NODE_ENV}
- name: NODE_OPTIONS
value: ${NODE_OPTIONS}
# Database
- name: TZ
value: ${TZ}
- name: DB_HOST
value: ${DB_SERVICE_NAME}
- name: DB_USER_API
valueFrom:
secretKeyRef:
key: database-user-api
name: ${DB_SERVICE_NAME}
- name: DB_USER_API_PASS
valueFrom:
secretKeyRef:
key: database-user-api-password
name: ${DB_SERVICE_NAME}
- name: DB_DATABASE
valueFrom:
secretKeyRef:
key: database-name
name: ${DB_SERVICE_NAME}
- name: DB_PORT
value: ${DB_PORT}
# Telemetry
- name: LOTEK_API_HOST
value: ${LOTEK_API_HOST}
- name: LOTEK_ACCOUNT_USERNAME
valueFrom:
secretKeyRef:
key: lotek_account_username
name: ${TELEMETRY_SECRET}
- name: LOTEK_ACCOUNT_PASSWORD
valueFrom:
secretKeyRef:
key: lotek_account_password
name: ${TELEMETRY_SECRET}
- name: VECTRONIC_API_HOST
value: ${VECTRONIC_API_HOST}
# Logging
- name: LOG_LEVEL
value: ${LOG_LEVEL}
- name: LOG_LEVEL_FILE
value: data/cronjob-logs
- name: LOG_FILE_DIR
value: ${LOG_FILE_DIR}
- name: LOG_FILE_NAME
value: sims-telemetry-cronjob-%DATE%.log
- name: LOG_FILE_DATE_PATTERN
value: ${LOG_FILE_DATE_PATTERN}
- name: LOG_FILE_MAX_SIZE
value: ${LOG_FILE_MAX_SIZE}
- name: LOG_FILE_MAX_FILES
value: ${LOG_FILE_MAX_FILES}
# Api Validation
- name: API_RESPONSE_VALIDATION_ENABLED
value: ${API_RESPONSE_VALIDATION_ENABLED}
- name: DATABASE_RESPONSE_VALIDATION_ENABLED
value: ${DATABASE_RESPONSE_VALIDATION_ENABLED}
command: ["npm", "run", "telemetry-cronjob", "--", "--batchSize 1000", "--concurrently 100"]
restartPolicy: Never


# Disable the HPA for now, as it is preferable to run an exact number of pods (e.g. min:2, max:2)
# - kind: HorizontalPodAutoscaler
# apiVersion: autoscaling/v2
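Note on the template syntax above: `${{TELEMETRY_CRONJOB_DISABLED}}` uses the OpenShift template's non-string substitution, so `suspend` is rendered as a real boolean rather than the string 'true'. To confirm the rendered value on a deployed cronjob (suffix is a placeholder):

oc get cronjob biohubbc-telemetry-cronjob<suffix> -o jsonpath='{.spec.suspend}'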
8 changes: 8 additions & 0 deletions api/.pipeline/templates/prereqs/biohubbc-telemetry.yaml
@@ -0,0 +1,8 @@
kind: Secret
apiVersion: v1
metadata:
name: biohubbc-telemetry
data:
lotek_account_username: <fill in values>
lotek_account_password: <fill in values>
type: Opaque
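Note: the key names must match what the CronJob's `secretKeyRef` entries read (`lotek_account_username` / `lotek_account_password`), and values under `data` must be base64 encoded; use `stringData` instead for plain-text values. As an alternative sketch, the secret can be created from the CLI (values and namespace are placeholders):

oc create secret generic biohubbc-telemetry \
  --from-literal=lotek_account_username=<username> \
  --from-literal=lotek_account_password=<password> \
  -n <namespace>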
