This is a recipe for deploying a Symfony application on the Google Kubernetes Engine. It includes all the configuration files and build scripts.
It is based on the implementation notes of a recent project. I tidied them up so that they could be shared. Hope this helps.
How to use it:

- Download the repo
- Prepare your local and remote infrastructure as explained on this page
- Install Composer
- Customize `sf/composer.json` if needed and run `composer install`
- Adapt the `conf/env` files to your environments
- Adapt the `docs/README.config.adoc` files to your project configuration
- Start developing your Symfony application
Key principles underlying this design:

- Adhere to the 12-factor principles
- Identical application code deployed in local and remote environments
- Near-identical infrastructure code deployed in local and remote environments
- Ready to deploy on GKE (Google Kubernetes Engine)
- Code repository in a private GitHub repo (the code is easy to adapt if your code sits somewhere else)
- A web reverse proxy deployed in the GKE cluster, behind a service of type LoadBalancer (see this recipe for deploying an Apache+Letsencrypt web server)
Kubernetes (local environment):

- Kubernetes cluster on Minikube, with a VirtualBox VM back-end
- Includes its own Docker environment
- The data volumes on the host are mounted at the `/hosthome` VM location, where they are visible to the Kubernetes cluster. From there they can be mounted as persistent volumes onto the container
- Connects to a MySQL database on the host, external to the cluster
The application manages two types of content files:

- Public files, served directly by the web proxy server
- Private files, subject to access control, served by the Symfony application
- File storage abstraction using Flysystem, in `local` mode
Kubernetes (remote environment):

- GKE (Google Kubernetes Engine) Autopilot cluster
- GCP Docker image registry
- Permanent storage on Google Cloud Storage buckets: public buckets for public files, private buckets for private files
- Connects to a Google Cloud SQL/MySQL database
Application content files:

- Public files are served directly by the web proxy server, proxying to the Cloud Storage buckets API (public buckets)
- Private files are served by the web application from private buckets
- File storage abstraction using Flysystem, in `gcloud` (Cloud Storage) mode
We use Cloud Shell to build and deploy releases, with PHP Deployer scripts that:

- Pull the project files from the GitHub repo
- Build the Docker images and push them to the GCP registry
- Run an init container that pulls the Symfony files from GitHub and builds the Symfony application, calling `composer install`
- Update the GKE deployment manifest with the new image tag
With this recipe, we have identical Symfony application code and identical Kubernetes service, deployment and cronjob definitions across all environments, local as well as remote. All differences between environments are captured in project-level `.env` configuration files.

The architecture suits a relatively simple web application with relatively modest traffic and SLAs, or an MVP. Its limitations, deliberate for a simple project, can be extended as needed, as follows.

Here we assume that an Apache server in reverse proxy mode is deployed on a free-tier Compute Engine VM. For bigger sites one would typically use HTTPS Load Balancers, which are expensive.
Limitation | Rationale | How to extend |
---|---|---|
Sessions stored in the container | Single pod, so no need to implement session affinity | Implement session affinity in the load balancer or reverse proxy |
Symfony logs stored locally | Monolog configured to send emails at a certain alert level | Send Symfony logs to Stackdriver |
Partial CI/CD | Deployment by manual execution of a Deployer script in Cloud Shell | Deploy Jenkins on GCP |
No test automation in the deployment | Functional tests are executed locally prior to committing | Add test tasks to the Deployer script |
Single pod used for web and batch | Load on the web pod can accommodate batch jobs | Deploy a separate pod dedicated to batch jobs |
Symfony Mailer does not yet support multiple asynchronous transports | Limitation of the current Mailer version; expecting this to be fixed soon (Issue) | - |
Environments are defined by two meta-parameters:

- `HOST_ENV`:
  - `local` (developer’s laptop)
  - `remote` (GCP)
- `APP_ENV`:
  - any name: dev, master, prod, oat, etc.
  - avoid reusing the same name in both a local and a remote environment, since Symfony will use override configuration files based on the APP_ENV name; those overrides are likely to differ between a local and a remote deployment

Symfony configuration `.env` files are named using these two meta-parameters, as `.env.{APP_ENV}.{HOST_ENV}`, for example `.env.oat.remote`.
The `.env` files contain all environment parameters needed by either Docker, PHP Deployer or Symfony. They contain all environment-specific parameters except secrets. Symfony falls back to OS environment variables when it can’t find an environment variable in the Symfony `.env` file.
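For illustration, a local dev `.env` might contain entries like these (values are placeholders; the variable names appear elsewhere in this recipe):

# .env.dev.local (illustrative values)
APP_ENV=dev
STORAGE_ADAPTER=local
GCS_PUBLIC_BUCKET=cdn-test.myproject.com
GCS_PRIVATE_BUCKET=app-test.myproject.com
CDN_URL=http://192.168.99.100:32745/cdn
TRUSTED_PROXIES=127.0.0.1,10.0.0.0/8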
In the local environment, the Symfony working directory is mounted externally on the container, so code changes are visible immediately.

To switch between environments in the local hosting:

- Copy the relevant `.env` file from `conf/env` to the Symfony root folder `sf`
- Check out the master or dev branch (or a feature branch as the case may be)

(The `.env` file is built by the build process, not committed to source control.)
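For example, to switch the local deployment to the dev environment (a minimal sketch, assuming the naming convention above):

cp conf/env/.env.dev.local sf/.env
git checkout dev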
In the remote (GCP) environments, the build process selects the relevant Symfony `.env` file and ADDs it to the Docker container.
The project folder structure is as follows:

Folder | Contents | Comments |
---|---|---|
`.` | Project root, git root | |
`assets` | Assets to build with Webpack Encore | css, js |
`build` | Deployment build artefacts (on local deployments) | Emptied at the beginning of a build process. Gitignored |
`conf` | Project configuration files | |
`conf/deployer` | PHP Deployer scripts, Deployer hosts configuration | |
`conf/docker` | Docker image templates | Web and batch components |
`conf/env` | Environment variables | Depend on <HOST_ENV> and <APP_ENV> |
`conf/infra` | Container configuration file templates | Apache, PHP, msmtp |
`conf/k8s` | Kubernetes manifest templates: service, deployment, cronjob | Depend on <HOST_ENV> |
`docs` | Project documentation | |
`sf` | Symfony project root folder | |
`vendor` | PHP libraries used by Deployer | Managed by Composer, distinct from the PHP libraries of the Symfony application, which are managed under the `sf` folder |
Deployer is a simple deployment tool written in PHP. It is open source and free. It contains pre-defined recipes designed for traditional FTP deployments; those are not useful in a Kubernetes context, so we wrote new scripts from scratch.
We use Deployer scripts to:

- Generate service/cronjob manifests (usually done only once)
- Generate deployment manifests (usually done only once)
- Deploy a new container version (done at every release, only remotely)

We run Deployer scripts on:

- the local laptop for the local environment
- Cloud Shell for the GCP environments

The scripts take the following parameters:

- `APP_ENV`
- `TAG`:
  - In remote environments, the git tag version to pull from GitHub and deploy
  - In the local environment: `current`. The container needs rebuilding only infrequently, as it mounts the Symfony working directory, obscuring the ADD directive in the Dockerfile, and thus serves whatever is currently checked out in the working directory. Note that we use `current` instead of `latest`, as `latest` forces a rebuild of the container, which we don’t want locally.
Outline of the remote build process:

- Execute an initialisation container which:
  - Checks out the tagged version from GitHub (into a detached branch)
  - Copies the relevant Symfony application files from source
  - Copies the relevant `.env.<APP_ENV>.remote` file to both `build/.env` and `sf/.env`
  - Warms up the Symfony application cache, which is needed by the PHP OPcache directive and must exist at the time the web container starts
- Build the application container:
  - Pass `build/.env` as environment parameters
  - ADD the Symfony application and cache files from the init container
  - COPY the infrastructure configuration templates
  - RUN `dockerize` on the infrastructure configuration templates (see next section)
  - `docker push` the new container to the GCP container registry
- Build a deployment manifest to `build/deployment.yml`; this manifest contains the new container tag
- Apply the updated Kubernetes deployment manifest

Notes:

- In the web container, the Apache user (www-data) has user:group id 1000:33, whereas in the init container it has user:group id 33:33. This explains the chown commands in the initialisation container
- Another approach would be to use the GCP-native Cloud Build service (but this is less portable)
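For illustration, the two stages could be sketched as a multi-stage Dockerfile; the paths, base images and SSH-key handling are simplified assumptions, not the project’s actual files:

# Stage 1: init container - check out the tagged version and build the Symfony application
FROM composer:2 AS init
ARG TAG
# (SSH key setup for the private GitHub repo omitted for brevity)
RUN git clone --branch "${TAG}" --depth 1 [email protected]:myorganization/myproject.git /app
WORKDIR /app/sf
RUN composer install --no-dev --optimize-autoloader && php bin/console cache:warmup

# Stage 2: web container - ADD application and cache files from the init stage
FROM php:7.3-apache
COPY --from=init /app/sf /var/www/sf
COPY conf/infra/ /templates/
# RUN dockerize on the templates here (see next section)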
The following container infrastructure files are templated. Running `dockerize` interpolates placeholders in the templates with variables from the Docker build `.env` file.
Template | Target file in container | Contents |
---|---|---|
 | | msmtp logrotate configuration |
`conf/infra/msmtprc.tpl` | `/etc/msmtprc` | msmtp configuration |
 | | php.ini for CLI and the Apache PHP module |
 | | Location of the SSH key to the GitHub private repo. Used by the webinit initialisation container to build the Symfony application files |
 | | Single virtual host for the web application |
Notes:

- For more info on dockerize, see the dockerize project page.
- See also `conf/docker/Dockerfile.PHP.example` for a typical Docker RUN command with commonly used PHP libraries. Adapt as needed.
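As an illustration of the mechanism, a placeholder in a template and the corresponding dockerize call could look like this (the `SMTP_HOST` variable name is an example, not one of the project’s actual values):

# Excerpt from a template: the placeholder references a variable from the build .env file
#   host {{ .Env.SMTP_HOST }}
# Dockerfile step interpolating the template into its target location:
RUN dockerize -template /templates/msmtprc.tpl:/etc/msmtprc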
We use two types of secrets:

- Kubernetes secrets, mounted onto containers:
  - `MAILER_PASSWORD`: SMTP account password
  - `API_KEY`: API key used by the cron service
- Symfony application secrets, packed into a single `SYMFONY_DECRYPTION_SECRET` Kubernetes secret:
  - `APP_SECRET`: encryption key
  - `DB_PASSWORD`: MySQL account password
  - `MAILER_PASSWORD`: SMTP account password

The `MAILER_PASSWORD` secret, although used by the Symfony application, is also needed outside the Symfony environments, to send emails via the batch cron container.

With this container build process, secrets only exist as container OS environment variables.
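For example, a deployment manifest can expose one of these secrets as a container environment variable (standard Kubernetes syntax; the secret name follows the convention shown later in this recipe):

env:
  - name: MAILER_PASSWORD
    valueFrom:
      secretKeyRef:
        name: myproject-dev-sf-secrets
        key: MAILER_PASSWORD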
In order for the application to correctly read the headers forwarded by the web reverse proxy, configure the trusted proxies in the framework configuration:

framework:
    ...
    trusted_proxies: '%env(TRUSTED_PROXIES)%'
    trusted_headers: ['x-forwarded-for', 'x-forwarded-host']

See the Symfony documentation on trusted proxies for reference.
A Symfony application is typically used in two modes: online requests and batch jobs. For this recipe we use a single container to serve both.

We define batch jobs as Kubernetes cronjobs. These jobs do the following:

- Instantiate a simple Alpine/curl container in the cluster
- The container command sends a curl GET request to the application pod inside the cluster
- The request is handled by a normal Symfony controller
- The job completes and logs its status depending on the HTTP response (200 or 500)

Note that in a traditional, non-containerized Symfony application we would implement Console Commands, triggered by command-line php calls, scheduled by a cron job. We can’t do this with Kubernetes, since a Kubernetes cronjob cannot execute remote shell commands on the container and is limited to sending HTTP requests.

A simpler alternative is to define cron jobs inside your web proxy container, calling the application containers on the same API endpoints.
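A minimal sketch of such a cronjob manifest (name, schedule and endpoint are illustrative; older clusters use `batch/v1beta1` instead of `batch/v1`):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: batch-mailer
spec:
  schedule: "*/20 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: curl
              image: k8s-cronjob:current
              command: ["curl", "-fsS", "http://myproject-web/batch/send-emails"]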
To send emails, we use the following components:

- The Symfony Mailer library to create emails
- The Symfony Messenger component to queue emails in the database
- The msmtp MTA (message transfer agent) to send emails
- Kubernetes cronjobs to process the Messenger queues

The new Mailer library replaces the deprecated SwiftMailer library and is now the recommended library for new projects.

For transport, the Symfony application does not establish an SMTP connection to the remote SMTP server, but instead hands messages to a local MTA running in the container. We use msmtp as the MTA. msmtp is a popular successor to sendmail, easier to configure, with a sendmail-compatible API. The benefits of using an MTA are:

- The messages are sent to the remote SMTP server by the MTA background process, not by PHP scripts
- The MTA handles exceptions well, such as the SMTP server being unavailable or returning errors

Here we configure two asynchronous transports:

- `realtime_mailer` for high-priority emails (e.g. confirmation after registration)
- `batch_mailer` for low-priority emails (e.g. batch newsletters)

These transports are configured in Sendmail mode.
The process of sending an email is the following:

- A Symfony controller action (online or batch) generates an email and indicates the transport method (high or low priority)
- The Messenger component puts the message in the corresponding queue in the database
- The Messenger component processes these two queues in batch mode and hands the emails to the msmtp MTA (which runs on the same container):
  - The high-priority batch job runs every 2 minutes with a time-out of 100s. If not all queued emails are processed, the next run, starting 20s later, will pick them up
  - The low-priority batch job runs every 20 minutes
- The MTA sends the emails to the remote SMTP server roughly in the order it received them

See the previous section for how cronjobs are implemented in Kubernetes.
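For reference, the equivalent queue-consuming call, as issued with the standard Symfony Messenger CLI (the controller triggered by the cronjob does the same work):

# Consume the high-priority queue for at most 100 seconds
php bin/console messenger:consume realtime_mailer --time-limit=100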
In the messenger configuration file we define the queues where emails are put. Here we use the application database to persist the queues (other methods are available, notably Redis). We pass the `queue_name` argument to indicate the queue.

We also define a dead-letter queue where failed messages are logged. Failures here will be rare, as it is the MTA, not the Symfony application, that is likely to fail while sending emails.
framework:
messenger:
# Uncomment this (and the failed transport below) to send failed messages to this transport for later handling.
failure_transport: failed
transports:
# https://symfony.com/doc/current/messenger.html#transport-configuration
failed: 'doctrine://default?queue_name=failed'
# sync: 'sync://'
batch_mailer: 'doctrine://default?queue_name=batch_mailer'
realtime_mailer: 'doctrine://default?queue_name=realtime_mailer'
routing:
# Route your messages to the transports
'Symfony\Component\Mailer\Messenger\SendEmailMessage': realtime_mailer
In the mailer configuration file we define the action to take when actually sending an email. Usually this is either an SMTP DSN string or Sendmail. Here we use Sendmail: the `native://default` option uses the `sendmail_path` setting of php.ini, itself defined as `/usr/bin/msmtp -t -v`. See the infrastructure configuration files.
framework:
mailer:
transports:
#main: '%env(MAILER_DSN)%'
realtime_mailer: 'native://default'
Mailer is a new library and currently has some limitations that we expect to be remedied soon:

- No support for multiple async transports (see the issue referenced above)
The msmtp configuration is defined in the `conf/infra/msmtprc.tpl` template file. It contains:

- The SMTP account details
- The password, which is not stored in the clear but is defined by a shell command: `"echo $MAILER_PASSWORD"`
- The location of the msmtp log file
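A sketch of such a template (the account values and `SMTP_*` placeholder names are assumptions; `passwordeval` is the msmtp directive that runs a shell command to obtain the password):

account default
host {{ .Env.SMTP_HOST }}
port 587
auth on
tls on
user {{ .Env.SMTP_USER }}
passwordeval "echo $MAILER_PASSWORD"
logfile /var/log/msmtp.log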
For application user file management we use the Flysystem library, which provides filesystem abstraction across a number of storage mechanisms.

- The local environment uses the `local` adapter (local file system)
- The GCP environment uses the `gcloud` adapter (Cloud Storage)

The application code to read/write files is identical in all environments; only the environment configuration changes. The storage mechanics are abstracted from the application, which uses `get` and `put` methods on file paths in a virtual file system.

Typically the application needs to manage:

- Public files, served directly by the web proxy without access control
- Private files, subject to access control and served by the Symfony application
- Each for local and remote file storage

Hence four adapter configurations:
flysystem:
storages:
storage.private.local:
adapter: 'local'
options:
directory: '%kernel.project_dir%/../local-storage/%env(GCS_PRIVATE_BUCKET)%'
storage.public.local:
adapter: 'local'
options:
directory: '%kernel.project_dir%/../local-storage/%env(GCS_PUBLIC_BUCKET)%'
storage.private.gcloud:
adapter: 'gcloud'
options:
client: 'Google\Cloud\Storage\StorageClient' # The service ID of the Google\Cloud\Storage\StorageClient instance
bucket: '%env(GCS_PRIVATE_BUCKET)%'
prefix: ''
api_url: 'https://storage.googleapis.com'
storage.public.gcloud:
adapter: 'gcloud'
options:
client: 'Google\Cloud\Storage\StorageClient' # The service ID of the Google\Cloud\Storage\StorageClient instance
bucket: '%env(GCS_PUBLIC_BUCKET)%'
prefix: ''
api_url: 'https://storage.googleapis.com'
# Aliases based on environment variable
storage.private:
adapter: 'lazy'
options:
source: 'storage.private.%env(STORAGE_ADAPTER)%'
storage.public:
adapter: 'lazy'
options:
source: 'storage.public.%env(STORAGE_ADAPTER)%'
Thus Symfony creates four "real" services (`flysystem.storage.private.local`, etc.) corresponding to these four adapters. We do not use these services directly, but rather the dynamic "alias" (lazy) services `storage.private` and `storage.public`, which depend on the environment.

To use these services in a controller:
use League\Flysystem\FilesystemInterface;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Request;

class MyController extends AbstractController
{
    /** @var FilesystemInterface $storagePublic Public storage adapter */
    private $storagePublic;

    /** @var FilesystemInterface $storagePrivate Private storage adapter */
    private $storagePrivate;

    public function __construct(
        FilesystemInterface $storagePublic,
        FilesystemInterface $storagePrivate
    ) {
        $this->storagePublic = $storagePublic;
        $this->storagePrivate = $storagePrivate;
    }

    public function myAction(Request $request)
    {
        $this->storagePrivate->put($file_path, $content);
        // ...
    }
}
On local hosting:

- External persistent folders on the host are mounted on the Minikube cluster, and in turn mounted as persistent volumes on the container
- The root of the virtual file system is the mount point of the persistent volume in the container
- The application reads/writes to these folders using the `local` adapter

On GKE hosting:

- We use Cloud Storage buckets for storage
- The root of the virtual file system, as seen from the application, is the bucket
- The application reads/writes to these buckets using the `gcloud` adapter, which uses API calls to Cloud Storage
- The buckets must be configured for ACL access, as the Symfony application uses a GCP service account to access the buckets
The `CDN_URL` environment variable is used by the application to create URLs to public assets that are served directly by the web proxy, outside the application.
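For example, in a Twig template (assuming `CDN_URL` is exposed to Twig as a `cdn_url` global, which is an assumption of this sketch):

<img src="{{ cdn_url }}/images/logo.png">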
In the local environment we instantiate a Kubernetes engine using the Minikube package:

- Install kubectl
- Install minikube via direct download
- Bind-mount the host folders used to persist application user data to `/home`, which is accessible from within the Minikube cluster as `/hosthome`
- Increase the default CPU allocation of the cluster VM from 2 to 4 CPUs:

minikube delete
minikube config set cpus 4
minikube start

sudo mount --bind /opt/data/storage-buckets /home/storage-buckets \
  && sudo mount --bind /opt/data/projects/myproject /home/myproject \
  && minikube start --driver=virtualbox --cpus 4 \
  && minikube tunnel
Then optionally open the minikube dashboard in a browser:
# Open the minikube dashboard in a browser
minikube dashboard
The dashboard is very handy.
On Linux, from within the Minikube cluster, the host does not have a DNS name; this is available on macOS hosts (as `host.docker.internal`), and there is a pull request to bring it to Linux. You have to use the host IP address `10.0.2.2` instead.
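For example, a local `.env` entry pointing Doctrine at the host MySQL server would use that address (credentials are placeholders):

DATABASE_URL=mysql://myproject_dev:[email protected]:3306/myproject_dev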
Before building containers (`docker build`), ensure you are in the correct context. Minikube has its own Docker engine running inside its VirtualBox VM, distinct from that of the laptop host.
# switch to the Minikube VM context (must be run in each new terminal session)
eval $(minikube docker-env)
# switch back to local Docker context
eval $(minikube docker-env -u)
Notes:

- I tried to use Minikube with `--vm-driver=none`, so that it would use the host Docker engine, but it didn’t work and probably never will
- The Minikube cluster node is visible on the host at http://192.168.99.100:31645/ (the port is randomly assigned at minikube startup)
You will be switching between the local and GKE Kubernetes contexts. Ensure you are in the correct context before firing `kubectl apply` commands.
kubectl config get-contexts
# output:
CURRENT NAME CLUSTER
* gke_myproject-123456_us-central1_myproject gke_myproject-123456_us-central1_myproject
minikube minikube
# To switch to another context:
kubectl config use-context gke_myproject-123456_us-central1_myproject
# or
kubectl config use-context minikube
Set project defaults:
gcloud config set project myproject-123456
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-c
Generate SSH keys and store them locally in `~/.ssh/`. Copy the keys to your Cloud Shell `~/.ssh/` folder.
We use Cloud Shell as the build and deployment environment:
# set project
gcloud config set project myproject-123456
# clone the project repo:
git clone [email protected]:myorganization/myproject.git
# install the Deployer vendor libraries
cd myproject && composer install
To avoid having to reinstall Deployer at every new session, add the following lines to your Cloud Shell customize_environment file:

#!/bin/sh
curl -LO https://deployer.org/deployer.phar
sudo mv deployer.phar /usr/local/bin/dep
sudo chmod +x /usr/local/bin/dep
It is useful to SCP to Cloud Shell. Note that paths must be absolute. Use the `--recurse` flag for recursive copying.
# To copy a remote directory to your local machine:
gcloud alpha cloud-shell scp \
cloudshell:~/REMOTE-DIR \
localhost:~/LOCAL-DIR
# Conversely:
gcloud alpha cloud-shell scp \
localhost:~/LOCAL-DIR \
cloudshell:~/REMOTE-DIR
sudo add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt install php7.3 php7.3-cli php7.3-mbstring php7.3-curl php7.3-xml php7.3-zip
sudo update-alternatives --set php /usr/bin/php7.3

TO DO: replace the PHP CLI install with a Docker container… nicer and cleaner
Create a new SSH key labelled "myproject-vm" locally. Register it on GitHub. Also create a corresponding config file:
Host github.com
User git
Hostname github.com
PreferredAuthentications publickey
IdentityFile ~/.ssh/myproject-vm_rsa
It is good practice to use bucket names that are domain names associated with your project, as this guarantees global uniqueness. This requires domain ownership verification:

- In the Google Search Console, add the property `myproject.com`
- In your domain name DNS manager, add a TXT record with the provided text
Create the following buckets:

- `app.myproject.com`: private data, live site
- `app-test.myproject.com`: private data, test site
- `cdn.myproject.com`: public data, live site
- `cdn-test.myproject.com`: public data, test site
Set the bucket access control policy to ACLs, since the PHP Flysystem API will use a service account to access the Cloud Storage API.
To make buckets public:
gsutil iam ch allUsers:objectViewer gs://cdn-test.myproject.com
gsutil iam ch allUsers:objectViewer gs://cdn.myproject.com
Using Chrome, upload folders/files via the Cloud Storage console, or use the `gsutil cp` command.
Example of commands to copy files at the command line from local and remote environments:
# local to remote
gsutil cp * gs://cdn.myproject.com/dir
# remote to local
gsutil cp gs://cdn.myproject.com/dir/* .
# remote to remote
gsutil cp gs://cdn.myproject.com/dir/* gs://cdn-test.myproject.com/dir
On the Gcloud SQL Console, create a DB instance called `myproject-db` (MySQL 5.7).

Create the database:

- Character set/collation ⇒ `utf8mb4`/`utf8mb4_unicode_ci`
- Connectivity: Private IP

Don’t use the collation recommended for MySQL 8.0, `utf8mb4_0900_ai_ci`, as it is not supported on MySQL 5.7.
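Equivalently, at a MySQL prompt (the database name is an example):

CREATE DATABASE myproject_dev CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;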
Note down the DB instance IP address (`10.1.2.3`). You will use it in your `.env` configuration files.
Temporarily instantiate a Compute Engine VM in the same VPC. Install the mysql CLI: `apt-get update && apt-get install -y mysql-client-5.7`
# root user - enter password at prompt
mysql -u root -p -h 10.1.2.3
# application user - enter password at prompt
mysql -u myproject_dev -p -h 10.1.2.3
Note: you can’t connect from the Cloud Shell, as it is outside the project VPC.
On the Gcloud SQL Console > Users, create MySQL user accounts:

- `myproject_dev`
- `myproject_master`

Allow access from any host (`%`).
Apply GRANT commands as required by your application. A typical one would be:
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, EXECUTE,
CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER, LOCK TABLES
ON `<DB>`.* TO '<USER>'@'%';
FLUSH PRIVILEGES;
Create a `myproject-dev` Cloud IAM service account for the application. Add the roles:

- Storage > Storage Object Admin
- Cloud SQL > Cloud SQL Client

The service account is named [email protected].

Create a JSON key associated with this account ⇒ `myproject-123456-abcdef123456.json`.
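This can also be done from the command line:

gcloud iam service-accounts keys create myproject-123456-abcdef123456.json \
    --iam-account [email protected]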
To import a database, instantiate a temporary VM. All tables must be in the InnoDB format.

- Export the DB in SQL format with Adminer or phpMyAdmin
- SCP the SQL file to the VM:

gcloud compute scp \
    ~/myproject_dev.sql \
    temp-vm:/tmp

- Import the DB:

mysql -u root -p -h 10.1.2.3 myproject_dev < /tmp/myproject_dev.sql
The Container Registry is deprecated; use the Artifact Registry instead:

- In the Console > Artifact Registry, create a repo named `myproject-123456`
- In Cloud Shell, run `gcloud auth configure-docker us-central1-docker.pkg.dev`
- Assign the role "Artifact Registry Writer" to your service account
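Note that Artifact Registry image paths use the `REGION-docker.pkg.dev/PROJECT/REPO/IMAGE` form rather than `gcr.io/PROJECT/IMAGE`; for example, the cronjob image push shown later would become:

docker tag k8s-cronjob:current us-central1-docker.pkg.dev/myproject-123456/myproject-123456/k8s-cronjob:current
docker push us-central1-docker.pkg.dev/myproject-123456/myproject-123456/k8s-cronjob:current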
Creating the cluster automatically creates:

- A `default` namespace
- A Kubernetes service account (`kubectl get serviceaccount --namespace default`). At this stage this service account controls access only within the cluster.
The Kubernetes service account needs to access Google resources, so we bind it to the application Google service account previously created.
gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:myproject-123456.svc.id.goog[default/default]" \
[email protected]
Add corresponding annotation to the Kubernetes service account:
kubectl annotate serviceaccount \
--namespace default \
default \
iam.gke.io/gcp-service-account=myproject-123456-dev@myproject-123456.iam.gserviceaccount.com
Verify that the service account is configured correctly by running a test container provided by Google:
kubectl run -it \
--generator=run-pod/v1 \
--image google/cloud-sdk \
--serviceaccount default \
--namespace default \
workload-identity-test
gcloud auth list
This should display a single Google service account, the one bound earlier. This is the service account the pod will use to access GCP services. Once done, delete the `workload-identity-test` pod.
Create your secrets for each environment in a safe location outside the project directory.
Naming convention: secrets.<HOST_ENV>.<APP_ENV>.yml
Those are in the form:
apiVersion: v1
kind: Secret
metadata:
name: myproject-{APP_ENV}-sf-secrets
type: Opaque
data:
# php -r 'echo base64_encode(base64_encode(require "config/secrets/dev/dev.decrypt.private.php"));'
SYMFONY_DECRYPTION_SECRET: YWJjZGVmZ2h1aWRVQUlQR0RBWkVQVUlJVUdQ
# echo -n 'secret1' | base64
API_KEY: c2VjcmV0MQ==
# echo -n 'secret2' | base64
MAILER_PASSWORD: c2VjcmV0Mg==
Deploy Kubernetes secrets locally:
cd <secrets-dir>
kubectl config use-context minikube
kubectl apply -f secrets.local.dev.yml
kubectl apply -f secrets.local.master.yml
To execute cron jobs we instantiate a very simple Docker Alpine image with the curl library.
cd myproject
# Set Docker and Kubernetes contexts to Minikube
eval $(minikube docker-env)
# Build the Docker cronjob image
docker build -f build/Dockerfile.cronjob -t k8s-cronjob:current .
We build service manifests with PHP Deployer and deploy them using kubectl. This is usually done only once.
cd myproject
# Set Docker and Kubernetes contexts to Minikube
kubectl config use-context minikube
# Deploy services (dev environment)
php vendor/bin/dep --file=conf/deployer/deploy.php \
--hosts=localhost gen-service -o APP_ENV=dev
kubectl apply -f build/service.yml
kubectl apply -f build/cronjob.local.yml
# Deploy services (master environment)
php vendor/bin/dep --file=conf/deployer/deploy.php \
--hosts=localhost gen-service -o APP_ENV=master
kubectl apply -f build/service.yml
kubectl apply -f build/cronjob.local.yml
The web application container mounts the Symfony working directory, so there is no need to rebuild the container unless, say, you need to add a PHP library. Just switch git branches as needed.

The Minikube web application is visible on the host at 192.168.99.100:32745 (adapt the port number).
We use Cloud Shell to build and deploy on GKE.
git clone [email protected]:myorganization/myproject.git
cd myproject
Deploy the Kubernetes secrets. Here they are the same as for the local environment.
cd your-secrets-dir
# Switch to GKE context
kubectl config use-context gke_myproject-123456_us-central1_myproject
kubectl apply -f secrets.oat.remote.yml
kubectl apply -f secrets.prod.remote.yml
Push the Alpine/curl image to GCR:
cd myproject
docker build -f conf/docker/Dockerfile.cronjob -t k8s-cronjob:current .
docker tag k8s-cronjob:current gcr.io/myproject-123456/k8s-cronjob:current
docker push gcr.io/myproject-123456/k8s-cronjob:current
Deploy services manifests:
cd myproject
# Switch to GKE context
kubectl config use-context gke_myproject-123456_us-central1_myproject
# Deploy services (OAT on test.)
php vendor/bin/dep --file=conf/deployer/deploy.php \
--hosts=remote gen-service -o APP_ENV=oat
kubectl apply -f build/service.yml
kubectl apply -f build/cronjob.remote.yml
# Deploy services (Prod on www.)
php vendor/bin/dep --file=conf/deployer/deploy.php \
--hosts=remote gen-service -o APP_ENV=prod
kubectl apply -f build/service.yml
kubectl apply -f build/cronjob.remote.yml
The release process is as follows:

- Git-tag a release version
- Push the tag to GitHub
- In Cloud Shell, git pull
- Execute the Deployer script, which:
  - Builds a new container with a Docker tag identical to the git tag
  - Applies an updated deployment manifest with the new Docker tag

Commands:
# tag a commit locally
git tag 0.4
# push tags
git push origin --tags
# Deploy new version - Test
cd myproject
php vendor/bin/dep --file=conf/deployer/deploy.php --hosts=remote deploy-remote \
-o APP_ENV=dev -o TAG=0.4
# Deploy new version - Prod
cd myproject
php vendor/bin/dep --file=conf/deployer/deploy.php --hosts=remote deploy-remote \
-o APP_ENV=master -o TAG=0.4
Some random resources that I found useful. Not all apply to this recipe.
The Blowb Project - Deploy Integrated Apps Using Docker (Not used in this article but looks like it has many good ideas)