Salt states for provisioning machines in a generic yet sensible way.
The goal is to create Salt environments usable by developers as well as admins during the setup of either server or 'client' machines.
There are multiple options to deploy Salt, depending on how you want to provision machines:

- Separate `salt-master` process provisioning `salt-minions`:
  refer to the SaltStack documentation of gitfs (if you prefer the local filesystem, familiarize yourself with multienv) or use the fully automated setup of SaltStack via the associated project ambassador.
- Master-less provisioning (the machine provisions itself). Steps:
  1. `curl -o /tmp/bootstrap-salt.sh -L https://bootstrap.saltstack.com` (requires `apt-get install curl python-pip python-pygit2`)
  2. `sh /tmp/bootstrap-salt.sh -X`
  3. Create the masterless configs `config/common.conf` and `config/gitfs.conf` (put them under `/etc/salt/minion.d/`); use the associated project ambassador for guidelines on how to create such configs.
  4. `systemctl start salt-minion`
  5. Optionally run `salt-call --local saltutil.sync_all`
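The two masterless config files might look roughly like the following (a sketch only — the gitfs remote URL is a placeholder; consult the associated ambassador project for the real layout):

```yaml
# /etc/salt/minion.d/common.conf
file_client: local        # masterless mode: render and apply states locally

# /etc/salt/minion.d/gitfs.conf
fileserver_backend:
  - gitfs
gitfs_provider: pygit2    # matches the python-pygit2 prerequisite above
gitfs_remotes:
  - https://github.com/example/salt-states.git  # placeholder repository
```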
It is possible to use both methods, e.g., initially provision the machine using the master-minion setup, then "unplug" the minion and use master-less mode when needed.
Vagrant supports the Salt provisioner:

1. Add the following sections to the `Vagrantfile`:
```ruby
Vagrant.configure("2") do |config|
  ...
  config.vm.synced_folder "/srv/salt/", "/srv/salt/" # add states from host
  config.vm.provision "init", type: "shell" do |s|
    s.path = "https://gist.githubusercontent.com/kiemlicz/33e891dd78e985bd080b85afa24f5d0a/raw/b9aba40aa30f238a24fe4ecb4ddc1650d9d685af/init.sh"
  end
  config.vm.provision :salt do |salt|
    salt.masterless = true
    salt.minion_config = "minion.conf"
    salt.run_highstate = true
    salt.salt_args = [ "saltenv=base" ]
  end
  ...
end
```
   - `init.sh`: bash script that installs Salt prerequisites, e.g., git, pip packages (jinja2), etc.
   - `minion.conf`: configures `file_client: local` and whatever else you like (multienvs, gitfs, ext_pillar)
2. Run `vagrant up`
Depending on the use case, different deployment strategies exist.

Salt Master installed on a separate machine, Salt Minion installed on each Kubernetes node.
This way it is possible to automatically create Kubernetes master and worker nodes.
For documentation, refer to the Kubernetes states.
In this strategy the Salt Master is deployed within a dedicated pod and the Salt Minions are deployed as a DaemonSet.
In this approach, the Salt Minion is not the provisioned entity.
Instead, the Salt Minion registers the `docker_events` engine. The engine captures Docker host events and forwards them to the Salt Master's event bus. The Salt Master's Reactor System is then used to add provisioning logic that is impossible (at least not in an easy way) to provide using Kubernetes tools alone.
Example: creating and maintaining a Redis Cluster.
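As a sketch of that wiring (the reactor SLS path and the event tag are assumptions, not taken from this repository), the engine is enabled in the minion config and the master maps the forwarded events to reactor files:

```yaml
# Minion config: forward Docker host events to the master's event bus
engines:
  - docker_events: {}

# Master config: react to forwarded events (tag and path are placeholders)
reactor:
  - 'salt/engines/docker_events/start':
      - /srv/reactor/container_started.sls
```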
Mind that the Salt Minion is not installed in every container and is not used to fully configure the container. That would be possible, but it should be the responsibility of the tool used to create the container (of course, it is possible to use Salt as such a tool).
A more detailed description can be found in the POD provisioning section.
In order to run states against minions, the pillar must be configured.
Refer to the `pillar.example.sls` files in the states themselves for the particular structure.
States must be written with the assumption that a given pillar entry may not exist.
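In practice this means states should read the pillar via `pillar.get` with a default (or guard with Jinja) rather than indexing it directly; a minimal sketch with a made-up key and path:

```sls
# Renders safely even when 'myapp:port' is absent from the pillar
{% set port = salt['pillar.get']('myapp:port', 8080) %}

myapp-config:
  file.managed:
    - name: /etc/myapp/port.conf
    - contents: "port={{ port }}"
```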
For a detailed state description, refer to the particular state's README file.
States are divided into environments:

- `base` - the main one. Every other environment comprises at least `base`. Contains core states responsible for operations like repository configuration, core package installation, or user setup.
- `dev` - for developer machines. Includes `base`. Contains states that install tons of dev apps along with their configuration (like adding an entry to the `PATH` variable).
- `server` - installs/configures typical server tools, e.g., Kubernetes or LVS. Includes `base` and `dev`.
In order to keep states readable and the configuration of the whole SaltStack as flexible as possible, some extensions and custom states were introduced.
All of the custom states can be found in the default Salt extensions' directories (`_pillar`, `_runner`, etc.).
Dynamically configured git pillar.
Allows users to configure their own pillar data git repository at runtime, using pillar entries.
Normally `git_pillar` must be configured in the Salt Master configuration beforehand.
Append `privgit` to the `ext_pillar` configuration option to enable this extension.
The syntax:

```yaml
ext_pillar:         # Salt option
  - privgit:        # extension name
      repositories:
        - name1:    # first entry identifier
            param1: # the parameters dict
            param2: # put in the config only the options that most likely won't be changed by users
```
A fully static configuration (use `git_pillar` instead of this):
```yaml
ext_pillar:
  - privgit:
      - name1:
          url: [email protected]:someone/somerepo.git
          branch: master
          env: custom
          root: pillar
          privkey: |
            some
            sensitive data
          pubkey: and so on
      - name2:
          url: [email protected]:someone/somerepo.git
          branch: develop
          env: custom
          privkey_location: /location/on/master
          pubkey_location: /location/on/master
```
Parameters are formed as a list; subsequent entries override previous ones:
```yaml
privgit:
  - name1:
      url: [email protected]:someone/somerepo.git
      branch: master
      env: custom
      root: pillar
      privkey: |
        some
        sensitive data
      pubkey: and so on
  - name2:
      url: [email protected]:someone/somerepo.git
      branch: develop
      env: custom
      privkey_location: /location/on/master
      pubkey_location: /location/on/master
  - name2:
      url: [email protected]:someone/somerepo.git
      branch: notdevelop
```
The order of entries does matter: the last one is the most specific. This does not affect further pillar merge strategies.
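The override behaviour can be illustrated in plain Python (a sketch only, not the extension's actual code; `merge_repositories` is a hypothetical helper name):

```python
def merge_repositories(entries):
    """Merge a list of single-key dicts; a later entry with the same
    name overrides only the parameters it repeats."""
    merged = {}
    order = []
    for entry in entries:
        for name, params in entry.items():
            if name not in merged:
                merged[name] = {}
                order.append(name)
            merged[name].update(params or {})
    return [{name: merged[name]} for name in order]

entries = [
    {"name2": {"url": "[email protected]:someone/somerepo.git",
               "branch": "develop", "env": "custom"}},
    {"name2": {"branch": "notdevelop"}},
]
# The second "name2" entry overrides only the branch; url and env survive.
print(merge_repositories(entries))
```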
Due to potential integration with systems like Foreman that support string keys only, another (unpleasant, flat) syntax exists:
```yaml
privgit_name1_url: [email protected]:someone/somerepo.git
privgit_name1_branch: master
privgit_name1_env: custom
privgit_name1_root: pillar
privgit_name1_privkey: |
  some
  sensitive data
privgit_name1_pubkey: and so on
privgit_name2_url: [email protected]:someone/somerepo.git
privgit_name2_branch: develop
privgit_name2_env: custom
privgit_name2_privkey_location: /location/on/master
privgit_name2_pubkey_location: /location/on/master
```
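How the flat keys map back to the nested form can be sketched in Python (illustrative only; `unflatten_privgit` is a made-up name, and the sketch assumes repository names contain no underscores):

```python
def unflatten_privgit(flat):
    """Group 'privgit_<name>_<param>' keys into {name: {param: value}}."""
    repos = {}
    for key, value in flat.items():
        if not key.startswith("privgit_"):
            continue
        # split at most twice: prefix, repository name, parameter name
        _, name, param = key.split("_", 2)
        repos.setdefault(name, {})[param] = value
    return repos

flat = {
    "privgit_name1_url": "[email protected]:someone/somerepo.git",
    "privgit_name1_branch": "master",
    "privgit_name2_privkey_location": "/location/on/master",
}
print(unflatten_privgit(flat))
```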
Pulls arbitrary Kubernetes information and adds it to the pillar.
It is possible to specify the pillar key under which the Kubernetes data will be hooked up.
Under the hood this extension executes:

```
kubectl get -o yaml -n <namespace or default> <kind> <name>
```

or, if `name` is not provided:

```
kubectl get -o yaml -n <namespace or default> <kind> -l <selector>
```
There is no per-minion filtering of Kubernetes pillar data yet, thus this data will be matched to all minions.
For Kubernetes deployments (minions as a DaemonSet) this should be acceptable.
```yaml
ext_pillar:
  - kubectl:
      config: "/some/path/to/kubernetes/access.conf"  # all queries will use this config
      queries:
        - kind: statefulsets
          name: redis-cluster
          key: "redis:kubernetes"  # nest the results under `redis:kubernetes`
```
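The colon-delimited `key` nesting can be sketched in Python (an illustration of the idea, not the extension's actual code):

```python
def nest_under_key(key, data, delimiter=":"):
    """Wrap `data` in nested dicts so it ends up under the
    colon-delimited path, e.g. 'redis:kubernetes'."""
    result = data
    for part in reversed(key.split(delimiter)):
        result = {part: result}
    return result

print(nest_under_key("redis:kubernetes", {"replicas": 3}))
# → {'redis': {'kubernetes': {'replicas': 3}}}
```

The returned dict is what an `ext_pillar` function would hand back to Salt for merging into the minion's pillar.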
Custom state that manages dotfiles.
It clones them from the given repository and sets them up according to the following technique.
Most dev tool setup comes down to downloading some kind of archive, unpacking it, and possibly adding a symlink to some generic location.
This state does pretty much that.
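Under assumed names (the state IDs, URL, hash, and paths are placeholders), the technique roughly corresponds to Salt's built-in `archive.extracted` and `file.symlink` states:

```sls
sometool-archive:
  archive.extracted:
    - name: /opt/sometool-1.0                           # unpack destination
    - source: https://example.com/sometool-1.0.tar.gz   # placeholder URL
    - source_hash: sha256=<hash>                        # placeholder hash

sometool-symlink:
  file.symlink:
    - name: /opt/sometool       # the generic location
    - target: /opt/sometool-1.0
```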
Environment variable operations.
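Salt ships an `environ.setenv` state for this kind of operation; a minimal example (the variable name and value are made up):

```sls
set-editor:
  environ.setenv:
    - name: EDITOR
    - value: vim
    - update_minion: True  # make the change visible to the running minion
```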
Tests are performed on different OSes (in Docker) in Salt masterless mode.
Different pillar data is mixed with different saltenvs.
Then `salt-call --local state.show_sls <state name>` is invoked and its output is checked to verify that the state renders properly.
More complex tests that perform actual state application in different environments are run in the associated ambassador project.
- SaltStack quickstart
- SaltStack best practices