No. Salt is 100% committed to being open-source, including all of our APIs and
the new 'Halite' web interface which will be included in version 0.17.0. It
is developed under the Apache 2.0 license, allowing it to be used in both
open and proprietary projects.
Minions need to be able to connect to the Master on TCP ports 4505 and 4506.
Minions do not need any inbound ports open. More detailed information on
firewall settings can be found here.
You are probably using cmd.run rather than cmd.wait. A cmd.wait state will only run when there has been a change in a state that it is watching. A cmd.run state will run the corresponding command every time (unless it is prevented from running by the unless or onlyif arguments).
More details can be found in the documentation for the cmd states.
When you run test.ping the Master tells Minions to run commands/functions,
and listens for the return data, printing it to the screen when it is received.
If it doesn't receive anything back, it doesn't have anything to display for
that Minion.
There are a couple of options for getting information on Minions that are not responding. One is to use the verbose (-v) option when you run salt commands, as it will display "Minion did not return" for any Minions which time out.
salt -v '*' pkg.install zsh
Another option is to use the manage.down runner:
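salt-run manage.down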
If the Minion id is not configured explicitly (using the id
parameter), Salt will determine the id based on the hostname. Exactly how this
is determined varies a little between operating systems and is described in
detail here.
Salt detects the Minion's operating system and assigns the correct package or
service management module based on what is detected. However, for certain custom
spins and OS derivatives this detection fails. In cases like this, an issue
should be opened on our tracker, with the following information:
The output of the following command:
salt <minion_id> grains.items | grep os
The contents of /etc/lsb-release, if present on the Minion.
In versions of Salt 0.16.3 or older, there is a bug in gitfs which can affect the syncing of custom types.
Upgrading to 0.16.4 or newer will fix this.
This is most likely a PATH issue. Did you custom-compile the software which the module requires? RHEL/CentOS/etc. in particular override the root user's path in /etc/init.d/functions, setting it to /sbin:/usr/sbin:/bin:/usr/bin, making software installed into /usr/local/bin unavailable to Salt when the Minion is started using the initscript. In version 0.18.0, Salt will have a better solution for these sorts of PATH-related issues, but recompiling the software to install it into a location within the PATH should resolve the issue in the meantime. Alternatively, you can create a symbolic link within the PATH using a file.symlink state.
/usr/bin/foo:
  file.symlink:
    - target: /usr/local/bin/foo
Introduction to Salt
We’re not just talking about NaCl.
The 30 second summary
Salt is:
- a configuration management system, capable of maintaining remote nodes
in defined states (for example, ensuring that specific packages are installed and
specific services are running)
- a distributed remote execution system used to execute commands and
query data on remote nodes, either individually or by arbitrary
selection criteria
It was developed in order to bring the best solutions found in the
world of remote execution together and make them better, faster, and more
malleable. Salt accomplishes this through its ability to handle large loads of
information, and not just dozens but hundreds and even thousands of individual
servers quickly through a simple and manageable interface.
Simplicity
Providing versatility between massive scale deployments and smaller systems may seem
daunting, but Salt is very simple to set up and maintain, regardless of the
size of the project. The architecture of Salt is designed to work with any
number of servers, from a handful of local network systems to international
deployments across different datacenters. The topology is a simple
server/client model with the needed functionality built into a single set of
daemons. While the default configuration will work with little to no
modification, Salt can be fine tuned to meet specific needs.
Parallel execution
The core functions of Salt:
- enable commands to remote systems to be called in parallel rather than serially
- use a secure and encrypted protocol
- use the smallest and fastest network payloads possible
- provide a simple programming interface
Salt also introduces more granular controls to the realm of remote
execution, allowing systems to be targeted not just by hostname, but
also by system properties.
Building on proven technology
Salt takes advantage of a number of technologies and techniques. The
networking layer is built with the excellent ZeroMQ networking
library, so the Salt daemon includes a viable and transparent AMQ
broker. Salt uses public keys for authentication with the master
daemon, then uses faster AES encryption for payload communication;
authentication and encryption are integral to Salt. Salt takes
advantage of communication via msgpack, enabling fast and light
network traffic.
Python client interface
In order to allow for simple expansion, Salt execution routines can be written
as plain Python modules. The data collected from Salt executions can be sent
back to the master server, or to any arbitrary program. Salt can be called from
a simple Python API, or from the command line, so that Salt can be used to
execute one-off commands as well as operate as an integral part of a larger
application.
Fast, flexible, scalable
The result is a system that can execute commands at high speed on
target server groups ranging from one to very many servers. Salt is
very fast, easy to set up, amazingly malleable and provides a single
remote execution architecture that can manage the diverse
requirements of any number of servers. The Salt infrastructure
brings together the best of the remote execution world, amplifies its
capabilities and expands its range, resulting in a system that is as
versatile as it is practical, suitable for any network.
Open
Salt is developed under the Apache 2.0 license, and can be used for
open and proprietary projects. Please submit your expansions back to
the Salt project so that we can all benefit together as Salt grows.
Please feel free to sprinkle Salt around your systems and let the
deliciousness come forth.
Installation
The Salt system setup is amazingly simple, as this is one of the central design
goals of Salt.
Quick Install
Many popular distributions will be able to install the salt minion by executing
the bootstrap script:
wget -O - http://bootstrap.saltstack.org | sudo sh
The bootstrap script also makes it simple to install a Salt Master; run the following to install just the Salt Master:
curl -L http://bootstrap.saltstack.org | sudo sh -s -- -M -N
Currently the install script has been tested to work on:
- Ubuntu 10.x/11.x/12.x
- Debian 6.x
- CentOS 6.3
- Fedora
- Arch
- FreeBSD 9.0
See Salt Bootstrap for more information.
Dependencies
Salt should run on any Unix-like platform so long as the dependencies are met.
- Python >= 2.6, < 3.0
- ZeroMQ >= 2.1.9
- pyzmq >= 2.1.9 - ZeroMQ Python bindings
- PyCrypto - The Python cryptography toolkit
- msgpack-python - High-performance message interchange format
- YAML - Python YAML bindings
- Jinja2 - parsing Salt States (configurable in the master settings)
Optional Dependencies
- mako - an optional parser for Salt States (configurable in the master
settings)
- gcc - dynamic Cython module compiling
Configuring Salt
Salt configuration is very simple. The default configuration for the
master will work for most installations and the only requirement for
setting up a minion is to set the location of the master in the minion
configuration file.
- master
- The Salt master is the central server that all minions connect to.
Commands are run on the minions through the master, and minions send data
back to the master (unless otherwise redirected with a returner). It is started with the
salt-master program.
- minion
- Salt minions are the potentially hundreds or thousands of servers that
may be queried and controlled from the master.
The configuration files will be installed to /etc/salt and are named after the respective components, /etc/salt/master and /etc/salt/minion.
Master Configuration
By default the Salt master listens on ports 4505 and 4506 on all interfaces (0.0.0.0). To bind Salt to a specific IP, redefine the "interface" directive in the master configuration file, typically /etc/salt/master, as follows:
- #interface: 0.0.0.0
+ interface: 10.0.0.1
After updating the configuration file, restart the Salt master.
See the master configuration reference
for more details about other configurable options.
Minion Configuration
Although there are many Salt Minion configuration options, configuring
a Salt Minion is very simple. By default a Salt Minion will
try to connect to the DNS name "salt"; if the Minion is able to
resolve that name correctly, no configuration is needed.
If the DNS name "salt" does not resolve to point to the correct
location of the Master, redefine the "master" directive in the minion
configuration file, typically /etc/salt/minion
, as follows:
- #master: salt
+ master: 10.0.0.1
After updating the configuration file, restart the Salt minion.
See the minion configuration reference
for more details about other configurable options.
Running Salt
Start the master in the foreground (to daemonize the process, pass the -d flag):
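salt-master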
Start the minion in the foreground (to daemonize the process, pass the -d flag):
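salt-minion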
Having trouble?
The simplest way to troubleshoot Salt is to run the master and minion in the foreground with the log level set to debug:
salt-master --log-level=debug
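And likewise for the minion:
salt-minion --log-level=debug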
For information on salt's logging system please see the logging
document.
Run as an unprivileged (non-root) user
To run Salt as another user, specify --user on the command line or assign user in the configuration file.
There is also a full troubleshooting guide
available.
Key Management
Salt uses AES encryption for all communication between the Master and
the Minion. This ensures that the commands sent to the Minions cannot
be tampered with, and that communication between Master and Minion is
authenticated through trusted, accepted keys.
Before commands can be sent to a Minion, its key must be accepted on
the Master. Run the salt-key
command to list the keys known to
the Salt Master:
[root@master ~]# salt-key -L
Unaccepted Keys:
alpha
bravo
charlie
delta
Accepted Keys:
This example shows that the Salt Master is aware of four Minions, but none of
the keys has been accepted. To accept the keys and allow the Minions to be
controlled by the Master, again use the salt-key
command:
[root@master ~]# salt-key -A
[root@master ~]# salt-key -L
Unaccepted Keys:
Accepted Keys:
alpha
bravo
charlie
delta
The salt-key command allows for signing keys individually or in bulk. The example above, using -A, bulk-accepts all pending keys. To accept keys individually use the lowercase of the same option, -a keyname.
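For example, to accept only the alpha key from the listing above:
[root@master ~]# salt-key -a alpha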
Sending Commands
Communication between the Master and a Minion may be verified by running
the test.ping
command:
[root@master ~]# salt alpha test.ping
alpha:
    True
Communication between the Master and all Minions may be tested in a
similar way:
[root@master ~]# salt '*' test.ping
alpha:
    True
bravo:
    True
charlie:
    True
delta:
    True
Each of the Minions should send a True
response as shown above.
What's Next?
Understanding targeting is important. From there,
depending on the way you wish to use Salt, you should also proceed to learn
about States and Execution Modules.
Developing Salt
There is a great need for contributions to salt and patches are welcome! The goal
here is to make contributions clear, make sure there is a trail for where the code
has come from, and most importantly, to give credit where credit is due!
There are a number of ways to contribute to salt development.
Sending a GitHub pull request
This is the preferred method for contributions. Simply create a GitHub
fork, commit changes to the fork, and then open up a pull request.
The following is an example (from Open Comparison Contributing Docs )
of an efficient workflow for forking, cloning, branching, committing, and
sending a pull request for a GitHub repository.
First, make a local clone of your GitHub fork of the salt GitHub repo and make
edits and changes locally.
Then, create a new branch on your clone by entering the following commands:
git checkout -b fixed-broken-thing
Switched to a new branch 'fixed-broken-thing'
Choose a name for your branch that describes its purpose.
Now commit your changes to this new branch with the following command:
git commit -am 'description of my fixes for the broken thing'
Note
Using git commit -am
, followed by a quoted string, both stages and
commits all modified files in a single command. Depending on the nature of
your changes, you may wish to stage and commit them separately. Also, note
that if you wish to add newly-tracked files as part of your commit, they
will not be caught using git commit -am
and will need to be added using
git add
before committing.
Push your locally-committed changes back up to GitHub:
git push --set-upstream origin fixed-broken-thing
Now go look at your fork of the salt repo on the GitHub website. The new
branch will now be listed under the "Source" tab where it says "Switch Branches".
Select the new branch from this list, and then click the "Pull request" button.
Put in a descriptive comment, and include links to any project issues related
to the pull request.
The repo managers will be notified of your pull request and it will be
reviewed. If a reviewer asks for changes, just make the changes locally in the
same local feature branch, push them to GitHub, then add a comment to the
discussion section of the pull request.
Note
Travis-CI
To make reviewing pull requests easier for the maintainers, please enable
Travis-CI on your fork. Salt is already configured, so simply follow the
first 2 steps on the Travis-CI Getting Started Doc.
Keeping Salt Forks in Sync
Salt is advancing quickly. It is therefore critical to pull upstream changes
from master into forks on a regular basis. Nothing is worse than putting days of hard work into a pull request only to have it rejected because it has diverged too far from master.
To pull in upstream changes:
# For ssh github
git remote add upstream git@github.com:saltstack/salt.git
git fetch upstream
# For https github
git remote add upstream https://github.com/saltstack/salt.git
git fetch upstream
To check the log to be sure that you actually want the changes, run the
following before merging:
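git log upstream/develop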
Then to accept the changes and merge into the current branch:
git merge upstream/develop
For more info, see GitHub Fork a Repo Guide or Open Comparison Contributing
Docs
Posting patches to the mailing list
Patches will also be accepted by email. Format patches using git
format-patch and send them to the Salt users mailing list. The contributor
will then get credit for the patch, and the Salt community will have an archive
of the patch and a place for discussion.
Installing Salt for development
Clone the repository using:
git clone https://github.com/saltstack/salt
Note
tags
Just cloning the repository is enough to work with Salt and make
contributions. However, fetching additional tags from git is required to
have Salt report the correct version for itself. To do this, first
add the git repository as an upstream source:
git remote add upstream http://github.com/saltstack/salt
Fetching tags is done with the git 'fetch' utility:
git fetch --tags upstream
Create a new virtualenv:
virtualenv /path/to/your/virtualenv
On Arch Linux, where Python 3 is the default installation of Python, use the virtualenv2 command instead of virtualenv.
Note
Using system Python modules in the virtualenv
To use already-installed python modules in virtualenv (instead of having pip download and compile new ones), run virtualenv --system-site-packages.
Using this method eliminates the requirement to install the salt dependencies
again, although it does assume that the listed modules are all installed in the
system PYTHONPATH at the time of virtualenv creation.
Activate the virtualenv:
source /path/to/your/virtualenv/bin/activate
Install Salt (and dependencies) into the virtualenv:
pip install M2Crypto # Don't install on Debian/Ubuntu (see below)
pip install pyzmq PyYAML pycrypto msgpack-python jinja2 psutil
pip install -e ./salt # the path to the salt git clone from above
Note
Installing M2Crypto
swig and libssl-dev are required to build M2Crypto. To fix the error command 'swig' failed with exit status 1 while installing M2Crypto, try installing it with the following command:
env SWIG_FEATURES="-cpperraswarn -includeall -D__`uname -m`__ -I/usr/include/openssl" pip install M2Crypto
Debian and Ubuntu systems have modified openssl libraries and mandate that
a patched version of M2Crypto be installed. This means that M2Crypto
needs to be installed via apt:
apt-get install python-m2crypto
This also means that pulling in the M2Crypto installed using apt requires using
--system-site-packages
when creating the virtualenv.
Note
Installing psutil
Python header files are required to build this module, otherwise the pip
install will fail. If your distribution separates binaries and headers into
separate packages, make sure that you have the headers installed. In most
Linux distributions which split the headers into their own package, this
can be done by installing the python-dev
or python-devel
package.
For other platforms, the package will likely be similarly named.
Note
Important note for those developing using RedHat variants
For developers using a RedHat variant, be advised that the package
provider for newer Redhat-based systems (yumpkg.py) relies on RedHat's python
interface for yum. The variants that use this module to provide package
support include the following:
Developers using one of these systems should create the salt virtualenv using the
--system-site-packages
option to ensure that the correct modules are available.
Note
Installing dependencies on OS X.
You can install needed dependencies on OS X using homebrew or macports.
See OS X Installation
Running a self-contained development version
During development it is easiest to be able to run the Salt master and minion
that are installed in the virtualenv you created above, and also to have all
the configuration, log, and cache files contained in the virtualenv as well.
Copy the master and minion config files into your virtualenv:
mkdir -p /path/to/your/virtualenv/etc/salt
cp ./salt/conf/master /path/to/your/virtualenv/etc/salt/master
cp ./salt/conf/minion /path/to/your/virtualenv/etc/salt/minion
Edit the master config file:
- Uncomment and change the user: root value to your own user.
- Uncomment and change the root_dir: / value to point to /path/to/your/virtualenv.
- If you are running version 0.11.1 or older, uncomment and change the pidfile: /var/run/salt-master.pid value to point to /path/to/your/virtualenv/salt-master.pid.
- If you are also running a non-development version of Salt you will have to change the publish_port and ret_port values as well.
Edit the minion config file:
- Repeat the edits you made in the master config for the user and root_dir values as well as any port changes.
- If you are running version 0.11.1 or older, uncomment and change the pidfile: /var/run/salt-minion.pid value to point to /path/to/your/virtualenv/salt-minion.pid.
- Uncomment and change the master: salt value to point at localhost.
- Uncomment and change the id: value to something descriptive like "saltdev". This isn't strictly necessary but it will serve as a reminder of which Salt installation you are working with.
Note
Using salt-call with a Standalone Minion
If you plan to run salt-call with this self-contained development
environment in a masterless setup, you should invoke salt-call with
-c /path/to/your/virtualenv/etc/salt
so that salt can find the minion
config file. Without the -c
option, Salt finds its config files in
/etc/salt.
Start the master and minion, accept the minion's key, and verify your local Salt
installation is working:
cd /path/to/your/virtualenv
salt-master -c ./etc/salt -d
salt-minion -c ./etc/salt -d
salt-key -c ./etc/salt -L
salt-key -c ./etc/salt -A
salt -c ./etc/salt '*' test.ping
Running the master and minion in debug mode can be helpful when developing. To do this, add -l debug to the calls to salt-master and salt-minion. If you would like to log to the console instead of to the log file, remove the -d.
Once the minion starts, you may see an error like the following:
zmq.core.error.ZMQError: ipc path "/path/to/your/virtualenv/var/run/salt/minion/minion_event_7824dcbcfd7a8f6755939af70b96249f_pub.ipc" is longer than 107 characters (sizeof(sockaddr_un.sun_path)).
This means that the path to the socket the minion is using is too long. This is a system limitation, so the only workaround is to reduce the length of this path. This can be done in a couple of different ways:
- Create your virtualenv in a path that is short enough.
- Edit the sock_dir minion config variable and reduce its length. Remember that this path is relative to the value you set in root_dir.
Note
The socket path is limited to 107 characters on Solaris and Linux, and 103 characters on BSD-based systems.
Note
File descriptor limits
Ensure that the system open file limit is raised to at least 2047:
# check your current limit
ulimit -n
# raise the limit. persists only until reboot
# use 'limit descriptors 2047' for c-shell
ulimit -n 2047
To set file descriptors on OSX, refer to the OS X Installation instructions.
Using easy_install to Install Salt
If you are installing using easy_install, you will need to define a USE_SETUPTOOLS environment variable, otherwise dependencies will not be installed:
USE_SETUPTOOLS=1 easy_install salt
Running the tests
You will need mock to run the tests:
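pip install mock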
If you are on Python < 2.7 then you will also need unittest2:
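pip install unittest2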
Note
In Salt 0.17, testing libraries were migrated into their own repo. To install them:
pip install git+https://github.com/saltstack/salt-testing.git#egg=SaltTesting
Failure to install SaltTesting will result in import errors similar to the following:
ImportError: No module named salttesting
Finally you use setup.py to run the tests with the following command:
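python setup.py test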
For greater control while running the tests, please try:
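# the standalone test runner shipped in the Salt checkout exposes more options
python tests/runtests.py --help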
Editing and previewing the documentation
You need the sphinx-build command to build the docs. In Debian/Ubuntu this is provided in the python-sphinx package. Sphinx can also be installed into a virtualenv using pip:
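pip install Sphinx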
Change to salt documentation directory, then:
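# assuming the Sphinx sources live in the doc/ directory of the Salt checkout
cd doc
make html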
- This will build the HTML docs. Run make without any arguments to see the available make targets, which include html, man, and text.
- The docs then are built within the docs/_build/ folder. To update the docs after making changes, run make again.
- The docs use reStructuredText for markup.
See a live demo at http://rst.ninjs.org/.
- The help information on each module or state is culled from the python code that runs for that piece. Find them in salt/modules/ or salt/states/.
- To build the docs on Arch Linux, the python2-sphinx package is
required. Additionally, it is necessary to tell make where to find
the proper sphinx-build binary, like so:
make SPHINXBUILD=sphinx-build2 html
- To build the docs on RHEL/CentOS 6, the python-sphinx10 package
must be installed from EPEL, and the following make command must be used:
make SPHINXBUILD=sphinx-1.0-build html
Targeting
- Targeting
- Specifying which minions should run a command or execute a state by
matching against hostnames, or system information, or defined groups,
or even combinations thereof.
For example, the command salt web1 apache.signal restart restarts the Apache httpd server; it specifies the machine web1 as the target, and the command will only be run on that one minion.
Similarly, when using States, the following top file specifies that only the web1 minion should execute the contents of webserver.sls:
base:
  'web1':
    - webserver
There are many ways to target individual minions or groups of minions in Salt:
Matching the minion id
- minion id
- A unique identifier for a given minion. By default the minion id is the
FQDN of that host but this can be overridden.
Each minion needs a unique identifier. By default when a minion starts for the
first time it chooses its FQDN as that
identifier. The minion id can be overridden via the minion's id
configuration setting.
Tip
minion id and minion keys
The minion id is used to generate the minion's public/private keys
and if it ever changes the master must then accept the new key as though
the minion was a new host.
Globbing
The default matching that Salt utilizes is shell-style globbing
around the minion id. This also works for states
in the top file.
Note
You must wrap salt calls that use globbing in single-quotes to
prevent the shell from expanding the globs before Salt is invoked.
Match all minions:
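salt '*' test.ping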
Match all minions in the example.net domain or any of the example domains:
salt '*.example.net' test.ping
salt '*.example.*' test.ping
Match all the webN minions in the example.net domain (web1.example.net, web2.example.net … webN.example.net):
salt 'web?.example.net' test.ping
Match the web1 through web5 minions:
salt 'web[1-5]' test.ping
Match the web-x, web-y, and web-z minions:
salt 'web-[x-z]' test.ping
Regular Expressions
Minions can be matched using Perl-compatible regular expressions
(which is globbing on steroids and a ton of caffeine).
Match both web1-prod and web1-devel minions:
salt -E 'web1-(prod|devel)' test.ping
When using regular expressions in a State's top file, you must specify the matcher as the first option. The following example executes the contents of webserver.sls on the above-mentioned minions.
base:
  'web1-(prod|devel)':
    - match: pcre
    - webserver
Lists
At the most basic level, you can specify a flat list of minion IDs:
salt -L 'web1,web2,web3' test.ping
Grains
Salt comes with an interface to derive information about the underlying system.
This is called the grains interface, because it presents salt with grains of
information.
- Grains
- Static bits of information that a minion collects about the system when
the minion first starts.
The grains interface is made available to Salt modules and components so that
the right salt minion commands are automatically available on the right
systems.
It is important to remember that grains are bits of information loaded when the salt minion starts, so this information is static. Grains are therefore suited to data that does not change, such as the running kernel or the operating system.
Match all CentOS minions:
salt -G 'os:CentOS' test.ping
Match all minions with 64-bit CPUs, and return number of CPU cores for each
matching minion:
salt -G 'cpuarch:x86_64' grains.item num_cpus
Additionally, globs can be used in grain matches, and grains that are nested in a dictionary can be matched by adding a colon for each level that is traversed. For example, the following will match hosts that have a grain called ec2_tags, which itself is a dict with a key named environment, which has a value that contains the word production:
salt -G 'ec2_tags:environment:*production*'
Listing Grains
Available grains can be listed by using the 'grains.ls' module:
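salt '*' grains.ls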
Grains data can be listed by using the 'grains.items' module:
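salt '*' grains.items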
Grains in the Minion Config
Grains can also be statically assigned within the minion configuration file.
Just add the option grains
and pass options to it:
grains:
  roles:
    - webserver
    - memcache
  deployment: datacenter4
  cabinet: 13
  cab_u: 14-15
Then status data specific to your servers can be retrieved via Salt, or used inside of the State system for matching. In the case of the example above, it also makes targeting as simple as matching on specific data about your deployment.
Grains in /etc/salt/grains
If you do not want to place your custom static grains in the minion config file, you can also put them in /etc/salt/grains. They are configured in the same way as in the above example, only without a top-level grains: key:
roles:
  - webserver
  - memcache
deployment: datacenter4
cabinet: 13
cab_u: 14-15
Precedence of Custom Static Grains
Be careful when defining grains both in /etc/salt/grains and within the minion config file. If a grain is defined in both places, the value in the minion config file takes precedence, and will always be used over its counterpart in /etc/salt/grains.
Grains in Top file
With correctly configured grains on the Minion, the top file used in Pillar or during Highstate can be made very efficient. For example:
'node_type:web':
  - match: grain
  - webserver
'node_type:postgres':
  - match: grain
  - database
'node_type:redis':
  - match: grain
  - redis
'node_type:lb':
  - match: grain
  - lb
For this example to work, you would need the grain node_type set to the correct value to match on. This simple example is nice, but too much of the code is repetitive. To go one step further, we can place some Jinja template code into the top file.
{% set self = grains['node_type'] %}
'node_type:{{ self }}':
  - match: grain
  - {{ self }}
The Jinja code simplified the Top file, and allowed SaltStack to work its magic.
Writing Grains
Grains are easy to write. The grains interface is derived by executing
all of the "public" functions found in the modules located in the grains
package or the custom grains directory. The functions in the modules of
the grains must return a Python dict, where the
keys in the dict are the names of the grains and
the values are the values.
Custom grains should be placed in a _grains
directory located under the
file_roots
specified by the master config file. They will be
distributed to the minions when state.highstate
is run, or by executing the
saltutil.sync_grains
or
saltutil.sync_all
functions.
Before adding a grain to Salt, consider what the grain is and remember that
grains need to be static data. If the data is something that is likely to
change, consider using Pillar instead.
Node groups
- Node group
- A predefined group of minions declared in the master configuration file
nodegroups
setting as a compound target.
Nodegroups are declared using a compound target specification. The compound
target documentation can be found here.
The nodegroups
master config file parameter is used to define
nodegroups. Here's an example nodegroup configuration:
nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
  group2: 'G@os:Debian and foo.domain.com'
To match a nodegroup on the CLI, use the -N
command-line option:
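salt -N group1 test.ping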
To match a nodegroup in your top file, make sure to put - match: nodegroup on the line directly following the nodegroup name.
base:
  group1:
    - match: nodegroup
    - webserver
Compound matchers
- Compound matcher
- A combination of many target definitions that can be combined with
boolean operators.
Compound matchers allow very granular minion targeting using any of Salt's matchers. The default matcher is a glob match, just as with CLI and top file matching. To match using anything other than a glob, prefix the match string with the appropriate letter from the table below, followed by an @ sign.
Letter | Match Type        | Example
G      | Grains glob       | G@os:Ubuntu
E      | PCRE Minion ID    | E@web\d+\.(dev|qa|prod)\.loc
P      | Grains PCRE       | P@os:(RedHat|Fedora|CentOS)
L      | List of minions   | L@minion1.example.com,minion3.domain.com or bl*.domain.com
I      | Pillar glob       | I@pdata:foobar
S      | Subnet/IP address | S@192.168.1.0/24 or S@192.168.1.100
R      | Range cluster     | R@%foo.bar
Matchers can be joined using boolean and, or, and not operators. For example, the following string matches all Debian minions with a hostname that begins with webserv, as well as any minions that have a hostname which matches the regular expression web-dc1-srv.*:
salt -C 'webserv* and G@os:Debian or E@web-dc1-srv.*' test.ping
That same example expressed in a top file looks like the following:
base:
  'webserv* and G@os:Debian or E@web-dc1-srv.*':
    - match: compound
    - webserver
Note that a leading not is not supported in compound matches. Instead, something like the following must be done:
salt -C '* and not G@kernel:Darwin' test.ping
Batch Size
The -b (or --batch-size) option allows commands to be executed on only a specified number of minions at a time. Both percentages and finite numbers are supported.
salt '*' -b 10 test.ping
salt -G 'os:RedHat' --batch-size 25% apache.signal restart
This will only run test.ping on 10 of the targeted minions at a time, and then restart apache on 25% of the minions matching os:RedHat at a time, working through them all until the task is complete. This makes jobs like rolling web server restarts behind a load balancer, or doing maintenance on BSD firewalls using carp, much easier with salt.
The batch system maintains a window of running minions, so if there are a total of 150 minions targeted and the batch size is 10, the command is sent to 10 minions; when one minion returns, the command is sent to one additional minion, so that the job is constantly running on 10 minions.
Salt tutorials
Bootstrapping Salt on Linux EC2 with Cloud-Init
Salt is a great tool for remote execution and
configuration management, however you will still need to bootstrap the
daemon when spinning up a new node. One option is to create and save a
custom AMI, but this creates another resource to maintain and document.
A better method for Linux machines uses Canonical's CloudInit to run a bootstrap script
during an EC2 Instance initialization. Cloud-init takes the user_data
string passed into a new AWS instance and runs it in a manner similar to
rc.local. The bootstrap script needs to:
- Install Salt with dependencies
- Point the minion to the master
Here is a sample script:
#!/bin/bash
# Install saltstack
add-apt-repository ppa:saltstack/salt -y
apt-get update -y
apt-get install salt-minion -y
apt-get install salt-master -y
apt-get upgrade -y
# Set salt master location and start minion
sed -i 's/#master: salt/master: [salt_master_fqdn]/' /etc/salt/minion
salt-minion -d
First the script adds the saltstack ppa and installs the packages. Then we edit the minion config to point it at the master and start the minion. You will have to replace [salt_master_fqdn] with something that resolves to your Salt master.
Used With Boto
Boto will accept a string for user data
which can be used to pass our bootstrap script. If the script is saved to
a file, you can read it into a string:
import boto

user_data = open('salt_bootstrap.sh')
conn = boto.connect_ec2(<AWS_ACCESS_ID>, <AWS_SECRET_KEY>)
reservation = conn.run_instances(image_id=<ami_id>,
                                 key_name=<key_name>,
                                 user_data=user_data.read())
Additional Notes
Sometime in the future the ppa will include and install an upstart file. In the
meantime, you can use the bootstrap to build one.
It may also be useful to set the node's role during this phase. One option
would be saving the node's role to a file and then using a custom Grain to
select it.
Salt as a Cloud Controller
In Salt 0.14.0 advanced cloud control systems were introduced, allowing for
private cloud vms to be managed directly with Salt. This system is generally
referred to as Salt Virt.
The Salt Virt system already exists and is installed within Salt itself; this means that beyond setting up Salt, no additional salt code needs to be deployed.
Setting up Hypervisors
The first step to set up the hypervisors involves getting the correct software
installed and setting up the hypervisor network interfaces.
Installing Hypervisor Software
Salt Virt is made to be hypervisor agnostic, but currently the only
implemented hypervisor is KVM via libvirt.
The required software for a hypervisor is libvirt and kvm. For advanced
features install libguestfs or qemu-nbd.
Note
Libguestfs and qemu-nbd allow for virtual machine images to be mounted before startup and get pre-seeded with configurations and a salt minion.
A simple sls formula to deploy the required software and service:
Note
Package names used are Red Hat specific, different package names will be
required for different platforms
libguestfs:
  pkg.installed

qemu-kvm:
  pkg.installed

libvirt:
  pkg.installed

libvirtd:
  service.running:
    - enable: True
    - watch:
      - pkg: libvirt
Network Setup
Salt Virt comes with a system to model the network interfaces used by the deployed virtual machines. By default a single interface is created for the deployed virtual machine and is bridged to br0. To get going with the default networking setup, ensure that the bridge interface named br0 exists on the hypervisor and is bridged to an active network device.
Note
To use more advanced networking in Salt Virt read the Salt Virt
Networking document:
Salt Virt Networking
Libvirt State
One of the challenges of deploying a libvirt based cloud is the distribution
of libvirt certificates. These certificates allow for virtual machine
migration. Salt comes with a system used to auto deploy these certificates.
Salt manages the signing authority key and generates keys for libvirt clients
on the master, signs them with the certificate authority and uses pillar to
distribute them. This is managed via the libvirt
state. Simply execute this
formula on the minion to ensure that the certificate is in place and up to
date:
libvirt_keys:
  libvirt.keys
Getting Virtual Machine Images Ready
Salt Virt, requires that virtual machine images be provided as these are not
generated on the fly. Generating these virtual machine images differs greatly
based on the underlying platform.
Virtual machine images can be manually created using KVM and running through
the installer, but this process is not recommended since it is very manual and
prone to errors.
Virtual Machine generation applications are available for many platforms:
- vm-builder:
- http://wiki.debian.org/VMBuilder
Using Salt Virt
With hypervisors set up and virtual machine images ready, Salt can start
issuing cloud commands.
Start by deploying
Using cron with Salt
The Salt Minion can initiate its own highstate using the salt-call
command.
$ salt-call state.highstate
This will cause the minion to check in with the master and ensure it is in the
correct 'state'.
Use cron to initiate a highstate
If you would like the Salt Minion to regularly check in with the master you can
use the venerable cron to run the salt-call
command.
# PATH=/bin:/sbin:/usr/bin:/usr/sbin
00 00 * * * salt-call state.highstate
The above cron entry will run a highstate every day at midnight.
Note
Be aware that you may need to ensure the PATH for cron includes any
scripts or commands that need to be executed.
Automatic Updates / Frozen Deployments
Salt has support for the
Esky application freezing and update
tool. This tool allows one to build a complete zipfile out of the salt scripts
and all their dependencies - including shared objects / DLLs.
Getting Started
To build frozen applications, you'll need a suitable build environment for each
of your platforms. You should probably set up a virtualenv in order to limit
the scope of Q/A.
This process does work on Windows. Follow the directions at
https://github.com/saltstack/salt-windows-install for details on
installing Salt in Windows. Only the 32-bit Python and dependencies have been
tested, but they have been tested on 64-bit Windows.
You will need to install esky and bbfreeze from PyPI in order to enable the bdist_esky command in setup.py.
Building and Freezing
Once you have your tools installed and the environment configured, run python setup.py bdist to get the eggs prepared. After that is done, run python setup.py bdist_esky to have Esky traverse the module tree and pack all the scripts up into a redistributable. There will be an appropriately versioned salt-VERSION.zip in dist/ if everything went smoothly.
Windows
You will need to add C:\Python27\lib\site-packages\zmq
to your PATH
variable. This helps bbfreeze find the zmq dll so it can pack it up.
Using the Frozen Build
Unpack the zip file in your desired install location. Scripts like
salt-minion
and salt-call
will be in the root of the zip file. The
associated libraries and bootstrapping will be in the directories at the same
level. (Check the Esky documentation
for more information)
To support updating your minions in the wild, put your builds on a web server that your minions can reach. salt.modules.saltutil.update() will trigger an update and (optionally) a restart of the minion service under the new version.
Gotchas
My Windows minion isn't responding
The process dispatch on Windows is slower than it is on *nix. You may need to
add '-t 15' to your salt calls to give them plenty of time to return.
Windows and the Visual Studio Redist
You will need to install the Visual C++ 2008 32-bit redistributable on all
Windows minions. Esky has an option to pack the library into the zipfile,
but OpenSSL does not seem to acknowledge the new location. If you get a
no OPENSSL_Applink
error on the console when trying to start your
frozen minion, you have forgotten to install the redistributable.
Mixed Linux environments and Yum
The Yum Python module doesn't appear to be available on any of the standard
Python package mirrors. If you need to support RHEL/CentOS systems, you
should build on that platform to support all your Linux nodes. Also remember to build your virtualenv with --system-site-packages so that the yum module is included.
Automatic (Python) module discovery
Automatic (Python) module discovery does not work with the late-loaded scheme that
Salt uses for (Salt) modules. You will need to explicitly add any
misbehaving modules to the freezer_includes in Salt's setup.py.
Always check the zipped application to make sure that the necessary modules
were included.
Opening the Firewall up for Salt
The Salt master communicates with the minions using an AES-encrypted ZeroMQ
connection. These communications are done over TCP ports 4505 and 4506, which need
to be accessible on the master only. This document outlines suggested firewall
rules for allowing these incoming connections to the master.
Note
No firewall configuration needs to be done on Salt minions. These changes
refer to the master only.
RHEL 6 / CENTOS 6
The lokkit
command packaged with some Linux distributions makes opening
iptables firewall ports very simple via the command line. Just be careful
to not lock out access to the server by neglecting to open the ssh
port.
lokkit example:
lokkit -p 22:tcp -p 4505:tcp -p 4506:tcp
The system-config-firewall-tui
command provides a text-based interface to modifying
the firewall.
system-config-firewall-tui:
system-config-firewall-tui
openSUSE
Salt installs firewall rules in /etc/sysconfig/SuSEfirewall2.d/services/salt.
Enable with:
SuSEfirewall2 open
SuSEfirewall2 start
If you have an older package of Salt where the above configuration file is not included, the SuSEfirewall2
command makes opening iptables firewall ports
very simple via the command line.
SuSEfirewall example:
SuSEfirewall2 open EXT TCP 4505
SuSEfirewall2 open EXT TCP 4506
The firewall module in YaST2 provides a text-based interface to modifying the firewall.
YaST2:
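yast2 firewall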
iptables
Different Linux distributions store their iptables rules in different places,
which makes it difficult to standardize firewall documentation. Included are
some of the more common locations, but your mileage may vary.
Fedora / RHEL / CentOS:
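/etc/sysconfig/iptables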
Arch Linux:
/etc/iptables/iptables.rules
Debian
Follow these instructions: http://wiki.debian.org/iptables
Once you've found your firewall rules, you'll need to add the two lines below to allow traffic on tcp/4505 and tcp/4506:
-A INPUT -m state --state new -m tcp -p tcp --dport 4505 -j ACCEPT
-A INPUT -m state --state new -m tcp -p tcp --dport 4506 -j ACCEPT
Ubuntu
Salt installs firewall rules in /etc/ufw/applications.d/salt.ufw. Enable with:
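ufw allow salt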
pf.conf
The BSD-family of operating systems uses packet filter (pf). The following
example describes the additions to pf.conf
needed to access the Salt
master.
pass in on $int_if proto tcp from any to $int_if port 4505
pass in on $int_if proto tcp from any to $int_if port 4506
Once these additions have been made to the pf.conf, the rules will need to be reloaded. This can be done using the pfctl command.
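# reload the ruleset from the default location (adjust the path if yours differs)
pfctl -f /etc/pf.conf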
GitFS Backend Walkthrough
While the default location of the salt state tree is on the Salt master,
in /srv/salt, the master can create a bridge to external resources for files.
One of these resources is the ability for the master to directly pull files
from a git repository and serve them to minions.
Note
This walkthrough assumes basic knowledge of Salt. To get up to speed, check
out the walkthrough.
The gitfs backend hooks into any number of remote git repositories and caches
the data from the repository on the master. This makes distributing a state
tree to multiple masters seamless and automated.
Salt's file server also has a concept of environments. When using the gitfs backend, Salt translates git branches and tags into environments, making environment management very simple. Just merging a QA or staging branch up to a production branch can be all that is required to make those file changes available to Salt.
Simple Configuration
To use the gitfs backend only two configuration changes are required on the master. The fileserver_backend option needs to be set with a value of git:
fileserver_backend:
  - git
This tells the master which fileserver backends will be searched for requested files.
Now the gitfs system needs to be configured with a remote:
gitfs_remotes:
  - git://github.com/saltstack/salt-states.git
These changes require a restart of the master. The git repo will then be cached on the master, and new requests for the salt:// protocol will send files found in the remote git repository via the master.
Note
The master caches the files from the git server and serves them out; minions do not connect directly to the git server, meaning that only requested files are delivered to minions.
Multiple Remotes
The gitfs_remotes option can accept a list of git remotes; the remotes are then searched in order for the requested file. A simple scenario can illustrate this behavior.
Assuming that the gitfs_remotes
option specifies three remotes:
gitfs_remotes:
  - git://github.com/example/first.git
  - git://github.com/example/second.git
  - file:///root/third
Note
This example is purposefully contrived to illustrate the behavior of the
gitfs backend. This example should not be read as a recommended way to lay
out files and git repos.
Note
The file:// prefix denotes a git repository in a local directory.
However, it will still use the given file:// URL as a remote,
rather than copying the git repo to the salt cache. This means that any
refs you want accessible must exist as local refs in the specified repo.
Assume that each repository contains some files:
first.git:
  top.sls
  edit/vim.sls
  edit/vimrc
  nginx/init.sls

second.git:
  edit/dev_vimrc
  haproxy/init.sls

third:
  haproxy/haproxy.conf
  edit/dev_vimrc
The repositories will be searched for files by the master in the order in which they are defined in the configuration. Therefore the remote git://github.com/example/first.git will be searched first; if the requested file is found, it is served and no further searching is executed. This means that if the file salt://haproxy/init.sls is requested, it will be pulled from the git://github.com/example/second.git git repo. If salt://haproxy/haproxy.conf is requested, it will be pulled from the third repo.
Serving from a Subdirectory
The gitfs_root
option gives the ability to serve files from a subdirectory
within the repository. The path is defined relative to the root of the
repository.
With this repository structure:
repository.git:
  somefolder
    otherfolder
      top.sls
      edit/vim.sls
      edit/vimrc
      nginx/init.sls
Configuration and files can be accessed normally with:
gitfs_root: somefolder/otherfolder
Multiple Backends
Sometimes it may make sense to use multiple backends. For instance, if sls
files are stored in git, but larger files need to be stored directly on the
master.
The logic used for multiple remotes is also used for multiple backends. If
the fileserver_backend
option contains multiple backends:
fileserver_backend:
  - roots
  - git
Then the roots backend (the default backend of files in /srv/salt) will be searched first for the requested file; if it is not found on the master, the git remotes will be searched.
Branches, environments and top.sls files
As stated above, when using the gitfs backend, branches will be mapped to environments using the branch name as identifier. There is an exception to this rule though: the master branch is implicitly mapped to the base environment.
Therefore, for a typical base, qa, dev setup, you'll have to create the following branches: master, qa, and dev.
Also, top.sls
files from different branches will be merged into one big
file at runtime. Since this could lead to hardly manageable configurations,
the recommended setup is to have the top.sls
file only in your master branch,
and use environment-specific branches for states definitions.
GitFS Remotes over SSH
In order to configure a gitfs_remotes repository over SSH transport, the git+ssh URL form must be used.
gitfs_remotes:
  - git+ssh://git@github.com/example/salt-states.git
The private key used to connect to the repository must be located in ~/.ssh/id_rsa
for the user running the salt-master.
Note
GitFS requires the Python module GitPython, version 0.3.0 or newer.
Why aren't my custom modules/states/etc. syncing to my Minions?
In versions 0.16.3 and older, when using the git fileserver backend, certain versions of GitPython may generate errors
when fetching, which Salt fails to catch. While not fatal to the fetch process,
these interrupt the fileserver update that takes place before custom types are
synced, and thus interrupt the sync itself. Try disabling the git fileserver
backend in the master config, restarting the master, and attempting the sync
again.
This issue will be worked around in Salt 0.16.4 and newer.
Remote execution tutorial
Before continuing make sure you have a working Salt installation by
following the installation and the configuration instructions.
Order your minions around
Now that you have a master and at least one minion communicating with each other, you can run commands on the minion via the salt command. Salt calls are comprised of three main components:
salt '<target>' <function> [arguments]
target
The target component allows you to filter which minions should run the
following function. The default filter is a glob on the minion id. For example:
salt '*' test.ping
salt '*.example.org' test.ping
Targets can be based on minion system information using the Grains system:
salt -G 'os:Ubuntu' test.ping
Targets can be filtered by regular expression:
salt -E 'virtmach[0-9]' test.ping
Targets can be explicitly specified in a list:
salt -L 'foo,bar,baz,quo' test.ping
Or multiple target types can be combined in one command:
salt -C 'G@os:Ubuntu and webser* or E@database.*' test.ping
function
A function is some functionality provided by a module. Salt ships with a large
collection of available functions. List all available functions on your
minions:
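salt '*' sys.doc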
Here are some examples:
Show all currently available minions:
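salt '*' test.ping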
Run an arbitrary shell command:
salt '*' cmd.run 'uname -a'
arguments
Space-delimited arguments to the function:
salt '*' cmd.exec_code python 'import sys; print sys.version'
Optional keyword arguments are also supported:
salt '*' pip.install salt timeout=5 upgrade=True
They are always in the form of kwarg=argument.
Multi Master Tutorial
As of Salt 0.16.0, the ability to connect minions to multiple masters has been
made available. The multi-master system allows for redundancy of Salt
masters and facilitates multiple points of communication out to minions. When
using a multi-master setup, all masters are running hot, and any active master
can be used to send commands out to the minions.
In 0.16.0, the masters do not share any information; keys need to be accepted on both masters, and shared files need to be shared manually or with tools like the git fileserver backend to ensure that the file_roots are kept consistent.
Summary of Steps
- Create a redundant master server
- Copy primary master key to redundant master
- Start redundant master
- Configure minions to connect to redundant master
- Restart minions
- Accept keys on redundant master
Prepping a Redundant Master
The first task is to prepare the redundant master. There is only one
requirement when preparing a redundant master, which is that masters share the
same private key. When the first master was created, the master's identifying
key was generated and placed in the master's pki_dir. The default location of the key is /etc/salt/pki/master/master.pem. Take this key and copy it to
the same location on the redundant master. Assuming that no minions have yet
been connected to the new redundant master, it is safe to delete any existing
key in this location and replace it.
Note
There is no logical limit to the number of redundant masters that can be
used.
Once the new key is in place, the redundant master can be safely started.
Sharing Files Between Masters
Salt does not automatically share files between multiple masters. A number of
files should be shared or sharing of these files should be strongly considered.
Minion Keys
Minion keys can be accepted the normal way using salt-key on both
masters. Keys accepted, deleted, or rejected on one master will NOT be
automatically managed on redundant masters; this needs to be taken care of by
running salt-key on both masters or sharing the
/etc/salt/pki/master/{minions,minions_pre,minions_rejected}
directories
between masters.
Note
While sharing the /etc/salt/pki/master directory will work, it is
strongly discouraged, since allowing access to the master.pem key
outside of Salt creates a SERIOUS security risk.
File_Roots
The file_roots
contents should be kept consistent between
masters. Otherwise state runs will not always be consistent on minions since
instructions managed by one master will not agree with other masters.
The recommended way to sync these is to use a fileserver backend like gitfs or
to keep these files on shared storage.
Pillar_Roots
Pillar roots should be given the same considerations as file_roots.
Master Configurations
While reasons may exist to maintain separate master configurations, it is wise
to remember that each master maintains independent control over minions.
Therefore, access controls should be in sync between masters unless a valid
reason otherwise exists to keep them inconsistent.
These access control options include but are not limited to:
- external_auth
- client_acl
- peer
- peer_run
Pillar Walkthrough
Note
This walkthrough assumes that the reader has already completed the initial
Salt Stack walkthrough.
The pillar interface inside of Salt is one of the most important components
of a Salt deployment. Pillar is the interface used to generate arbitrary data
for specific minions. The data generated in pillar is made available to almost
every component of Salt and is used for a number of purposes:
- Highly Sensitive Data:
- Information transferred via pillar is guaranteed to only be presented to the minions that are targeted. This makes pillar the engine to use in Salt for managing security information, such as cryptographic keys and passwords.
- Minion Configuration:
- Minion modules such as the execution modules, states, and returners can
often be configured via data stored in pillar.
- Variables:
- Variables which need to be assigned to specific minions or groups of
minions can be defined in pillar and then accessed inside sls formulas
and template files.
- Arbitrary Data:
- Pillar can contain any basic data structure, so a list of values or a key/value store can be defined, making it easy to iterate over a group of values in sls formulas.
Pillar is therefore one of the most important systems when using Salt. This walkthrough is designed to get a simple pillar up and running in a few minutes and then to dive into the capabilities of pillar and where the data is available.
Setting Up Pillar
The pillar is already running in Salt by default. The data in the minion's
pillars can be seen via the following command:
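salt '*' pillar.items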
Note
Prior to version 0.16.2, this function is named pillar.data. This function name is still supported for backwards compatibility.
By default the contents of the master configuration file are loaded into pillar for all minions. This enables the master configuration file to be used for global configuration of minions.
The pillar is built in a similar fashion to the state tree: it is comprised of
sls files and has a top file, just like the state tree. The pillar is stored
in a different location on the Salt master than the state tree. The default
location for the pillar is /srv/pillar.
Note
The pillar location can be configured via the pillar_roots option inside
the master configuration file.
To start setting up the pillar, the /srv/pillar directory needs to be present:
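mkdir /srv/pillar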
Now a simple top file, following the same format as the top file used for
states, needs to be created:
/srv/pillar/top.sls:
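base:
  '*':
    - data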
This top file associates the data.sls file to all minions. Now the
/srv/pillar/data.sls file needs to be populated:
/srv/pillar/data.sls:
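info: some data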
Now that the file has been saved the minions' pillars will be updated:
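salt '*' saltutil.refresh_pillar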
The key info should now appear in the returned pillar data.
More Complex Data
Pillar files are sls files, just like states, but unlike states they do not
need to define formulas; the data can be arbitrary. This example, for
instance, sets up user data with a UID:
/srv/pillar/users/init.sls:
users:
  thatch: 1000
  shouse: 1001
  utahdave: 1002
  redbeard: 1003
Note
The same directory lookups that exist in states exist in pillar, so the
file users/init.sls can be referenced with users in the top file.
The top file will need to be updated to include this sls file:
/srv/pillar/top.sls:
base:
  '*':
    - data
    - users
Now the data will be available to the minions. To use the pillar data in a
state just access the pillar via Jinja:
/srv/salt/users/init.sls:
{% for user, uid in pillar.get('users', {}).items() %}
{{user}}:
  user.present:
    - uid: {{uid}}
{% endfor %}
This approach allows for users to be safely defined in a pillar and then the
user data applied in an sls file.
Parameterizing States With Pillar
One of the most powerful abstractions in pillar is the ability to parameterize
states. Instead of defining macros or functions within the state context the
entire state tree can be freely parameterized relative to the minion's pillar.
This approach allows for Salt to be very flexible while staying very
straightforward. It also means that simple sls formulas used in the state tree
can be directly parameterized without needing to refactor the state tree.
A simple example is to set up a mapping of package names in pillar for
separate Linux distributions:
/srv/pillar/pkg/init.sls:
pkgs:
  {% if grains['os_family'] == 'RedHat' %}
  apache: httpd
  vim: vim-enhanced
  {% elif grains['os_family'] == 'Debian' %}
  apache: apache2
  vim: vim
  {% elif grains['os'] == 'Arch' %}
  apache: apache
  vim: vim
  {% endif %}
The new pkg sls needs to be added to the top file:
/srv/pillar/top.sls:
base:
  '*':
    - data
    - users
    - pkg
Now the minions will auto map values based on respective operating systems
inside of the pillar, so sls files can be safely parameterized:
/srv/salt/apache/init.sls:
apache:
  pkg.installed:
    - name: {{ pillar['pkgs']['apache'] }}
Or, if no pillar is available, a default can be set as well:
Note
The function pillar.get used in this example was added to Salt in
version 0.14.0.
/srv/salt/apache/init.sls:
apache:
  pkg.installed:
    - name: {{ salt['pillar.get']('pkgs:apache', 'httpd') }}
In the above example, if the pillar value pillar['pkgs']['apache'] is not
set in the minion's pillar, then the default of httpd will be used.
Note
Under the hood, pillar is just a python dict, so python dict methods such
as get and items can be used.
Pillar Makes Simple States Grow Easily
One of the design goals of pillar is to make simple sls formulas easily grow
into more flexible formulas without refactoring or complicating the states.
A simple formula:
/srv/salt/edit/vim.sls:
vim:
  pkg:
    - installed
/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - mode: 644
    - user: root
    - group: root
    - require:
      - pkg: vim
Can be easily transformed into a powerful, parameterized formula:
/srv/salt/edit/vim.sls:
vim:
  pkg:
    - installed
    - name: {{ pillar['pkgs']['vim'] }}
/etc/vimrc:
  file.managed:
    - source: {{ pillar['vimrc'] }}
    - mode: 644
    - user: root
    - group: root
    - require:
      - pkg: vim
Where the vimrc source location can now be changed via pillar:
/srv/pillar/edit/vim.sls:
{% if grains['id'].startswith('dev') %}
vimrc: salt://edit/dev_vimrc
{% elif grains['id'].startswith('qa') %}
vimrc: salt://edit/qa_vimrc
{% else %}
vimrc: salt://edit/vimrc
{% endif %}
This ensures that the right vimrc is sent out to the correct minions.
More On Pillar
The pillar data is generated on the Salt master and securely distributed to
minions. Salt is not restricted to the pillar sls files when defining the
pillar but can retrieve data from external sources. This can be useful when
information about an infrastructure is stored in a separate location.
Reference information on pillar and the external pillar interface can be found
in the Salt Stack documentation:
Pillar
Preseed Minion with Accepted Key
In some situations, it is not convenient to wait for a minion to start before
accepting its key on the master. For instance, you may want the minion to
bootstrap itself as soon as it comes online. You may also want to let your
developers provision new development machines on the fly.
There is a general four step process to do this:
- Generate the keys on the master:
root@saltmaster# salt-key --gen-keys=[key_name]
Pick a name for the key, such as the minion's id.
- Add the public key to the accepted minion folder:
root@saltmaster# cp key_name.pub /etc/salt/pki/master/minions/[minion_id]
It is necessary that the public key file has the same name as your minion id.
This is how Salt matches minions with their keys. Also note that the pki folder
could be in a different location, depending on your OS or if specified in the
master config file.
- Distribute the minion keys.
There is no single method to get the keypair to your minion. If you are
spooling up minions on EC2, you could pass them in using user_data or a
cloud-init script. If you are handing them off to a team of developers for
provisioning dev machines, you will need a secure file transfer.
Security Warning
Since the minion key is already accepted on the master, distributing
the private key poses a potential security risk. A malicious party
will have access to your entire state tree and other sensitive data.
- Preseed the Minion with the keys
You will want to place the minion keys before starting the salt-minion daemon:
/etc/salt/pki/minion/minion.pem
/etc/salt/pki/minion/minion.pub
Once in place, you should be able to start salt-minion and run
salt-call state.highstate
or any other salt commands that require master
authentication.
Salt Masterless Quickstart
Running a masterless salt-minion lets you use salt's configuration management
for a single machine. It is also useful for testing out state trees before
deploying to a production setup.
The only real difference in using a standalone minion is that instead of
issuing commands with salt, we use the salt-call command, like this:
salt-call --local state.highstate
Bootstrap Salt Minion
First we need to install the salt minion. The salt-bootstrap script makes
this incredibly easy for any OS with a Bourne shell. You can use it like this:
wget -O - http://bootstrap.saltstack.org | sudo sh
Or see the salt-bootstrap documentation for other one liners. Additionally,
if you are using Vagrant to test out salt, the salty-vagrant tool will
provision the VM for you.
Create State Tree
Now we build an example state tree. This is where the configuration is defined.
For more in depth directions, see the tutorial.
- Create the top.sls file:
/srv/salt/top.sls:
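base:
  '*':
    - webserver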
- Create our webserver state tree:
/srv/salt/webserver.sls:
apache:           # ID declaration
  pkg:            # state declaration
    - installed   # function declaration
The only thing left is to provision our minion using the highstate command.
salt-call also gives us an easy way to get verbose output:
salt-call --local state.highstate -l debug
The --local flag tells the salt-minion to look for the state tree in the
local file system. Normally the minion copies the state tree from the master
and executes it from there.
That's it, good luck!
Standalone Minion
Since the Salt minion contains such extensive functionality it can be useful
to run it standalone. A standalone minion can be used to do a number of
things:
- Stand up a master server via States (Salting a Salt Master)
- Use salt-call commands on a system without connectivity to a master
- Masterless States, run states entirely from files local to the minion
Telling Salt Call to Run Masterless
The salt-call command is used to run module functions locally on a minion
instead of executing them from the master. Normally the salt-call command
checks into the master to retrieve file server and pillar data, but when
running standalone salt-call needs to be instructed to not check the master for
this data. To instruct the minion to not look for a master when running
salt-call, the file_client configuration option needs to be set.
By default the file_client is set to remote so that the minion knows that
file server and pillar data are to be gathered from the master. When setting
the file_client option to local, the minion is configured to not gather this
data from the master.
Now the salt-call command will not look for a master and will assume that the
local system has all of the file and pillar resources.
Running States Masterless
The state system can be easily run without a Salt master, with all needed files
local to the minion. To do this the minion configuration file needs to be set
up to know how to return file_roots information like the master. The file_roots
setting defaults to /srv/salt for the base environment just like on the master:
file_roots:
  base:
    - /srv/salt
Now set up the Salt State Tree, top file, and SLS modules in the same way that
they would be set up on a master. With the file_client option set to local
and an available state tree, calls to functions in the state module will use
the information in the file_roots on the minion instead of checking in with
the master.
Remember that when creating a state tree on a minion there are no syntax or
path changes needed, SLS modules written to be used from a master do not need
to be modified in any way to work with a minion.
This makes it easy to "script" deployments with Salt states without having to
set up a master, and allows for these SLS modules to be easily moved into a
Salt master as the deployment grows.
The declared state can now be executed with:
salt-call state.highstate
Or the salt-call command can be executed with the --local flag, which makes
it unnecessary to change the configuration file:
salt-call state.highstate --local
How Do I Use Salt States?
Simplicity, Simplicity, Simplicity
Many of the most powerful and useful engineering solutions are founded on
simple principles. Salt States strive to do just that: K.I.S.S. (Keep It
Stupidly Simple)
The core of the Salt State system is the SLS, or SaLt State file. The SLS is
a representation of the state in which a system should be, and is set up to
contain this data in a simple format. This is often called configuration
management.
Note
This is just the beginning of using states; make sure to read up on
Pillar next.
It is All Just Data
Before delving into the particulars, it will help to understand that the SLS
file is just a data structure under the hood. While understanding that the SLS
is just a data structure isn't critical for understanding and making use of
Salt States, it should help bolster knowledge of where the real power is.
SLS files are therefore, in reality, just dictionaries, lists, strings, and numbers.
By using this approach Salt can be much more flexible. As one writes more state
files, it becomes clearer exactly what is being written. The result is a system
that is easy to understand, yet grows with the needs of the admin or developer.
The Top File
The example SLS files in the below sections can be assigned to hosts using a
file called top.sls. This file is described in-depth here.
Default Data - YAML
By default Salt represents the SLS data in what is one of the simplest
serialization formats available - YAML.
A typical SLS file will often look like this in YAML:
Note
These demos use some generic service and package names; different
distributions often use different names for packages and services. For
instance, apache should be replaced with httpd on a Red Hat system.
Salt uses the name of the init script, systemd name, upstart name, etc.
based on the underlying service management for the platform. To get a
list of the available service names on a platform, execute the
service.get_all salt function.
Information on how to make states work with multiple distributions
is later in the tutorial.
apache:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: apache
This SLS data will ensure that the package named apache is installed, and
that the apache service is running. The components can be explained in a
simple way.
The first line is the ID for a set of data, and it is called the ID
Declaration. This ID sets the name of the thing that needs to be manipulated.
The second and fourth lines are the start of the State Declarations, so they
are using the pkg and service states respectively. The pkg state manages a
software package to be installed via the system's native package manager,
and the service state manages a system daemon.
The third and fifth lines are the function to run. This function defines what
state the named package and service should be in. Here, the package is to be
installed, and the service should be running.
Finally, on line six, is the word require. This is called a Requisite
Statement, and it makes sure that the Apache service is only started after
a successful installation of the apache package.
Adding Configs and Users
When setting up a service like an Apache web server, many more components may
need to be added. The Apache configuration file will most likely be managed,
and a user and group may need to be set up.
apache:
pkg:
- installed
service:
- running
- watch:
- pkg: apache
- file: /etc/httpd/conf/httpd.conf
- user: apache
user.present:
- uid: 87
- gid: 87
- home: /var/www/html
- shell: /bin/nologin
- require:
- group: apache
group.present:
- gid: 87
- require:
- pkg: apache
/etc/httpd/conf/httpd.conf:
file.managed:
- source: salt://apache/httpd.conf
- user: root
- group: root
- mode: 644
This SLS data greatly extends the first example, and includes a config file,
a user, a group, and a new requisite statement: watch.
Adding more states is easy. Since the new user and group states are under
the Apache ID, the user and group will be the Apache user and group. The
require statements will make sure that the user will only be made after
the group, and that the group will be made only after the Apache package is
installed.
Next, the require statement under service was changed to watch, and is
now watching 3 states instead of just one. The watch statement does the same
thing as require, making sure that the other states run before running the
state with a watch, but it adds an extra component. The watch statement
will run the state's watcher function for any changes to the watched states.
So if the package was updated, the config file changed, or the user
uid modified, then the service state's watcher will be run. The service
state's watcher just restarts the service, so in this case, a change in the
config file will also trigger a restart of the respective service.
Moving Beyond a Single SLS
When setting up Salt States in a scalable manner, more than one SLS will need
to be used. The above examples were in a single SLS file, but two or more
SLS files can be combined to build out a State Tree. The above example also
references a file with a strange source - salt://apache/httpd.conf. That
file will need to be available as well.
The SLS files are laid out in a directory structure on the Salt master; an
SLS is just a file and files to download are just files.
The Apache example would be laid out in the root of the Salt file server like
this:
apache/init.sls
apache/httpd.conf
So the httpd.conf is just a file in the apache directory, and is referenced
directly.
But when using more than one single SLS file, more components can be added to
the toolkit. Consider this SSH example:
ssh/init.sls:
openssh-client:
pkg.installed
/etc/ssh/ssh_config:
file.managed:
- user: root
- group: root
- mode: 644
- source: salt://ssh/ssh_config
- require:
- pkg: openssh-client
ssh/server.sls:
include:
- ssh
openssh-server:
pkg.installed
sshd:
service.running:
- require:
- pkg: openssh-client
- pkg: openssh-server
- file: /etc/ssh/banner
- file: /etc/ssh/sshd_config
/etc/ssh/sshd_config:
file.managed:
- user: root
- group: root
- mode: 644
- source: salt://ssh/sshd_config
- require:
- pkg: openssh-server
/etc/ssh/banner:
file:
- managed
- user: root
- group: root
- mode: 644
- source: salt://ssh/banner
- require:
- pkg: openssh-server
Note
Notice that we use two similar ways of denoting that a file
is managed by Salt. In the /etc/ssh/sshd_config state section above,
we use the file.managed state declaration whereas with the
/etc/ssh/banner state section, we use the file state declaration
and add a managed attribute to that state declaration. Both ways
produce an identical result; the first way -- using file.managed --
is merely a shortcut.
Now our State Tree looks like this:
apache/init.sls
apache/httpd.conf
ssh/init.sls
ssh/server.sls
ssh/banner
ssh/ssh_config
ssh/sshd_config
This example now introduces the include statement. The include statement
includes another SLS file so that components found in it can be required,
watched or, as will soon be demonstrated, extended.
The include statement allows for states to be cross linked. When an SLS
has an include statement it is literally extended to include the contents of
the included SLS files.
Note that some of the SLS files are called init.sls, while others are not. More
info on what this means can be found in the States Tutorial.
Extending Included SLS Data
Sometimes SLS data needs to be extended. Perhaps the apache service needs to
watch additional resources, or under certain circumstances a different file
needs to be placed.
In these examples, the first will add a custom banner to ssh and the second will
add more watchers to apache to include mod_python.
ssh/custom-server.sls:
include:
- ssh.server
extend:
/etc/ssh/banner:
file:
- source: salt://ssh/custom-banner
python/mod_python.sls:
include:
- apache
extend:
apache:
service:
- watch:
- pkg: mod_python
mod_python:
pkg.installed
The custom-server.sls file uses the extend statement to overwrite where the
banner is being downloaded from, and therefore changes what file is being used
to configure the banner.
In the new mod_python SLS the mod_python package is added, but more importantly
the apache service was extended to also watch the mod_python package.
Using extend with require or watch
The extend statement works differently for require or watch.
It appends to, rather than replaces, the requisite component.
Understanding the Render System
Since SLS data is simply that (data), it does not need to be represented
with YAML. Salt defaults to YAML because it is very straightforward and easy
to learn and use. But the SLS files can be rendered from almost any imaginable
medium, so long as a renderer module is provided.
The default rendering system is the yaml_jinja renderer. The yaml_jinja
renderer will first pass the template through the Jinja2 templating system,
and then through the YAML parser. The benefit here is that full programming
constructs are available when creating SLS files.
Other available renderers are yaml_mako and yaml_wempy, which use the Mako
and Wempy templating systems respectively rather than the Jinja templating
system, and, more notably, the pure Python or py and pydsl renderers.
The py renderer allows for SLS files to be written in pure Python, allowing
for the utmost level of flexibility and power when preparing SLS data, while
the pydsl renderer provides a flexible, domain-specific language for authoring
SLS data in Python.
Note
The templating engines described above aren't just available in SLS files.
They can also be used in file.managed states, making file management much
more dynamic and flexible. Some examples for using templates in managed files
can be found in the documentation for the file states, as well as the MooseFS
example below.
Getting to Know the Default - yaml_jinja
The default renderer, yaml_jinja, allows for use of the Jinja templating
system. A guide to the Jinja templating system can be found here:
http://jinja.pocoo.org/docs
When working with renderers a few very useful bits of data are passed in. In
the case of templating engine based renderers, three critical components are
available: salt, grains, and pillar. The salt object allows for any Salt
function to be called from within the template, and grains allows for the
Grains to be accessed from within the template. A few examples:
apache/init.sls:
apache:
pkg.installed:
{% if grains['os'] == 'RedHat'%}
- name: httpd
{% endif %}
service.running:
{% if grains['os'] == 'RedHat'%}
- name: httpd
{% endif %}
- watch:
- pkg: apache
- file: /etc/httpd/conf/httpd.conf
- user: apache
user.present:
- uid: 87
- gid: 87
- home: /var/www/html
- shell: /bin/nologin
- require:
- group: apache
group.present:
- gid: 87
- require:
- pkg: apache
/etc/httpd/conf/httpd.conf:
file.managed:
- source: salt://apache/httpd.conf
- user: root
- group: root
- mode: 644
This example is simple. If the os grain states that the operating system is
Red Hat, then the name of the Apache package and service needs to be httpd.
A more aggressive way to use Jinja can be found here, in a module to set up
a MooseFS distributed filesystem chunkserver:
moosefs/chunk.sls:
include:
- moosefs
{% for mnt in salt['cmd.run']('ls /dev/data/moose*').split() %}
/mnt/moose{{ mnt[-1] }}:
mount.mounted:
- device: {{ mnt }}
- fstype: xfs
- mkmnt: True
file.directory:
- user: mfs
- group: mfs
- require:
- user: mfs
- group: mfs
{% endfor %}
/etc/mfshdd.cfg:
file.managed:
- source: salt://moosefs/mfshdd.cfg
- user: root
- group: root
- mode: 644
- template: jinja
- require:
- pkg: mfs-chunkserver
/etc/mfschunkserver.cfg:
file.managed:
- source: salt://moosefs/mfschunkserver.cfg
- user: root
- group: root
- mode: 644
- template: jinja
- require:
- pkg: mfs-chunkserver
mfs-chunkserver:
pkg:
- installed
mfschunkserver:
service:
- running
- require:
{% for mnt in salt['cmd.run']('ls /dev/data/moose*').split() %}
- mount: /mnt/moose{{ mnt[-1] }}
- file: /mnt/moose{{ mnt[-1] }}
{% endfor %}
- file: /etc/mfschunkserver.cfg
- file: /etc/mfshdd.cfg
- file: /var/lib/mfs
This example shows much more of the available power of Jinja.
Multiple for loops are used to dynamically detect available hard drives
and set them up to be mounted, and the salt object is used multiple times
to call shell commands to gather data.
Introducing the Python and the PyDSL Renderers
Sometimes the chosen default renderer might not have enough logical power to
accomplish the needed task. When this happens, the Python renderer can be
used. Normally a YAML renderer should be used for the majority of SLS files,
but an SLS file set to use another renderer can be easily added to the tree.
This example shows a very basic Python SLS file:
python/django.sls:
#!py
def run():
    '''
    Install the django package
    '''
    return {'include': ['python'],
            'django': {'pkg': ['installed']}}
This is a very simple example; the first line has an SLS shebang that
tells Salt to not use the default renderer, but to use the py renderer.
Then the run function is defined. The return value from the run function
must be a Salt friendly data structure, better known as a Salt HighState
data structure.
Alternatively, using the pydsl renderer, the above example can be written
more succinctly as:
python/django.sls:
#!pydsl
include('python', delayed=True)
state('django').pkg.installed()
These Python examples would look like this if they were written in YAML:
include:
  - python
django:
  pkg.installed
This example clearly illustrates that: one, using the YAML renderer by default
is a wise decision; and two, unbridled power can be obtained where needed by
using a pure Python SLS.
Running and Debugging Salt States
Once the rules in an SLS are ready, they should be tested to ensure they
work properly. To invoke these rules, simply execute
salt '*' state.highstate
on the command line. If you get back only hostnames with a : after, but no
return, chances are there is a problem with one or more of the sls files. On
the minion, use the salt-call command:
salt-call state.highstate -l debug
to examine the output for errors.
This should help troubleshoot the issue. The minions can also be started in
the foreground in debug mode: salt-minion -l debug.
Next Reading
With an understanding of states, the next recommendation is to become familiar
with Salt's pillar interface:
States tutorial, part 1
The purpose of this tutorial is to demonstrate how quickly you can configure a
system to be managed by Salt States. For detailed information about the state
system please refer to the full states reference.
This tutorial will walk you through using Salt to configure a minion to run the
Apache HTTP server and to ensure the server is running.
Before continuing make sure you have a working Salt installation by
following the installation and the configuration instructions.
Setting up the Salt State Tree
States are stored in text files on the master and transferred to the minions on
demand via the master's File Server. The collection of state files make up the
State Tree.
To start using a central state system in Salt, the Salt File Server must first
be set up. Edit the master config file (file_roots) and uncomment the
following lines:
file_roots:
  base:
    - /srv/salt
Note
If you are deploying on FreeBSD via ports, the file_roots path defaults
to /usr/local/etc/salt/states.
Restart the Salt master in order to pick up this change:
pkill salt-master
salt-master -d
Preparing the Top File
On the master, in the directory uncommented in the previous step
(/srv/salt by default), create a new file called top.sls and add the
following:
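base:
  '*':
    - webserver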
The top file is separated into environments (discussed later). The default
environment is base. Under the base environment a collection of minion
matches is defined; for now simply specify all hosts (*).
Targeting minions
The expressions can use any of the targeting mechanisms used by Salt —
minions can be matched by glob, PCRE regular expression, or by grains. For example:
base:
  'os:Fedora':
    - match: grain
    - webserver
Create an sls module
In the same directory as the top file, create a file named webserver.sls
containing the following:
apache:           # ID declaration
  pkg:            # state declaration
    - installed   # function declaration
The first line, called the ID declaration, is an arbitrary identifier.
In this case it defines the name of the package to be installed. NOTE: the
package name for the Apache httpd web server may differ depending on OS or
distro — for example, on Fedora it is httpd but on Debian/Ubuntu it is
apache2.
The second line, called the state declaration, defines which of the
Salt States we are using. In this example, we are using the pkg state
to ensure that a given package is installed.
The third line, called the function declaration, defines which function
in the pkg state module to call.
Renderers
States sls files can be written in many formats. Salt requires only
a simple data structure and is not concerned with how that data structure
is built. Templating languages and DSLs are a dime-a-dozen and everyone
has a favorite.
Building the expected data structure is the job of Salt renderers and they are dead-simple to write.
In this tutorial we will be using YAML in Jinja2 templates, which is the
default format. The default can be changed by editing renderer in the
master configuration file.
Install the package
Next, let's run the state we created. Open a terminal on the master and run:
% salt '*' state.highstate
Our master is instructing all targeted minions to run state.highstate. When
a minion executes a highstate call it will download the top file and attempt
to match the expressions. When it does match an expression, the modules
listed for it will be downloaded, compiled, and executed.
Once completed, the minion will report back with a summary of all actions taken
and all changes made.
SLS File Namespace
Note that in the example above, the SLS file webserver.sls was referred to
simply as webserver. The namespace for SLS files follows a few simple rules:
- The .sls is discarded (i.e. webserver.sls becomes webserver).
- Subdirectories can be used for better organization.
  - Each subdirectory is represented by a dot.
  - webserver/dev.sls is referred to as webserver.dev.
- A file called init.sls in a subdirectory is referred to by the path of the
  directory. So, webserver/init.sls is referred to as webserver.
- If both webserver.sls and webserver/init.sls happen to exist,
  webserver/init.sls will be ignored and webserver.sls will be the file
  referred to as webserver.
Troubleshooting Salt
If the expected output isn't seen, the following tips can help to
narrow down the problem.
- Turn up logging
Salt can be quite chatty when you change the logging setting to debug:
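salt-minion -l debug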
- Run the minion in the foreground
By not starting the minion in daemon mode (-d) one can view any output from
the minion as it works:
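salt-minion &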
- Increase the default timeout value when running salt. For example, to
change the default timeout to 60 seconds:
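salt '*' state.highstate -t 60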
For best results, combine all three:
salt-minion -l debug & # On the minion
salt '*' state.highstate -t 60 # On the master
Next steps
This tutorial focused on getting a simple Salt States configuration working.
Part 2 will build on this example to cover more advanced
sls syntax and will explore more of the states that ship with Salt.
States tutorial, part 2
Note
This tutorial builds on topics covered in part 1. It is
recommended that you begin there.
In the last part of the Salt States tutorial we covered
the basics of installing a package. We will now modify our webserver.sls
file to have requirements, and use even more Salt States.
Call multiple States
You can specify multiple state declarations under
an ID declaration. For example, a quick modification to our
webserver.sls
to also start Apache if it is not running:
apache:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: apache
Try stopping Apache before running state.highstate
once again and observe
the output.
Expand the SLS module
As you have seen, SLS modules are appended with the file extension .sls and
are referenced by name starting at the root of the state tree. An SLS module
can also be defined as a directory. Demonstrate that now by creating a
directory named webserver and moving and renaming webserver.sls to
webserver/init.sls. Your state directory should now look like this:
|- top.sls
`- webserver/
`- init.sls
Organizing SLS modules
You can place additional .sls files in a state file directory. This
affords much cleaner organization of your state tree on the filesystem. For
example, if we created a webserver/django.sls file, that module would be
referenced as webserver.django.
In addition, States provide powerful includes and extending functionality
which we will cover in Part 3.
Require other states
We now have a working installation of Apache so let's add an HTML file to
customize our website. It isn't exactly useful to have a website without a
webserver so we don't want Salt to install our HTML file until Apache is
installed and running. Include the following at the bottom of your
webserver/init.sls file:
 1  apache:
 2    pkg:
 3      - installed
 4    service:
 5      - running
 6      - require:
 7        - pkg: apache
 8
 9  /var/www/index.html:                        # ID declaration
10    file:                                     # state declaration
11      - managed                               # function
12      - source: salt://webserver/index.html   # function arg
13      - require:                              # requisite declaration
14        - pkg: apache                         # requisite reference
Line 9 is the ID declaration. In this example it is the
location where we want to install our custom HTML file. (Note: the default
location that Apache serves may differ from the above on your OS or distro.
/srv/www could also be a likely place to look.)
Line 10 is the state declaration. This example uses the Salt file state.
Line 11 is the function declaration. The managed function will download a
file from the master and install it in the location specified.
Line 12 is a function arg declaration which, in this example, passes the
source argument to the managed function.
Line 13 is a requisite declaration.
Line 14 is a requisite reference which refers to a state and an ID.
In this example, it is referring to the ID declaration
from our example in
part 1. This declaration tells Salt not to install the HTML
file until Apache is installed.
Next, create the index.html file and save it in the webserver directory:
<html>
<head><title>Salt rocks</title></head>
<body>
<h1>This file brought to you by Salt</h1>
</body>
</html>
Last, call state.highstate again and the minion will fetch and execute the
highstate as well as our HTML file from the master using Salt's File Server:
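salt '*' state.highstate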
Verify that Apache is now serving your custom HTML.
require vs. watch
There are two requisite declarations, “require” and “watch”. Not every state
supports “watch”. The service state does support “watch” and will restart a
service based on the watch condition.
For example, if you use Salt to install an Apache virtual host
configuration file and want to restart Apache whenever that file is changed
you could modify our Apache example from earlier as follows:
/etc/httpd/extra/httpd-vhosts.conf:
  file:
    - managed
    - source: salt://webserver/httpd-vhosts.conf
apache:
  pkg:
    - installed
  service:
    - running
    - watch:
      - file: /etc/httpd/extra/httpd-vhosts.conf
    - require:
      - pkg: apache
If the pkg and service names differ on your OS or distro of choice, you can
specify each one separately using a name declaration, which is
explained in Part 3.
Next steps
In part 3 we will discuss how to use includes, extends and
templating to make a more complete State Tree configuration.
States tutorial, part 3
Note
This tutorial builds on topics covered in part 1 and
part 2. It is recommended that you begin there.
This part of the tutorial will cover more advanced templating and
configuration techniques for sls files.
Templating SLS modules
SLS modules may require programming logic or inline execution. This is
accomplished with module templating. The default module templating system used
is Jinja2 and may be configured by changing the renderer value in the
master config.
All states are passed through a templating system when they are initially read.
To make use of the templating system, simply add some templating markup.
An example of an sls module with templating markup may look like this:
{% for usr in ['moe','larry','curly'] %}
{{ usr }}:
user.present
{% endfor %}
This templated sls file once generated will look like this:
moe:
user.present
larry:
user.present
curly:
user.present
Here's a more complex example:
{% for usr in 'moe','larry','curly' %}
{{ usr }}:
group:
- present
user:
- present
- gid_from_name: True
- require:
- group: {{ usr }}
{% endfor %}
Using Grains in SLS modules
Often times a state will need to behave differently on different systems.
Salt grains objects are made available
in the template context. The grains can be used from within sls modules:
apache:
pkg.installed:
{% if grains['os'] == 'RedHat' %}
- name: httpd
{% elif grains['os'] == 'Ubuntu' %}
- name: apache2
{% endif %}
Calling Salt modules from templates
All of the Salt modules loaded by the minion are available within the
templating system. This allows data to be gathered in real time on the target
system. It also allows for shell commands to be run easily from within the sls
modules.
The Salt module functions are also made available in the template context as
salt:
moe:
user:
- present
- gid: {{ salt['file.group_to_gid']('some_group_that_exists') }}
Note that for the above example to work, some_group_that_exists must exist
before the state file is processed by the templating engine.
Below is an example that uses the network.hw_addr function to retrieve the
MAC address for eth0:
salt['network.hw_addr']('eth0')
Advanced SLS module syntax
Lastly, we will cover some incredibly useful techniques for more complex State
trees.
A previous example showed how to spread a Salt tree across several files.
Similarly, requisites span multiple files by
using an include declaration. For example:
python/python-libs.sls:
python-dateutil:
pkg.installed
python/django.sls:
include:
- python.python-libs
django:
pkg.installed:
- require:
- pkg: python-dateutil
You can modify previous declarations by using an extend declaration. For
example the following modifies the Apache tree to also restart Apache when the
vhosts file is changed:
apache/apache.sls:
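# (contents not shown in the original; presumably the basic Apache states
# from earlier, along these lines)
apache:
  pkg:
    - installed
  service:
    - running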
apache/mywebsite.sls:
include:
- apache.apache
extend:
apache:
service:
- running
- watch:
- file: /etc/httpd/extra/httpd-vhosts.conf
/etc/httpd/extra/httpd-vhosts.conf:
file.managed:
- source: salt://apache/httpd-vhosts.conf
Using extend with require or watch
The extend statement works differently for require or watch.
It appends to, rather than replaces, the requisite component.
You can override the ID declaration by using a name declaration. For
example, the previous example is a bit more maintainable if
rewritten as follows:
apache/mywebsite.sls:
include:
- apache.apache
extend:
apache:
service:
- running
- watch:
- file: mywebsite
mywebsite:
file.managed:
- name: /etc/httpd/extra/httpd-vhosts.conf
- source: salt://apache/httpd-vhosts.conf
Even more powerful is using a names declaration to override the
ID declaration for multiple states at once. This often can remove the
need for looping in a template. For example, the first example in this tutorial
can be rewritten without the loop:
stooges:
user.present:
- names:
- moe
- larry
- curly
Next steps
In part 4 we will discuss how to use salt's file_roots to set up a
workflow in which states can be "promoted" from dev, to QA, to production.
States tutorial, part 4
Note
This tutorial builds on topics covered in part 1,
part 2 and part 3. It is recommended
that you begin there.
This part of the tutorial will show how to use salt's file_roots to set up
a workflow in which states can be "promoted" from dev, to QA, to
production.
Salt fileserver path inheritance
Salt's fileserver allows for more than one root directory per environment, like
in the below example, which uses both a local directory and a secondary
location shared to the salt master via NFS:
# In the master config file (/etc/salt/master)
file_roots:
base:
- /srv/salt
- /mnt/salt-nfs/base
Salt's fileserver collapses the list of root directories into a single virtual
environment containing all files from each root. If the same file exists at the
same relative path in more than one root, then the top-most match "wins". For
example, if /srv/salt/foo.txt and /mnt/salt-nfs/base/foo.txt both exist,
then salt://foo.txt will point to /srv/salt/foo.txt.
Environment configuration
Configure a multiple-environment setup like so:
file_roots:
base:
- /srv/salt/prod
qa:
- /srv/salt/qa
- /srv/salt/prod
dev:
- /srv/salt/dev
- /srv/salt/qa
- /srv/salt/prod
Given the path inheritance described above, files within /srv/salt/prod
would be available in all environments. Files within /srv/salt/qa would be
available in both qa and dev. Finally, the files within /srv/salt/dev would
only be available within the dev environment.
Based on the order in which the roots are defined, new files/states can be
placed within /srv/salt/dev, and pushed out to the dev hosts for testing.
Those files/states can then be moved to the same relative path within
/srv/salt/qa, and they are now available only in the dev and qa
environments, allowing them to be pushed to QA hosts and tested.
Finally, if moved to the same relative path within /srv/salt/prod, the
files are now available in all three environments.
Practical Example
As an example, consider a simple website, installed to /var/www/foobarcom.
Below is a top.sls that can be used to deploy the website:
/srv/salt/prod/top.sls:
base:
'web*prod*':
- webserver.foobarcom
qa:
'web*qa*':
- webserver.foobarcom
dev:
'web*dev*':
- webserver.foobarcom
Using pillar, roles can be assigned to the hosts:
/srv/pillar/top.sls:
base:
'web*prod*':
- webserver.prod
'web*qa*':
- webserver.qa
'web*dev*':
- webserver.dev
/srv/pillar/webserver/prod.sls:
/srv/pillar/webserver/qa.sls:
/srv/pillar/webserver/dev.sls:
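The contents of these three pillar files are not shown above; each one
presumably just sets the webserver_role key used by the SLS below, e.g.
webserver_role: prod, webserver_role: qa, and webserver_role: dev
respectively.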
And finally, the SLS to deploy the website:
/srv/salt/prod/webserver/foobarcom.sls:
{% if pillar.get('webserver_role', '') %}
/var/www/foobarcom:
file.recurse:
- source: salt://webserver/src/foobarcom
- env: {{ pillar['webserver_role'] }}
- user: www
- group: www
- dir_mode: 755
- file_mode: 644
{% endif %}
Given the above SLS, the source for the website should initially be placed in
/srv/salt/dev/webserver/src/foobarcom.
First, let's deploy to dev. Given the configuration in the top file, this can
be done using state.highstate:
salt --pillar 'webserver_role:dev' state.highstate
However, in the event that it is not desirable to apply all states configured
in the top file (which could be likely in more complex setups), it is possible
to apply just the states for the foobarcom website, using state.sls:
salt --pillar 'webserver_role:dev' state.sls webserver.foobarcom
Once the site has been tested in dev, the files can be moved from
/srv/salt/dev/webserver/src/foobarcom to
/srv/salt/qa/webserver/src/foobarcom, and deployed using the following:
salt --pillar 'webserver_role:qa' state.sls webserver.foobarcom
Finally, once the site has been tested in qa, the files can be moved from
/srv/salt/qa/webserver/src/foobarcom to
/srv/salt/prod/webserver/src/foobarcom, and deployed using the following:
salt --pillar 'webserver_role:prod' state.sls webserver.foobarcom
Thanks to Salt's fileserver inheritance, even though the files have been moved
to within /srv/salt/prod, they are still available from the same salt://
URI in both the qa and dev environments.
Continue learning
The best way to continue learning about Salt States is to read through the
reference documentation and to look through examples
of existing state trees. Many pre-configured state trees
can be found on Github in the saltstack-formulas collection of repositories.
If you have any questions, suggestions, or just want to chat with other people
who are using Salt, we have a very active community
and we'd love to hear from you.
Salt Stack Walkthrough
Welcome!
Welcome to Salt Stack! I am excited that you are interested in Salt and
starting down the path to better infrastructure management. I developed
(and am continuing to develop) Salt with the goal of making the best
software available to manage computers of almost any kind. I hope you enjoy
working with Salt and that the software can solve your real world needs!
- Thomas S Hatch
- Salt creator and chief developer
- CTO of Salt Stack, Inc.
Note
This is the first of a series of walkthroughs and serves as the best entry
point for people new to Salt, after this be sure to read up on pillar and
more on states:
Starting States
Pillar Walkthrough
Getting Started
What is Salt?
Salt is a different approach to infrastructure management. It is founded on
the idea that high speed communication with large numbers of systems can open
up new capabilities. This approach makes Salt a powerful multitasking system
that can solve many specific problems in an infrastructure. The backbone of
Salt is the remote execution engine, which creates a high speed, secure and
bi-directional communication net for groups of systems. On top of this
communication system Salt provides an extremely fast, flexible and easy to use
configuration management system called Salt States.
This unique approach to management makes for a transparent control system that
is not only amazingly easy to set up and use, but also capable of solving very
complex problems in infrastructures, as will be explored in this walkthrough.
Salt is being used today by some of the largest infrastructures in the world
and has a proven ability to scale to astounding proportions without
modification. While it scales out well beyond many tens of thousands of
servers, Salt has also proven to be an excellent choice for small deployments,
lowering compute and management overhead for infrastructures as small as just
a few systems.
Installing Salt
Salt Stack has been made to be very easy to install and get started. Setting up
Salt should be as easy as installing Salt via distribution packages on Linux or
via the Windows installer. The installation documents cover specific platform installation in depth.
Starting Salt
Salt functions on a master/minion topology. A master server acts as a
central control bus for the clients (called minions), and the minions connect
back to the master.
Setting Up the Salt Master
Turning on the Salt Master is easy, just turn it on! The default configuration
is suitable for the vast majority of installations. The Salt master can be
controlled by the local Linux/Unix service manager:
On Systemd based platforms (OpenSuse, Fedora):
systemctl start salt-master
On Upstart based systems (Ubuntu, older Fedora/RHEL):
service salt-master start
On SysV Init systems (Debian, Gentoo etc.):
/etc/init.d/salt-master start
Or the master can be started directly on the command line:
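salt-master -d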
The Salt Master can also be started in the foreground in debug mode, thus
greatly increasing the command output:
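salt-master -l debug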
The Salt Master needs to bind to two TCP network ports on the system, 4505
and 4506. For more in-depth information on firewalling these ports, the
firewall tutorial is available here.
Setting up a Salt Minion
Note
The Salt Minion can operate with or without a Salt Master. This walkthrough
assumes that the minion will be connected to the master, for information on
how to run a master-less minion please see the masterless quickstart guide:
Masterless Minion Quickstart
The Salt Minion only needs to be aware of one piece of information to run: the
network location of the master. By default the minion will look for the DNS
name salt for the master, so the easiest approach is to set internal DNS to
resolve the name salt back to the Salt Master IP. Otherwise the minion
configuration file will need to be edited; set the configuration option
master to point to the DNS name or the IP of the Salt Master:
Note
The default location of the configuration files is /etc/salt. Most
platforms adhere to this convention, but platforms such as FreeBSD and
Microsoft Windows place this file in different locations.
/etc/salt/minion:
master: saltmaster.example.com
Now that the master can be found, start the minion in the same way as the
master; with the platform init system, or via the command line directly:
As a daemon:
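salt-minion -d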
In the foreground in debug mode:
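salt-minion -l debug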
Now that the minion is started it will generate cryptographic keys and attempt
to connect to the master. The next step is to venture back to the master server
and accept the new minion's public key.
When the minion is started, it will generate an id value, unless it has
been generated on a previous run and cached in the configuration directory
(/etc/salt by default). This is the name by which the minion will attempt
to authenticate to the master. The following steps are attempted, in order,
to try to find a value that is not localhost:
- The Python function socket.getfqdn() is run
- /etc/hostname is checked (non-Windows only)
- /etc/hosts (%WINDIR%\system32\drivers\etc\hosts on Windows hosts) is
  checked for hostnames that map to anything within 127.0.0.0/8.
If none of the above are able to produce an id which is not localhost, then
a sorted list of IP addresses on the minion (excluding any within
127.0.0.0/8) is inspected. The first publicly-routable IP address is used,
if there is one. Otherwise, the first privately-routable IP address is used.
If all else fails, then localhost is used as a fallback.
Note
Overriding the id
The minion id can be manually specified using the id parameter in the
minion config file. If this configuration value is specified, it will
override all other sources for the id.
Using salt-key
Salt authenticates minions using public key encryption and authentication. For
a minion to start accepting commands from the master, the minion keys need to
be accepted. The salt-key command is used to manage all of the keys on the
master. To list the keys that are on the master, run a salt-key list command:
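salt-key -L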
The keys that have been rejected, accepted and pending acceptance are listed.
The easiest way to accept the minion key is to accept all pending keys:
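salt-key -A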
Note
Keys should be verified! The secure thing to do before accepting a key is
to run salt-key -p minion-id to print the public key for the minion.
This can then be compared against the minion's public key file, which is
located (on the minion, of course) at /etc/salt/pki/minion/minion.pub.
On the master:
# salt-key -p foo.domain.com
Accepted Keys:
foo.domain.com: -----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA0JcA0IEp/yqghK5V2VLM
jbG7FWV6qtw/ubTDBnpDGQgrvSNOtd0QcJsAzAtDcHwrudQgyxTZGVJqPY7gLc7P
5b4EFWt5E1w3+KZ+XXy4YtW5oOzVN5BvsJ85g7c0TUnmjL7p3MUUXE4049Ue/zgX
jtbFJ0aa1HB8bnlQdWWOeflYRNEQL8482ZCmXXATFP1l5uJA9Pr6/ltdWtQTsXUA
bEseUGEpmq83vAkwtZIyJRG2cJh8ZRlJ6whSMg6wr7lFvStHQQzKHt9pRPml3lLK
ba2X07myAEJq/lpJNXJm5bkKV0+o8hqYQZ1ndh9HblHb2EoDBNbuIlhYft1uv8Tp
8beaEbq8ZST082sS/NjeL7W1T9JS6w2rw4GlUFuQlbqW8FSl1VDo+Alxu0VAr4GZ
gZpl2DgVoL59YDEVrlB464goly2c+eY4XkNT+JdwQ9LwMr83/yAAG6EGNpjT3pZg
Wey7WRnNTIF7H7ISwEzvik1GrhyBkn6K1RX3uAf760ZsQdhxwHmop+krgVcC0S93
xFjbBFF3+53mNv7BNPPgl0iwgA9/WuPE3aoE0A8Cm+Q6asZjf8P/h7KS67rIBEKV
zrQtgf3aZBbW38CT4fTzyWAP138yrU7VSGhPMm5KfTLywNsmXeaR5DnZl6GGNdL1
fZDM+J9FIGb/50Ee77saAlUCAwEAAQ==
-----END PUBLIC KEY-----
On the minion:
# cat /etc/salt/pki/minion/minion.pub
-----BEGIN PUBLIC KEY-----
MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA0JcA0IEp/yqghK5V2VLM
jbG7FWV6qtw/ubTDBnpDGQgrvSNOtd0QcJsAzAtDcHwrudQgyxTZGVJqPY7gLc7P
5b4EFWt5E1w3+KZ+XXy4YtW5oOzVN5BvsJ85g7c0TUnmjL7p3MUUXE4049Ue/zgX
jtbFJ0aa1HB8bnlQdWWOeflYRNEQL8482ZCmXXATFP1l5uJA9Pr6/ltdWtQTsXUA
bEseUGEpmq83vAkwtZIyJRG2cJh8ZRlJ6whSMg6wr7lFvStHQQzKHt9pRPml3lLK
ba2X07myAEJq/lpJNXJm5bkKV0+o8hqYQZ1ndh9HblHb2EoDBNbuIlhYft1uv8Tp
8beaEbq8ZST082sS/NjeL7W1T9JS6w2rw4GlUFuQlbqW8FSl1VDo+Alxu0VAr4GZ
gZpl2DgVoL59YDEVrlB464goly2c+eY4XkNT+JdwQ9LwMr83/yAAG6EGNpjT3pZg
Wey7WRnNTIF7H7ISwEzvik1GrhyBkn6K1RX3uAf760ZsQdhxwHmop+krgVcC0S93
xFjbBFF3+53mNv7BNPPgl0iwgA9/WuPE3aoE0A8Cm+Q6asZjf8P/h7KS67rIBEKV
zrQtgf3aZBbW38CT4fTzyWAP138yrU7VSGhPMm5KfTLywNsmXeaR5DnZl6GGNdL1
fZDM+J9FIGb/50Ee77saAlUCAwEAAQ==
-----END PUBLIC KEY-----
Sending the First Commands
Now that the minion is connected to the master and authenticated, the master
can start to command the minion. Salt commands allow for a vast set of
functions to be executed and for specific minions and groups of minions to be
targeted for execution. This makes the salt command very powerful, but
the command is also very usable and easy to understand.
The salt command is comprised of command options, target specification,
the function to execute, and arguments to the function. A simple command to
start with looks like this:
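salt '*' test.ping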
The * is the target, which specifies all minions, and test.ping tells
the minion to run the test.ping function.
The result of running this command will be the master instructing all of the
minions to execute test.ping in parallel and return the result. This is not
an actual ICMP ping, but rather a simple function which returns True. Using
test.ping is a good way of confirming that a minion is connected.
Note
Each minion registers itself with a unique minion id. This id defaults to
the minion's hostname, but can be explicitly defined in the minion config as
well by using the id parameter.
Getting to Know the Functions
Salt comes with a vast library of functions available for execution, and Salt
functions are self documenting. To see what functions are available on the
minions, execute the sys.doc function:
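salt '*' sys.doc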
This will display a very large list of available functions and documentation
on them; this documentation is also available here.
These functions cover everything from shelling out to package management to
manipulating database servers. They comprise a powerful system management API
which is the backbone to Salt configuration management and many other aspects
of Salt.
Note
Salt comes with many plugin systems. The functions that are available via
the salt command are called Execution Modules.
Helpful Functions to Know
The cmd module contains functions to shell out on minions, such as
cmd.run and cmd.run_all:
salt '*' cmd.run 'ls -l /etc'
The pkg functions automatically map local system package managers to the
same salt functions. This means that pkg.install will install packages via
yum on Red Hat based systems, apt on Debian systems, etc.:
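salt '*' pkg.install vim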
Note
Some custom Linux spins and derivatives of other distros are not properly
detected by Salt. If the above command returns an error message saying that
pkg.install is not available, then you may need to override the pkg
provider. This process is explained here.
The network.interfaces function will list all interfaces on a minion, along
with their IP addresses, netmasks, MAC addresses, etc.:
salt '*' network.interfaces
salt-call
The examples so far have described running commands from the Master using the
salt command, but when troubleshooting it can be more beneficial to log in
to the minion directly and use salt-call. Doing so allows you to see the
minion log messages specific to the command you are running (which are not
part of the return data you see when running the command from the Master using
salt), making it unnecessary to tail the minion log. More information on
salt-call and how to use it can be found here.
Grains
Salt uses a system called Grains to build up
static data about minions. This data includes information about the operating
system that is running, CPU architecture and much more. The grains system is
used throughout Salt to deliver platform data to many components and to users.
Grains can also be statically set; this makes it easy to assign values to
minions for grouping and managing. A common practice is to assign grains to
minions to specify what role or roles a minion might have. These static
grains can be set in the minion configuration file or via the
grains.setval function.
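For example, a roles grain (the grain name here is arbitrary) could be set
with something like:
salt '*' grains.setval roles webserver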
Targeting
Salt allows for minions to be targeted based on a wide range of criteria. The
default targeting system uses glob expressions to match minions, hence if
there are minions named larry1, larry2, curly1 and curly2, a glob of
larry* will match larry1 and larry2, and a glob of *1 will match larry1
and curly1.
Many targeting systems other than globs can be used as well; these systems
include:
- Regular Expressions
- Target using PCRE compliant regular expressions
- Grains
- Target based on grains data:
Targeting with Grains
- Pillar
- Target based on pillar data:
Targeting with Pillar
- IP
- Target based on IP addr/subnet/range
- Compound
- Create logic to target based on multiple targets:
Targeting with Compound
- Nodegroup
- Target with nodegroups:
Targeting with Nodegroup
The concepts of targets are used on the command line with salt, but also
function in many other areas as well, including the state system and the
systems used for ACLs and user permissions.
Passing in Arguments
Many of the functions available accept arguments; these arguments can be
passed in on the command line:
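salt '*' pkg.install vim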
This example passes the argument vim to the pkg.install function. Since
many functions can accept more complex input than just a string, the arguments
are parsed through YAML, allowing for more complex data to be sent on the
command line:
salt '*' test.echo 'foo: bar'
In this case Salt translates the string 'foo: bar' into the dictionary
"{'foo': 'bar'}"
Note
Any line that contains a newline will not be parsed by yaml.
Salt States
Now that the basics are covered, the time has come to evaluate States. Salt
States, or the State System, is the component of Salt made for
configuration management. The State system is a fully functional configuration
management system which has been designed to be exceptionally powerful while
still being simple to use, fast, lightweight, deterministic and with salty
levels of flexibility.
The state system is already available with a basic salt setup; no additional
configuration is required, and states can be set up immediately.
Note
Before diving into the state system, a brief overview of how states are
constructed will make many of the concepts clearer. Salt states are based
on data modeling, and build on a low level data structure that is used to
execute each state function. Then more logical layers are built on top of
each other. The high layers of the state system, which this tutorial will
cover, consist of everything that needs to be known to use states; the two
high layers covered here are the sls layer and the highest layer,
highstate.
Again, knowing that there are many layers of data management will help with
understanding states, but those layers never need to be used directly. Just as
understanding how a compiler functions is valuable when learning a programming
language, understanding what is going on under the hood of a configuration
management system will also prove to be a valuable asset.
Adding Some Depth
Obviously, maintaining sls formulas right in the root of the file server will
not scale out to reasonably sized deployments. This is why more depth is
required. Start by laying out the nginx formula in a better way: make an nginx
subdirectory and add an init.sls file:
/srv/salt/nginx/init.sls:
nginx:
  pkg:
    - installed
  service:
    - running
    - require:
      - pkg: nginx
A few things are introduced in this sls formula. First is the service statement,
which ensures that the nginx service is running; but the nginx service can't be
started unless the package is installed, hence the require
. The require
statement makes sure that the required component is executed before this one,
and that it resulted in success.
Note
The require option belongs to a family of options called requisites.
Requisites are a powerful component of Salt States, for more information
on how requisites work and what is available see:
Requisites
Evaluation ordering is also available in Salt:
Ordering States
Now this new sls formula has a special name: init.sls
. When an sls formula is
named init.sls
it inherits the name of the directory path that contains it,
so this formula can be referenced via the following command:
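salt '*' state.sls nginx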
Now that subdirectories can be used the vim.sls formula can be cleaned up, but
to make things more flexible (and to illustrate another point of course), move
the vim.sls and vimrc into a new subdirectory called edit
and change the
vim.sls file to reflect the change:
/srv/salt/edit/vim.sls:
vim:
  pkg.installed
/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
    - mode: 644
    - user: root
    - group: root
The only change in the file is fixing the source path for the vimrc file. Now
the formula is referenced as edit.vim
because it resides in the edit
subdirectory. Now the edit subdirectory can contain formulas for emacs, nano,
joe or any other editor that may need to be deployed.
Next Reading
Two walkthroughs are specifically recommended at this point. First, a deeper
run through States, followed by an explanation of Pillar.
- Starting States
- Pillar Walkthrough
An understanding of Pillar is extremely helpful in using States.
Getting Deeper Into States
Two more in-depth States tutorials exist, which delve much more deeply into States
functionality.
- Thomas' original states tutorial, How Do I Use Salt
States?, covers much more to get off the
ground with States.
- The States Tutorial also provides a
fantastic introduction.
These tutorials include much more in depth information including templating
sls formulas etc.
So Much More!
This concludes the initial Salt walkthrough, but there are many more things to
learn still! These documents will cover important core aspects of Salt:
A few more tutorials are also available:
This still is only scratching the surface, many components such as the reactor
and event systems, extending Salt, modular components and more are not covered
here. For an overview of all Salt features and documentation, look at the
Table of Contents.
Access Control System
Salt maintains a standard system used to grant granular control to
non-administrative users to execute Salt commands. The access control system
has been applied to all systems used to configure access to non-administrative
control interfaces in Salt. These interfaces include the peer
system, the
external auth
system and the client acl
system.
The access control system mandates a standard configuration syntax used in
all three of the aforementioned systems. While this adds functionality to the
configuration in 0.10.4, it does not negate the old configuration.
Now specific functions can be opened up to specific minions from specific users
in the case of external auth and client ACLs, and for specific minions in the
case of the peer system.
The access controls are manifested using matchers in these configurations:
client_acl:
  fred:
    - web\*:
      - pkg.list_pkgs
      - test.*
      - apache.*
In the above example, fred is able to send commands only to minions which match
the specified glob target. This can be expanded to include other functions for
other minions based on standard targets.
external_auth:
  pam:
    dave:
      - test.ping
      - mongo\*:
        - network.*
      - log\*:
        - network.*
        - pkg.*
      - 'G@os:RedHat':
        - kmod.*
    steve:
      - .*
The above allows for all minions to be hit by test.ping by dave, and adds a
few functions that dave can execute on other minions. It also allows steve
unrestricted access to salt commands.
External Authentication System
Salt 0.10.4 comes with a fantastic new way to open up running Salt commands
to users. This system allows for Salt itself to pass through authentication to
any authentication system (The Unix PAM system was the first) to determine
if a user has permission to execute a Salt command.
The external authentication system allows for specific users to be granted
access to execute specific functions on specific minions. Access is configured
in the master configuration file, and uses the new access control system:
external_auth:
  pam:
    thatch:
      - 'web*':
        - test.*
        - network.*
    steve:
      - .*
So, the above allows the user thatch to execute functions in the test and
network modules on the minions that match the web* target. User steve is
given unrestricted access to minion commands.
The external authentication system can then be used from the command line by
any user on the same system as the master with the -a option:
$ salt -a pam web\* test.ping
The system will ask the user for the credentials required by the
authentication system and then publish the command.
Tokens
With external authentication alone the authentication credentials will be
required with every call to Salt. This can be alleviated with Salt tokens.
The tokens are short term authorizations and can be easily created by just
adding a -T
option when authenticating:
$ salt -T -a pam web\* test.ping
Now a token will be created that has an expiration of, by default, 12 hours.
This token is stored in a file named .salt_token
in the active user's home
directory. Once the token is created, it is sent with all subsequent communications.
The user authentication does not need to be entered again until the token expires. The
token expiration time can be set in the Salt master config file.
Pillar of Salt
Pillar is an interface for Salt designed to offer global values that can be
distributed to all minions. Pillar data is managed in a similar way as
the Salt State Tree.
Pillar was added to Salt in version 0.9.8
Note
Storing sensitive data
Unlike the state tree, pillar data is only available for the targeted
minion specified by the matcher type. This makes it useful for
storing sensitive data specific to a particular minion.
Declaring the Master Pillar
The Salt Master server maintains a pillar_roots setup that matches the
structure of the file_roots used in the Salt file server. Like the
Salt file server the pillar_roots
option in the master config is based
on environments mapping to directories. The pillar data is then mapped to
minions based on matchers in a top file which is laid out in the same way
as the state top file. Salt pillars can use the same matcher types as the
standard top file.
The configuration for the pillar_roots
in the master config file
is identical in behavior and function to file_roots
:
pillar_roots:
  base:
    - /srv/pillar
This example configuration declares that the base environment will be located
in the /srv/pillar
directory. The top file used matches the name of the top
file used for States, and has the same structure:
/srv/pillar/top.sls
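A minimal top file, assigning the packages pillar shown below to all minions,
could look like this:
base:
  '*':
    - packages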
This further example shows how to use other standard top matching types (grain
matching is used in this example) to deliver specific salt pillar data to
minions with different os
grains:
dev:
  'os:Debian':
    - match: grain
    - servers
/srv/pillar/packages.sls
{% if grains['os'] == 'RedHat' %}
apache: httpd
git: git
{% elif grains['os'] == 'Debian' %}
apache: apache2
git: git-core
{% endif %}
Now this data can be used from within modules, renderers, State SLS files, and
more via the shared pillar dict:
apache:
  pkg:
    - installed
    - name: {{ pillar['apache'] }}
git:
  pkg:
    - installed
    - name: {{ pillar['git'] }}
Note that you cannot just list key/value-information in top.sls
.
Pillar namespace flattened
The separate pillar files all share the same namespace. Given
a top.sls
of:
base:
  '*':
    - packages
    - services
a packages.sls
file of:
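bind: bind9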
and a services.sls
file of:
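bind: named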
Then a request for the bind
pillar will only return 'named'; the 'bind9'
value is not available. It is better to structure your pillar files with more
hierarchy. For example, your packages.sls
file could look like:
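packages:
  bind: bind9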
Including Other Pillars
Pillar SLS files may include other pillar files, similar to State files.
Two syntaxes are available for this purpose. The simple form simply includes
the additional pillar as if it were part of the same file:
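include:
  - users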
The full include form allows two additional options -- passing default values
to the templating engine for the included pillar file as well as an optional
key under which to nest the results of the included pillar:
include:
- users:
defaults:
- sudo: ['bob', 'paul']
key: users
With this form, the included file (users.sls) will be nested within the 'users'
key of the compiled pillar. Additionally, the 'sudo' value will be available
as a template variable to users.sls.
Viewing Minion Pillar
Once the pillar is set up, the data can be viewed on the minion via the
pillar
module. The pillar module comes with two functions,
pillar.items
and pillar.raw
. pillar.items
will return a freshly reloaded pillar and pillar.raw
will return the current pillar without a refresh:
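salt '*' pillar.items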
Note
Prior to version 0.16.2, this function is named pillar.data
. This
function name is still supported for backwards compatibility.
Pillar "get" Function
The pillar.get
function works much in the same
way as the get
method in a Python dict, but with an enhancement: nested
dict components can be extracted using a : delimiter.
If a structure like this is in pillar:
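foo:
  bar:
    baz: quux    # example value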
Extracting it from the raw pillar in an sls formula or file template is done
this way:
{{ pillar['foo']['bar']['baz'] }}
Now, with the new pillar.get
function the data
can be safely gathered and a default can be set, allowing the template to fall
back if the value is not available:
{{ salt['pillar.get']('foo:bar:baz', 'qux') }}
This makes handling nested structures much easier.
Refreshing Pillar Data
When pillar data is changed on the master the minions need to refresh the data
locally. This is done with the saltutil.refresh_pillar
function.
salt '*' saltutil.refresh_pillar
This function triggers the minion to asynchronously refresh the pillar and will
always return None
.
Targeting with Pillar
Pillar data can be used when targeting minions. This allows for ultimate
control and flexibility when targeting minions.
salt -I 'somekey:specialvalue' test.ping
Like with Grains, it is possible to use globbing
as well as match nested values in Pillar, by adding colons for each level that
is being traversed. The below example would match minions with a pillar named
foo
, which is a dict containing a key bar
, with a value beginning with
baz
:
salt -I 'foo:bar:baz*' test.ping
Master Config In Pillar
For convenience the data stored in the master configuration file is made
available in all minions' pillars. This makes global configuration of services
and systems very easy, but may not be desired if sensitive data is stored in the
master configuration.
To prevent the master config from being added to the pillar, set pillar_opts
to False
:
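pillar_opts: False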
Master Tops System
In 0.10.4 the external_nodes system was upgraded to allow for modular
subsystems to be used to generate the top file data for a highstate run on
the master.
The old external_nodes option still works, but will be removed in the
future in favor of the new master_tops option which uses the modular
system instead. The master tops system contains a number of subsystems that
are loaded via the Salt loader interfaces like modules, states, returners,
runners, etc.
Using the new master_tops option is simple:
master_tops:
  ext_nodes: cobbler-external-nodes
for Cobbler or:
master_tops:
  reclass:
    inventory_base_uri: /etc/reclass
    classes_uri: roles
for Reclass.
Job Management
Since Salt executes jobs on many systems, it needs to be able to
manage the jobs running on all of those systems. As of Salt 0.9.7, the capability was
added for more advanced job management.
The Minion proc System
The Salt Minions now maintain a proc directory in the Salt cachedir. The proc
directory maintains files named after the executed job ID. These files contain
information about the currently running jobs on the minion and allow for
jobs to be looked up. The proc directory is located under the
cachedir; with a default configuration it is under /var/cache/salt/proc.
Functions in the saltutil Module
Salt 0.9.7 introduced a few new functions to the
saltutil module for managing
jobs. These functions are:
running
Returns the data of all running jobs that are found in the proc directory.
find_job
Returns specific data about a certain job based on job id.
signal_job
Allows for a given jid to be sent a signal.
term_job
Sends a termination signal (SIGTERM, 15) to the process controlling the
specified job.
kill_job
Sends a kill signal (SIGKILL, 9) to the process controlling the
specified job.
These functions make up the core of the back end used to manage jobs at the
minion level.
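For example, these functions can be called directly from the master (the job ID
shown is a placeholder):
salt '*' saltutil.running
salt '*' saltutil.find_job <jid>
salt '*' saltutil.signal_job <jid> 15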
The jobs Runner
A convenience runner front end and reporting system has been added as well.
The jobs runner contains functions to make viewing data easier and cleaner.
The jobs runner contains a number of functions...
active
The active function runs saltutil.running on all minions and formats the
return data about all running jobs in a much more usable and compact format.
The active function will also compare jobs that have returned and jobs that
are still running, making it easier to see what systems have completed a job
and what systems are still being waited on.
lookup_jid
When jobs are executed the return data is sent back to the master and cached.
By default it is cached for 24 hours, but this can be configured via the
keep_jobs
option in the master configuration.
Using the lookup_jid runner will display the same return data that the initial
job invocation with the salt command would display.
# salt-run jobs.lookup_jid <job id number>
list_jobs
Before looking up a historic job, it may be necessary to find the job id. list_jobs
will parse the cached execution data and display all of the job data for jobs
that have already returned, or have partially returned.
# salt-run jobs.list_jobs
Salt Scheduling
In Salt versions greater than 0.12.0, the scheduling system allows incremental
executions on minions or the master. The schedule system exposes the execution
of any execution function on minions or any runner on the master.
To set up the scheduler on the master add the schedule option to the master
config file.
To set up the scheduler on the minion add the schedule option to
the minion config file or to the minion's pillar.
Note
The scheduler executes different functions on the master and minions. When
running on the master the functions reference runner functions, when
running on the minion the functions specify execution functions.
The schedule option defines jobs which execute at certain intervals. To set up a highstate
to run on a minion every 60 minutes set this in the minion config or pillar:
schedule:
  highstate:
    function: state.highstate
    minutes: 60
Time intervals can be specified as seconds, minutes, hours, or days. Runner
executions can also be specified on the master within the master configuration
file:
schedule:
  overstate:
    function: state.over
    seconds: 35
    minutes: 30
    hours: 3
The above configuration will execute the state.over runner every 3 hours,
30 minutes and 35 seconds, or every 12,635 seconds.
Scheduler With Returner
The scheduler is also useful for tasks like gathering monitoring data about
a minion, this schedule option will gather status data and send it to a mysql
returner database:
schedule:
  uptime:
    function: status.uptime
    seconds: 60
    returner: mysql
  meminfo:
    function: status.meminfo
    minutes: 5
    returner: mysql
Since specifying the returner repeatedly can be tiresome, the
schedule_returner option is available to specify one or a list of global
returners to be used by the minions when scheduling.
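A minimal sketch, placed in the minion config or pillar:
schedule_returner: mysql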
Running the Salt Master as Unprivileged User
While the default setup runs the Salt Master as the root user, it is generally
wise to run servers as an unprivileged user. In Salt 0.9.10 the management
of the running user was greatly improved; the only change needed is to alter
the user
option in the master configuration file, and all salt system
components will be updated to function under the new user when the master
is started.
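For example, assuming an unprivileged system user named salt has been created,
the master configuration would contain:
user: salt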
If running a version older than 0.9.10, then a number of files need to be
owned by the user intended to run the master:
# chown -R <user> /var/cache/salt
# chown -R <user> /var/log/salt
# chown -R <user> /etc/salt/pki
Troubleshooting
The intent of the troubleshooting section is to introduce solutions to a
number of common issues encountered by users and the tools that are available
to aid in developing States and Salt code.
Running in the Foreground
A great deal of information is available via the debug logging system, if you
are having issues with minions connecting or not starting run the minion and/or
master in the foreground:
salt-master -l debug
salt-minion -l debug
Anyone wanting to run Salt daemons via a process supervisor such as monit,
runit, or supervisord, should omit the -d
argument to the daemons and
run them in the foreground.
What Ports do the Master and Minion Need Open?
No ports need to be opened up on each minion. For the master, TCP ports 4505
and 4506 need to be open. If you've put both your Salt master and minion in
debug mode and don't see an acknowledgment that your minion has connected,
it could very well be a firewall.
You can check port connectivity from the minion with the nc command:
nc -v -z salt.master.ip 4505
nc -v -z salt.master.ip 4506
There is also a firewall configuration
document that might help as well.
If you've enabled the right TCP ports on your operating system or Linux
distribution's firewall and still aren't seeing connections, check that no
additional access control system such as SELinux or AppArmor is blocking
Salt.
Using salt-call
The salt-call
command was originally developed for aiding in the development
of new Salt modules. Since then, many applications have been developed for
running any Salt module locally on a minion. These range from the original
intent of salt-call, development assistance, to gathering more verbose output
from calls like state.highstate
.
When creating your state tree, it is generally recommended to invoke
state.highstate
with salt-call
. This
displays far more information about the highstate execution than calling it
remotely. For even more verbosity, increase the loglevel with the same argument
as salt-minion
:
salt-call -l debug state.highstate
The main difference between using salt
and using salt-call
is that
salt-call
is run from the minion, and it only runs the selected function on
that minion. By contrast, salt
is run from the master, and requires you to
specify the minions on which to run the command using salt's targeting
system.
Too many open files
The salt-master needs at least 2 sockets per host that connects to it: one for
the Publisher and one for the response port. Thus, large installations may, upon
scaling up the number of minions accessing a given master, encounter:
12:45:29,289 [salt.master ][INFO ] Starting Salt worker process 38
Too many open files
sock != -1 (tcp_listener.cpp:335)
The solution to this would be to check the number of files allowed to be
opened by the user running salt-master (root by default):
[root@salt-master ~]# ulimit -n
1024
And modify that value to be at least equal to the number of minions x 2.
This setting can be changed in limits.conf as the nofile value(s),
and activated upon a new login of the specified user.
So, an environment with 1800 minions would need 1800 x 2 = 3600 as a minimum.
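For the 1800-minion example above, and assuming salt-master runs as root, the
/etc/security/limits.conf entries could look like:
root    soft    nofile    4096
root    hard    nofile    4096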
Salt Master Stops Responding
There are known bugs with ZeroMQ versions less than 2.1.11 which can cause the
Salt master to not respond properly. If you're running a ZeroMQ version greater
than or equal to 2.1.9, you can work around the bug by setting the sysctls
net.core.rmem_max
and net.core.wmem_max
to 16777216. Next, set the third
field in net.ipv4.tcp_rmem
and net.ipv4.tcp_wmem
to at least 16777216.
You can do it manually with something like:
# echo 16777216 > /proc/sys/net/core/rmem_max
# echo 16777216 > /proc/sys/net/core/wmem_max
# echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_rmem
# echo "4096 87380 16777216" > /proc/sys/net/ipv4/tcp_wmem
Or with the following Salt state:
net.core.rmem_max:
  sysctl:
    - present
    - value: 16777216
net.core.wmem_max:
  sysctl:
    - present
    - value: 16777216
net.ipv4.tcp_rmem:
  sysctl:
    - present
    - value: 4096 87380 16777216
net.ipv4.tcp_wmem:
  sysctl:
    - present
    - value: 4096 87380 16777216
Salt and SELinux
Currently there are no SELinux policies for Salt. For the most part Salt runs
without issue when SELinux is running in Enforcing mode. This is because when
the minion executes as a daemon the type context is changed to initrc_t
.
The problem with SELinux arises when using salt-call or running the minion in
the foreground, since the type context stays unconfined_t
.
This problem generally manifests in the rpm install scripts when using the
pkg module. Until a full SELinux policy is available for Salt, the solution
to this issue is to set the execution context of salt-call
and
salt-minion
to rpm_exec_t:
# CentOS 5 and RHEL 5:
chcon -t system_u:system_r:rpm_exec_t:s0 /usr/bin/salt-minion
chcon -t system_u:system_r:rpm_exec_t:s0 /usr/bin/salt-call
# CentOS 6 and RHEL 6:
chcon system_u:object_r:rpm_exec_t:s0 /usr/bin/salt-minion
chcon system_u:object_r:rpm_exec_t:s0 /usr/bin/salt-call
This works well, because the rpm_exec_t
context has very broad control over
other types.
Red Hat Enterprise Linux 5
Salt requires Python 2.6 or 2.7. Red Hat Enterprise Linux 5 and its variants
come with Python 2.4 installed by default. When installing on RHEL 5 from the
EPEL repository this is handled for you. But, if you run Salt from git, be
advised that its dependencies need to be installed from EPEL and that Salt
needs to be run with the python26
executable.
Live Python Debug Output
If the minion or master seems to be unresponsive, a SIGUSR1 can be passed to
the processes to display where in the code they are running. If encountering a
situation like this, this debug information can be invaluable. First make
sure the master or minion is running in the foreground:
salt-master -l debug
salt-minion -l debug
Then pass the signal to the master or minion when it seems to be unresponsive:
killall -SIGUSR1 salt-master
killall -SIGUSR1 salt-minion
When filing an issue or sending questions to the mailing list for a problem
with an unresponsive daemon this information can be invaluable.
YAML Idiosyncrasies
One of Salt's strengths, the use of existing serialization systems for
representing SLS data, can also backfire. YAML is a general purpose system
and there are a number of things that would seem to make sense in an sls
file that cause YAML issues. It is wise to be aware of these issues. While
reports of running into them are generally rare, they can still crop up at
unexpected times.
Spaces vs Tabs
YAML uses spaces, period. Do not use tabs in your SLS files! If strange
errors are coming up in rendering SLS files, make sure to check that
no tabs have crept in! In Vim, after enabling search highlighting
with: :set hlsearch
, you can check with the following key sequence in
normal mode(you can hit ESC twice to be sure): /
, Ctrl-v, Tab, then
hit Enter. Also, you can convert tabs to 2 spaces by these commands in Vim:
:set tabstop=2 expandtab
and then :retab
.
Indentation
The suggested syntax for YAML files is to use 2 spaces for indentation,
but YAML will follow whatever indentation system that the individual file
uses. Indentation of two spaces works very well for SLS files given the
fact that the data is uniform and not deeply nested.
Nested Dicts (key=value)
When dicts are more deeply nested, they no
longer follow the same indentation logic. This is rarely something that
comes up in Salt, since deeply nested options like these are discouraged
when making State modules, but some do exist. A good example is the context
and defaults options in the file.managed
state:
/etc/http/conf/http.conf:
  file:
    - managed
    - source: salt://apache/http.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - context:
        custom_var: "override"
    - defaults:
        custom_var: "default value"
        other_var: 123
Notice that the spacing used is 2 spaces, and that when defining the context
and defaults options there is a 4 space indent. If only a 2 space indent is
used then the information will not be loaded correctly. If using double spacing
is not desirable, then a deeply nested dict can be declared with curly braces:
/etc/http/conf/http.conf:
  file:
    - managed
    - source: salt://apache/http.conf
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - context: {
      custom_var: "override" }
    - defaults: {
      custom_var: "default value",
      other_var: 123 }
True/False, Yes/No, On/Off
PyYAML will load these values as boolean True
or False
. Un-capitalized
versions will also be loaded as booleans (true
, false
, yes
, no
,
on
, and off
). This can be especially problematic when constructing
Pillar data. Make sure that your Pillars which need to use the string versions
of these values are enclosed in quotes.
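For example, with two hypothetical pillar keys, an unquoted value is loaded as a
boolean while a quoted value remains a string:
use_tls: on       # loaded as the boolean True
tls_mode: 'on'    # loaded as the string "on"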
Integers are Parsed as Integers
NOTE: This has been fixed in Salt 0.10.0; as of this release, an integer
that is preceded by a 0 will be correctly parsed.
When integers are passed
into an SLS file, they are
passed as integers. This means that if a state accepts a string value
and an integer is passed, an integer will be sent. The solution here
is to send the integer as a string.
This is best explained when setting the mode for a file:
/etc/vimrc:
  file:
    - managed
    - source: salt://edit/vimrc
    - user: root
    - group: root
    - mode: 644
Salt manages this well, since the mode is passed as 644, but if the mode is
zero padded as 0644, then it is read by YAML as an integer and evaluated as
an octal value: 0644 becomes 420. Therefore, if the file mode is
preceded by a 0 then it needs to be passed as a string:
/etc/vimrc:
  file:
    - managed
    - source: salt://edit/vimrc
    - user: root
    - group: root
    - mode: '0644'
YAML does not like "Double Short Decs"
If I can find a way to make YAML accept "Double Short Decs" then I will, since
I think that double short decs would be awesome. So what is a "Double Short
Dec"? It is when you declare multiple short decs in one ID. Here is a
standard short dec; it works great:
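vim:
  pkg.installed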
The short dec means that there are no arguments to pass, so it is not required
to add any arguments, and it can save space.
YAML, though, gets upset when declaring multiple short decs. For the record...
THIS DOES NOT WORK:
vim:
  pkg.installed
  user.present
Similarly declaring a short dec in the same ID dec as a standard dec does not
work either...
ALSO DOES NOT WORK:
fred:
  user.present
  ssh_auth.present:
    - name: AAAAB3NzaC...
    - user: fred
    - enc: ssh-dss
    - require:
      - user: fred
The correct way is to define them like this:
vim:
  pkg.installed: []
  user.present: []
fred:
  user.present: []
  ssh_auth.present:
    - name: AAAAB3NzaC...
    - user: fred
    - enc: ssh-dss
    - require:
      - user: fred
Alternatively, they can be defined the "old way", or with multiple
"full decs":
vim:
  pkg:
    - installed
  user:
    - present
fred:
  user:
    - present
  ssh_auth:
    - present
    - name: AAAAB3NzaC...
    - user: fred
    - enc: ssh-dss
    - require:
      - user: fred
YAML supports only plain ASCII
According to the YAML specification, only ASCII characters can be used.
Within double quotes, special characters may be represented with C-style
escape sequences starting with a backslash ( \ ).
Examples:
- micro: "\u00b5"
- copyright: "\u00A9"
- A: "\x41"
- alpha: "\u0251"
- Alef: "\u05d0"
A list of usable Unicode characters will help you to identify the correct numbers.
Python can also be used to discover the Unicode number for a character:
repr(u"Text with wrong characters i need to figure out")
This shell command can find wrong characters in your SLS files:
find . -name '*.sls' -exec grep --color='auto' -P -n '[^\x00-\x7F]' \{} \;
Underscores stripped in Integer Definitions
If a definition only includes numbers and underscores, it is parsed by YAML as
an integer and all underscores are stripped. To ensure the object becomes a
string, it should be surrounded by quotes. More information here.
Here's an example:
>>> import yaml
>>> yaml.safe_load('2013_05_10')
20130510
>>> yaml.safe_load('"2013_05_10"')
'2013_05_10'
Salt Based Projects
A number of unofficial open source projects based on Salt, or written to
enhance Salt, have been created.
Salt Sandbox
Created by Aaron Bull Schaefer, aka "elasticdog".
https://github.com/elasticdog/salt-sandbox
Salt Sandbox is a multi-VM Vagrant-based Salt development environment used
for creating and testing new Salt state modules outside of your production
environment. It's also a great way to learn firsthand about Salt and its
remote execution capabilities.
Salt Sandbox will set up three separate virtual machines:
- salt.example.com - the Salt master server
- minion1.example.com - the first Salt minion machine
- minion2.example.com - the second Salt minion machine
These VMs can be used in conjunction to segregate and test your modules based
on node groups, top file environments, grain values, etc. You can even test
modules on different Linux distributions or release versions to better match
your production infrastructure.
Salt Event System
Salt 0.9.10 introduced the Salt Event System. This system is used to fire
off events enabling third party applications or external processes to react
to behavior within Salt.
The event system is made up of a few components: the event sockets, which
publish events, and the event library, which can listen to events and send
events into the salt system.
Listening for Events
The event system is accessed via the event library and can only be accessed
by the same system user that Salt is running as. To listen to events a
SaltEvent object needs to be created and then the get_event function needs to
be run. The SaltEvent object needs to know the location that the Salt Unix
sockets are kept. In the configuration this is the sock_dir
option. The
sock_dir
option defaults to "/var/run/salt/master" on most systems.
The following code will check for a single event:
import salt.utils.event
event = salt.utils.event.MasterEvent('/var/run/salt/master')
data = event.get_event()
Events will also use a "tag". A "tag" allows for events to be filtered. By
default all events will be returned, but if only authentication events are
desired, then pass the tag "auth". Also, the get_event method has a default
poll time assigned of 5 seconds, to change this time set the "wait" option.
This example will only listen for auth events and will wait for 10 seconds
instead of the default 5.
import salt.utils.event
event = salt.utils.event.MasterEvent('/var/run/salt/master')
data = event.get_event(wait=10, tag='auth')
Instead of looking for a single event, the iter_events method can be used to
make a generator which will continually yield salt events. The iter_events
method also accepts a tag, but not a wait time:
import salt.utils.event
event = salt.utils.event.MasterEvent('/var/run/salt/master')
for data in event.iter_events(tag='auth'):
    print(data)
Firing Events
It is possible to fire events on either the minion's local bus, or to fire
events intended for the master. To fire a local event from the minion, on the
command line:
salt-call event.fire 'message to be sent in the event' 'tag'
To fire an event to be sent to the master, from the minion:
salt-call event.fire_master 'message for the master' 'tag'
If a process is listening on the minion, it may be useful for a user on the
master to fire an event to it:
salt minionname event.fire 'message for the minion' 'tag'
Firing Events From Code
Events can be very useful when writing execution modules, in order to inform
various processes on the master when a certain task has taken place. In Salt
versions previous to 0.17.0, the basic code looks like:
# Import the proper library
import salt.utils.event
# Fire deploy action
sock_dir = '/var/run/salt/minion'
event = salt.utils.event.SaltEvent('master', sock_dir)
event.fire_event('Message to be sent', 'tag')
In Salt version 0.17.0, the ability to send a payload with a more complex data
structure than a string was added. When using this interface, a Python
dictionary should be sent instead.
# Import the proper library
import salt.utils.event
# Fire deploy action
sock_dir = '/var/run/salt/minion'
payload = {'sample-msg': 'this is a test',
'example': 'this is the same test'}
event = salt.utils.event.SaltEvent('master', sock_dir)
event.fire_event(payload, 'tag')
It should be noted that this code can be used in 3rd party applications as well.
So long as the salt-minion process is running, the minion socket can be used:
sock_dir = '/var/run/salt/minion'
So long as the salt-master process is running, the master socket can be used:
sock_dir = '/var/run/salt/master'
This allows 3rd party applications to harness the power of the Salt event bus
programmatically, without having to make other calls to Salt. A 3rd party
process can listen to the event bus on the master, and another 3rd party
process can fire events to the process on the master, which Salt will happily
pass along.
The Salt Mine
Granted, it took a while for this name to be used in Salt, but version 0.15.0
introduces a new system to Salt called the Salt Mine.
The Salt Mine is used to bridge the gap between setting static variables and
gathering live data. The Salt mine is used to collect arbitrary data from
minions and store it on the master. This data is then made available to
all minions via the mine
module.
The data is gathered on the minion and sent back to the master where only
the most recent data is maintained (if long term data is required use
returners or the external job cache).
Mine Functions
To enable the Salt Mine the mine_functions option needs to be applied to a
minion. This option can be applied via the minion's configuration file, or the
minion's pillar. The mine_functions option dictates what functions are being
executed and allows for arguments to be passed in:
mine_functions:
  network.interfaces: []
  test.ping: []
Mine Interval
The Salt Mine functions are executed when the minion starts and at a given
interval by the scheduler. The default interval is every 60 minutes and can
be adjusted for the minion via the mine_interval option:
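mine_interval: 60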
Virtual Machine Disk Profiles
Salt Virt allows for the disks created for deployed virtual machines
to be finely configured. The configuration is a simple data structure which is
read from the config.option
function, meaning that the configuration can be
stored in the minion config file, the master config file, or the minion's
pillar.
This configuration option is called virt.disk
. The default virt.disk
data structure looks like this:
virt.disk:
  default:
    - system:
        size: 8192
        format: qcow2
        model: virtio
Note
The format and model do not need to be defined; Salt will
default to the optimal format used by the underlying hypervisor.
In the case of KVM this is qcow2 and
virtio.
This configuration sets up a disk profile called default. The default
profile creates a single system disk on the virtual machine.
Define More Profiles
Many environments will require more complex disk profiles and may require
more than one profile, this can be easily accomplished:
virt.disk:
  default:
    - system:
        size: 8192
  database:
    - system:
        size: 8192
    - data:
        size: 30720
  web:
    - system:
        size: 1024
    - logs:
        size: 5120
This configuration allows for one of three profiles to be selected,
allowing virtual machines to be created with different storage layouts
depending on the needs of the deployed VM.
Salt Virt - The Salt Stack Cloud Controller
The Salt Virt cloud controller capability was initially added to Salt in version
0.14.0 as an alpha technology.
The initial Salt Virt system supports core cloud operations:
- Virtual machine deployment
- Inspection of deployed VMs
- Virtual machine migration
- Network profiling
- Automatic VM integration with all aspects of Salt
- Image Pre-seeding
Many features are currently under development to enhance the capabilities of
the Salt Virt systems.
Note
It is noteworthy that Salt was originally developed with the intent of
using the Salt communication system as the backbone to a cloud controller.
This means that the Salt Virt system is not an afterthought, but simply a
system that took a back seat to other development. The original attempt
to develop the cloud control aspects of Salt was a project called butter.
This project never took off, but it was functional and proved the early
viability of Salt as a cloud controller.
Salt Virt Tutorial
A tutorial about how to get Salt Virt up and running has been added to the
tutorial section:
Cloud Controller Tutorial
The Salt Virt Runner
The point of interaction with the cloud controller is the virt
runner. The virt runner comes with routines to execute specific
virtual machine routines.
Reference documentation for the virt runner is available with the runner
module documentation:
Virt Runner Reference
Based on Live State Data
The Salt Virt system is based on using Salt to query live data about
hypervisors and then using the data gathered to make decisions about cloud
operations. This means that no external resources are required to run Salt
Virt, and that the information gathered about the cloud is live and accurate.
Virtual Machine Network Profiles
Salt Virt allows for the network devices created for deployed virtual machines
to be finely configured. The configuration is a simple data structure which is
read from the config.option
function, meaning that the configuration can be
stored in the minion config file, the master config file, or the minion's
pillar.
This configuration option is called virt.nic
. By default the virt.nic
option is unset and falls back to a data structure which looks like this:
virt.nic:
  default:
    eth0:
      bridge: br0
      model: virtio
Note
The model does not need to be defined; Salt will default to the optimal
model used by the underlying hypervisor. In the case of KVM this model
is virtio
This configuration sets up a network profile called default. The default
profile creates a single Ethernet device on the virtual machine that is bridged
to the hypervisor's br0 interface. This default setup does not
require setting up the virt.nic
configuration, and is the reason why a
default install only requires setting up the br0 bridge device on the
hypervisor.
Define More Profiles
Many environments will require more complex network profiles and may require
more than one profile, this can be easily accomplished:
virt.nic:
  dual:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
  single:
    eth0:
      bridge: service_br
  triple:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
    eth2:
      bridge: dmz_br
  all:
    eth0:
      bridge: service_br
    eth1:
      bridge: storage_br
    eth2:
      bridge: dmz_br
    eth3:
      bridge: database_br
  dmz:
    eth0:
      bridge: service_br
    eth1:
      bridge: dmz_br
  database:
    eth0:
      bridge: service_br
    eth1:
      bridge: database_br
This configuration allows for one of six profiles to be selected, allowing
virtual machines to be created which attach to different networks depending
on the needs of the deployed VM.
Salt SSH
Note
On many systems, salt-ssh
will be in its own package, usually named
salt-ssh
.
In version 0.17.0 of Salt a new transport system was introduced, the ability
to use SSH for Salt communication. This addition allows for Salt routines to
be executed on remote systems entirely through ssh, bypassing the need for
a Salt Minion to be running on the remote systems and the need for a Salt
Master.
Note
The Salt SSH system does not supersede the standard Salt communication
systems; it simply offers an SSH-based alternative that does not require
ZeroMQ and a remote agent. Be aware that since all communication with Salt SSH is
executed via SSH it is substantially slower than standard Salt with ZeroMQ.
Salt SSH is very easy to use, simply set up a basic roster file of the
systems to connect to and run salt-ssh
commands in a similar way as
standard salt
commands.
Salt SSH Roster
The roster system in Salt allows for remote minions to be easily defined.
Simply create the roster file; the default location is /etc/salt/roster:
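web1: 192.168.42.1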
This is a very basic roster file where a Salt ID is being assigned to an IP
address. A more elaborate roster can be created:
web1:
  host: 192.168.42.1 # The IP addr or DNS hostname
  user: fred         # Remote executions will be executed as user fred
  passwd: foobarbaz  # The password to use for login, if omitted, keys are used
  sudo: True         # Whether to sudo to root, not enabled by default
web2:
  host: 192.168.42.2
Calling Salt SSH
The salt-ssh
command can be easily executed in the same way as a salt
command:
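salt-ssh '*' test.ping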
Commands with salt-ssh
follow the same syntax as the salt
command.
The standard salt functions are available! The output is the same as salt
and many of the same flags are available. Please see
http://docs.saltstack.com/ref/cli/salt-ssh.html for all of the available
options.
Raw Shell Calls
By default salt-ssh
runs Salt execution modules on the remote system,
but salt-ssh
can also execute raw shell commands:
salt-ssh '*' -r 'ifconfig'
States Via Salt SSH
The Salt State system can also be used with salt-ssh
. The state system
abstracts the same interface to the user in salt-ssh
as it does when using
standard salt
. The intent is that Salt Formulas defined for standard
salt
will work seamlessly with salt-ssh
and vice-versa.
The standard Salt States walkthroughs function by simply replacing salt
commands with salt-ssh
.
Targeting with Salt SSH
Because the targeting approach differs in salt-ssh, only glob
and regex targets are supported as of this writing; the remaining target
systems still need to be implemented.
Salt Rosters
Salt rosters are pluggable systems added in Salt 0.17.0 to facilitate the
salt-ssh
system.
The roster system was created because salt-ssh
needs a means to
identify which systems need to be targeted for execution.
Note
The Roster System is not needed or used in standard Salt because the
master does not need to be initially aware of target systems, since the
Salt Minion checks itself into the master.
Since the roster system is pluggable, it can be easily augmented to attach to
any existing systems to gather information about what servers are presently
available and should be attached to by salt-ssh
. By default the roster
file is located at /etc/salt/roster.
How Rosters Work
The roster system compiles a data structure internally referred to as
targets. The targets data is a list of target systems and attributes describing how
to connect to said systems. The only requirement for a roster module in Salt
is to return the targets data structure.
Targets Data
The information which can be stored in a roster target is the following:
<Salt ID>:   # The id to reference the target system with
  host:      # The IP address or DNS name of the remote host
  user:      # The user to log in as
  passwd:    # The password to log in with
Running The Tests
To run the tests, use tests/runtests.py
, see --help
for more info.
Examples:
- To run all tests:
sudo ./tests/runtests.py
- Run unit tests only:
sudo ./tests/runtests.py --unit-tests
You will need 'mock' (https://pypi.python.org/pypi/mock) in addition to salt requirements in order to run the tests.
Writing Tests
Salt uses a test platform to verify functionality of components in a simple
way. Two testing systems exist to enable testing salt functions in somewhat
real environments. The two subsystems available are integration tests and
unit tests.
Salt uses the Python unittest2 system for testing.
Integration Tests
The integration tests start up a number of salt daemons to test functionality
in a live environment. These daemons include 2 salt masters, 1 syndic and 2
minions. This allows for the syndic interface to be tested and master/minion
communication to be verified. All of the integration tests are executed as
live salt commands sent through the started daemons.
Integration tests are particularly good at testing modules, states and shell
commands.
Unit Tests
Direct unit tests are also available, these tests are good for internal
functions.
Integration Tests
The Salt integration tests come with a number of classes and methods which
allow for components to be easily tested. These classes are generally inherited
from and provide specific methods for hooking into the running integration test
environment created by the integration tests.
It is noteworthy that since integration tests validate against a running
environment, they are generally the preferred means of writing tests.
The integration system is all located under tests/integration in the Salt
source tree.
Integration Classes
The integration classes are located in tests/integration/__init__.py and
can be extended therein. There are three classes available to extend:
ModuleCase
Used to define executions run via the master to minions and to call
single modules and states.
The available methods are as follows:
- run_function:
- Run a single salt function and condition the return down to match the
behavior of the raw function call. This will run the command and only
return the results from a single minion to verify.
- state_result:
- Return the result data from a single state return
- run_state:
- Run the state.single command and return the state return structure
SyndicCase
Used to execute remote commands via a syndic, only used to verify the
capabilities of the Syndic.
The available methods are as follows:
- run_function:
- Run a single salt function and condition the return down to match the
behavior of the raw function call. This will run the command and only
return the results from a single minion to verify.
ShellCase
Shell out to the scripts which ship with Salt.
The available methods are as follows:
- run_script:
- Execute a salt script with the given argument string
- run_salt:
- Execute the salt command, pass in the argument string as it would be
passed on the command line.
- run_run:
- Execute the salt-run command, pass in the argument string as it would be
passed on the command line.
- run_run_plus:
- Execute Salt run and the salt run function and return the data from
each in a dict
- run_key:
- Execute the salt-key command, pass in the argument string as it would be
passed on the command line.
- run_cp:
- Execute salt-cp, pass in the argument string as it would be
passed on the command line.
- run_call:
- Execute salt-call, pass in the argument string as it would be
passed on the command line.
Examples
Module Example via ModuleCase Class
Import the integration module, this module is already added to the python path
by the test execution. Inherit from the integration.ModuleCase
class. The
tests that execute against salt modules should be placed in the
tests/integration/modules directory so that they will be detected by the test
system.
Now the workhorse method run_function
can be used to test a module:
import os
import integration
class TestModuleTest(integration.ModuleCase):
    '''
    Validate the test module
    '''
    def test_ping(self):
        '''
        test.ping
        '''
        self.assertTrue(self.run_function('test.ping'))

    def test_echo(self):
        '''
        test.echo
        '''
        self.assertEqual(self.run_function('test.echo', ['text']), 'text')
ModuleCase can also be used to test states, when testing states place the test
module in the tests/integration/states directory. The state_result
and
the run_state
methods are the workhorse here:
import os
import shutil
import integration
HFILE = os.path.join(integration.TMP, 'hosts')
class HostTest(integration.ModuleCase):
    '''
    Validate the host state
    '''
    def setUp(self):
        shutil.copyfile(os.path.join(integration.FILES, 'hosts'), HFILE)
        super(HostTest, self).setUp()

    def tearDown(self):
        if os.path.exists(HFILE):
            os.remove(HFILE)
        super(HostTest, self).tearDown()

    def test_present(self):
        '''
        host.present
        '''
        name = 'spam.bacon'
        ip = '10.10.10.10'
        ret = self.run_state('host.present', name=name, ip=ip)
        result = self.state_result(ret)
        self.assertTrue(result)
        with open(HFILE) as fp_:
            output = fp_.read()
            self.assertIn('{0}\t\t{1}'.format(ip, name), output)
The above example also demonstrates using the integration files and the
integration state tree. The variable integration.FILES will point to the
directory used to store files that can be used, or added to, in order to enable tests
that require files. The location integration.TMP can also be used to store
temporary files that the test system will clean up when the execution finishes.
The integration state tree can be found at tests/integration/files/file/base.
This is where the referenced host.present sls file resides.
Shell Example via ShellCase
Validating the shell commands can be done via shell tests. Here are some
examples:
import sys
import shutil
import tempfile
import integration
class KeyTest(integration.ShellCase):
    '''
    Test salt-key script
    '''
    _call_binary_ = 'salt-key'

    def test_list(self):
        '''
        test salt-key -L
        '''
        data = self.run_key('-L')
        expect = [
            'Unaccepted Keys:',
            'Accepted Keys:',
            'minion',
            'sub_minion',
            'Rejected:', '']
        self.assertEqual(data, expect)
This example verifies that the salt-key
command executes and returns as
expected by making use of the run_key
method.
All shell tests should be placed in the tests/integration/shell directory.
Reactor System
Salt version 0.11.0 introduced the reactor system. The premise behind the
reactor system is that with Salt's events and the ability to execute commands,
a logic engine could be put in place to allow events to trigger actions, or
more accurately, reactions.
This system binds sls files to event tags on the master. These sls files then
define reactions. This means that the reactor system has two parts. First, the
reactor option needs to be set in the master configuration file. The reactor
option allows for event tags to be associated with sls reaction files. Second,
these reaction files use highdata (like the state system) to define reactions
to be executed.
Event System
A basic understanding of the event system is required to understand reactors.
The event system is a local ZeroMQ PUB interface which fires salt events. This
event bus is an open system used for sending information notifying Salt and
other systems about operations.
The event system fires events with very specific criteria. Every event has a
tag which is comprised of a maximum of 20 characters. Event tags
allow for fast top level filtering of events. In addition to the tag, each
event has a data structure. This data structure is a dict, which contains
information about the event.
Mapping Events to Reactor SLS Files
The event tag and data are both critical when working with the reactor system.
In the master configuration file under the reactor option, tags are associated
with lists of reactor sls formulas (globs can be used for matching):
reactor:
  - 'auth':
    - /srv/reactor/authreact1.sls
    - /srv/reactor/authreact2.sls
  - 'minion_start':
    - /srv/reactor/start.sls
When an event with a tag of auth
is fired, the reactor will catch the event
and render the two listed files. The rendered files are standard sls files, so
by default they are yaml + Jinja. The Jinja is packed with a few data
structures similar to state and pillar sls files. The data available is in
tag
and data
variables. The tag
variable is just the tag in the
fired event and the data
variable is the event's data dict. Here is a
simple reactor sls:
{% if data['id'] == 'mysql1' %}
highstate_run:
  cmd.state.highstate:
    - tgt: mysql1
{% endif %}
This simple reactor file uses Jinja to further refine the reaction to be made.
If the id
in the event data is mysql1
(in other words, if the name of
the minion is mysql1
) then the following reaction is defined. The same
data structure and compiler used for the state system is used for the reactor
system. The only difference is that the data is matched up to the salt command
API and the runner system. In this example, a command is published to the
mysql1
minion with a function of state.highstate
. Similarly, a runner
can be called:
{% if data['data']['overstate'] == 'refresh' %}
overstate_run:
  runner.state.over
{% endif %}
This example will execute the state.over runner and initiate an overstate
execution.
Fire an event
From a minion, run the command below:
salt-call event.fire_master '{"overstate": "refresh"}' 'foo'
In reactor formula files that are associated with the tag foo
, data can be
accessed via data['data']
. The above command passes a dictionary as data; its
overstate
key can be accessed via data['data']['overstate']
. See
salt.modules.event
for more information.
Salt Conventions
SaltStack Packaging Guide
Since Salt provides a powerful toolkit for system management and automation,
the package can be split into a number of sub-tools. While packaging Salt as
a single package containing all components is perfectly acceptable, split
packages should follow this convention.
Source Files
Release packages should always be built from the source tarball distributed via
pypi. Release packages should NEVER use a git checkout as the source for
distribution.
Single Package
Shipping Salt as a single package, where the minion, master and all tools are
together is perfectly acceptable and practiced by distributions such as
FreeBSD.
Split Package
Salt should always be split in a standard way, with standard dependencies; this lowers
cross-distribution confusion about which components are going to be shipped with
specific packages. These packages can be defined from the Salt source as of
Salt 0.17.0:
Salt Common
The salt-common or salt package should contain the files provided by the
salt python package, or all files distributed from the salt/
directory in
the source distribution packages. The documentation contained under the
doc/
directory can be a part of this package but splitting out a doc
package is preferred.
Since salt-call is the entry point for utilizing the libs and is useful for all
salt packages, it is included in the salt-common package.
Files
- salt/*
- man/salt.7
- scripts/salt-call
- tests/*
- man/salt-call.1
Depends
- Python 2.6-2.7
- PyYAML
- Jinja2
Salt Master
The salt-master package contains the applicable scripts, related man
pages and init information for the given platform.
Files
- scripts/salt-master
- scripts/salt
- scripts/salt-run
- scripts/salt-key
- scripts/salt-cp
- pkg/<master init data>
- man/salt.1
- man/salt-master.1
- man/salt-run.1
- man/salt-key.1
- man/salt-cp.1
- conf/master
Depends
- Salt Common
- ZeroMQ >= 3.2
- PyZMQ >= 2.10
- PyCrypto
- M2Crypto
- Python MessagePack (Messagepack C lib, or msgpack-pure)
Salt Syndic
The Salt Syndic package can be rolled completely into the Salt Master package.
Platforms which start services as part of the package deployment need to
maintain a separate salt-syndic package (primarily Debian based platforms).
The Syndic may optionally depend on nothing more than the Salt Master, since
the master will bring in all needed dependencies; otherwise, fall back to the
platform-specific packaging guidelines.
Files
- scripts/salt-syndic
- pkg/<syndic init data>
- man/salt-syndic.1
Depends
- Salt Common
- Salt Master
- ZeroMQ >= 3.2
- PyZMQ >= 2.10
- PyCrypto
- M2Crypto
- Python MessagePack (Messagepack C lib, or msgpack-pure)
Salt Minion
The Minion is a standalone package and should not be split beyond the
salt-minion and salt-common packages.
Files
- scripts/salt-minion
- pkg/<minion init data>
- man/salt-minion.1
- conf/minion
Depends
- Salt Common
- ZeroMQ >= 3.2
- PyZMQ >= 2.10
- PyCrypto
- M2Crypto
- Python MessagePack (Messagepack C lib, or msgpack-pure)
Salt SSH
Since Salt SSH does not require the same dependencies as the minion and master, it
should be split out.
Files
- scripts/salt-ssh
- man/salt-ssh.1
Salt Doc
Whether to build a separate documentation package is largely up to the
distribution. A completely split packaging will split out the documentation,
but some platform conventions do not prefer this.
If the documentation is not split out, it should be included with the
Salt Common package.
Name
Optional Depends
- Salt Common
- Python Sphinx
- Make
Salt Release Process
The goal for Salt projects is to cut a new feature release every four to six
weeks. This document outlines the process for these releases, and the
subsequent bug fix releases which follow.
Feature Release Process
When a new release is ready to be cut, the person responsible for cutting the
release will follow the following steps (written using the 0.16 release as an
example):
- All open issues on the release milestone should be moved to the next release
milestone (e.g. from the 0.16 milestone to the 0.17 milestone).
- Release notes should be created documenting the major new features and
bugfixes in the release.
- Create an annotated tag with only the major and minor version numbers,
preceded by the letter v (e.g. v0.16). This tag will reside on the
develop branch.
- Create a branch for the new release, using only the major and minor version
numbers (e.g. 0.16).
- On this new branch, create an annotated tag for the first revision release,
which is generally a release candidate. It should be preceded by the letter
v (e.g. v0.16.0RC).
- The release should be packaged from this annotated tag and uploaded to PyPI.
- The packagers should be notified on the salt-packagers mailing list so
they can create packages for all the major operating systems. (Note that
release candidates should go in the testing repositories.)
- After the packagers have been given a few days to compile the packages, the
release is announced on the salt-users mailing list.
- Log into RTD and add the new release there. (This has to be done manually.)
Maintenance and Bugfix Releases
Once a release has been cut, regular cherry-picking sessions should begin to
cherry-pick any bugfixes from the develop branch to the release branch
(e.g. 0.16). Once major bugs have been fixed and cherry-picked, a bugfix
release can be cut:
- On the release branch (i.e. 0.16), create an annotated tag for the
revision release. It should be preceded by the letter v (e.g. v0.16.2).
Release candidates are unnecessary for bugfix releases.
- The release should be packaged from this annotated tag and uploaded to PyPI.
- The packagers should be notified on the salt-packagers mailing list so
they can create packages for all the major operating systems.
- After the packagers have been given a few days to compile the packages, the
release is announced on the salt-users mailing list.
Salt Coding Style
Salt is developed with a certain coding style; while the style is dominantly
PEP 8, it is not completely PEP 8. It is also noteworthy that a few
development techniques are employed which should be adhered to. In the
end, the code is made to be "Salty".
Most importantly though, we will accept code that violates the coding style and
KINDLY ask the contributor to fix it, or go ahead and fix the code on behalf of
the contributor. Coding style is NEVER grounds to reject code contributions,
and is never grounds to talk down to another member of the community (There are
no grounds to treat others without respect, especially people working to
improve Salt)!!
Strings
Salt follows a few rules when formatting strings:
Single Quotes
In Salt, all strings use single quotes unless there is a good reason not to.
This means that docstrings use single quotes, standard strings use single
quotes etc.:
def foo():
    '''
    A function that does things
    '''
    name = 'A name'
    return name
Docstring Conventions
Docstrings should always start with a newline after the opening quotes; docutils
takes care of the newline, and it makes the code cleaner and more vertical:
GOOD:
def bar():
    '''
    Here lies a docstring with a newline after the quotes and is the salty
    way to handle it! Vertical code is the way to go!
    '''
    return
BAD:
def baz():
    '''This is not ok!'''
    return
Imports
Salt code prefers importing modules and not explicit functions. This is both a
style and functional preference. The functional preference originates around
the fact that the module import system used by pluggable modules will include
callable objects (functions) that exist in the direct module namespace. This
is not only messy, but may unintentionally expose functions from imported
Python libraries to the Salt interface and pose a security problem.
To say this more directly with an example, this is GOOD:
import os

def minion_path():
    path = os.path.join(self.opts['cachedir'], 'minions')
    return path
This on the other hand is DISCOURAGED:
from os.path import join

def minion_path():
    path = join(self.opts['cachedir'], 'minions')
    return path
The exception to this rule is importing exceptions; directly importing
exceptions is generally preferred.
This is a good way to import exceptions:
from salt.exceptions import CommandExecutionError
Absolute Imports
Although absolute imports seem like an awesome idea, please do not use them.
Extra care would be necessary all over Salt's code in order for absolute
imports to work as expected. Believe it, it has been tried before: as an
example, by renaming salt.modules.sysmod to salt.modules.sys, all
other salt modules which needed to import sys would also have to
import absolute_import, which should be avoided.
Vertical is Better
When writing Salt code, vertical code is generally preferred. This is not a hard
rule but more of a guideline. As PEP 8 specifies, Salt code should not exceed 79
characters on a line, but it is preferred to separate code out into more
newlines in some cases for better readability:
import os

os.chmod(
        os.path.join(self.opts['sock_dir'],
                     'minion_event_pub.ipc'),
        448
)
Where there are more line breaks, this is also apparent when constructing a
function with many arguments, something very common in state functions for
instance:
def managed(name,
            source=None,
            source_hash='',
            user=None,
            group=None,
            mode=None,
            template=None,
            makedirs=False,
            context=None,
            replace=True,
            defaults=None,
            env=None,
            backup='',
            **kwargs):
Note
Making function and class definitions vertical is only required if the
arguments are longer than 80 characters. Otherwise, the formatting is
optional and both are acceptable.
Indenting
Some confusion exists in the Python world about indenting things like function
calls; the above examples use 8 spaces when indenting comma-delimited
constructs.
The confusion arises because the pep8 program INCORRECTLY flags this as wrong,
whereas PEP 8, the document, cites using only 4 spaces here as wrong, since it
doesn't differentiate from a new indent level.
Right:
def managed(name,
        source=None,
        source_hash='',
        user=None)
WRONG:
def managed(name,
    source=None,
    source_hash='',
    user=None)
Lining up the indent is also correct:
def managed(name,
            source=None,
            source_hash='',
            user=None)
This also applies to function calls and other hanging indents.
pep8 and Flake8 (and, by extension, the vim plugin Syntastic) will complain
about the double indent for hanging indents. This is a known conflict between
pep8 (the script) and the actual PEP 8 standard. It is recommended that this
particular warning be ignored with the following lines in
~/.config/flake8
:
[flake8]
ignore = E226,E241,E242,E126
Make sure your Flake8/pep8 are up to date. The first three errors are ignored
by default and are present here to keep the behavior the same. This will also
work for pep8 without the Flake8 wrapper -- just replace all instances of
'flake8' with 'pep8', including the filename.
Code Churn
Many pull requests have been submitted that only churn code in the name of
PEP 8. Code churn is a leading source of bugs and is strongly discouraged.
While style fixes are encouraged they should be isolated to a single file per
commit, and the changes should be legitimate, if there are any questions about
whether a style change is legitimate please reference this document and the
official PEP 8 (http://www.python.org/dev/peps/pep-0008/) document before
changing code. Many claims that a change is PEP 8 have been invalid, please
double check before committing fixes.
Salt Stack Git Policy
The Salt Stack team follows a git policy to maintain stability and consistency
with the repository. The git policy has been developed to encourage
contributions and make contributing to Salt as easy as possible. Code
contributors to Salt Stack projects DO NOT NEED TO READ THIS DOCUMENT, because
all contributions come into Salt Stack via a single gateway to make it as
easy as possible for contributors to give us code.
The primary rule of git management in Salt Stack is to make life easy on
contributors and developers to send in code. Simplicity is always a goal!
New Code Entry
All new Salt Stack code is posted to the develop branch; this is the single
point of entry. The only exception here is when a bugfix to develop cannot be
cleanly merged into a release branch and the bugfix needs to be rewritten for
the release branch.
Release Branching
Salt Stack maintains two types of releases, Feature Releases and
Point Releases. A feature release is managed by incrementing the first or
second release point number, so 0.10.5 -> 0.11.0 signifies a feature release
and 0.11.0 -> 0.11.1 signifies a point release; a hypothetical
0.42.7 -> 1.0.0 would also signify a feature release.
Feature Release Branching
Each feature release is maintained in a dedicated git branch derived from the
last applicable release commit on develop. All file changes relevant to the
feature release will be completed in the develop branch prior to the creation
of the feature release branch. The feature release branch will be named after
the relevant numbers to the feature release, which constitute the first two
numbers. This means that the release branch for the 0.11.0 series is named
0.11.
A feature release branch is created with the following command:
# git checkout -b 0.11 # From the develop branch
# git push origin 0.11
Point Releases
Each point release is derived from its parent release branch. Constructing point
releases is a critical aspect of Salt development and is managed by members of
the core development team. Point releases comprise bug and security fixes which
are cherry picked from develop onto the aforementioned release branch. At the
time when a core developer accepts a pull request a determination needs to be
made if the commits in the pull request need to be backported to the release
branch. Some simple criteria are used to make this determination:
- Is this commit fixing a bug?
Backport
- Does this commit change or add new features in any way?
Don't backport
- Is this a PEP8 or code cleanup commit?
Don't backport
- Does this commit fix a security issue?
Backport
Determining when a point release is going to be made is up to the project
leader (Thomas Hatch). Generally point releases are made every 1-2 weeks or
if there is a security fix they can be made sooner.
The point release is only designated by tagging the commit on the release
branch with the release number, using the existing convention (version 0.11.1 is
tagged with v0.11.1). From the tag point a new source tarball is generated
and published to PyPI, and a release announcement is made.
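As a sketch of that tagging step (the tag message and the origin remote name are
assumptions), publishing a hypothetical 0.11.1 point release might look like:
# git checkout 0.11
# git tag -a v0.11.1 -m 'Salt 0.11.1'
# git push origin v0.11.1
# python setup.py sdist    # build the source tarball that is uploaded to PyPI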
Salt Development Guidelines
Deprecating Code
Salt should remain backwards compatible, though sometimes, this backwards
compatibility needs to be broken because a specific feature and/or solution is
no longer necessary or required. At first one might think, let me change this
code, it seems that it's not used anywhere else so it should be safe to remove.
Then, once there's a new release, users complain about functionality which was
removed and they where using it, etc. This should, at all costs, be avoided,
and, in these cases, that specific code should be deprecated.
Depending on the complexity and usage of a specific piece of code, the
deprecation time frame should be properly evaluated. As an example, a
deprecation warning which is shown for 2 major releases, for example 0.17.0
and 0.18.0, gives users enough time to stop using the deprecated code and
adapt to the new one.
For example, if you're deprecating the usage of a keyword argument to a
function, that specific keyword argument should remain in place for the full
deprecation time frame and if that keyword argument is used, a deprecation
warning should be shown to the user.
To help in this deprecation task, salt provides salt.utils.warn_until. The
idea behind this helper function is to show the deprecation warning until salt
reaches the provided version. Once that provided version is reached,
salt.utils.warn_until will raise a RuntimeError, making salt stop its
execution. This stoppage is unpleasant and will remind the developer that the
deprecation limit has been reached and that the code can then be safely
removed.
Consider the following example:
def some_function(bar=False, foo=None):
    if foo is not None:
        salt.utils.warn_until(
            (0, 18),
            'The \'foo\' argument has been deprecated and its '
            'functionality removed, as such, its usage is no longer '
            'required.'
        )
Consider that the current salt release is 0.16.0. Whenever foo is passed a
value different from None, that warning will be shown to the user. This will
happen in versions 0.16.2 to 0.18.0, after which a RuntimeError will be
raised, making us aware that the deprecated code should now be removed.
Dunder Dictionaries
Salt provides several special "dunder" dictionaries as a convenience for Salt
development. These include __opts__, __context__, __salt__, and
others. This document will describe each dictionary and detail where they exist
and what information and/or functionality they provide.
__opts__
Available in
The __opts__
dictionary contains all of the options passed in the
configuration file for the master or minion.
Note
In many places in salt, instead of pulling raw data from the __opts__
dict, configuration data should be pulled from the salt get functions
such as config.get, aka __salt__['config.get']('foo:bar').
The get functions also allow for dict traversal via the : delimiter.
Consider using get functions whenever using __opts__ or __pillar__ and
__grains__ (when using grains for configuration data).
The configuration file data made available in the __opts__ dictionary is the
configuration data relative to the running daemon. If the modules are loaded
and executed by the master, then the master configuration data is available;
if the modules are executed by the minion, then the minion configuration is
available. Any additional information passed into the respective configuration
files is made available.
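As a small sketch of the note above (the foo:bar key and the default value are
hypothetical), an execution module function would prefer config.get over
reading __opts__ directly:
def get_setting():
    '''
    Look up foo:bar from the minion config, grains, or pillar, falling
    back to a default instead of reading __opts__ directly.
    '''
    return __salt__['config.get']('foo:bar', 'default-value')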
__salt__
Available in
- Execution Modules
- State Modules
- Returners
__salt__
contains the execution module functions. This allows for all
functions to be called as they have been set up by the salt loader.
__salt__['cmd.run']('fdisk -l')
__salt__['network.ip_addrs']()
__grains__
Available in
- Execution Modules
- State Modules
- Returners
- External Pillar
The __grains__
dictionary contains the grains data generated by the minion
that is currently being worked with. In execution modules, state modules and
returners this is the grains of the minion running the calls, when generating
the external pillar the __grains__
is the grains data from the minion that
the pillar is being generated for.
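For example (a minimal sketch), an execution module function can branch on a
grain such as os_family:
def pkg_tool():
    '''
    Return the package tool implied by the minion's os_family grain.
    '''
    if __grains__.get('os_family') == 'Debian':
        return 'apt'
    if __grains__.get('os_family') == 'RedHat':
        return 'yum'
    return 'unknown'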
__pillar__
Available in
- Execution Modules
- State Modules
- Returners
The __pillar__
dictionary contains the pillar for the respective minion.
__context__
__context__ exists in state modules and execution modules.
During a state run, the __context__ dictionary persists across all states
that are run and then is destroyed when the state run ends.
When running an execution module, __context__ persists across all module
executions until the modules are refreshed, such as when saltutil.sync_all
or state.highstate are executed.
A great place to see how to use __context__
is in the cp.py module in
salt/modules/cp.py. The fileclient authenticates with the master when it is
instantiated and then is used to copy files to the minion. Rather than create a
new fileclient for each file that is to be copied down, one instance of the
fileclient is instantiated in the __context__
dictionary and is reused for
each file. Here is an example from salt/modules/cp.py:
if not 'cp.fileclient' in __context__:
    __context__['cp.fileclient'] = salt.fileclient.get_file_client(__opts__)
Note
Because __context__ may or may not have been destroyed, always be
sure to check for the existence of the key in __context__ and
generate the key before using it.
External Pillars
Salt provides a mechanism for generating pillar data by calling external
pillar interfaces. This document will describe an outline of an ext_pillar
module.
Location
Salt expects to find your ext_pillar
module in the same location where it
looks for other python modules. If the extension_modules
option in your
Salt master configuration is set, Salt will look for a pillar
directory
under there and load all the modules it finds. Otherwise, it will look in
your Python site-packages salt/pillar
directory.
Configuration
The external pillars that are called when a minion refreshes its pillars is
controlled by the ext_pillar
option in the Salt master configuration. You
can pass a single argument, a list of arguments or a dictionary of arguments
to your pillar:
ext_pillar:
  - example_a: some argument
  - example_b:
    - argumentA
    - argumentB
  - example_c:
      keyA: valueA
      keyB: valueB
The Module
Imports and Logging
Import modules your external pillar module needs. You should first include
generic modules that come with stock Python:
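At minimum, the standard logging module used below qualifies:
import logging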
And then start logging. This is an idiomatic way of setting up logging in Salt:
log = logging.getLogger(__name__)
Finally, load modules that are specific to what you are doing. You should catch
import errors and set a flag that the __virtual__ function can use later.
try:
    import weird_thing
    example_a_loaded = True
except ImportError:
    example_a_loaded = False
Options
If you define an __opts__
dictionary, it will be merged into the
__opts__
dictionary handed to the ext_pillar
function later. This is a
good place to put default configuration items. The convention is to name
things modulename.option
.
__opts__ = { 'example_a.someconfig': 137 }
Initialization
If you define an __init__
function, it will be called with the following
signature:
def __init__( __opts__ ):
    # Do init work here
Note: The __init__ function is run every time a particular minion causes
the external pillar to be called, so don't put heavy initialization code here.
The __init__ functionality is a side-effect of the Salt loader, so it may
not be as useful in pillars as it is in other Salt items.
__virtual__
If you define a __virtual__
function, you can control whether or not this
module is visible. If it returns False
then Salt ignores this module. If
it returns a string, then that string will be how Salt identifies this external
pillar in its ext_pillar
configuration. If this function does not exist,
then the name Salt's ext_pillar
will use to identify this module is its
conventional name in Python.
This is useful to write modules that can be installed on all Salt masters, but
will only be visible if a particular piece of software your module requires is
installed.
# This external pillar will be known as `example_a`
def __virtual__():
    if example_a_loaded:
        return 'example_a'
    else:
        return False

# This external pillar will be known as `something_else`
def __virtual__():
    if example_a_loaded:
        return 'something_else'
    else:
        return False
ext_pillar
This is where the real work of an external pillar is done. If this module is
active and has a function called ext_pillar
, whenever a minion updates its
pillar this function is called.
How it is called depends on how it is configured in the Salt master
configuration. The first argument is always the current pillar dictionary; this
contains pillar items that have already been added, starting with the data from
pillar_roots, and then from any already-run external pillars.
Using our example above:
ext_pillar( pillar, 'some argument' ) # example_a
ext_pillar( pillar, 'argumentA', 'argumentB' ) # example_b
ext_pillar( pillar, keyA='valueA', keyB='valueB' ) # example_c
In the example_a
case, pillar
will contain the items from the
pillar_roots
, in example_b
pillar
will contain that plus the items
added by example_a
, and in example_c
pillar
will contain that plus
the items added by example_b
.
This function should return a dictionary, the contents of which are merged in
with all of the other pillars and returned to the minion. Note: this function
is called once for each minion that fetches its pillar data.
def ext_pillar( pillar, *args, **kwargs ):
    my_pillar = {}
    # Do stuff
    return my_pillar
You shouldn't just add items to pillar
and return that, since that will
cause Salt to merge data that already exists. Rather, just return the items
you are adding or changing. You could, however, use pillar
in your module
to make some decision based on pillar data that already exists.
This function has access to some useful globals:
- __opts__: A dictionary of mostly Salt configuration options. If you had an
__opts__ dictionary defined in your module, those values will be
included. Also included and most useful is __opts__['id'], which
is the minion id of the minion asking for pillar data.
- __salt__: A dictionary of Salt module functions, useful so you don't have to
duplicate functions that already exist. E.g.
__salt__['cmd.run']('ls -l'). Note: this runs on the master.
- __grains__: A dictionary of the grains of the minion making this pillar call.
Example configuration
As an example, if you wanted to add external pillar via the cmd_json
external pillar, add something like this to your master config:
ext_pillar:
- cmd_json: "echo {'arg':'value'}"
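Putting the pieces above together, a minimal external pillar module might look
like the following sketch (the module name example_a, the optional dependency,
and the returned key are hypothetical):
import logging

log = logging.getLogger(__name__)

# Set a flag if the optional dependency is missing (see Imports and Logging).
try:
    import weird_thing  # hypothetical third-party library
    example_a_loaded = True
except ImportError:
    example_a_loaded = False


def __virtual__():
    # Only expose this pillar when its dependency is importable.
    return 'example_a' if example_a_loaded else False


def ext_pillar(pillar, *args, **kwargs):
    # Return only the items being added; Salt merges them into the
    # pillar data that already exists.
    return {'example_a_args': list(args)}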
Modular Systems
When first working with Salt, it is not always clear where all of the modular
components are and what they do. Salt comes loaded with more modular systems
than many users are aware of, making Salt very easy to extend in many places.
The most commonly used modular systems are execution modules and states. But
the modular systems extend well beyond the more easily exposed components
and are often added to Salt to make the complete system more flexible.
Execution Modules
Execution modules make up the core of the functionality used by Salt to
interact with client systems. The execution modules create the core system
management library used by all Salt systems, including states, which
interact with minion systems.
Execution modules are completely open ended in their execution. They can
be used to do anything required on a minion, from installing packages to
detecting information about the system. The only restraint in execution
modules is that the defined functions always return a JSON serializable
object.
For a list of all built in execution modules, click here
For information on writing execution modules, see this page.
State Modules
State modules are used to define the state interfaces used by Salt States.
These modules are restrictive in that they must follow a number of rules to
function properly.
Note
State modules define the available routines in sls files. If calling
an execution module directly is desired, take a look at the module
state.
Auth
The auth module system allows for external authentication routines to be easily
added into Salt. The auth function needs to be implemented to satisfy the
requirements of an auth module. Use the pam
module as an example.
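As a hedged sketch of that interface, an auth module only needs an auth
function returning True or False (the hard-coded credentials are purely
illustrative):
def auth(username, password):
    '''
    Return True if the credentials are valid. A real module would check
    against PAM, LDAP, or another backend instead of fixed values.
    '''
    return username == 'salt-eauth-user' and password == 'secret'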
Fileserver
The fileserver module system is used to create fileserver backends used by the
Salt Master. These modules need to implement the functions used in the
fileserver subsystem. Use the gitfs
module as an example.
Grains
Grain modules define extra routines to populate grains data. All defined
public functions will be executed and MUST return a Python dict object. The
dict keys will be added to the grains made available to the minion.
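A minimal sketch of a grain module (the grain name is hypothetical); every
public function returns a dict whose keys are merged into the minion's grains:
import os


def datacenter():
    '''
    Populate a hypothetical datacenter grain from an environment variable.
    '''
    return {'datacenter': os.environ.get('DATACENTER', 'unknown')}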
Output
The output modules supply the outputter system with routines to display data
in the terminal. These modules are very simple and only require the output
function to execute. The default system outputter is the nested
module.
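As a hedged sketch, an outputter module only needs an output function that
turns the return data into the string to display:
import pprint


def output(data):
    '''
    Render the return data as a pretty-printed string.
    '''
    return pprint.pformat(data)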
Pillar
Used to define optional external pillar systems. The pillar generated via
the filesystem pillar is passed into external pillars. This is commonly used
as a bridge to database data for pillar, but is also the backend to the libvirt
state used to generate and sign libvirt certificates on the fly.
Renderers
Renderers are the system used to render sls files into salt highdata for the
state compiler. They can be as simple as the py
renderer and as complex as
stateconf
and pydsl
.
Returners
Returners are used to send data from minions to external sources, commonly
databases. A full returner will implement all routines to be supported as an
external job cache. Use the redis
returner as an example.
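A hedged sketch of a returner; the single required entry point is a returner
function that receives the return data (writing to a local file here rather
than a database such as redis):
import json


def returner(ret):
    '''
    Append the job return data to a local file. A full returner would
    instead send this to an external store such as redis.
    '''
    with open('/var/log/salt/returns.log', 'a') as fp_:
        fp_.write(json.dumps(ret) + '\n')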
Runners
Runners are purely master-side execution sequences. These range from simple
reporting to orchestration engines like the overstate.
Tops
Tops modules are used to convert external data sources into top file data for
the state system.
Wheel
The wheel system is used to manage master side management routines. These
routines are primarily intended for the API to enable master configuration.
Package Providers
This page contains guidelines for writing package providers.
Package Functions
One of the most important features of Salt is package management. There is no
shortage of package managers, so in the interest of providing a consistent
experience in pkg
states, there are certain functions
that should be present in a package provider. Note that these are subject to
change as new features are added or existing features are enhanced.
list_pkgs
This function should declare an empty dict, and then add packages to it by
calling pkg_resource.add_pkg
, like
so:
__salt__['pkg_resource.add_pkg'](ret, name, version)
The last thing that should be done before returning is to execute
pkg_resource.sort_pkglist
. This
function does not presently do anything to the return dict, but will be used in
future versions of Salt.
__salt__['pkg_resource.sort_pkglist'](ret)
list_pkgs
returns a dictionary of installed packages, with the keys being
the package names and the values being the version installed. Example return
data:
{'foo': '1.2.3-4',
'bar': '5.6.7-8'}
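Combining those calls, a hedged sketch of list_pkgs for a hypothetical package
manager might look like this (the mypkg --list command and its output format
are assumptions):
def list_pkgs(*args, **kwargs):
    '''
    List installed packages as a dict of name -> version.
    '''
    ret = {}
    out = __salt__['cmd.run']('mypkg --list')  # hypothetical manager command
    for line in out.splitlines():
        comps = line.split()
        if len(comps) < 2:
            continue
        __salt__['pkg_resource.add_pkg'](ret, comps[0], comps[1])
    __salt__['pkg_resource.sort_pkglist'](ret)
    return ret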
latest_version
Accepts an arbitrary number of arguments. Each argument is a package name. The
return value for a package will be an empty string if the package is not found
or if the package is up-to-date. The only case in which a non-empty string is
returned is if the package is available for new installation (i.e. not already
installed) or if there is an upgrade available.
If only one argument was passed, this function returns a string; otherwise a
dict of name/version pairs is returned.
This function must also accept **kwargs, in order to receive the
fromrepo and repo keyword arguments from pkg states. Where supported,
these arguments should be used to find the install/upgrade candidate in the
specified repository. The fromrepo kwarg takes precedence over repo, so
if both of those kwargs are present, the repository specified in fromrepo
should be used. However, if repo is used instead of fromrepo, it should
still work, to preserve backwards compatibility with older versions of Salt.
version
Like latest_version
, accepts an arbitrary number of arguments and
returns a string if a single package name was passed, or a dict of name/value
pairs if more than one was passed. The only difference is that the return
values are the currently-installed versions of whatever packages are passed. If
the package is not installed, an empty string is returned for that package.
upgrade_available
Deprecated and destined to be removed. For now, should just do the following:
return __salt__['pkg.latest_version'](name) != ''
install
The following arguments are required and should default to None:
- name (for single-package pkg states)
- pkgs (for multiple-package pkg states)
- sources (for binary package file installation)
The first thing that this function should do is call
pkg_resource.parse_targets
(see below). This function will convert the SLS input into a more easily parsed
data structure.
pkg_resource.parse_targets
may
need to be modified to support your new package provider, as it does things
like parsing package metadata which cannot be done for every package management
system.
pkg_params, pkg_type = __salt__['pkg_resource.parse_targets'](name,
                                                              pkgs,
                                                              sources)
Two values will be returned to the install function. The first of
them will be a dictionary. The keys of this dictionary will be package names,
though the values will differ depending on what kind of installation is being
done:
- If name was provided (and pkgs was not), then there will
be a single key in the dictionary, and its value will be None. Once the
data has been returned, if the version keyword argument was
provided, then it should replace the None value in the dictionary.
- If pkgs was provided, then name is ignored, and the
dictionary will contain one entry for each package in the pkgs
list. The values in the dictionary will be None if a version was not
specified for the package, and the desired version if specified. See the
Multiple Package Installation Options section of the
pkg.installed state for more info.
- If sources was provided, then name is ignored, and the
dictionary values will be the path/URI for the package.
The second return value will be a string with two possible values:
repository or file. The install function can use this value
(if necessary) to build the proper command to install the targeted package(s).
Both before and after installing the target(s), you should run
list_pkgs to obtain a list of the installed packages. You should then
return the output of pkg_resource.find_changes:
return __salt__['pkg_resource.find_changes'](old, new)
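Tying those steps together, a hedged skeleton of install might look like the
following (the mypkg install command is an assumption; a real provider would
build the command from its own package manager):
def install(name=None, pkgs=None, sources=None, **kwargs):
    '''
    Install the requested package(s) and return a dict of changes.
    '''
    pkg_params, pkg_type = __salt__['pkg_resource.parse_targets'](name,
                                                                  pkgs,
                                                                  sources)
    old = list_pkgs()
    if pkg_type == 'repository':
        # Install by package name from the configured repositories
        targets = list(pkg_params.keys())
    else:
        # 'file' installs receive the path/URI of each package
        targets = list(pkg_params.values())
    __salt__['cmd.run']('mypkg install {0}'.format(' '.join(targets)))
    new = list_pkgs()
    return __salt__['pkg_resource.find_changes'](old, new)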
remove
Removes the passed package and returns a list of the packages removed.
Package Repo Functions
There are some functions provided by pkg
which are specific to package
repositories, and not to packages themselves. When writing modules for new
package managers, these functions should be made available as stated below, in
order to provide compatibility with the pkgrepo
state.
All repo functions should accept a basedir option, which defines which
directory repository configuration should be found in. The default for this
is dictated by the repo manager that is being used, and rarely needs to be
changed.
basedir = '/etc/yum.repos.d'
__salt__['pkg.list_repos'](basedir)
list_repos
Lists the repositories that are currently configured on this system.
__salt__['pkg.list_repos']()
Returns a dictionary, in the following format:
{'reponame': {'config_key_1': 'config value 1',
              'config_key_2': 'config value 2',
              'config_key_3': ['list item 1 (when appropriate)',
                               'list item 2 (when appropriate)']}}
get_repo
Displays all local configuration for a specific repository.
__salt__['pkg.get_repo'](repo='myrepo')
The information is formatted in much the same way as list_repos, but is
specific to only one repo.
{'config_key_1': 'config value 1',
 'config_key_2': 'config value 2',
 'config_key_3': ['list item 1 (when appropriate)',
                  'list item 2 (when appropriate)']}
del_repo
Removes the local configuration for a specific repository. Requires a repo
argument, which must match the locally configured name. This function returns
a string, which informs the user as to whether or not the operation was a
success.
__salt__['pkg.del_repo'](repo='myrepo')
mod_repo
Modifies the local configuration for one or more options for a configured repo.
This is also the way to create new repository configuration on the local
system; if a repo is specified which does not yet exist, it will be created.
The options specified for this function are specific to the system; please
refer to the documentation for your specific repo manager for specifics.
__salt__['pkg.mod_repo'](repo='myrepo', url='http://myurl.com/repo')
Low-Package Functions
In general, the standard package functions as described above will meet your
needs. These functions use the system's native repo manager (for instance,
yum or the apt tools). In most cases, the repo manager is actually separate
from the package manager. For instance, yum is usually a front-end for rpm, and
apt is usually a front-end for dpkg. When possible, the package functions that
use those package managers directly should do so through the low package
functions.
It is normal and sane for pkg to make calls to lowpkg, but lowpkg
must never make calls to pkg. This affects functions which are required
by both pkg and lowpkg, but where the technique in pkg is more performant
than what is available to lowpkg. When this is the case, the lowpkg
function that requires that technique must still use the lowpkg version.
list_pkgs
Returns a dict of packages installed, including the package name and version.
Can accept a list of packages; if none are specified, then all installed
packages will be listed.
installed = __salt__['lowpkg.list_pkgs']('foo', 'bar')
Example output:
{'foo': '1.2.3-4',
'bar': '5.6.7-8'}
verify
Many (but not all) package management systems provide a way to verify that the
files installed by the package manager have or have not changed. This function
accepts a list of packages; if none are specified, all packages will be
included.
installed = __salt__['lowpkg.verify']('httpd')
Example output:
{'/etc/httpd/conf/httpd.conf': {'mismatch': ['size', 'md5sum', 'mtime'],
'type': 'config'}}
file_list
Lists all of the files installed by all packages specified. If no packages are
specified, then all files for all known packages are returned.
installed = __salt__['lowpkg.file_list']('httpd', 'apache')
This function does not return which files belong to which packages; all files
are returned as one giant list (hence the file_list function name). However,
this information is still returned inside of a dict, so that it can provide
any errors to the user in a sane manner.
{'errors': ['package apache is not installed'],
'files': ['/etc/httpd',
'/etc/httpd/conf',
'/etc/httpd/conf.d',
'...SNIP...']}
file_dict
Lists all of the files installed by all packages specified. If no packages are
specified, then all files for all known packages are returned.
installed = __salt__['lowpkg.file_dict']('httpd', 'apache', 'kernel')
Unlike file_list, this function will break down which files belong to which
packages. It will also return errors in the same manner as file_list.
{'errors': ['package apache is not installed'],
'packages': {'httpd': ['/etc/httpd',
'/etc/httpd/conf',
'...SNIP...'],
'kernel': ['/boot/.vmlinuz-2.6.32-279.el6.x86_64.hmac',
'/boot/System.map-2.6.32-279.el6.x86_64',
'...SNIP...']}}
Logging
The salt project tries to get the logging to work for you and help us solve any
issues you might find along the way.
If you want some more information on the nitty-gritty of salt's logging
system, please head over to the logging development document. If all you're
after is salt's logging configuration, please continue reading.
Available Configuration Settings
log_file
The log records can be sent to a regular file, local path name, or network location.
Remote logging works best when configured to use rsyslogd(8) (e.g.: file:///dev/log
),
with rsyslogd(8) configured for network logging. The format for remote addresses is:
<file|udp|tcp>://<host|socketpath>:<port-if-required>/<log-facility>
.
Default: dependent on the binary being executed; for example, for salt-master
it is /var/log/salt/master.
Examples:
log_file: /var/log/salt/master
log_file: /var/log/salt/minion
log_file: file:///dev/log
log_file: udp://loghost:10514
log_level
Default: warning
The level of log record messages to send to the console.
One of all, garbage, trace, debug, info, warning, error, critical, quiet.
log_level_logfile
Default: warning
The level of messages to send to the log file.
One of all, garbage, trace, debug, info, warning, error, critical, quiet.
log_level_logfile: warning
log_datefmt
Default: %H:%M:%S
The date and time format used in console log messages. Allowed date/time
formatting can be seen on time.strftime
.
log_datefmt_logfile
Default: %Y-%m-%d %H:%M:%S
The date and time format used in log file messages. Allowed date/time
formatting can be seen on time.strftime
.
log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console
Default: [%(levelname)-8s] %(message)s
The format of the console logging messages. Allowed formatting options can
be seen on the LogRecord attributes.
log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile
Default: %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s
The format of the log file logging messages. Allowed formatting options can
be seen on the LogRecord attributes.
log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
log_granular_levels
Default: {}
This can be used to control logging levels more specifically. The example sets
the main salt library at the 'warning' level, but sets salt.modules
to log
at the debug
level:
log_granular_levels:
  'salt': 'warning'
  'salt.modules': 'debug'
External Logging Handlers
Besides the internal logging handlers used by salt, there are some external
which can be used, see the external logging handlers
document.
External Logging Handlers
Logstash Logging Handler
This module provides some Logstash logging handlers.
UDP Logging Handler
In order to set up the datagram handler for Logstash, please define the
following in the salt configuration file:
logstash_udp_handler:
  host: 127.0.0.1
  port: 9999
On the Logstash configuration file you need something like:
input {
  udp {
    type => "udp-type"
    format => "json_event"
  }
}
Please read the UDP input configuration page for additional information.
ZeroMQ Logging Handler
In order to set up the ZMQ handler for Logstash, please define the following
in the salt configuration file:
logstash_zmq_handler:
  address: tcp://127.0.0.1:2021
On the Logstash configuration file you need something like:
input {
  zeromq {
    type => "zeromq-type"
    mode => "server"
    topology => "pubsub"
    address => "tcp://0.0.0.0:2021"
    charset => "UTF-8"
    format => "json_event"
  }
}
Please read the ZeroMQ input configuration page for additional
information.
Important Logstash Setting
One of the most important settings that you should not forget in your
Logstash configuration file regarding these logging handlers is format.
Both the UDP and ZeroMQ inputs need to have format set to
json_event, which is what we send over the wire.
Log Level
Both the logstash_udp_handler
and the logstash_zmq_handler
configuration sections accept an additional setting log_level
. If not
set, the logging level used will be the one defined for log_level
in
the global configuration file section.
HWM
The high water mark for the ZMQ socket setting. Only applicable for the
logstash_zmq_handler
.
Sentry Logging Handler
Configuring the python Sentry client, Raven, should be done under the
sentry_handler
configuration key.
At the bare minimum, you need to define the DSN. As an example:
sentry_handler:
  dsn: https://pub-key:secret-key@app.getsentry.com/app-id
More complex configurations can be achieved, for example:
sentry_handler:
  servers:
    - https://sentry.example.com
    - http://192.168.1.1
  project: app-id
  public_key: deadbeefdeadbeefdeadbeefdeadbeef
  secret_key: beefdeadbeefdeadbeefdeadbeefdead
All the client configuration keys are supported, please see the
Raven client documentation.
The default logging level for the sentry handler is ERROR
. If you wish
to define a different one, define log_level
under the
sentry_handler
configuration key:
sentry_handler:
  dsn: https://pub-key:secret-key@app.getsentry.com/app-id
  log_level: warning
The available log levels are those also available for the salt cli
tools and configuration; salt --help
should give you the required
information.
Threaded Transports
Raven's documentation rightly suggests using its threaded transport for
critical applications. However, if you start having trouble with Salt after
enabling the threaded transport, try switching to a non-threaded transport to
see if that fixes your problem.
Introduction to Extending Salt
Salt is made to be used, and made to be extended. The primary goal of Salt is
to provide a foundation which can be used to solve problems without
assuming what those problems might be.
One of the greatest benefits of developing Salt has been the vast array of ways
in which people have wanted to use it; while the original intention was as a
communication layer for a cloud controller, Salt has been extended to
facilitate so much more.
Client API
The primary interface used to extend Salt is to simply use it. Salt executions
can be called via the Salt client API, making it easy to program master-side
solutions with Salt.
Adding Loadable Plugins
Salt is comprised of a core platform that loads many types of easy to write
plugins. The idea is to enable all of the breaking points in the Salt processes
to have a point of pluggable interaction. This means that all of the main
features of Salt can be extended, modified or used.
The breaking points and helping interfaces span from convenience master side
executions to manipulating the flow of how data is handled by Salt.
Minion Execution Modules
The minion execution modules, or just modules, are the core of what Salt is
and does. These modules are found in:
https://github.com/saltstack/salt/blob/develop/salt/modules
These modules are what is called by the Salt command line and the salt client
API. Adding modules is done by simply adding additional Python modules to the
modules directory and restarting the minion.
Grains
Salt grains, or "grains of truth" are bits of static information that are
generated when the minion starts. This information is useful when determining
what package manager to default to, or where certain configuration files are
stored on the minion.
The Salt grains are the interface used for auto detection and dynamic assignment
of execution modules and types to specific Salt minions.
The code used to generate the Salt grains can be found here:
https://github.com/saltstack/salt/blob/develop/salt/grains
Renderers
Salt states are controlled by simple data structures, these structures can be
abstracted in a number of ways. While the default is to be in a YAML file
wrapped in a jinja template, any abstraction can be used. This means that any
format that can be dreamed is possible, so long as a renderer is written for
it.
The existing renderers can be found here:
https://github.com/saltstack/salt/blob/develop/salt/renderers
Returners
The Salt commands all produce a return value, that return value is sent to the
Salt master by default, but it can be sent anywhere. The returner interface
makes it programmatically possible for the information to be sent to anything
from an SQL or NoSQL database, to a custom application made to use Salt.
The existing returners can be found here:
https://github.com/saltstack/salt/blob/develop/salt/returners
Runners
Sometimes a certain application can be made to execute and run from the
existing Salt command line. This is where the Salt runners come into play.
The Salt runners are what is called by the salt-run command and are meant to
act as a generic interface for encapsulating master-side executions.
Existing Salt runners are located here:
https://github.com/saltstack/salt/blob/develop/salt/runners
Modules
Salt modules are the functions called by the salt command.
Modules Are Easy to Write!
Salt modules are amazingly simple to write. Just write a regular Python module
or a regular Cython module and place it in a directory called _modules/
within the file_roots specified by the master config file, and
it will be synced to the minions when state.highstate is run, or by executing
the saltutil.sync_modules or saltutil.sync_all functions.
Any custom modules which have been synced to a minion, that are named the
same as one of Salt's default set of modules, will take the place of the default
module with the same name. Note that a module's default name is its filename
(i.e. foo.py becomes module foo), but that its name can be overridden
by using a __virtual__ function.
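For example (a minimal sketch), a module saved as foomod.py could present
itself to Salt as foo by defining __virtual__:
def __virtual__():
    '''
    Expose this module under the name 'foo' regardless of the filename.
    '''
    return 'foo'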
Since Salt modules are just Python/Cython modules, there are no restraints on
what you can put inside of a Salt module. If a Salt module has errors and
cannot be imported, the Salt minion will continue to load without issue and the
module with errors will simply be omitted.
If adding a Cython module the file must be named <modulename>.pyx
so that
the loader knows that the module needs to be imported as a Cython module. The
compilation of the Cython module is automatic and happens when the minion
starts, so only the *.pyx
file is required.
Cross Calling Modules
All of the Salt modules are available to each other, and can be "cross called".
This means that, when creating a module, functions in modules that already exist
can be called.
The variable __salt__
is packed into the modules after they are loaded into
the Salt minion. This variable is a Python dictionary
of all of the Salt functions, laid out in the same way that they are made available
to the Salt command.
Salt modules can be cross called by accessing the value in the __salt__
dict:
def foo(bar):
    return __salt__['cmd.run'](bar)
This code will call the Salt cmd module's run function and pass it the
argument bar.
Preloaded Modules Data
When interacting with modules often it is nice to be able to read information
dynamically about the minion, or load in configuration parameters for a module.
Salt allows for different types of data to be loaded into the modules by the
minion. As of this writing, Salt loads information gathered from the Salt
Grains system and from the minion configuration file.
Grains Data
The Salt minion detects information about the system when started. This allows
for modules to be written dynamically with respect to the underlying hardware
and operating system. This information is referred to as Salt Grains, or
"grains of salt". The Grains system was introduced to replace Facter, since
relying on a Ruby application from a Python application was both slow and
inefficient. Grains support replaces Facter in all Salt releases after 0.8.
The values detected by the Salt Grains on the minion are available in a
dict named __grains__
and can be accessed
from within callable objects in the Python modules.
To see the contents of the grains dict for a given system in your deployment
run the grains.items()
function:
salt 'hostname' grains.items
To use the __grains__ dict, simply access it as a Python dict from within your
code; an excellent example is available in the Grains module:
salt.modules.grains.
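As a hedged illustration, a custom module function might branch on grains data
like this (the function name pkg_manager and the mapping are only examples):

def pkg_manager():
    '''
    Return a package manager name based on the detected os grains.
    '''
    if __grains__.get('os') == 'Arch':
        return 'pacman'
    elif __grains__.get('os_family') == 'Debian':
        return 'apt'
    return 'unknown'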
Module Configuration
Since parameters for configuring a module may be desired, Salt allows for
configuration information stored in the main minion config file to be passed to
the modules.
Since the minion configuration file is a YAML document, arbitrary configuration
data can be passed in the minion config that is read by the modules. It is
strongly recommended that the values passed in the configuration file match
the module. This means that a value intended for the test module should be
named test.<value>.
Configuration also requires that default configuration parameters be loaded as
well. This can be done simply by adding the __opts__ dict to the top level of
the module.
The test module contains usage of the module configuration, and the default
configuration file for the minion contains the information and format used to
pass data to the modules. See salt.modules.test and conf/minion.
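For instance, a minimal sketch of a custom module reading such a value,
assuming a hypothetical option named mymodule.greeting in the minion config:

# Default values; the minion configuration is merged in when the module loads
__opts__ = {'mymodule.greeting': 'Hello'}

def greeting():
    '''
    Return the configured greeting, falling back to the default above.
    '''
    return __opts__.get('mymodule.greeting', 'Hello')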
Printout Configuration
Since module functions can return different data, and the way the data is
printed can greatly change the presentation, Salt has a printout
configuration.
When writing a module the __outputter__
dict can be declared in the module.
The __outputter__
dict contains a mapping of function name to Salt
Outputter.
__outputter__ = {
    'run': 'txt'
}
This will ensure that the text outputter is used.
Documentation
Salt modules are self documenting; the sys.doc() function will return the
documentation for all available modules:
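salt '*' sys.doc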
This function simply prints out the docstrings found in the modules; when
writing Salt modules, please follow the formatting conventions for docstrings as
they appear in the other modules.
Adding Documentation to Salt Modules
Since life is much better with documentation, it is strongly suggested that
all Salt modules have documentation added. Any Salt modules submitted for
inclusion in the main distribution of Salt will be required to have
documentation.
Documenting Salt modules is easy! Just add a Python docstring to the function.
def spam(eggs):
    '''
    A function to make some spam with eggs!

    CLI Example::

        salt '*' test.spam eggs
    '''
    return eggs
Now when the sys.doc call is executed the docstring will be cleanly returned
to the calling terminal.
How Functions are Read
In Salt, Python callable objects contained within a module are made available
to the Salt minion for use. The only exception to this rule is a callable
object with a name starting with an underscore _
.
Objects Loaded Into the Salt Minion
def foo(bar):
    return bar

class baz:
    def __init__(self, quo):
        pass
Objects NOT Loaded into the Salt Minion
def _foobar(baz):  # Preceded with an _
    return baz

cheese = {}  # Not a callable Python object
Useful Decorators for Modules
Sometimes when writing modules for large scale deployments you run into some small
things that end up severely complicating the code. To alleviate some of this pain
Salt has some useful decorators for use within modules!
Depends Decorator
When writing custom modules there are many times where some of the module will
work on all hosts, but some functions require (for example) a service to be installed.
Instead of trying to wrap much of the code in large try/except blocks you can use
a simple decorator to do this. If the dependencies passed to the decorator don't
exist, then the salt minion will remove those functions from the module on that host.
If a "fallback_funcion" is defined, it will replace the function instead of removing it
from salt.utils.decorators import depends
try:
import dependency_that_sometimes_exists
except ImportError:
pass
@depends('dependency_that_sometimes_exists')
def foo():
'''
Function with a dependency on the "dependency_that_sometimes_exists" module,
if the "dependency_that_sometimes_exists" is missing this function will not exist
'''
return True
def _fallback():
'''
Fallback function for the depends decorator to replace a function with
'''
return '"dependency_that_sometimes_exists" needs to be installed for this function to exist'
@depends('dependency_that_sometimes_exists', fallback_funcion=_fallback)
def foo():
'''
Function with a dependency on the "dependency_that_sometimes_exists" module.
If the "dependency_that_sometimes_exists" is missing this function will be
replaced with "_fallback"
'''
return True
Examples of Salt Modules
The existing Salt modules should be fairly easy to read and understand. The
goal of the main distribution's Salt modules is not only to build a set of
functions for Salt, but to stand as examples for building out more Salt
modules.
The existing modules can be found here:
https://github.com/saltstack/salt/blob/develop/salt/modules
The simplest module is the test module; it contains the simplest Salt
function, test.ping:
def ping():
    '''
    Just used to make sure the minion is up and responding
    Return True

    CLI Example::

        salt '*' test.ping
    '''
    return True
Full list of builtin execution modules
Virtual modules
salt.modules.pkg
pkg is a virtual module that is fulfilled by one of the following modules:
salt.modules.sys
The regular salt modules execute in a separate context from the salt minion
and manipulating the actual salt modules needs to happen in a higher level
context within the minion process. This is where the sys pseudo module is
used.
The sys pseudo module comes with a few functions that return data about the
available functions on the minion or allows for the minion modules to be
refreshed. These functions are as follows:
- salt.modules.sys.doc([module[, module.function]])
  Display the inline documentation for all available modules, or for the
  specified module or function.
- salt.modules.sys.reload_modules()
  Instruct the minion to reload all available modules in memory. This
  function can be called if the modules need to be re-evaluated for
  availability or new modules have been made available to the minion.
- salt.modules.sys.list_modules()
  List all available (loaded) modules.
- salt.modules.sys.list_functions()
  List all known functions that are in available (loaded) modules.
aliases | Manage the information in the aliases file
alternatives | Support for Alternatives system
apache | Support for Apache
apt | Support for APT (Advanced Packaging Tool)
archive | A module to wrap archive calls
at | Wrapper module for at(1)
augeas_cfg | Manages configuration files via augeas
bluez | Support for Bluetooth (using BlueZ in Linux).
brew | Homebrew for Mac OS X
bridge | Module for gathering and managing bridging information
bsd_shadow | Manage the password database on BSD systems
cassandra | Cassandra NoSQL Database Module
cmdmod | A module for shelling out
config | Return config information
cp | Minion side functions for salt-cp
cron | Work with cron
daemontools | daemontools service module. This module will create daemontools type
darwin_sysctl | Module for viewing and modifying sysctl parameters
data | Manage a local persistent data structure that can hold any arbitrary data
ddns | Support for RFC 2136 dynamic DNS updates.
debconfmod | Support for Debconf
debian_service | Service support for Debian systems (uses update-rc.d and /sbin/service)
dig | Compendium of generic DNS utilities
disk | Module for gathering disk information
djangomod | Manage Django sites
dnsmasq | Module for managing dnsmasq
dnsutil | Compendium of generic DNS utilities
dpkg | Support for DEB packages
ebuild | Support for Portage
eix | Support for Eix
eselect | Support for eselect, Gentoo's configuration and management tool.
event | Use the Salt Event System to fire events from the master to the minion and vice-versa.
extfs | Module for managing ext2/3/4 file systems
file | Manage information about regular files, directories,
freebsd_sysctl | Module for viewing and modifying sysctl parameters
freebsdjail | The jail module for FreeBSD
freebsdkmod | Module to manage FreeBSD kernel modules
freebsdpkg | Remote package support using pkg_add(1)
freebsdservice | The service module for FreeBSD
gem | Manage ruby gems.
gentoo_service | Top level package command wrapper, used to translate the os detected by grains to the correct service manager
gentoolkitmod | Support for Gentoolkit
git | Support for the Git SCM
glance | Module for handling openstack glance calls.
grains | Return/control aspects of the grains data
groupadd | Manage groups on Linux and OpenBSD
grub_legacy | Support for GRUB Legacy
guestfs | Interact with virtual machine images via libguestfs
hg | Support for the Mercurial SCM
hosts | Manage the information in the hosts file
img | Virtual machine image management tools
iptables | Support for iptables
key | Functions to view the minion's public key information
keyboard | Module for managing keyboards on POSIX-like systems.
keystone | Module for handling openstack keystone calls.
kmod | Module to manage Linux kernel modules
launchctl | Module for the management of MacOS systems that use launchd/launchctl
layman | Support for Layman
ldapmod | Salt interface to LDAP commands
linux_acl | Support for Linux File Access Control Lists
linux_lvm | Support for Linux LVM2
linux_sysctl | Module for viewing and modifying sysctl parameters
localemod | Module for managing locales on POSIX-like systems.
locate | Module for using the locate utilities
logrotate | Module for managing logrotate.
lxc | Work with linux containers
mac_group | Manage groups on Mac OS 10.7+
mac_user | Manage users on Mac OS 10.7+
makeconf | Support for modifying make.conf under Gentoo
match | The match module allows for match routines to be run and determine target specs
mdadm | Salt module to manage RAID arrays with mdadm
mine | The function cache system allows for data to be stored on the master so it can be easily read by other minions
modjk | Control Modjk via the Apache Tomcat "Status" worker
mongodb | Module to provide MongoDB functionality to Salt
monit | Monit service module.
moosefs | Module for gathering and managing information about MooseFS
mount | Salt module to manage unix mounts and the fstab file
munin | Run munin plugins/checks from salt and format the output as data.
mysql | Module to provide MySQL compatibility to salt.
netbsd_sysctl | Module for viewing and modifying sysctl parameters
netbsdservice | The service module for NetBSD
network | Module for gathering and managing network information
nfs3 | Module for managing NFS version 3.
nginx | Support for nginx
nova | Module for handling openstack nova calls.
npm | Manage and query NPM packages.
nzbget | Support for nzbget
openbsdpkg | Package support for OpenBSD
openbsdservice | The service module for OpenBSD
osxdesktop | Mac OS X implementations of various commands in the "desktop" interface
pacman | A module to wrap pacman calls, since Arch is the best
pam | Support for pam
parted | Module for managing partitions on POSIX-like systems.
pecl | Manage PHP pecl extensions.
pillar | Extract the pillar data for this minion
pip | Install Python packages with pip to either the system or a virtualenv
pkg_resource | Resources needed by pkg providers
pkgin | Package support for pkgin based systems, inspired from freebsdpkg module
pkgng | Support for pkgng, the new package manager for FreeBSD
pkgutil | Pkgutil support for Solaris
portage_config | Configure portage(5)
postgres | Module to provide Postgres compatibility to salt.
poudriere | Support for poudriere
ps | A salt interface to psutil, a system and process library.
publish | Publish a command from a minion to a target
puppet | Execute puppet routines
pw_group | Manage groups on FreeBSD
pw_user | Manage users with the useradd command
qemu_img | Qemu-img Command Wrapper
qemu_nbd | Qemu Command Wrapper
quota | Module for managing quotas on POSIX-like systems.
rabbitmq | Module to provide RabbitMQ compatibility to Salt.
rbenv | Manage ruby installations with rbenv.
reg | Manage the registry on Windows
ret | Module to integrate with the returner system and retrieve data sent to a salt returner
rh_ip | The networking module for RHEL/Fedora based distros
rh_service | Service support for RHEL-based systems, including support for both upstart and sysvinit
rpm | Support for rpm
rvm | Manage ruby installations and gemsets with RVM, the Ruby Version Manager.
s3 | Connection module for Amazon S3
saltutil | The Saltutil module is used to manage the state of the salt minion itself.
seed | Virtual machine image management tools
selinux | Execute calls on selinux
service | The default service module, if not otherwise specified salt will fall back
shadow | Manage the shadow file
smartos_imgadm | Module for running imgadm command on SmartOS
smartos_vmadm | Module for managing VMs on SmartOS
smf | Service support for Solaris 10 and 11, should work with other systems that use SMF also.
solaris_group | Manage groups on Solaris
solaris_shadow | Manage the password database on Solaris systems
solaris_user | Manage users with the useradd command
solarispkg | Package support for Solaris
solr | Apache Solr Salt Module
sqlite3 | Support for SQLite3
ssh | Manage client ssh components
state | Control the state system on the minion
status | Module for returning various status data about a minion.
supervisord | Provide the service module for system supervisord or supervisord in a
svn | Subversion SCM
sysbench | The 'sysbench' module is used to analyse the performance of the minions, right from the master! It measures various system parameters such as CPU, Memory, FileI/O, Threads and Mutex.
sysmod | The sys module provides information about the available functions on the minion
system | Support for reboot, shutdown, etc
systemd | Provide the service module for systemd
test | Module for running arbitrary tests
timezone | Module for managing timezone on POSIX-like systems.
tls | A salt module for SSL/TLS.
tomcat | Support for Tomcat
upstart | Module for the management of upstart systems.
useradd | Manage users with the useradd command
virt | Work with virtual machines managed by libvirt
virtualenv_mod | Create virtualenv environments
win_disk | Module for gathering disk information on Windows
win_file | Manage information about files on the minion, set/read user, group
win_groupadd | Manage groups on Windows
win_network | Module for gathering and managing network information
win_ntp | Management of NTP servers on Windows
win_pkg | A module to manage software on Windows
win_service | Windows Service module.
win_shadow | Manage the shadow file
win_status | Module for returning various status data about a minion.
win_system | Support for reboot, shutdown, etc
win_useradd | Manage Windows users with the net user command
xapi | This module (mostly) uses the XenAPI to manage Xen virtual machines.
yumpkg | Support for YUM
yumpkg5 | Support for YUM
zfs | Module for running ZFS command
zpool | Module for running ZFS zpool command
zypper | Package support for openSUSE via the zypper package manager
Returners
By default the return values of the commands sent to the Salt minions are
returned to the salt-master. But since the commands executed on the Salt
minions are detached from the call on the Salt master, anything at all can be
done with the results data.
This is where the returner interface comes in. Returners are modules called
in addition to returning the data to the Salt master.
The returner interface allows the return data to be sent to any system that
can receive data. This means that return data can be sent to a Redis server,
a MongoDB server, a MySQL server, or any system!
Using Returners
All commands will return the command data back to the master. Adding more
returners will ensure that the data is also sent to the specified returner
interfaces.
Specifying what returners to use is done when the command is invoked:
salt '*' test.ping --return redis_return
This command will ensure that the redis_return returner is used.
It is also possible to specify multiple returners:
salt '*' test.ping --return mongo_return,redis_return,cassandra_return
In this scenario all three returners will be called and the data from the
test.ping command will be sent out to the three named returners.
Writing a Returner
A returner is a module which contains a returner function; the returner
function must accept a single argument. This argument is the return data from
the called minion function. So if the minion function test.ping is called, the
return value carried in that data will be True.
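For illustration only, the argument is a dictionary; for a test.ping call it
might look roughly like this (the id and jid values are made up, and
additional keys may be present):

{'id': 'web01',
 'jid': '20130727174841738027',
 'return': True}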
A simple returner is implemented here:
import redis
import json

def returner(ret):
    '''
    Return information to a redis server
    '''
    # Get a redis connection
    serv = redis.Redis(
        host='redis-serv.example.com',
        port=6379,
        db='0')
    serv.sadd("%(id)s:jobs" % ret, ret['jid'])
    serv.set("%(jid)s:%(id)s" % ret, json.dumps(ret['return']))
    serv.sadd('jobs', ret['jid'])
    serv.sadd(ret['jid'], ret['id'])
This simple returner example sends the data to a redis server, serializing the
return data as JSON and setting it in redis.
You can place your custom returners in a _returners
directory within the
file_roots
specified by the master config file. These custom
returners are distributed when state.highstate
is run, or by executing the
saltutil.sync_returners
or
saltutil.sync_all
functions.
Any custom returners which have been synced to a minion, that are named the
same as one of Salt's default set of returners, will take the place of the
default returner with the same name. Note that a returner's default name is its
filename (i.e. foo.py
becomes returner foo
), but that its name can be
overridden by using a __virtual__ function. A good
example of this can be found in the redis returner, which is named
redis_return.py
but is loaded as simply redis
:
try:
    import redis
    HAS_REDIS = True
except ImportError:
    HAS_REDIS = False

def __virtual__():
    if not HAS_REDIS:
        return False
    return 'redis'
Full list of builtin returner modules
File State Backups
In 0.10.2 a new feature was added for backing up files that are replaced by
the file.managed and file.recurse states. The new feature is called the backup
mode. Setting the backup mode is easy, and it can be set in a number of
places.
The backup_mode can be set in the minion config file:
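backup_mode: minion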
Or it can be set for each file:
/etc/ssh/sshd_config:
file.managed:
- source: salt://ssh/sshd_config
- backup: minion
Backed-up Files
The files will be saved in the minion cachedir under the directory named
file_backup
. The files will be in the location relative to where they
were under the root filesystem and be appended with a timestamp. This should
make them easy to browse.
Interacting with Backups
Starting with version 0.17.0, it will be possible to list, restore, and delete
previously-created backups.
Listing
The backups for a given file can be listed using file.list_backups
:
# salt foo.bar.com file.list_backups /tmp/foo.txt
foo.bar.com:
----------
0:
----------
Backup Time:
Sat Jul 27 2013 17:48:41.738027
Location:
/var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:41_738027_2013
Size:
13
1:
----------
Backup Time:
Sat Jul 27 2013 17:48:28.369804
Location:
/var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:28_369804_2013
Size:
35
Restoring
Restoring is easy using file.restore_backup; just pass the path and the
numeric id found with file.list_backups:
# salt foo.bar.com file.restore_backup /tmp/foo.txt 1
foo.bar.com:
----------
comment:
Successfully restored /var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:28_369804_2013 to /tmp/foo.txt
result:
True
The existing file will be backed up, just in case, as can be seen if
file.list_backups
is run again:
# salt foo.bar.com file.list_backups /tmp/foo.txt
foo.bar.com:
----------
0:
----------
Backup Time:
Sat Jul 27 2013 18:00:19.822550
Location:
/var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_18:00:19_822550_2013
Size:
53
1:
----------
Backup Time:
Sat Jul 27 2013 17:48:41.738027
Location:
/var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:41_738027_2013
Size:
13
2:
----------
Backup Time:
Sat Jul 27 2013 17:48:28.369804
Location:
/var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_17:48:28_369804_2013
Size:
35
Note
Since no state is being run, restoring a file will not trigger any watches
for the file. So, if you are restoring a config file for a service, it will
likely still be necessary to run a service.restart
.
Deleting
Deleting backups can be done using file.delete_backup:
# salt foo.bar.com file.delete_backup /tmp/foo.txt 0
foo.bar.com:
----------
comment:
Successfully removed /var/cache/salt/minion/file_backup/tmp/foo.txt_Sat_Jul_27_18:00:19_822550_2013
result:
True
Extending External SLS Data
Sometimes a state defined in one SLS file will need to be modified from a
separate SLS file. A good example of this is when an argument needs to be
overwritten or when a service needs to watch an additional state.
The Extend Declaration
The standard way to extend is via the extend declaration. The extend
declaration is a top level declaration like include
and encapsulates ID
declaration data included from other SLS files. A standard extend looks like
this:
include:
  - http
  - ssh

extend:
  apache:
    file:
      - name: /etc/httpd/conf/httpd.conf
      - source: salt://http/httpd2.conf
  ssh-server:
    service:
      - watch:
        - file: /etc/ssh/banner

/etc/ssh/banner:
  file.managed:
    - source: salt://ssh/banner
A few critical things happened here: first, the SLS files that are going to be
extended are included, then the extend declaration is defined. Under the extend
declaration, two IDs are extended; the apache ID's file state is overwritten
with a new name and source. Then the ssh-server ID is extended to watch the
banner file in addition to anything it is already watching.
Extend is a Top Level Declaration
This means that extend can only be called once in an SLS; if it is used twice
then only one of the extend blocks will be read. So this is WRONG:
include:
  - http
  - ssh

extend:
  apache:
    file:
      - name: /etc/httpd/conf/httpd.conf
      - source: salt://http/httpd2.conf

# Second extend will overwrite the first!! Only make one
extend:
  ssh-server:
    service:
      - watch:
        - file: /etc/ssh/banner
The Requisite "in" Statement
Since one of the most common things to do when extending another SLS is to add
states for a service to watch, or anything for a watcher to watch, the
requisite in statement was added in 0.9.8 to make extending the watch and
require lists easier. The ssh-server extend statement above could be more
cleanly defined like so:
include:
  - ssh

/etc/ssh/banner:
  file.managed:
    - source: salt://ssh/banner
    - watch_in:
      - service: ssh-server
Rules to Extend By
There are a few rules to remember when extending states:
- Always include the SLS being extended with an include declaration
- Requisites (watch and require) are appended to, everything else is
overwritten
- extend is a top level declaration; like an ID declaration, it cannot be
declared twice in a single SLS
- Many IDs can be extended under the extend declaration
Failhard Global Option
Normally, when a state fails Salt continues to execute the remainder of the
defined states and will only refuse to execute states that require the failed
state.
But the situation may exist where you would want all state execution to stop
if a single state execution fails. The capability to do this is called failing
hard.
State Level Failhard
A single state can have a failhard set; this means that if this individual
state fails, all state execution will immediately stop. This is a great thing
to do if there is a state that sets up a critical config file and setting a
require for each state that reads the config would be cumbersome.
A good example of this would be setting up a package manager early on:
/etc/yum.repos.d/company.repo:
  file.managed:
    - source: salt://company/yumrepo.conf
    - user: root
    - group: root
    - mode: 644
    - order: 1
    - failhard: True
In this situation, the yum repo is going to be configured before other states,
and if it fails to lay down the config file, then no other states will be
executed.
Global Failhard
It may be desired to have failhard applied to every state that is executed. If
this is the case, then failhard can be set in the master configuration file.
Setting failhard in the master configuration file will result in failing hard
when any minion gathering states from the master has a state fail.
This is NOT the default behavior; normally Salt will only fail states that
require a failed state.
Using the global failhard is generally not recommended, since it can result
in states not being executed or even checked. It can also be confusing to
see states failhard if an admin is not actively aware that the failhard has
been set.
To use the global failhard, set failhard: True in the master configuration
file.
Highstate data structure definitions
The Salt State Tree
- Top file
The main state file that instructs minions what environment and modules to
use during state execution. Configurable via state_top.
- State tree
A collection of SLS files that live under the directory specified in
file_roots. A state tree can be organized into SLS modules.
Include declaration
- Include declaration
Defines a list of module reference strings to include in this
SLS.
Occurs only in the top level of the highstate structure.
Example:
include:
  - edit.vim
  - http.server
Module reference
- Module reference
- The name of a SLS module defined by a separate SLS file and residing on
the Salt Master. A module named edit.vim is a reference to the SLS file
salt://edit/vim.sls.
ID declaration
- ID declaration
Defines an individual highstate component. Always references a value of
a dictionary containing keys referencing state declarations and requisite declarations. Can be overridden by a name declaration or a
names declaration.
Occurs on the top level or under the extend declaration.
Must be unique across entire state tree. If the same ID declaration is
used twice, only the first one matched will be used. All subsequent
ID declarations with the same name will be ignored.
Note
Naming gotchas
Until 0.9.6, IDs could not safely contain a dot; if they did, the highstate
summary output was unpredictable. (This was fixed in versions 0.9.7 and above.)
Extend declaration
- Extend declaration
Extends a name declaration from an included SLS module. The keys of the
extend declaration always define existing ID declarations which have been
defined in included SLS modules.
Occurs only in the top level and defines a dictionary.
Extend declarations are useful for adding-to or overriding parts of a
state declaration that is defined in another SLS file. In the following
contrived example, the shown mywebsite.sls file is include-ing and
extend-ing the apache.sls module in order to add a watch declaration that
will restart Apache whenever the Apache configuration file, mywebsite,
changes.
include:
  - apache

extend:
  apache:
    service:
      - watch:
        - file: mywebsite

mywebsite:
  file:
    - managed
Requisite declaration
- Requisite declaration
A list containing requisite references.
Used to build the action dependency tree. While Salt states are made to
execute in a deterministic order, this order is managed by requiring
and watching other Salt states.
Occurs as a list component under a state declaration or as a
key under an ID declaration.
Function declaration
- Function declaration
The name of the function to call within the state. A state declaration
can contain only a single function declaration.
For example, the following state declaration calls the installed
function in the pkg
state module:
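A minimal sketch of such a declaration (httpd is only an illustrative ID):

httpd:
  pkg.installed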
The function can be declared inline with the state as a shortcut, but
the actual data structure is better referenced in this form:
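For example (again with an illustrative httpd ID):

httpd:
  pkg:
    - installed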
Where the function is a string in the body of the state declaration.
Technically when the function is declared in dot notation the compiler
converts it to be a string in the state declaration list. Note that the
use of the first example more than once in an ID declaration is invalid
yaml.
INVALID:
httpd:
  pkg.installed
  service.running
When passing a function without arguments and another state declaration
within a single ID declaration, then the long or "standard" format
needs to be used since otherwise it does not represent a valid data
structure.
VALID:
httpd:
  pkg:
    - installed
  service:
    - running
Occurs as the only index in the state declaration list.
Function arg declaration
- Function arg declaration
A single key dictionary referencing a Python type which is to be passed
to the named function declaration as a parameter. The type must
be the data type expected by the function.
Occurs under a function declaration.
For example, in the following state declaration user, group, and mode are
passed as arguments to the managed function in the file state module:

/etc/http/conf/http.conf:
  file.managed:
    - user: root
    - group: root
    - mode: 644
Name declaration
- Name declaration
Overrides the name argument of a state declaration. If name is not specified
the ID declaration satisfies the name argument.
The name is always a single key dictionary referencing a string.
Overriding name is useful for a variety of scenarios.
For example, avoiding clashing ID declarations. The following two state
declarations cannot both have /etc/motd
as the ID declaration:
motd_perms:
  file.managed:
    - name: /etc/motd
    - mode: 644

motd_quote:
  file.append:
    - name: /etc/motd
    - text: "Of all smells, bread; of all tastes, salt."
Another common reason to override name is if the ID declaration is long and
needs to be referenced in multiple places. In the example below it is much
easier to specify mywebsite than to specify
/etc/apache2/sites-available/mywebsite.com multiple times:
mywebsite:
  file.managed:
    - name: /etc/apache2/sites-available/mywebsite.com
    - source: salt://mywebsite.com

a2ensite mywebsite.com:
  cmd.wait:
    - unless: test -L /etc/apache2/sites-enabled/mywebsite.com
    - watch:
      - file: mywebsite

apache2:
  service:
    - running
    - watch:
      - file: mywebsite
Names declaration
- Names declaration
- Expands the contents of the containing state declaration into
multiple state declarations, each with its own name.
For example, given the following state declaration:
python-pkgs:
  pkg.installed:
    - names:
      - python-django
      - python-crypto
      - python-yaml
Once converted into the lowstate data structure the above state
declaration will be expanded into the following three state declarations:
python-django:
  pkg.installed
python-crypto:
  pkg.installed
python-yaml:
  pkg.installed
Large example
Here is the layout in yaml using the names of the highdata structure
components.
<Include Declaration>:
  - <Module Reference>
  - <Module Reference>

<Extend Declaration>:
  <ID Declaration>:
    [<overrides>]

# standard declaration
<ID Declaration>:
  <State Declaration>:
    - <Function>
    - <Function Arg>
    - <Function Arg>
    - <Function Arg>
    - <Name>: <name>
    - <Requisite Declaration>:
      - <Requisite Reference>
      - <Requisite Reference>

# inline function and names
<ID Declaration>:
  <State Declaration>.<Function>:
    - <Function Arg>
    - <Function Arg>
    - <Function Arg>
    - <Names>:
      - <name>
      - <name>
      - <name>
    - <Requisite Declaration>:
      - <Requisite Reference>
      - <Requisite Reference>

# multiple states for single id
<ID Declaration>:
  <State Declaration>:
    - <Function>
    - <Function Arg>
    - <Name>: <name>
    - <Requisite Declaration>:
      - <Requisite Reference>
  <State Declaration>:
    - <Function>
    - <Function Arg>
    - <Names>:
      - <name>
      - <name>
    - <Requisite Declaration>:
      - <Requisite Reference>
Include and Exclude
Salt sls files can include other sls files and exclude sls files that have been
otherwise included. This allows for an sls file to easily extend or manipulate
other sls files.
Include
When other sls files are included, everything defined in the included sls file
will be added to the state run. When including, define a list of sls formulas
to include:
include:
  - http
  - libvirt
The include statement will include sls formulas from the same environment
that the including sls formula is in. But the environment can be explicitly
defined in the configuration to override the running environment, therefore
if an sls formula needs to be included from an external environment named "dev"
the following syntax is used:
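A sketch of that syntax, assuming an http formula in the dev environment:

include:
  - dev: http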
Relative Include
In Salt 0.16.0 the capability to include sls formulas which are relative to
the running sls formula was added. Simply precede the formula name with a
.:
include:
  - .virt
  - .virt.hyper
Exclude
The exclude statement, added in Salt 0.10.3, allows an sls to hard exclude
another sls file or a specific id. The component is excluded after the
high data has been compiled, so nothing should be able to override an
exclude.
Since the exclude can remove an id or an sls, the type of component to
exclude needs to be defined. An exclude statement that verifies that the
running highstate does not contain the http sls and the /etc/vimrc id
would look like this:
exclude:
  - sls: http
  - id: /etc/vimrc
State Enforcement
Salt offers an optional interface to manage the configuration or "state" of the
Salt minions. This interface is a fully capable mechanism used to enforce the
state of systems from a central manager.
The Salt state system is made to be accurate, simple, and fast. And like the
rest of the Salt system, Salt states are highly modular.
State management
State management, also frequently called software configuration management
(SCM), is a program that puts and keeps a system in a predetermined state. It
installs software packages, starts or restarts services, or puts configuration
files in place and watches them for changes.
Having a state management system in place allows you to easily and reliably
configure and manage a few servers or a few thousand servers. It allows you to
keep that configuration under version control.
Salt States is an extension of the Salt Modules that we discussed in the
previous remote execution tutorial. Instead
of calling one-off executions the state of a system can be easily defined and
then enforced.
Understanding the Salt State System Components
The Salt state system is comprised of a number of components. As a user, an
understanding of the SLS and renderer systems is needed. But as a developer,
an understanding of Salt states and how to write the states is needed as well.
Salt SLS System
- SLS
The primary system used by the Salt state system is the SLS system. SLS
stands for SaLt State.
The Salt States are files which contain the information about how to
configure Salt minions. The states are laid out in a directory tree and
can be written in many different formats.
The contents of the files and the way they are laid out is intended to
be as simple as possible while allowing for maximum flexibility. The
files are laid out in states and contain information about how the
minion needs to be configured.
SLS File Layout
SLS files are laid out in the Salt file server. A simple layout can look like
this:
top.sls
ssh.sls
sshd_config
users/init.sls
users/admin.sls
salt/init.sls
salt/master.sls
This example shows the core concepts of file layout. The top file is a key
component and is used with Salt matchers to match SLS states with minions.
The .sls
files are states. The rest of the files are seen by the Salt
master as just files that can be downloaded.
The states are translated into dot notation, so the ssh.sls
file is
seen as the ssh state, the users/admin.sls
file is seen as the
users.admin state.
The init.sls files are translated to be the state name of the parent
directory, so the salt/init.sls
file translates to the Salt state.
The plain files are visible to the minions, as well as the state files. In
Salt, everything is a file; there is no "magic translation" of files and file
types. This means that a state file can be distributed to minions just like a
plain text or binary file.
SLS Files
The Salt state files are simple sets of data. Since the SLS files are just data
they can be represented in a number of different ways. The default format is
yaml generated from a Jinja template. This allows for the states files to have
all the language constructs of Python and the simplicity of yaml. State files
can then be complicated Jinja templates that translate down to yaml, or just
plain and simple yaml files!
The State files are constructed data structures in a simple format. The format
allows for many real activities to be expressed in very little text, while
maintaining the utmost in readability and usability.
Here is an example of a Salt State:
vim:
  pkg:
    - installed

salt:
  pkg:
    - latest
  service.running:
    - require:
      - file: /etc/salt/minion
      - pkg: salt
    - names:
      - salt-master
      - salt-minion
    - watch:
      - file: /etc/salt/minion

/etc/salt/minion:
  file.managed:
    - source: salt://salt/minion
    - user: root
    - group: root
    - mode: 644
    - require:
      - pkg: salt
This short stanza will ensure that vim is installed, Salt is installed and up
to date, the salt-master and salt-minion daemons are running and the Salt
minion configuration file is in place. It will also ensure everything is
deployed in the right order and that the Salt services are restarted when the
watched file is updated.
The Top File
The top file is the mapping for the state system. The top file specifies which
minions should have which modules applied and which environments they should
draw the states from.
The top file works by specifying the environment, containing matchers with
lists of Salt states sent to the matching minions:
base:
  '*':
    - salt
    - users
    - users.admin
  'saltmaster.*':
    - match: pcre
    - salt.master
This simple example uses the base environment, which is built into the default
Salt setup, and then all minions will have the modules salt, users and
users.admin since '*' will match all minions. Then the regular expression
matcher will match all minions with an id matching saltmaster.* and add the
salt.master state.
Renderer System
The Renderer system is a key component to the state system. SLS files are
representations of Salt "high data" structures. All Salt cares about when
reading an SLS file is the data structure that is produced from the file.
This allows Salt states to be represented by multiple types of files. The
Renderer system can be used to allow different formats to be used for SLS
files.
The available renderers can be found in the renderers directory in the Salt
source code:
https://github.com/saltstack/salt/blob/develop/salt/renderers
By default SLS files are rendered using Jinja as a templating engine, and yaml
as the serialization format. Since the rendering system can be extended simply
by adding a new renderer to the renderers directory, it is possible that any
structured file could be used to represent the SLS files.
In the future XML will be added, as well as many other formats.
Reloading Modules
Some salt states require specific packages to be installed in order for the
module to load. As an example, the pip state module requires the pip package
for proper name and version parsing. In most common cases, salt is clever
enough to transparently reload the modules; for example, if you install a
package, salt reloads modules because some other module or state might require
just that package which was installed.
On some edge-cases salt might need to be told to reload the modules. Consider
the following state file which we'll call pep8.sls
:
python-pip:
  cmd:
    - run
    - cwd: /
    - name: easy_install --script-dir=/usr/bin -U pip

pep8:
  pip.installed:
    - require:
      - cmd: python-pip
The above example installs pip using easy_install from setuptools and installs
pep8 using pip, which, as mentioned earlier, requires pip to be installed
system-wide. Let's execute this state:
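For example, with something like the following (the targeting is illustrative):

salt '*' state.sls pep8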
The execution output would be something like:
----------
State: - pip
Name: pep8
Function: installed
Result: False
Comment: State pip.installed found in sls pep8 is unavailable
Changes:
Summary
------------
Succeeded: 1
Failed: 1
------------
Total: 2
If we executed the state again the output would be:
----------
State: - pip
Name: pep8
Function: installed
Result: True
Comment: Package was successfully installed
Changes: pep8==1.4.6: Installed
Summary
------------
Succeeded: 2
Failed: 0
------------
Total: 2
Since we installed pip using cmd
, salt has no way
to know that a system-wide package was installed. On the second execution,
since the required pip package was installed, the state executed perfectly.
To those thinking, couldn't salt reload modules on every state step since it
already does for some cases? It could, but it should not since it would
greatly slow down state execution.
So how do we solve this edge-case? reload_modules
!
reload_modules is a boolean option recognized by salt on all available states
which does exactly what its name says: it forces salt to reload its modules
once that specific state finishes. The fixed state file would now be:
python-pip:
  cmd:
    - run
    - cwd: /
    - name: easy_install --script-dir=/usr/bin -U pip
    - reload_modules: true

pep8:
  pip.installed:
    - require:
      - cmd: python-pip
Let's run it, just once. Its output now is:
----------
State: - pip
Name: pep8
Function: installed
Result: True
Comment: Package was successfully installed
Changes: pep8==1.4.6: Installed
Summary
------------
Succeeded: 2
Failed: 0
------------
Total: 2
State System Layers
The Salt state system is comprised of multiple layers. While using Salt does
not require an understanding of the state layers, a deeper understanding of
how Salt compiles and manages states can be very beneficial.
Function Call
The lowest layer of functionality in the state system is the direct state
function call. State executions are executions of single state functions at
the core. These individual functions are defined in state modules and can
be called directly via the state.single
command.
salt '*' state.single pkg.installed name='vim'
Low Chunk
The low chunk is the bottom of the Salt state compiler. This is a data
representation of a single function call. The low chunk is sent to the state
caller and used to execute a single state function.
A single low chunk can be executed manually via the state.low
command.
salt '*' state.low '{name: vim, state: pkg, fun: installed}'
The passed data reflects what the state execution system gets after compiling
the data down from sls formulas.
Low State
The Low State layer is the list of low chunks "evaluated" in order. To see
what the low state looks like for a highstate, run:
salt '*' state.show_lowstate
This will display the raw lowstate in the order which each low chunk will be
evaluated. The order of evaluation is not necessarily the order of execution,
since requisites are evaluated at runtime. Requisite execution and evaluation
is finite; this means that the order of execution can be ascertained with 100%
certainty based on the order of the low state.
High Data
High data is the data structure represented in YAML via SLS files. The High
data structure is created by merging the data components rendered inside sls
files (or other render systems). The High data can be easily viewed by
executing the state.show_highstate
or state.show_sls
functions. Since
this data is a somewhat complex data structure, it may be easier to read using
the json, yaml, or pprint outputters:
salt '*' state.show_highstate --out yaml
salt '*' state.show_sls edit.vim --out pprint
SLS
Above "High Data", the logical layers are no longer technically required to be
executed, or to be executed in a hierarchy. This means that how the High data
is generated is optional and very flexible. The SLS layer allows for many
mechanisms to be used to render sls data from files or to use the fileserver
backend to generate sls and file data from external systems.
The SLS layer can be called directly to execute individual sls formulas.
Note
SLS Formulas have historically been called "SLS files". This is because a
single SLS was only constituted in a single file. Now the term
"SLS Formula" better expresses how a compartmentalized SLS can be expressed
in a much more dynamic way by combining pillar and other sources, and the
SLS can be dynamically generated.
To call a single SLS formula named edit.vim
, execute state.sls
:
salt '*' state.sls edit.vim
HighState
Calling SLS directly logically assigns what states should be executed from the
context of the calling minion. The Highstate layer is used to allow for full
contextual assignment of what is executed where to be tied to groups of, or
individual, minions entirely from the master. This means that the environment of
a minion, and all associated execution data pertinent to said minion, can be
assigned from the master without needing to execute or configure anything on
the target minion. This also means that the minion can independently retrieve
information about its complete configuration from the master.
To execute the High State, call state.highstate:
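salt '*' state.highstate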
OverState
The overstate layer expresses the highest functional layer of Salt's automated
logic systems. The Overstate allows for stateful and functional orchestration
of routines from the master. The overstate defines, in data, execution stages:
which minions should execute states or functions, and in what order, using
requisite logic.
Remote Control States
Remote Control States is the capability to organize routines on minions from the
master, using state files.
This allows for the use of the Salt state system to execute state runs and
function runs in a way more powerful than the overstate, with full command of
the requisite and ordering systems inside of states.
Note
Remote Control States was added in 0.17.0 with the intent to eventually
deprecate the overstate system in favor of this new, substantially more
powerful system.
The Overstate will still be maintained for the foreseeable future.
Creating States Trigger Remote Executions
The new salt state module allows for these new states to be defined in such a
way as to call out to the salt and/or the salt-ssh remote execution systems;
this also supports the addition of states to connect to remote embedded
devices.
To create a state that calls out to minions, simply specify the salt.state
or salt.function states:
webserver_setup:
  salt.state:
    - tgt: 'web*'
    - highstate: True
This sls file can now be referenced by the state.sls runner the same way an
sls is normally referenced. Assuming the default configuration with /srv/salt
as the root of the state tree and the above file being saved as
/srv/salt/webserver.sls, the state can be run from the master with the
salt-run command:
salt-run state.sls webserver
This will execute the defined state to fire up the webserver routine.
Calling Multiple State Runs
All of the concepts of states exist so building something more complex is
easy:
Note
As of Salt 0.17.0 states are run in the order in which they are defined,
so the cmd.run defined below will always execute first
cmd.run:
  salt.function:
    - roster: scan
    - tgt: 10.0.0.0/24
    - arg:
      - 'bootstrap'

storage_setup:
  salt.state:
    - tgt: 'role:storage'
    - tgt_type: grain
    - sls: ceph

webserver_setup:
  salt.state:
    - tgt: 'web*'
    - highstate: True
Ordering States
The way in which configuration management systems are executed is a hotly
debated topic in the configuration management world. Two
major philosophies exist on the subject, to either execute in an imperative
fashion where things are executed in the order in which they are defined, or
in a declarative fashion where dependencies need to be mapped between objects.
Imperative ordering is finite and generally considered easier to write, while
declarative ordering is much more powerful and flexible but generally
considered more difficult to create.
Salt has been created to get the best of both worlds. States are evaluated in
a finite order, which guarantees that states are always executed in the same
order, and the states runtime is declarative, making Salt fully aware of
dependencies via the requisite system.
State Auto Ordering
Salt always executes states in a finite manner, meaning that they will always
execute in the same order regardless of the system that is executing them.
But in Salt 0.17.0, the state_auto_order
option was added. This option
makes states get evaluated in the order in which they are defined in sls
files.
The evaluation order makes it easy to know what order the states will be
executed in, but it is important to note that the requisite system will
override the ordering defined in the files, and the order
option described
below will also override the order in which states are defined in sls files.
If the classic ordering is preferred (lexicographic), then set state_auto_order
to False
in the master configuration file.
Requisite Statements
Note
This document represents behavior exhibited by Salt requisites as of
version 0.9.7 of Salt.
Often when setting up states any single action will require or depend on
another action. Salt allows you to build relationships between states with
requisite statements. A requisite statement ensures that the named state is
evaluated before the state requiring it. There are two types of requisite
statements in Salt, require and watch.
These requisite statements are applied to a specific state declaration:
httpd:
  pkg:
    - installed
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://httpd/httpd.conf
    - require:
      - pkg: httpd
In this example we use the require requisite to declare that the file
/etc/httpd/conf/httpd.conf should only be set up if the pkg state executes
successfully.
The requisite system works by finding the states that are required and
executing them before the state that requires them. Then the required states
can be evaluated to see if they have executed correctly.
Note
Requisite matching
Requisites match on both the ID Declaration and the name parameter.
Therefore, if you are using the pkgs or sources argument to install a list of
packages in a pkg state, it's important to note that you cannot have a
requisite that matches on an individual package in the list.
Multiple Requisites
The requisite statement is passed as a list, allowing for the easy addition of
more requisites. Both requisite types can also be separately declared:
httpd:
  pkg:
    - installed
  service.running:
    - enable: True
    - watch:
      - file: /etc/httpd/conf/httpd.conf
    - require:
      - pkg: httpd
      - user: httpd
      - group: httpd
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://httpd/httpd.conf
    - require:
      - pkg: httpd
  user:
    - present
  group:
    - present
In this example the httpd service is only going to be started if the package,
user, group and file are executed successfully.
The Require Requisite
The foundation of the requisite system is the require
requisite. The
require requisite ensures that the required state(s) are executed before the
requiring state. So, if a state is declared that sets down a vimrc, then it
would be pertinent to make sure that the vimrc file would only be set down if
the vim package has been installed:
vim:
  pkg:
    - installed
  file.managed:
    - source: salt://vim/vimrc
    - require:
      - pkg: vim
In this case, the vimrc file will only be applied by Salt if and after the vim
package is installed.
The Watch Requisite
The watch
requisite is more advanced than the require
requisite. The
watch requisite executes the same logic as require (therefore if something is
watched it does not need to also be required) with the addition of executing
logic if the required states have changed in some way.
The watch requisite checks to see if the watched states have returned any
changes. If the watched state returns changes, and the watched states execute
successfully, then the watching state will execute a function that reacts to
the changes in the watched states.
Perhaps an example can better explain the behavior:
redis:
  pkg:
    - latest
  file.managed:
    - source: salt://redis/redis.conf
    - name: /etc/redis.conf
    - require:
      - pkg: redis
  service.running:
    - enable: True
    - watch:
      - file: /etc/redis.conf
      - pkg: redis
In this example the redis service will only be started if the file
/etc/redis.conf is applied, and the file is only applied if the package is
installed. This is normal require behavior, but if the watched file changes,
or the watched package is installed or upgraded, then the redis service is
restarted.
Watch and the mod_watch Function
The watch requisite is based on the mod_watch
function. Python state
modules can include a function called mod_watch
which is then called
if the watch call is invoked. When mod_watch is called depends on the
execution of the watched state:
- If there are no changes, then the watching state runs as usual and
mod_watch is not called. This behavior is the same as using a require.
- If there are changes, then the watching state runs AND, if that run itself
changes nothing, Salt reacts by calling mod_watch.
When reacting, in the case of the service module the underlying service is
restarted. In the case of the cmd state the command is executed.
The mod_watch
function for the service state looks like this:
def mod_watch(name, sig=None, reload=False, full_restart=False):
    '''
    The service watcher, called to invoke the watch command.

    name
        The name of the init or rc script used to manage the service

    sig
        The string to search for when looking for the service process with ps
    '''
    if __salt__['service.status'](name, sig):
        if 'service.reload' in __salt__ and reload:
            restart_func = __salt__['service.reload']
        elif 'service.full_restart' in __salt__ and full_restart:
            restart_func = __salt__['service.full_restart']
        else:
            restart_func = __salt__['service.restart']
    else:
        restart_func = __salt__['service.start']
    result = restart_func(name)
    return {'name': name,
            'changes': {name: result},
            'result': result,
            'comment': 'Service restarted' if result else \
                       'Failed to restart the service'
            }
The watch requisite only works if the state that is watching has a
mod_watch
function written. If watch is set on a state that does not have
a mod_watch
function (like pkg), then the listed states will behave only
as if they were under a require
statement.
Also notice that a mod_watch
may accept additional keyword arguments,
which, in the sls file, will be taken from the same set of arguments specified
for the state that includes the watch
requisite. This means, for the
earlier service.running
example above, you can tell the service to
reload
instead of restart like this:
redis:
  # ... other state declarations omitted ...
  service.running:
    - enable: True
    - reload: True
    - watch:
      - file: /etc/redis.conf
      - pkg: redis
The Order Option
Before using the order option, remember that the majority of state ordering
should be done with a requisite declaration, and that a requisite
declaration will override an order option.
The order option is used by adding an order number to a state declaration
with the option order:
vim:
  pkg.installed:
    - order: 1
Setting the order option to 1 ensures that the vim package will be installed
in tandem with any other state declaration set to order 1.
Any state declared without an order option will be executed after all states
with order options are executed.
But this construct can only handle ordering states from the beginning.
Sometimes you may want to send a state to the end of the line. To do this,
set the order to last
:
vim:
  pkg.installed:
    - order: last
Remember that requisite statements override the order option. So the order
option should be applied to the highest component of the requisite chain:
vim:
  pkg.installed:
    - order: last
    - require:
      - file: /etc/vimrc

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
OverState System
Often servers need to be set up and configured in a specific order, and
systems should only be set up if systems earlier in the sequence have been set
up without any issues.
The 0.11.0 release of Salt addresses this problem with a new layer in the
state system called the Over State. The concept of the Over State is managed
on the master: a series of state executions is controlled from the master and
executed in order. If an execution requires that another execution first run
without problems, and that requirement is not met, then the remaining state
executions will stop.
The Over State system is used to orchestrate deployment in a smooth and
reliable way across multiple systems in small to large environments.
The Over State SLS
The overstate system is managed by an sls file located in the root of an
environment. This file uses a data structure like all sls files.
The overstate sls file configures an unordered list of stages; each stage
defines the minions to execute on and can define what sls files to run
or whether to execute a state.highstate.
mysql:
match: 'db*'
sls:
- mysql.server
- drbd
webservers:
match: 'web*'
require:
- mysql
all:
match: '*'
require:
- mysql
- webservers
The above defined over state will execute the mysql stage first because it is
required by the webservers stage. The webservers stage will then be executed
only if the mysql stage executes without any issues. The webservers stage
will execute state.highstate on the matched minions, while the mysql stage
will execute state.sls with the named sls files.
Finally the all stage will execute state.highstate on all systems only if the
mysql and webservers stages complete without failures. The overstate system
checks for any states that return a result of False; if the run has any
False returns then the overstate will quit.
Adding Functions To Overstate
In 0.15.0 the ability to execute module functions directly in the overstate
was added. Functions are called as a stage with the function key:
http:
function:
pkg.install:
- http
The list of function arguments is passed after the declared function.
Requisites only function properly if the given function supports returning
a custom return code.
Executing the Over State
The over state can be executed from the salt-run command, calling the
state.over runner function. The function will by default look in the base
environment for the overstate.sls file:
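salt-run state.over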
To specify the location of the overstate file and the environment to pull from
pass the arguments to the salt-run command:
salt-run state.over base /root/overstate.sls
Remember that these calls are made on the master.
State Providers
Salt predetermines what modules should be mapped to what uses based on the
properties of a system. These determinations are generally made for modules
that provide things like package and service management.
Sometimes in states, it may be necessary to use an alternative module to
provide the needed functionality. For instance, an older Arch Linux system may
not be running systemd, so instead of using the systemd service module, you can
revert to the default service module:
httpd:
service.running:
- enable: True
- provider: service
In this instance, the basic service
module (which
manages sysvinit-based services) will replace the
systemd
module which is used by default on Arch Linux.
However, if it is necessary to make this override for most or every service,
it is better to just override the provider in the minion config file, as
described in the section below.
Setting a Provider in the Minion Config File
Sometimes, when running Salt on custom Linux spins, or on distros that are derived
from other distros, Salt does not successfully detect providers. The providers
which are most likely to be affected by this are pkg, service, user, and group.
When something like this happens, rather than specifying the provider manually
in each state, it is easier to use the providers
parameter in the
minion config file to set the provider.
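For example, to force a minion to use the pkgng and debian_service modules (a
minimal sketch; substitute whichever modules fit your platform):
providers:
  pkg: pkgng
  service: debian_service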
If you end up needing to override a provider because it was not detected,
please let us know! File an issue on the issue tracker, and provide the
output from the grains.items
function,
taking care to sanitize any sensitive information.
Below are tables that should help with deciding which provider to use if one
needs to be overridden.
Provider: pkg
Execution Module | Used for
apt | Debian/Ubuntu-based distros which use apt-get(8) for package management
brew | Mac OS software management using Homebrew
ebuild | Gentoo-based systems (utilizes the portage python module as well as emerge(1))
freebsdpkg | FreeBSD-based OSes using pkg_add(1)
openbsdpkg | OpenBSD-based OSes using pkg_add(1)
pacman | Arch Linux-based distros using pacman(8)
pkgin | NetBSD-based OSes using pkgin(1)
pkgng | FreeBSD-based OSes using pkg(8)
pkgutil | Solaris-based OSes using OpenCSW's pkgutil(1)
solarispkg | Solaris-based OSes using pkgadd(1M)
win_pkg | Windows
yumpkg | RedHat-based distros and derivatives (utilizes the yum and rpmUtils modules)
yumpkg5 | RedHat-based distros and derivatives (wraps yum(8))
zypper | SUSE-based distros using zypper(8)
Provider: service
Execution Module | Used for
debian_service | Debian Linux (non-systemd)
freebsdservice | FreeBSD-based OSes using service(8)
gentoo_service | Gentoo Linux using sysvinit and rc-update(8)
launchctl | Mac OS hosts using launchctl(1)
netbsdservice | NetBSD-based OSes
openbsdservice | OpenBSD-based OSes
rh_service | RedHat-based distros and derivatives using service(8) and chkconfig(8). Supports both pure sysvinit and mixed sysvinit/upstart systems.
service | Fallback which simply wraps sysvinit scripts
smf | Solaris-based OSes which use SMF
systemd | Linux distros which use systemd
upstart | Ubuntu-based distros using upstart
win_service | Windows
Provider: user
Execution Module | Used for
useradd | Linux, NetBSD, and OpenBSD systems using useradd(8), userdel(8), and usermod(8)
pw_user | FreeBSD-based OSes using pw(8)
solaris_user | Solaris-based OSes using useradd(1M), userdel(1M), and usermod(1M)
win_useradd | Windows
Provider: group
Execution Module | Used for
groupadd | Linux, NetBSD, and OpenBSD systems using groupadd(8), groupdel(8), and groupmod(8)
pw_group | FreeBSD-based OSes using pw(8)
solaris_group | Solaris-based OSes using groupadd(1M), groupdel(1M), and groupmod(1M)
win_groupadd | Windows
Arbitrary Module Redirects
The provider statement can also be used for more powerful means: instead of
overwriting or extending the module used for the named service, an arbitrary
module can be used to provide certain functionality.
emacs:
pkg.installed:
- provider:
- pkg: yumpkg5
- cmd: customcmd
In this example the default pkg
module is being
redirected to use the yumpkg5
module (yum
via shelling out instead of via the yum Python API), but is also
using a custom module to invoke commands. This could be used to dramatically
change the behavior of a given state.
Requisites
The Salt requisite system is used to create relationships between states. The
core idea being that, when one state is dependent somehow on another, that
inter-dependency can be easily defined.
Requisites come in two types: direct requisites and requisite_ins. The
relationships are directional, so a requisite statement makes the requiring
state declaration depend on the required state declaration:
vim:
pkg.installed
/etc/vimrc:
file.managed:
- source: salt://edit/vimrc
- require:
- pkg: vim
So in this example, the file /etc/vimrc
depends on the vim package.
Requisite_in statements are the opposite, instead of saying "I depend on
something", requisite_ins say "Someone depends on me":
vim:
pkg.installed:
- require_in:
- file: /etc/vimrc
/etc/vimrc:
file.managed:
- source: salt://edit/vimrc
So here, with a requisite_in, the same thing is accomplished, but just from
the other way around. The vim package is saying "/etc/vimrc depends on me".
In the end, a single dependency map is created and everything is executed in a
finite and predictable order.
Note
Requisite matching
Requisites match on both the ID Declaration and the name
parameter.
This means that, in the example above, the require_in
requisite would
also have been matched if the /etc/vimrc
state was written as follows:
vimrc:
file.managed:
- name: /etc/vimrc
- source: salt://edit/vimrc
Requisite and Requisite in types
There are three requisite statements that can be used in Salt: the require,
watch, and use requisites. Each requisite also has a corresponding
requisite_in: require_in
, watch_in
and use_in
. All of the
requisites define specific relationships and always work with the dependency
logic defined above.
Require
The most basic requisite statement is require
. The behavior of require is
simple. Make sure that the dependent state is executed before the depending
state, and if the dependent state fails, don't run the depending state. So in
the above examples the file /etc/vimrc
will only be applied after the vim
package is installed and only if the vim package is installed successfully.
Require an entire sls file
As of Salt 0.16.0, it is possible to require an entire sls file. Do this by first including
the sls file and then setting a state to require
the included sls file.
include:
- foo
bar:
pkg.installed:
- require:
- sls: foo
Watch
The watch statement does everything the require statement does, but with a
little more. The watch statement looks into the state modules for a function
called mod_watch
. If this function is not available in the corresponding
state module, then watch does the same thing as require. If the mod_watch
function is in the state module, then the watched state is checked to see if
it made any changes to the system, if it has, then mod_watch
is called.
Perhaps the best example of using watch is with a service.running
state. When a service watches a state, then
the service is reloaded/restarted when the watched state changes:
ntpd:
service.running:
- watch:
- file: /etc/ntp.conf
file.managed:
- name: /etc/ntp.conf
- source: salt://ntp/files/ntp.conf
Prereq
The prereq
requisite is a powerful requisite added in 0.16.0. This
requisite allows for actions to be taken based on the expected results of
a state that has not yet been executed. In more practical terms, a service
can be shut down because the prereq
knows that underlying code is going to
be updated and the service should be off-line while the update occurs.
The motivation to add this requisite was to allow for routines to remove a
system from a load balancer while code is being updated.
The prereq
checks if the required state expects to have any changes by
running the single state with test=True
. If the pre-required state returns
changes, then the state requiring it will execute.
graceful-down:
cmd.run:
- name: service apache graceful
- prereq:
- file: site-code
site-code:
file.recurse:
- name: /opt/site_code
- source: salt://site/code
In this case the apache server will only be shutdown if the site-code state
expects to deploy fresh code via the file.recurse call, and the site-code
deployment will only be executed if the graceful-down run completes
successfully.
Use
The use
requisite is used to inherit the arguments passed in another
id declaration. This is useful when many files need to have the same defaults.
The use
statement was developed primarily for the networking states but
can be used on any states in Salt. This made sense for the networking state
because it can define a long list of options that need to be applied to
multiple network interfaces.
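For example, a sketch in which a second file inherits its defaults from the
first (the file names and options here are purely illustrative):
/etc/foo.conf:
  file.managed:
    - source: salt://foo.conf
    - template: jinja
    - user: root
    - group: root
    - mode: 644

/etc/bar.conf:
  file.managed:
    - source: salt://bar.conf
    - use:
      - file: /etc/foo.conf
The second file inherits the template, user, group, and mode arguments from the
first, while its own source setting is used in place of the inherited one.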
Require In
The require_in
requisite is the literal reverse of require
. If
a state declaration needs to be required by another state declaration then
require_in can accommodate it, so these two sls files would be the same in
the end:
Using require
httpd:
pkg:
- installed
service:
- running
- require:
- pkg: httpd
Using require_in
httpd:
pkg:
- installed
- require_in:
- service: httpd
service:
- running
The require_in
statement is particularly useful when assigning a require
in a separate sls file. For instance it may be common for httpd to require
components used to set up PHP or mod_python, but the HTTP state does not need
to be aware of the additional components that require it when it is set up:
http.sls
httpd:
pkg:
- installed
service:
- running
- require:
- pkg: httpd
php.sls
include:
- http
php:
pkg:
- installed
- require_in:
- service: httpd
mod_python.sls
include:
- http
mod_python:
pkg:
- installed
- require_in:
- service: httpd
Now the httpd server will only start if php or mod_python are first verified to
be installed, thus allowing a requisite to be defined "after the fact".
Watch In
Watch in functions the same as require in, but applies a watch statement
rather than a require statement to the external state declaration.
Prereq In
The prereq_in
requisite in follows the same assignment logic as the
require_in
requisite in. The prereq_in
call simply assigns
prereq
to the state referenced. The above example for prereq
can
be modified to function in the same way using prereq_in
:
graceful-down:
cmd.run:
- name: service apache graceful
site-code:
file.recurse:
- name: /opt/site_code
- source: salt://site/code
- prereq_in:
- cmd: graceful-down
Startup States
Sometimes it may be desired that the salt minion execute a state run when it is
started. This alleviates the need for the master to initiate a state run on a
new minion and can make provisioning much easier.
As of Salt 0.10.3 the minion config reads options that allow for states to be
executed at startup. The options are startup_states, sls_list and
top_file.
The startup_states option can be passed one of a number of arguments to
define how to execute states. The available options are:
- highstate
- Execute
state.highstate
- sls
- Read in the
sls_list
option and execute the named sls files
- top
- Read in the
top_file
option and execute states based on that top file
on the Salt Master
Examples:
Execute state.highstate
when starting the minion:
startup_states: highstate
Execute the sls files edit.vim and hyper:
startup_states: sls
sls_list:
- edit.vim
- hyper
State Testing
Executing a Salt state run can potentially change many aspects of a system and
it may be desirable to first see what a state run is going to change before
applying the run.
Salt has a test interface to report on exactly what will be changed; this
interface can be invoked on any of the major state run functions:
salt '*' state.highstate test=True
salt '*' state.sls test=True
salt '*' state.single test=True
The test run is invoked by adding the test=True
option to the states. The
return information will show states that will be applied in yellow and the
result is reported as None
.
Default Test
If the value test
is set to True
in the minion configuration file then
states will default to being executed in test mode. If this value is set then
states can still be run by calling test=False:
salt '*' state.highstate test=False
salt '*' state.sls test=False
salt '*' state.single test=False
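In the minion configuration file this default is set with:
test: True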
The Top File
The top file is used to map what SLS modules get loaded onto what minions via
the state system. The top file creates a few general abstractions. First it
maps what nodes should pull from which environments, and next it defines which
SLS files the matched systems should draw from.
Environments
- Environment
- A configuration that allows conceptually organizing state tree
directories. Environments can be made to be self-contained or state
trees can be made to bleed through environments.
Note
Environments in Salt are very flexible. This section defines how the top
file can be used to specify which states from which environments are to be
used for specific minions.
If the intent is to bind minions to specific environments, then the
environment option can be set in the minion configuration file.
The environments in the top file correspond with the environments defined in
the file_roots
variable. In a simple, single environment setup
you only have the base
environment, and therefore only one state tree. Here
is a simple example of file_roots
in the master configuration:
file_roots:
base:
- /srv/salt
This means that the top file will only have one environment to pull from.
Here is a simple, single-environment top file:
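base:
  '*':
    - core
    - edit
Here the core and edit SLS names are only illustrative; any SLS files under
/srv/salt could be listed.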
This also means that /srv/salt
has a state tree. But if you want to use
multiple environments, or partition the file server to serve more than
just the state tree, then the file_roots
option can be expanded:
file_roots:
base:
- /srv/salt/base
dev:
- /srv/salt/dev
qa:
- /srv/salt/qa
prod:
- /srv/salt/prod
Then our top file could reference the environments:
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
qa:
'webserver*qa*':
- webserver
'db*qa*':
- db
prod:
'webserver*prod*':
- webserver
'db*prod*':
- db
In this setup we have state trees in three of the four environments, and no
state tree in the base
environment. Notice that the targets for the minions
specify environment data. In Salt the master determines who is in what
environment, and many environments can be crossed together. For instance, a
separate global state tree could be added to the base
environment if it
suits your deployment:
base:
'*':
- global
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
qa:
'webserver*qa*':
- webserver
'db*qa*':
- db
prod:
'webserver*prod*':
- webserver
'db*prod*':
- db
In this setup all systems will pull the global SLS from the base environment,
as well as pull from their respective environments. If you assign only one SLS
to a system, as in this example, a shorthand is also available:
base:
'*': global
dev:
'webserver*dev*': webserver
'db*dev*': db
qa:
'webserver*qa*': webserver
'db*qa*': db
prod:
'webserver*prod*': webserver
'db*prod*': db
Note
The top files from all defined environments will be compiled into a single
top file for all states. Top files are environment agnostic.
Remember that since everything is a file in Salt, the environments are
primarily file server environments. This means that environments that have
nothing to do with states can be defined and used to distribute other files.
A clean and recommended setup for multiple environments would look like this:
# Master file_roots configuration:
file_roots:
base:
- /srv/salt/base
dev:
- /srv/salt/dev
qa:
- /srv/salt/qa
prod:
- /srv/salt/prod
Then only place state trees in the dev, qa and prod environments, leaving
the base environment open for generic file transfers. Then the top.sls file
would look something like this:
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
qa:
'webserver*qa*':
- webserver
'db*qa*':
- db
prod:
'webserver*prod*':
- webserver
'db*prod*':
- db
Other Ways of Targeting Minions
In addition to globs, minions can be specified in top files a few other
ways. Some common ones are compound matches
and node groups.
Here is a slightly more complex top file example, showing the different types
of matches you can perform:
base:
'*':
- ldap-client
- networking
- salt.minion
'salt-master*':
- salt.master
'^(memcache|web).(qa|prod).loc$':
- match: pcre
- nagios.mon.web
- apache.server
'os:Ubuntu':
- match: grain
- repos.ubuntu
'os:(RedHat|CentOS)':
- match: grain_pcre
- repos.epel
'foo,bar,baz':
- match: list
- database
'somekey:abc':
- match: pillar
- xyz
'nag1* or G@role:monitoring':
- match: compound
- nagios.server
In this example top.sls
, all minions get the ldap-client, networking and
salt.minion states. Any minion with an id matching the salt-master*
glob
will get the salt.master state. Any minion with ids matching the regular
expression ^(memcache|web).(qa|prod).loc$
will get the nagios.mon.web and
apache.server states. All Ubuntu minions will receive the repos.ubuntu state,
while all RHEL and CentOS minions will receive the repos.epel state. The
minions foo
, bar
, and baz
will receive the database state. Any
minion with a pillar named somekey
, having a value of abc
will receive
the xyz state. Finally, minions with ids matching the nag1* glob or with a
grain named role
equal to monitoring
will receive the nagios.server
state.
How Top Files Are Compiled
As mentioned earlier, the top files in the different environments are compiled
into a single set of data. The way in which this is done follows a few rules,
which are important to understand when arranging top files in different
environments. The examples below all assume that the file_roots
are set as in the above multi-environment example.
- The
base
environment's top file is processed first. Any environment which
is defined in the base
top.sls as well as another environment's top file,
will use the instance of the environment configured in base
and ignore
all other instances. In other words, the base
top file is
authoritative when defining environments. Therefore, in the example below,
the dev
section in /srv/salt/dev/top.sls
would be completely
ignored.
/srv/salt/base/top.sls:
base:
'*':
- common
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
/srv/salt/dev/top.sls:
dev:
'10.10.100.0/24':
- match: ipcidr
- deployments.dev.site1
'10.10.101.0/24':
- match: ipcidr
- deployments.dev.site2
Note
The rules below assume that the environments being discussed were not
defined in the base
top file.
- If, for some reason, the
base
environment is not configured in the
base
environment's top file, then the other environments will be checked
in alphabetical order. The first top file found to contain a section for the
base
environment wins, and the other top files' base
sections are
ignored. So, provided there is no base
section in the base
top file,
with the below two top files the dev
environment would win out, and the
common.centos
SLS would not be applied to CentOS hosts.
/srv/salt/dev/top.sls:
base:
'os:Ubuntu':
- common.ubuntu
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
/srv/salt/qa/top.sls:
base:
'os:Ubuntu':
- common.ubuntu
'os:CentOS':
- common.centos
qa:
'webserver*qa*':
- webserver
'db*qa*':
- db
- For environments other than
base
, the top file in a given environment
will be checked for a section matching the environment's name. If one is
found, then it is used. Otherwise, the remaining (non-base
) environments
will be checked in alphabetical order. In the below example, the qa
section in /srv/salt/dev/top.sls
will be ignored, but if
/srv/salt/qa/top.sls
were cleared or removed, then the states configured
for the qa
environment in /srv/salt/dev/top.sls
would be applied.
/srv/salt/dev/top.sls:
dev:
'webserver*dev*':
- webserver
'db*dev*':
- db
qa:
'10.10.200.0/24':
- match: ipcidr
- deployments.qa.site1
'10.10.201.0/24':
- match: ipcidr
- deployments.qa.site2
/srv/salt/qa/top.sls:
qa:
'webserver*qa*':
- webserver
'db*qa*':
- db
Note
When in doubt, the simplest way to configure your states is with a single
top.sls in the base
environment.
SLS Template Variable Reference
The template engines available to sls files and file templates come loaded
with a number of context variables. These variables contain information and
functions to assist in the generation of templates.
Salt
The salt variable is available to abstract the salt library functions. This
variable is a python dictionary containing all of the functions available to
the running salt minion:
{% for file in salt['cmd.run']('ls /opt/to_remove').splitlines() %}
{{ file }}:
  file.absent
{% endfor %}
Opts
The opts variable abstracts the contents of the minion's configuration file
directly to the template. The opts variable is a dictionary.
The config.get
function also searches for values in the opts dictionary.
Pillar
The pillar dictionary can be referenced directly:
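{{ pillar['apache'] }}
This assumes a top-level pillar key named apache; any pillar key can be
referenced the same way.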
Using the pillar.get
function via the salt variable is generally
recommended since a default can be safely set in the event that the value
is not available in pillar and dictionaries can be traversed directly:
{{ salt['pillar.get']('key', 'failover_value') }}
{{ salt['pillar.get']('stuff:more:deeper') }}
Grains
The grains dictionary makes the minion's grains directly available:
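{{ grains['os'] }}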
The grains.get
function can be used to traverse deeper grains and set
defaults:
{{ salt['grains.get']('os') }}
env
The env variable is available in sls files when gathering the sls from
an environment.
sls
The sls variable contains the sls reference value. The sls reference value
is the value used to include the sls in top files or via the include option.
State Modules
State Modules are the components that map to actual enforcement and management
of Salt states.
States are Easy to Write!
State Modules should be easy to write and straightforward. The information
passed to the SLS data structures will map directly to the state modules.
Mapping the information from the SLS data is simple; this example should
illustrate:
/etc/salt/master: # maps to "name"
file: # maps to State module filename e.g. https://github.com/saltstack/salt/blob/develop/salt/states/file.py
- managed # maps to the managed function in the file State module
- user: root # one of many options passed to the managed function
- group: root
- mode: 644
- source: salt://salt/master
Therefore this SLS data can be directly linked to a module, function and
arguments passed to that function.
This does place a burden on authors: function names, state names, and function
arguments should be very human readable inside state modules, since they
directly define the user interface.
Keyword Arguments
Salt passes a number of keyword arguments to states when rendering them,
including the environment, a unique identifier for the state, and more.
Additionally, keep in mind that the requisites for a state are part of the
keyword arguments. Therefore, if you need to iterate through the keyword
arguments in a state, these must be considered and handled appropriately.
One such example is in the pkgrepo.managed
state, which needs to be able to handle
arbitrary keyword arguments and pass them to module execution functions.
An example of how these keyword arguments can be handled can be found
here.
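As a rough illustration only (not the actual pkgrepo code), a state function
that accepts arbitrary keyword arguments might filter out the requisite and
internal keys before passing the remainder to an execution module. The
examplerepo.manage module call below is hypothetical:
def managed(name, **kwargs):
    ret = {'name': name, 'changes': {}, 'result': True, 'comment': ''}
    # Requisites and other internal data also arrive as keyword arguments,
    # so strip them out before handing the rest to the execution module.
    internal = ('require', 'watch', 'require_in', 'watch_in', 'prereq',
                'prereq_in', 'use', 'use_in', 'order')
    repo_kwargs = dict(
        (key, value) for key, value in kwargs.items()
        if not key.startswith('__') and key not in internal
    )
    # Hypothetical execution module call that accepts the remaining kwargs
    ret['changes'] = __salt__['examplerepo.manage'](name, **repo_kwargs)
    return ret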
Using Custom State Modules
Place your custom state modules inside a _states
directory within the
file_roots
specified by the master config file. These custom
state modules can then be distributed in a number of ways. Custom state modules
are distributed when state.highstate
is
run, or by executing the saltutil.sync_states
or saltutil.sync_all
functions.
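For example, either of the following will push custom state modules out to all
minions:
salt '*' saltutil.sync_states
salt '*' saltutil.sync_all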
Any custom states which have been synced to a minion, that are named the
same as one of Salt's default set of states, will take the place of the default
state with the same name. Note that a state's default name is its filename
(i.e. foo.py
becomes state foo
), but that its name can be overridden
by using a __virtual__ function.
Cross Calling Modules
As with Execution Modules, State Modules can also make use of the __salt__
and __grains__
data.
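A minimal sketch of cross calling from a state module (the state logic here is
illustrative only):
def installed(name):
    ret = {'name': name, 'changes': {}, 'result': True, 'comment': ''}
    # Read grain data directly
    if __grains__['os_family'] == 'Debian':
        # Delegate the real work to the pkg execution module
        ret['changes'] = __salt__['pkg.install'](name)
    return ret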
It is important to note that the real work of state management should not be
done in the state module unless it is needed. A good example is the pkg state
module. This module does not do any package management work, it just calls the
pkg execution module. This makes the pkg state module completely generic, which
is why there is only one pkg state module and many backend pkg execution
modules.
On the other hand some modules will require that the logic be placed in the
state module, a good example of this is the file module. But in the vast
majority of cases this is not the best approach, and writing specific
execution modules to do the backend work will be the optimal solution.
Return Data
A State Module must return a dict containing the following keys/values:
- name: The same value passed to the state as "name".
- changes: A dict describing the changes made. Each thing changed should
be a key, with its value being another dict with keys called "old" and "new"
containing the old/new values. For example, the pkg state's changes dict
has one key for each package changed, with the "old" and "new" keys in its
sub-dict containing the old and new versions of the package.
- result: A boolean value. True if the action was successful, otherwise
False.
- comment: A string containing a summary of the result.
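For example, a return from a hypothetical package upgrade might look like this:
{'name': 'vim',
 'changes': {'vim': {'old': '7.3', 'new': '7.4'}},
 'result': True,
 'comment': 'Package vim upgraded'}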
Test State
All states should check for and support test
being passed in the options.
This will return data about what changes would occur if the state were actually
run. An example of such a check could look like this:
# Return comment of changes if test.
if __opts__['test']:
ret['result'] = None
ret['comment'] = 'State Foo will execute with param {0}'.format(bar)
return ret
Make sure to test and return before performing any real actions on the minion.
Watcher Function
If the state being written should support the watch requisite then a watcher
function needs to be declared. The watcher function is called whenever the
watch requisite is invoked and should be generic to the behavior of the state
itself.
The watcher function should accept all of the options that the normal state
functions accept (as they will be passed into the watcher function).
A watcher function is typically used to execute state-specific reactive
behavior; for instance, the watcher for the service module restarts the
named service, which makes it possible for the service to
react to changes in the environment.
The watcher function also needs to return the same data that a normal state
function returns.
Mod_init Interface
Some states need to execute something only once to ensure that an environment
has been set up, or certain conditions global to the state behavior can be
predefined. This is the realm of the mod_init interface.
A state module can have a function called mod_init which executes when the
first state of this type is called. This interface was created primarily to
improve the pkg state. When packages are installed the package metadata needs
to be refreshed, but refreshing the package metadata every time a package is
installed is wasteful. The mod_init function for the pkg state sets a flag down
so that the first, and only the first, package installation attempt will refresh
the package database (the package database can of course be manually called to
refresh via the refresh
option in the pkg state).
The mod_init function must accept the Low State Data for the given
executing state as an argument. The low state data is a dict and can be seen by
executing the state.show_lowstate function. Then the mod_init function must
return a bool. If the return value is True, then the mod_init function will not
be executed again, meaning that the needed behavior has been set up. Otherwise,
if the mod_init function returns False, then the function will be called the
next time.
A good example of the mod_init function is found in the pkg state module:
def mod_init(low):
'''
Refresh the package database here so that it only needs to happen once
'''
if low['fun'] == 'installed' or low['fun'] == 'latest':
rtag = __gen_rtag()
if not os.path.exists(rtag):
open(rtag, 'w+').write('')
return True
else:
return False
The mod_init function in the pkg state accepts the low state data as low
and then checks to see if the function being called is going to install
packages; if the function is not going to install packages, then there is no
need to refresh the package database. Therefore, if the package database is
prepared to refresh, return True so that mod_init will not be called
the next time a pkg state is evaluated; otherwise return False and mod_init
will be called the next time a pkg state is evaluated.
Full list of builtin state modules
alias | Configuration of email aliases.
alternatives | Configuration of the alternatives system
apt | Package management operations specific to APT- and DEB-based systems
augeas | Configuration management using Augeas
cmd | Execution of arbitrary commands
cron | Management of cron, the Unix command scheduler.
debconfmod | Management of debconf selections.
disk | Disk monitoring state
eselect | Management of Gentoo configuration using eselect
file | Operations on regular files, special files, directories, and symlinks.
gem | Installation of Ruby modules packaged as gems.
git | Interaction with Git repositories.
grains | Manage grains on the minion.
group | Management of user groups.
hg | Interaction with Mercurial repositories.
host | Management of addresses and names in hosts file.
iptables | Management of iptables
keyboard | Management of keyboard layouts
kmod | Loading and unloading of kernel modules.
layman | Management of Gentoo Overlays using layman
libvirt | Manage libvirt certs.
locale | Management of languages/locales
lvm | Management of Linux logical volumes
makeconf | Management of Gentoo make.conf
mdadm | Managing software RAID with mdadm
modjk_worker | Send commands to a modjk load balancer via the peer system
module | Execution of Salt modules from within states.
mongodb_database | Management of Mongodb databases
mongodb_user | Management of Mongodb users
mount | Mounting of filesystems.
mysql_database | Management of MySQL databases (schemas).
mysql_grants | Management of MySQL grants (user permissions).
mysql_user | Management of MySQL users.
network | Configuration of network interfaces.
npm | Installation of NPM Packages
ntp | Management of NTP servers
pecl | Installation of PHP Extensions Using pecl
pip_state | Installation of Python Packages Using pip
pkg | Installation of packages using OS package managers such as yum or apt-get
pkgng | Manage package remote repo using FreeBSD pkgng
pkgrepo | Management of package repos
portage_config | Management of Portage package configuration on Gentoo
postgres_database | Management of PostgreSQL databases.
postgres_group | Management of PostgreSQL groups (roles).
postgres_user | Management of PostgreSQL users (roles).
quota | Management of POSIX Quotas
rabbitmq_user | Manage RabbitMQ Users.
rabbitmq_vhost | Manage RabbitMQ Virtual Hosts.
rbenv | Managing Ruby installations with rbenv.
rvm | Managing Ruby installations and gemsets with Ruby Version Manager (RVM).
selinux | Management of SELinux rules.
service | Starting or restarting of services and daemons.
ssh_auth | Control of entries in SSH authorized_key files.
ssh_known_hosts | Control of SSH known_hosts entries.
stateconf | Stateconf System
supervisord | Interaction with the Supervisor daemon.
svn | Manage SVN repositories
sysctl | Configuration of the Linux kernel using sysctl.
timezone | Management of timezones
tomcat | This state uses the manager webapp to manage Apache tomcat webapps
user | Management of user accounts.
virtualenv_mod | Setup of Python virtualenv sandboxes.
win_dns_client | Module for configuring DNS Client on Windows systems
win_firewall | State for configuring Windows Firewall
win_path | Manage the Windows System PATH
win_servermanager | Manage Windows features via the ServerManager powershell module
win_system | Management of Windows system information
Renderers
The Salt state system operates by gathering information from simple data
structures. The state system was designed in this way to make interacting with
it generic and simple. This also means that state files (SLS files) can be one
of many formats.
By default SLS files are rendered as Jinja templates and then parsed as YAML
documents. But since the only thing the state system cares about is raw data,
the SLS files can be any structured format that can be dreamed up.
Currently there is support for Jinja + YAML, Mako + YAML, Wempy + YAML,
Jinja + json, Mako + json, and Wempy + json. But
renderers can be written to support anything. This means that the Salt states
could be managed by XML files, HTML files, puppet files, or any format that
can be translated into the data structure used by the state system.
Multiple Renderers
When deploying a state tree a default renderer is selected in the master
configuration file with the renderer option. But multiple renderers can be
used inside the same state tree.
When rendering SLS files Salt checks for the presence of a Salt specific
shebang line. The shebang line syntax was chosen because it is familiar to
the target audience, the systems admin and systems engineer.
The shebang line directly calls the name of the renderer as it is specified
within Salt. One of the most common reasons to use multiple renderers is to
use the Python or py
renderer:
#!py
def run():
'''
Install the python-mako package
'''
return {'include': ['python'],
'python-mako': {'pkg': ['installed']}}
The first line is a shebang that references the py
renderer.
Composing Renderers
A renderer can be composed from other renderers by connecting them in a series
of pipes(|
). In fact, the default Jinja + YAML
renderer is implemented
by combining a YAML renderer and a Jinja renderer. Such renderer configuration
is specified as: jinja | yaml
.
Other renderer combinations are possible; here are a few examples:
yaml
- i.e, just YAML, no templating.
mako | yaml
- pass the input to the
mako
renderer, whose output is then fed into the
yaml
renderer.
jinja | mako | yaml
- This one allows you to use both jinja and mako templating syntax in the
input and then parse the final rendered output as YAML.
And here's a contrived example sls file using the jinja | mako | yaml
renderer:
#!jinja|mako|yaml
An_Example:
cmd.run:
- name: |
echo "Using Salt ${grains['saltversion']}" \
"from path {{grains['saltpath']}}."
- cwd: /
<%doc> ${...} is Mako's notation, and so is this comment. </%doc>
{# Similarly, {{...}} is Jinja's notation, and so is this comment. #}
For backward compatibility, jinja | yaml
can also be written as
yaml_jinja
, and similarly, the yaml_mako
, yaml_wempy
,
json_jinja
, json_mako
, and json_wempy
renderers are all supported
as well.
Keep in mind that not all renderers can be used alone or with any other renderers.
For example, the template renderers shouldn't be used alone as their outputs are
just strings, which still need to be parsed by another renderer to turn them into
highstate data structures. Also, for example, it doesn't make sense to specify
yaml | jinja
either, because the output of the yaml renderer is a highstate
data structure (a dict in Python), which cannot be used as the input to a template
renderer. Therefore, when combining renderers, you should know what each renderer
accepts as input and what it returns as output.
Writing Renderers
Writing a renderer is easy; all that is required is that a Python module is
placed in the renderers directory and that the module implements the render
function. The render
function will be passed the path of the SLS file. In
the render
function, parse the passed file and return the data structure
derived from the file. You can place your custom renderers in a _renderers
directory within the file_roots
specified by the master config
file. These custom renderers are distributed when state.highstate
is run, or by executing the
saltutil.sync_renderers
or
saltutil.sync_all
functions.
Any custom renderers which have been synced to a minion, that are named the
same as one of Salt's default set of renderers, will take the place of the
default renderer with the same name.
Examples
The best place to find examples of renderers is in the Salt source code. The
renderers included with Salt can be found here:
https://github.com/saltstack/salt/blob/develop/salt/renderers
Here is a simple YAML renderer example:
import yaml
def render(yaml_data, env='', sls='', **kws):
if not isinstance(yaml_data, basestring):
yaml_data = yaml_data.read()
data = yaml.load(yaml_data)
return data if data else {}
Full list of builtin renderer modules
Full list of builtin pillar modules
cmd_json | Execute a command and read the output as JSON.
cmd_yaml | Execute a command and read the output as YAML.
cobbler | A pillar module to pull data from Cobbler via its API into the pillar dictionary.
django_orm | Generate pillar data from Django models through the Django ORM
git_pillar | Clone a remote git repository and use the filesystem as a pillar directory.
hiera | Take in a hiera configuration file location and execute it.
libvirt | Load up the libvirt keys into pillar for a given minion if said keys have been generated using the libvirt key runner.
mongo | Read pillar data from a mongodb collection.
pillar_ldap | This pillar module parses a config file (specified in the salt master config), and executes a series of LDAP searches based on that config.
puppet | Execute an unmodified puppet_node_classifier and read the output as YAML.
reclass_adapter |
Full list of builtin master tops modules
Salt Runners
Salt runners are convenience applications executed with the salt-run command.
Where as salt modules are sent out to minions for execution, salt runners are
executed on the salt master.
A Salt runner can be a simple client call, or a complex application.
The use for a Salt runner is to build a frontend hook for running sets of
commands via Salt or creating special formatted output.
Writing Salt Runners
Salt runners can be easily written; they work in a similar way to Salt modules
except they run on the server side.
A runner is a Python module that contains functions, each public function is
a runner that can be executed via the salt-run command.
If a Python module named test.py is created in the runners directory and
contains a function called foo
then the function could be called with:
Examples
The best examples of runners can be found in the Salt source:
https://github.com/saltstack/salt/blob/develop/salt/runners
A simple runner that returns a well-formatted list of the minions that are
responding to Salt calls would look like this:
# Import salt modules
import salt.client
def up():
'''
Print a list of all of the minions that are up
'''
client = salt.client.LocalClient(__opts__['conf_file'])
minions = client.cmd('*', 'test.ping', timeout=1)
for minion in sorted(minions):
print minion
Full list of runner modules
cache | Return cached data from minions
doc | A runner module to collect and display the inline documentation from the
fileserver | Directly manage the salt fileserver plugins
jobs | A convenience system to manage jobs, both active and already run
launchd |
manage | General management functions for salt, tools like seeing what hosts are up
network | Network tools to run from the Master
search | Runner frontend to search system
state | Execute overstate functions
virt | Control virtual machines via Salt
winrepo | Runner to manage Windows software repo
Full list of builtin wheel modules
config | Manage the master configuration file
file_roots | Read in files from the file_root and save files to the file root
key | Wheel system wrapper for key system
pillar_roots | The pillar_roots wheel module is used to manage files under the pillar roots directories on the master server.
Full list of builtin auth modules
keystone | Provide authentication using OpenStack Keystone
ldap | Provide authentication using simple LDAP binds
pam | Authenticate against PAM
stormpath_mod | Salt Stormpath Authentication
Full list of builtin output modules
grains | Special outputter for grains
highstate | The return data from the Highstate command is a standard data structure which is parsed by the highstate outputter to deliver a clean and readable set of information about the HighState run on minions.
json_out | The JSON output module converts the return data into JSON.
key | Salt Key makes use of the outputter system to format information sent to the salt-key command.
nested | Recursively display nested data, this is the default outputter.
no_out | Display no output.
no_return | Display output for minions that did not return
overstatestage | Display clean output of an overstate stage
pprint_out | The python pretty print system was the default outputter.
raw | The raw outputter outputs the data via the python print function and is shown in a raw state.
txt | The txt outputter has been developed to make the output from shell commands on minions appear as they do when the command is executed on the minion.
virt_query | virt.query outputter
yaml_out | Output data in YAML, this outputter defaults to printing in YAML block mode for better readability.
Python client API
Salt is written to be completely API centric; Salt minions and the master can be
built directly into third party applications as a communication layer. The Salt
client API is very straightforward.
A number of client command methods are available depending on the exact
behavior desired.
LocalClient
-
class
salt.client.
LocalClient
(c_path='/etc/salt/master', mopts=None)
LocalClient
is the same interface used by the salt
command-line tool on the Salt Master. LocalClient
is used to send a
command to Salt minions to execute execution modules and return the results to the Salt Master.
Importing and using LocalClient
must be done on the same machine as the
Salt Master and it must be done using the same user that the Salt Master is
running as (unless external_auth
is configured and
authentication credentials are included in the execution).
-
cmd
(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)
The cmd method will execute and wait for the timeout period for all
minions to reply, then it will return all minion data at once.
Usage:
import salt.client
client = salt.client.LocalClient()
ret = client.cmd('*', 'cmd.run', ['whoami'])
With authentication:
# Master config
...
external_auth:
pam:
fred:
- test.*
...
ret = client.cmd('*', 'test.ping', [], username='fred', password='pw', eauth='pam')
Compound command usage:
ret = client.cmd('*', ['grains.items', 'cmd.run'], [[], ['whoami']])
Parameters:
- tgt (string or list) -- Which minions to target for the execution. Default is shell
glob. Modified by the expr_form option.
- fun (string or list of strings) -- The module and function to call on the specified
minions, of the form module.function. For example test.ping or grains.items.
Compound commands: multiple functions may be called in a single publish by
passing a list of commands. This can dramatically lower overhead and speed up
the application communicating with Salt. This requires that the arg param is a
list of lists. The fun list and the arg list must correlate by index, meaning a
function that does not take arguments must still have a corresponding empty
list at the expected index.
- arg (list or list-of-lists) -- A list of arguments to pass to the remote function. If
the function takes no arguments, arg may be omitted except when executing a
compound command.
- timeout -- Seconds to wait after the last minion returns but before all minions
return.
- expr_form -- The type of tgt. Allowed values: glob (Bash glob completion, the
default), pcre (Perl style regular expression), list (Python list of hosts),
grain (match based on a grain comparison), grain_pcre (grain comparison with a
regex), pillar (pillar data comparison), nodegroup (match on nodegroup), range
(use a Range server for matching), and compound (pass a compound match string).
- ret -- The returner to use. The value passed can be a single returner, or a
comma-delimited list of returners to call in order on the minions.
- kwargs -- Optional keyword arguments. Authentication credentials may be passed
when using external_auth: eauth (the external_auth backend), username and
password, or token.
Returns: A dictionary with the result of the execution, keyed by minion ID. A
compound command will return a sub-dictionary keyed by function name.
-
cmd_async
(tgt, fun, arg=(), expr_form='glob', ret='', kwarg=None, **kwargs)
Execute a command and get back the jid, don't wait for anything
The function signature is the same as cmd()
with the
following exceptions.
-
cmd_cli
(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', verbose=False, kwarg=None, **kwargs)
Used by the salt CLI. This method returns minion returns as
they come back and attempts to block until all minions return.
The function signature is the same as cmd()
with the
following exceptions.
Parameters: verbose -- Print extra information about the running command
Returns: A generator
-
cmd_iter
(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)
Yields the individual minion returns as they come in
The function signature is the same as cmd()
with the
following exceptions.
-
cmd_iter_no_block
(tgt, fun, arg=(), timeout=None, expr_form='glob', ret='', kwarg=None, **kwargs)
Blocks while waiting for individual minions to return.
The function signature is the same as cmd()
with the
following exceptions.
Returns: None until the next minion returns. This allows for actions
to be injected in between minion returns.
Salt Caller
-
class
salt.client.
Caller
(c_path='/etc/salt/minion')
Caller
is the same interface used by the salt-call
command-line tool on the Salt Minion.
Importing and using Caller
must be done on the same machine as a
Salt Minion and it must be done using the same user that the Salt Minion is
running as.
Usage:
import salt.client
caller = salt.client.Caller()
caller.function('test.ping')
# Or call objects directly
caller.sminion.functions['cmd.run']('ls -l')
-
function
(fun, *args, **kwargs)
Call a single salt function
RunnerClient
-
class
salt.runner.
RunnerClient
(opts)
RunnerClient
is the same interface used by the salt-run
command-line tool on the Salt Master. It executes runner modules which run on the Salt Master.
Importing and using RunnerClient
must be done on the same machine as
the Salt Master and it must be done using the same user that the Salt
Master is running as.
-
cmd
(fun, arg, kwarg=None)
Execute a runner with the given arguments
-
low
(fun, low)
Pass in the runner function name and the low data structure
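Usage (a minimal sketch, assuming the master configuration lives at
/etc/salt/master):
import salt.config
import salt.runner

opts = salt.config.master_config('/etc/salt/master')
runner = salt.runner.RunnerClient(opts)
runner.cmd('manage.up', [])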
WheelClient
-
class
salt.wheel.
Wheel
(opts)
WheelClient
is an interface to Salt's wheel modules. Wheel modules interact with various parts of the Salt
Master.
Importing and using WheelClient
must be done on the same machine as the
Salt Master and it must be done using the same user that the Salt Master is
running as.
-
call_func
(fun, **kwargs)
Execute a master control function
-
master_call
(**kwargs)
Send a function call to a wheel module through the master network interface
Expects that one of the kwargs is key 'fun' whose value is the namestring
of the function to call
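Usage (a minimal sketch, assuming the master configuration lives at
/etc/salt/master; the key.list_all wheel function is used only as an example):
import salt.config
import salt.wheel

opts = salt.config.master_config('/etc/salt/master')
wheel = salt.wheel.Wheel(opts)
wheel.call_func('key.list_all')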
Peer Communication
Salt 0.9.0 introduced the capability for Salt minions to publish commands. The
intent of this feature is not for Salt minions to act as independent brokers
with one another, but to allow Salt minions to pass commands to each other.
In Salt 0.10.0 the ability to execute runners from the master was added. This
allows for the master to return collective data from runners back to the
minions via the peer interface.
The peer interface is configured through two options in the master
configuration file. For minions to send commands through the master, the peer
configuration is used. To allow minions to execute runners from the master,
the peer_run
configuration is used.
Since this presents a viable security risk by allowing minions access to the
master publisher the capability is turned off by default. The minions can be
allowed access to the master publisher on a per minion basis based on regular
expressions. Minions with specific ids can be allowed access to certain Salt
modules and functions.
Peer Configuration
The configuration is done under the peer
setting in the Salt master
configuration file, here are a number of configuration possibilities.
The simplest approach is to enable all communication for all minions; this is
only recommended for very secure environments:
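peer:
  .*:
    - .*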
This configuration will allow minions with IDs ending in example.com access
to the test, ps, and pkg module functions.
peer:
.*example.com:
- test.*
- ps.*
- pkg.*
The configuration logic is simple: a regular expression is passed for matching
minion ids, and then a list of expressions matching minion functions is
associated with the named minion. For instance, this configuration will also
allow minions ending with foo.org access to the publisher.
peer:
.*example.com:
- test.*
- ps.*
- pkg.*
.*foo.org:
- test.*
- ps.*
- pkg.*
Peer Runner Communication
Configuration to allow minions to execute runners from the master is done via
the peer_run
option on the master. The peer_run
configuration follows
the same logic as the peer
option. The only difference is that access is
granted to runner modules.
To open up access to all minions to all runners:
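peer_run:
  .*:
    - .*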
This configuration will allow minions with IDs ending in example.com access
to the manage and jobs runner functions.
peer_run:
.*example.com:
- manage.*
- jobs.*
Using Peer Communication
The publish module was created to manage peer communication. The publish module
comes with a number of functions to execute peer communication in different
ways. Currently there are three functions in the publish module. These examples
will show how to test the peer system via the salt-call command.
To execute test.ping on all minions:
# salt-call publish.publish \* test.ping
To execute the manage.up runner:
# salt-call publish.runner manage.up
To match minions using other matchers, use expr_form
:
# salt-call publish.publish 'webserv* and not G@os:Ubuntu' test.ping expr_form='compound'
Client ACL system
The salt client ACL system is a means to allow system users other than root to
have access to execute select salt commands on minions from the master.
The client ACL system is configured in the master configuration file via the
client_acl
configuration option. Under the client_acl
configuration
option, the users allowed to send commands are specified, along with a list of
regular expressions specifying the minion functions which will be made
available to each user. This configuration is much like the peer
configuration:
# Allow thatch to execute anything and allow fred to use ping and pkg
client_acl:
thatch:
- .*
fred:
- ping.*
- pkg.*
Permission Issues
Directories required for client_acl
must be modified to be readable by the
users specified:
chmod 755 /var/cache/salt /var/cache/salt/jobs /var/run/salt
If you are upgrading from earlier versions of salt you must also remove any
existing user keys and re-start the Salt master:
rm /var/cache/salt/.*key
service salt-master restart
Salt Syndic
The Salt Syndic interface is a powerful tool which allows for the construction
of Salt command topologies. A basic Salt setup has a Salt Master commanding a
group of Salt Minions. The Syndic interface is a special passthrough
minion: it is run on a master and connects to another master, and the master
that the Syndic minion is listening to can then control the minions attached to
the master running the syndic.
Support for many layouts is not intended to impose the use of any single
topology, but to allow a more flexible method of controlling many systems.
Configuring the Syndic
Since the Syndic only needs to be attached to a higher level master the
configuration is very simple. On a master that is running a syndic to connect
to a higher level master the syndic_master option needs to be set in the
master config file. The syndic_master option contains the hostname or IP
address of the master server that can control the master that the syndic is
running on.
The master that the syndic connects to sees the syndic as an ordinary minion,
and treats it as such. The higher level master will need to accept the syndic's
minion key like any other minion. This master will also need to set the
order_masters value in the configuration to True. The order_masters option in
the config on the higher level master is very important: to control a syndic,
extra information needs to be sent with the publications, and the order_masters
option makes sure that the extra data is sent out.
To sum up, you have those configuration options available on the master side:
- syndic_master: MasterOfMaster ip/address
- syndic_master_port: MasterOfMaster ret_port
- syndic_log_file: path to the logfile (absolute or not)
- syndic_pidfile: path to the pidfile (absolute or not)
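For example, in the master config file on the lower-level master (the hostname
is illustrative):
syndic_master: master-of-masters.example.com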
Running the Syndic
The Syndic is a separate daemon that needs to be started on the master that is
controlled by a higher master. Starting the Syndic daemon is the same as
starting the other Salt daemons.
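For example, to start the Syndic daemon in the background:
salt-syndic -d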
File Server Backends
Salt version 0.12.0 introduced the ability for the Salt Master to integrate
different file server backends. File server backends allow the Salt file
server to act as a transparent bridge to external resources. The primary
example of this is the git backend which allows for all of the Salt formulas
and files to be maintained in a remote git repository.
The fileserver backend system can accept multiple backends as well. This makes
it possible to have the environments listed in the file_roots configuration
available in addition to other backends, or the ability to mix multiple
backends.
This feature is managed by the fileserver_backend option in the master
config. The desired backend systems are listed in order of search priority:
fileserver_backend:
- roots
- git
With this configuration, the environments and files defined in the file_roots configuration will be searched first; if the referenced environment and file are not found there, the git backend will be searched.
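When the git backend is enabled, the repositories it serves are listed under the gitfs_remotes option in the master config. A minimal sketch, using a hypothetical repository URL:
fileserver_backend:
  - roots
  - git
gitfs_remotes:
  - git://github.com/example/salt-states.git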
Environments
The concept of environments is followed in all backend systems. The environments in the classic roots backend are defined in the file_roots option. Environments map differently based on the backend; for instance, the git backend translates branches and tags in git to environments. This makes it easy to define environments in git by just setting a tag or creating a branch.
Dynamic Module Distribution
Salt Python modules can be distributed automatically via the Salt file server. Under the root of any environment defined via the file_roots option on the master server, directories corresponding to the type of module can be used.
- Module sync
- Automatically transfer and load modules, grains, renderers, returners,
states, etc from the master to the minions.
The directories are prepended with an underscore:
_modules
_grains
_renderers
_returners
_states
The contents of these directories need to be synced over to the minions after
Python modules have been created in them. There are a number of ways to sync
the modules.
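As an illustration, assuming the base environment's file_roots points at /srv/salt, custom modules might be laid out as follows (the module file names are hypothetical):
/srv/salt/top.sls
/srv/salt/_modules/mymodule.py
/srv/salt/_grains/mygrains.py
/srv/salt/_states/mystate.py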
Sync Via States
The minion configuration contains an option autoload_dynamic_modules which defaults to True. This option makes the state system refresh all dynamic modules when states are run. To disable this behavior, set autoload_dynamic_modules to False in the minion config.
When dynamic modules are autoloaded via states, only the modules pertinent to the environments matched in the master's top file are downloaded. This is important to remember: while modules can be manually synced from any specific environment, only that environment's modules will be loaded when a state run is executed.
Sync Via the saltutil Module
The saltutil module has a number of functions that can be used to sync all
or specific dynamic modules. The saltutil module function saltutil.sync_all
will sync all module types over to a minion. For more information see:
salt.modules.saltutil
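For example, to sync every custom module type to all minions, or only the execution modules:
salt '*' saltutil.sync_all
salt '*' saltutil.sync_modules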
File Server Configuration
The Salt file server is a high performance file server written in ZeroMQ. It
manages large files quickly and with little overhead, and has been optimized
to handle small files in an extremely efficient manner.
The Salt file server is an environment aware file server. This means that
files can be allocated within many root directories and accessed by
specifying both the file path and the environment to search. The
individual environments can span across multiple directory roots
to create overlays and to allow for files to be organized in many flexible
ways.
Environments
The Salt file server defaults to the mandatory base environment. This environment MUST be defined and is used to download files when no environment is specified.
Environments allow for files and sls data to be logically separated, but
environments are not isolated from each other. This allows for logical
isolation of environments by the engineer using Salt, but also allows
for information to be used in multiple environments.
Directory Overlay
Each environment setting is a list of directories to publish files from. These directories are searched in order to find the specified file, and the first file found is returned. This means that directory data is prioritized based on the order in which the directories are listed. In the case of this file_roots configuration:
file_roots:
base:
- /srv/salt/base
- /srv/salt/failover
If a file's URI is salt://httpd/httpd.conf, it will first search for the file at /srv/salt/base/httpd/httpd.conf. If the file is found there it will be returned. If the file is not found there, then /srv/salt/failover/httpd/httpd.conf will be used for the source.
This allows for directories to be overlaid and prioritized based on the order
they are defined in the configuration.
Local File Server
The file server can be rerouted to run from the minion. This is primarily to
enable running Salt states without a Salt master. To use the local file server
interface, copy the file server data to the minion and set the file_roots
option on the minion to point to the directories copied from the master.
Once the minion file_roots option has been set, change the file_client option to local to make sure that the local file server interface is used.
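A minimal sketch of the relevant minion settings, assuming the file server data was copied to /srv/salt on the minion:
file_client: local
file_roots:
  base:
    - /srv/salt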
Salt File Server
Salt comes with a simple file server suitable for distributing files to the
Salt minions. The file server is a stateless ZeroMQ server that is built into
the Salt master.
The main intent of the Salt file server is to present files for use in the
Salt state system. With this said, the Salt file server can be used for any
general file transfer from the master to the minions.
The cp Module
The cp module is the home of minion side file server operations. The cp module is used by the Salt state system and by salt-cp, and can be used to distribute files presented by the Salt file server.
Environments
Since the file server is made to work with the Salt state system, it supports
environments. The environments are defined in the master config file and
when referencing an environment the file specified will be based on the root
directory of the environment.
get_file
The cp.get_file function can be used on the minion to download a file from
the master, the syntax looks like this:
# salt '*' cp.get_file salt://vimrc /etc/vimrc
This will instruct all Salt minions to download the vimrc file and copy it
to /etc/vimrc
Template rendering can be enabled on both the source and destination file names
like so:
# salt '*' cp.get_file "salt://{{grains.os}}/vimrc" /etc/vimrc template=jinja
This example would instruct all Salt minions to download the vimrc from a
directory with the same name as their OS grain and copy it to /etc/vimrc
For larger files, the cp.get_file module also supports gzip compression.
Because gzip is CPU-intensive, this should only be used in
scenarios where the compression ratio is very high (e.g. pretty-printed JSON
or YAML files).
Use the gzip named argument to enable it. Valid values are 1..9,
where 1 is the lightest compression and 9 the heaviest. 1 uses the least CPU
on the master (and minion), 9 uses the most.
# salt '*' cp.get_file salt://vimrc /etc/vimrc gzip=5
Finally, note that by default cp.get_file does not create new destination
directories if they do not exist. To change this, use the makedirs argument:
# salt '*' cp.get_file salt://vimrc /etc/vim/vimrc makedirs=True
In this example, /etc/vim/ would be created if it didn't already exist.
get_dir
The cp.get_dir function can be used on the minion to download an entire
directory from the master. The syntax is very similar to get_file:
# salt '*' cp.get_dir salt://etc/apache2 /etc
cp.get_dir supports template rendering and gzip compression arguments just
like get_file:
# salt '*' cp.get_dir salt://etc/{{pillar.webserver}} /etc gzip=5 template=jinja
File Server Client API
A client API is available which allows for modules and applications to be
written which make use of the Salt file server.
The file server uses the same authentication and encryption used by the rest
of the Salt system for network communication.
FileClient Class
The FileClient class is used to set up the communication from the minion to
the master. When creating a FileClient object the minion configuration needs
to be passed in. When using the FileClient from within a minion module the built-in __opts__ data can be passed:
import salt.minion

def get_file(path, dest, env='base'):
    '''
    Used to get a single file from the Salt master

    CLI Example:

        salt '*' cp.get_file salt://vimrc /etc/vimrc
    '''
    # Create the FileClient object
    client = salt.minion.FileClient(__opts__)
    # Call get_file
    return client.get_file(path, dest, False, env)
When using the FileClient class outside of a minion module, where the __opts__ data is not available, it needs to be generated:
import salt.minion
import salt.config

def get_file(path, dest, env='base'):
    '''
    Used to get a single file from the Salt master
    '''
    # Get the configuration data
    opts = salt.config.minion_config('/etc/salt/minion')
    # Create the FileClient object
    client = salt.minion.FileClient(opts)
    # Call get_file
    return client.get_file(path, dest, False, env)
Full list of builtin fileserver modules
gitfs - The backend for the git-based file server system.
hgfs - The backend for the Mercurial-based file server system.
roots - The default file server backend.
s3fs - The backend for a fileserver based on Amazon S3.
Configuration file examples
Configuring the Salt Master
The Salt system is amazingly simple and easy to configure; the two components of the Salt system each have a respective configuration file. The salt-master is configured via the master configuration file, and the salt-minion is configured via the minion configuration file.
The configuration file for the salt-master is located at /etc/salt/master. The available options are as follows:
Primary Master Configuration
interface
Default: 0.0.0.0 (all interfaces)
The local interface to bind to.
publish_port
Default: 4505
The network port to set up the publication interface
user
Default: root
The user to run the Salt processes
max_open_files
Default: 100000
Each minion connecting to the master uses AT LEAST one file descriptor, the master subscription connection. If enough minions connect you might start seeing the following on the console (and then salt-master crashes):
Too many open files (tcp_listener.cpp:335)
Aborted (core dumped)
By default this value will be the one of ulimit -Hn, i.e., the hard limit for
max open files.
If you wish to set a different value than the default one, uncomment and configure this setting. Remember that this value CANNOT be higher than the hard limit. Raising the hard limit depends on your OS and/or distribution; a good way to find the limit is to search the internet for something like: raise max open files hard limit debian
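For example, to set the value explicitly:
max_open_files: 100000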
worker_threads
Default: 5
The number of threads to start for receiving commands and replies from minions.
If minions are stalling on replies because you have many minions, raise the
worker_threads value.
Worker threads should not be put below 3 when using the peer system, but can
drop down to 1 worker otherwise.
ret_port
Default: 4506
The port used by the return server, this is the server used by Salt to receive
execution returns and command executions.
pidfile
Default: /var/run/salt-master.pid
Specify the location of the master pidfile
pidfile: /var/run/salt-master.pid
root_dir
Default: /
The system root directory to operate from, change this to make Salt run from
an alternative root
pki_dir
Default: /etc/salt/pki
The directory to store the pki authentication keys.
cachedir
Default: /var/cache/salt
The location used to store cache information, particularly the job information
for executed salt commands.
cachedir: /var/cache/salt
keep_jobs
Default: 24
Set the number of hours to keep old job information
job_cache
Default: True
The master maintains a job cache. While this is a great addition, it can be a burden on the master for larger deployments (over 5000 minions). Disabling the job cache will make previously executed jobs unavailable to the jobs system and is not generally recommended. Normally it is wise to make sure the master has access to a fast IO system, or that a tmpfs is mounted to the jobs dir.
ext_job_cache
Default: ''
Used to specify a default returner for all minions. When this option is set, the specified returner needs to be properly configured and the minions will always default to sending returns to this returner. This will also disable the local job cache on the master.
minion_data_cache
Default: True
The minion data cache is a cache of information about the minions stored on the master; this information is primarily the pillar and grains data. The data is cached in the master cachedir under the name of the minion and is used to predetermine which minions are expected to reply to executions.
enforce_mine_cache
Default: False
By default, when minion_data_cache is disabled the mine will stop working, since it is based on cached data. Enabling this option explicitly enables only the cache for the mine system.
enforce_mine_cache: False
sock_dir
Default: /tmp/salt-unix
Set the location to use for creating Unix sockets for master process
communication
Master Security Settings
open_mode
Default: False
Open mode is a dangerous security feature. One problem encountered with pki
authentication systems is that keys can become "mixed up" and authentication
begins to fail. Open mode turns off authentication and tells the master to
accept all authentication. This will clean up the pki keys received from the
minions. Open mode should not be turned on for general use. Open mode should only be used for a short period of time to clean up pki keys. To turn on open mode set this value to True.
auto_accept
Default: False
Enable auto_accept. This setting will automatically accept all incoming
public keys from the minions
autosign_file
Default: not defined
If the autosign_file is specified, incoming keys matched in the autosign_file will be automatically accepted. Matches will be searched for first by string comparison, then by globbing, then by full-string regex matching. This is insecure!
client_acl
Default: {}
Enable user accounts on the master to execute specific modules. These modules
can be expressed as regular expressions
client_acl:
fred:
- test.ping
- pkg.*
client_acl_blacklist
Default: {}
Blacklist users or modules
This example would blacklist all non-sudo users, including root, from running any commands. It would also blacklist any use of the "cmd" module.
This is completely disabled by default.
client_acl_blacklist:
users:
- root
- '^(?!sudo_).*$' # all non sudo users
modules:
- cmd
external_auth
Default: {}
The external auth system uses the Salt auth modules to authenticate and
validate users to access areas of the Salt system.
external_auth:
pam:
fred:
- test.*
token_expire
Default: 43200
Time (in seconds) for a newly generated token to live. Default: 12 hours
file_recv
Default: False
Allow minions to push files to the master. This is disabled by default, for
security purposes.
Master Module Management
runner_dirs
Default: []
Set additional directories to search for runner modules
cython_enable
Default: False
Set to true to enable cython modules (.pyx files) to be compiled on the fly on
the Salt master
Master State System Settings
state_verbose
Default: False
state_verbose allows for the data returned from the minion to be more verbose. Normally only states that fail or states that have changes are returned, but setting state_verbose to True will return all states that were checked.
state_output
Default: full
The state_output setting changes the output format: if set to 'full', the full multi-line output for each changed state is shown; if set to 'terse', the output is shortened to a single line. If set to 'mixed', the output will be terse unless a state failed, in which case that output will be full. If set to 'changes', the output will be full unless the state didn't change.
state_top
Default: top.sls
The state system uses a "top" file to tell the minions what environment to
use and what modules to use. The state_top file is defined relative to the
root of the base environment
external_nodes
Default: None
The external_nodes option allows Salt to gather data that would normally be placed in a top file from an external node controller. The external_nodes option is the executable that will return the ENC data. Remember that Salt will look for external nodes AND top files and combine the results if both are enabled and available!
external_nodes: cobbler-ext-nodes
renderer
Default: yaml_jinja
The renderer to use on the minions to render the state data
failhard
Default: False
Set the global failhard flag. This informs all states to stop running states the moment a single state fails.
test
Default: False
Set all state calls to only test whether they would actually make changes, reporting what changes would be made instead of applying them.
Master File Server Settings
fileserver_backend
Default:
fileserver_backend:
- roots
Salt supports a modular fileserver backend system, this system allows the salt
master to link directly to third party systems to gather and manage the files
available to minions. Multiple backends can be configured and will be searched
for the requested file in the order in which they are defined here. The default
setting only enables the standard backend roots, which is configured using the file_roots option.
Example:
fileserver_backend:
- roots
- gitfs
file_roots
Default:
Salt runs a lightweight file server written in ZeroMQ to deliver files to
minions. This file server is built into the master daemon and does not
require a dedicated port.
The file server works on environments passed to the master. Each environment can have multiple root directories. The subdirectories in the multiple file roots must not overlap, otherwise the downloaded files cannot be reliably served. A base environment is required to house the top file.
Example:
file_roots:
base:
- /srv/salt
dev:
- /srv/salt/dev/services
- /srv/salt/dev/states
prod:
- /srv/salt/prod/services
- /srv/salt/prod/states
hash_type
Default: md5
The hash_type is the hash to use when discovering the hash of a file on
the master server. The default is md5, but sha1, sha224, sha256, sha384
and sha512 are also supported.
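For example, to use SHA-256 instead:
hash_type: sha256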
file_buffer_size
Default: 1048576
The buffer size in the file server in bytes
file_buffer_size: 1048576
Pillar Configuration
pillar_roots
Default:
Set the environments and directories used to hold pillar sls data. This configuration is the same as file_roots:
pillar_roots:
base:
- /srv/pillar
dev:
- /srv/pillar/dev
prod:
- /srv/pillar/prod
ext_pillar
The ext_pillar option allows for any number of external pillar interfaces to be called when populating pillar data. The configuration is based on ext_pillar functions. The available ext_pillar functions can be found here:
https://github.com/saltstack/salt/blob/develop/salt/pillar
By default, the ext_pillar interface is not configured to run.
Default: None
ext_pillar:
- hiera: /etc/hiera.yaml
- cmd_yaml: cat /etc/salt/yaml
- reclass:
inventory_base_uri: /etc/reclass
There are additional details at Pillars
Syndic Server Settings
A Salt syndic is a Salt master used to pass commands from a higher Salt master to minions below the syndic. Using the syndic is simple. If this is a master that will have syndic server(s) below it, set the "order_masters" setting to True. If this is a master that will be running a syndic daemon for passthrough, the "syndic_master" setting needs to be set to the location of the master server.
Do not forget that this means the syndic shares its ID and PKI directory with the local minion.
order_masters
Default: False
Extra data needs to be sent with publications if the master is controlling a
lower level master via a syndic minion. If this is the case the order_masters
value must be set to True
syndic_master
Default: None
If this master will be running a salt-syndic to connect to a higher level
master, specify the higher level master with this configuration value
syndic_master: masterofmasters
syndic_master_port
Default: 4506
If this master will be running a salt-syndic to connect to a higher level
master, specify the higher level master port with this configuration value
syndic_log_file
Default: syndic.log
If this master will be running a salt-syndic to connect to a higher level
master, specify the log_file of the syndic daemon.
syndic_log_file: salt-syndic.log
syndic_pidfile
Default: salt-syndic.pid
If this master will be running a salt-syndic to connect to a higher level
master, specify the pidfile of the syndic daemon.
syndic_pidfile: syndic.pid
Peer Publish Settings
Salt minions can send commands to other minions, but only if the minion is
allowed to. By default "Peer Publication" is disabled, and when enabled it
is enabled for specific minions and specific commands. This allows secure
compartmentalization of commands based on individual minions.
peer
Default: {}
The configuration uses regular expressions to match minions and then a list
of regular expressions to match functions. The following will allow the
minion authenticated as foo.example.com to execute functions from the test
and pkg modules
peer:
foo.example.com:
- test.*
- pkg.*
This will allow all minions to execute all commands:
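For example, a sketch using catch-all regular expressions for both the minions and the functions:
peer:
  .*:
    - .*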
This is not recommended, since it would allow anyone who gets root on any
single minion to instantly have root on all of the minions!
peer_run
Default: {}
The peer_run option is used to open up runners on the master to access from the
minions. The peer_run configuration matches the format of the peer
configuration.
The following example would allow foo.example.com to execute the manage.up
runner:
peer_run:
foo.example.com:
- manage.up
Node Groups
Default: {}
Node groups allow for logical groupings of minion nodes.
A group consists of a group name and a compound target.
nodegroups:
group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com or bl*.domain.com'
group2: 'G@os:Debian and foo.domain.com'
Master Logging Settings
log_file
Default: /var/log/salt/master
The master log can be sent to a regular file, local path name, or network location. See also log_file.
Examples:
log_file: /var/log/salt/master
log_file: file:///dev/log
log_file: udp://loghost:10514
log_level
Default: warning
The level of messages to send to the console. See also log_level.
log_level_logfile
Default: warning
The level of messages to send to the log file. See also log_level_logfile.
log_level_logfile: warning
log_datefmt
Default: %H:%M:%S
The date and time format used in console log messages. See also log_datefmt.
log_datefmt_logfile
Default: %Y-%m-%d %H:%M:%S
The date and time format used in log file messages. See also log_datefmt_logfile.
log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console
Default: [%(levelname)-8s] %(message)s
The format of the console logging messages. See also log_fmt_console.
log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile
Default: %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s
The format of the log file logging messages. See also log_fmt_logfile.
log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
log_granular_levels
Default: {}
This can be used to control logging levels more specifically. See also log_granular_levels.
Include Configuration
default_include
Default: master.d/*.conf
The master can include configuration from other files. By default the master will automatically include all config files from master.d/*.conf, where master.d is relative to the directory of the master configuration file.
include
Default: not defined
The master can include configuration from other files. To enable this, pass a list of paths to this option. The paths can be either relative or absolute; if relative, they are considered to be relative to the directory the main master configuration file lives in. Paths can make use of shell-style globbing. If no files are matched by a path passed to this option then the master will log a warning message.
# Include files from a master.d directory in the same
# directory as the master config file
include: master.d/*
# Include a single extra file into the configuration
include: /etc/roles/webserver
# Include several files and the master.d directory
include:
- extra_config
- master.d/*
- /etc/roles/webserver
Configuring the Salt Minion
The Salt system is amazingly simple and easy to configure; the two components of the Salt system each have a respective configuration file. The salt-master is configured via the master configuration file, and the salt-minion is configured via the minion configuration file.
The Salt Minion configuration is very simple, typically the only value that
needs to be set is the master value so the minion can find its master.
Minion Primary Configuration
master
Default: salt
The hostname or IPv4 address of the master.
master_port
Default: 4506
The port of the master ret server, this needs to coincide with the ret_port
option on the Salt master.
user
Default: root
The user to run the Salt processes
pidfile
Default: /var/run/salt-minion.pid
The location of the daemon's process ID file
pidfile: /var/run/salt-minion.pid
pki_dir
Default: /etc/salt/pki
The directory used to store the minion's public and private keys.
id
Default: the system's hostname
See also: Salt Walkthrough. The Setting up a Salt Minion section contains detailed information on how the hostname is determined.
Explicitly declare the id for this minion to use. Since Salt uses detached ids
it is possible to run multiple minions on the same machine but with different
ids. This can be useful for Salt compute clusters.
append_domain
Default: None
Append a domain to a hostname in the event that it does not exist. This is useful for systems where socket.getfqdn() does not actually result in a FQDN (for instance, Solaris).
cachedir
Default: /var/cache/salt
The location for minion cache data.
cachedir: /var/cache/salt
verify_env
Default: True
Verify and set permissions on configuration directories at startup.
cache_jobs
Default: False
The minion can locally cache the return data from jobs sent to it. This can be a good way to keep track of the minion side of the jobs the minion has executed. By default this feature is disabled; to enable it, set cache_jobs to True.
sock_dir
Default: /var/run/salt/minion
The directory where Unix sockets will be kept.
sock_dir: /var/run/salt/minion
backup_mode
Default: []
Backup files replaced by file.managed and file.recurse under cachedir.
acceptance_wait_time
Default: 10
The number of seconds to wait until attempting to re-authenticate with the
master.
random_reauth_delay
When the master key changes, the minion will try to re-auth itself to
receive the new master key. In larger environments this can cause a syn-flood
on the master because all minions try to re-auth immediately. To prevent this
and have a minion wait for a random amount of time, use this optional
parameter. The wait-time will be a random number of seconds between
0 and the defined value.
acceptance_wait_time_max
Default: None
The maximum number of seconds to wait until attempting to re-authenticate
with the master. If set, the wait will increase by acceptance_wait_time
seconds each iteration.
acceptance_wait_time_max: None
dns_check
Default: True
When healing, a dns_check is run. This is to make sure that the originally resolved dns has not changed. If this is something that does not happen in your environment, set this value to False.
ipc_mode
Default: ipc
Windows platforms lack POSIX IPC and must rely on slower TCP based inter-process communications. Set ipc_mode to tcp on such systems.
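For example:
ipc_mode: tcp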
tcp_pub_port
Default: 4510
Publish port used when ipc_mode is set to tcp.
tcp_pull_port
Default: 4511
Pull port used when ipc_mode is set to tcp.
Minion Module Management
disable_modules
Default: []
(all modules are enabled by default)
There may be cases in which the administrator wants to prevent a minion from executing a certain module. The sys module is built into the minion and cannot be disabled.
This setting can also tune the minion: since all modules are loaded into RAM, disabling modules will lower the minion's RAM footprint.
disable_modules:
- test
- solr
disable_returners
Default: []
(all returners are enabled by default)
If certain returners should be disabled, this is the place
disable_returners:
- mongo_return
module_dirs
Default: []
A list of extra directories to search for Salt modules
module_dirs:
- /var/lib/salt/modules
returner_dirs
Default: []
A list of extra directories to search for Salt returners
returner_dirs:
- /var/lib/salt/returners
states_dirs
Default: []
A list of extra directories to search for Salt states
states_dirs:
- /var/lib/salt/states
render_dirs
Default: []
A list of extra directories to search for Salt renderers
render_dirs:
- /var/lib/salt/renderers
cython_enable
Default: False
Set this value to true to enable auto-loading and compiling of .pyx modules. This setting requires that gcc and cython are installed on the minion.
providers
Default: (empty)
A module provider can be statically overwritten or extended for the minion via the providers option. This can be done on an individual basis in an SLS file, or globally here in the minion config, as below.
providers:
pkg: yumpkg5
service: systemd
State Management Settings
renderer
Default: yaml_jinja
The default renderer used for local state executions
state_verbose
Default: False
state_verbose allows for the data returned from the minion to be more verbose. Normally only states that fail or states that have changes are returned, but setting state_verbose to True will return all states that were checked.
state_output
Default: full
The state_output setting changes the output format: if set to 'full', the full multi-line output for each changed state is shown; if set to 'terse', the output is shortened to a single line.
autoload_dynamic_modules
Default: True
autoload_dynamic_modules turns on automatic loading of modules found in the environments on the master. This is turned on by default; to turn off auto-loading modules when states run, set this value to False
autoload_dynamic_modules: True
clean_dynamic_modules
Default: True
clean_dynamic_modules keeps the dynamic modules on the minion in sync with the dynamic modules on the master. This means that if a dynamic module is not on the master it will be deleted from the minion. By default this is enabled; it can be disabled by changing this value to False
clean_dynamic_modules: True
environment
Default: None
Normally the minion is not isolated to any single environment on the master
when running states, but the environment can be isolated on the minion side
by statically setting it. Remember that the recommended way to manage
environments is to isolate via the top file.
File Directory Settings
file_client
Default: remote
The client defaults to looking on the master server for files, but can be directed to look on the minion by setting this parameter to local.
file_roots
Default:
When using a local file_client, this parameter is used to set up the fileserver's environments. This parameter operates identically to the master config parameter of the same name.
file_roots:
base:
- /srv/salt
dev:
- /srv/salt/dev/services
- /srv/salt/dev/states
prod:
- /srv/salt/prod/services
- /srv/salt/prod/states
hash_type
Default: md5
The hash_type is the hash to use when discovering the hash of a file on the
local fileserver. The default is md5, but sha1, sha224, sha256, sha384 and
sha512 are also supported.
pillar_roots
Default:
When using a local file_client, this parameter is used to set up the pillar environments.
pillar_roots:
base:
- /srv/pillar
dev:
- /srv/pillar/dev
prod:
- /srv/pillar/prod
Security Settings
open_mode
Default: False
Open mode can be used to clean out the PKI key received from the Salt master: turn on open mode, restart the minion, then turn off open mode and restart the minion again to clean the keys.
Thread Settings
multiprocessing
Default: True
Disable multiprocessing support by setting this value to False. By default, when a minion receives a publication a new process is spawned and the command is executed therein.
Minion Logging Settings
log_file
Default: /var/log/salt/minion
The minion log can be sent to a regular file, local path name, or network location. See also log_file.
Examples:
log_file: /var/log/salt/minion
log_file: file:///dev/log
log_file: udp://loghost:10514
log_level
Default: warning
The level of messages to send to the console. See also log_level.
log_level_logfile
Default: warning
The level of messages to send to the log file. See also log_level_logfile.
log_level_logfile: warning
log_datefmt
Default: %H:%M:%S
The date and time format used in console log messages. See also log_datefmt.
log_datefmt_logfile
Default: %Y-%m-%d %H:%M:%S
The date and time format used in log file messages. See also log_datefmt_logfile.
log_datefmt_logfile: '%Y-%m-%d %H:%M:%S'
log_fmt_console
Default: [%(levelname)-8s] %(message)s
The format of the console logging messages. See also log_fmt_console.
log_fmt_console: '[%(levelname)-8s] %(message)s'
log_fmt_logfile
Default: %(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s
The format of the log file logging messages. See also log_fmt_logfile.
log_fmt_logfile: '%(asctime)s,%(msecs)03.0f [%(name)-17s][%(levelname)-8s] %(message)s'
log_granular_levels
Default: {}
This can be used to control logging levels more specifically. See also log_granular_levels.
Include Configuration
default_include
Default: minion.d/*.conf
The minion can include configuration from other files. By default the minion will automatically include all config files from minion.d/*.conf, where minion.d is relative to the directory of the minion configuration file.
include
Default: not defined
The minion can include configuration from other files. To enable this,
pass a list of paths to this option. The paths can be either relative or
absolute; if relative, they are considered to be relative to the directory
the main minion configuration file lives in. Paths can make use of
shell-style globbing. If no files are matched by a path passed to this
option then the minion will log a warning message.
# Include files from a minion.d directory in the same
# directory as the minion config file
include: minion.d/*.conf
# Include a single extra file into the configuration
include: /etc/roles/webserver
# Include several files and the minion.d directory
include:
- extra_config
- minion.d/*
- /etc/roles/webserver
Frozen Build Update Settings
These options control how salt.modules.saltutil.update() works with esky frozen apps. For more information look at https://github.com/cloudmatrix/esky/.
update_url
Default: False (update feature is disabled)
The URL to use when looking for application updates. Esky depends on directory listings to search for new versions. A webserver running on your Master is a good starting point for most setups.
update_url: 'http://salt.example.com/minion-updates'
update_restart_services
Default: [] (service restarting on update is disabled)
A list of services to restart when the minion software is updated. This would
typically just be a list containing the minion's service name, but you may
have other services that need to go with it.
update_restart_services: ['salt-minion']
Salt code and internals
Reference documentation on Salt's internal code.
Contents
Exceptions
Salt-specific exceptions should be thrown as often as possible so the various interfaces to Salt (CLI, API, etc.) can handle those errors and display error messages appropriately.
salt.exceptions
This module is a central location for all salt exceptions
- exception salt.exceptions.AuthenticationError: Raised if the sha256 signature fails during decryption
- exception salt.exceptions.CommandExecutionError: Used when a module runs a command which returns an error and wants to show the user the output gracefully instead of dying
- exception salt.exceptions.CommandNotFoundError: Used in modules or grains when a required binary is not available
- exception salt.exceptions.EauthAuthenticationError: Thrown when eauth authentication fails
- exception salt.exceptions.LoaderError: Problems loading the right renderer
- exception salt.exceptions.MasterExit: Raised when the master exits
- exception salt.exceptions.MinionError: Minion problems reading URIs such as salt:// or http://
- exception salt.exceptions.PkgParseError: Used when one of the pkg modules cannot correctly parse the output from the CLI tool (pacman, yum, apt, aptitude, etc.)
- exception salt.exceptions.SaltClientError: Problem reading the master root key
- exception salt.exceptions.SaltException: Base exception class; all Salt-specific exceptions should subclass this
- exception salt.exceptions.SaltInvocationError: Used when the wrong number of arguments are sent to modules or invalid arguments are specified on the command line
- exception salt.exceptions.SaltMasterError: Problem reading the master root key
- exception salt.exceptions.SaltRenderError: Used when a renderer needs to raise an explicit error
- exception salt.exceptions.SaltReqTimeoutError: Thrown when a salt master request call fails to return within the timeout
- exception salt.exceptions.SaltSystemExit(code=0, msg=None): This exception is raised when an unsolvable problem is found. There's nothing else to do, salt should just exit.
- exception salt.exceptions.TimedProcTimeoutError: Thrown when a timed subprocess does not terminate within the timeout, or if the specified timeout is not an int or a float
Network Topology
Salt is based on a powerful, asynchronous, network topology using ZeroMQ. Many
ZeroMQ systems are in place to enable communication. The central idea is to
have the fastest communication possible.
Servers
The Salt Master runs 2 network services. First is the ZeroMQ PUB system. This service by default runs on port 4505 and can be configured via the publish_port option in the master configuration.
Second is the ZeroMQ REP system. This is a separate interface used for all bi-directional communication with minions. By default this system binds to port 4506 and can be configured via the ret_port option in the master.
PUB/SUB
The commands sent out via the salt client are broadcast out to the minions via
ZeroMQ PUB/SUB. This is done by allowing the minions to maintain a connection
back to the Salt Master and then all connections are informed to download the
command data at once. The command data is kept extremely small (usually less
than 1K) so it is not a burden on the network.
Return
The PUB/SUB system is a one way communication, so once a publish is sent out
the PUB interface on the master has no further communication with the minion.
The minion, after running the command, then sends the command's return data back to the master via the ret_port.
Windows Software Repository
The Salt Windows Software Repository provides a package manager and software
repository similar to what is provided by yum and apt on Linux.
It permits the installation of software using the installers on remote
windows machines. In many senses, the operation is similar to that of
the other package managers salt is aware of:
- the pkg.installed and similar states work on Windows.
- the pkg.install and similar module functions work on Windows.
- each Windows machine needs to have pkg.refresh_db executed against it to pick up the latest version of the package database.
High level differences to yum and apt are:
- The repository metadata (sls files) is hosted through either salt or
git.
- Packages can be downloaded from within the salt repository, a git
repository or from http(s) or ftp urls.
- No dependencies are managed. Dependencies between packages need to be managed manually.
Operation
The install state/module function of the windows package manager works
roughly as follows:
- Execute pkg.list_pkgs and store the result
- Check if any action needs to be taken (i.e. compare required package and version against pkg.list_pkgs results)
- If so, run the installer command.
- Execute pkg.list_pkgs and compare to the result stored from before installation.
- Success/Failure/Changes will be reported based on the differences between the original and final pkg.list_pkgs results.
If there are any problems in using the package manager it is likely to be due to the data in your sls files not matching the difference between the pre and post pkg.list_pkgs results.
Usage
By default, the Windows software repository is found at /srv/salt/win/repo. This can be changed in the master config file (default location is /etc/salt/master) by modifying the win_repo variable. Each piece of software should have its own directory which contains the installers and a package definition file. This package definition file is a YAML file named init.sls.
The package definition file should look similar to this example for Firefox:
/srv/salt/win/repo/firefox/init.sls
firefox:
17.0.1:
installer: 'salt://win/repo/firefox/English/Firefox Setup 17.0.1.exe'
full_name: Mozilla Firefox 17.0.1 (x86 en-US)
locale: en_US
reboot: False
install_flags: ' -ms'
uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
uninstall_flags: ' /S'
16.0.2:
installer: 'salt://win/repo/firefox/English/Firefox Setup 16.0.2.exe'
full_name: Mozilla Firefox 16.0.2 (x86 en-US)
locale: en_US
reboot: False
install_flags: ' -ms'
uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
uninstall_flags: ' /S'
15.0.1:
installer: 'salt://win/repo/firefox/English/Firefox Setup 15.0.1.exe'
full_name: Mozilla Firefox 15.0.1 (x86 en-US)
locale: en_US
reboot: False
install_flags: ' -ms'
uninstaller: '%ProgramFiles(x86)%/Mozilla Firefox/uninstall/helper.exe'
uninstall_flags: ' /S'
More examples can be found here: https://github.com/saltstack/salt-winrepo
The version number and full_name need to match the output from pkg.list_pkgs so that the status can be verified when running highstate.
Note: It is still possible to successfully install packages using pkg.install even if they don't match, which can make this hard to troubleshoot.
salt 'test-2008' pkg.list_pkgs
test-2008
----------
7-Zip 9.20 (x64 edition):
9.20.00.0
Microsoft .NET Framework 4 Client Profile:
4.0.30319,4.0.30319
Microsoft .NET Framework 4 Extended:
4.0.30319,4.0.30319
Microsoft Visual C++ 2008 Redistributable - x64 9.0.21022:
9.0.21022
Mozilla Firefox 17.0.1 (x86 en-US):
17.0.1
Mozilla Maintenance Service:
17.0.1
NSClient++ (x64):
0.3.8.76
Notepad++:
6.4.2
Salt Minion 0.16.0:
0.16.0
If any of these preinstalled packages already exist in winrepo the full_name
will be automatically renamed to their package name during the next update
(running highstate or installing another package).
test-2008:
----------
7zip:
9.20.00.0
Microsoft .NET Framework 4 Client Profile:
4.0.30319,4.0.30319
Microsoft .NET Framework 4 Extended:
4.0.30319,4.0.30319
Microsoft Visual C++ 2008 Redistributable - x64 9.0.21022:
9.0.21022
Mozilla Maintenance Service:
17.0.1
Notepad++:
6.4.2
Salt Minion 0.16.0:
0.16.0
firefox:
17.0.1
nsclient:
0.3.9.328
Add msiexec: True if using an MSI installer requiring the use of msiexec /i to install and msiexec /x to uninstall.
The install_flags and uninstall_flags are flags passed to the software installer to cause it to perform a silent install. These can often be found by adding /? or /h when running the installer from the command line. A great resource for finding these silent install flags can be found on the WPKG project's wiki:
7zip:
9.20.00.0:
installer: salt://win/repo/7zip/7z920-x64.msi
full_name: 7-Zip 9.20 (x64 edition)
reboot: False
install_flags: ' /q '
msiexec: True
uninstaller: salt://win/repo/7zip/7z920-x64.msi
uninstall_flags: ' /qn'
Generate Repo Cache File
Once the sls file has been created, generate the repository cache file with the winrepo runner:
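salt-run winrepo.genrepo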
Then update the repository cache file on your minions, exactly how it's done for the Linux package managers:
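salt '*' pkg.refresh_db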
Install Windows Software
Now you can query the available version of Firefox using the Salt pkg module.
salt '*' pkg.available_version firefox
{'davewindows': {'15.0.1': 'Mozilla Firefox 15.0.1 (x86 en-US)',
'16.0.2': 'Mozilla Firefox 16.0.2 (x86 en-US)',
'17.0.1': 'Mozilla Firefox 17.0.1 (x86 en-US)'}}
As you can see, there are three versions of Firefox available for installation.
salt '*' pkg.install firefox
The above line will install the latest version of Firefox.
salt '*' pkg.install firefox version=16.0.2
The above line will install version 16.0.2 of Firefox.
If a different version of the package is already installed it will
be replaced with the version in winrepo (only if the package itself supports
live updating)
Uninstall Windows Software
Uninstall software using the pkg module:
salt '*' pkg.remove firefox
salt '*' pkg.purge firefox
pkg.purge just executes pkg.remove on Windows. At some point in the future pkg.purge may direct the installer to remove all configs and settings for software packages that support that option.
Standalone Minion Salt Windows Repo Module
In order to facilitate managing a Salt Windows software repo with Salt on a standalone Minion on Windows, a new module named winrepo has been added to Salt. winrepo matches what is available in the salt runner and allows you to manage the Windows software repo contents. Example: salt '*' winrepo.genrepo
Git Hosted Repo
Windows software package definitions can also be hosted in one or more git repositories. The default repo is one hosted on GitHub by SaltStack, Inc., which includes package definitions for open source software. This repo points to the HTTP or FTP locations of the installer files. Anyone is welcome to send a pull request to this repo to add new package definitions. Browse the repo here: https://github.com/saltstack/salt-winrepo
Configure which git repos the master can search for package definitions by modifying or extending the win_gitrepos configuration option list in the master config.
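A sketch of the option, assuming only the default SaltStack-hosted repository is used:
win_gitrepos:
  - 'https://github.com/saltstack/salt-winrepo.git'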
Check out each git repo in win_gitrepos, compile your package repository cache and then refresh each minion's package cache:
salt-run winrepo.update_git_repos
salt-run winrepo.genrepo
salt '*' pkg.refresh_db
Troubleshooting
Incorrect name/version
If the package seems to install properly, but salt reports a failure then it is likely you have a version or full_name mismatch.
Check the exact full_name and version used by the package. Use pkg.list_pkgs to check that the names and version exactly match what is installed.
Changes to sls files not being picked up
Ensure you have (re)generated the repository cache file and then
updated the repository cache on the relevant minions:
salt-run winrepo.genrepo
salt 'MINION' pkg.refresh_db
Package management under Windows 2003
On Windows Server 2003, you need to install the optional Windows component "WMI Windows Installer Provider" to have a full list of installed packages. If you don't have this, salt-minion can't report some installed software.
Command Line Reference
Salt can be controlled by a command line client by the root user on the Salt
master. The Salt command line client uses the Salt client API to communicate
with the Salt master server. The Salt client is straightforward and simple
to use.
Using the Salt client, commands can be easily sent to the minions.
Each of these commands accepts an explicit --config option to point to either the master or minion configuration file. If this option is not provided and the default configuration file does not exist then Salt falls back to use the environment variables SALT_MASTER_CONFIG and SALT_MINION_CONFIG.
Using the Salt Command
The Salt command needs a few components to send information to the Salt minions. The target minions need to be defined, along with the function to call and any arguments the function requires.
Defining the Target Minions
The first argument passed to salt defines the target minions; the target minions are accessed via their hostname. The default target type is a bash glob:
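salt '*' test.ping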
Salt can also define the target minions with regular expressions:
salt -E '.*' cmd.run 'ls -l | grep foo'
Or to explicitly list hosts, salt can take a list:
salt -L foo.bar.baz,quo.qux cmd.run 'ps aux | grep foo'
More Powerful Targets
The simple target specifications, glob, regex and list will cover many use
cases, and for some will cover all use cases, but more powerful options exist.
Targeting with Grains
The Grains interface was built into Salt to allow minions to be targeted by system properties. So minions running on a particular operating system, or with a specific kernel, can be called to execute a function.
Calling via a grain is done by passing the -G option to salt, specifying a grain and a glob expression to match the value of the grain. The syntax for the target is the grain key followed by a glob expression: "os:Arch*".
salt -G 'os:Fedora' test.ping
This will return True from all of the minions running Fedora.
To discover what grains are available and what the values are, execute the grains.items salt function:
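salt '*' grains.items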
Targeting with Executions
As of 0.8.8 targeting with executions is still under heavy development and this
documentation is written to reference the behavior of execution matching in the
future.
Execution matching allows for a primary function to be executed, and then based
on the return of the primary function the main function is executed.
Execution matching allows for matching minions based on any arbitrary running
data on the minions.
Compound Targeting
Multiple target interfaces can be used in conjunction to determine the command
targets. These targets can then be combined using and or or statements. This
is well defined with an example:
salt -C 'G@os:Debian and webser* or E@db.*' test.ping
In this example any minion whose id starts with webser and is running Debian, or any minion whose id starts with db, will be matched.
The type of matcher defaults to glob, but can be specified with the corresponding letter followed by the @ symbol. In the above example a grain is used with G@ as well as a regular expression with E@. The webser* target does not need to be prefaced with a target type specifier because it is a glob.
Node Group Targeting
Often the convenience of having a predefined group of minions to execute
targets on is desired. This can be accomplished with the new nodegroups
feature. Nodegroups allow for predefined compound targets to be declared in
the master configuration file:
nodegroups:
group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
group2: 'G@os:Debian and foo.domain.com'
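Once defined, a nodegroup can be targeted with the -N option, for example:
salt -N group1 test.ping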
Calling the Function
The function to call on the specified target is placed after the target
specification.
Functions may also accept arguments, space-delimited:
salt '*' cmd.exec_code python 'import sys; print sys.version'
Optional keyword arguments are also supported:
salt '*' pip.install salt timeout=5 upgrade=True
They are always in the form of kwarg=argument.
Arguments are formatted as YAML:
salt '*' cmd.run 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'
Note: dictionaries must have curly braces around them (like the env keyword argument above). This was changed in 0.15.1: in the above example, the first argument used to be parsed as the dictionary {'echo "Hello': '$FIRST_NAME"'}. This was generally not the expected behavior.
If you want to test what parameters are actually passed to a module, use the test.arg_repr command:
salt '*' test.arg_repr 'echo "Hello: $FIRST_NAME"' env='{FIRST_NAME: "Joe"}'
Finding available minion functions
The Salt functions are self documenting; all of the function documentation can be retrieved from the minions via the sys.doc() function:
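salt '*' sys.doc
salt '*' sys.doc pkg.install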
Compound Command Execution
If a series of commands needs to be sent to a single target specification then
the commands can be sent in a single publish. This can make gathering
groups of information faster, and lowers the stress on the network for repeated
commands.
Compound command execution works by sending a list of functions and arguments
instead of sending a single function and argument. The functions are executed
on the minion in the order they are defined on the command line, and then the
data from all of the commands are returned in a dictionary. This means that
the set of commands are called in a predictable way, and the returned data can
be easily interpreted.
Executing compound commands is done by passing a comma-delimited list of functions, followed by a comma-delimited list of arguments:
salt '*' cmd.run,test.ping,test.echo 'cat /proc/cpuinfo',,foo
The trick to look out for here, is that if a function is being passed no
arguments, then there needs to be a placeholder for the absent arguments. This
is why in the above example, there are two commas right next to each other.
test.ping takes no arguments, so we need to add another comma, otherwise Salt would attempt to pass "foo" to test.ping.
If you need to pass arguments that include commas, then make sure you add
spaces around the commas that separate arguments. For example:
salt '*' cmd.run,test.ping,test.echo 'echo "1,2,3"' , , foo
You may change the arguments separator using the --args-separator option:
salt --args-separator=:: '*' some.fun,test.echo params with , comma :: foo
salt
Synopsis
salt '*' [ options ] sys.doc
salt -E '.*' [ options ] sys.doc cmd
salt -G 'os:Arch.*' [ options ] test.ping
salt -C 'G@os:Arch.* and webserv* or G@kernel:FreeBSD' [ options ] test.ping
Description
Salt allows for commands to be executed across a swath of remote systems in
parallel. This means that remote systems can be both controlled and queried
with ease.
Options
- --version: Print the version of Salt that is running.
- --versions-report: Show program's dependencies and version number, and then exit.
- -h, --help: Show the help message and exit.
- -c CONFIG_DIR, --config-dir=CONFIG_DIR: The location of the Salt configuration directory. This directory contains the configuration files for Salt master and minions. The default location on most systems is /etc/salt.
- -t TIMEOUT, --timeout=TIMEOUT: The timeout in seconds to wait for replies from the Salt minions. The timeout number specifies how long the command line client will wait to query the minions and check on running jobs. Default: 5
- -s, --static: By default as of version 0.9.8 the salt command returns data to the console as it is received from minions, but previous releases would return data only after all data was received. To only return the data with a hard timeout and after all minions have returned, use the static option.
- --async: Instead of waiting for the job to run on minions, only print the job id of the started execution and complete.
- --state-output=STATE_OUTPUT: Override the configured state_output value for minion output. Default: full
- --subset=SUBSET: Execute the routine on a random subset of the targeted minions. The minions will be verified that they have the named function before executing.
- -v, --verbose: Turn on verbosity for the salt call; this will cause the salt command to print out extra data like the job id.
- -b BATCH, --batch-size=BATCH: Instead of executing on all targeted minions at once, execute on a progressive set of minions. This option takes an argument in the form of an explicit number of minions to execute at once, or a percentage of minions to execute on.
- -a EAUTH, --auth=EAUTH: Pass in an external authentication medium to validate against. The credentials will be prompted for. Can be used with the -T option.
- -T, --make-token: Used in conjunction with the -a option. This creates a token that allows for the authenticated user to send commands without needing to re-authenticate.
- --return=RETURNER: Choose an alternative returner to call on the minion. If an alternative returner is used then the return will not come back to the command line but will be sent to the specified return system.
- -d, --doc, --documentation: Return the documentation for the module functions available on the minions.
- --args-separator=ARGS_SEPARATOR: Set the special argument used as a delimiter between command arguments of compound commands. This is useful when one wants to pass commas as arguments to some of the commands in a compound command.
Logging Options
Logging options which override any settings defined on the configuration files.
- -l LOG_LEVEL, --log-level=LOG_LEVEL: Console logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
- --log-file=LOG_FILE: Log file path. Default: /var/log/salt/master.
- --log-file-level=LOG_LEVEL_LOGFILE: Logfile logging log level. One of all, garbage, trace, debug, info, warning, error, quiet. Default: warning.
Target Selection
- -E, --pcre: The target expression will be interpreted as a PCRE regular expression rather than a shell glob.
- -L, --list: The target expression will be interpreted as a comma-delimited list; example: server1.foo.bar,server2.foo.bar,example7.quo.qux
- -G, --grain: The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<glob expression>'; example: 'os:Arch*'. This was changed in version 0.9.8 to accept glob expressions instead of regular expressions. To use regular expression matching with grains, use the --grain-pcre option.
- --grain-pcre: The target expression matches values returned by the Salt grains system on the minions. The target expression is in the format of '<grain value>:<regular expression>'; example: 'os:Arch.*'
- -N, --nodegroup: Use a predefined compound target defined in the Salt master configuration file.
- -R, --range: Instead of using shell globs to evaluate the target, use a range expression to identify targets. Range expressions look like %cluster. Using the Range option requires that a range server is set up and the location of the range server is referenced in the master configuration file.
- -C, --compound: Utilize many target definitions to make the call very granular. This option takes a group of targets separated by and or or. The default matcher is a glob as usual. If something other than a glob is used, preface it with the letter denoting the type; example: 'webserv* and G@os:Debian or E@db*'. Make sure that the compound target is encapsulated in quotes.
- -X, --exsel: Instead of using shell globs, use the return code of a function.
- -I, --pillar: Instead of using shell globs to evaluate the target, use a pillar value to identify targets. The syntax for the target is the pillar key followed by a glob expression: "role:production*"
- -S, --ipcidr: Match based on Subnet (CIDR notation) or IPv4 address.
Output Options
--out
Pass in an alternative outputter to display the return of data. This
outputter can be any of the available outputters: grains, highstate,
json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific
functions; for instance, the grains outputter will not work for non-grains
data.
If an outputter is used that does not support the data passed into it, then
Salt will fall back on the pprint outputter and display the return data
using the Python pprint standard library module.
Note
If using --out=json, you will probably want --static as well.
Without the static option, you will get a JSON string for each minion.
This is due to using an iterative outputter. So if you want to feed it
to a JSON parser, use --static as well.
--out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT
Print the output indented by the provided value in spaces. Negative values
disable indentation. Only applicable in outputters that support
indentation.
--out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE
Write the output to the specified file.
--no-color
Disable all colored output
--force-color
Force colored output
See also
salt(7)
salt-master(1)
salt-minion(1)
salt-master
The Salt master daemon, used to control the Salt minions
Synopsis
salt-master [ options ]
Description
The master daemon controls the Salt minions
Options
--version
Print the version of Salt that is running.
--versions-report
Show program's dependencies and version number, and then exit
-h, --help
Show the help message and exit
-c CONFIG_DIR, --config-dir=CONFIG_DIR
The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt.
-u USER, --user=USER
Specify user to run salt-master
-d, --daemon
Run salt-master as a daemon
--pid-file PIDFILE
Specify the location of the pidfile. Default: /var/run/salt-master.pid
Logging Options
Logging options which override any settings defined on the configuration files.
-l LOG_LEVEL, --log-level=LOG_LEVEL
Console logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
--log-file=LOG_FILE
Log file path. Default: /var/log/salt/master.
--log-file-level=LOG_LEVEL_LOGFILE
Logfile logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
See also
salt(1)
salt(7)
salt-minion(1)
salt-minion
The Salt minion daemon, receives commands from a remote Salt master.
Synopsis
salt-minion [ options ]
Description
The Salt minion receives commands from the central Salt master and replies with
the results of said commands.
Options
--version
Print the version of Salt that is running.
--versions-report
Show program's dependencies and version number, and then exit
-h, --help
Show the help message and exit
-c CONFIG_DIR, --config-dir=CONFIG_DIR
The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt.
-u USER, --user=USER
Specify user to run salt-minion
-d, --daemon
Run salt-minion as a daemon
--pid-file PIDFILE
Specify the location of the pidfile. Default: /var/run/salt-minion.pid
Logging Options
Logging options which override any settings defined on the configuration files.
-l LOG_LEVEL, --log-level=LOG_LEVEL
Console logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
--log-file=LOG_FILE
Log file path. Default: /var/log/salt/minion.
--log-file-level=LOG_LEVEL_LOGFILE
Logfile logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
See also
salt(1)
salt(7)
salt-master(1)
salt-key
Synopsis
salt-key [ options ]
Description
Salt-key executes simple management of Salt server public keys used for
authentication.
Options
--version
Print the version of Salt that is running.
--versions-report
Show program's dependencies and version number, and then exit
-h, --help
Show the help message and exit
-c CONFIG_DIR, --config-dir=CONFIG_DIR
The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt.
-q, --quiet
Suppress output
-y, --yes
Answer 'Yes' to all questions presented, defaults to False
Logging Options
Logging options which override any settings defined on the configuration files.
--log-file=LOG_FILE
Log file path. Default: /var/log/salt/minion.
--log-file-level=LOG_LEVEL_LOGFILE
Logfile logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
Output Options
--out
Pass in an alternative outputter to display the return of data. This
outputter can be any of the available outputters: grains, highstate,
json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific
functions; for instance, the grains outputter will not work for non-grains
data.
If an outputter is used that does not support the data passed into it, then
Salt will fall back on the pprint outputter and display the return data
using the Python pprint standard library module.
Note
If using --out=json, you will probably want --static as well.
Without the static option, you will get a JSON string for each minion.
This is due to using an iterative outputter. So if you want to feed it
to a JSON parser, use --static as well.
--out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT
Print the output indented by the provided value in spaces. Negative values
disable indentation. Only applicable in outputters that support
indentation.
--out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE
Write the output to the specified file.
--no-color
Disable all colored output
--force-color
Force colored output
Actions
-l ARG, --list=ARG
List the public keys. The args "pre", "un", and "unaccepted" will list
unaccepted/unsigned keys. "acc" or "accepted" will list accepted/signed
keys. "rej" or "rejected" will list rejected keys. Finally, "all" will list
all keys.
-L, --list-all
List all public keys on this Salt master: accepted, pending,
and rejected.
-a ACCEPT, --accept=ACCEPT
Accept the named minion public key for command execution.
-A, --accept-all
Accepts all pending public keys.
-r REJECT, --reject=REJECT
Reject the named minion public key.
-R, --reject-all
Rejects all pending public keys.
-p PRINT, --print=PRINT
Print the specified public key
-P, --print-all
Print all public keys
-d DELETE, --delete=DELETE
Delete the named minion key or minion keys matching a glob for command
execution.
-D, --delete-all
Delete all keys
-f FINGER, --finger=FINGER
Print the named key's fingerprint
-F, --finger-all
Print all keys' fingerprints
Key Generation Options
--gen-keys=GEN_KEYS
Set a name to generate a keypair for use with salt
--gen-keys-dir=GEN_KEYS_DIR
Set the directory to save the generated keypair. Only works
with the '--gen-keys' option; default is the current directory.
--keysize=KEYSIZE
Set the keysize for the generated key, only works with
the '--gen-keys' option, the key size must be 2048 or
higher, otherwise it will be rounded up to 2048. The
default is 2048.
See also
salt(7)
salt-master(1)
salt-minion(1)
salt-cp
Copy a file to a set of systems
Synopsis
salt-cp '*' [ options ] SOURCE DEST
salt-cp -E '.*' [ options ] SOURCE DEST
salt-cp -G 'os:Arch.*' [ options ] SOURCE DEST
Description
Salt copy copies a local file out to all of the Salt minions matched by the
given target.
Options
--version
Print the version of Salt that is running.
--versions-report
Show program's dependencies and version number, and then exit
-h, --help
Show the help message and exit
-c CONFIG_DIR, --config-dir=CONFIG_DIR
The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt.
-t TIMEOUT, --timeout=TIMEOUT
The timeout in seconds to wait for replies from the Salt minions. The
timeout number specifies how long the command line client will wait to
query the minions and check on running jobs. Default: 5
Logging Options
Logging options which override any settings defined on the configuration files.
-l LOG_LEVEL, --log-level=LOG_LEVEL
Console logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
--log-file=LOG_FILE
Log file path. Default: /var/log/salt/master.
--log-file-level=LOG_LEVEL_LOGFILE
Logfile logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
Target Selection
-E, --pcre
The target expression will be interpreted as a PCRE regular expression
rather than a shell glob.
-L, --list
The target expression will be interpreted as a comma-delimited list;
example: server1.foo.bar,server2.foo.bar,example7.quo.qux
-G, --grain
The target expression matches values returned by the Salt grains system on
the minions. The target expression is in the format of '<grain value>:<glob
expression>'; example: 'os:Arch*'
This was changed in version 0.9.8 to accept glob expressions instead of
regular expressions. To use regular expression matching with grains, use
the --grain-pcre option.
--grain-pcre
The target expression matches values returned by the Salt grains system on
the minions. The target expression is in the format of '<grain value>:<
regular expression>'; example: 'os:Arch.*'
-N, --nodegroup
Use a predefined compound target defined in the Salt master configuration
file.
-R, --range
Instead of using shell globs to evaluate the target, use a range expression
to identify targets. Range expressions look like %cluster.
Using the Range option requires that a range server is set up and the
location of the range server is referenced in the master configuration
file.
See also
salt(1)
salt-master(1)
salt-minion(1)
salt-call
Description
The salt-call command is used to run module functions locally on a minion
instead of executing them from the master.
Options
--version
Print the version of Salt that is running.
--versions-report
Show program's dependencies and version number, and then exit
-h, --help
Show the help message and exit
-c CONFIG_DIR, --config-dir=CONFIG_DIR
The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt.
-g, --grains
Return the information generated by the Salt grains
-m MODULE_DIRS, --module-dirs=MODULE_DIRS
Specify additional directories to pull modules from; multiple
directories can be delimited by commas
-d, --doc, --documentation
Return the documentation for the specified module or for all modules if
none are specified
--master=MASTER
Specify the master to use. The minion must be authenticated with the
master. If this option is omitted, the master options from the minion
config will be used. If multiple masters are set up, the first listed
master that responds will be used.
--return RETURNER
Set salt-call to pass the return data to one or many returner interfaces.
To use many returner interfaces, specify a comma-delimited list of
returners.
--local
Run salt-call locally, as if there was no master running.
Logging Options
Logging options which override any settings defined on the configuration files.
-l LOG_LEVEL, --log-level=LOG_LEVEL
Console logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: info.
--log-file=LOG_FILE
Log file path. Default: /var/log/salt/minion.
--log-file-level=LOG_LEVEL_LOGFILE
Logfile logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: info.
Output Options
--out
Pass in an alternative outputter to display the return of data. This
outputter can be any of the available outputters: grains, highstate,
json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific
functions; for instance, the grains outputter will not work for non-grains
data.
If an outputter is used that does not support the data passed into it, then
Salt will fall back on the pprint outputter and display the return data
using the Python pprint standard library module.
Note
If using --out=json, you will probably want --static as well.
Without the static option, you will get a JSON string for each minion.
This is due to using an iterative outputter. So if you want to feed it
to a JSON parser, use --static as well.
--out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT
Print the output indented by the provided value in spaces. Negative values
disable indentation. Only applicable in outputters that support
indentation.
--out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE
Write the output to the specified file.
--no-color
Disable all colored output
--force-color
Force colored output
See also
salt(1)
salt-master(1)
salt-minion(1)
salt-run
Execute a Salt runner
Description
salt-run is the frontend command for executing Salt Runners.
Salt runners are simple modules used to execute convenience functions on the
master.
Options
--version
Print the version of Salt that is running.
--versions-report
Show program's dependencies and version number, and then exit
-h, --help
Show the help message and exit
-c CONFIG_DIR, --config-dir=CONFIG_DIR
The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt.
-t TIMEOUT, --timeout=TIMEOUT
The timeout in seconds to wait for replies from the Salt minions. The
timeout number specifies how long the command line client will wait to
query the minions and check on running jobs. Default: 1
-d, --doc, --documentation
Display documentation for runners, pass a module or a runner to see
documentation on only that module/runner.
Logging Options
Logging options which override any settings defined on the configuration files.
-l LOG_LEVEL, --log-level=LOG_LEVEL
Console logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
--log-file=LOG_FILE
Log file path. Default: /var/log/salt/master.
--log-file-level=LOG_LEVEL_LOGFILE
Logfile logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
See also
salt(1)
salt-master(1)
salt-minion(1)
salt-ssh
Synopsis
salt-ssh '*' [ options ] sys.doc
salt-ssh -E '.*' [ options ] sys.doc cmd
Description
Salt ssh allows for salt routines to be executed using only ssh for transport
Options
-r, --raw, --raw-shell
Execute a raw shell command.
--roster-file
Define which roster system to use; this defines if a database backend,
scanner, or custom roster system is used. Default is the flat file roster.
--refresh, --refresh-cache
Force a refresh of the master side data cache of the target's data. This
is needed if a target's grains have been changed and the auto refresh
timeframe has not been reached.
--max-procs
Set the number of concurrent minions to communicate with. This value
defines how many processes are opened up at a time to manage connections;
the more running processes, the faster communication should be. Default
is 25.
--passwd
Set the default password to attempt to use when authenticating.
--key-deploy
Set this flag to attempt to deploy the authorized ssh key with all
minions. This combined with --passwd can make initial deployment of keys
very fast and easy.
--version
Print the version of Salt that is running.
--versions-report
Show program's dependencies and version number, and then exit
-h, --help
Show the help message and exit
-c CONFIG_DIR, --config-dir=CONFIG_DIR
The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt.
Target Selection
-E, --pcre
The target expression will be interpreted as a PCRE regular expression
rather than a shell glob.
-L, --list
The target expression will be interpreted as a comma-delimited list;
example: server1.foo.bar,server2.foo.bar,example7.quo.qux
-G, --grain
The target expression matches values returned by the Salt grains system on
the minions. The target expression is in the format of '<grain value>:<glob
expression>'; example: 'os:Arch*'
This was changed in version 0.9.8 to accept glob expressions instead of
regular expressions. To use regular expression matching with grains, use
the --grain-pcre option.
--grain-pcre
The target expression matches values returned by the Salt grains system on
the minions. The target expression is in the format of '<grain value>:<
regular expression>'; example: 'os:Arch.*'
-N, --nodegroup
Use a predefined compound target defined in the Salt master configuration
file.
-R, --range
Instead of using shell globs to evaluate the target, use a range expression
to identify targets. Range expressions look like %cluster.
Using the Range option requires that a range server is set up and the
location of the range server is referenced in the master configuration
file.
Logging Options
Logging options which override any settings defined on the configuration files.
-l LOG_LEVEL, --log-level=LOG_LEVEL
Console logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
--log-file=LOG_FILE
Log file path. Default: /var/log/salt/ssh.
--log-file-level=LOG_LEVEL_LOGFILE
Logfile logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
Output Options
--out
Pass in an alternative outputter to display the return of data. This
outputter can be any of the available outputters: grains, highstate,
json, key, overstatestage, pprint, raw, txt, yaml
Some outputters are formatted only for data returned from specific
functions; for instance, the grains outputter will not work for non-grains
data.
If an outputter is used that does not support the data passed into it, then
Salt will fall back on the pprint outputter and display the return data
using the Python pprint standard library module.
Note
If using --out=json, you will probably want --static as well.
Without the static option, you will get a JSON string for each minion.
This is due to using an iterative outputter. So if you want to feed it
to a JSON parser, use --static as well.
--out-indent OUTPUT_INDENT, --output-indent OUTPUT_INDENT
Print the output indented by the provided value in spaces. Negative values
disable indentation. Only applicable in outputters that support
indentation.
--out-file=OUTPUT_FILE, --output-file=OUTPUT_FILE
Write the output to the specified file.
--no-color
Disable all colored output
--force-color
Force colored output
See also
salt(7)
salt-master(1)
salt-minion(1)
salt-syndic
The Salt syndic daemon, a special minion that passes through commands from a
higher master
Synopsis
salt-syndic [ options ]
Description
The Salt syndic daemon, a special minion that passes through commands from a
higher master.
Options
--version
Print the version of Salt that is running.
--versions-report
Show program's dependencies and version number, and then exit
-h, --help
Show the help message and exit
-c CONFIG_DIR, --config-dir=CONFIG_DIR
The location of the Salt configuration directory. This directory contains
the configuration files for Salt master and minions. The default location
on most systems is /etc/salt.
-u USER, --user=USER
Specify user to run salt-syndic
-d, --daemon
Run salt-syndic as a daemon
--pid-file PIDFILE
Specify the location of the pidfile. Default: /var/run/salt-syndic.pid
Logging Options
Logging options which override any settings defined on the configuration files.
-l LOG_LEVEL, --log-level=LOG_LEVEL
Console logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
--log-file=LOG_FILE
Log file path. Default: /var/log/salt/master.
--log-file-level=LOG_LEVEL_LOGFILE
Logfile logging log level. One of all, garbage, trace, debug, info,
warning, error, quiet. Default: warning.
See also
salt(1)
salt-master(1)
salt-minion(1)
Release notes and upgrade instructions
Salt 0.10.0 Release Notes
0.10.0 has arrived! This release comes with MANY bug fixes, and new
capabilities which greatly enhance performance and reliability. This
release is primarily a bug fix release with many new tests and many repaired
bugs. This release also introduces a few new key features which were brought
in primarily to repair bugs and some limitations found in some of the
components of the original architecture.
Major Features
Event System
The Salt Master now comes equipped with a new event system. This event system
has replaced some of the back end of the Salt client and offers the beginning
of a system that will make it possible to plug external applications into
Salt. The event system relies on a local ZeroMQ publish socket; other
processes can connect to this socket and listen for events. The new events
can be easily managed via Salt's event library.
Unprivileged User Updates
Some enhancements have been added to Salt for running as a user other than
root. These new additions should make switching the user that the Salt Master
runs as very painless: simply change the user option in the master
configuration and restart the master, and Salt will take care of all of the
particulars for you.
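For example, a minimal sketch of the master configuration change (the salt
user name here is only an illustration):
# /etc/salt/master
user: salt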
Peer Runner Execution
Salt has long had the peer communication system used to allow minions to send
commands via the salt master. 0.10.0 adds a new capability here: the master
can now be configured to allow minions to execute Salt runners via the
peer_run option in the salt master configuration.
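A hedged sketch of what this can look like in the master configuration,
assuming a minion named foo.example.com should be allowed to run the
manage.up runner:
peer_run:
  foo.example.com:
    - manage.up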
YAML Parsing Updates
In the past the YAML parser for sls files would return incorrect numbers
when the file mode was set with a preceding 0. The YAML parser used in Salt
has been modified to no longer convert these numbers into octal but to keep
them as the correct value, so that sls files can be a little cleaner to write.
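As an illustration (the path and mode are arbitrary), a mode written with a
leading 0 in an sls file is now kept as written instead of being read as an
octal number:
/etc/ssh/sshd_config:
  file.managed:
    - mode: 0600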
State Call Data Files
It was requested that the minion keep a local cache of the most recent executed
state run. This has been added and now with state runs the data is stored in a
msgpack file in the minion's cachedir.
Turning Off the Job Cache
A new option has been added to the master configuration file. In previous
releases the Salt client would look over the Salt job cache to read in
the minion return data. With the addition of the event system the Salt client
can now watch for events directly from the master worker processes.
This means that the job cache is no longer a hard requirement. Keep in mind
though, that turning off the job cache means that historic job execution data
cannot be retrieved.
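Disabling the job cache is a single boolean in the master configuration:
job_cache: False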
Test Updates
Minion Swarms Are Faster
To continue our efforts with testing Salt's ability to scale the minionswarm
script has been updated. The minionswarm can now start up minions much faster
than it could before and comes with a new feature allowing modules to be
disabled, thus lowering the minion's footprint when making a swarm. These new
updates have allowed us to test, for example:
# python minionswarm.py -m 20 --master salt-master
Many Fixes
To get a good idea for the number of bugfixes this release offers take a look
at the closed tickets for 0.10.0, this is a very substantial update:
https://github.com/saltstack/salt/issues?milestone=12&state=closed
Master and Minion Stability Fixes
As Salt deployments grow new ways to break Salt are discovered. 0.10.0 comes
with a number of fixes for the minions and master greatly improving Salt
stability.
Salt 0.10.2 Release Notes
0.10.2 is out! This release comes with enhancements to the pillar interface,
cleaner ways to access the salt-call capabilities in the API, minion data
caching and the event system has been added to salt minions.
There have also been updates to the ZeroMQ functions, many more tests
(thanks to sponsors, the code sprint and many contributors) and a swath
of bug fixes.
Major Features
Ext Pillar Modules
The ranks of available Salt module directories see a new member in 0.10.2.
With the popularity of pillar a higher demand has arisen for ext_pillar
interfaces to be more like regular Salt module additions. Now ext_pillar
interfaces can be added in the same way as other modules, just drop it into
the pillar directory in the salt source.
Minion Events
In 0.10.0 an event system was added to the Salt master. 0.10.2 adds the event
system to the minions as well. Now events can be published on a local minion
as well.
The minions can also send events back up to the master. This means that Salt is
able to communicate individual events from the minions back up to the Master
which are not associated with a command.
Minion Data Caching
When pillar was introduced the landscape for available data was greatly
enhanced. The minions began sending grain data back to the master on a
regular basis.
The new config option on the master called minion_data_cache
instructs the
Salt master to maintain a cache of the minion's grains and pillar data in the
cachedir. This option is turned off by default to avoid hitting the disk more,
but when enabled the cache is used to make grain matching from the salt command
more powerful, since the minions that will match can be predetermined.
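Enabling the cache is a one line change in the master configuration:
minion_data_cache: True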
Backup Files
By default, all files replaced by the file.managed and file.recurse states
were simply deleted. 0.10.2 adds a new option: by setting the backup option
to minion, the files are backed up before they are replaced.
The backed up files are located in the cachedir under the file_backup
directory. On a default system this will be at:
/var/cache/salt/file_backup
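A sketch of a state using the new option (the managed file and source are
only examples):
/etc/httpd/conf/httpd.conf:
  file.managed:
    - source: salt://apache/httpd.conf
    - backup: minion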
Configuration files
salt-master and salt-minion automatically load additional configuration
files from master.d/*.conf and minion.d/*.conf respectively, where
master.d/minion.d is a directory in the same directory as the main
configuration file.
Salt Key Verification
A number of users complained that they had inadvertently deleted the wrong salt
authentication keys. 0.10.2 now displays what keys are going to be deleted
and verifies that they are the keys that are intended for deletion.
Key auto-signing
If autosign_file is specified in the configuration file, incoming keys
will be compared to the list of keynames in autosign_file. Regular
expressions as well as globbing are supported.
The file must only be writable by the user, otherwise it will be
ignored. To relax the permissions and allow group write access, set the
permissive_pki_access option.
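A hedged sketch, assuming the list is kept at /etc/salt/autosign.conf:
autosign_file: /etc/salt/autosign.conf
The file itself then contains one key name, glob, or regular expression per
line, for example web* or dev-.*\.example\.com.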
Module changes
Improved OpenBSD support
New modules for managing services and packages were provided by Joshua
Elsasser to further improve the support for OpenBSD.
Existing modules like the disk module were also improved to support
OpenBSD.
SQL Modules
The MySQL and PostgreSQL modules have both received a number of additions thanks
to the work of Avi Marcus and Roman Imankulov.
ZFS Support on FreeBSD
A new ZFS module has been added by Kurtis Velarde for FreeBSD supporting
various ZFS operations like creating, extending or removing zpools.
Augeas
A new Augeas module by Ulrich Dangel for editing and verifying config files.
Native Debian Service module
Support for Debian was further improved with a new service module
for Debian by Ahmad Khayyat, supporting disable and enable.
Cassandra
Cassandra support has been added by Adam Garside. Currently only
status and diagnostic information are supported.
Networking
The networking support for RHEL has been improved and supports bonding
support as well as zeroconf configuration.
Monit
Basic monit support by Kurtis Velarde to control services via monit.
nzbget
Basic support for controlling nzbget by Joseph Hall
Bluetooth
Basic bluez support for managing and controlling Bluetooth devices.
Supports scanning as well as pairing/unpairing by Joseph Hall.
Test Updates
Consistency Testing
Another testing script has been added. A bug was found in pillar when many
minions generated pillar data at the same time. The new consist.py script
in the tests directory was created to reproduce bugs where data should always
be consistent.
Many Fixes
To get a good idea for the number of bugfixes this release offers take a look
at the closed tickets for 0.10.2, this is a very substantial update:
https://github.com/saltstack/salt/issues?milestone=24&page=1&state=closed
Master and Minion Stability Fixes
As Salt deployments grow new ways to break Salt are discovered. 0.10.2 comes
with a number of fixes for the minions and master greatly improving Salt
stability.
Salt 0.10.3 Release Notes
The latest taste of Salt has come, this release has many fixes and feature
additions. Modifications have been made to make ZeroMQ connections more
reliable, the beginning of the ACL system is in place, a new command line
parsing system has been added, dynamic module distribution has become more
environment aware, the new master_finger option and many more!
Major Features
ACL System
The new ACL system has been introduced. The ACL system allows for system users
other than root to execute salt commands. Users can be allowed to execute
specific commands in the same way that minions are opened up to the peer
system.
The configuration value to open up the ACL system is called client_acl
and is configured like so:
client_acl:
  fred:
    - test..*
    - pkg.list_pkgs
Where fred is allowed access to functions in the test module and to the
pkg.list_pkgs
function.
Master Finger Option
The master_finger option has been added to improve the security of minion
provisioning. The master_finger option allows for the fingerprint of the
master public key to be set in the configuration file to double verify that the
master is valid. This option was added in response to a motivation to
pre-authenticate the master when provisioning new minions to help prevent
man in the middle attacks in some situations.
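A sketch of the option as it would appear in the minion configuration (the
fingerprint shown is a placeholder, not a real key):
master_finger: 'ba:30:65:2a:d6:9e:20:4f:d8:b2:f3:a7:d4:65:11:13'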
Salt Key Fingerprint Generation
The ability to generate fingerprints of keys used by Salt has been added to
salt-key
. The new option finger accepts the name of the key to generate
and display a fingerprint for.
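For example (assuming the key name master for the master's own keypair):
salt-key -f master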
Will display the fingerprints for the master public and private keys.
Parsing System
Pedro Algavio, aka s0undt3ch, has added a substantial update to the command
line parsing system that makes the help message output much cleaner and easier
to search through. Salt parsers now have --versions-report in addition to the
usual --version info, which you can provide when reporting any issues found.
Key Generation
We have reduced the requirements needed for salt-key to generate minion keys.
You're no longer required to have salt configured and its common directories
created just to generate keys. This might prove useful if you're batch creating
keys to pre-load on minions.
Startup States
A few configuration options have been added which allow for states to be run
when the minion daemon starts. This can be a great advantage when deploying
with Salt because the minion can apply states right when it first runs. To
use startup states set the startup_states
configuration option on the
minion to highstate.
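A sketch of the minion configuration:
startup_states: highstate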
New Exclude Declaration
Some users have asked about adding the ability to ensure that other sls files
or ids are excluded from a state run. The exclude statement will delete all of
the data loaded from the specified sls file or will delete the specified id:
exclude:
  - sls: http
  - id: /etc/vimrc
Max Open Files
While we're currently unable to properly handle ZeroMQ's abort signals when
the max open files limit is reached, due to the way that is handled by ZeroMQ,
we have minimized the chances of this happening without at least warning the
user.
More State Output Options
Some major changes have been made to the state output system. In the past state
return data was printed in a very verbose fashion and only states that failed
or made changes were printed by default. Now two options can be passed to the
master and minion configuration files to change the behavior of the state
output. State output can be set to verbose (default) or non-verbose with the
state_verbose option, shown in the sketch below.
It is noteworthy that the state_verbose option used to be set to False by
default but has been changed to True by default in 0.10.3 due to many
requests for the change.
The next option to be aware of is new and is called state_output. This option
allows for the state output to be set to full (default) or terse.
The full output is the standard state output, but the new terse output
will print only one line per state making the output much easier to follow when
executing a large state system.
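A sketch of the two options as they would appear in the master or minion
configuration, using the default values:
state_verbose: True
state_output: full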
state.file.append Improvements
The salt state file.append() tries not to append existing text. Previously
the matching check was being made line by line. While this kind of check might
be enough for most cases, if the text being appended was multi-line, the check
would not work properly. This issue is now properly handled, the match is done
as a whole ignoring any white space addition or removal except inside commas.
For those thinking that, in order to properly match over multiple lines, salt
will load the whole file into memory, that's not true. For most cases this is
not important, but reading a 4GB file without proper handling could make salt
consume that amount of memory. Salt uses a buffered file reader which keeps a
maximum of 256KB in memory and iterates over the file in chunks of 32KB to
test for the match, which is more than enough; if not, explain your usage on a
ticket. With this change, salt.modules.file.contains(),
salt.modules.file.contains_regex(), salt.modules.file.contains_glob() and
salt.utils.find now do the searching and/or matching using the buffered
chunks approach explained above.
Two new keyword arguments were also added: makedirs and source.
The first, makedirs, will create the necessary directories in order to append
to the specified file; of course, it only applies when appending to a
non-existing file in a non-existing directory:
/tmp/salttest/file-append-makedirs:
  file.append:
    - text: foo
    - makedirs: True
The second, source, allows one to append the contents of a file instead of
specifying the text.
/tmp/salttest/file-append-source:
  file.append:
    - source: salt://testfile
Security Fix
A timing vulnerability was uncovered in the code which decrypts the AES
messages sent over the network. This has been fixed and upgrading is
strongly recommended.
Salt 0.10.4 Release Notes
Salt 0.10.4 is a monumental release for the Salt team, with two new module
systems, many additions to allow granular access to Salt, improved platform
support and much more.
This release is also exciting because we have been able to shorten the release
cycle back to under a month. We are working hard to keep up the aggressive pace
and look forward to having releases happen more frequently!
This release also includes a serious security fix and all users are very
strongly recommended to upgrade. As usual, upgrade the master first, and then
the minion to ensure that the process is smooth.
Major Features
External Authentication System
The new external authentication system allows for Salt to pass through
authentication to any authentication system to determine if a user has
permission to execute a Salt command. The Unix PAM system is the first
supported system with more to come!
The external authentication system allows for specific users to be granted
access to execute specific functions on specific minions. Access is configured
in the master configuration file, and uses the new access control system:
external_auth:
  pam:
    thatch:
      - 'web*':
        - test.*
        - network.*
The configuration above allows the user thatch to execute functions in the
test and network modules on minions that match the web* target.
Access Control System
All Salt systems can now be configured to grant access to non-administrative
users in a granular way. The old configuration continues to work. Specific
functions can be opened up to specific minions from specific users in the case
of external auth and client ACLs, and for specific minions in the case of the
peer system.
Access controls are configured like this:
client_acl:
  fred:
    - web\*:
      - pkg.list_pkgs
      - test.*
      - apache.*
Target by Network
A new matcher has been added to the system which allows for minions to be
targeted by network. This new matcher can be called with the -S flag on the
command line and is available in all places that the matcher system is
available. Using it is simple:
$ salt -S '192.168.1.0/24' test.ping
$ salt -S '192.168.1.100' test.ping
Nodegroup Nesting
Previously a nodegroup was limited by not being able to include another
nodegroup, this restraint has been lifted and now nodegroups will be expanded
within other nodegroups with the N@ classifier.
Salt Key Delete by Glob
The ability to delete minion keys by glob has been added to salt-key
. To
delete all minion keys whose minion name starts with 'web':
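For example, something like:
salt-key -d 'web*'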
Master Tops System
The external_nodes system has been upgraded to allow for modular subsystems
to be used to generate the top file data for a highstate run.
The external_nodes option still works but will be deprecated in the future in
favor of the new master_tops option.
Example of using master_tops:
master_tops:
  ext_nodes: cobbler-external-nodes
Next Level Solaris Support
A lot of work has been put into improved Solaris support by Romeo Theriault.
Packaging modules (pkgadd/pkgrm and pkgutil) and states, cron support and user
and group management have all been added and improved upon. These additions
along with SMF (Service Management Facility) service support and improved
Solaris grain detection in 0.10.3 add up to Salt becoming a great tool
to manage Solaris servers with.
Security
A vulnerability in the security handshake was found and has been repaired, old
minions should be able to connect to a new master, so as usual, the master
should be updated first and then the minions.
Pillar Updates
The pillar communication has been updated to add some extra levels of
verification so that the intended minion is the only one allowed to gather the
data. Once all minions and the master are updated to salt 0.10.4 please
activate pillar 2 by changing the pillar_version in the master config to
2. This will be set to 2 by default in a future release.
Salt 0.10.5 Release Notes
Salt 0.10.5 is ready, and comes with some great new features. A few more
interfaces have been modularized, like the outputter system. The job cache
system has been made more powerful and can now store and retrieve jobs archived
in external databases. The returner system has been extended to allow minions
to easily retrieve data from a returner interface.
As usual, this is an exciting release, with many noteworthy additions!
Major Features
External Job Cache
The external job cache is a system which allows for a returner interface to
also act as a job cache. This system is intended to allow users to store
job information in a central location for longer periods of time and to make
the act of looking up information from jobs executed on other minions easier.
Currently the external job cache is supported via the mongo and redis
returners:
ext_job_cache: redis
redis.host: salt
Once the external job cache is turned on the new ret module can be used on
the minions to retrieve return information from the job cache. This can be a
great way for minions to respond and react to other minions.
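A hedged sketch of looking up a past job from the external job cache with the
new ret module, assuming the redis returner configured above (the job id is a
placeholder):
salt '*' ret.get_jid redis 20121130104633606931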
OpenStack Additions
OpenStack integration with Salt has been moving forward at a blistering pace.
The new nova, glance and keystone modules represent the beginning of
ongoing OpenStack integration.
The Salt team has had many conversations with core OpenStack developers and
is working on linking to OpenStack in powerful new ways.
Wheel System
A new API was added to the Salt Master which allows the master to be managed
via an external API. This new system allows Salt API to easily hook into the
Salt Master and manage configs, modify the state tree, manage the pillar and
more. The main motivation for the wheel system is to enable features needed
in the upcoming web UI so users can manage the master just as easily as they
manage minions.
The wheel system has also been hooked into the external auth system. This
allows specific users to have granular access to manage components of the
Salt Master.
Render Pipes
Jack Kuan has added a substantial new feature. The render pipes system allows
Salt to treat the render system like unix pipes. This new system enables sls
files to be passed through specific render engines. While the default renderer
is still recommended, different engines can now be more easily merged. So to
pipe the output of Mako used in YAML use this shebang line:
#!mako|yaml
Salt Key Overhaul
The Salt Key system was originally developed as only a CLI interface, but as
time went on it was pressed into becoming a clumsy API. This release marks a
complete overhaul of Salt Key. Salt Key has been rewritten to function purely
from an API and to use the outputter system. The benefit here is that the
outputter system works much more cleanly with Salt Key now, and the internals
of Salt Key can be used much more cleanly.
Modular Outputters
The outputter system is now loaded in a modular way. This means that output
systems can be more easily added by dropping a python file down on the master
that contains the function output.
Gzip from Fileserver
Gzip compression has been added as an option to the cp.get_file and cp.get_dir
commands. This will make file transfers more efficient and faster, especially
over slower network links.
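A sketch of the option in use (the paths are examples; gzip takes a
compression level from 1 to 9):
salt '*' cp.get_file salt://files/app.tar /tmp/app.tar gzip=5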
Unified Module Configuration
In past releases of Salt, the minions needed to be configured for certain
modules to function. This was difficult because it required pre-configuring the
minions. 0.10.5 changes this by making all module configs on minions search the
master config file for values.
Now if a single database server is needed, then it can be defined in the master
config and all minions will become aware of the configuration value.
Salt Call Enhancements
The salt-call
command has been updated in a few ways. Now, salt-call
can take the --return option to send the data to a returner. Also,
salt-call
now reports executions in the minion proc system, this allows the
master to be aware of the operation salt-call is running.
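For example, assuming a redis returner is configured on the minion:
salt-call test.ping --return redis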
Death to pub_refresh and sub_timeout
The old configuration values pub_refresh and sub_timeout have been removed.
These options were in place to alleviate problems found in earlier versions of
ZeroMQ which have since been fixed. The continued use of these options has
proven to cause problems with message passing and have been completely removed.
Git Revision Versions
When running Salt directly from git (for testing or development, of course)
it has been difficult to know exactly what code is being executed. The new
versioning system will detect the git revision when building and how many
commits have been made since the last release. A release from git will look
like this:
0.10.4-736-gec74d69
Svn Module Addition
Anthony Cornehl (twinshadow) contributed a module that adds Subversion support
to Salt. This great addition helps round out Salt's VCS support.
Noteworthy Changes
Arch Linux Defaults to Systemd
Arch Linux recently changed to use systemd by default and discontinued support
for init scripts. Salt has followed suit and defaults to systemd now for
managing services in Arch.
Salt, Salt Cloud and Openstack
With the releases of Salt 0.10.5 and Salt Cloud 0.8.2, OpenStack becomes the
first (non-OS) piece of software to include support both on the user level
(with Salt Cloud) and the admin level (with Salt). We are excited to continue
to extend support of other platforms at this level.
Salt 0.11.0 Release Notes
Salt 0.11.0 is here, with some highly sought after and exciting features.
These features include the new overstate system, the reactor system, a new
state run scope component called __context__, the beginning of the search
system (still needs a great deal of work), multiple package states, the MySQL
returner and a better system to arbitrarily reference outputters.
It is also noteworthy that we are changing how we mark release numbers. For the
life of the project we have been pushing every release with features and fixes
as point releases. We will now be releasing point releases for only bug fixes
on a more regular basis and major feature releases on a slightly less regular
basis. This means that the next release will be a bugfix only release with a
version number of 0.11.1. The next feature release will be named 0.12.0 and
will mark the end of life for the 0.11 series.
Major Features
OverState
The overstate system is a simple way to manage rolling state executions across
many minions. The overstate allows for a state to depend on the successful
completion of another state.
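A hedged sketch of an overstate stage file expressing such a dependency
(group names, targets and sls names are illustrative):
mysql:
  match: 'db*'
  sls:
    - mysql.server
webservers:
  match: 'web*'
  require:
    - mysql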
Reactor System
The new reactor system allows for a reactive logic engine to be created which
can respond to events within a salted environment. The reactor system uses sls
files to match events fired on the master with actions, enabling Salt
to react to problems in an infrastructure.
Your load-balanced group of webservers is under extra load? Spin up a new VM
and add it to the group. Your fileserver is filling up? Send a notification to
your sysadmin on call. The possibilities are endless!
Module Context
A new component has been added to the module loader system. The module context
is a data structure that can hold objects for a given scope within the module.
This allows for components that are initialized to be stored in a persistent
context which can greatly speed up ongoing connections. Right now the best
example can be found in the cp execution module.
Multiple Package Management
A long desired feature has been added to package management. By definition Salt
States have always installed packages one at a time. On most platforms this is
not the fastest way to install packages. Erik Johnson, aka terminalmage, has
modified the package modules for many providers and added new capabilities to
install groups of packages. These package groups can be defined as a list of
packages available in repository servers:
python_pkgs:
  pkg.installed:
    - pkgs:
      - python-mako
      - whoosh
      - python-git
or specify based on the location of specific packages:
python_pkgs:
  pkg.installed:
    - sources:
      - python-mako: http://some-rpms.org/python-mako.rpm
      - whoosh: salt://whoosh/whoosh.rpm
      - python-git: ftp://companyserver.net/python-git.rpm
Search System
The bones to the search system have been added. This is a very basic interface
that allows for search backends to be added as search modules. The first
supported search module is the whoosh search backend. Right now only the basic
paths for the search system are in place, making this very experimental.
Further development will involve improving the search routines and index
routines for whoosh and other search backends.
The search system has been made to allow for searching through all of the state
and pillar files, configuration files and all return data from minion
executions.
Notable Changes
All previous versions of Salt have shared many directories between the master
and minion. The default locations for keys, cached data and sockets have been
shared by master and minion. This has created serious problems with running a
master and a minion on the same systems. 0.11.0 changes the defaults to be
separate directories. Salt will also attempt to migrate all of the old key data
into the correct new directories, but if it is not successful it may need to be
done manually. If your keys exhibit issues after updating, make sure that they
have been moved from /etc/salt/pki to /etc/salt/pki/{master,minion}.
The old setup will look like this:
/etc/salt/pki
|-- master.pem
|-- master.pub
|-- minions
| `-- ragnarok.saltstack.net
|-- minions_pre
|-- minion.pem
|-- minion.pub
|-- minion_master.pub
|-- minions_pre
`-- minions_rejected
With the accepted minion keys in /etc/salt/pki/minions, the new setup
places the accepted minion keys in /etc/salt/pki/master/minions.
/etc/salt/pki
|-- master
| |-- master.pem
| |-- master.pub
| |-- minions
| | `-- ragnarok.saltstack.net
| |-- minions_pre
| `-- minions_rejected
|-- minion
| |-- minion.pem
| |-- minion.pub
| `-- minion_master.pub
Salt 0.12.0 Release Notes
Another feature release of Salt is here! Some exciting additions are included
with more ways to make salt modular and even easier management of the salt
file server.
Major Features
Modular Fileserver Backend
The new modular fileserver backend allows for any external system to be used as
a salt file server. The main benefit here is that it is now possible to tell
the master to directly use a git remote location, or many git remote locations,
automatically mapping git branches and tags to salt environments.
Windows is First Class!
A new Salt Windows installer is now available! Much work has been put in to
improve Windows support. With this much easier method of getting Salt on your
Windows machines, we hope even more development and progress will occur. Please
file bug reports on the Salt GitHub repo issue tracker so we can continue
improving.
One thing that is missing on Windows that Salt uses extensively is a software
package manager and a software package repository. The Salt pkg state allows
sys admins to install software across their infrastructure and across operating
systems. Software on Windows can now be managed in the same way. The SaltStack
team built a package manager that interfaces with the standard Salt pkg module
to allow for installing and removing software on Windows. In addition, a
software package repository has been built on top of the Salt fileserver. A
small YAML file provides the information necessary for the package manager to
install and remove software.
An interesting feature of the new Salt Windows software package repository is
that one or more remote git repositories can supplement the master's local
repository. The repository can point to software on the master's fileserver or
on an HTTP, HTTPS, or ftp server.
New Default Outputter
Salt displays data to the terminal via the outputter system. For a long time
the default outputter for Salt has been the python pretty print library. While
this has been a generally reasonable outputter, it did have many failings. The
new default outputter is called "nested", it recursively scans return data
structures and prints them out cleanly.
If the result of the new nested outputter is not desired any other outputter
can be used via the --out option, or the output option can be set in the master
and minion configs to change the default outputter.
Internal Scheduler
The internal Salt scheduler is a new capability which allows for functions to
be executed at given intervals on the minion, and for runners to be executed
at given intervals on the master. The scheduler allows for sequences
such as executing state runs (locally on the minion or remotely via an
overstate) or continually gathering system data to be run at given intervals.
The configuration is simple, add the schedule option to the master or minion
config and specify jobs to run, this in the master config will execute the
state.over runner every 60 minutes:
schedule:
  overstate:
    function: state.over
    minutes: 60
This example for the minion configuration will execute a highstate every 30
minutes:
schedule:
  highstate:
    function: state.highstate
    minutes: 30
Set Grains Remotely
A new execution function and state module have been added that allows for
grains to be set on the minion. Now grains can be set via a remote execution or
via states. Use the grains.present state or the grains.setval execution
functions.
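A sketch of both approaches (the grain name and value are arbitrary):
salt '*' grains.setval cheese edam
cheese:
  grains.present:
    - value: edam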
Gentoo Additions
Major additions to Gentoo specific components have been made. These encompass
execution modules and states ranging from support for the make.conf file to
tools like layman.
Salt 0.13.0 Release Notes
The lucky number 13 has turned the corner! From CLI notifications when quitting
a salt command, to substantial improvements on Windows, Salt 0.13.0 has
arrived!
Major Features
Windows Improvements
Minion stability on Windows has improved. Many file operations, including
file.recurse, have been fixed and improved. The network module works better, to
include network.interfaces. Both 32bit and 64bit installers are now available.
Nodegroup Targeting in Peer System
In the past, nodegroups were not available for targeting via the peer system.
This has been fixed, allowing the new nodegroup expr_form argument for the
publish.publish function:
salt-call publish.publish group1 test.ping expr_form=nodegroup
Blacklist Additions
Additions allowing more granular blacklisting are available in 0.13.0. The
ability to blacklist users and functions in client_acl have been added, as
well as the ability to exclude state formulas from the command line.
Command Line Pillar Embedding
Pillar data can now be embedded on the command line when calling state.sls
and state.highstate
. This allows for on the fly changes or settings to
pillar and makes parameterizing state formulas even easier. This is done via
the keyword argument:
salt '*' state.highstate pillar='{"cheese": "spam"}'
The above example will extend the existing pillar to hold the cheese key
with a value of spam. If the cheese key is already specified in the
minion's pillar then it will be overwritten.
CLI Notifications
In the past hitting ctrl-C and quitting from the salt
command would just
drop to a shell prompt, this caused confusion with users who expected the
remote executions to also quit. Now a message is displayed showing what
command can be used to track the execution and what the job id is for the
execution.
Version Specification in Multiple-Package States
Versions can now be specified within multiple-package pkg.installed
states. An example can be found below:
mypkgs:
  pkg.installed:
    - pkgs:
      - foo
      - bar: 1.2.3-4
      - baz
Noteworthy Changes
The configuration subsystem in Salt has been overhauled to make the opts
dict used by Salt applications more portable. The problem is that this is an
incompatible change with salt-cloud, and salt-cloud will need to be updated
to the latest git to work with Salt 0.13.0. Salt Cloud 0.8.5 will also require
Salt 0.13.0 or later to function.
The Salt Stack team is sorry for the inconvenience here, we work hard to make
sure these sorts of things do not happen, but sometimes hard changes get in.
Salt 0.14.0 Release Notes
Salt 0.14.0 is here! This release was held up primarily by PyCon, Scale and
illness, but has arrived! 0.14.0 comes with many new features and is breaking
ground for Salt in the area of cloud management with the introduction of Salt
providing basic cloud controller functionality.
Major Features
Salt - As a Cloud Controller
The first primitive inroad to using Salt as a cloud controller is
available in 0.14.0. Be advised that this is alpha, only tested in a few very
small environments.
The cloud controller is built using kvm and libvirt for the hypervisors.
Hypervisors are autodetected as minions and only need to have libvirt running
and kvm installed to function. The features of the Salt cloud controller are
as follows:
- Basic vm discovery and reporting
- Creation of new virtual machines
- Seeding virtual machines with Salt via qemu-nbd or libguestfs
- Live migration (shared and non shared storage)
- Delete existing VMs
It is noteworthy that this feature is still Alpha, meaning that all rights
are reserved to change the interface if need be in future releases!
Libvirt State
One of the problems with libvirt is management of certificates needed for live
migration and cross communication between hypervisors. The new libvirt
state makes the Salt Master hold a CA and manage the signing and distribution
of keys onto hypervisors. Just add a call to the libvirt state in the sls
formulas used to set up a hypervisor:
libvirt_keys:
  libvirt.keys
New get Functions
An easier way to manage data has been introduced. The pillar, grains and config
execution modules have been extended with the new get function. This
function works much in the same way as the get method of a Python dict, but with
an enhancement: nested dict components can be extracted using a : delimiter.
If a structure like this is in pillar:
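(The nested data below is an illustrative sketch; only the foo:bar:baz keys
matter for the lookups that follow.)
foo:
  bar:
    baz: quo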
Extracting it from the raw pillar in an sls formula or file template is done
this way:
{{ pillar['foo']['bar']['baz'] }}
Now with the new get function the data can be safely gathered and a default
can be set allowing the template to fall back if the value is not available:
{{ salt['pillar.get']('foo:bar:baz', 'qux') }}
This makes handling nested structures much easier, and defaults can be cleanly
set. This new function is being used extensively in the new formulae repository
of salt sls formulas.
Salt 0.15.0 Release Notes
The many new features of Salt 0.15.0 have arrived! Salt 0.15.0 comes with many
smaller features and a few larger ones.
These features range from better debugging tools to the new Salt Mine system.
Major Features
The Salt Mine
First there was the peer system, allowing for commands to be executed from a
minion to other minions to gather data live. Then there was the external job
cache for storing and accessing long term data. Now the middle ground is being
filled in with the Salt Mine. The Salt Mine is a system used to execute
functions on a regular basis on minions, storing only the most recent
data from those functions on the master, where the data is then looked up via
targets. The mine caches data that is public to all minions, so when a minion
posts data to the mine all other minions can see it.
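As a minimal sketch (assuming the mine_functions minion config option and the
mine.get lookup described in the Salt Mine documentation; the function chosen
here is just an example), a minion can be told to post its network interfaces
to the mine:
mine_functions:
  network.interfaces: []
Any other minion, or a template, could then read the cached data with:
{{ salt['mine.get']('*', 'network.interfaces') }}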
IPV6 Support
0.13.0 saw the addition of initial IPV6 support but errors were encountered and
it needed to be stripped out. This time the code covers more cases and must be
explicitly enabled. But the support is much more extensive than before.
Copy Files From Minions to the Master
Minions have long been able to copy files down from the master file server, but
until now files could not be easily copied from the minion up to the master.
A new function called cp.push can push files from the minions up to the
master server. The uploaded files are then cached on the master in the master
cachedir for each minion.
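For example (a sketch; note that, as the 0.16.3 notes below mention, file_recv
must be set in the master config for the upload to be accepted):
salt '*' cp.push /etc/fstab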
Better Template Debugging
Template errors have long been a burden when writing states and pillar. 0.15.0
will now send the compiled template data to the debug log; this makes tracking
down the intermediate stage templates much easier. So running state.sls or
state.highstate with -l debug will now print out the rendered templates in
the debug information.
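For example, running the following on a minion is a minimal sketch of the
invocation described above:
salt-call state.highstate -l debug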
State Event Firing
The state system is now more closely tied to the master's event bus. Now when
a state fails the failure will be fired on the master event bus so that the
reactor can respond to it.
Major Syndic Updates
The Syndic system has been basically re-written. Now it runs in a completely
asynchronous way and functions primarily as an event broker. This means that
the events fired on the syndic are now pushed up to the higher level master,
instead of the old method, which waited for the client libraries to
return.
This makes the syndic much more accurate and powerful, and it also means that
all events fired on the syndic master make it up the pipe as well, making a
reactor on the higher level master able to react to minions further
downstream.
Peer System Updates
The Peer System has been updated to run using the client libraries instead
of firing directly over the publish bus. This makes the peer system much more
consistent and reliable.
Minion Key Revocation
In the past when a minion was decommissioned the key needed to be manually
deleted on the master, but now a function on the minion can be used to revoke
the calling minion's key:
$ salt-call saltutil.revoke_auth
Function Return Codes
Functions can now be assigned numeric return codes to determine if the function
executed successfully. While not all functions have been given return codes,
many have and it is an ongoing effort to fill out all functions that might
return a non-zero return code.
Functions in Overstate
The overstate system was originally created to just manage the execution of
states, but with the addition of return codes to functions, requisite logic can
now be used with respect to the overstate. This means that an overstate stage
can now run single functions instead of just state executions.
Pillar Error Reporting
Previously if errors surfaced in pillar, then the pillar would consist of only
an empty dict. Now all data that was successfully rendered stays in pillar and
the render error is also made available. If errors are found in the pillar,
states will refuse to run.
Using Cached State Data
Sometimes states are executed purely to maintain a specific state rather than
to update states with new configs. This is the rationale for the new cached state
system. By adding cache=True to a state call, the state will not be generated
fresh from the master; instead the last state data to be generated will be used.
If no previous state data is available then fresh data will be generated.
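A minimal sketch of such a call, assuming the cache keyword is simply passed
along with the state function as described above:
salt '*' state.highstate cache=True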
Monitoring States
The new monitoring states system has been started. This is very young but
allows for states to be used to configure monitoring routines. So far only one
monitoring state is available, the disk.status
state. As more capabilities
are added to Salt UI the monitoring capabilities of Salt will continue to be
expanded.
Salt 0.15.1 Release Notes
The 0.15.1 release has been posted. This release includes fixes for a number of
bugs found in 0.15.0 and three security patches.
Security Updates
A number of security issues have been resolved via the 0.15.1 release.
Path Injection in Minion IDs
Salt masters did not properly validate the id of a connecting minion. This can
lead to an attacker uploading files to the master in arbitrary locations.
In particular this can be used to bypass the manual validation of new unknown
minions. Exploiting this vulnerability does not require authentication.
This issue affects all known versions of Salt.
This issue was reported by Ronald Volgers.
RSA Key Generation Fault
RSA key generation was done incorrectly, leading to very insecure keys. It is
recommended to regenerate all RSA keys.
This issue can be used to impersonate Salt masters or minions, or decrypt any
transferred data.
This issue can only be exploited by attackers who are able to observe or modify
traffic between Salt minions and the legitimate Salt master.
A tool was included in 0.15.1 to assist in mass key regeneration, the
manage.regen_keys runner.
This issue affects all known versions of Salt.
This issue was reported by Ronald Volgers.
Command Injection Via ext_pillar
Arbitrary shell commands could be executed on the master by an authenticated
minion through options passed when requesting a pillar.
Ext pillar options have been restricted to only allow safe external pillars to
be called when prompted by the minion.
This issue affects Salt versions from 0.14.0 to 0.15.0.
This issue was reported by Ronald Volgers.
Salt 0.16.0 Release Notes
The 0.16.0 release is an exciting one, with new features in master redundancy,
and a new, powerful requisite.
Major Features
Multi-Master
This new capability allows for a minion to be actively connected to multiple
salt masters at the same time. This allows for multiple masters to send out commands
to minions and for minions to automatically reconnect to masters that have gone
down. A tutorial is available to help get started here:
Multi Master Tutorial
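As a sketch of the minion side of this feature (hostnames here are
placeholders; see the tutorial for the full setup), the minion config's
master option simply becomes a list:
master:
  - master1.example.com
  - master2.example.com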
Prereq, the New Requisite
The new prereq requisite is very powerful! It allows for states to execute
based on a state that is expected to make changes in the future. This allows
for a change on the system to be preempted by another execution. A good example
is needing to shut down a service before modifying files associated with it:
for instance, a webserver can be shut down so that a load balancer stops
sending requests while server side code is updated. In this case, the
prereq will only run if changes are expected to happen in the prerequired
state, and the prerequired state will always run after the prereq state and
only if the prereq state succeeds.
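A sketch of the webserver scenario above (the service, state ids and paths are
illustrative only, not taken from this announcement):
shutdown-apache:
  service.dead:
    - name: apache
    - prereq:
      - file: site-code

site-code:
  file.recurse:
    - name: /srv/www/site
    - source: salt://site/code
Here shutdown-apache only runs if site-code is expected to make changes, and
site-code runs afterwards, only if the shutdown succeeded.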
Peer System Improvements
The peer system has been revamped to make it more reliable, faster, and, like
the rest of Salt, async. Peer calls will be much faster when an updated minion
and master are used together!
Relative Includes
The ability to include an sls relative to the defined sls has been added; the
new syntax is documented here:
Includes
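As a brief sketch (assuming the leading-dot syntax covered in the linked
documentation, with file names chosen only for illustration), an sls such as
apache/init.sls can pull in a sibling apache/ssl.sls with:
include:
  - .ssl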
More State Output Options
In the past the state_output option only supported full and terse;
0.16.0 adds the mixed and changes modes, further refining how states are sent
to users' eyes.
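For instance, the master config could carry (a minimal sketch):
state_output: mixed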
Improved Windows Support
Support for Salt on Windows continues to improve. Software management on
Windows has become more seamless with Linux/UNIX/BSD software management.
Installed software is now recognized by the short names defined in the
repository SLS. This makes it possible to run salt '*' pkg.version firefox
and get back results from Windows and non-Windows minions alike.
When templating files on Windows, Salt will now correctly use Windows
appropriate line endings. This makes it much easier to edit and consume files
on Windows.
When using the cmd state, the shell option now allows for specifying
Windows PowerShell as an alternate shell to execute cmd.run and cmd.script.
This opens up Salt to all the power of Windows PowerShell and its advanced
Windows management capabilities.
Several fixes and optimizations were added for the Windows networking modules,
especially when working with IPv6.
A system module was added that makes it easy to restart and shutdown Windows
minions.
The Salt Minion will now look for its config file in c:\salt\conf by
default. This means that it is no longer necessary to use the -c option
to specify the location of the config file when starting the Salt Minion on
Windows in a terminal.
Multiple Targets for pkg.removed, pkg.purged States
Both pkg.removed and pkg.purged now support the pkgs argument, which allows for
multiple packages to be targeted in a single state. This, as in pkg.installed,
helps speed up these states by reducing the number of times that the package
management tools (apt, yum, etc.) need to be run.
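For example (package names are placeholders):
unwanted-pkgs:
  pkg.removed:
    - pkgs:
      - foo
      - bar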
Random Times in Cron States
The temporal parameters in cron.present
states (minute, hour, etc.) can now be randomized by using random
instead
of a specific value. For example, by using the random
keyword in the
minute
parameter of a cron state, the same cron job can be pushed to
hundreds or thousands of hosts, and they would each use a randomly-generated
minute. This can be helpful when the cron job accesses a network resource, and
it is not desirable for all hosts to run the job concurrently.
/path/to/cron/script:
  cron.present:
    - user: root
    - minute: random
    - hour: 2
Since Salt assumes a value of * for unspecified temporal parameters, adding
a parameter to the state and setting it to random will change that value
from * to a randomized numeric value. However, if that field in the cron
entry on the minion already contains a numeric value, then using the random
keyword will not modify it.
Confirmation Prompt on Key Acceptance
When accepting new keys with salt-key -a minion-id or salt-key -A,
there is now a prompt that will show the affected keys and ask for confirmation
before proceeding. This prompt can be bypassed using the -y or --yes
command line argument, as with other salt-key commands.
Support for Setting Password Hashes on BSD Minions
FreeBSD, NetBSD, and OpenBSD all now support setting passwords in
user.present states.
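A sketch of what this looks like in a state (the user name and hash are
placeholders; the password value is a crypted hash, not plain text):
fred:
  user.present:
    - password: '$6$examplesalt$examplehashedpassword'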
Salt 0.16.2 Release Notes
Version 0.16.2 is a bugfix release for 0.16.0,
and contains a number of fixes.
Windows
- Only allow Administrator's group and SYSTEM user access to C:\salt. This
eliminates a race condition where a non-admin user could modify a template or
managed file before it is executed by the minion (which is running as an
elevated user), thus avoiding a potential escalation of privileges. (issue 6361)
Grains
- Fixed detection of the virtual grain on OpenVZ hardware nodes
- Gracefully handle lsb_release data when it is enclosed in quotes
- LSB grains are now prefixed with lsb_distrib_ instead of simply lsb_. The
old naming is not preserved, so SLS may be affected.
- Improved grains detection on MacOS
Minion
- Fixed salt-key usage in minionswarm script
- Quieted warning about SALT_MINION_CONFIG environment variable on minion
startup and for CLI commands run via salt-call (issue 5956)
- Added minion config parameter random_reauth_delay to stagger re-auth
attempts when the minion is waiting for the master to approve its public key.
This helps prevent SYN flooding in larger environments.
User/Group Management
- Implement previously-ignored unique option for user.present states in
FreeBSD
- Report in state output when a group.present state attempts to use a gid in
use by another group
- Fixed regression that prevented a user.present state from setting the
password hash to the system default (i.e. an unset password)
- Fixed multiple group.present states with the same group (issue 6439)
File Management
- Fixed file.mkdir setting incorrect permissions (issue 6033)
- Fixed cleanup of source files for templates when /tmp is in file_roots
(issue 6118)
- Fixed caching of zero-byte files when a non-empty file was previously cached
at the same path
- Added HTTP authentication support to the cp module (issue 5641)
- Diffs are now suppressed when binary files are changed
Package/Repository Management
- Fixed traceback when there is only one target for pkg.latest states
- Fixed regression in detection of virtual packages (apt)
- Limit number of pkg database refreshes to once per state.sls/state.highstate
- YUM: Allow 32-bit packages with arches other than i686 to be managed on
64-bit systems (issue 6299)
- Fixed incorrect reporting in pkgrepo.managed states (issue 5517)
- Fixed 32-bit binary package installs on 64-bit RHEL-based distros, and added
proper support for 32-bit packages on 64-bit Debian-based distros
(issue 6303)
- Fixed issue where requisites were inadvertently being put into YUM repo files
(issue 6471)
Service Management
- Fixed inaccurate reporting of results in service.running states when the
service fails to start (issue 5894)
- Fixed handling of custom initscripts in RHEL-based distros so that they are
immediately available, negating the need for a second state run to manage the
service that the initscript controls
pip
- Properly handle -f lines in pip freeze output
- Fixed regression in pip.installed states with specifying a requirements file
(issue 6003)
- Fixed use of editable argument in pip.installed states (issue 6025)
- Deprecated runas parameter in execution function calls, in favor of user
Salt 0.16.3 Release Notes
Version 0.16.3 is another bugfix release for 0.16.0. The changes include:
- Various documentation fixes
- Fix proc directory regression (issue 6502)
- Properly detect Linaro Linux (issue 6496)
- Fix regressions in mount.mounted (issue 6522, issue 6545)
- Skip malformed state requisites (issue 6521)
- Fix regression in gitfs from bad import
- Fix for watching prereq states (including recursive requisite error)
(issue 6057)
- Fix mod_watch not overriding prereq (issue 6520)
- Don't allow functions which compile states to be called within states
(issue 5623)
- Return error for malformed top.sls (issue 6544)
- Fix traceback in mysql.query
- Fix regression in binary package installation for 64-bit packages
on Debian-based Linux distros (issue 6563)
- Fix traceback caused by running cp.push without having set file_recv in the
master config file
- Fix scheduler configuration in pillar (issue 6201)
Salt 0.16.4 Release Notes
Version 0.16.4 is another bugfix release for 0.16.0, likely to be the last before 0.17.0 is released.
The changes include:
Salt 0.17.0 Release Notes
The 0.17.0 release is a very exciting release of Salt; it brings to Salt
some very powerful new features and advances. The advances range from the
state system to the test suite, covering new transport capabilities and
making states easier and more powerful, to extending Salt Virt and much more!
The 0.17.0 release will also be the last release of Salt to follow the old
0.XX.X numbering system; the next release of Salt will change the numbering to
be date based, following this format:
<Year>.<Month>.<Minor>
So if the release happens in November of 2013 the number will be 13.11.0, the
first bugfix release will be 13.11.1 and so forth.
Major Features
Halite
The new Halite web GUI is now available. A great deal of work has been put into
Halite to make it fully event driven and amazingly fast. The Halite UI can be
started from within the Salt Master, or standalone, and does not require an
external database to run; it is very lightweight!
This initial release of Halite is primarily the framework for the UI and the
communication systems, making it easy to extend and build the UI up. It
presently supports watching the event bus and firing commands over Salt.
Halite is, like the rest of Salt, Open Source!
Much more will be coming in the future of Halite!
Salt SSH
The new salt-ssh command has been added to Salt. This system allows for
remote execution and states to be run over ssh. The benefit here is that
salt can run relying only on the ssh agent, rather than requiring a minion
to be deployed.
The salt-ssh system runs states in a way compatible with standard Salt, and
states created and run with salt-ssh can be moved over to a standard salt
deployment without modification.
Since this is the initial release of salt-ssh, there is plenty of room for
improvement, but it is fully operational, not just a bootstrap tool.
Rosters
Salt is designed to have the minions be aware of the master and the master does
not need to be aware of the location of the minions. The new salt roster system
was created and designed to facilitate listing the targets for salt-ssh.
The roster system, like most of Salt, is a plugin system, allowing for the list
of systems to target to be derived from any pluggable backend. The rosters
shipping with 0.17.0 are flat and scan. Flat is a file which is read in via the
salt render system and the scan roster does simple network scanning to discover
ssh servers.
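A sketch of a flat roster and a matching salt-ssh call (the ids, addresses and
the /etc/salt/roster location are illustrative assumptions):
web1:
  host: 192.168.42.1
  user: root
web2:
  host: 192.168.42.2
  user: root
With such a roster in place, remote execution looks much like regular Salt:
salt-ssh 'web*' test.ping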
State Auto Order
This is a major change in how states are evaluated in Salt. State Auto Order
is a new feature that causes states to be evaluated and executed in the order
in which they are defined in the sls file. This feature makes it very easy to
see the finite order in which things will be executed, making Salt now fully
imperative AND fully declarative.
The requisite system still takes precedence over the order in which states are
defined, so no existing states should break with this change. This new
feature can be turned off by setting state_auto_order: False in the master
config, thus reverting to the old lexicographical order.
state.sls Runner
The state.sls runner has been created to allow for a more powerful system
for orchestrating state runs and function calls across the salt minions. This
new system uses the state system for organizing executions.
This allows for states to be defined that are executed on the master to call
states on minions via salt-run state.sls.
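As a rough sketch of what such a master-side sls might look like (this assumes
the salt.state state described in the orchestration documentation of this era;
the target and sls names are purely illustrative):
deploy_webservers:
  salt.state:
    - tgt: 'web*'
    - sls: apache
If this were saved as orch/deploy.sls on the master, it could then be invoked
with something like salt-run state.sls orch.deploy (the path is again an
assumption).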
Salt Thin
Salt Thin is an exciting new component of Salt: the ability to execute
Salt routines without any transport mechanisms installed. It is a pure python
subset of Salt.
Salt Thin does not have any networking capability, but can be dropped into any
system with Python installed and then salt-call can be called directly. The
Salt Thin system is used by the salt-ssh command, but can still be used to
just drop salt somewhere for easy use.
Event Namespacing
Events have been updated to be much more flexible. The tags in events have all
been namespaced allowing easier tracking of event names.
Mercurial Fileserver Backend
The popular git fileserver backend has been joined by the mercurial fileserver
backend, allowing the state tree to be managed entirely via mercurial.
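A sketch of how this might be enabled in the master config (the repository URL
is a placeholder, and the option names are assumed from the fileserver backend
documentation):
fileserver_backend:
  - roots
  - hg

hgfs_remotes:
  - https://example.com/hg/salt-states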
External Logging Handlers
The external logging handler system allows for Salt to directly hook into any
external logging system. Currently supported are sentry and logstash.
Jenkins Testing
The testing systems in Salt have been greatly enhanced, tests for salt are now
executed, via jenkins.saltstack.com, across many supported platforms. Jenkins
calls out to salt-cloud to create virtual machines on Rackspace, then the
minion on the virtual machine checks into the master running on Jenkins where
a state run is executed that sets up the minion to run tests and executes the
test suite.
This now automates the sequence of running platform tests and allows for
continuous destructive tests to be run.
Salt Testing Project
The testing libraries for salt have been moved out of the main salt code base
and into a standalone codebase. This has been done to ease the use of the
testing systems being used in salt based projects other than Salt itself.
StormPath External Authentication
The external auth system now supports the fantastic Stormpath cloud based
authentication system.
LXC Support
Extensive additions have been made to Salt for LXC support. This includes
the backend libs for managing LXC containers. Integration into the salt-virt
system is still in the works.
Mac OS X User/Group Support
Salt is now able to manage users and groups on Minions running Mac OS X.
However, at this time user passwords cannot be managed.
Django ORM External Pillar
Pillar data can now be derived from Django managed databases.
Fixes from RC to release
- Multiple documentation fixes
- Add multiple source files + templating for file.append (issue 6905)
- Support sysctl configuration files in systemd>=207 (issue 7351)
- Add file.search and file.replace
- Fix cross-calling execution functions in provider overrides
- Fix locale override for postgres (issue 4543)
- Fix Raspbian identification for service/pkg support (issue 7371)
- Fix cp.push file corruption (issue 6495)
- Fix ALT Linux password hash specification (issue 3474)
- Multiple salt-ssh-related fixes and improvements
Salt 0.6.0 release notes
The Salt remote execution manager has reached initial functionality! Salt is a
management application which can be used to execute commands on remote sets of
servers.
The whole idea behind Salt is to create a system where a group of servers can
be remotely controlled from a single master; not only can commands be executed
on remote systems, but salt can also be used to gather information about your
server environment.
Unlike similar systems, like Func and MCollective, Salt is extremely simple to
set up and use; the entire application is contained in a single package, and the
master and minion daemons require no running dependencies in the way that Func
requires Certmaster and MCollective requires ActiveMQ.
Salt also manages authentication and encryption. Rather than using SSL for
encryption, salt manages encryption on a payload level, so the data sent across
the network is encrypted with fast AES encryption, and authentication uses RSA
keys. This means that Salt is fast, secure, and very efficient.
Messaging in Salt is executed with ZeroMQ, so the message passing interface is
built into salt and does not require an external ZeroMQ server. This also adds
speed to Salt since there is no additional bloat on the networking layer, and
ZeroMQ has already proven itself as a very fast networking system.
The remote execution in Salt is "Lazy Execution", in that once the command is
sent the requesting network connection is closed. This makes it easier to
detach the execution from the calling process on the master. It also means that
replies are cached, so that information gathered from historic commands can be
queried in the future.
Salt also allows users to make execution modules in Python. Writers of these
modules should also be pleased to know that they have access to the impressive
information gathered from PuppetLabs' Facter application, making Salt modules
more flexible. In the future I hope to also allow Salt to group servers based
on Facter information as well.
All in all Salt is fast, efficient and clean, can be used from a simple command
line client or through an API, uses message queue technology to make network
execution extremely fast, and encryption is handled in a very fast and
efficient manner. Salt is also VERY easy to use and VERY easy to extend.
You can find the source code for Salt on my GitHub page; I have also set up a
few wiki pages explaining how to use and set up Salt. If you are using Arch
Linux there is a package available in the Arch Linux AUR.
Salt 0.6.0 Source: https://github.com/downloads/saltstack/salt/salt-0.6.0.tar.gz
GitHub page: https://github.com/saltstack/salt
Wiki: https://github.com/saltstack/salt/wiki
Arch Linux Package: https://aur.archlinux.org/packages.php?ID=47512
I am very open to contributions; for instance, I need packages for more Linux
distributions as well as BSD packages and testers.
Give Salt a try! This is the initial release and is not a 1.0 quality release,
but it has been working well for me! I am eager to get your feedback!
Salt 0.7.0 release notes
I am pleased to announce the release of Salt 0.7.0!
This release marks the first stable release of salt; 0.7.0 should be
suitable for general use.
0.7.0 Brings the following new features to Salt:
- Integration with Facter data from puppet labs
- Allow for matching minions from the salt client via Facter information
- Minion job threading, many jobs can be executed from the master at once
- Preview of master clustering support - Still experimental
- Introduce new minion modules for stats, virtualization, service management and more
- Add extensive logging to the master and minion daemons
- Add sys.reload_functions for dynamic function reloading
- Greatly improve authentication
- Introduce the saltkey command for managing public keys
- Begin backend development preparatory to introducing butter
- Addition of man pages for the core commands
- Extended and cleaned configuration
0.7.0 Fixes the following major bugs:
- Fix crash in minions when matching failed
- Fix configuration file lookups for the local client
- Repair communication bugs in encryption
- Numerous fixes in the minion modules
The next release of Salt should see the following features:
- Stabilize the cluster support
- Introduce a remote client for salt command tiers
- salt-ftp system for distributed file copies
- Initial support for "butter"
Coming up next is a higher level management framework for salt called
Butter. I want salt to stay as a simple and effective communication
framework, and allow for more complicated executions to be managed via
Butter.
Right now Butter is being developed to act as a cloud controller using salt
as the communication layer, but features like system monitoring and advanced
configuration control (a puppet manager) are also in the pipe.
Special thanks to Joseph Hall for the status and network modules, and thanks
to Matthias Teege for tracking down some configuration bugs!
Salt can be downloaded from the following locations:
Source Tarball:
https://github.com/downloads/saltstack/salt/salt-0.7.0.tar.gz
Arch Linux Package:
https://aur.archlinux.org/packages.php?ID=47512
Please enjoy the latest Salt release!
Salt 0.8.0 release notes
Salt 0.8.0 is ready for general consumption!
The source tarball is available on GitHub for download:
https://github.com/downloads/saltstack/salt/salt-0.8.0.tar.gz
A lot of work has gone into salt since the last release just 2 weeks ago, and
salt has improved a great deal. A swath of new features are here along with
performance and threading improvements!
The main new features of salt 0.8.0 are:
- Salt-cp
- Cython minion modules
- Dynamic returners
- Faster return handling
- Lowered required Python version to 2.6
- Advanced minion threading
- Configurable minion modules
Salt-cp -
The salt-cp command introduces the ability to copy simple files via salt to
targeted servers. Using salt-cp is very simple, just call salt-cp with a target
specification, the source file(s) and where to copy the files on the minions.
For instance:
# salt-cp '*' /etc/hosts /etc/hosts
Will copy the local /etc/hosts file to all of the minions.
Salt-cp is very young; in the future more advanced features will be added, and
the functionality will much more closely resemble the cp command.
Cython minion modules -
Cython is an amazing tool used to compile Python modules down to C. This is
arguably the fastest way to run Python code, and since pyzmq requires cython,
adding support to salt for cython adds no new dependencies.
Cython minion modules allow minion modules to be written in cython and
therefore executed in compiled c. Simply write the salt module in cython and
use the file extension “.pyx” and the minion module will be compiled when
the minion is started. An example cython module is included in the main
distribution called cytest.pyx:
https://github.com/saltstack/salt/blob/develop/salt/modules/cytest.pyx
Dynamic Returners -
By default salt returns command data back to the salt master, but now salt can
return command data to any system. This is enabled via the new returners
modules feature for salt. The returner modules take the return data and send
it to a specific system. The returner modules work like minion modules, so any
returner can be added to the minions.
This means that a custom data returner can be added to communicate the return
data to anything from MySQL, Redis, MongoDB and more!
There are 2 simple stock returners in the returners directory:
https://github.com/saltstack/salt/blob/develop/salt/returners
The documentation on writing returners will be added to the wiki shortly, and
returners can be written in pure Python, or in cython.
Advanced Minion Threading:
In 0.7.0 the minion would block after receiving a command from the master, now
the minion will spawn a thread or multiprocess. By default Python threads are
used because for general use they have proved to be faster, but the minion can
now be configured to use the Python multiprocessing module instead. Using
multiprocessing will cause executions that are CPU bound or would otherwise
exploit the negative aspects of the Python GIL to run faster and more reliably,
but simple calls will still be faster with Python threading.
The configuration option can be found in the minion configuration file:
https://github.com/saltstack/salt/blob/develop/conf/minion
Lowered Supported Python to 2.6 -
The requirement for Python 2.7 has been removed to support Python 2.6. I have
received requests to take the minimum Python version back to 2.4, but
unfortunately this will not be possible, since the ZeroMQ Python bindings do
not support Python 2.4.
Salt 0.8.0 is a very major update; it also changes the network protocol slightly,
which makes communication with older salt daemons impossible. Your master and
minions need to be upgraded together!
I could use some help bringing salt to the people! Right now I only have
packages for Arch Linux, Fedora 14 and Gentoo. We need packages for Debian and
people willing to help test on more platforms. We also need help writing more
minion modules and returner modules. If you want to contribute to salt please
hop on the mailing list and send in patches, make a fork on GitHub and send in
pull requests! If you want to help but are not sure where you can, please email
me directly or post to the mailing list!
I hope you enjoy salt! While it is not yet 1.0, salt is completely viable and
usable!
-Thomas S. Hatch
Salt 0.8.7 release notes
It has been a month since salt 0.8.0, and it has been a long month! But Salt is
still coming along strong. 0.8.7 has a lot of changes and a lot of updates.
This update makes Salt’s ZeroMQ back end better, strips Facter from the
dependencies, and introduces interfaces to handle more capabilities.
Many of the major updates are in the background, but the changes should shine
through to the surface. A number of the new features are still a little thin,
but the back end to support expansion is in place.
I also recently gave a presentation to the Utah Python users group in Salt Lake
City, the slides from this presentation are available here:
https://github.com/downloads/saltstack/salt/Salt.pdf
The video from this presentation will be available shortly.
The major new features and changes in Salt 0.8.7 are:
- Revamp ZeroMQ topology on the master for better scalability
- State enforcement
- Dynamic state enforcement managers
- Extract the module loader into salt.loader
- Make Job ids more granular
- Replace Facter functionality with the new salt grains interface
- Support for “virtual” salt modules
- Introduce the salt-call command
- Better debugging for minion modules
The new ZeroMQ topology allows for better scalability; this will be required by
the need to execute massive file transfers to multiple machines in parallel and
by state management. The new ZeroMQ topology is available in the aforementioned
presentation.
0.8.7 introduces the capability to declare states; this is similar to the
capabilities of Puppet. States in salt are declared via state data structures.
This system is very young, but the core feature set is available. Salt states
work around rendering files which represent Salt high data. More on the Salt
state system will be documented in the near future.
The system for loading salt modules has been pulled out of the minion class to
be a standalone module; this has enabled more dynamic loading of Salt modules
and enables many of the updates in 0.8.7 –
https://github.com/saltstack/salt/blob/develop/salt/loader.py
Salt Job ids are now microsecond precise; this was needed to repair a race
condition unveiled by the speed improvements in the new ZeroMQ topology.
The new grains interface replaces the functionality of Facter. The idea behind
grains differs from Facter in that the grains are only used for static system
data; dynamic data needs to be derived from a call to a salt module. This makes
grains much faster to use, since the grains data is generated when the minion
starts.
Virtual salt modules allow for a salt module to be presented as something
other than its module name. The idea here is that, based on information from the
minion, decisions can be made about which module should be presented. The best
example is the pacman module. The pacman module will only load on Arch Linux
minions, and will be called pkg. Similarly the yum module will be presented as
pkg when the minion starts on a Fedora/RedHat system.
The new salt-call command allows for minion modules to be executed from the
minion. This means that a salt module can be executed directly on the minion,
which is a great tool for testing Salt modules. The salt-call command can also
be used to view the grains data.
In previous releases when a minion module threw an exception very little data
was returned to the master. Now the stack trace from the failure is returned
making debugging of minion modules MUCH easier.
Salt is nearing the goal of 1.0, where the core feature set and capability is
complete!
Salt 0.8.7 can be downloaded from GitHub here:
https://github.com/downloads/saltstack/salt/salt-0.8.7.tar.gz
-Thomas S Hatch
Salt 0.8.8 release notes
Salt 0.8.8 is here! This release adds a great deal of code and some serious new
features. The latest release can be downloaded here:
https://github.com/downloads/saltstack/salt/salt-0.8.8.tar.gz
Improved Documentation has been set up for salt using sphinx thanks to the
efforts of Seth House. This new documentation system will act as the back end
to the salt website which is still under heavy development. The new sphinx
documentation system has also been used to greatly clean up the salt manpages.
The salt 7 manpage in particular now contains extensive information which was
previously only in the wiki. The new documentation can be found at:
http://thatch45.github.com/salt-www/
We still have a lot to add, and when the domain is set up I will post another
announcement.
More additions have been made to the ZeroMQ setup, particularly in the realm
of file transfers. Salt 0.8.8 introduces a built in, stateless, encrypted file
server which allows salt minions to download files from the salt master using
the same encryption system used for all other salt communications. The main
motivation for the salt file server has been to facilitate the new salt state
system.
Much of the salt code has been cleaned up and a new cleaner logging system has
been introduced thanks to the efforts of Pedro Algarvio. These additions will
allow for much more flexible logging to be executed by salt, and fixed a great
deal of my poor spelling in the salt docstrings! Pedro Algarvio has also
cleaned up the API, making it easier to embed salt into another application.
The biggest addition to salt found in 0.8.8 is the new state system. The salt
module system has received a new front end which allows salt to be used as a
configuration management system. The configuration management system allows for
system configuration to be defined in data structures. The configuration
management system, or as it is called in salt, the “salt state system” supports
many of the features found in other configuration managers, but allows for
system states to be written in a far simpler format, executes at blazing speeds,
and operates via the salt minion matching system. The state system also operates
within the normal scope of salt, and requires no additional configuration to
use.
The salt state system can enforce the following states with many more to come:
- Packages
- Files
- Services
- Executing commands
- Hosts
The system used to define the salt states is based on a data structure, the
data structure used to define the salt states has been made to be as easy to
use as possible. The data structure is defined by default using a YAML file
rendered via a Jinja template. This means that the state definition language
supports all of the data structures that YAML supports, and all of the
programming constructs and logic that Jinja supports. If the user does not
like YAML or Jinja the states can be defined in yaml-mako, json-jinja, or
json-mako. The system used to render the states is completely dynamic, and any
rendering system can be added to the capabilities of Salt. This means that a
rendering system that renders XML data in a cheetah template, or whatever you
can imagine, can be easily added to the capabilities of salt.
The salt state system also supports isolated environments, as well as matching
code from several environments to a single salt minion.
The feature base for Salt has grown quite a bit since my last serious
documentation push. As we approach 0.9.0 the goals are becoming very clear, and
the documentation needs a lot of work. The main goals for 0.9.0 are to further
refine the state system, fix any bugs we find, get Salt running on as many
platforms as we can, and get the documentation filled out. There is a lot more
to come as Salt moves forward to encapsulate a much larger scope, while
maintaining supreme usability and simplicity.
If you would like a more complete overview of Salt please watch the Salt
presentation:
Flash Video:
http://blip.tv/thomas-s-hatch/salt-0-8-7-presentation-5180182
OGV Video Download:
http://blip.tv/file/get/Thatch45-Salt087Presentation416.ogv
Slides:
https://github.com/downloads/saltstack/salt/Salt.pdf
-Thomas S Hatch
Salt 0.8.9 Release Notes
Salt 0.8.9 has finally arrived! Unfortunately this is much later than I had
hoped to release 0.8.9; life has been very crazy over the last month. But
despite challenges, Salt has moved forward!
This release, as expected, adds a few new features and many refinements. One
of the most exciting aspects of this release is that the development community
for salt has grown a great deal and much of the code is from contributors.
Also, I have filled out the documentation a great deal. So information on
States is properly documented, and much of the documentation that was out of
date has been filled in.
New Features
Salt Run
A big feature is the addition of Salt run; the salt-run command allows for
master side execution modules to be made that gather specific information or
execute custom routines from the master.
Documentation for salt-run can be found here:
http://saltstack.org/ref/runners.html
Refined Outputters
One problem often complained about in salt was the fact that the output was
so messy. Thanks to help from Jeff Schroeder, a cleaner interface for the
command output of the Salt CLI has been made. This new interface makes
adding new printout formats easy, and additions to the capabilities of minion
modules make it possible to set the printout mode, or outputter, for
functions in minion modules.
Cross Calling Salt Modules
Salt modules can now call each other; the __salt__ dict has been added to
the predefined references in minion modules. This new feature is documented in
the modules documentation:
http://saltstack.org/ref/modules/index.html
Watch Option Added to Salt State System
Now in Salt states you can set the watch option; this will allow watch-enabled
states to change based on a change in the other defined states. This is similar
to subscribe and notify statements in Puppet.
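A brief sketch of the idea, using the state syntax of this era (the service,
file path and source below are illustrative only):
apache:
  service:
    - running
    - watch:
      - file: /etc/httpd/conf/httpd.conf

/etc/httpd/conf/httpd.conf:
  file:
    - managed
    - source: salt://apache/httpd.conf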
Root Dir Option
Travis Cline has added the ability to define the option root_dir, which
allows the salt minion to operate in a subdir. This is a strong move in
supporting the minion running as an unprivileged user.
Config Files Defined in Variables
Thanks again to Travis Cline, the master and minion configuration file locations
can be defined in environment variables now.
New Modules
Quite a few new modules, states, returners and runners have been made.
New Minion Modules
apt
Support for apt-get has been added, this adds greatly improved Debian and
Ubuntu support to Salt!
useradd and groupadd
Support for manipulating users and groups on Unix-like systems.
moosefs
Initial support for reporting on aspects of the distributed file system,
MooseFS. For more information on MooseFS please see: http://moosefs.org
Thanks to Joseph Hall for his work on MooseFS support.
mount
Manage mounts and the fstab.
puppet
Execute puppet on remote systems.
shadow
Manipulate and manage the user password file.
ssh
Interact with ssh keys.
New States
user and group
Support for managing users and groups in Salt States.
mount
Enforce mounts and the fstab.
New Returners
mongo_return
Send the return information to a MongoDB server.
New Runners
manage
Display minions that are up or down.
Salt 0.9.0 Release Notes
Salt 0.9.0 is here. This is an exciting release; 0.9.0 includes the new network
topology features allowing peer salt commands and masters of masters via the
syndic interface.
0.9.0 also introduces many more modules, improvements to the API and
improvements to the ZeroMQ systems.
New Features
Salt Syndic
The new Syndic interface allows a master to be commanded via another, higher
level salt master. This is a powerful solution allowing a master control
structure to exist, allowing salt to scale to much larger levels than before.
Peer Communication
0.9.0 introduces the capability for a minion to call a publication on the
master and receive the return from another set of minions. This allows salt
to act as a communication channel between minions and as a general
infrastructure message bus.
Peer communication is turned off by default but can be enabled via the peer
option in the master configuration file. Documentation on the new Peer
interface.
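A rough sketch of what enabling this might look like in the master config
(this follows the format of later peer documentation, where a regex of minion
ids maps to the functions those minions may publish; the function names are
examples only):
peer:
  .*:
    - test.ping
    - network.interfaces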
Cleaner Key Management
This release changes some of the key naming to allow for multiple master keys
to be held based on the type of minion gathering the master key.
The -d option has also been added to the salt-key command allowing for easy
removal of accepted public keys.
The --gen-keys option is now available as well for salt-key, this allows
for a salt specific RSA key pair to be easily generated from the command line.
Improved 0MQ Master Workers
The 0MQ worker system has been further refined to be faster and more robust.
This new system has been able to handle a much larger load than the previous
setup. The new system uses the IPC protocol in 0MQ instead of TCP.
New Modules
Quite a few new modules have been made.
New Minion Modules
apache
Work directly with apache servers, great for managing balanced web servers
cron
Read out the contents of a systems crontabs
mdadm
Module to manage raid devices in Linux, appears as the raid module
mysql
Gather simple data from MySQL databases
ps
Extensive utilities for managing processes
publish
Used by the peer interface to allow minions to make publications
Salt 0.9.2 Release Notes
Salt 0.9.2 has arrived! 0.9.2 is primarily a bugfix release; the exciting
component in 0.9.2 is greatly improved support for salt states. All of the
salt states interfaces have been more thoroughly tested and the new salt-states
git repo is growing with examples of how to use states.
This release introduces salt states for early developers and testers to start
helping us clean up the states interface and make it ready for the world!
0.9.2 also fixes a number of bugs found on Python 2.6.
New Features
Salt-Call Additions
The salt-call command has received an overhaul. It now hooks into the outputter
system so command output looks clean, and the logging system has been hooked
into salt-call, so the -l option allows the logging output from salt minion
functions to be displayed.
The end result is that the salt-call command can execute the state system and
return clean output:
# salt-call state.highstate
State System Fixes
The state system has been tested and better refined. As of this release the
state system is ready for early testers to start playing with. If you are
interested in working with the state system please check out the (still very
small) salt-states GitHub repo:
https://github.com/thatch45/salt-states
This git repo is the active development branch for determining how a clean
salt-state database should look and act. Since the salt state system is still
very young a lot of help is still needed here. Please fork the salt-states
repo and help us develop a truly large and scalable system for configuration
management!
Notable Bug Fixes
Cython Loading Disabled by Default
Cython loading requires a development tool chain to be installed on the minion;
requiring this by default can cause problems for most Salt deployments. If
Cython auto loading is desired it will need to be turned on in the minion
config.
Salt 0.9.3 Release Notes
Salt 0.9.3 has finally arrived. This is another big step forward for Salt; new
features range from proper FreeBSD support to fixing issues seen when
attaching a minion to a master over the Internet.
The biggest improvements in 0.9.3 though can be found in the state system, it
has progressed from something ready for early testers to a system ready to
compete with platforms such as Puppet and Chef. The backbone of the state
system has been greatly refined and many new features are available.
New Features
WAN Support
Recently more people have been testing Salt minions connecting to Salt Masters
over the Internet. It was found that Minions would commonly lose their
connection to the master when working over the internet. The minions can now
detect if the connection has been lost and reconnect to the master, making
WAN connections much more reliable.
State System Fixes
Substantial testing has gone into the state system and it is ready for real
world usage. A great deal has been added to the documentation for states and
the modules and functions available to states have been cleanly documented.
A number of State System bugs have also been found and repaired; the output
from the state system has also been refined to be extremely clear and concise.
Error reporting has also been introduced; issues found in sls files will now
be clearly reported when executing Salt States.
Extend Declaration
The Salt States have also gained the extend declaration. This declaration
allows for states to be cleanly modified in a post environment. Simply said,
if there is an apache.sls file that declares the apache service, then another
sls can include apache and then extend it:
include:
  - apache
extend:
  apache:
    service:
      - require:
        - pkg: mod_python
mod_python:
  pkg:
    - installed
The notable behavior with the extend functionality is that it literally extends
or overwrites a declaration set up in another sls module. This means that Salt
will behave as though the modifications were made directly to the apache sls.
This ensures that the apache service in this example is directly tied to all
requirements.
Highstate Structure Specification
This release comes with a clear specification of the Highstate data structure
that is used to declare Salt States. This specification explains everything
that can be declared in the Salt SLS modules.
The specification is extremely simple, and illustrates how Salt has been able
to fulfill the requirements of a central configuration manager within a simple
and easy to understand format and specification.
SheBang Renderer Switch
It came to our attention that having many renderers means that there may be a
situation where more than one State Renderer should be available within a
single State Tree.
The method chosen to accomplish this was something already familiar to
developers and systems administrators, a SheBang. The Python State Renderer
displays this new capability.
Python State Renderer
Until now Salt States could only be declared in yaml or json using Jinja or
Mako. A new, very powerful, renderer has been added, making it possible to
write Salt States in pure Python:
#!py
def run():
    '''
    Install the python-mako package
    '''
    return {'include': ['python'],
            'python-mako': {'pkg': ['installed']}}
This renderer is used by making a run function that returns the Highstate data
structure. Any capabilities of Python can be used in pure Python sls modules.
This example of a pure Python sls module is the same as this example in yaml:
include:
  - python
python-mako:
  pkg:
    - installed
FreeBSD Support
Additional support has been added for FreeBSD, this is Salt's first branch out
of the Linux world and proves the viability of Salt on non-Linux platforms.
Salt remote execution already worked on FreeBSD, and should work without issue
on any Unix-like platform. But this support comes in the form of package
management and user support, so Salt States also work on FreeBSD now.
The new freebsdpkg module provides package management support for FreeBSD
and the new pw_user and pw_group provide user and group management.
Module and State Additions
Cron Support
Support for managing the system crontab has been added; declaring a cron state
can be done easily:
date > /tmp/datestamp:
  cron:
    - present
    - user: fred
    - minute: 5
    - hour: 3
File State Additions
The file state has been given a number of new features, primarily the
directory, recurse, symlink and absent functions.
- file.directory
Make sure that a directory exists and has the right permissions.
/srv/foo:
  file:
    - directory
    - user: root
    - group: root
    - mode: 1755
- file.symlink
Make a symlink.
/var/lib/www:
  file:
    - symlink
    - target: /srv/www
    - force: True
- file.recurse
The recurse state function will recursively download a directory on the
master file server and place it on the minion. Any change in the files on
the master will be pushed to the minion. The recurse function is very
powerful and has been tested by pushing out the full Linux kernel source.
/opt/code:
  file:
    - recurse
    - source: salt://linux
- file.absent
Make sure that the file is not on the system, recursively deletes
directories, files and symlinks.
/etc/httpd/conf.d/somebogusfile.conf:
  file:
    - absent
Sysctl Module and State
The sysctl module and state allow for sysctl components in the kernel to be
managed easily. The sysctl module contains the following functions:
- sysctl.show - Return a list of sysctl parameters for this minion
- sysctl.get - Return a single sysctl parameter for this minion
- sysctl.assign - Assign a single sysctl parameter for this minion
- sysctl.persist - Assign and persist a simple sysctl parameter for this minion
The sysctl state allows for sysctl parameters to be assigned:
vm.swappiness:
  sysctl:
    - present
    - value: 20
Kernel Module Management
A module for managing Linux kernel modules has been added. The new functions
are as follows:
- kmod.available - Return a list of all available kernel modules
- kmod.check_available - Check to see if the specified kernel module is available
- kmod.lsmod - Return a dict containing information about currently loaded modules
- kmod.load - Load the specified kernel module
- kmod.remove - Unload the specified kernel module
The kmod state can enforce modules be either present or absent:
kvm_intel:
  kmod:
    - present
Ssh Authorized Keys
The ssh_auth state can distribute ssh authorized keys out to minions. Ssh
authorized keys can be present or absent.
AAAAB3NzaC1kc3MAAACBAL0sQ9fJ5bYTEyYvlRBsJdDOo49CNfhlWHWXQRqul6rwL4KIuPrhY7hBw0tV7UNC7J9IZRNO4iGod9C+OYutuWGJ2x5YNf7P4uGhH9AhBQGQ4LKOLxhDyT1OrDKXVFw3wgY3rHiJYAbd1PXNuclJHOKL27QZCRFjWSEaSrUOoczvAAAAFQD9d4jp2dCJSIseSkk4Lez3LqFcqQAAAIAmovHIVSrbLbXAXQE8eyPoL9x5C+x2GRpEcA7AeMH6bGx/xw6NtnQZVMcmZIre5Elrw3OKgxcDNomjYFNHuOYaQLBBMosyO++tJe1KTAr3A2zGj2xbWO9JhEzu8xvSdF8jRu0N5SRXPpzSyU4o1WGIPLVZSeSq1VFTHRT4lXB7PQAAAIBXUz6ZO0bregF5xtJRuxUN583HlfQkXvxLqHAGY8WSEVlTnuG/x75wolBDbVzeTlxWxgxhafj7P6Ncdv25Wz9wvc6ko/puww0b3rcLNqK+XCNJlsM/7lB8Q26iK5mRZzNsGeGwGTyzNIMBekGYQ5MRdIcPv5dBIP/1M6fQDEsAXQ==:
  ssh_auth:
    - present
    - user: frank
    - enc: dsa
    - comment: "Frank's key"
Salt 0.9.4 Release Notes
Salt 0.9.4 has arrived. This is a critical update that repairs a number of
key bugs found in 0.9.3. But this update is not without feature additions
as well! 0.9.4 adds support for Gentoo portage to the pkg module and state
system. Also there are 2 major new state additions: the failhard option and
the ability to set up finite state ordering with the order option.
This release also sees our largest increase in community contributions.
These contributors have been and continue to be the life blood of the Salt
project, and the team continues to grow. I want to put out a big thanks to
our new and existing contributors.
New Features
Failhard State Option
Normally, when a state fails Salt continues to execute the remainder of the
defined states and will only refuse to execute states that require the failed
state.
But the situation may exist where you would want all state execution to stop
if a single state execution fails. The capability to do this is called
failing hard.
State Level Failhard
A single state can have a failhard set; this means that if this individual
state fails, all state execution will immediately stop. This is a great
thing to do if there is a state that sets up a critical config file and
setting a require for each state that reads the config would be cumbersome.
A good example of this would be setting up a package manager early on:
/etc/yum.repos.d/company.repo:
  file:
    - managed
    - source: salt://company/yumrepo.conf
    - user: root
    - group: root
    - mode: 644
    - order: 1
    - failhard: True
In this situation, the yum repo is going to be configured before other states,
and if it fails to lay down the config file, then no other states will be
executed.
Global Failhard
It may be desired to have failhard applied to every state that is executed;
if this is the case, then failhard can be set in the master configuration
file. Setting failhard in the master configuration file will result in failing
hard when any minion gathering states from the master has a state fail.
This is NOT the default behavior; normally Salt will only fail states that
require a failed state.
Using the global failhard is generally not recommended, since it can result
in states not being executed or even checked. It can also be confusing to
see states failhard if an admin is not actively aware that the failhard has
been set.
To use the global failhard, set failhard: True in the master configuration.
Finite Ordering of State Execution
When creating salt sls files, it is often important to ensure that they run in
a specific order. While states will always execute in the same order, that
order is not necessarily defined the way you want it.
A few tools exist in Salt to set up the correct state ordering; these tools
consist of requisite declarations and order options.
The Order Option
Before using the order option, remember that the majority of state ordering
should be done with requisite statements, and that a requisite statement
will override an order option.
The order option is used by adding an order number to a state declaration
with the option order:
vim:
  pkg:
    - installed
    - order: 1
By setting the order option to 1, this ensures that the vim package will be
installed in tandem with any other state declarations set to order 1.
Any state declared without an order option will be executed after all states
with order options are executed.
But this construct can only handle ordering states from the beginning.
Sometimes you may want to send a state to the end of the line; to do this,
set the order to last:
vim:
  pkg:
    - installed
    - order: last
Substantial testing has gone into the state system and it is ready for real
world usage. A great deal has been added to the documentation for states and
the modules and functions available to states have been cleanly documented.
A number of state system bugs have also been found and repaired, and the output
from the state system has been refined to be extremely clear and concise.
Error reporting has also been introduced, issues found in sls files will now
be clearly reported when executing Salt States.
Gentoo Support
Additional experimental support has been added for Gentoo. This is found in
the contribution from Doug Renn, aka nestegg.
Salt 0.9.5 Release Notes
Salt 0.9.5 is one of the largest steps forward in the development of Salt.
0.9.5 comes with many milestones; this release has seen the community of
developers grow to an international team of 46 code contributors, and brings
many feature additions, feature enhancements, bug fixes, and speed improvements.
Major Features
SPEED! Pickle to msgpack
For a few months now we have been talking about moving away from Python
pickles for network serialization, but a preferred serialization format
had not yet been found. After an extensive performance testing period
involving everything from JSON to protocol buffers, a clear winner emerged.
Message Pack (http://msgpack.org/) proved to not only be the fastest and most
compact, but also the most "salt like". Message Pack is simple, and the code
involved is very small. The msgpack library for Python has been added directly
to Salt.
This move introduces a few changes to Salt. First off, Salt is no longer a
"noarch" package, since the msgpack lib is written in C. Salt 0.9.5 will also
have compatibility issues with 0.9.4 with the default configuration.
We have gone to great lengths to avoid backwards compatibility issues with
Salt, but changing the serialization medium was going to create issues
regardless. Salt 0.9.5 is somewhat backwards compatible with earlier minions. A
0.9.5 master can command older minions, but only if the serial config value in
the master is set to pickle. This will tell the master to publish messages in
pickle format and will allow the master to receive messages in both msgpack
and pickle formats.
Therefore the suggested methods for upgrading are either to just upgrade
everything at once, or:
- Upgrade the master to 0.9.5
- Set serial to pickle in the master config
- Upgrade the minions
- Remove the serial option from the master config
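While the older minions are still in place, the master configuration would
simply contain:
serial: pickle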
Since pickles can be used as a security exploit, the ability for a master to
accept pickles from minions at all will be removed in a future release.
C Bindings for YAML
All of the YAML rendering is now done with the YAML C bindings. This speeds up
all of the sls files when running states.
Experimental Windows Support
David Boucha has worked tirelessly to bring initial support to Salt for
Microsoft Windows operating systems. Right now the Salt Minion can run as a
native Windows service and accept commands.
In the weeks and months to come Windows will receive the full treatment and
will have support for Salt States and more robust support for managing Windows
systems. This is a big step forward for Salt to move entirely outside of the
Unix world, and proves Salt is a viable cross platform solution. Big Thanks
to Dave for his contribution here!
Dynamic Module Distribution
Many Salt users have expressed the desire to have Salt distribute in-house
modules, states, renderers, returners, and grains. This support has been added
in a number of ways:
Modules via States
Now when Salt modules are deployed to a minion via the state system as a file,
the modules will be automatically loaded into the active running minion
- no restart required - and into the active running state. So custom state
modules can be deployed and used in the same state run.
Modules via Module Environment Directories
Under the file_roots each environment can now have directories that are used
to deploy large groups of modules. These directories sync modules at the
beginning of a state run on the minion, or can be manually synced via the Salt
module salt.modules.saltutil.sync_all.
The directories are named:
_modules
_states
_grains
_renderers
_returners
The modules are pushed to their respective scopes on the minions.
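For example, to manually push everything placed under these directories out
to all minions:
salt '*' saltutil.sync_all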
Module Reloading
Modules can now be reloaded without restarting the minion. This is done by
calling the salt.modules.sys.reload_modules function.
But wait, there's more! Now when a salt module of any type is added via
states the modules will be automatically reloaded, allowing for modules to be
laid down with states and then immediately used.
Finally, all modules are reloaded when modules are dynamically distributed
from the salt master.
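For example, to trigger a reload on all minions from the command line:
salt '*' sys.reload_modules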
Enable / Disable Added to Service
A great deal of demand has existed for adding the capability to set services
to be started at boot in the service module. This feature also comes with an
overhaul of the service modules and initial systemd support.
This means that the service state can now accept enable: True to make sure a
service is enabled at boot, and enable: False to make sure it is disabled.
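For example (using a hypothetical httpd service):
httpd:
  service:
    - running
    - enable: True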
Compound Target
A new target type has been added to the lineup, the compound target. In
previous versions the desired minions could only be targeted via a single
specific target type, but now many target specifications can be declared.
These targets can also be separated by and/or operators, so certain properties
can be used to omit a node:
salt -C 'webserv* and G@os:Debian or E@db.*' test.ping
will match all minions with ids starting with webserv via a glob that also
match the os:Debian grain, or minions that match the db.* regular expression.
Node Groups
Often the convenience of having a predefined group of minions to execute
targets on is desired. This can be accomplished with the new nodegroups
feature. Nodegroups allow for predefined compound targets to be declared in
the master configuration file:
nodegroups:
  group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
  group2: 'G@os:Debian and foo.domain.com'
And then used via the -N
option:
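salt -N group1 test.ping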
Minion Side Data Store
The data module introduces the initial approach to storing persistent data on
the minions, specific to each minion. This allows for data to be stored on
minions that can be accessed from the master or from the minion.
The Minion datastore is young, and will eventually provide an interface similar
to a more mature key/value pair server.
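A minimal sketch of interacting with the datastore, assuming the data
module's update and getval functions and a purely illustrative key name:
salt '*' data.update somekey somevalue
salt '*' data.getval somekey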
Major Grains Improvement
The Salt grains have been overhauled to include a massive amount of extra data.
This includes hardware data, OS data, and Salt-specific data.
Salt -Q is Useful Now
In the past the salt query system, which displays the data from recent
executions, would display that data as raw Python, and it was unreadable.
0.9.5 has added the outputter system to the -Q
option, thus enabling the
salt query system to return readable output.
Packaging Updates
Huge strides have been made in packaging Salt for distributions. These
additions are thanks to our wonderful community where the work to set up
packages has proceeded tirelessly.
Fedora and Red Hat Enterprise
Salt packages have been prepared for inclusion in the Fedora Project and in
EPEL for Red Hat Enterprise 5 and 6. These packages are the result of the
efforts made by Clint Savage (herlo).
Debian/Ubuntu
A team of many contributors have assisted in developing packages for Debian
and Ubuntu. Salt is still actively seeking inclusion in upstream Debian and
Ubuntu and the package data that has been prepared is being pushed through
the needed channels for inclusion.
These packages have been prepared with the help of many community contributors.
More to Come
We are actively seeking inclusion in more distributions. Primarily getting
Salt into Gentoo, SUSE, OpenBSD and preparing Solaris support are all turning
into higher priorities.
Refinement
Salt continues to be refined into a faster, more stable and more usable
application. 0.9.5 comes with more debug logging, more bug fixes and more
complete support.
More Testing, More BugFixes
0.9.5 comes with more bugfixes due to more testing than any previous release.
The growing community and the introduction of a dedicated QA environment have
unearthed many issues that were hiding under the covers. This has further
refined and cleaned the state interface, taking care of things from minor
visual issues to repairing misleading data.
Custom Exceptions
A custom exception module has been added to throw salt specific exceptions.
This allows Salt to give much more granular error information.
New Modules
The new data module manages a persistent datastore on the minion.
Big thanks to bastichelaar for his help refining this module
FreeBSD kernel modules can now be managed in the same way Salt handles Linux
kernel modules.
This module was contributed thanks to the efforts of Christer Edwards
Support has been added for managing services in Gentoo. Now Gentoo services
can be started, stopped, restarted, enabled, disabled and viewed.
The pip module introduces management for pip installed applications.
Thanks goes to whitinge for the addition of the pip module
The rh_service module enables Red Hat and Fedora specific service management.
Now Red Hat like systems come with extensive management of the classic init
system used by Red Hat
The saltutil module has been added as a place to hold functions used in the
maintenance and management of salt itself. Saltutil is used to salt the salt
minion. The saltutil module is presently used only to sync extension modules
from the master server.
Systemd support has been added to Salt; systems using this next-generation
init system are now supported.
virtualenv
The virtualenv module has been added to allow salt to create virtual Python
environments.
Thanks goes to whitinge for the addition of the virtualenv module
Support for gathering disk information on Microsoft Windows minions
The windows modules come courtesy of Utah_Dave
The win_service module adds service support to Salt for Microsoft Windows
services
Salt can now manage local users on Microsoft Windows Systems
The yumpkg module, introduced in 0.9.4, uses the yum API to interact with the
yum package manager. Unfortunately, on Red Hat 5 systems salt does not have
access to the yum API because the yum API is running under Python 2.4 and Salt
needs to run under Python 2.6.
The yumpkg5 module bypasses this issue by shelling out to yum on systems where
the yum API is not available.
New States
The new mysql_database state adds the ability, on systems running a MySQL
server, to manage the existence of MySQL databases.
The mysql states are thanks to syphernl
The mysql_user state enables mysql user management.
virtualenv
The virtualenv state can manage the state of Python virtual environments.
Thanks to Whitinge for the virtualenv state
New Returners
A returner allowing Salt to send data to a cassandra server.
Thanks to Byron Clark for contributing this returner
Salt 0.9.6 Release Notes
Salt 0.9.6 is a release targeting a few bugs and changes. This is primarily
targeting an issue found in the names declaration in the state system. But a
few other bugs were also repaired, like missing support for grains in extmods.
Due to a conflict in distribution packaging msgpack will no longer be bundled
with Salt, and is required as a dependency.
New Features
HTTP and FTP support in file.managed
Now under the source option in the file.managed state an HTTP or FTP address
can be used instead of a file located on the salt master.
Allow Multiple Returners
Now the returner interface can define multiple returners, and will also return
data back to the master, making the process less ambiguous.
Minion Memory Improvements
A number of modules are now excluded from the minion if the underlying
systems required by those modules are not present on the minion system.
A number of other modules still need to be stripped out in this same way,
which should continue to make the minion more efficient.
Minions Can Locally Cache Return Data
A new option, cache_jobs, has been added to the minion to allow for all of the
historically run jobs to cache on the minion, allowing for looking up historic
returns. By default cache_jobs is set to False.
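For example, in the minion configuration file:
cache_jobs: True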
Pure Python Template Support For file.managed
Templates in the file.managed state can now be defined in a Python script.
This script needs to have a run function that returns the string that needs to
be in the named file.
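A minimal sketch of such a template script (the returned contents are purely
illustrative):
def run():
    '''
    Return the full contents of the managed file as a string.
    '''
    return 'Welcome to this salt managed system.\n'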
Salt 0.9.7 Release Notes
Salt 0.9.7 is here! The latest iteration of Salt brings more features and many
fixes. This release is a great refinement over 0.9.6, adding many conveniences
under the hood, as well as some features that make working with Salt much
better.
A few highlights include the new Job system, refinements to the requisite
system in states, the mod_init
interface for states, external node
classification, search path to managed files in the file state, and refinements
and additions to dynamic module loading.
0.9.7 also introduces the long developed (and oft changed) unit test framework
and the initial unit tests.
Major Features
Salt Jobs Interface
The new jobs interface makes the management of running executions much cleaner
and more transparent. Building on the existing execution framework the jobs
system allows clear introspection into the active running state of the
running Salt interface.
The Jobs interface is centered in the new minion-side proc system. The
minions now store msgpack-serialized files under /var/cache/salt/proc.
These files keep track of the active state of processes on the minion.
Functions in the saltutil Module
A number of functions have been added to the saltutil module to manage and
view the jobs:
- running: Returns the data of all running jobs that are found in the proc
directory.
- find_job: Returns specific data about a certain job based on job id.
- signal_job: Allows for a given jid to be sent a signal.
- term_job: Sends a termination signal (SIGTERM, 15) to the process
controlling the specified job.
- kill_job: Sends a kill signal (SIGKILL, 9) to the process controlling the
specified job.
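For example, from the master (where <jid> is the job id in question):
salt '*' saltutil.running
salt '*' saltutil.find_job <jid>
salt '*' saltutil.term_job <jid>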
The jobs Runner
A convenience runner front end and reporting system has been added as well.
The jobs runner contains functions to make viewing data easier and cleaner.
The jobs runner contains a number of functions...
active
The active function runs saltutil.running
on all minions and formats the
return data about all running jobs in a much more usable and compact format.
The active function will also compare jobs that have returned and jobs that
are still running, making it easier to see what systems have completed a job
and what systems are still being waited on.
lookup_jid
When jobs are executed the return data is sent back to the master and cached.
By default it is cached for 24 hours, but this can be configured via the
keep_jobs
option in the master configuration.
Using the lookup_jid
runner will display the same return data that the
initial job invocation with the salt command would display.
list_jobs
Before finding a historic job, it may be required to find the job id.
list_jobs
will parse the cached execution data and display all of the job data for jobs
that have already returned, or have partially returned.
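For example:
salt-run jobs.active
salt-run jobs.lookup_jid <jid>
salt-run jobs.list_jobs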
External Node Classification
Salt can now use external node classifiers like Cobbler's cobbler-ext-nodes.
Salt uses specific data from the external node classifier. In particular, the
classes value denotes which sls modules to run, and the environment value sets
the minion to another environment.
An external node classification can be set in the master configuration file via
the external_nodes
option:
http://salt.readthedocs.org/en/latest/ref/configuration/master.html#external-nodes
External nodes are loaded in addition to the top files. If it is intended to
only use external nodes, do not deploy any top files.
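For example, to use Cobbler's classifier, the master configuration would
contain:
external_nodes: cobbler-ext-nodes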
State Mod Init System
An issue arose with the pkg state. Every time a pkg state was run Salt would
need to refresh the package database. This made systems with slower package
metadata refresh speeds much slower to work with. To alleviate this issue the
mod_init
interface has been added to salt states.
The mod_init
interface is a function that can be added to a state file.
This function is called with the first state called. In the case of the pkg
state, the mod_init
function sets up a tag which makes the package database
only refresh on the first attempt to install a package.
In a nutshell, the mod_init
interface allows a state to run any command that
only needs to be run once, or can be used to set up an environment for working
with the state.
Source File Search Path
The file state continues to be refined, adding speed and capabilities. This
release adds the ability to pass a list to the source option. This list is then
iterated over until the source file is found, and the first found file is used.
The new syntax looks like this:
/etc/httpd/conf/httpd.conf:
  file:
    - managed
    - source:
      - salt://httpd/httpd.conf
      - http://myserver/httpd.conf: md5=8c1fe119e6f1fd96bc06614473509bf1
The source option can take sources in the list from the salt file server
as well as an arbitrary web source. If using an arbitrary web source the
checksum needs to be passed as well for file verification.
Refinements to the Requisite System
A few discrepancies were still lingering in the requisite system, in
particular, it was not possible to have a require
and a watch
requisite
declared in the same state declaration.
This issue has been alleviated, as well as making the requisite system run
more quickly.
Initial Unit Testing Framework
Because of the module system, and the need to test real scenarios, the
development of a viable unit testing system has been difficult, but unit
testing has finally arrived. Only a small amount of unit testing coverage
has been developed, much more coverage will be in place soon.
A huge thanks goes out to those who have helped with unit testing, and the
contributions that have been made to get us where we are. Without these
contributions unit tests would still be in the dark.
Compound Targets Expanded
Originally, only support for and and or was available in the compound target.
0.9.7 adds the capability to negate compound targets with not.
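For example, to target all webserv* minions that are not running Debian:
salt -C 'webserv* and not G@os:Debian' test.ping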
Nodegroups in the Top File
Previously the nodegroups defined in the master configuration file could not
be used to match nodes for states. The nodegroups support has been expanded
and the nodegroups defined in the master configuration can now be used to
match minions in the top file.
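A minimal sketch, assuming a nodegroup named group1 and an sls module named
webserver (both hypothetical names):
base:
  group1:
    - match: nodegroup
    - webserver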
Salt 0.9.8 Release Notes
Salt 0.9.8 is a big step forward, with many additions and enhancements, as
well as a number of precursors to advanced future developments.
This version of Salt adds much more power to the command line, making the
old hard timeout issues a thing of the past and adds keyword argument
support. These additions are also available in the salt client API, making
the available API tools much more powerful.
The new pillar system allows for data to be stored on the master and
assigned to minions in a granular way similar to the state system. It also
allows flexibility for users who want to keep data out of their state tree
similar to 'external lookup' functionality in other tools.
A new way to extend requisites was added, the "requisite in" statement.
This makes adding requires or watch statements to external state decs
much easier.
Additions to requisites making them much more powerful have been added as well
as improved error checking for sls files in the state system. A new provider
system has been added to allow for redirecting what modules run in the
background for individual states.
Support for OpenSUSE has been added and support for Solaris has begun
serious development. Windows support has been significantly enhanced as well.
The matcher and target systems have received a great deal of attention. The
default behavior of grain matching has changed slightly to reflect the rest
of salt and the compound matcher system has been refined.
A number of impressive features with keyword arguments have been added to both
the CLI and to the state system. This makes states much more powerful and
flexible while maintaining the simple configuration everyone loves.
The new batch size capability allows for executions to be rolled through a
group of targeted minions a percentage or specific number at a time. This
was added to prevent the "thundering herd" problem when targeting large
numbers of minions for things like service restarts or file downloads.
Upgrade Considerations
Upgrade Issues
There was a previously missed oversight which could cause a newer minion to
crash an older master. That oversight has been resolved so the version
incompatibility issue will no longer occur. When upgrading to 0.9.8 make
sure to upgrade the master first, followed by the minions.
Debian/Ubuntu Packages
The original Debian/Ubuntu packages were called salt and included all salt
applications. New packages in the ppa are split by function. If an old salt
package is installed then it should be manually removed and the new split
packages need to be freshly installed.
On the master:
# apt-get purge salt
# apt-get install salt-{master,minion}
On the minions:
# apt-get purge salt
# apt-get install salt-minion
And on any Syndics:
# apt-get install salt-syndic
The official salt stack ppa for Ubuntu is located at:
https://launchpad.net/~saltstack/+archive/salt
Major Features
Pillar
Pillar offers an interface to declare variable data on the master that is then
assigned to the minions. The pillar data is made available to all modules,
states, sls files etc. It is compiled on the master and is declared using the
existing renderer system. This means that learning pillar should be fairly
trivial to those already familiar with salt states.
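As a minimal sketch (the sls name data and its contents are hypothetical), a
pillar top file assigns pillar sls modules to minions just like the state top
file:
base:
  '*':
    - data
And data.sls then holds arbitrary key/value data that becomes available to
those minions:
company: Example Inc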
CLI Additions
The salt
command has received a serious overhaul and is more powerful
than ever. Data is returned to the terminal as it is received, and the salt
command will now wait for all running minions to return data before stopping.
This makes adding very large --timeout arguments completely unnecessary and
gets rid of long running operations returning empty {}
when the timeout is
exceeded.
When calling salt via sudo, the user originally running salt is saved to the
log for auditing purposes. This makes it easy to see who ran what by just
looking through the minion logs.
The salt-key command gained the -D and --delete-all arguments for
removing all keys. Be careful with this one!
Running States Without a Master
The addition of running states without a salt-master has been added
to 0.9.8. This feature allows for the unmodified salt state tree to be
read locally from a minion. The result is that the UNMODIFIED state tree
has just become portable, allowing minions to have a local copy of states
or to manage states without a master entirely.
This is accomplished via the new file client interface in Salt that allows
for the salt:// URI to be redirected to custom interfaces. This means that
there are now two interfaces for the salt file server: calling the master,
or looking in a local, minion-defined file_roots.
This new feature can be used by modifying the minion config to point to a
local file_roots and setting the file_client option to local.
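For example, a minimal minion configuration for this (assuming the state tree
lives in /srv/salt):
file_client: local
file_roots:
  base:
    - /srv/salt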
Keyword Arguments and States
State modules now accept the **kwargs
argument. This results in all data
in a sls file assigned to a state being made available to the state function.
This passes data in a transparent way back to the modules executing the logic.
In particular, this allows adding arguments to the pkg.install
module that
enable more advanced and granular controls with respect to what the state is
capable of.
An example of this along with the new debconf module for installing ldap
client packages on Debian:
ldap-client-packages:
  pkg:
    - debconf: salt://debconf/ldap-client.ans
    - installed
    - names:
      - nslcd
      - libpam-ldapd
      - libnss-ldapd
Keyword Arguments and the CLI
In the past it was required that all arguments be passed in the proper order to
the salt and salt-call commands. As of 0.9.8, keyword arguments can be
passed in the form of kwarg=argument
.
# salt -G 'type:dev' git.clone \
repository=https://github.com/saltstack/salt.git cwd=/tmp/salt user=jeff
Matcher Refinements and Changes
A number of fixes and changes have been applied to the Matcher system. The
most noteworthy is the change in the grain matcher. The grain matcher used to
use a regular expression to match the passed data to a grain, but now defaults
to a shell glob like the majority of match interfaces in Salt. A new option
is available that still uses the old style regex matching to grain data,
called grain-pcre. To use regex matching in compound matches, use the letter P.
For example, this would match any ArchLinux or Fedora minions:
# salt --grain-pcre 'os:(Arch|Fed).*' test.ping
And the associated compound matcher, suitable for use in top.sls, uses the
letter P:
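For example, the same match expressed as a compound matcher:
salt -C 'P@os:(Arch|Fed).*' test.ping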
NOTE: Changing the grains matcher from pcre to glob is backwards
incompatible.
Support has been added for matching minions with Yahoo's range library. This
is handled by passing range syntax with -R or --range arguments to salt.
More information at:
https://github.com/grierj/range/wiki/Introduction-to-Range-with-YAML-files
Requisite "in"
A new means of updating requisite statements has been added to make adding
watchers and requires to external states easier. Before 0.9.8 the only way
to extend the states that were watched by a state outside of the sls was to
use an extend statement:
include:
  - http

extend:
  apache:
    service:
      - watch:
        - pkg: tomcat

tomcat:
  pkg:
    - installed
But the new Requisite in
statement allows for easier extends for
requisites:
include:
  - http

tomcat:
  pkg:
    - installed
    - watch_in:
      - service: apache
Requisite in is part of the extend system, so still remember to always include
the sls that is being extended!
Providers
Salt predetermines what modules should be mapped to what uses based on the
properties of a system. These determinations are generally made for modules
that provide things like package and service management. The apt module
maps to pkg on Debian and the yum module maps to pkg on Fedora for instance.
Sometimes in states, it may be necessary for a non-default module to be used
for the desired functionality. For instance, an Arch Linux system may have
been set up with systemd support. Instead of using the default service module
detected for Arch Linux, the systemd module can be used:
http:
  service:
    - running
    - enable: True
    - provider: systemd
Default providers can also be defined in the minion config file:
providers:
  pkg: yumpkg5
  service: systemd
When default providers are set in the minion config, those providers will be
applied to all functionality in Salt; this means that the functions called by
the minion will use these modules, as will states.
Requisite Glob Matching
Requisites can now be defined with glob expansion. This means that if there are
many requisites, they can be defined on a single line.
To watch all files in a directory:
http:
  service:
    - running
    - enable: True
    - watch:
      - file: /etc/http/conf.d/*
This example will watch all defined files that match the glob
/etc/http/conf.d/*
Batch Size
The new batch size option allows commands to be executed while maintaining that
only so many hosts are executing the command at one time. This option can
take a percentage or a finite number:
salt '*' -b 10 test.ping
salt -G 'os:RedHat' --batch-size 25% apache.signal restart
This will only run test.ping on 10 of the targeted minions at a time and then
restart apache on 25% of the minions matching os:RedHat
at a time and work
through them all until the task is complete. This makes jobs like rolling web
server restarts behind a load balancer or doing maintenance on BSD firewalls
using carp much easier with salt.
Module Updates
This is a list of notable, but non-exhaustive updates with new and existing
modules.
Windows support has seen a flurry of activity this release cycle. We've gained
all new file, network, and shadow modules. Please note that these are still a
work in progress.
For our Ruby users, new rvm and gem modules have been added along with the
associated states.
The virt module gained basic Xen support.
The yum
pkg modules gained Scientific
Linux support.
The pkg module on Debian, Ubuntu, and derivatives now forces apt to run in a
non-interactive mode. This prevents issues when package installation waits
for confirmation.
A pkg module for OpenSUSE's
zypper was added.
The service module on Ubuntu
natively supports upstart.
A new debconf module was
contributed by our community for more advanced control over deb package
deployments on Debian based distributions.
The mysql.user state and
mysql module gained a
password_hash argument.
The cmd module and state gained
a shell keyword argument for specifying a shell other than /bin/sh
on
Linux / Unix systems.
New git and
mercurial modules have been added
for fans of distributed version control.
In Progress Development
Master Side State Compiling
While we feel strongly that the advantages gained with minion side state
compiling are very critical, it does prevent certain features that may be
desired. 0.9.8 has support for initial master side state compiling, but many
more components still need to be developed; it is hoped that these can be
finished for 0.9.9.
The goal is that states can be compiled on both the master and the minion
allowing for compilation to be split between master and minion. Why will
this be great? It will allow storing sensitive data on the master and sending
it to some minions without all minions having access to it. This will be
good for handling ssl certificates on front-end web servers for instance.
Solaris Support
Salt 0.9.8 sees the introduction of basic Solaris support. The daemon runs
well, but grains and more of the modules need updating and testing.
Windows Support
Salt states on Windows are now much more viable thanks to contributions from
our community! States for file, service, local user, and local group management
are more fully fleshed out, along with network and disk modules. Windows users
can also now manage registry entries using the new "reg" module.
Salt 0.9.9 Release Notes
0.9.9 is out and comes with some serious bug fixes and even more serious
features. This release is the last major feature release before 1.0.0 and
could be considered the 1.0.0 release candidate.
A few updates include more advanced kwargs support, the ability for salt
states to more safely configure a running salt minion, better job directory
management and the new state test interface.
Many new tests have been added as well, including the new minion swarm test
that allows for easier testing of Salt working with large groups of minions.
This means that if you have experienced stability issues with Salt before,
particularly in larger deployments, these bugs have been tested for, found,
and killed.
Major Features
State Test Interface
Until 0.9.9 the only option when running states to see what was going to be
changed was to print out the highstate with state.show_highstate and manually
look it over. But now states can be run to discover what is going to be
changed.
Passing the option test=True
to many of the state functions will now cause
the salt state system to only check for what is going to be changed and report
on those changes.
salt '*' state.highstate test=True
Now states that would have made changes report them back in yellow.
State Syntax Update
A shorthand syntax has been added to sls files, and it will be the default
syntax in documentation going forward. The old syntax is still fully supported
and will not be deprecated, but it is recommended to move to the new syntax in
the future. This change moves the state function up into the state name using
a dot notation. This is in-line with how state functions are generally referred
to as well:
The new way:
/etc/sudoers:
  file.managed:
    - source: salt://sudo/sudoers
    - user: root
    - mode: 400
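For comparison, the same state in the original syntax:
/etc/sudoers:
  file:
    - managed
    - source: salt://sudo/sudoers
    - user: root
    - mode: 400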
Use and Use_in Requisites
Two new requisite statements are available in 0.9.9. The use and use_in
requisite and requisite-in allow for the transparent duplication of data
between states. When a state "uses" another state it copies the other state's
arguments as defaults. This was created in direct response to the new network
state, and allows for many network interfaces to be configured in the same way
easily. A simple example:
root_file:
  file.absent:
    - name: /tmp/nothing
    - user: root
    - mode: 644
    - group: root
    - use_in:
      - file: /etc/vimrc

fred_file:
  file.absent:
    - name: /tmp/nothing
    - user: fred
    - group: marketing
    - mode: 660

/files/marketing/district7.rst:
  file.managed:
    - source: salt://marketing/district7.rst
    - template: jinja
    - use:
      - file: fred_file

/etc/vimrc:
  file.managed:
    - source: salt://edit/vimrc
This makes the two lower state decs inherit the options from their respective
"used" state decs.
Network State
The new network state allows for the configuration of network devices via salt
states and the ip salt module. This addition has been given to the project by
Jeff Hutchins and Bret Palsson from Jive Communications.
Currently the only network configuration backend available is for Red Hat
based systems, like Red Hat Enterprise, CentOS, and Fedora.
Exponential Jobs
Originally the jobs executed were stored on the master in the format:
<cachedir>/jobs/jid/{minion ids}
But this format restricted the number of jobs in the cache to the number of
subdirectories allowed on the filesystem. Ext3 for instance limits
subdirectories to 32000. To combat this the new format for 0.9.9 is:
<cachedir>/jobs/jid_hash[:2]/jid_hash[2:]/{minion ids}
So that now the number of maximum jobs that can be run before the cleanup
cycle hits the job directory is substantially higher.
ssh_auth Additions
The original ssh_auth state was limited to accepting only arguments to apply
to a public key, and the key itself. This was restrictive given the ways we
learned that many people were using the state, so the key section has been
expanded to accept options and arguments for the key that override arguments
passed in the state. This gives substantial power to using ssh_auth with names:
sshkeys:
  ssh_auth:
    - present
    - user: backup
    - enc: ssh-dss
    - options:
      - option1="value1"
      - option2="value2 flag2"
    - comment: backup
    - names:
      - AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0111==
      - AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0222== override
      - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0333== override
      - ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0444==
      - option3="value3",option4="value4 flag4" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0555== override
      - option3="value3" ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAlyE26SMFFVY5YJvnL7AF5CRTPtAigSW1U887ASfBt6FDa7Qr1YdO5ochiLoz8aSiMKd5h4dhB6ymHbmntMPjQena29jQjXAK4AK0500rMShG1Y1HYEjTXjQxIy/SMjq2aycHI+abiVDn3sciQjsLsNW59t48Udivl2RjWG7Eo+LYiB17MKD5M40r5CP2K4B8nuL+r4oAZEHKOJUF3rzA20MZXHRQuki7vVeWcW7ie8JHNBcq8iObVSoruylXav4aKG02d/I4bz/l0UdGh18SpMB8zVnT3YF5nukQQ/ATspmhpU66s4ntMehULC+ljLvZL40ByNmF0TZc2sdSkA0666==
LocalClient Additions
To follow up the recent additions in 0.9.8 of additional kwargs support,
0.9.9 also adds the capability to send kwargs into commands via a dict.
This addition to the LocalClient api can be used like so:
import salt.client
client = salt.client.LocalClient('/etc/salt/master')
ret = client.cmd('*', 'cmd.run', ['ls -l'], kwarg={'cwd': '/etc'})
This update has been added to all cmd methods in the LocalClient class.
Better Self Salting
One problem faced with running Salt states is that it has been difficult to
manage the Salt minion via states. This is because if the minion is called to
restart while a state run is happening, then the state run would be killed.
0.9.9 slightly changes the process scope of the state runs, so now when Salt
is executing states it can safely restart the salt-minion daemon.
In addition to daemonizing the state run, the apt module also daemonizes.
This update makes it possible to cleanly update the salt-minion package on
Debian/Ubuntu systems without leaving apt in an inconsistent state or killing
the active minion process mid-execution.
Wildcards for SLS Modules
Now, when including sls modules in include statements or in the top file,
shell globs can be used. This can greatly simplify listing matched sls
modules in the top file and include statements:
base:
  '*':
    - files*
    - core*

include:
  - users.dev.*
  - apache.ser*
External Pillar
Since the pillar data is just data, it does not need to come expressly from
the pillar interface. The external pillar system allows for hooks to be added
making it possible to extract pillar data from any arbitrary external
interface. The external pillar interface is configured via the ext_pillar
option. Currently interfaces exist to gather external pillar data via hiera
or via a shell command that sends yaml data to the terminal:
ext_pillar:
  - cmd_yaml: cat /etc/salt/ext.yaml
  - hiera: /etc/hiera.yaml
The initial external pillar interfaces and extra interfaces can be added to
the file salt/pillar.py; it is planned to add more external pillar interfaces.
If the need arises, a new module loader interface will be created in the future
to manage external pillar interfaces.
Single State Executions
The new state.single function allows for single states to be cleanly executed.
This is a great tool for setting up a small group of states on a system or for
testing out the behavior of single states:
salt '*' state.single user.present name=wade uid=2000
The test interface functions here as well, so changes can also be tested
against as:
salt '*' state.single user.present name=wade uid=2000 test=True
New Tests
A few exciting new test interfaces have been added. The minion swarm not only
allows testing of larger loads, but also allows users to see how Salt behaves
with large groups of minions without having to create a large deployment.
Minion Swarm
The minion swarm test system allows for large groups of minions to be tested
against easily without requiring large numbers of servers or virtual
machines. The minion swarm creates as many minions as a system can handle and
roots them in the /tmp directory and connects them to a master.
The benefit here is that we were able to replicate issues that happen only
when there are large numbers of minions. A number of elusive bugs which were
causing stability issues in masters and minions have since been hunted down.
Bugs that used to take careful watch by users over several days can now be
reliably replicated in minutes, and fixed in minutes.
Using the swarm is easy: make sure a master is up for the swarm to connect to,
and then use the minionswarm.py script in the tests directory to spin up as
many minions as you want. Remember, this is a fork bomb; don't spin up more
than your hardware can handle!
python minionswarm.py -m 20 --master salt-master
Shell Tests
The new Shell testing system allows us to test the behavior of commands
executed from a high level. This allows for the high level testing of salt
runners and commands like salt-key.
Client Tests
Tests have been added to test aspects of the client APIs and ensure that the
client calls work and that they manage passed data in a desirable way.