Creating a Static Inventory of Managed Hosts
Inventory uses INI style by default and can also be defined in YAML. Inventory can also be populated dynamically.
# Will supply the path to the Ansible configuration file that's currently in use
ansible --version
# We can match the entire CIDR range 192.168.4.0/22 using the pattern below
192.168.[4:7].[0:255]
# To match server01 through server20. Notice we have used 01 instead of 1, so this
# will match 01, 02, 03 ... instead of 1, 2, 3
server[01:20]
# To convert inventory to YAML
ansible-inventory -y --list
# To check if a host is defined in inventory:
ansible washington1.example.com --list-hosts
# Default Ansible config file:
/etc/ansible/ansible.cfg
# Default inventory file location:
/etc/ansible/hosts
# Creating our own inventory file:
[webservers]
web01
web02
# Once inventory is added, list it using the command below.
# Since we need output from our own file, we use -i and point it to our file "inventory".
# The output will be in JSON format.
ansible-inventory -i inventory --list
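For reference, the JSON that ansible-inventory --list emits for the two-host inventory above has roughly this shape (a sketch; exact keys can vary by Ansible version):

```json
{
    "_meta": {
        "hostvars": {}
    },
    "all": {
        "children": [
            "ungrouped",
            "webservers"
        ]
    },
    "webservers": {
        "hosts": [
            "web01",
            "web02"
        ]
    }
}
```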
Managing Connection Settings and Privilege Escalation
In this section, we'll explore managing connection settings and privilege escalation.
Agentless architecture:
Linux - SSH and Python
Windows - WinRM and PowerShell
Inventory describes the list of hosts that we wish for Ansible to manage; we can now go forward and understand how to describe to Ansible the various other bits of information it needs.
Ansible configuration file:
If you're unsure which configuration file you're using, you can use the --version flag for the ansible command.
ansible --version
By default, Linux-based systems will take advantage of the SSH protocol. If you wish to define a separate protocol or a non-standard port, you'll need to describe that to Ansible as well.
The
user being used to log into the system can also be described in inventory.
Once you've gained access to a system, if you need to escalate privileges to the administrative or root credentials, Ansible will need to understand how that occurs in your environment. By default, it will use the sudo command to do so. Other options exist, such as the su command; if you use a different privilege escalation method, you'll need to tell Ansible about it.
Lastly, you can describe to Ansible whether an SSH
password should be provided or if key‑based authentication is in place.
All
of these defaults can be adjusted within the Ansible configuration file or by
passing a set of flags on the command line during invocation.
Which config file does Ansible consult, and in what order?
It is not uncommon to have several configuration files for different Ansible workloads in your environment.
1. Ansible consults the ANSIBLE_CONFIG environment variable. If this is set, it will contain the path of an Ansible configuration file.
2. If the environment variable is not set, Ansible looks for the configuration file in the current working directory.
3. If it doesn't find an ansible.cfg file in the current working directory, it will then look in your home directory for a dot file, or hidden file:
.ansible.cfg
4. Lastly, if it hasn't found a configuration file in any of those locations, it will use the default installation file at /etc/ansible/ansible.cfg.
Again, just a reminder that the --version flag for the ansible command will clearly spell out which configuration file is being consulted. Be sure to check this as you navigate around the file system, because you may have switched into a directory that contains an alternative ansible.cfg file.
ansible.cfg:
The
ansible.cfg file consists of several sections. Each section contains a heading
and has a collection of key value pairs. The section headings, or titles, are
enclosed within square brackets, and then the key value pairs are set as key
equals value. The basic operations of Ansible executions take advantage of two main sections. One is the [defaults] section for Ansible operations, and the second
is the privilege_escalation section where Ansible looks to understand how to
gain privilege escalation when invoked for your managed hosts. The connection
settings we discussed previously will be defined within the [defaults] section of
the configuration file. This will include three main pieces of information for
Ansible to understand.
[defaults] section:
1. remote_user explains which user to take advantage of when connecting to managed hosts. If you do not supply a remote user argument, it will use your current username.
2. remote_port specifies which SSH port you'll use to contact your managed hosts. By default, this is port 22.
3. The ask_pass argument controls whether or not Ansible will prompt you for an SSH password. By default, it does not prompt for a password, as it is most customary and a best practice to use key-based authentication for SSH connections.
privilege escalation section: [privilege_escalation]
In the privilege_escalation setting section of the
configuration file, several main arguments are used for Ansible to understand
how to escalate privileges to a higher‑tiered user, such as the root user.
The
become key will describe whether or not you will automatically use
privilege_escalation. This is a Boolean, and the default is set to no.
The
become_user key will define which user to switch to when privilege escalation
occurs. By default, this is the root user. The become_method key will determine
how Ansible will switch to becoming that escalated user. Sudo is the default
implementation; however, there are other options, such as su.
The
become_ask_pass key will control whether or not Ansible prompts you for a
password when escalating privileges. By default, this is set to no.
Example of an ansible.cfg file:
In general, an ansible.cfg file should contain only the keys you're overriding from the defaults.
[defaults]
inventory = ./inventory
remote_user = ansible
ask_pass = false

[privilege_escalation]
become = true
become_user = root
become_ask_pass = false
In
a typical environment, not all hosts are equal, and there could be different
properties we wish to set as variables on specific hosts.
One of the easiest ways to provide host‑specific variables is to create a host_vars
directory. In that directory you'll create a text file that matches the
hostname. Within this text file, you can supply a list of key value pairs that
are unique to that host. Any variables provided in this fashion will override the ones set within the ansible.cfg file. There's also a slightly different syntax and naming when it comes to using this method.
Host-based connection and privilege escalation variables:
ansible_host will specify a different IP or hostname to use when connecting to the host instead of the one specified in inventory. Think of this as a secondary IP or alternative hostname for that host. ansible_port will specify the SSH port that you would prefer to use for connecting to that host.
ansible_user specifies the user for that connection.
ansible_become will specify whether or not privilege escalation should be used for that host.
ansible_become_user specifies which user to become on that host.
ansible_become_method specifies the method by which privilege escalation works, whether this is sudo, su, or something else.
Example of some host-based connection variables in a host_vars subdirectory:
Here
we have a subdirectory, host_vars, containing the file server1.example.com. The
server1.example.com file contains variables specifically used when connecting
and manipulating server1.example.com only.
File -> project/host_vars/server1.example.com
ansible_host: 192.0.2.104
ansible_port: 34102
ansible_user: root
ansible_become: false
No
other servers will inherit this, but this will override any defaults that are
contained in ansible.cfg when you interact with server1.example.com.
Creating ansible.cfg:
When creating this file, refer to the default file at /etc/ansible/ansible.cfg. It already lists the defaults; copy them into your file and change a setting only where you need something different.
To filter the inventory:
ansible databases --limit db01 -m ping
(databases -> a group heading in the inventory)
Running Ad Hoc Commands
Ansible
provides a catalogue of modules. These are the underlying code that explains via
code how Ansible can provide the automation tasks we'll leverage. Modules exist
for a large number of system administrative tasks such as creating and managing
users, installing and removing or even updating software, deploying
configurations, as well as configuring the network services that run on your
systems. Ansible modules are what is known as idempotent. In other words,
they'll always check to see if the work being requested is required on the
system or if it's already in the desired state. If a system is already in the
desired state described by your Ansible work, then it will skip that and report
back that no change was necessary. If a change is required, Ansible will then
perform that change and report that as well.
An
ad hoc command runs a single module against a specified host to perform a single
task. To run ad hoc commands, we'll use the ansible command.
After the ansible command, you'll need to supply a
host‑pattern. This host‑pattern will specify which host this task will run
on.
Additionally, you'll need to specify a module using the -m flag. Each module takes a unique set of arguments, which you'll provide with the -a flag.
Lastly, you'll specify an inventory file with the -i flag, where the host can be found for Ansible.
One of the simplest ad hoc commands, as well as one
of the most common system administrative tasks, is ping. The ping module doesn't
actually send an ICMP packet like we're used to as system administrators using
the ping command, but it does check to see if Ansible can contact the managed
host. Specifically in a Linux‑default implementation, this would be using an SSH
interaction.
ansible all -m ping
When using Ansible ad hoc commands, there are a number of flags available to the user to override default behaviors. The default behaviors are defined in the ansible.cfg configuration file.
We can see that it may be necessary to tell Ansible
that we need a prompt for a
password for our ad hoc command. You
can use the -k flag or the --ask-pass flag for this behavior.
When you need to specify a specific user for the interaction, the -u flag will allow you to do so. This will override the remote_user setting contained within ansible.cfg.
A -b flag enables privilege escalation akin to the become argument within our configuration file.
The capital -K flag denotes that we need to be prompted for a password during privilege escalation, and the --become-method flag overrides the default privilege escalation method.
With Ansible, the default is sudo. Other valid
choices exist, such as su, and can be seen using the ansible-doc command. Most Ansible modules take a set of
arguments to describe the actions you wish for Ansible to perform.
The -a flag in an ad hoc command allows you to supply those arguments. Syntactically, we'll contain those within single quotes and put a space between each key-value pair. For example, use the -m flag to declare the user module and the -a flag to specify its arguments.
Also
important to consider is the concept of state. Here we've declared a
state=present. We'll take a look through this course at state as that is a main
approach of how Ansible has you describe the behavior you wish for it to
perform.
ansible -m user -a 'name=newbie uid=4000 state=present' servera.lab.example.com
For
example, if we wish to remove this user, we could change the state to absent and
rerun this command. That would then remove the user we had just created.
Below is a helpful list showing some of the flags available to you for ad hoc commands.
Configuration Directive | Command-Line Option
inventory               | -i
remote_user             | -u
become                  | -b
become_method           | --become-method
become_user             | --become-user
become_ask_pass         | -K
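Combining several of these flags, a hypothetical ad hoc invocation that overrides the inventory, remote user, and privilege escalation defaults might look like this (the inventory path and username are illustrative):

```shell
# Ping every host in ./inventory as user devops, escalating to root
# via the default become method (sudo) and prompting for the sudo password
ansible all -i inventory -u devops -b --become-user root -K -m ping
```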
To take a look at the list of modules available to us for ad hoc or one-off commands, we can use the ansible-doc -l command. With the -l flag, it will list all modules available to us. Use grep to filter. To view a single module's documentation:
ansible-doc ping
I'll show a new technique of limiting to a single
host. I'll then use an explicit statement for our inventory, the -k flag to tell Ansible that I
wish for it to prompt me for passwords
when authenticating to this host, and then a simple module call to ping. We're
prompted for that password, and supplying our user's password for that system,
the command continues. Note that instead of all systems responding here,
only the limited
web01 system responds.
ansible all --limit web01 -i inventory -k -m ping
Let's
try an ad hoc command to restart sshd on
one of our targeted hosts. I'll use an ansible command. This time I'll allow it
to rely on the inventory we know it'll be using instead of explicitly stating it
to show you how simple ad hoc commands can be. I'll target all hosts. I'll call
the module service. And I'll supply the
arguments for that module. We'll use the key state setting it to the value
restarted. Additionally, we'll name the service we wish to restart.
ansible all -m service -a "state=restarted name=sshd"
Let's
try one more example of ad hoc commands. Let's try to create users on our target
machines. First, let's take a look at the Ansible module for user. We can note
that the equal sign denotes mandatory fields, and not many of them are mandatory
here, but we do have a lot of flexibility with the user module itself. I'll
create a simple user by name. Here we can see that name is one of the mandatory
fields, so I'll create a simple user using name and set a simple password. Let's
use the user module to create a simple user across our web server systems.
ansible webservers -m user -a "name=test password=secret state=present"
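One caveat worth noting with the command above: the user module's password argument expects an already-hashed value, so passing a literal string like secret stores it verbatim in /etc/shadow and the login will not actually work. A sketch that hashes the password at invocation time with Ansible's password_hash filter:

```shell
# The password_hash filter produces a SHA-512 crypt hash the user module can store
ansible webservers -m user -a "name=test password={{ 'secret' | password_hash('sha512') }} state=present"
```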
Creating a Simple Playbook
Playbook:
A playbook contains one or more plays; a play is an ordered list of tasks to run against hosts in your inventory. Each task will
take advantage of a specific Ansible module to perform some action against your
managed hosts. Most of the tasks authored throughout the modules are idempotent
and can safely be executed over and over again without issue. The intention of a
playbook is to alter lengthy, complex manual system administration into easily
repeatable routines. This should provide predictability, as well as reusability
for the work you author with Ansible.
Create simple playbooks
The first thing to know when formatting an Ansible playbook is YAML. YAML is a simple-to-author structure, with the standard file extension .yml. Two-space indentation, using the space character only, is the conventional style within YAML files; note that spaces cannot be substituted with the tab character, which is not allowed in proper YAML. YAML doesn't place strict requirements on how many spaces are used for the indentation, but two basic rules come into play:
Data
elements at the same level in the hierarchy must align with the same
indentation.
Items
that are children of another item must be indented more than their parents. All
children of the same data element, again, must be indented with the same
indentation.
Proper playbooks always begin with three dashes (---) to denote the start of the document. These will be fully left-justified.
Failures will result in immediately halting the play execution.
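Pulling these formatting rules together, a minimal playbook might look like this (the group name and task are illustrative):

```yaml
---
- name: Ensure web servers have httpd installed
  hosts: webservers
  tasks:
    - name: Install httpd
      yum:
        name: httpd
        state: present
```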
Within our inventories, we may not always find it
appropriate to target every node within a group or even the entire inventory.
This is where the --limit flag will allow us to target specific hosts within our inventory. The limit is a host pattern that further limits the hosts for the play. Given our playbook targeting all hosts, we could then supply a --limit argument and call out a singular host, or
even a host pattern for this to execute upon.
ansible-playbook site.yml --limit datacenter2
The ansible-playbook command also provides us a helpful syntax-check argument. We can call ansible-playbook, passing in the --syntax-check argument, then simply name the YAML file you wish the syntax check to be performed upon. If any errors are found, Ansible will do its best to denote where in the file that error exists.
ansible-playbook --syntax-check webserver.yml
It can often be advantageous, before performing the actual execution of a playbook, to do a test or dry run of its work. The ansible-playbook command provides the -C flag to do just that. You can see an example here of ansible-playbook using the -C flag on the webserver.yml playbook file. The resulting output simulates what would occur if you removed that flag, but does not actually perform the work. Once the work has been validated and you approve for this to carry on, simply remove the flag and run this again.
Gathering Facts is a built-in feature of Ansible executions where Ansible will profile all the targeted hosts to understand as much as it can about them.
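Gathered facts are exposed as variables and can be referenced like any other; a small sketch using two standard fact names:

```yaml
---
- name: Report a couple of gathered facts
  hosts: all
  tasks:
    - name: Show distribution and default IPv4 address
      debug:
        msg: "{{ ansible_facts['distribution'] }} - {{ ansible_facts['default_ipv4']['address'] }}"
```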
Using Variables in Plays
Naming our variables:
There are a few rules. Variable names always start with a letter. Additionally, they can only contain letters, numbers, and underscores. Periods and dashes are not allowed in variable names.
Scope of variable
Once you've begun to create variables, it's
important to understand the scope or the available reach for each variable
you've created. The concept of global, host, and play‑based scopes exists.
Global
variable -> One that is set for every
host. An example of this would be extra variables we create within a job
template.
Host‑based values -> Are set for a particular host or
host group. These would include variables we set in the
inventory or in our host_vars directory as explored in a previous module.
Play‑scoped variables -> Are available for all hosts in the context of a currently executing
play. These play‑based scoped variables include things included in
the vars directive
at the top of a play or in the
include_vars
tasks contained.
Variable Precedence:
When variables are defined in multiple places,
precedence also has to be considered. If a variable is defined at multiple
levels, the level with the highest precedence will take over. A
narrow‑scoped variable, in general, will take precedence
over a wider‑scoped variable. Considering the types we discussed
in our previous slide, this would mean that a play‑scoped variable would override a
global‑scoped variable. Variables defined within a
playbook are overridden by extra variables defined on the command
line during execution. To override in
this manner, simply provide the -e option and the substituted value for any variables you wish to
override when you're calling
ansible‑playbook.
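As a sketch, overriding a play-defined variable from the command line (the playbook name and value are illustrative):

```shell
# Extra variables passed with -e take precedence over vars defined in the play
ansible-playbook site.yml -e "user_name=alice"
```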
Defining Variables:
A
common method is to place a vars block at the beginning of the play and then
list the variables you wish to define. You can see an example of vars being
defined in this way in this block where the user_name and user_state are defined
in a vars block at the top of a play.
- hosts: all
  vars:
    user_name: joe
    user_state: present
You
could additionally define these variables in an
external file. If you do so in this manner, we use the vars_files
argument at the top of a play to load variables in from a file located
elsewhere. You can see an example here where the vars_files block is created and
the relative path to the vars directory and a users.yml file has been provided.
- hosts: all
  vars_files:
    - vars/users.yml
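The referenced vars/users.yml file would then be a plain YAML mapping, for example (values illustrative):

```yaml
# vars/users.yml
user_name: joe
user_state: present
```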
Referencing Variables:
Once
defined, variables can then be used within your tasks contained in a playbook.
When we're ready to reference a variable within
a play execution, we'll substitute its value by using double braces {{ variable_name }}. The
double braces will contain the name of the variable we wish to substitute in.
You can see an example here where we have a variable defined in the vars block
at the top of the play as user_name. The value this is set to is joe. Within our
task, we're creating the user Joe by using variable interpolation. You can see
the double brace nomenclature utilized to substitute the value joe for the
variable user_name. We're doing so in two places, both in the name of the task,
as well as in the name provided for the argument for the user module.
- name: Example play
  hosts: all
  vars:
    user_name: joe
  tasks:
    - name: Create user {{ user_name }}
      user:
        name: "{{ user_name }}"
        state: present
When referencing one variable as another variable's value, the double brace will start the value. When it does, you will also need to quote around this value. This prevents Ansible from interpreting the variable reference as starting a YAML dictionary. Ansible provides a helpful hint here: a value such as with_items: {{ user_name }} without the quotation marks should be written with quotation marks around the double braces -> with_items: "{{ user_name }}".
Host-based variables and group-based variables
As the names denote, host variables apply to a specific host, while group variables apply to
all hosts in a host group or group of groups.
Host variables will take precedence over any group variables supplied on
a host, but variables defined inside a play will then override either of these.
You can define both host and group variables in the inventory itself or in
subdirectories that contain YAML files that match the names of the host in a
host_vars subdirectory or group in a group_vars subdirectory. These YAML files
will then contain the list of variables you wish applied with those scopes.
Variables defined in the host_vars and
group_vars directory have a higher precedence than those defined as
inventory variables.
To
utilize this technique, you'll need to create directories at the same level as
your Ansible playbook. Creating the two directories, group_vars and host_vars,
will allow you areas to provide YAML files to define variables with this
technique. If we had a group defined in inventory named servers, we could then
create a subdirectory group_vars that contains the YAML file, servers. Any
variables we define in the servers file will then be supplied as variables on
all hosts in the servers group. In the example to the right, we see proper YAML
syntax for setting variables in this fashion.
ansible_user: devops
newfiles:
The
ansible_user variable is set to the string devops, while the newfiles variable
is a list of two different values. If you wish to create variables for a specific host with this technique, create a host_vars directory and place those variables in a YAML file that matches the host's name.
Here's
a look at a proper file hierarchy that has examples of this technique.
You
can see we have the group_vars and host_vars subdirectory at the same tier our
playbook is contained.
Underneath
the group_vars subdirectory, we have
files for all, datacenters, datacenters1, and datacenters2. These represent
groups we've defined in inventory, and each of these files will contain a list
of variables that are applied to the members of those groups specifically.
In
the host_vars subdirectory, we have four
different files that correspond to each of our hosts, demo1 through demo4. The
files contained here will have a list of variables that explicitly apply to
those individual hosts.
Defining variables within a playbook.
In
this example, we're doing exactly that. We have a vars block that has a variable
named packages. The dictionary packages then contains a list of packages we'll
use a task to install. The packages syntax uses proper indentation of two spaces
before listing out each of the members of the packages dictionary.
Once
we've defined this dictionary, we can then call a task using a package installer
module, such as yum, in the argument for the yum module's task. The name can
then use the variable, using both double quotations and double braces to call
packages. The packages variable contains a list of five different individual
packages that would then be looped through to install all five of these pieces
of software.
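As a sketch of the play just described (the package names are illustrative), the packages variable holds a list that the yum module then installs; yum accepts the whole list at once, which is equivalent to looping over the items:

```yaml
---
- name: Install a list of packages
  hosts: all
  vars:
    packages:
      - httpd
      - mariadb-server
      - php
      - php-mysqlnd
      - firewalld
  tasks:
    - name: Ensure all packages are present
      yum:
        name: "{{ packages }}"
        state: present
```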
Selecting Items from a Dictionary
Here
we can see a more elaborate structure for this variable named users. This users
variable is an array of values. It is possible to return one value from the
array within this variable. When we wish to do so, we'll use bracketed syntax, as well as single quotation marks around each of the elements.
For
example, we could reference the users variable and then the aditya user's
username and then their first name by the syntax users, opening the bracket,
opening the quotation mark, and naming the username Aditya, and then following
that with an open bracket and open quotation mark for the fname.
In
a similar fashion, we can get Carlotta's home directory reference with users
open bracket, open quotation, Carlotta. Close both of those. And then open
bracket, open quotation home.
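In YAML, the users structure and the lookups just described might look like this (the field values are assumptions for illustration):

```yaml
users:
  aditya:
    fname: Aditya
    home: /home/aditya
  carlotta:
    fname: Carlotta
    home: /home/carlotta
```

With this structure, {{ users['aditya']['fname'] }} yields Aditya, and {{ users['carlotta']['home'] }} yields /home/carlotta.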
Register Statement
The
register statement will allow us to capture the output of a task and store it in
a variable during execution. The output is saved into a temporary variable that
could be used throughout the rest of the playbook for either debugging or
utilization for another task.
This
is a common technique that allows us to take advantage of the return values from
each module, store them in a variable, and reuse them throughout the rest of our
workloads.
These
registered variables are only stored in memory and are destroyed once playbook
execution completes.
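A minimal sketch of register in action, capturing a command's output and reusing it in a later task:

```yaml
---
- name: Capture and reuse task output
  hosts: all
  tasks:
    - name: Check the running kernel
      command: uname -r
      register: kernel_result

    - name: Show the captured stdout
      debug:
        msg: "Kernel is {{ kernel_result.stdout }}"
```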
From
our previous play, let's have a look at what it looks like currently.
We
can evolve the value that we're supplying for the username test as a variable at
the top of the play.
Alright, up here in the heading keys, we can add a
new key for vars. Since this is a child, we'll need to
two‑space indent beneath it, but we'll simply supply
the vars. Let's create a variable named username. We'll set this user name to
test. We'll then utilize it down below in the play by doing variable
substitution. Since we're substituting
a variable, we'll first need enclosing quotation marks, then the double braces, and the variable name, username, in the middle.
ansible-playbook example.yml
Let's
supply a command line variable
substitution.
ansible-playbook -e "username=student"
We'll say ansible‑playbook and, providing extra variables, we can
supply username and set it equal to a second username. It will add both test
user and student user.
Oftentimes,
we may want the variable to contain a list of values. For example, we may have
additional information we want to supply. We'll transform the username variable
into more of a dictionary of values.
Here, I'll remove test for now and then begin building child values underneath
username. Each of these child keys will be indented two spaces further. And as
we provide more values for the test user, we can then further indent two
additional spaces. (Working) Now that we've evolved our variable, we'll need to
update the interpolation below. The
notation we'll use here will involve brackets and single quotation marks to
iterate through the fields. Let's also take advantage of that new value that
we've supplied. The user module also provides a key comment to allow for
additional commentary within the /etc/passwd file.
We've
previously taken a look at the host_vars
concept, but let's also evolve our playbook to take advantage of that
technique. I'm going to go ahead and create the group_vars alongside the
host_vars directory. This is what we currently have.
To
clean up our work for further exercises, I'm going to go ahead and remove the
db01 host variables we previously set by simply deleting that file. After that
clean up, we have the current structure in place.
Since
we've been performing the work on the webservers group, we can create a file in
the group_vars directory for the webservers group. We can migrate the variables
we just created into that file and take advantage from the playbook in the exact
same fashion.
Let's
create the file group_vars/webservers. In this file, we'll simply paste our
variable information from playbook.
It can be helpful to provide a comment at the beginning of each file so we understand the file's intention. In example.yml, remove those values. Since we have no additional variables currently, we could leave the vars key blank; but since we aren't providing any variables there, I'll go ahead and remove it as well. It's best to keep your playbooks as clean as possible.
ansible-playbook example.yml
Ansible Vault:
Create a new encrypted file:
ansible-vault create filename

View an encrypted file:
ansible-vault view filename

Edit an encrypted file:
ansible-vault edit filename

Encrypt an existing file:
ansible-vault encrypt filename

Save to a new file:
--output=new_filename

Decrypt a file:
ansible-vault decrypt filename
Now that we have encrypted information, we'll want
to use that within our playbooks. We can provide the vault password that we set
when encrypting the file with the --vault-id option. You can see an example command:
ansible-playbook --vault-id @prompt filename
The
@prompt option ensures that Ansible
understands it needs to receive user input for the password. If you do not
provide that password, Ansible will return an error.
You may have different passwords for various files
that are encrypted using Ansible Vault. When you need to supply multiple passwords, we'll have to understand the technique that
allows us to do that. Using the --vault-id option, we can set labels on the encrypted file. We can
then use this as many times as necessary to label the various files we have
encrypted and ask Ansible to prompt us for the different passwords when we need
to supply them. Have a look at this last example.
ansible-playbook --vault-id vars@prompt --vault-id playbook@prompt site.yml
ansible-playbook calls --vault-id and supplies a vars@prompt argument. It calls it again, providing a playbook@prompt argument before then calling the playbook
site.yml. Ansible will then prompt you with this execution for two different
passwords, one for vars and one for playbook. Given that you provide the two
appropriate passwords, the files will be decrypted when utilized by the
playbook, and execution will proceed; else Ansible will provide an error.
Once
we've created a password, we may need to change
that on an encrypted file.
ansible-vault rekey filename
To
change the password of an encrypted file, we'll use the subcommand rekey for the
Ansible Vault command. You can use this subcommand on multiple data files at
once, providing a helpful way to rekey a bunch of files to the same password.
The rekey subcommand will prompt for the
current password and then the new password you wish to set for these encrypted
files.
While we're discussing sensitive information, sometimes Ansible output can include sensitive values. When this is the case, you may want to suppress the output from a given task. To do so, we can use the no_log key. By setting this value, Ansible will suppress the output of the task so that sensitive information is not displayed. Have a look at these two examples.
In the top example, we're debugging a variable
called secret. You can see on the right the output declares the secret. You know
it. In the second example on the bottom left, we've added the no_log keyword and
set that to true. Note that when we debug this variable on the right‑hand output, it does not display the value of that
variable.
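A sketch of the two debug tasks described (the variable name secret comes from the narration):

```yaml
- name: Debug without no_log (value is displayed)
  debug:
    var: secret

- name: Debug with no_log (output is censored)
  debug:
    var: secret
  no_log: true
```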
Let's take a look at a simple file I've authored, called secret. Here, I've created a single variable named secret set to a value of our_secret_data.
I
want to be able to encrypt this file to where it can't be displayed or accessed
by users without a password and then use that data within Ansible workflows.
First things first, let's encrypt that file.
We'll
use the Ansible Vault subcommand encrypt and simply name the file. It'll prompt
us for a password, which we'll enter and confirm. Encryption is now successful.
When we try to display the contents of that file again, we'll notice we don't
have access to the data itself, and we do have a handy note that Ansible Vault
is in use here.
Let's
evolve our workloads in the existing playbook to include this concept. Let me
open up my editor. You can see what I've done here is I've now loaded in the
variables from our encrypted file, that filename's secret.
Ansible has a module named include_vars that allows us to do exactly that. I have simply
presupplied this information to make it a little quicker for us to take a look
at this example. Next, I'm using the debug module, which allows us to display
the contents of variables within our Ansible workspaces. I'm naming the variable
we created. Let's go ahead and run this. I'll run this with no additional
options so we can see what happens when we try to access this vault‑encrypted file without taking the proper techniques
to decrypt during usage. Let's see what happens. Uh oh, it's attempting to
decrypt the file, but no vault secrets were found.
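A minimal sketch of the playbook described here, assuming the encrypted variables file is named secret and defines a variable also named secret:

```yaml
---
- hosts: all
  tasks:
    # Load variables from the vault-encrypted file "secret"
    - name: Load the encrypted variables
      include_vars: secret

    # Display the variable defined inside that file
    - name: Show the secret variable
      debug:
        var: secret
```

Running this without a vault password fails with "no vault secrets were found"; adding --ask-vault-pass lets Ansible decrypt the file at runtime.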
Okay,
let's see what happens when we now supply the proper flag that allows us to pass
the password in.
Oh,
great! It was able to decrypt our data, and we can see the contents of our
variable called secret. Excellent.
But
let's take a look at how we can maybe use this in a more meaningful way, such as
to supply a password for some of those users we are creating.
In
our group_vars file for the webservers group, let's take a look at the
group_vars webservers set of variables we've created. Let's supply another value
here for the password for this user, and we'll use the variable we've created.
Excellent. Let's save our work.
Now
we'll adjust our playbook to supply the password for these users.
Okay, now we've added that argument. So let's
see if we are able to supply the password for our encrypted file and have this
playbook create these users and set their password to that secret we've now
encrypted.
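The change described might look like this (the task and user names are illustrative; user_password is assumed to be set in the group_vars file to the vaulted secret variable). The user module's password argument expects a pre-hashed value, which is why Ansible warns when it receives plaintext:

```yaml
- name: Create a web user with the vaulted password
  user:
    name: webapp                      # hypothetical user name
    password: "{{ user_password }}"   # resolves to the vaulted secret
    state: present
```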
I'll
run the playbook as we did previously, asking it to prompt us for our vault
password.
ansible-playbook --ask-vault-pass example.yml
We
do get a warning about the fact that we're passing the password in plaintext.
However, we know that that password is stored in an encrypted location. But from
Ansible's perspective, it's decrypted at the time of execution. We can ignore
this for now because this is just a simple example.
Now
additionally, we're displaying the contents of that secret data anyway. So let's
use one more argument to make sure we're not supplying that secret encrypted
data in the Ansible output. Let's add one more argument into our example
playbook.
The
argument we'll add will go here, and we're simply going to supply the no_log
option and set it to true. Let's save our work and rerun it one final time. Note
that we're no longer displaying the variable in this section here. This is the
most appropriate way to handle sensitive information and ensure that Ansible
isn't revealing data that you do not wish displayed in the Ansible output.
Let's talk about a few other subcommands available
with the ansible‑vault command. Specifically, we know that we were
able to encrypt with ansible‑vault encrypt. But if we no longer wanted that file
encrypted, we could use ansible‑vault decrypt and name the file.
ansible-vault decrypt secret
Task
Iteration with Loops
We'll
want to demonstrate basic looping functionality within Ansible in order to
iterate over tasks.
If
we were to need to create several users, for example, we can use one single task
with a loop to create five users instead of five individual tasks that each take
that unique argument.
Ansible
provides this functionality using the keyword
loop. Loops can then be configured to repeat a task using each item in a
given list. The loop variable item holds the value during each iteration.
Here's
an example on the left of a technique that does not use loops to create three
different users. We can see the user module called three different times, each
supplying a different name. All three of these tasks look identical with the
exception that the name changes.
Using
what we learned before, we can define a variable called myusers. This myusers
variable will then contain the three names that we were using on the left
example. Once we've created this variable, we can then take advantage of the
loop keyword. Using the loop keyword in the example on the right, we can call
the myusers variable. We'll substitute this into the name field, and to do so,
we'll use the keyword item.
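A sketch of the right-hand example, with hypothetical user names:

```yaml
vars:
  myusers:
    - alice
    - bob
    - carol

tasks:
  - name: Create each user in the list
    user:
      name: "{{ item }}"   # item holds one name per iteration
      state: present
    loop: "{{ myusers }}"
```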
More
advanced looping techniques exist besides loop. We also have the concept of
with_dict, which takes in a list, but
each item in the list is actually a hash or dictionary instead of
just a simple value. In the example below, we're defining the with_dict to have both
name and groups for each element. You can see that we have two groups,
flintstones and rubbles, that get created in our first task. In the second task,
we take advantage of the with_dict to have various members of these two groups
created. Note the nomenclature for item.name and item.groups that corresponds
with the two fields created underneath with_dict.
In
the first element, we can see that the name is set to fred, and the groups
corresponds to flintstones. In this way, item.name would translate to fred, and
item.groups would translate to flintstones.
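A sketch following the transcript's with_dict form (in current Ansible, the same shape is more commonly written with loop over a list of dictionaries):

```yaml
- name: Create the groups
  group:
    name: "{{ item }}"
  loop:
    - flintstones
    - rubbles

- name: Create members of each group
  user:
    name: "{{ item.name }}"
    groups: "{{ item.groups }}"
  with_dict:
    - { name: fred, groups: flintstones }
    - { name: barney, groups: rubbles }
```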
While
loops provide us a tremendous power within Ansible, they're not always the most efficient way to
accomplish a task. Depending on the module, you can consider whether it's more
appropriate to use a loop or not.
In
the example here, we're taking advantage of the yum module to install several
pieces of software. Using the loop structure on the left, we'll call the yum
module three separate times to install individual packages. Due to the
functionality provided within the yum module, we could have just passed those in
as three different names for the argument within the module. The example on the
right would call the yum module one single time, providing all three arguments
for installation. In this case, the task on the right will be more efficient and
faster. This is very contextual per module, so consider how each module may
work, and some testing and exploration may be necessary to determine whether or
not loops are appropriate in your workload.
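The two approaches might be sketched like this (package names are illustrative):

```yaml
# Less efficient: the yum module runs once per package
- name: Install packages one at a time
  yum:
    name: "{{ item }}"
    state: present
  loop:
    - httpd
    - mariadb-server
    - php

# More efficient: one module run handles all three in a single transaction
- name: Install all packages at once
  yum:
    name:
      - httpd
      - mariadb-server
      - php
    state: present
```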
Let's hop in our terminal and give this a try.
Let's try creating a simple loop in a playbook so that we can add a couple
users. Up to this point, we've only been managing single users at a time, but
the power of loops is to allow us to do multiple items in a given task. Let's
have a look at a variable file I've just created.
Note
we have two files now. The databases and webservers group each have their own
variables file. I've populated the databases file with a simple list of what I'm
calling dbusers. Namely, the usernames will be test, dev, and qa. We'll create a
simple playbook using the user module that adds these three users to our
database systems. We'll do so using a loop, I'm going to create a file I'm
calling loop.yml.
I'll start with the typical kind of headings, only
this time instead of webservers, I'm going to target databases, the name of our
group that includes our DB systems. And since it's multiple users instead of the
single user, I'll update that line as well. Let's fill out the arguments for the
user module that we're going to use to iterate through our loop of database
users. We'll supply the name field. And in this case, since we're going to use a
loop, loops require the keyword item, and we represent that in the typical
variable fashion included in both quotation marks, as well as double braces.
We'll still declare a state, and in this case, we'll set the state to present.
Now we need to include our loop field. We called our variable db_users,
so that's the variable that we'll want to supply here to the loop
argument. Now as the loop argument also takes a variable, we'll need to come
back and add the proper syntax of the double braces and quotation marks in order
to have this function properly. Now that we've created that simple playbook,
let's go ahead and execute it using ansible‑playbook and call the name of the playbook we just
created, loop.yml. Excellent. We can see that the user creation task has added
three users to two systems each, six entries altogether, a very powerful example
of how loop can be used to iterate through our variable lists.
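A sketch of loop.yml as described, assuming db_users is defined in the group_vars file for the databases group:

```yaml
---
- hosts: databases
  become: true
  tasks:
    - name: Create database users
      user:
        name: "{{ item }}"
        state: present
      loop: "{{ db_users }}"   # test, dev, qa from group_vars
```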
Triggering Tasks with Handlers
We'll
learn to author handlers that run tasks when
another task makes changes on a managed host. Ansible handlers can be
created within our workloads. They'll take advantage of all the same modules we
use throughout all of our other Ansible workloads. These Ansible modules are
designed to be idempotent. As they are idempotent, Ansible only tries to do work
when it's absolutely necessary, not just because it came up in a task. To do so,
it will always validate the state of a machine before performing any actions and
only perform those actions when it's necessary to remediate to the desired
state. An example of where we consider the concept of a handler is when we may
want to trigger the same action at various points within our playbooks.
A
good example of this is when we want to
reboot a server after a certain set of
actions. We may have three or four tasks within a playbook that require a
restart when successfully executing, but we wouldn't want to restart the server
after each one of those. We can author a handler to restart the server and call
it for each of these tasks. Ansible will then keep track of the request to
restart the server and perform that only once using the authored handler.
As
stated before, handlers are simply tasks just like we've seen before. But these
are defined in a way that they respond to a notification triggered by another
task. A task will only notify the handler when the task makes a change on
something on your managed host. A handler has a globally unique name for your
workloads, and it's triggered at the end of a
block of tasks in a playbook. If there isn't a task that notifies the
handler, then the handler will not run. If multiple tasks notify the same handler, the handler
will only run once at the end of those tasks' execution. Since they're
simple tasks like any others, you have access to the full library of Ansible
modules that you've seen so far. Typical things like reboots and service
restarts are commonplace usages for task handlers. You can consider a handler an
inactive task that only executes when explicitly invoked by a notify statement
in another task.
Here's
an example of a defined handler.
We
can see that we're using a template module to create some work. And at the
bottom of that execution, we use the keyword
notify. The notify keyword then supplies the argument restart apache.
This argument must directly match the name of an authored handler somewhere
within our workload. As you get started with handlers, it's customary to author
them at the bottom of your YAML files. In this case, we have done exactly that.
Our handler is defined with the matching name of restart apache. This handler
takes advantage of a task using the service module. This task will then restart
the httpd service. In this example, if the template task performs any work, it
will then notify the handler restart apache. Once notified, the handler will
restart Apache at the end of task completion.
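A sketch of that handler pattern (the template source and destination paths are assumptions):

```yaml
tasks:
  - name: Deploy the Apache configuration
    template:
      src: httpd.conf.j2
      dest: /etc/httpd/conf/httpd.conf
    notify: restart apache

handlers:
  - name: restart apache   # must match the notify argument exactly
    service:
      name: httpd
      state: restarted
```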
Call multiple handlers
In
the example here, we're notifying two separate handlers.
Those
handlers are both defined at the bottom of the file and have directly matching
names of restart mysql and restart apache. We can see that both of them use the
service module to restart their proper services. If the task at the top performs
any work, it will then notify both of these handlers. These handlers will run
their service restarts at the end of playbook execution.
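Notifying multiple handlers from one task might be sketched like this (the notifying task and the service names are assumptions):

```yaml
tasks:
  - name: Deploy the application configuration
    template:
      src: app.conf.j2
      dest: /etc/app.conf
    notify:
      - restart mysql
      - restart apache

handlers:
  - name: restart mysql
    service:
      name: mariadb   # service name varies by distribution
      state: restarted
  - name: restart apache
    service:
      name: httpd
      state: restarted
```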
A list of handlers notified throughout
playbook execution will always run in the order in which the handlers are
defined within your playbook structure. They do not run in the order in which
they're listed in the notify statements in a task or in the order in which
tasks notify them. Handlers typically run
after all other tasks in a play complete. A handler called by a task in the task
part of a playbook will not run until all of those tasks have been processed.
The names of handlers exist in a per‑play namespace. If two or more handlers are
incorrectly given the same name, only one will run, the one first defined. If
multiple tasks notify the same handler, the handler only runs once. That's really
the purpose of handlers here. If no tasks notify a handler, then it will not
run. Again, this is really at the heart of the purpose of handlers. Tasks that
include a notify statement do not notify their handler unless they report a
state of changed. In other words, if a task does not perform work, it will not
notify its handler. If the task does not notify its handler, then the handler is
not executed.
Consider
with our playbook that added users for the DB and web server systems.
Potentially, we may want to reboot those systems when users get added. If we add
DB users, we could create a task that reboots the machines. And then if we add
web server users, we'd create another task to reboot machines. Potentially, this
could result in multiple reboots across all the systems. We can evolve our
playbook a bit further to use a handler to accomplish this sort of task in a
more graceful way. Let's take a look at my handler.yml file I've created.
This
is just an evolution of the file that we already were using up to this point. If
we wanted to reboot after database users are added, we could insert a task at
this location to go ahead and call the reboot module. Then after web server
users get added, we could add another task to reboot. However, handlers allow
for this to be a graceful approach. To start authoring handlers, we'll put them
at the same hierarchy as tasks. From there, they're authored in the exact same
fashion as our task modules. I'll give this handler a name of Reboot system.
With this module, if we wish to reboot a machine, no other arguments are
required. Now that we have this handler authored, how do we invoke it in our
tasks above? Well that's where the notify keyword comes in. So let's scroll up,
and we'll insert a line directly in line with the loop and user keywords and call it
notify. Notify then must match the exact name of the handler defined, including
capital letters. So the capital R here is important. We'll do the same for our
web server users. We'll add a notify statement, and we'll call the exact same
handler. And since these users exist, if we were to run the playbook in its
current form, no work would be done, and therefore we would result in no handler
being called. So let me log in to one of these systems. Let's call it db02, and
I'm going to use the command userdel. Let's just show real quick that in
/etc/passwd we have those users, test, dev, and qa, that we expected to see. I'm
going to remove the qa user. I'll use the userdel command, and I'll say qa.
Since I'm not a privileged user, I need to invoke sudo to do that. If we tail
the file again, we now see that that qa user's removed. Now at least that task
in our playbook should require some work to get that qa user added. So I think
we're in a good situation to test our handler. Alright, so let's take a look at
our handler one more time, handler.yml. Great. Okay, so now let's go ahead and
execute this playbook. (Working) Notice the changed status there for adding that
qa user to db02. The handler is, in fact, being run. So now we'll wait for the
machines to reboot. Great. Now we can see that the handler was invoked and
rebooted the system db02.
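A trimmed sketch of the handler.yml evolution for the database play (the web server play follows the same pattern):

```yaml
---
- hosts: databases
  become: true
  tasks:
    - name: Create database users
      user:
        name: "{{ item }}"
        state: present
      loop: "{{ db_users }}"
      notify: Reboot system   # capitalization must match the handler name

  handlers:
    - name: Reboot system
      reboot:
```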
Running Conditional Tasks
In
this section, we'll discuss running conditional tasks. Ansible implements both
conditionals and handlers for us to be able to control if or when tasks execute.
We'll take a look at that in this section.
- Ansible conditionals allow us to
qualify whether to run or skip certain tasks. Both variables and facts are
available to be tested using conditionals.
- Conditionals will leverage
operators, such as greater than, less than, or equal, some various numerical
data, or boolean values to qualify whether or not tasks should execute. A few
good use cases for when you may want to qualify a task execution using a
conditional exist. Perhaps you would only want to do certain tasks if there is
system memory available. You could use an Ansible fact on available memory to
qualify whether a task should execute.
- You could create users
on a managed host depending on which domain it belongs to. Certain tasks may
need to be skipped if a variable is or isn't set to a certain value. And using
the register technique we learned in our previous sections, you could store the
data gathered throughout task execution in variables and use it to
determine whether or not to run further tasks.
Conditionals
take advantage of a when statement to
qualify if or when they should run. If the condition is met, then a task will
execute. However, if the condition is not met, the task is skipped. Let's take a
look at this example.
In
this example, we have a variable we've created called run_my_task and initially
set its value to true. In the task area, we have the installation of the httpd
package. This package will only be installed when run_my_task is set to true. In
this case, since we've initially set that value to true, the httpd package would
be installed.
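That example might be sketched as:

```yaml
---
- hosts: all
  vars:
    run_my_task: true
  tasks:
    - name: Install httpd only when enabled
      yum:
        name: httpd
        state: present
      when: run_my_task
```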
Let's
have a look at a slightly more sophisticated example.
In
this example, we're going to test whether or not the my_service variable has a
value. If it does, then the value of my_service is used as the name of the
package to install. If the variable is not defined, then the task is skipped
without an error. You can see in our vars block, we've set my_service to httpd.
And then in the yum task, we install the my_service variables value. The
conditional when qualifies to only do so when the my_service variable is
defined. In this case, we would install the httpd package since it is.
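A sketch of that example:

```yaml
---
- hosts: all
  vars:
    my_service: httpd
  tasks:
    - name: Install the package named by my_service
      yum:
        name: "{{ my_service }}"
        state: present
      when: my_service is defined
```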
Here's
a table that shows some examples of various conditionals we can take advantage
of.
When
we wish to test if a variable is equal to a string, we'll quote that string and
use the double equal sign. If we wish to test for a numeric value, we don't need
the quotation marks. Less than and greater than examples exist, as well as less
than or equal to examples. The exclamation point provides us the ability to say
not equal to. And if we simply want to test for a variable's existence we can
use the keyword is defined.
The
converse of that is if we want to test to make sure a variable does not exist.
In
this case, we'll use is not defined. When we want to test for boolean true
values (or 1, or yes), we can simply name the variable. If we wish to test whether
the boolean value is false (or 0, or no), we'll say not variable_name. We can
also start to create more complex associations, such as if the first variable's
value is present within a second variable's list. You can see an example of that
in the bottom of this table. Ansible_distribution in supported_distros. In this
case, we have an Ansible fact gathered that qualifies and stores the value of
the Ansible distribution. We also could define our own supported distros
variable. If the gathered Ansible fact for Ansible distribution is listed within
our defined supported distros, then this task will execute.
Now
that I've mentioned Ansible facts, let's
take a look at how to use some of those in constructing your conditionals. The
distribution Ansible fact is gathered and set when the play runs. This fact
identifies the operating system of the current host. In our play to the right,
we also define a supported_os variable. We list two operating systems that we
consider supported OSes. Using the conditional at the bottom of this play, we'll
create a when statement that consults the Ansible fact for distribution and
determines whether or not it is in the list of supported operating systems we've
created.
We
can see the syntax for that uses ansible_facts and opens a bracket and quotes
the fact we wish to test, in this case distribution. Then we use the keyword in
and consult the variable we created at the top of this play, supported_os. If
the gathered Ansible fact for distribution is either Red Hat or Fedora, then
this task will execute.
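A sketch of that play (the task's package name is illustrative):

```yaml
---
- hosts: all
  vars:
    supported_os:
      - RedHat
      - Fedora
  tasks:
    - name: Run only on supported distributions
      yum:
        name: httpd
        state: present
      when: ansible_facts['distribution'] in supported_os
```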
Multiple
conditions:
Building
on this concept, we can test for multiple conditions. We can use a single when
statement to evaluate several conditions. We'll combine those conditions using
either the and or the or keywords. If we have multiple of these statements,
we'll group them with parentheses. Let's have a look at a few examples. In this
first example, we're testing for the ansible_distribution to be equal to RedHat
or the ansible_distribution to be equal to Fedora.
If either of these is true, the task will
execute.
In
the next example, we're using ansible_distribution_version, testing it to match
7.5, and the ansible_kernel set to a specific value of 3.10.0.
Both
of these will have to test as true for this task to execute.
We
can utilize lists to describe a list of conditionals as well. When we use this
technique, these are combined as an AND
operation.
In
other words, both of these conditions must be met for the task to execute. You
can see an example of that here where we've adapted the previous slide's example
into this list. If both the distribution version and the kernel version for
Ansible match these values, then the task will execute. If either do not, then
the task will be skipped without error.
As
we evolve that to more complex conditional statements, we can group conditions
with parentheses.
This
will allow us to ensure that Ansible correctly interprets the expressions we're
authoring. Here you can see a complex example of a when statement where we're
testing for the system to have RedHat at version 7 or Fedora at version 28. If
either of those are true, then it will execute. But it must be RedHat version 7
or Fedora version 28.
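That grouped condition might be written as:

```yaml
when: >
  ( ansible_distribution == "RedHat" and
    ansible_distribution_major_version == "7" )
  or
  ( ansible_distribution == "Fedora" and
    ansible_distribution_version == "28" )
```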
The concept of loops and conditionals can certainly
be combined. In this example, the mariadb‑server package is installed by the yum module. It's
only installed if there's a file system mounted on / with more than 300MB free.
We're using ansible_mounts as a conditional test.
This fact is a list of dictionaries that represent a fact about each mounted
file system. The loop will iterate over the list, and the conditional statement
is not met unless there is an actual mount found to have 300MB free or greater.
Both of these conditions must be true. If both of these conditions are met, then
the yum task will install the mariadb‑server.
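A sketch of that combination (the size threshold is expressed in bytes):

```yaml
- name: Install mariadb-server if / has enough free space
  yum:
    name: mariadb-server
    state: latest
  loop: "{{ ansible_mounts }}"
  when: item.mount == "/" and item.size_available > 300000000
```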
Restart based on dependency:
Here's
another example where a playbook will restart the HTTPD web server only if the
Postfix server is running.
In
the first task, we're going to find out if Postfix is running or not. We'll run
a command using the command module and then register its output into a variable
we're calling result. In the next task, we're then going to consult that output
using a when statement on result.rc and verifying that it equals 0. If that is true,
then we will restart the httpd service using the service module.
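That dependency check might be sketched like this (ignore_errors keeps the play going when postfix isn't running):

```yaml
- name: Check whether postfix is running
  command: systemctl is-active postfix
  ignore_errors: true
  register: result

- name: Restart httpd only if postfix is running
  service:
    name: httpd
    state: restarted
  when: result.rc == 0
```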
Let's
hop in our terminals and try this out. Let's consider a scenario where we want
to add users to both our database systems and our web server systems. We can
write a playbook with two simple tasks to add these users. With conditionals, we
now have the ability to target specific systems based on criteria and ensure
that the proper users are added to only the systems they belong on. In this
example, I'll add web users to our web systems and DB users to our database
systems. Let's hop into a simple playbook that I've gone ahead and created. I'm
calling it conditional.yml.
Let me show
you what I've set up before we go ahead and add our conditionals. I've added a
new vars area to the top of our playbook and created a variable called
web_users. This is a list of three different web users, member, admin, and
developer. This first task should look familiar as this is the create database
users task that we've seen previously. We're using the user module, and we're
using a loop of item, a state of present, and our db_users variable that's
contained in our group_vars. Here we'll need to fill out the same for our web
users. In this case, we're going to use the same technique of a loop. So let's
go ahead and fill that out. The state is also going to be present, and our loop
in this case is going to be that variable we defined at the top, the web_users.
Now as this is currently, we're targeting all hosts, which means both of these
tasks will run on all systems. We're going to want to use a when conditional on
each of these tasks to make sure that these users are added to only the systems
that we expect, namely the first task should only target systems in the
databases group, and the second one should target only web servers. Let's set up
a when conditional for each of these that does exactly that. We can put when
statements just here at the bottom. I'll add both of the keywords first. Now
let's consider some things available to us that we can use in conditionals. In
this case, I want to verify if a system belongs to a specific group. There's a
helpful syntax that allows us to do that. I'll give you that example now. Here
we can say databases in, and we have a key here that is available to us through
Ansible known as a magic variable. The magic variable we'll use here is group_names. Group_names is a list of all groups Ansible is
aware of in your workspace that it gathers during the gather facts phase of all
playbook executions. We'll take advantage of this fact to target the databases
group within that list. Let's do the same thing for the web servers section.
(Working) Here we'll say webservers in group_names, targeting a different
section of our inventory. Let's save our work and execute the playbook. Say
ansible‑playbook conditional.yml. Great. We can see the
blue output is denoting that we're skipping certain systems for these tasks,
namely when we're creating the database users, we're skipping both web01 and
web02. When we're creating the web server users, note there we're skipping the
tasks involved for those users on the database systems. You can see the summary
at the bottom shows that we have performed the proper changes on the appropriate
systems and have added those users. As with all Ansible playbooks, we can always
test for idempotency in our work by rerunning the playbook command. Aside from
our skipping, the full green output shows that idempotency was intact and that
the work was not duplicated. As these users already existed, we simply see
statuses of ok for all of our tasks. Using Ansible facts throughout your
conditionals, as well as all of the other catalog of abilities you have to
condition certain task execution really makes it easy for you to target the
proper systems for various aspects of your Ansible playbooks and workloads. That
concludes this section. I'll see you in the next video.
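A sketch of conditional.yml as described, assuming db_users comes from the group_vars file for the databases group:

```yaml
---
- hosts: all
  become: true
  vars:
    web_users:
      - member
      - admin
      - developer
  tasks:
    - name: Create database users
      user:
        name: "{{ item }}"
        state: present
      loop: "{{ db_users }}"
      when: "'databases' in group_names"

    - name: Create web users
      user:
        name: "{{ item }}"
        state: present
      loop: "{{ web_users }}"
      when: "'webservers' in group_names"
```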
Recovering from Errors with Blocks
We'll
learn to use blocks to group tasks together in a play so that we can recover
from errors that may occur in the block. Blocks are available in Ansible as a logical grouping of tasks into a unit. They'll
then be used to control how tasks are executed. Blocks can have when
conditionals applied to the entire block, for example. And then that would mean
that all the tasks in the block only run when that conditional is met.
Here's
an example of using that technique.
We
give the task a name of installing and configuring Yum versionlock plugin. We
start our block and list out two separate tasks, one for the yum module and one
for the lineinfile module. We supply a conditional on this block that says that
the ansible_distribution fact must equal RedHat. If we're working on a Red Hat
system, then both of these tasks will execute.
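That block might be sketched as (the package name and locked version are assumptions):

```yaml
- name: Installing and configuring Yum versionlock plugin
  block:
    - name: Install the versionlock plugin package
      yum:
        name: yum-plugin-versionlock
        state: present
    - name: Lock a package version
      lineinfile:
        path: /etc/yum/pluginconf.d/versionlock.list
        line: httpd-2.4.*
        create: true
  when: ansible_distribution == "RedHat"
```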
You
can also utilize blocks with the keyword
rescue. This will help you to understand ways to recover from task failure. A block can
have a set of tasks grouped into a rescue statement that will execute only if
the block fails. Normally, the tasks in the rescue statement will recover the
host from some sort of failure that could have occurred during the block tasks.
A block often exists because multiple tasks are needed to accomplish some
outcome.
Additional
to rescue, block also can be companioned with always. This
third section will run no matter the output of the block and rescue.
After the block runs, if there was a failure, the rescue tasks will then
execute. No matter the output of block and rescue, the tasks in the always
section will run. Now the always section is only limited by the conditional that
may have been set on the block.
So to summarize:
- Block defines your main tasks to execute.
- Rescue tasks will be utilized when the block clauses fail.
- Always tasks will run independently of the success or failure of the tasks defined in block and rescue.
Here's
an example of all three of these put to use.
We
have a block statement defining a shell command. We have a rescue section that
defines a different shell command. And then we have an always section that
restarts the service. In the block section, you can tell we're trying to upgrade
a database. If this were to fail, we're going to use the rescue task to revert
that database. No matter which of these tasks was successful, the always section
will enable the restart of the database. We could further supply a when
conditional on the block clause, and that would be applied to both the rescue
and always clauses. Perhaps we wish to match on an operating system, such as Red
Hat. If we had a conditional that matches for the OS Red Hat, and we were
working on a different system, none of these tasks would run.
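The example reads roughly like this (the shell commands are illustrative placeholders):

```yaml
tasks:
  - name: Upgrade the database
    block:
      - name: Attempt the upgrade
        shell: /usr/local/bin/upgrade-database
    rescue:
      - name: Revert the database on failure
        shell: /usr/local/bin/revert-database
    always:
      - name: Restart the database service
        service:
          name: mariadb   # service name is an assumption
          state: restarted
```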
Let's
have a look at a sample playbook that I've authored to utilize this approach. I
have called this playbook block_rescue_always.yml for obvious reasons.
Here
I'm going to use a block, rescue, and always approach to attempt to update a
database system. That being said, we'll attempt to update this system package.
And if it fails, we'll restart the database so we can ready that system to
attempt that again or mitigate whatever issues may have occurred. And lastly,
we'll reboot the system so we can put it back into production. Let's take a look
at my work. With hosts, I'm going to target our databases. We'll utilize become
true because when dealing with package updates and installation type tasks,
you'll certainly need those privileges. In my task section, I have the task
update database. Opening the block section, I'll list a few tasks that are going to
attempt to update the database. You can see my first task is going to message
the users that the database is being updated. Then we'll use the yum module and
a state set to latest to update our PostgreSQL database server. The name here
corresponds to the package name for that system. If all goes well, excellent.
However, we have a rescue section in case it doesn't. In this case, the rescue
section both supplies a message, an error message that the database will be
restarted, and then the message itself, Update failed. Restarting database to
correct issues. Once we've messaged the user, we'll restart the database since
the update had failed. We need to use the service module to manage our services
and then declare the service we wish. From here, I'll need to declare a state.
And here the state I want is restarted. Our always block will run no matter if
the upgrade was successful or not. Here the upgrade, given either status, will
still want a reboot of the system. Here we'll notify the user if the reboot
update process has completed. See the previous output for status of failure or
completion and then reboot the system. We'll call the reboot function, and we've
seen that previously. It requires no arguments to reboot a system immediately.
Let's save this work and give it an execution. Now something I haven't called
out previously is a great syntax check.
ansible-playbook --syntax-check block_rescue_always.yml
I'll
run it here just to make sure that all of the work we've put into the
block_rescue_always.yml file is appropriate YAML syntax. Since no issues arose,
we know that we're in good YAML. I'll use the Ansible playbook command now to
run the play.
Deploying Files with Jinja2 Templates
We've
had a look at the copy module that allows
us to copy a file from our source machine onto the managed hosts. We've seen
that the file module allows us to
manipulate the permissions and settings on those files. Additionally, the synchronize module allows us to take advantage
of the rsync type abilities within Linux systems. For existing files on our
targeted hosts, the lineinfile module
allows us to edit certain lines within an existing file on a targeted host.
Let's
consider a situation, however, where we need to deploy a customized file on each
of our managed hosts. Each host may need specific values altered relative to
that host. In this situation, a template could be very valuable. The Jinja2
templating engine allows us to template files and then deploy them using an
Ansible playbook. Within Jinja2, we can substitute variables with values that
are relative to the unique managed host.
In
this example, we're having a look at an sshd_config file and how we could
template that to be used on each of our managed hosts.
You
can see that we have the port value written in our way that we've seen with
variables previously using the double braces for the variable we're calling
here, ssh_port. Given that configuration file with the variable substitution
we're requiring, we can now use the Ansible module template. The Ansible template module allows us to deploy a Jinja2
templated file. It is similar to the module
copy in the number of arguments and style that is used.
In
this example, we're taking a look at the template shown on the previous slide.
Notice
that the file will end with the extension
.j2 when we're using a Jinja2 templated file. We're also using the
template module available from the Ansible library. We're showing the source as
this template file, sshd_config.j2, using that standard file extension. We then
declare the destination on the targeted host. Here that destination is
/etc/ssh/sshd_config. You can see additional file parameters we're setting for
this template available through the various arguments.
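Based on that description, and assuming templates/sshd_config.j2 contains a line such as Port {{ ssh_port }}, the template task might be sketched like this (the ownership and mode parameters are assumptions):

```yaml
- name: Deploy the sshd configuration from a Jinja2 template
  ansible.builtin.template:
    src: sshd_config.j2
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: "0600"
```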
We've
discussed a bit about Ansible facts as special variables available to us that
Ansible will gather during the setup phase of each playbook execution. At the
start of each play, Ansible will gather these facts and make them available to
us throughout our workloads. Additionally, you
can collect facts at any time by running the module setup. Once Ansible
has gathered these facts, they're available and stored in a special variable set called ansible_facts. This
variable is structured as a dictionary.
Lots of information is included in our ansible_facts, such as network address information about each host, the host
names, storage information, operating system data, as well as many other
aspects about the hardware and software available on the managed host.
An
example here shows how we can display all facts for a managed host, as well as a
subset of facts, specifically for an IP version 4 address.
To
display variable information, we utilize the debug module. The argument the debug module will take is the name of the
variable we wish to display. This key, var, will be used to choose the variable.
Additionally, as we wish to explore subsets of the Ansible fact information,
we'll use the bracket and quote notation to declare a specific value contained
within the ansible_facts. In this example, the first task displays all facts by
simply debugging the variable named ansible_facts. This can be a helpful
approach when you need to view all the facts about a given host to determine
what is valuable for your Ansible workload. The second task lists all of the
IPv4 addresses for a specific host. It does so using the debug module and
specifying the all_ipv4_addresses contained within the ansible_facts dictionary.
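The two tasks described above might be sketched as:

```yaml
- name: Display all facts for this host
  ansible.builtin.debug:
    var: ansible_facts

- name: Display the IPv4 addresses for this host
  ansible.builtin.debug:
    var: ansible_facts['all_ipv4_addresses']
```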
Now
that we understand that facts are available to us, let's look at how we could
use those in a Jinja2 template.
Here, we're using a message of the day template or an motd template
file. As per standard nomenclature, we'll call this file motd.j2. The
standard location for this file will be /etc/motd on our managed Linux hosts. The
Ansible fact fqdn can then be utilized to replace the fully qualified DNS name
of the host into various configuration files, specifically the message of the
day file. You can see the example at the right does exactly this. We're using
the double brace notation and including the Ansible fact variable we wish to
take advantage of, specifically fqdn in this case. In the second box, you can
see that we're taking advantage of the template module to use this new motd.j2
file and deploy it on our managed hosts into the location /etc/motd, as declared
by the dest, or destination, value of the template argument. The example at the
bottom shows this variable substituted as server1.example.com for that specific
targeted host. Each target host would substitute its fully qualified domain name
in this fashion.
When
we wish to supply comments in a template,
we have a special syntax to do so.
We use a brace followed by a pound sign or
hash, and then include our comment. Comments written this way in our
templates won't appear in the final deployed file. Have a look at this example. The
first line of this example includes a comment that will not be included when
this file gets deployed. The variable references on the second line are then
interpolated with the facts gathered by Ansible for the specific targeted hosts.
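A minimal illustration of the comment syntax and fact interpolation (the greeting text is an assumption):

```jinja
{# This comment will not appear in the deployed file #}
Welcome to {{ ansible_facts['fqdn'] }}
```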
Using
the Jinja2 template engine, we have available control structures that we can take advantage
of when we need more complex substitutions. We have the for statement providing us a way to loop over a
particular set of items. In the example below, we're using the groups all as a
special variable that lists all the members that are contained in a group.
Note
the syntax using the brace and percent sign to declare these values.
Additionally, when we're done with a loop, we close it using the endfor
keyword, again contained in the brace and percent sign syntax. The middle line in this example uses a specific set of
variables we have available to us, in this case provided by our host_vars. The
result of this specific line is to generate something like an /etc/hosts-formatted
file that contains the IP address that matches the fully qualified domain name
of each host within your inventory. For all the hosts in the inventory, you
should generate one line per host. That line would contain the IP address that
matches the fully qualified domain name and thus fills out an /etc/hosts file.
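The loop described above might be sketched as follows; the exact fact paths are assumptions consistent with the narration:

```jinja
{% for host in groups['all'] %}
{{ hostvars[host]['ansible_facts']['default_ipv4']['address'] }} {{ hostvars[host]['ansible_facts']['fqdn'] }}
{% endfor %}
```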
Jinja2
also makes available to us the use of conditionals. We'll again use the brace and
percent sign syntax for any expressions or logic we wish to take advantage of.
These
expressions are available to us within template files, but you shouldn't use
these within your authored Ansible playbooks. In this example, we see the
utilization of the if and endif keywords. For evaluation, the if statement will
check the boolean value of the finished variable as declared. If the finished
variable is set to true, the result will occur before the endif statement then
closes out that structure.
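The structure described might look like this sketch (the output text is an assumption):

```jinja
{% if finished %}
All tasks are complete.
{% endif %}
```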
Let's
get into our terminal and try some of these techniques. When we're ready to
create our first templates, first we'll organize them into a subdirectory. I'll
make a subdirectory called templates. Switching into that subdirectory, I'll
create our first templated file. In this case, I'll deploy a message of the day
file to some of our systems. So I'll create the template called motd.j2. .j2 is
the standard extension for Jinja2 templates. Let's use some of the techniques we
just discussed. Let's start our file with a simple comment just to explain the
purpose. So here I'll use the comment syntax, which is the brace that contains a
hash, a pound sign. Inside here, we can place any comment we like. Here I'll
just state the intention of this file. Now we'd proceed with templating out the
file we wish to have in place. For a simple message of the day, I guess we could
just state the hostname of the system we're logging into. So let's create some
text. Now that we have this simple text, we need to understand how to insert a
variable. Here we want to use a variable that supplies the hostname for the
system. I'll use the example that we had in our content of the fqdn to supply
this value. You can see the syntax is using the double braces, taking advantage
of our list of ansible_facts and then using the bracket and quote notation to
supply fqdn. This is a good, simple template. So let's get started with this,
and we can evolve it with a few techniques.
Now
that we have the motd Jinja2 template, we'll need to write some Ansible
playbooks that deploy this. So switching out of this, I've already created a
skeleton called template.yml.
Let
me take a look at the template.yml file. I have not yet filled out the task
explicitly, so let's go ahead and do so now. The module that Ansible provides
for us to supply Jinja2 templates is the template module. It takes a number of
arguments, so let's go ahead and fill those out now. The first argument it takes
is the name of the template we wish to deploy. In this case, I called motd.j2.
Next, we need to declare where on the target system we wish for this file to be
placed. A proper motd goes in /etc/motd. Next, we can declare permissioning. The
owner of the motd is root. The group is also root. And the mode or permission
setting is 644. So for proper nomenclature here, we'll say 0644 contained in
quotation marks. That should place our motd on these systems in the correct
location with correct permission and ownership.
Let's
consider what else is available to us. Since the fqdn didn't give us the exact
value we were looking for, let's try instead of an Ansible fact, let's try one
of those special variables that are available to us, in this case inventory_hostname. This should reference the
hostnames we supplied directly in our inventory. Let's save our file and rerun
our playbook.
In
order to craft our template file for the host file, we need to understand a
little bit about how to traverse ansible_facts. I've created this list of the
Ansible facts for the web01 system and have stored those in this temporary file.
We can see the list of all Ansible facts that are available to us within this
file. When we're ready to utilize one of these, we'll need to know the
nomenclature for using an Ansible fact, any subfact, and then the values
contained within. Specifically, I'll be looking for the default IPv4 addresses.
So I'll need this keyword here under Ansible fact, as well as the default
address. We'll use this in our variable so that we can place the IP address for
each system and then correspond it to the hostname for this system. Now let's
author our host.j2 file. This will be located in our templates folder in
host.j2.
I've gone ahead and gotten started here. Again, I
started with a comment at the top to mention the purpose of this file. I've also
begun our structure for a for loop. Notice that I'm going to say for host in the
groups all. I'll close the structure with endfor and use the brace and
percentage sign syntax. Within this loop, I'm going to want to use a variable
substitution to set up that structure for our host file. As we just saw, I'm
going to start with an Ansible fact. The Ansible fact that we saw that met our
needs was the one that was called ansible_default_ipv4. (Working) Under that
field, we needed the specific value for address. We'll then close this variable,
and the next entry on this line will be the inventory hostname that we had used
previously. That should conclude this file, so now let's save our work. I've
also expanded our Ansible playbook for the template deployment to include a
secondary test to deploy this file. Note that we're deploying the host file.
I've named our new template and the destination location for this file. The rest
of the parameters all remain the same. Let's go ahead and run our playbook.
(Working) Got to use the right name here, template.yml. (Working) Looks like I
encountered an issue with one of those variable names. I noticed that
ansible_default_ipv4 isn't actually correct. It's actually default_ipv4. Making
that correction, I'll go ahead and save my work again and re‑execute my tasks. (Working) Voila. Now that we've
logged into web01, let's display our host file to see if we like what we see.
Perfect. Now it looks like what we expect to see. That concludes this section.
I'll see you in the next video.
Processing Variables with Jinja2 Filters
Jinja2
has a number of filters we can take advantage of to process and reformat the
values contained in our variables. Within the Jinja2 engine, we have a number of
filters that are supported for our expressions of variables. Filters allow us
the ability to modify and process variable information to meet our needs. Some
of these filters are provided by the Jinja2 language itself, and others are
included as specific plugins for Ansible. You can also author custom filters,
but that's kind of beyond the scope of this course. If you require further
information on that, have a look within the Ansible documentation for playbooks_filters. These filters can be very
powerful and allow us to prepare data for use within our playbook or within
templated files for our various Ansible workloads.
Now
that we're ready to process data using the Jinja2 filters, we'll need to
understand how to do exactly that. To apply a filter, you'll need to first
reference the variable. You'll follow that variable name with the pipe
character. After that character, you'll then add the name of the filter you want
to apply. Some filters require a series of arguments beyond that or optional
additional arguments contained within parentheses. You can utilize multiple
filters within a pipeline to get the formatted output you require. In this
example, we can see how the capitalize
filter allows us to capitalize the first letter of a string.
{{ myname | capitalize }}
If
we included a variable such as myname and that variable included a value such as
james, all letters being lowercase, we could then use the capitalize filter to
ensure that the J in James is then capitalized upon output.
Oftentimes,
we may need multiple transformations of
our data. In this case, we can take advantage of multiple filters. The unique filter will get a unique set of items
from a list, removing any duplicated entries. The sort filter then sorts that list of items.
In
this example, we can see that the mylist variable has a series of numbers
contained within.
We'll
then pass that list through the unique and sort filters. The duplicate 9 should
be removed from this by the unique filter. Sort will then put them in numerical
order. The resulting output, as shown, would then be 1, 3, 7, 9.
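As a sketch, chaining the two filters in a debug task (the specific numbers are assumptions chosen to match the narration):

```yaml
- name: Show a de-duplicated, sorted list
  vars:
    mylist: [3, 9, 7, 9, 1]
  ansible.builtin.debug:
    msg: "{{ mylist | unique | sort }}"
```

This would print [1, 3, 7, 9]: unique drops the duplicated 9, and sort orders what remains.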
A
more complex example is the ipaddr
filter. This filter can perform a number of operations on IP addresses.
If we were to pass a single IP address, it will return the value if it is in the
proper format for an IP address, and false if it is not. If we were
to pass in a list of multiple IP addresses, the filter will then return a list
of the ones that are properly formed. Let's have a look at the example at right.
We
can see we have the variable mylist containing three IP addresses. Without
getting too heavy into the networking concepts here, the bottom IP address is
considered an invalid IP address. If we were to use the ipaddr filter on this
variable, as shown in the task in the example, the output would remove this last
entry, leaving us with just the 192. and the 10. IP addresses.
As
we get into more complex examples using CIDR information, you can see an example
entry contained in this box.
With the parenthetical arguments we are allowed to
supply for the ipaddr filter, we can supply network/prefix to describe how we'd
like to see this information output. With the variable mylist defined in this
play, we can see that long‑form CIDR notation is provided within the list of IP
addresses. With the network/prefix argument appended to our ipaddr filter, we're
asking that the output be truncated to proper CIDR
notation from VLSM notation. The output
would then show the 192 /24 address and so forth to properly show these IP
addresses and ranges with CIDR notation. This not only changed the order in
which these values are displayed, but specifically the format. This can be very
powerful when your workloads require specific types and formats of input.
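A hedged sketch of that usage (in recent Ansible versions the filter lives in the ansible.utils collection and requires the Python netaddr library; the sample addresses are assumptions):

```yaml
- name: Normalize addresses to network/prefix (CIDR) form
  vars:
    mylist:
      - 192.168.2.0/255.255.255.0
      - 10.0.0.0/255.0.0.0
  ansible.builtin.debug:
    msg: "{{ mylist | ansible.utils.ipaddr('network/prefix') }}"
```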
A
key concept of processing variables with Jinja2 filters is that they don't actually change the value stored in the
variable. They're really just transforming
it to be more appropriate output that's utilized in your workloads.
There are far more filters available, both as standard Jinja2 included
filters and as ones provided specifically for Ansible's usage, than we can cover
within this conversation. You can see here in this slide a small list of the
filters available for you to utilize within your Ansible workloads.
Templating External Data with Lookup
Plugins
In this section, we'll explore Templating External
Data with Lookup Plugins. We'll want to understand how to use lookup plugins so
that we can template external data using the Jinja2 template engine.
Lookup
plugins are available within Ansible.
They are an Ansible
extension to the Jinja2 templating
language to extend additional functions and features for Ansible workloads.
These lookup plugins import and format data from
external sources, so that they can be
utilized in variables and templates. Lookup plugins will allow you to use the
contents of a file, for example, as a
value within a variable. Additionally, that'll allow you to lookup information
from other sources, including external sources, and then supply them through a
template. The ansible-doc command with the -t
lookup and -l arguments will list all available lookup plugins.
ansible-doc -t lookup -l
When you wish to see documentation for one in
particular, you can then supply its name: ansible-doc -t
lookup followed by the plugin name, here the file lookup plugin, will display the
documentation specifically for that lookup plugin.
ansible-doc -t lookup file
We have two main ways that we can call a lookup
plugin. Lookup will return a string in comma‑separated form. The query argument returns an
actual YAML‑formatted list of items. If you require further
processing of this information, query is often easier, as Ansible natively
works with YAML-style lists. The example at right takes a look at using the dig lookup plugin,
so that we can lookup the DNS MX records for a specific gmail.com entry.
This
lookup returns a list where each item is one specific DNS MX record. Once it's
gathered this information, it then prints the list one item at a time. You can
see in the example we're creating a variable mxvar and utilizing the format for
the query approach of lookup plugins, naming the lookup plugin dig, supplying
the domain we wish to do that dig upon, gmail.com, and then naming the record
type, MX. Once that variable is created, we simply use a debug module statement
to list those out using a loop.
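A sketch of the play described; note the dig lookup ships in the community.general collection and requires the dnspython library, and the exact invocation syntax may vary by version:

```yaml
- name: Show the MX records for gmail.com
  vars:
    mxvar: "{{ query('community.general.dig', 'gmail.com', 'qtype=MX') }}"
  ansible.builtin.debug:
    msg: "{{ item }}"
  loop: "{{ mxvar }}"
```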
When
we want to load in the contents of a file as a variable, we can use the file lookup plugin. We can provide either a
relative or absolute path to the file we wish to load in this fashion. In the
example at right, we use the Ansible module authorized_key to copy the contents
of a specific file located at files/naoko.key.pub into a specified area of a
targeted machine within the .ssh folder.
In
this case, we're using a lookup plugin because the value of key must be the
actual public key and not a file name. The file itself contains this actual key.
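The example described might be sketched like this (the authorized_key module now lives in the ansible.posix collection; the username is an assumption based on the key file's name):

```yaml
- name: Install a public key from a file
  ansible.posix.authorized_key:
    user: naoko
    state: present
    key: "{{ lookup('file', 'files/naoko.key.pub') }}"
```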
When
we need to look at each line within a file or output, we can use the lines
lookup plugin. This is often helpful to use in tandem with filters. At right,
we're looking at an example that uses the lines lookup plugin to build a list
consisting of the lines contained within the file etc/passwd.
Each
of the lines in that entry contains a specific user's information. The debug
task in this example uses the regex_replace filter to print out the name of each
user contained in each line of the etc/passwd file.
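A sketch of that task; the regular expression is an assumption that strips everything after the first colon in each passwd line, leaving only the username:

```yaml
- name: Print each username from /etc/passwd
  ansible.builtin.debug:
    msg: "{{ item | regex_replace(':.*$', '') }}"
  loop: "{{ query('lines', 'cat /etc/passwd') }}"
```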
We
can use a template lookup plugin when we
want to take a Jinja2 template and evaluate each of the values when setting a
variable. When passing a relative path to the template, Ansible will look in the
playbook's templates sub directory. Consider that the template sub directory has
a file, my.template.j2. That file could contain the content Hello, interpolating
the variable named my_name. We could then author the play at right that prints
out the text, Hello class! Using that variable, my_name, set to the value
class, and then the lookup value for template.
The lookup arguments for template would also
need to include the template name, my.template.j2, to be able to find the
variable contents contained within.
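Assuming templates/my.template.j2 contains the single line Hello {{ my_name }}, the play might be sketched as:

```yaml
- name: Greet the class
  hosts: localhost
  vars:
    my_name: class!
  tasks:
    - name: Print the templated greeting
      ansible.builtin.debug:
        msg: "{{ lookup('template', 'my.template.j2') }}"
```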
Perhaps
one of the most useful lookup plugins is the url
lookup plugin. This one allows you to grab the content of a web page or
the output of an API call. This can be very powerful in your Ansible workloads
when you need to probe an API or grab content from a specific web page, like
status pages. In this example, we're having a look at querying the Amazon API to
print out the IPv4 and IPv6 network addresses used by our AWS systems.
You
can see we create a variable called amazon_ip_ranges. This variable uses the
lookup plugin url and then specifies the url we wish to probe. An additional
argument of split_lines is provided and set to false. From there, we use several
debug tasks to be able to peruse the various IP ranges and prefixes. The first
one shows the IPv4 ranges. The second task does the same, but for IPv6.
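A hedged sketch of that play; the URL is Amazon's published ip-ranges endpoint, and the JSON field names (prefixes, ip_prefix) are assumptions about that document's structure:

```yaml
- name: Show AWS IPv4 ranges
  vars:
    amazon_ip_ranges: "{{ lookup('url', 'https://ip-ranges.amazonaws.com/ip-ranges.json', split_lines=False) | from_json }}"
  ansible.builtin.debug:
    msg: "{{ item['ip_prefix'] }}"
  loop: "{{ amazon_ip_ranges['prefixes'] }}"
```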
When you're ready to learn more about lookup
plugins, you can have a look at ansible-doc -t lookup. This is a helpful way to find documentation about
lookup plugins directly within your command line interface. The
-l argument will list all lookup plugins available
to you in your environment. Further, you can use additional commands like grep
to drill down and filter these results. By supplying a single lookup plugin as
the final argument, you'll see documentation on that specific lookup plugin.
Creating Roles
Welcome
back to our course, Ansible Fundamentals. In this module, we'll look at working
with roles for automation reuse. Roles are a very powerful tool available to you
within Ansible. We'll look at how they're structured and how you can utilize
these within your playbooks. We'll describe how to create your own roles and
then use them within a playbook. We'll look at the directory structure required
to do so and then run one as part of a play. Lastly, we'll look at how you can
select and retrieve roles from Ansible Galaxy, the online community that
collects shared roles. This section focuses on creating roles.
Within
Ansible, roles allow you to make automation code
far more reusable. Roles package tasks that can be configured through
variables. A playbook will call a role, passing in the proper values for the
variables for the use case. This will allow you to create very generic code that
can be reused between projects or even shared with others.
There are many benefits to using Ansible roles. Roles
allow you to group
content, and this allows you to easily
share code with others or between projects. Roles are written in a way that
defines the essential elements of a system type, such as web server,
database server, or repository, or other
aspects. Roles are bite‑size pieces of larger projects, making the code base
far more manageable. Since you have many components making up the larger
project, different administrators can develop roles in parallel and share their
work to comprise the larger project.
When
we create Ansible roles, we use the same toolkit we do when authoring playbooks.
There are essentially three steps involved in creating a role. The first is to
create the directory structure that a
role utilizes. Second, you'll author the
content for the role. A common approach to authoring roles is to start by
writing a play and then refactoring that into a
role that makes it more generic. A key thing to note is that you should
never store secrets within a role. The
concept of a role is to make them reusable and shareable, and you wouldn't want
secrets to be passed in this fashion. A proper approach would be to pass your secrets as parameters from within
the play.
Roles have a very specific directory structure. This
directory structure is a standardized approach that makes sharing and consuming
other roles easy. The
top‑level directory defines the name of the role itself. Contained
within this top‑level directory is the very predictable role
directory structure. Each of the files for your role are organized into
subdirectories that are named according to the purpose of each of these files.
Subdirectories include things such as tasks and handlers. While you can manually create this directory
structure, Ansible provides a command that makes it easy to do so in an
automated fashion.
ansible-galaxy init rolename
The ansible-galaxy command and its init subcommand allow you to name a
role, which will automatically create the skeleton directory for you.
Here's
a look at the default layout of the role skeleton directory structure.
At
the top level, we'll have the name of our role. In this example, we're calling
it role_example. Beneath there, we have a series of subdirectories that contain
our Ansible files. Each of these subdirectories has a main.yml where you'll
author your work. The defaults subdirectory contains the values for default
variables used within the role. These can be overridden during role invocation.
These particular variables have a low precedence as they're intended to be
changed and customized when you consume the role within a play. The files
subdirectory contains static files referenced throughout the role. The handlers
subdirectory contains the definitions of the handlers used within the role. The
meta folder defines specific information about the role, such as the author,
license, or optional role dependencies. A tasks subdirectory is included where
the tasks performed by the role are defined. This is similar to a task section
within a play. The templates subdirectory contains all the Jinja2 templates
you'll use throughout the role. The tests subdirectory can contain an inventory
that can then be used to test the role. And lastly, the vars subdirectory
defines values of variables used internally by the role. These variables have a
high precedence and are not intended to be changed through the play.
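The skeleton described above, as a directory listing; the annotations summarize the narration:

```text
role_example/
├── defaults/main.yml   # low-precedence variables, meant to be overridden
├── files/              # static files referenced by the role
├── handlers/main.yml   # handler definitions
├── meta/main.yml       # author, license, dependencies
├── tasks/main.yml      # the tasks the role performs
├── templates/          # Jinja2 templates used by the role
├── tests/              # inventory and playbook for testing the role
└── vars/main.yml       # high-precedence internal variables
```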
As
mentioned before, it's really common to start with a fully authored playbook and transition that work into a
role. In this example, we have a simple playbook that creates an FTP
server on all systems in an inventory group we're calling ftpservers.
You
can see the three tasks contain one to install vsftpd, another to place a
templated configuration file for the vsftpd service, and lastly, a service
module call to manage the service starting. While real plays may be more
elaborate and include more tasks, this will suffice for our simple example of
converting a playbook into a role.
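The simple playbook described might be sketched like this (the template and configuration file paths are assumptions):

```yaml
- name: Deploy an FTP server
  hosts: ftpservers
  become: true
  tasks:
    - name: Install vsftpd
      ansible.builtin.yum:
        name: vsftpd
        state: present
    - name: Place the templated configuration file
      ansible.builtin.template:
        src: vsftpd.conf.j2
        dest: /etc/vsftpd/vsftpd.conf
    - name: Start and enable the service
      ansible.builtin.service:
        name: vsftpd
        state: started
        enabled: true
```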
The
first step you should do when converting a playbook into a role is to set parameters as variables.
In
the first task, you can see we've converted the name of the package itself into
a variable we're calling ftp_package. At the top of the play, we then define
ftp_package to match vsftpd as defined in our previous play. Throughout the rest
of the playbook, we've done similar work for creating several variables that we
then substitute into the tasks themselves. The purpose here is to allow values
to be easily changed.
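The parameterized version might look like this sketch; the variable names follow the narration, while the configuration paths are assumptions:

```yaml
- name: Deploy an FTP server
  hosts: ftpservers
  become: true
  vars:
    ftp_package: vsftpd
    ftp_config_src: vsftpd.conf.j2
    ftp_config_dest: /etc/vsftpd/vsftpd.conf
  tasks:
    - name: Install the FTP package
      ansible.builtin.yum:
        name: "{{ ftp_package }}"
        state: present
```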
The
next thing you'll do is define the role content. First, you'll create the
directories necessary for this role. You need only create the directories you
intend to utilize, as blank directories shouldn't
be included.
In
this example, we'll use the meta, tasks, templates, and default subdirectories.
It's always prudent to include a README.md file so that you can clearly explain
how the role works and its intended purpose. We can include this role in a roles
directory that contains all the roles we define in our organization. Since we
had similar work, such as templates, included at this directory structure, we
can now move the templates contents into the roles templates directory. The same
could be done for files we may have contained at that tier as well.
The
next step we can do is visit the tasks we defined within our play. These can
then be moved into their own file within the tasks subdirectory. By copying and
pasting the contents from the playbook into the main.yml within the tasks
subdirectory, you've now migrated that work into the proper role structure.
Note
here that lines that start with the pound or hash symbol are comments that we
can place throughout the files within our role to help guide users when
consuming the role in their playbooks. Lastly, with YAML syntax throughout a
role, indentation needs to be consistent. This can be tricky to get the hang of
and will certainly take practice as you develop further roles with Ansible.
Next,
role defaults can be defined within the defaults subdirectory.
The
variables we created within our playbook can now be copied and pasted into the
defaults/main.yml file. You can see an
example of that in this box. These are the variables that can be overridden with
different values when you call the role from within a play. Perhaps in your
organization, you utilize a different package for your FTP server. Overriding
this value when you invoke the play can allow the role to still be consumed in
your organization with a simple adjustment to the variable settings. If these
were more fixed values, we would instead prefer to put them in the vars
subdirectory. These will be values we intend not to be overridden.
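The defaults file might be sketched as follows, with values assumed from the earlier play:

```yaml
# defaults/main.yml -- low precedence, overridable at role invocation
ftp_package: vsftpd
ftp_config_src: vsftpd.conf.j2
```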
To
provide clarity in documenting the role,
we can supply information in the main.yml file in our meta subdirectory.
Here
you can see some information we're providing, such as the author name, the
description, and the company we may work for. As you become familiar with roles,
you can have a look at additional roles available through Ansible Galaxy to get
an understanding of proper content supplied via README.md or included in the
meta files.
Now
that we've created a role and understand the structure required to do so, we
next will use a role within a playbook.
The
keyword roles is supplied to list any roles to be consumed in a playbook. In
this simple example on the left, we're not overriding any variables, so the
default values will be adhered to. In this case, invoking in this fashion will
do exactly what the originally authored playbook did. Note that we've provided no
additional tasks on this play. So simply what is contained within the roles will
be all the work that Ansible performs. We can supply additional tasks in a
playbook in this fashion, but always note that
the roles will run first.
In
this example, we can look at using a role with custom parameters.
Here
we're calling the role twice. The first time we call the role is with its
default options and creates the default directory and group. The second time
it's invoked, it overrides the role's default variables and uses a different
template for the config file.
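A sketch of calling the role twice in this way; the role and variable names are assumptions consistent with the FTP example:

```yaml
- name: Deploy FTP servers
  hosts: ftpservers
  roles:
    - role: ftpserver
    - role: ftpserver
      ftp_config_src: custom-vsftpd.conf.j2
```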
Taking
that previous example and transitioning it into a role in a play book, we can
have a look at this example. Here in the tasks section of our playbook, we're
using the include_role module.
The
first task does exactly what the first role invocation did in the previous
example, namely to use the role in its default fashion. The second task, also
using the include_role module, does what the second invocation of the role did
in the previous example, namely, overriding the ftp_config_src with a different
Jinja2 template. This approach allows you to mix roles with normal tasks within
a play.
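Sketching that same pattern with include_role (names again illustrative):

```yaml
---
- name: Mix roles with normal tasks
  hosts: webservers
  tasks:
    - name: Use the role with its defaults
      ansible.builtin.include_role:
        name: ftpserver

    - name: Use the role with a different Jinja2 template
      ansible.builtin.include_role:
        name: ftpserver
      vars:
        ftp_config_src: custom-ftp.conf.j2
```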
Let's hop in the terminal and take a look at these
techniques. Now that we've created a few playbooks and we understand the concept
of roles, let's take one of those bits and try to transition it into more of a
role structure. I've copied one of our previous playbooks into one I'm now
calling users.yml. This is the playbook where we created some users on the
database and web server systems using a few variables. We also had a handler in
there. Let's take a look at how we can take this playbook and morph it into what
is now going to be considered a role. I'm still working in my Ansible directory,
and I have available to me now the ansible‑galaxy command. The name Galaxy matches the online
website, Ansible Galaxy, which houses the community roles we share with one another.
We can use this command and its init subcommand to create our users role.
ansible-galaxy init users
We
can see the directory users was created out of this command. And if we ls the
users directory, we can see that it created a number of subdirectories that we
can use. To best organize this work, I'm going to actually create a directory
called roles. I'll move the users directory into the roles directory, cd into
roles, we still have users, and let's take a look at the full structure.
Perfect. Here we can see all the subdirectories we discussed. Now we won't use
all of these as we transition the role that we have, so I'll likely clean up a
bunch of these subdirectories and files that aren't in use. Taking a look at the
users.yml one more time, we can see that we have some items at the beginning of
the playbook.
This
vars entry could become a group_vars entry for the webservers group, which ended up
being the intention in the end. We've already created a database users group
var, so we'll take advantage of that. The tasks themselves, the two user tasks,
can become the task entries in that file, and we have a handler defined at the
bottom. We'll transition that one to the handlers section.
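As a hedged sketch of that transition, the user tasks could land in roles/users/tasks/main.yml along these lines (the exact module arguments from the original playbook aren't shown in the transcript, so these are illustrative, reusing the web_users variable mentioned shortly):

```yaml
# roles/users/tasks/main.yml -- illustrative; paste the real tasks from users.yml
- name: Create the web users
  ansible.builtin.user:
    name: "{{ item }}"
    state: present
  loop: "{{ web_users }}"
```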
Great.
Now that we've taken a look at the playbook we wish to transition, we understand
the subdirectories we have available to us to morph those bits of work into
their requisite files.
Each
of the files contained in the subdirectories are called main.yml. We'll copy and
paste out the bits that go in each of these main.ymls we'll utilize. Once again,
let's cat our users.yml.
First
things first, let's take these variables that we created here and move those
into the web servers group_vars
directory. Here we have some existing variables in place.
These
were used in our other playbooks, and we can leave those in place. Here I'm
going to use the web_users, and I will space it appropriately for the file. Now
that we move that variable out, we can go back to our playbook and make an
alteration.
(Working)
This bit has now been removed. Excellent. So now we see that we have tasks.
Let's
gather our tasks and move them into our tasks/main.yml. (Working) There we go.
Let's
go back into our playbook. Now that we've removed users and the tasks, the last
thing to move will be the handlers.
(Working) Let's copy this, and let's vim the
handlers/main.yml.
And
we can drop that definition in there. Let's go back in our users.yml, clean up
our handler, and understand what will need to be in place here, like we had said
previously the keyword tasks. Since we've moved those out, now we'll use the
keyword roles and call the role we wish to execute.
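The resulting playbook, sketched (the play header is an assumption based on the original targeting of the web server systems):

```yaml
---
# users.yml after the transition to a role
- name: Create users via the users role
  hosts: webservers
  become: true
  roles:
    - users
```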
We
called our role users, so we'll type in users. Let's go ahead and save our work,
and let's go up a few directories so we're a little bit more comfortable with
where we're located. Here we can see in our tree, we now have the roles
directory containing our users role we just created. We authored a handlers/main.yml.
We authored a tasks/main.yml. And we're utilizing variables. Let's go ahead and run
our playbook. (Working) Perfect. You can see that the same workload that had
already occurred when we had this authored as a playbook is now morphed into the
consumable role structure. This is a great way to organize work, especially when
you intend to break out sections of development between teams or team members.
Using
Roles with Ansible Galaxy
This
section explores using roles with Ansible Galaxy. Now that we've had a look at
creating roles, we need to consider how we could obtain and use other roles or
even share the ones we've authored. Typically, a role is contained in its own
Git repository separate from the playbook. This is a clean way to organize
Ansible workloads to be consumed and shared. With each role having its own Git
repository, it makes it easy to utilize that role throughout many playbooks. The
role will have to be available to any playbook that utilizes it, so consider the
environment where you're executing your playbooks. It's very common to use your
own roles, but consider that you may want to reuse or even borrow roles from the
open source community.
Ansible
Galaxy is an online, open source community where we can share and consume Ansible roles. This
public library is provided by Ansible, but the roles contained within are
authored by the community themselves. Ansible Galaxy provides a searchable
database so that you can find the roles that may be most appropriate for your
workloads. Many of these roles include documentation and videos so that Ansible
users can understand the purpose and intent for a given role, as well as the
developers who create them. Always audit the roles you intend to consume within
your playbooks, especially in production environments.
Ansible
Galaxy provides many online features that can help with consuming and finding
roles that are perfect for your organizational needs. It provides a
Documentation tab directly within the website homepage. This will help you
understand how to download and implement roles that you find on Ansible Galaxy.
Additionally, it'll provide instructions on how to develop and upload roles you
wish to share with the Ansible Galaxy community. When you're ready to look
through the catalog of roles already available through Ansible Galaxy, there's a
helpful browsing and search feature built right into the website. You can search
for Ansible roles by their name, tags, or even other role attributes. Many roles
on Ansible Galaxy are designed for different operating systems or even
networking devices, so you can use the Search tab to specifically filter for
those roles. Results from the search are always presented in descending order
based on the Best Match score.
Here's a nice image of the Ansible Galaxy website,
but let's go ahead and jump online and take a look around. Here we can see the
Ansible Galaxy homepage. The Documentation link is contained in the
upper‑right corner and will lead you to helpful
information about Ansible Galaxy. For those getting started with Ansible Galaxy,
this can be a wealth of information to help guide your journey. The search
feature allows us to find the roles we wish to utilize in our own workloads.
For
example, we may wish to deploy a web server using Apache. We can enter Apache
and hit Enter. The roles are then listed in a best match descending order. We
can see tags that have been applied to the various roles, as well as the author,
and drill in for more information. We can see a lot of heuristics here,
especially content scoring for quality and community.
You can also provide your own ratings once you log
in. If you wish to download this role, you can do so by clicking the Download
tarball or taking advantage of the helpful command for ansible‑galaxy that you can paste right into your terminal.
Links for the repository and issue tracker for any given role may be included if
they're available, as well as a doc site.
Let's try for one more. Let's take a look at a role
that may install an FTP server. We see a number of results and can scroll down
to find the one that may meet our needs. As we had previously discussed with
vsftpd, we can click into that community role. Here we can get an at‑a‑glance look at quality, any issues that may be
tracked, as well as their GitHub repository. If we determine that this is the
right role for us, we can utilize the command right in our terminal to pull this
role into our environment. Now that we've had a look around in the Ansible
Galaxy website, let's understand how we can come back to our command line and
take advantage of some of what we've found.
Directly from our command line, we have available
to us the ansible‑galaxy command. This can be used for searching,
displaying information about installing, listing, removing, or even initializing
roles. Ansible‑galaxy search will provide us with that search
functionality that we saw on the website. Ansible‑galaxy info displays more information about a
specific role. And if we've decided to utilize one of the roles we found, we can
then use the ansible‑galaxy install argument to download a role directly from Ansible
Galaxy. By default, these roles will be installed into the first directory that
is writable within your current role path. Use the ‑p option to specify a different
directory you may want to install this
role.
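As a sketch of those subcommands (the role name here, geerlingguy.apache, is just one example of a Galaxy role):

```shell
# Search Ansible Galaxy by keyword
ansible-galaxy search apache

# Display more information about a specific role
ansible-galaxy info geerlingguy.apache

# Install a role; -p installs to a chosen directory instead of the
# first writable directory on the role path
ansible-galaxy install geerlingguy.apache -p ./roles
```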
Putting
on our security hats, we now have to talk about the Ansible Galaxy community.
First, you do not have to use Ansible Galaxy to store your roles if it's not
appropriate for you or your organization. However, it is a best practice to
store roles in their own unique Git repository or any version control system you
may prefer. It is discouraged to put sensitive data like passwords directly into a
role; such values should be set through variables and passed to the role within the
play itself.
A common approach to installing roles is to utilize
a requirements file. If your playbook is going to require specific roles, you can
create a requirements.yml file inside a roles subdirectory in
the project directory. This file is a YAML‑formatted list of roles that you wish installed.
For each role, you'll use the name keyword to override the local name of the
role, the version keyword to specify the version of the role, and the src
attribute to specify the role source. A very simple
requirements.yml entry at right shows us utilizing a geerlingguy.redis role at
version 1.6.0.
This
version can be found within Ansible Galaxy and will be retrieved from that
location if we were to invoke it in one of our playbooks.
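The slide's entry isn't reproduced in the transcript; that simple requirements.yml would look like:

```yaml
# roles/requirements.yml
- src: geerlingguy.redis
  version: "1.6.0"
```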
Once authored, we can utilize the requirements
file, as well as the ansible‑galaxy command to install the role directly to our
environment.
Ansible‑galaxy install can take an argument of ‑r
and supply the requirements.yml file. If you wish to install this into a
specified location other than your current working directory, you can use the
‑p flag. For additional documentation on this
technique, consult the online documentation at galaxy.ansible.com, specifically
around the topic of installing roles.
Let's
look at four examples of using a requirements file to install a role.
The
first example grabs the latest version of the geerlingguy.redis role from
Ansible Galaxy. You can see we've simply specified the src or source. The second
example includes the version keyword to specify a particular version of 1.6.0 we
wish to utilize within our playbooks. The third example specifies a version
commit hash. This will be utilized to pull the specified version from version
control. Additionally, it also uses the name keyword to rename this role to
something more user friendly within their environment. Lastly, the fourth
example uses the SSH protocol and selects the latest version on a specific
branch, in this case master. All four of these may be appropriate given your use
case, but consider all of the options you have available to you when authoring
your requirements file.
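Sketching those four entries (the Git URLs and commit hash are illustrative placeholders, not real repositories):

```yaml
# roles/requirements.yml
# 1. Latest version from Ansible Galaxy
- src: geerlingguy.redis

# 2. A specific version from Ansible Galaxy
- src: geerlingguy.redis
  version: "1.6.0"

# 3. A specific commit from version control, renamed locally
- src: https://github.com/example/redis-role.git
  version: 7c1734fb
  name: redis_role

# 4. Latest commit on a specific branch, retrieved over SSH
- src: git+ssh://git@github.com/example/redis-role.git
  version: master
```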
Once you've downloaded roles, you can use
ansible‑galaxy list.
The list subcommand will list the roles that are
found locally in your environment. You can see in this first example
ansible‑galaxy list shows several roles that have been
downloaded from Ansible Galaxy. If we need to remove any of these, the
ansible‑galaxy remove subcommand will do exactly that.
Simply
specify the name of the role you wish removed after the subcommand.
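For example (the role name is illustrative):

```shell
# List roles found locally in your environment
ansible-galaxy list

# Remove an installed role by name
ansible-galaxy remove geerlingguy.redis
```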
Let's move over to our terminal and try some of
these techniques. Now that we've authored a basic role, let's consider using one
from the community shared resources at Ansible Galaxy. Ansible Galaxy can be
found at galaxy.ansible.com. At galaxy.ansible.com, we can take advantage of the
search feature and find a role we want to use in our own playbooks. I'm going to
find one that installs Apache on CentOS or Red Hat family systems. Here I can
see one called centos_apache, and it's for CentOS and Fedora family servers.
This should be fine in our Red Hat environment. Now that I've clicked in, a few
easy things to take a glance at. One is the command we'll use to install this in
our environment. I'm going to put that in my copy buffer because I'll intend to
use that a bit later. But before I use any role that I find on Galaxy, I always
want to check out the intention and code that lives within. I'll bounce out to
the GitHub repository linked within the Ansible Galaxy page. From the README, I
can see that this does intend the Apache installation I was wanting, and I can
have a quick read at the intended uses of the role itself. Here you can see it's
recommended that you might want to set certain variables, as well as other
variables that may be available to you throughout the role, as well as some
default values they've set. Instructions on how to use it include the command we
saw previously to install this in your environment and then some additional
information about how to configure it for your environment as well. Once we're
ready to run it, you can even have a nice copy/pasteable playbook that you can
utilize. We'll get back to that in just a bit. For now, let's head over to our
terminal and put this to use. I'm still working from our Ansible area. And since
I need to install this, the first thing will be to paste that command that I
copied out of Ansible Galaxy. The ansible‑galaxy command has the install subcommand and then
the name of the role found on Ansible Galaxy. We can see that it reached out and
found the role and pulled it directly from its GitHub address.
The tasks/main.yml will contain the actual tasks.
It tells us that it's installed in the .ansible directory in
the current location. Let's browse into that directory and have a look. Here we
can see the typical layout of an Ansible Galaxy role. To understand what the
role particularly does, I'll have a look at a few of these files. First, I want
to see exactly what the tasks are to make sure it meets my needs. Looking at the
tasks, I can see the first one installs the latest version of Apache. We then
supply‑‑‑ Oh, okay. There's a Jinja2 template that they're
using for the configuration file, so I'll go have a look at that and make sure
there's nothing bad in there. They're going to remove some default files that
Apache installs. I'm familiar with these. I'm fine with removing those. And
lastly, they start and enable Apache. This makes a lot of sense. Works for me.
Alright, I do want to go take a look at that Jinja2 configuration file called
custom.conf.j2. So let's have a look at that one. Okay, so this just sets some
normal Apache parameters. It's using variables here, so I might be curious to
know what those defaults are. Let's take a look and see if we can find where
those variables are located. Here they are in the defaults/main.yml. If we're
okay with these, we can run the playbook as is. But these are now variables we
could override if we needed to make adjustments to any of these values. I'm fine
with these defaults for now. But consider that over time as you run on Apache
Server, you may want to revisit this, adjust some values, and rerun the
playbook. I think everything else was pretty explicit. I noticed there's a
handler, so let me just take one last look at the handlers. A simple restart of
Apache. That seems very prudent. Okay, now let's put this role to use in a
playbook. I remember there being an example of how to do so directly in the
GitHub repository. I'll go have a look at that now. Alright, so here they have a
hosts, and they're targeting some hosts called servers. That's a group we don't
necessarily have, but no problem. We can adapt from there. Our user argument is
typically root because we don't define this. If we had a specific user we wish
to use, we could declare that with that argument, the become True you've seen us
using, and lastly, the roles here make a lot of sense naming that role. Okay,
I'll copy and paste this and adapt it for my needs. (Working) Alright, I'm going
to need to create a playbook. So I'm going to call this playbook galaxy. And, as
always, the typical YAML extension. Start with the three dashes. Supply a name
for this one. We'll say Use a galaxy role to install apache. Okay, I'm going to
paste in the rest of that supplied work and start formatting it to meet my
needs. Well, they did a pretty good job, but I called my group webservers. I
shouldn't require the users argument. And from there, I think the rest looks
sane to me. Let me save this work. Now that we have the playbook authored, let's
go ahead and execute it. (Working) It found our two web servers. We can see that
it's calling the proper role. It was able to install Apache, the config file,
removed default files from the Apache configuration, and start and enable
Apache. Excellent. Well it looks like we were able to successfully use that role
to deploy and start Apache. I wonder if we could do the same to stop Apache as
we probably wouldn't want to leave this web server running just for these
demonstration purposes. To do so, we could go and make a small update to the
role as it's installed on our machine. Remember it was in home, our demo user,
the .ansible directory, and then the roles area for the role we installed. Let's
update the tasks. Let's put in a new task that simply stops Apache. (Working)
We'll use the service module, calling the same. We want the state to be stopped,
enabled to be no. Now you might be thinking if we just run this again that it
would restart Apache and re‑enable these things. But since that work was
already performed and Apache's already in that state, that won't occur nor will
the notification of the handler. Let's give that a try to prove this out. Let me
find my command again. I'm in the wrong location. So let me cd back over to
home/demo/ansible. I can then scroll back in my commands and rerun the
galaxy.yml. Now we've updated the role for our own customizations. This is
perfectly appropriate. Had these customizations been ones that were useful to
the community, we could also consider contributing back those changes through
GitHub. We can see that we were able to stop the Apache Server and disable it
since that was how we wanted to leave this demonstration environment. Finally, I
feel comfortable that we were able to gather a role from Ansible Galaxy, utilize
it, and even customize it to meet our needs. This expands a great array of work
available to you from the community, as well as allows you an opportunity to
contribute and customize the work found therein.
Working with Dynamic Inventories
Welcome back to our course, Ansible Fundamentals.
In this module, we'll look at managing complex inventories. This section will
explore working with dynamic inventories. We'll learn how to install and utilize
dynamic inventory scripts for our Ansible targeting. Up to now, we've taken a
look at using Ansible static inventories. These are very easy to write and
convenient when you have a small number of managed hosts. The reality, however,
is that these become difficult to keep up to date in really large or dynamic
infrastructure. This is also hard to use with short‑lived cloud instances where cloud‑based deployments may spin up and spin down quite frequently. Many
large environments have what they consider a single source of truth that tracks
the hosts available within their environment. This could be monitoring
systems like Zabbix or an Active Directory type system they may
utilize. Many configuration management databases exist, and depending on what
you have available within your environment, you can utilize that to understand
what is actually available throughout your ecosystem.
To address the shortcomings of static inventories,
Ansible also provides dynamic inventories. Dynamic inventories are scripts or small executable programs that can
generate the
inventory automatically. These are
used to get information from external sources of truth, like one of those
configuration management databases (CMDB). We configure dynamic inventories exactly as
you would a static inventory file, but these will be marked with executable
permissions. Here's a simple example
of how to use the chmod command to add executable permissions to a
script we're calling inventory‑script.py. These scripts can be written in any
programming language that provides inventory output in JSON
format. When we wish to use something
of this approach, we have the inventory file, in this example, being used as the
inventory. When you utilize ansible‑inventory ‑i
and provide the name of an inventory file that is a dynamic script, you can also
provide the ‑‑list argument to list out all the hosts that were
gathered using the script.
ansible-inventory -i inventory-file --list
Many
sample dynamic inventory scripts are available from the Ansible GitHub site.
Have a look over at github.com/ansible
and see if one of those meets your needs.
When you utilize the ansible‑inventory ‑‑list feature, INI‑format files will
be output in JSON format. Dynamic inventory utilizes JSON output because it's an easier format to parse for complex
inventories.
When considering using dynamic inventories, a
path of least resistance is simply to use one of the dynamic inventory scripts
provided through GitHub. Many of these have documentation included to indicate
how they are configured. It is also possible to author your own dynamic
inventory scripts with helpful documentation provided at docs.ansible.com.
Follow the link provided to find out more information, but we'll explore a
simple overview of how to author these dynamic scripts in the following slides.
When authoring a basic dynamic inventory script,
you can write them in any language. If you offer them in an interpreted
language, make sure to start with the appropriate interpreter line. Here we're
showing an example of using Python. The dynamic inventory would start with the
script #!/usr/bin/python to point to the interpreter being utilized. The file
itself should have executable permissions so that Ansible can execute it when it
comes time to invoke the dynamic inventory script. Any dynamic inventory script
when passed the ‑‑list option should output in JSON format a dictionary of all
of the hosts and groups within the inventory. An example of this type of output
is seen here.
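The slide's output isn't captured in the transcript; a sketch of that JSON shape, reusing the server1.example.com variables discussed next (the group name and second host are assumptions):

```json
{
    "servers": {
        "hosts": ["server1.example.com", "server2.example.com"]
    },
    "_meta": {
        "hostvars": {
            "server1.example.com": {
                "ntpserver": "ntp.example.com",
                "dnsserver": "10.0.0.1"
            }
        }
    }
}
```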
Dissecting
that example, we start with the meta section. _meta sections can be used to provide inventory
variables from an external source. You can see here two variables, ntpserver and
dnsserver, being supplied for the server1.example.com host. If you do not
provide inventory variables, simply provide an empty _meta section in order to
speed up processing. Syntax is always important, so make sure that you include
all the right braces, as well as commas in the proper places.
To summarize using dynamic inventories, step 1
would be to place the script directly onto your Ansible control node. The next step
would be to ensure that it is executable by adding the execute permission. Here
you can see us using the chmod command to do so. Follow the documentation to
complete any required configuration necessary to utilize the inventory script.
Next, update Ansible through its configuration file or through command line
invocation pointed to the script to ensure that it's using the inventory you've
now provided. You'll utilize this in the exact same way that you use static
inventories. Lastly, test to make sure that this dynamic inventory script is
working properly using the ansible‑inventory command. Remember, the ‑‑list argument should return all hosts in your
environment.
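Those steps can be sketched as commands (inventory-script.py is the example filename used earlier):

```shell
# Add the execute permission
chmod +x inventory-script.py

# Test that the dynamic inventory returns all hosts
ansible-inventory -i inventory-script.py --list
```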
Let's
have a look at an example of using dynamic
inventories. Ansible dynamic inventories come from a number of places.
These could be provided by your cloud provider, authored by someone in your
organization, or even a quick script you whip up just to help you manage your
Ansible interactions. For this short demonstration, I've done exactly that.
Let's take a look at my script.
I've called mine dynamic_inventory_example.py and
authored it in Python, declaring the Python interpreter at the top. You can see I've imported
the modules I'm going to need and provided the simple structure that Ansible
expects. Here, I've supplied some arguments for the specific hard‑coded IP addresses I have in my inventory. A more
elegant one would potentially probe an API or something like that. This is
typically what's provided by cloud providers. Here I'm just using the nodes
available in this example environment. The main feature that every inventory
must provide is the ability to pass in a ‑‑list argument. While you may or may not utilize
this, it's something that Ansible will expect. Let's take a look at just simply
running this. Before I can run this, I need to first make it executable. I'll
add the execution flag with the chmod command. We can now see it has
those execution permissions. I can go ahead and run this outside of Ansible just
using my Python interpreter. I'll say python, name it, and then invoke that
‑‑list argument that Ansible will utilize.
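The demo script itself isn't shown in the transcript; a minimal sketch of such a hard-coded dynamic inventory in Python (the group names and addresses are assumptions) might look like:

```python
#!/usr/bin/env python3
# Minimal dynamic inventory sketch. Ansible invokes the script with --list
# and expects a JSON dictionary of groups (plus a _meta section) on stdout.
import json
import sys

# Hard-coded addresses standing in for an API or CMDB lookup.
INVENTORY = {
    "webservers": {"hosts": ["192.168.4.10", "192.168.4.11"]},
    "databases": {"hosts": ["192.168.4.20", "192.168.4.21"]},
    "_meta": {"hostvars": {}},  # an empty _meta speeds up processing
}

def main(args):
    if "--list" in args:
        return json.dumps(INVENTORY)
    # --host <name> is also part of the protocol; with an empty _meta,
    # returning an empty dictionary is sufficient.
    return json.dumps({})

if __name__ == "__main__":
    print(main(sys.argv[1:]))
```

Running it as `python dynamic_inventory_example.py --list` prints the four hosts as JSON.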
We
can see the four nodes I want to have are extracted from that output. It can be
more typical to have this in JSON format, especially from API calls. So it just
depends on what your dynamic inventory may be providing for you. Let's also
discuss how we can use this in tandem with the inventories we've already
created. As we mentioned, you could create an inventory directory and simply
collect all of your inventories within. I'll go ahead and create that structure
now. First, I have a file named inventory, so I'm going to move that to
inventory.txt to differentiate it from my Python scripts. I'll make the
directory inventory. I'll move both my inventory into that, as well as my new
dynamic inventory script.
And
lastly, to make sure that Ansible finds it, I can update my ansible.cfg.
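The change could look like this (the path is relative to the project directory):

```ini
# ansible.cfg
[defaults]
# Point at a directory; every file inside -- static INI files and
# executable scripts alike -- is combined into one inventory.
inventory = ./inventory
```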
I had previously created an inventory entry that
pointed to the file named inventory. Now that's a directory. So all the
files contained, which are my Python script, as well as my text‑based INI file, will be parsed as one entire
inventory. This is a true evolution of using singular files for inventory and
allows you a lot of flexibility as we discussed. That concludes this section.
I'll see you in the next video.
Managing Inventory Variables
In
this section, we'll take a look at Managing Inventory Variables. There are many
different ways to construct inventory depending on your environment. We'll have
a look at several use cases, as well as how to
combine multiple inventory sources.
One of the best things you can do to make Ansible
work well in your environment is to use inventories effectively.
Well‑formed and structured inventory provides you with
many ways to easily manage hosts. Assigning hosts into multiple groups and
organizing your groups in ways that are suited to your environment is always
important. Some ways you may consider grouping your hosts is by the
function of the server, such as what
does it do, like a
web server or database server nodes.
Additionally, you could consider geographic
location. Perhaps your entity has many
regions or utilizes several different data centers. In a blended environment,
you could consider grouping like hosts by their
processor architectures. You could
additionally consider the various operating systems in use
throughout your environment or even
the versions they're in. Lastly, another way to consider grouping your machines
is by the various
life cycle phases that they belong to,
such as development,
testing, staging, or production environments. A lot of this can be handled through conditionals
within your playbooks; however, if you've gone ahead and done so within your
inventory, it can be more efficient and save you a lot of cumbersome work when
authoring your playbooks. When you consider that a play should target a specific
kind of host, you should always ask the question if this would be best placed
within a group in your inventory. Once defined within your inventory, you can
reuse these groups throughout all of your Ansible workloads.
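A sketch of inventory grouped along those lines (host and group names are illustrative, reusing the web01/web02 hosts from earlier in the course):

```ini
[webservers]
web01
web02

[databases]
db01

# Life cycle grouping via a parent group of child groups
[production:children]
webservers
databases
```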
To continue using inventories effectively, you can
take advantage of several aspects that Ansible makes available to you to make
your work easy and human friendly. Longer complicated host names can be
shortened to something more friendly using the ansible_host variable. An example of this is when you
have
long‑formed names such as those generated from cloud environments
like AWS, something like the example here of ip‑IP address‑us‑west‑2.compute.internal would nearly be impossible for a human to
remember; however, setting this to easy keywords such as
webserver can save you time and allow
you to take advantage of the techniques available through inventory. You can do
this directly in the inventory, as well as the host_vars sub
directory for that system. Consider
all the ways we've discussed variables being configured throughout this course
to make sure that targeting is an easy technique throughout your Ansible.
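A hedged sketch of that aliasing in INI inventory (the long AWS-style hostname is a made-up example):

```ini
# "webserver" becomes the friendly name used throughout playbooks;
# ansible_host carries the real, unwieldy address.
webserver ansible_host=ip-172-31-5-142.us-west-2.compute.internal
```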
When
working with more complex inventories, we need to consider variables and which one
will have precedence, especially for systems that may be contained in multiple
groups. If a system is contained in multiple groups and those groups have
conflicting variables, you'll need to understand which one will take precedence.
A variable set as a host variable will always take
priority. If the variable is set by both a
child group and its parent, the child will always override. Lastly,
group variables set by the all group are
overwritten by any other group that a host belongs to.
When
two groups at the same level contain a host, for example, two parent groups,
they are merged alphabetically. So consider a variable like testvar that is set
by both group A and group B. When merged, group B comes alphabetically
second and would override any group A variable of the same name. You can
configure Ansible, like many other Linux systems, to behave in many other ways.
If you wish to alter this behavior, please have a look at the online
documentation for inventory to be able to do so. A best practice and guideline
would be to set up your group variables in a way that avoids any of these
collisions.
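To illustrate the alphabetical merge (the group and variable names follow the example above):

```yaml
# group_vars/a_group.yml
testvar: from_a

# group_vars/b_group.yml
testvar: from_b   # b_group merges alphabetically later, so this value wins
```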
Variables
will grow and expand as you author more Ansible, and keeping these variables organized will be very
critical to doing great work with Ansible. You'll want to keep things simple,
define variables that have common sense names to their purpose. If you restrict
your variable naming approach to just a couple different methods and only a few
places, you can really streamline the ease of finding out variable names, uses,
and purposes. It's also a very good practice to not repeat yourself. Set variables for a whole group instead of individual
hosts where you can, especially when they have the same value. Organizing your variables into small, readable
files makes it easy to find the variable you're looking for. You can use
directory structures instead of a file for things like the group_vars or
host_vars. All files in that directory are automatically used. You can split
your variable definitions into as many files as are necessary for larger
projects. To make it easiest to find particular variables, group like kinds of
variables into the same file and give it a very meaningful name. Now that we
understand that inventory comes in many forms, it's possible to consult multiple
inventory sources in your Ansible workloads. You can utilize multiple inventory
files, as well as scripts combined together to form your true Ansible inventory.
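For instance, a project using directories for group_vars and host_vars might be laid out like this (file and directory names are illustrative):

```
inventory              # static inventory file
group_vars/
  all/
    common.yml         # variables shared by every host
  webservers/
    firewall.yml       # firewall-related variables for webservers
    packages.yml       # package lists for webservers
host_vars/
  web01.example.com/
    network.yml        # host-specific networking variables
```

Every file inside a group's or host's directory is loaded automatically, so you can split variables by topic and give each file a meaningful name.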
The
ansible.cfg can be configured with the
inventory directive to look at a directory instead of a file. This directory
could then contain all of the static and dynamic inventory scripts that you wish
to utilize for your overall Ansible inventory. Once it's configured in this
fashion, Ansible will automatically combine all of these together during play
execution. Multiple inventory sources are combined in alphabetical order by
default. The latter alphabetical sources will win any conflicts. In order to
mitigate this, be sure to name the files and scripts you use for inventory very
carefully.
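A minimal sketch of this configuration, assuming a hypothetical inventory.d directory:

```ini
# ansible.cfg — point the inventory directive at a directory.
# Every static file and executable script inside it is combined,
# in alphabetical order, into one inventory.
[defaults]
inventory = ./inventory.d
```

The directory could then hold, for example, 01-static-hosts alongside a dynamic inventory script such as 02-cloud.py; the numeric prefixes make the merge order explicit.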
Spending
some time thinking about your inventory design can go a long way in helping you
manage the targeting of your managed hosts. Designing careful group names and
organizations can make it very easy to write your plays to target the host you
wish to manage. Always remember that the host directive in a play can target a
group, not just a single host. Also, a group can consist of a single host or
many hosts. Additionally, groups can also contain child groups or a collection
of other groups. You can also write playbooks that contain multiple plays, each
of which may perform their actions on a different set of groups.
Have
a look at the example to the right. The first play targets databases,
a collection of systems defined by that group name, while the second play
targets a group called webservers. More complex still, you can set a
condition on a task so that it runs on machines only if they meet that condition. An
example of this is below, where the inventory_hostname is in a specific group
named testing.
The
task will only execute when this condition is met.
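A minimal sketch of such a playbook (the group names, module choices, and file name are assumed for illustration):

```yaml
# site.yml — two plays, each targeting a different group,
# plus a task gated on group membership.
---
- name: Configure the database tier
  hosts: databases
  tasks:
    - name: Ensure mariadb-server is installed
      ansible.builtin.dnf:
        name: mariadb-server
        state: present

- name: Configure the web tier
  hosts: webservers
  tasks:
    - name: Run only on hosts that are also in the testing group
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} is a testing host"
      when: inventory_hostname in groups['testing']
```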
A
common approach to the software life cycle is to have specific machines for
development, testing, and production. This concept can transition directly into
your inventory design. At the right is a single inventory that has several
groups based on server purpose, as well as the server life cycle.
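Such an inventory might look like this (hosts and group names are invented for the example):

```ini
# One inventory with purpose groups and life-cycle groups.
[databases]
db01.example.com
db02.example.com

[webservers]
web01.example.com
web02.example.com

[testing]
db01.example.com
web01.example.com

[production]
db02.example.com
web02.example.com
```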
Given
this example, several advantages exist. We can target machines based on their
purpose, such as database or web server, or we could deploy workloads just into
testing environments instead of production. Never forget that you also have the
concept of targeting all.
Another
way to organize a similar set of systems would be to have separate inventory
files.
In this example on the right, the top file
represents an inventory file we would call inventory‑production and has the systems broken down by
database, webservers, and the production group. A secondary file called
inventory‑testing has database and webserver groups as well,
but adds the testing group instead. A disadvantage of this approach is that it's
more difficult to easily write a single play that affects both testing and
production. If you wanted to, you would have to call a different inventory file,
possibly through the command line for different circumstances.
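As a hypothetical example of the split-file approach, the production inventory might look like the following, with inventory-testing mirroring it but defining a testing group instead:

```ini
# inventory-production
[databases]
db02.example.com

[webservers]
web02.example.com

[production:children]
databases
webservers
```

You would then select the environment on the command line, for example with ansible-playbook -i inventory-production site.yml, and run again with -i inventory-testing for the other environment.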
With
that approach, you may be required to run the play multiple times. Another
reason you may want to separate your inventory into multiple files would be to
organize your group variables. This will help clarify which group variables take
precedence. Consider that if the same variable is set in two different groups
for an inventory, and a host is a member of both groups, you'll need to consider
which group variable will take precedence. From an Ansible perspective, the last
value loaded will always take precedence, and by default, files are loaded in alphabetical order. The
later in the alphabet the file name appears, the higher its precedence in this
case.
In
this example shown here, the value of a_conflict displayed by the playbook is
from webservers. The reason for this is that the webservers.yml inventory
file comes alphabetically later than the production.yml, which comes
alphabetically after inventory. One way to mitigate this type of conflict would
be to utilize child groups. A variable set within a child group takes precedence
over any values set by their parent group. In the example at right, the host is
still a member of both webservers and production, but the group production is a
child group of the webservers group. Now the value of a_conflict reported by the
play is from_production. Consider, however, that if you write a play that
targets the group webservers, it will also affect all hosts in the group
production given this configuration.
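A sketch of that child-group arrangement in INI form (names follow the example; a child group's variables override its parent's):

```ini
# The host is in production; production is a child of webservers,
# so the host is a member of both groups.
[production]
web01.example.com

[production:vars]
a_conflict=from_production   # child group value: takes precedence

[webservers:children]
production

[webservers:vars]
a_conflict=from_webservers   # parent group value: overridden
```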
We
can also separate inventories by environment. This helps us avoid the
conflicts we saw previously by organizing groups of like servers into
different classes. While this may or may not work for your use case, you can
consider what may be right for you given a lot of these techniques. You can
build as many inventories as are necessary to organize your hosts and make them
targetable for your workloads. With a large collection of inventories, it may be
necessary to execute the same playbook multiple times, targeting various
inventories to ensure the work is properly administered across your entire
environment, but in doing so, you'll gain the ability to have different values
for each environment for the various variables you may need.
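In practice, that means invoking the same playbook once per inventory, along these lines (inventory and playbook names are assumed for illustration):

```shell
# Run the same playbook against each environment's inventory.
# Each run resolves the same variable names to that
# environment's values.
ansible-playbook -i inventory-testing    site.yml
ansible-playbook -i inventory-production site.yml
```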
As
your workloads become more elaborate, you may require conditional variables.
It's not uncommon to need to load some variables in your playbook with vars
files or with the include_vars module. With these available to you, you can
further control the order in which variables are loaded through your play. Both
the vars files and the include_vars have a
higher precedence than group variables, so they will override any
variable set through group variables.
In this example, if the host is in the group's
production and webservers, the value of a_conflict set in the production.yml
file in the vars directory will be displayed due to include_vars having
precedence over these group variables.
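A minimal sketch of that pattern, assuming a vars/production.yml file that sets a_conflict:

```yaml
# Variables loaded with include_vars override group variables.
---
- name: Demonstrate include_vars precedence
  hosts: webservers
  tasks:
    - name: Load environment-specific variables
      ansible.builtin.include_vars:
        file: vars/production.yml

    - name: Show which value won
      ansible.builtin.debug:
        var: a_conflict   # value from vars/production.yml, not group_vars
```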