Ansible

Frontend

Test Docker Swarm Init with Molecule and get it running on manager. https://www.digitalocean.com/community/tutorials/how-to-test-ansible-roles-with-molecule-on-ubuntu-18-04


Abstractions: Modules, Roles, Playbooks, Tasks


Commands

ansible all -m file -a 'path=/root/.ssh state=directory owner=root group=root mode=0700' -bK
ansible all -i '192.168.226.7,' --key-file ~/.ssh/hermes.key -m docker_swarm -a "join_token='test-token' remote_addrs=192.168.226.12:2377 state=join advertise_addr=192.168.226.6"

scp -i /path/to/hermes_private_key  /path/to/secret_file ubuntu@<Swarm Manager IP address>:~
ansible all -i '<Swarm Manager IP address>,' --key-file /path/to/hermes_private_key -m docker_stack -a "name=app compose=docker-compose.yml" -vvv

Gotchas

By default, Ansible assumes it can find a /usr/bin/python on the remote system that is either Python 2 (version 2.6 or higher) or Python 3 (version 3.5 or higher). https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-handle-python-not-having-a-python-interpreter-at-usr-bin-python-on-a-remote-machine
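
One common fix is to point Ansible at the interpreter explicitly via the ansible_python_interpreter variable. A minimal sketch in a YAML inventory (host names and the Python path are placeholders, not from these notes):

all:
  vars:
    # assumption: every node has Python 3 at this path; adjust per distro
    ansible_python_interpreter: /usr/bin/python3
  hosts:
    web1.example.com:
    db1.example.com: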

Ansible (software provisioning, configuration management, workflow automation)

Ansible by default manages machines over the SSH protocol. Once Ansible is installed, it will not add a database, and there will be no daemons to start or keep running. You only need to install it on one machine (which could easily be a laptop) and it can manage an entire fleet of remote machines from that central point. When Ansible manages remote machines, it does not leave software installed or running on them, so there’s no real question about how to upgrade Ansible when moving to a new version.

Ansible is written in Python and uses SSH to execute commands on different machines.

Configuration Management: Configuration management is an “infrastructure as code” practice that codifies things, e.g. what packages and versions should be installed on a system, or what daemons should be running. Workflow automation may be anything from provisioning cloud infrastructure to deploying software.
Roles are self-contained plays that may be applied to different machines: you might have a “web server”, “mysql-replica”, or “redis” role. You’ll find roles to set up databases, web servers, and monitoring systems.

Ansible is agent-less. This works well for small setups but becomes problematic at a larger scale. There are a few ways to overcome this problem, and ansible-pull is one of them. This command polls a git repo, clones it, and executes playbooks. This is a good solution for auto-scaling cloud infrastructure that needs configuration after it’s created.
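
A minimal sketch of the playbook ansible-pull could run (by default it looks for a playbook named after the host, or local.yml, in the cloned repo; the user name below is just an example):

# local.yml: executed by ansible-pull on the machine itself after the clone
- hosts: localhost
  connection: local
  become: true
  tasks:
    - name: Ensure the deploy user exists on this node
      user:
        name: deploy        # hypothetical user, only to show a task
        state: present

It is typically wired up with something like ansible-pull -U <repo URL> from cron or a boot script on the newly created instance.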

Do we really need provisioning in the cloud? Instead of using empty AMIs you could bake your own AMI and skip the whole provisioning part completely, but I see a giant flaw in this setup. Every change, even a small one, requires recreation of the whole instance. If it’s a change somewhere at the base level, then you’ll need to recreate your whole fleet. It quickly becomes unusable for deployment, security patching, adding/removing a user, changing config and other simple things. Even more so, if you bake your own AMIs then you need to provision them somehow anyway, and that’s where things like Ansible appear again. My recommendation here is again to use Packer with Ansible. So in most cases, I’m strongly for provisioning because it’s unavoidable anyway.

Why not use it?

Ansible doesn’t necessarily have to own and do every single task that it sets out to do. Being that glue or that orchestrator (think of it as the composer of a nice symphonic piece), Ansible can reach out and work with other tools, letting each do the task it’s best suited for.

Ansible requires Python to be installed on the remote machine as well as the local machine. Terraform does not; not only that, Terraform even has its own SSH functionality. The remote-exec provisioner could be used for all software installation, and that way we could get rid of Ansible altogether, but as I said earlier, Ansible has a nicer way of defining dependencies, and it has quite good control over what has been done on the server through its configuration settings (if, for example, an apt task is declared with state set to present, Ansible registers that the package is already installed and does not bother installing it again).
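
A rough sketch of that idempotence check as an Ansible task (the package name is only an example):

- name: Present means "install only if missing", so re-runs are no-ops
  apt:
    name: nginx
    state: present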

So, instead of using the remote-exec provisioner, we invoke ansible via the local-exec provisioner.

Datacentre specific


Team Specific

Canonical Example of Task:

Installing nginx. The nginx role can be dependent on an SSL role, which installs the SSL files.
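
A sketch of how that dependency could be declared, assuming roles named nginx and ssl (paths follow the standard role layout):

# roles/nginx/meta/main.yml: the ssl role runs first and puts the certificate files in place
dependencies:
  - role: ssl

# roles/nginx/tasks/main.yml: can now assume the certificates exist
- name: Deploy the TLS-enabled site config
  template:
    src: site.conf.j2
    dest: /etc/nginx/sites-available/default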

How it works


Use Cases


Ansible works against multiple systems in your infrastructure at the same time. It does this by selecting portions of systems listed in Ansible’s inventory, which defaults to being saved in the location /etc/ansible/hosts. You can specify a different inventory file using the -i  option on the command line

Ad Hoc Commands

An ad-hoc command is something that you might type in to do something really quick, but don’t want to save for later. For instance, if you wanted to power off all of your lab for Christmas vacation, you could execute a quick one-liner in Ansible without writing a playbook.

Playbooks

Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process.

If Ansible modules are the tools in your workshop, playbooks are your instruction manuals, and your inventory of hosts are your raw material.

A playbook invokes modules and tasks, which are executed sequentially in order from top to bottom. Modules are executable bits of code pushed out to the target machine. Modules act against our inventory, which consists of groups of hosts. Where does the inventory come from? It doesn’t matter: there are many inventory providers.
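
A minimal playbook as a sketch; the web group and the repository URL are assumptions, not something from these notes:

# playbook.yml: one play, run top to bottom against the hosts in the 'web' group
- hosts: web
  become: true
  tasks:
    - name: Ensure the app directory exists
      file:
        path: /opt/app
        state: directory
        mode: "0755"

    - name: Check out the application code
      git:
        repo: https://example.com/app.git   # placeholder repository
        dest: /opt/app/src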

Examples of modules: apt/yum, file-based (copy, file), git, ping, debug, uri…

A special group of modules are the run-command modules: command, shell, script (which transfers a local script to the remote node and runs it), and raw.
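
A hedged sketch of what those look like as tasks (the script path is a placeholder):

- hosts: all
  tasks:
    - name: command runs a single binary, no shell features (pipes, redirects)
      command: uptime

    - name: shell when you actually need pipes or shell variables
      shell: dmesg | tail -n 20

    - name: script copies a local script to the node and runs it there
      script: ./scripts/bootstrap.sh   # path on the control machine

    - name: raw works even before Python exists on the target
      raw: apt-get install -y python3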

Ad hoc commands:

ansible all -m ping
ansible web -m command -a "uptime"
ansible localhost -m setup


sudo apt-get install ansible

sudo useradd ansible
ssh workstation
sudo useradd ansible
sudo passwd ansible
su - ansible
ssh-keygen
ssh-copy-id workstation
ssh workstation

Inventory

Create the inventory in /home/ansible (“workstation”). Create a playbook: a .yml file.
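
A sketch of such an inventory in the YAML format (group and host names are placeholders):

# /home/ansible/inventory.yml
all:
  children:
    web:
      hosts:
        workstation:
    db:
      hosts:
        db1.example.com:

Run with ansible-playbook -i inventory.yml playbook.yml so Ansible uses this file instead of the default /etc/ansible/hosts.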

Defining a local-exec provisioner is as simple as the remote-exec one, but it takes a few other arguments. As the playbooks are placed inside the Ansible directory, we use the working_dir argument to tell Ansible where it should do its work. With the environment object we can set environment variables to be used while the script is running, variables that Ansible can pick up instead of passing them via Ansible’s --extra-vars (which of course is a legal way of doing it too!).

provisioner "local-exec" {
    environment {
        PUBLIC_IP  = "${self.ipv4_address}"
        PRIVATE_IP = "${self.ipv4_address_private}"
    }

    working_dir = "../Ansible/"
    command     = "ansible-playbook -u root --private-key ${var.ssh_key_private} playbook.yml -i ${self.ipv4_address},"
}

I would recommend that, either in the Ansible script or in a local-exec provisioner clause, you remove and then re-add the server’s host keys. That way you will not have to manually confirm that you accept the host keys, and you can make sure the Ansible tasks don’t fail because of a stale or mismatching host key.
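
One way to do that from the playbook itself, as a hedged sketch: handle the key on the control machine before any connection to the node is made.

- hosts: all
  gather_facts: false            # don't touch the node before its host key is sorted out
  pre_tasks:
    - name: Drop any stale host key for this server
      command: ssh-keygen -R {{ inventory_hostname }}
      delegate_to: localhost
      failed_when: false         # fine if there was nothing to remove

    - name: Record the server's current host key
      shell: ssh-keyscan -H {{ inventory_hostname }} >> ~/.ssh/known_hosts
      delegate_to: localhost

    - name: Gather facts now that the key is trusted
      setup: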


Roles

Roles need to have a certain directory structure.

How to use a role?
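
A sketch of both sides, assuming a role called nginx:

# expected layout (only create the directories you need):
# roles/
#   nginx/
#     tasks/main.yml
#     handlers/main.yml
#     templates/
#     defaults/main.yml
#     meta/main.yml

# site.yml: applying the role to a group of hosts
- hosts: web
  become: true
  roles:
    - nginx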


Using Terraform with Ansible

https://jite.eu/2018/7/16/terraform-and-ansible/# https://www.hashicorp.com/resources/ansible-terraform-better-together

Terraform is an open source command line tool that can be used to provision any kind of infrastructure on dozens of different platforms and services. Terraform code is written in HCL, the HashiCorp Configuration Language. With Terraform you simply declare resources and how you want them configured, and Terraform maps out the dependencies and builds everything for you. When you run Terraform, it very quickly and efficiently crawls through your code and puts everything in the correct order. When you run terraform plan, it builds the graph before it actually goes and deploys your infrastructure; this graph represents all the variables, resources, and dependencies required to stand up, say, a single virtual machine in Azure.

The other half is using Terraform to call Ansible. Terraform is a great infrastructure provisioning tool, but you may have noticed that it doesn’t come with a config management system. That’s where Ansible comes in. We use Terraform to stand up virtual machines or cloud instances, and then we hand over the reins to Ansible to finish up the configuration of the OS and applications.

Packer builds machine images on different platforms.  https://news.ycombinator.com/item?id=21022067

The remote exec can be handy when you don’t have SSH access to the target host. So you need to run Ansible on that machine, but you might not be able to SSH to it directly. Or, you know, you may not be able to connect to it. We can use remote exec, you just have to figure out a way to get your playbooks onto the machine. So here I’ve kind of hacked together some code that will push all of the Ansible code out there, install Ansible, and now we’re running it from the remote host itself. It’s a little bit more work, but there are some use cases and scenarios where this could be handy.

Can you talk a little bit more about running Ansible directly on a target machine?

Sean: Yes. A little bit more in the way of moving parts, right? Because you need, obviously, the Ansible binary and the command itself to be able to run, and you’ve got to have a way to feed your playbooks or get them somehow onto that machine. So if you can do those two things, either with a shell script or a little wrapper, you can run the remote way where all of the Ansible activity is happening on the machine itself, and you’re just doing it to local host instead of doing it remotely over SSH.

Output the host name and put it in the inventory ->

The baked versus fried discussion: are we baking a fully golden image, or are we getting all the pieces together and frying it at the end? I’m more for the latter from my past operations.

I feel like it’s better to get as much together as you possibly can and build that image, and then do your deployments at the end, because there is always going to be that little bit of configuration management you have to do. When I think of Ansible in the bigger scheme of things, there’s more to it than just the systems being set up, right? You have your application monitoring to set up and orchestrate, and you have to set up the alerts for all of that.

I think one of the most effective ways is that we could actually have an Ansible provisioner, so that’s one thing that we’re actively working on as two companies and getting that built, so stay tuned for that.

01 June 2021