A DevOps transformation without implementing Infrastructure as Code will remain incomplete: Infrastructure Automation is a pillar of the modern Data Center.
With tools like SaltStack and Docker, things are becoming easier.
This tutorial is part of a Painless Docker Course.
If you are interested in discovering our new courses about AWS, register using this form and we will share interesting stuff with you!
Update:
After sharing this story to Reddit, I read some comments saying that I am reinventing AWS Lambda or a serverless system in general.
I’d like to say: yeah, why not!?
After all, the purpose of this course is educational and you can consider it as a reverse hack.
Happy Hacking !
We are going to use a set of tools (SaltStack, Boto3 with Python 3, the AWS CLI, and EC2) to create a master and some minions, each running a Docker container.
This diagram explains a lot of what we will do.
We will need the ID of the VPC, the ID of the Security Group, the ID of the Subnet, the name of the key pair, and the AMI ID of the OS we are going to use (Ubuntu 16.04). The latter depends on the region.
Let’s see what we have as VPCs:
aws ec2 describe-vpcs
You should get a detailed list of your active VPCs:
{"Vpcs": [{"DhcpOptionsId": "dopt-xxxxxx","CidrBlock": "172.31.0.0/16","InstanceTenancy": "default","State": "available","IsDefault": true,"VpcId": "vpc-xxxxx"},{"DhcpOptionsId": "dopt-xxxxx","CidrBlock": "172.20.0.0/16","InstanceTenancy": "default","State": "available","Tags": [{"Key": "xxxxx","Value": "xxxxx"},{"Key": "xxxxxx","Value": "xxxxx"},{"Key": "xxxxxx","Value": "xxxxxx"}],"IsDefault": false,"VpcId": "vpc-xxxxxx"}]}
aws ec2 describe-security-groups
You should get a list:
{"IpPermissionsEgress": [{"IpProtocol": "-1","IpRanges": [{"CidrIp": "0.0.0.0/0"}],"UserIdGroupPairs": [
\],
"PrefixListIds": \[
\]
}
\],
"Tags": \[
{
"Key": "Name",
"Value": "xxxxxx"
}
\],
"OwnerId": "xxxxx",
"GroupName": "xxxxxxxx",
"VpcId": "vpc-xxxxxxx",
"Description": "xxxxxx",
"IpPermissions": \[
{
"IpProtocol": "-1",
"IpRanges": \[
{
"CidrIp": "0.0.0.0/0"
}
\],
"UserIdGroupPairs": \[
\],
"PrefixListIds": \[
\]
},
{
"IpRanges": \[
{
"CidrIp": "0.0.0.0/0"
}
\],
"ToPort": 22,
"UserIdGroupPairs": \[
\],
"PrefixListIds": \[
\],
"IpProtocol": "tcp",
"FromPort": 22
},
{
"IpRanges": \[
{
"CidrIp": "0.0.0.0/0"
}
\],
"ToPort": 2376,
"UserIdGroupPairs": \[
\],
"PrefixListIds": \[
\],
"IpProtocol": "tcp",
"FromPort": 2376
}
\],
"GroupId": "sg-xxxxxx"
}
...
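Here again, if you only want the Security Group IDs and names, you can filter the output (an optional shortcut):
aws ec2 describe-security-groups --query 'SecurityGroups[*].[GroupId,GroupName]' --output text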
aws ec2 describe-availability-zones
We’re not going to use this output directly, but it helps us find the right AMI, so let’s get this information first:
{"AvailabilityZones": [{"Messages": [],"RegionName": "eu-west-1","ZoneName": "eu-west-1a","State": "available"},{"Messages": [],"RegionName": "eu-west-1","ZoneName": "eu-west-1b","State": "available"},{"Messages": [],"RegionName": "eu-west-1","ZoneName": "eu-west-1c","State": "available"}]}
Since I am going to use Ubuntu, I looked up the AMI ID on the Ubuntu AMI locator: https://cloud-images.ubuntu.com/locator/ec2/
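If you prefer staying in the terminal, a similar lookup can be done with the CLI; this is one possible way to do it (099720109477 is Canonical's account ID, and the name pattern targets Ubuntu 16.04 / Xenial images):
aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-*" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text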
aws ec2 describe-subnets
And you get a similar list to this:
{"AvailableIpAddressCount": 4091,"MapPublicIpOnLaunch": true,"AvailabilityZone": "eu-west-1b","VpcId": "vpc-xxxxxx","State": "available","DefaultForAz": true,"CidrBlock": "172.31.0.0/20","Tags": [{"Value": "xxxxxxxxxx","Key": "Name"}],"SubnetId": "subnet-xxxxxxxxxx"}
aws ec2 describe-key-pairs
And it will show the list of existing key pairs:
{"KeyPairs": [{"KeyFingerprint": "xxxxxxxxxxx","KeyName": "xxxxxxxx"},{"KeyFingerprint": "xxxxxxxxx","KeyName": "xxxxxxxx"}]}
aws ec2 run-instances --image-id ami-785db401 --count 1 --instance-type t2.micro --key-name .xxxxx --security-group-ids sg-xxxxx --subnet-id subnet-xxxxx --associate-public-ip-address --query 'Instances[0].InstanceId' --output text
This should show us the ID of the instance:
i-0ed688fc95b1feeb2
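Optionally, you can wait until the instance is actually running before querying its DNS name or IP:
aws ec2 wait instance-running --instance-ids i-0ed688fc95b1feeb2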
In order to connect using SSH, we need the instance’s public DNS name:
aws ec2 describe-instances --instance-ids i-0ed688fc95b1feeb2 --query 'Reservations[0].Instances[0].PublicDnsName' --output text
And the output was:
ec2-34-253-201-12.eu-west-1.compute.amazonaws.com
To get the public IP address instead of the DNS name, type:
aws ec2 describe-instances --instance-ids i-0ed688fc95b1feeb2 --query 'Reservations[0].Instances[0].PublicIpAddress' --output text
Output:
34.253.201.12
In my case, I am going to connect the Salt Minions to the Salt Master using the private IP; you can do the same if you create both machines in the same VPC.
If you want to use the private IP:
aws ec2 describe-instances --instance-ids i-0ed688fc95b1feeb2 --query 'Reservations[0].Instances[0].PrivateIpAddress' --output text
Output:
172.31.4.106
Using the public DNS name and your key pair, connect to the newly created machine.
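A minimal example, assuming a key file called learning-key.pem (Ubuntu AMIs use the ubuntu user by default):
ssh -i learning-key.pem ubuntu@ec2-34-253-201-12.eu-west-1.compute.amazonaws.com
Once connected, install the Salt Master: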
sudo apt install salt-master -y
To use Boto3 (Python), you can create a Python virtual environment if you don’t want to install it system-wide. This is what I do in all cases.
virtualenv -p python3 learning
cd learning/
. bin/activate
pip install boto3
mkdir app && cd app
Now let’s create the Python script:
import boto3
import sys

AWS_ACCESS_ID = "xxxxxxxxxx"
AWS_SECRET_KEY = "x/xxxxxx"
ImageId = "ami-785db401"
KeyName = ".xx"
InstanceType = "t2.micro"
MinCount = 1
MaxCount = 1
SubnetId = "subnet-xxxxx"
SecurityGroupIds = ["sg-xxxxxx"]
# Read the user data script once so every instance gets the same content
UserData = open('init.sh').read()

conn = boto3.client(
    'ec2',
    region_name='eu-west-1',
    aws_access_key_id=AWS_ACCESS_ID,
    aws_secret_access_key=AWS_SECRET_KEY)

# Launch as many instances as requested on the command line
for i in range(int(sys.argv[1])):
    reservation = conn.run_instances(
        ImageId=ImageId,
        KeyName=KeyName,
        InstanceType=InstanceType,
        MinCount=MinCount,
        MaxCount=MaxCount,
        SecurityGroupIds=SecurityGroupIds,
        SubnetId=SubnetId,
        UserData=UserData)
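If you also want to keep track of what was launched, run_instances returns a dictionary containing an Instances list; this small optional addition goes at the end of the loop body:
    # Optional: print the ID of each instance created in this iteration
    for instance in reservation['Instances']:
        print(instance['InstanceId'])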
This is the user data file:
#!/bin/bash
# https://alestic.com/2010/12/ec2-user-data-output/
exec > >(tee /var/log/user-data.log | logger -t user-data -s 2>/dev/console) 2>&1
echo BEGIN

sudo apt-get update -y
sudo apt-get upgrade -y

# Install SaltStack
sudo apt-get install salt-minion -y

# Set the Salt Master location and start the minion
sudo sed -i 's/#master: salt/master: 34.253.201.12/g' /etc/salt/minion
sudo salt-minion -d

# Install Docker
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable" -y
sudo apt-get update -y || :
sudo apt-get install docker-ce -y

sudo apt install python-pip -y && sudo pip install docker-py
echo END
This script will provision each machine at startup by installing the Salt Minion and Docker.
Note that in the script above, we already point the Minion to the Salt Master’s IP address (the example uses the public IP; since the Minions are in the same VPC as the Master, the private IP works as well):
sudo sed -i 's/#master: salt/master: 34.253.201.12/g' /etc/salt/minion
To create 5 machines, just type:
python <script_name> 5
To run 1000 machines, just type:
python <script_name> NO WAY :)
Now we have 5 machines with the Salt Minion and Docker installed, plus a Salt Master.
On the Master type:
salt-key -L
and you’ll get the Minions waiting to be accepted:
Accepted Keys:
Denied Keys:
Unaccepted Keys:
ip-172-31-5-169.eu-west-1.compute.internal
ip-172-31-5-168.eu-west-1.compute.internal
ip-172-31-5-167.eu-west-1.compute.internal
ip-172-31-5-166.eu-west-1.compute.internal
ip-172-31-5-165.eu-west-1.compute.internal
ip-172-31-5-164.eu-west-1.compute.internal
...etc
Rejected Keys:
Accept all using:
salt-key -A
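Optionally, you can check that every Minion now answers the Master:
salt '*' test.ping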
Then uncomment the following part of /etc/salt/master:
file_roots:
  base:
    - /srv/salt
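If the /srv/salt directory does not exist yet, create it:
sudo mkdir -p /srv/salt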
This directory will act as the file server for Salt states and highstates. Now restart the Master:
pkill salt-master
salt-master -d
In order to run Docker containers on the Minions, you should not forget this step:
salt '*' pip.install 'docker-py>=1.4.0'
This command installs version 1.4.0 or later of the docker-py Python package on every Minion.
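As a quick sanity check, you can also confirm that Docker itself is installed on every Minion (cmd.run simply executes a shell command remotely):
salt '*' cmd.run 'docker --version'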
Now create two files. First, the Top file:
In Salt, the file which contains a mapping between groups of machines on a network and the configuration roles that should be applied to them is called a top file.
And the SLS file:
The core of the Salt State system is the SLS, or SaLt State file. The SLS is a representation of the state in which a system should be in, and is set up to contain this data in a simple format. This is often called configuration management.
(both quotes are taken from the official Salt documentation)
Here is the structure of the file tree:
tree /srv/salt/
/srv/salt/
├── top.sls
└── webserver.sls
We are going to run multiple Nginx containers. Nginx is just an example here, you can run whatever image you want.
Let’s modify the content of the webserver.sls and tell the Master to run Nginx:
my_service:
  dockerng.running:
    - name: nginx
    - image: nginx
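If you also want to expose the container’s port 80 on the host, dockerng.running accepts a port_bindings argument; a sketch, not required for the rest of this tutorial:
my_service:
  dockerng.running:
    - name: nginx
    - image: nginx
    - port_bindings:
      - 80:80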
Now let’s modify the content of the Top file and tell the Master to apply the state to the servers running Ubuntu. This is a targeting technique; you can choose any other targeting pattern.
(read this for more information: https://docs.saltstack.com/en/latest/topics/targeting/)
#top.sls file
base:
  'os:Ubuntu':
    - match: grain
    - webserver
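Before applying anything, you can check which Minions match this grain target (an optional dry run of the targeting):
salt -G 'os:Ubuntu' test.ping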
In order to start the containers, type:
salt '*' state.apply
and you will get the execution logs:
[...]
ip-172-31-5-169.eu-west-1.compute.internal:
----------
          ID: my_service
    Function: dockerng.running
        Name: nginx
      Result: True
     Comment: Container 'nginx' is already configured as specified
     Started: 01:03:40.062033
    Duration: 28.66 ms
     Changes:

Summary for ip-172-31-5-169.eu-west-1.compute.internal
------------
Succeeded: 5
Failed:    0
------------
Total states run: 5
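To double-check that the containers are really running on the Minions, you can execute a shell command on all of them (optional):
salt '*' cmd.run 'docker ps'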
This combination of Salt, Docker, Boto3, and AWS can be used in many automation scenarios, even for self-healing and auto-scaling infrastructures.
You can find similar tutorials in my Painless Docker Course.
That’s all, folks!
If you resonated with this article, you can find more interesting content in the Painless Docker Course.
If you liked this course, subscribe to one or more of our newsletters.
You can find me on Twitter, Clarity or my website and you can also check my books: SaltStack For DevOps.
Don’t forget to join our job board, Jobs For DevOps!
If you liked this post, please recommend it and share it with your followers.