This article shows you how you can use OpenShift to set up and test against AWS APIs using localstack.
Example code to run through this using ShutIt is available here.
In this walkthrough you’re going to set up an OpenShift system using minishift, and then run localstack in a pod on it.
OpenShift is a Red Hat-sponsored wrapper around Kubernetes that provides extra functionality more suited to enterprise production deployments of Kubernetes. Many features from OpenShift have swum upstream to be integrated into Kubernetes (e.g. role-based access control).
The open source version of OpenShift is called Origin.
Localstack is a project that aims to give you as complete as possible a set of AWS APIs to develop against without incurring any cost. This is great for testing or trying code out before running it ‘for real’ against AWS and potentially wasting time and money.
Localstack spins up the following core Cloud APIs on your local machine, each on its own port:

API Gateway at http://localhost:4567
Kinesis at http://localhost:4568
DynamoDB at http://localhost:4569
DynamoDB Streams at http://localhost:4570
Elasticsearch at http://localhost:4571
S3 at http://localhost:4572
Firehose at http://localhost:4573
Lambda at http://localhost:4574
SNS at http://localhost:4575
SQS at http://localhost:4576
Redshift at http://localhost:4577
ES (Elasticsearch Service) at http://localhost:4578
SES at http://localhost:4579
Route53 at http://localhost:4580
CloudFormation at http://localhost:4581
CloudWatch at http://localhost:4582
At present it supports running in a Docker container, or natively on a machine.
It is built on moto, a mocking framework in turn built on boto, the Python AWS SDK.
Running localstack within an OpenShift cluster gives you the capability to run very many of these AWS API environments. You can then create distinct endpoints for each set of services, and isolate them from one another. You can also worry less about resource usage, as the cluster scheduler will take care of that.
However, it doesn’t run out of the box, so this will guide you through what needs to be done to get it to work.
If you don’t have an OpenShift cluster to hand, then you can run up minishift, which gives you a standalone VM with a working OpenShift on it.
Installing minishift is documented here. You’ll need to install it first and run ‘minishift start’ successfully.
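Once the minishift binary is installed, starting the VM can be as simple as this (a sketch; the --memory flag is an optional assumption, since localstack's bundled services are fairly memory-hungry, and its exact value format varies between minishift versions):

# Start the Minishift VM (the memory value is an illustrative assumption)
$ minishift start --memory 4096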
Once you have started minishift, you will need to set up your shell so that you are able to communicate with the OpenShift server.
$ eval $(minishift oc-env)
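If you want to check that the oc client is now pointed at the Minishift VM, a quick sanity check might look like this:

# Confirm the client is logged in to the Minishift OpenShift server
$ oc whoami
# Show the current project and its resources
$ oc status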
Security Context Constraints (scc) are an OpenShift concept that allows more granular control over Docker containers’ powers.
They control SELinux contexts, can drop capabilities from running containers, can determine which user a pod can run as, and so on.
To get this running you’re going to change the default ‘restricted’ scc, but you could create a separate scc and apply that to a particular project. To change the ‘restricted’ scc you will need to become a cluster administrator:
$ oc login -u system:admin
Then you need to edit the restricted scc with:
$ oc edit scc restricted
You will see the definition of the restricted scc in your editor.
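The two stanzas you will be changing look something like this excerpt (an assumption based on a typical OpenShift 3.x 'restricted' scc; the exact fields and capability list vary by version):

# Excerpt from the 'restricted' scc (fields vary by OpenShift version)
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: MustRunAsRange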
At this point you’re going to have to do two things:
The localstack container runs as root by default.
For security reasons, OpenShift does not allow containers to run as root by default. Instead it picks a random UID within a very high range, and runs as that.
To simplify matters, and allow the localstack container to run as root, change the lines:
runAsUser:
  type: MustRunAsRange
to read:
runAsUser:
  type: RunAsAny
This allows containers to run as any user.
When localstack starts up it needs to become another user in order to start elasticsearch, which will not run as the root user.
To get round this, localstack su’s the startup command to the localstack user in the container.
Because the ‘restricted’ scc explicitly disallows actions that change your user or group id, you need to remove these restrictions. Do this by deleting the lines:

- SETUID
- SETGID
Once you have done these two steps, save the file.
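If you want to verify that the changes took, you can dump the scc again and eyeball the relevant fields:

# Check that runAsUser is now RunAsAny and SETUID/SETGID are no longer dropped
$ oc get scc restricted -o yaml | grep -A1 runAsUser
$ oc get scc restricted -o yaml | grep -A4 requiredDropCapabilities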
If you run:
$ minishift console --machine-readable | grep HOST | sed 's/^HOST=\(.*\)/\1/'
you will get the host that the minishift instance is accessible as from your machine. Make a note of this, as you’ll need to substitute it in later.
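Since you will be substituting this host repeatedly, it may be convenient to capture it in a shell variable (a sketch; in this walkthrough the value was 192.168.64.2):

# Capture the Minishift host IP for later substitution
$ HOST=$(minishift console --machine-readable | grep HOST | sed 's/^HOST=\(.*\)/\1/')
$ echo "$HOST"
192.168.64.2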
Deploying localstack is as easy as running:
$ oc new-app localstack/localstack --name="localstack"
This takes the localstack/localstack image and creates an OpenShift application around it for you, setting up internal services (based on the exposed ports in the Dockerfile), running the container in a pod, and various other management tasks.
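You can watch the deployment progress, and check localstack's startup logs, with something like:

# Wait for the deployment to roll out, then inspect the container logs
$ oc rollout status dc/localstack
$ oc logs dc/localstack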
If you want to access the services from outside, you need to create OpenShift routes, which create an external address to access services within the OpenShift network.
For example, to create a route for the sqs service, create a file like this, substituting HOST with the host you noted earlier:
apiVersion: v1
items:
- apiVersion: v1
  kind: Route
  metadata:
    annotations:
      openshift.io/host.generated: "true"
    name: sqs
    selfLink: /oapi/v1/namespaces/test/routes/sqs
  spec:
    host: sqs-test.HOST.nip.io
    port:
      targetPort: 4576-tcp
    to:
      kind: Service
      name: localstack
      weight: 100
    wildcardPolicy: None
  status:
    ingress:
    - conditions:
      - lastTransitionTime: 2017-07-28T17:49:18Z
        status: "True"
        type: Admitted
      host: sqs-test.HOST.nip.io
      routerName: router
      wildcardPolicy: None
kind: List
metadata: {}
resourceVersion: ""
selfLink: ""
then create the route with:
$ oc create -f <filename>
See above for the list of services and their ports.
If you have multiple localstacks running on your OpenShift cluster, you might want to prefix the host name with a unique name for the instance, e.g.
host: localstackenv1-sqs-test.HOST.nip.io
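Writing one route file per service gets tedious. As a sketch of an alternative, a loop over oc expose can generate the same set of routes; this assumes the HOST variable captured earlier, and that the --name/--port/--hostname flags of your oc client behave as in OpenShift 3.x:

# Create one route per localstack service (names and ports as listed above)
$ for svc in apigateway:4567 kinesis:4568 dynamodb:4569 dynamodbstreams:4570 \
             s3:4572 firehose:4573 lambda:4574 sns:4575 sqs:4576 redshift:4577 \
             es:4578 ses:4579 route53:4580 cloudformation:4581 cloudwatch:4582 web:8080
do
  name=${svc%%:*}
  port=${svc##*:}
  oc expose svc/localstack --name="$name" --port="$port" \
     --hostname="$name-test.$HOST.nip.io"
done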
Run an ‘oc get all’ to see what you have created within your OpenShift project:
$ oc get all
NAME            DOCKER REPO                            TAGS      UPDATED
is/localstack   172.30.1.1:5000/myproject/localstack   latest    15 hours ago

NAME            REVISION   DESIRED   CURRENT   TRIGGERED BY
dc/localstack   1          1         1         config,image(localstack:latest)

NAME              DESIRED   CURRENT   READY     AGE
rc/localstack-1   1         1         1         15h

NAME                     HOST/PORT                                   PATH      SERVICES     PORT       TERMINATION   WILDCARD
routes/apigateway        apigateway-test.192.168.64.2.nip.io                   localstack   4567-tcp                 None
routes/cloudformation    cloudformation-test.192.168.64.2.nip.io              localstack   4581-tcp                 None
routes/cloudwatch        cloudwatch-test.192.168.64.2.nip.io                  localstack   4582-tcp                 None
routes/dynamodb          dynamodb-test.192.168.64.2.nip.io                    localstack   4569-tcp                 None
routes/dynamodbstreams   dynamodbstreams-test.192.168.64.2.nip.io             localstack   4570-tcp                 None
routes/es                es-test.192.168.64.2.nip.io                          localstack   4578-tcp                 None
routes/firehose          firehose-test.192.168.64.2.nip.io                    localstack   4573-tcp                 None
routes/kinesis           kinesis-test.192.168.64.2.nip.io                     localstack   4568-tcp                 None
routes/lambda            lambda-test.192.168.64.2.nip.io                      localstack   4574-tcp                 None
routes/redshift          redshift-test.192.168.64.2.nip.io                    localstack   4577-tcp                 None
routes/route53           route53-test.192.168.64.2.nip.io                     localstack   4580-tcp                 None
routes/s3                s3-test.192.168.64.2.nip.io                          localstack   4572-tcp                 None
routes/ses               ses-test.192.168.64.2.nip.io                         localstack   4579-tcp                 None
routes/sns               sns-test.192.168.64.2.nip.io                         localstack   4575-tcp                 None
routes/sqs               sqs-test.192.168.64.2.nip.io                         localstack   4576-tcp                 None
routes/web               web-test.192.168.64.2.nip.io                         localstack   8080-tcp                 None

NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                                                     AGE
svc/localstack   172.30.187.65   <none>        4567/TCP,4568/TCP,4569/TCP,4570/TCP,4571/TCP,4572/TCP,4573/TCP,4574/TCP,4575/TCP,4576/TCP,4577/TCP,4578/TCP,4579/TCP,4580/TCP,4581/TCP,4582/TCP,8080/TCP   15h

NAME                    READY     STATUS    RESTARTS   AGE
po/localstack-1-hnvpw   1/1       Running   0          15h
Each route created is now accessible as an AWS service, ready for you to test your code against. You can now hit the services from your host, like this:
$ aws --endpoint-url=http://kinesis-test.192.168.64.2.nip.io kinesis list-streams
{
    "StreamNames": []
}
For example, to create a kinesis stream:
$ aws --endpoint-url=http://kinesis-test.192.168.64.2.nip.io kinesis create-stream --stream-name teststream --shard-count 2
$ aws --endpoint-url=http://kinesis-test.192.168.64.2.nip.io kinesis list-streams
{
    "StreamNames": [
        "teststream"
    ]
}
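Note that the aws client insists on a region and credentials being configured even though localstack ignores their values. If you haven't configured any, dummy values like these are sufficient (the specific strings are arbitrary assumptions):

# Localstack does not validate credentials, so any values will do
$ aws configure set aws_access_key_id dummy
$ aws configure set aws_secret_access_key dummy
$ aws configure set region us-east-1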
This is a work in progress from the second edition of Docker in Practice.
Get 39% off with the code: 39miell