DevOps isn’t just about doing CI/CD, but a CI/CD pipeline plays an important role inside DevOps. I’ve been investing my time in [OpenFaaS](https://www.openfaas.com/) recently, and as I started creating multiple functions, I wanted an easy-to-use and accessible development and delivery flow, in other words a CI/CD pipeline. One day, as I was talking with [Alex](https://twitter.com/alexellisuk) (the creator of [OpenFaaS](https://www.openfaas.com/)), he asked me to put together a guide about CI/CD for OpenFaaS (using GitHub and Travis CI). The timing was perfect, because I was already thinking about how to bring the CI/CD pipelines I apply to other projects to this serverless one. What I came up with is shown in the following diagram:

A high-level overview looks like this:

* Push to GitHub
* Pipeline starts
  * Builds the function images
  * Creates a temporary Swarm environment
  * Runs tests against the functions
  * Releases the images to a registry if the tests pass
  * Deploys the functions if the tests pass
* Pipeline ends

For those who want to just dive in, the repository is published [here](https://github.com/kenfdev/faas-echo). You can look inside the `.travis.yml` file to see the stages and commands of the pipeline. I’ll explain the context step by step from here.

### Using Docker and Swarm in Travis CI Builds

Alright, I’m not going into the details of “how to use Travis CI”. There are plenty of good articles explaining this, and the official docs are well organized. The key points in using Travis for my use case are:

* We’re going to use Docker and Swarm
* We’re going to test using Node.js

As written in the [docs](https://docs.travis-ci.com/user/docker/), we’ll need to include:

```yaml
sudo: required
services:
  - docker
```

in our `.travis.yml` in order to use Docker in builds. In addition, we want to be able to use `docker-compose` file format `version 3.2` in order to deploy our OpenFaaS stack, but this isn’t possible with Travis’ default Docker version.
Again, as the [docs](https://docs.travis-ci.com/user/docker/#Installing-a-newer-Docker-version) say, you’ll need to write:

```yaml
before_install:
  - sudo apt-get update
  - sudo apt-get -y -o Dpkg::Options::="--force-confnew" install docker-ce
```

in order to install a newer version of Docker. If you miss this part, you’ll see something like this in the pipeline, and it’ll fail:

```
unsupported Compose file version: 3.2
```

### Creating a Temporary Testing Environment with Swarm

*OpenFaaS temporary testing environment*

As shown in the diagram above, we need an OpenFaaS environment in order to fully test the function (an e2e test). Creating a Swarm environment is easy, but preparing an OpenFaaS environment needs a little tuning to work properly. In order to prepare our testing environment we’ll need to:

* Fetch the OpenFaaS CLI tool ([faas-cli](https://github.com/openfaas/faas-cli))
* Initialize Swarm
* Deploy the gateway and function

**Fetch the OpenFaaS CLI tool:** The OpenFaaS CLI tool ([faas-cli](https://github.com/openfaas/faas-cli)) makes it easy to use OpenFaaS, and we’ll definitely need it for the pipeline. Installing it is as easy as calling this command:

```shell
curl -sSL https://cli.openfaas.com | sudo sh
```

**Initialize Swarm:** This is straightforward. Just call the command:

```shell
docker swarm init
```

**Deploy the gateway and function:** Okay, this is the tricky part. Normally, deploying OpenFaaS is **extremely easy**. I really mean it. If you haven’t already, you should definitely have a look at Alex’s “[FaaS and Furious — 0 to Serverless in 60 seconds, anywhere](https://skillsmatter.com/skillscasts/10813-faas-and-furious-0-to-serverless-in-60-seconds-anywhere)”. But deploying it in a CI/CD pipeline adds a concern you normally don’t have: the environment needs to be **ready** before the tests run. That means the Swarm service needs to be ready.
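To make “ready” concrete: `docker service ps --format '{{json .CurrentState}}'` prints the task state as a JSON string (quotes included), and a service counts as ready when that string starts with `"Running`. Here is a minimal sketch of that check against a sample value; the helper name and the sample state string are my own illustrations, not output captured from this repo:

```shell
# Returns success when a docker service state string (as printed by
# `docker service ps --format '{{json .CurrentState}}'`) starts with "Running.
is_running_state() {
  case $1 in
    \"Running*) return 0 ;;
    *)          return 1 ;;
  esac
}

# Sample states for illustration:
is_running_state '"Running 5 seconds ago"' && echo "ready"
is_running_state '"Preparing 2 seconds ago"' || echo "not ready yet"
```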
To accomplish this, I wrote a simple shell script function:

```shell
# This function checks if the service is in the Running state
check_service_is_running() {
  local SERVICE_NAME=$1
  local STATE=$(docker service ps --format '{{json .CurrentState}}' $SERVICE_NAME)
  if [[ $STATE = \"Running* ]]; then
    echo 1
  else
    echo 0
  fi
}
```

This uses `docker service ps` to check the state of the service. It returns `1` if the state is `Running`, and `0` otherwise. In addition, we’ll need to retry until the service is ready, so I added a retry function as well (it calls `check_service_is_running` internally):

```shell
# This function waits for the service to become available.
# Retries 10 times with a 3 second interval (hard-coded for now)
wait_for_service_to_start() {
  local n=1
  local max=10
  local delay=3
  local SERVICE_NAME=$1
  local SERVICE_IS_RUNNING=0
  while [ "$SERVICE_IS_RUNNING" -eq 0 ]; do
    if [[ $n -gt $max ]]; then
      echo "ERROR: Retried $(($n-1)) times but $SERVICE_NAME didn't start. Exiting" >&2
      exit 1
    fi
    SERVICE_IS_RUNNING=$(check_service_is_running $SERVICE_NAME)
    echo "Waiting for $SERVICE_NAME to start"
    n=$((n + 1))
    sleep $delay
  done
  echo "$SERVICE_NAME is Running"
}
```

Now that we can ensure the gateway and functions are running as expected, we can set up the environment with the commands below:

```shell
# deploy the stack to swarm
./deploy_stack.sh

# build the functions (assuming 4 cores)
faas-cli build --parallel 4 -f stack.yml

# we can't deploy unless the gateway is ready, so wait
wait_for_service_to_start func_gateway

# and then deploy
faas-cli deploy -f stack.yml

# wait for the function to become ready for testing
wait_for_service_to_start echo
```

`deploy_stack.sh` simply calls:

```shell
docker stack deploy func --compose-file docker-compose.yml
```

to deploy the stack to Swarm. I want to reuse this script on my local machine, hence it lives in an external file.
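The wait loop above is specific to Swarm services, but the same retry pattern can be written as a generic helper that re-runs any command until it succeeds. This is a sketch under my own naming (`retry` is not part of faas-cli or the repo’s scripts):

```shell
# Generic retry helper (sketch): run a command until it succeeds,
# with a maximum number of attempts and a fixed delay between tries.
retry() {
  local max=$1 delay=$2 n=1
  shift 2
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "ERROR: '$*' did not succeed after $n attempts" >&2
      return 1
    fi
    n=$((n + 1))
    sleep "$delay"
  done
}

# e.g. wait on the gateway's HTTP endpoint instead of polling Swarm state:
#   retry 10 3 curl -fs http://localhost:8080/system/functions
```

One benefit of this shape is that the readiness probe (an HTTP check, a `docker service ps` check, anything) stays decoupled from the retry policy.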
`faas-cli build` creates the function images and `faas-cli deploy` deploys the functions via the gateway. With this tweak, we can ensure a fully working OpenFaaS environment (you can check the complete script in the [ci-setup.sh](https://github.com/kenfdev/faas-echo/blob/master/ci-setup.sh "ci-setup.sh") file).

### Testing the Function

Now that we have a testing environment, testing against it is pretty straightforward. You can choose any framework you like, but for this article I chose Node.js and the [chakram](https://github.com/dareid/chakram) library. The following is a sample test for the `echo` function, simply checking that the response matches the text we sent:

```javascript
const chakram = require('chakram');
const expect = chakram.expect;

const ENDPOINT = "http://localhost:8080/function/echo";

describe("FaaS echo function", () => {
  it("should respond with the data you passed", () => {
    // Arrange
    const expected = "echo test";
    // Act
    return chakram.post(ENDPOINT, expected, { json: false })
      .then(response => {
        // Assert
        expect(response).to.have.status(200);
        expect(response.body).to.contain(expected);
      });
  });
});
```

Don’t forget to add the following lines to `.travis.yml` in order to use Node.js and cache libraries:

```yaml
language: node_js
node_js:
  - "8"
cache: yarn
```

### Cleaning up Swarm

This is probably not necessary, but I like to be symmetric, so I’ve decided to clean up the Swarm environment:

```shell
docker swarm leave -f
```

### Release and Deploy

After the tests pass as expected, we’d like to release the function images and deploy them to other environments (dev, stage, prod, whatever).
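The gating logic (“only release from master”) is simple enough to sketch as a standalone shell function before wiring it into CI. The function name is hypothetical, and the actual `docker login` / `faas-cli push` calls are commented out so the sketch can be dry-run:

```shell
# Sketch of the release gate: push images only when the build is on master.
# Assumes TRAVIS_BRANCH is set by the CI environment.
release_if_master() {
  if [ "${TRAVIS_BRANCH:-}" != "master" ]; then
    echo "Skipping release: not on master"
    return 0
  fi
  echo "Releasing images"
  # docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD"
  # faas-cli push -f echo.yml
}
```

Keeping this in a script (rather than inline YAML) also makes the branch condition testable locally by exporting `TRAVIS_BRANCH` yourself.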
This is straightforward, and it’s covered in the Travis CI [docs](https://docs.travis-ci.com/user/docker/#Branch-Based-Registry-Pushes), too:

```yaml
after_success:
  - if [ "$TRAVIS_BRANCH" == "master" ]; then
      docker login -u="$DOCKER_USERNAME" -p="$DOCKER_PASSWORD";
      faas-cli push -f echo.yml
    fi
```

This means the images are pushed only if the **tests succeed** and the build is on the **master branch**. `faas-cli push` pushes the function images to the registry. Another interesting part is the `$DOCKER_USERNAME` and `$DOCKER_PASSWORD` environment variables. You can set these credentials via the Travis CI management console.

*Travis CI dashboard*

From the Settings, you can set environment variables:

*Setting environment variables*

Hence you don’t have to reveal your credentials inside `.travis.yml`. The commands above don’t deploy to any environment yet, but that’s just a matter of triggering a deploy webhook (or something similar). So you’ll be executing some kind of trigger command here, regardless of which backend you’re using (Swarm, Kubernetes, Cattle, etc.).

### Run the Pipeline!

Everything is prepared! Now let’s make some edits and trigger the pipeline. You should see something similar to this:

You should see the tests passing and the images being released as well. **An automated full pipeline!**

### Wrapping Up

In this post we’ve created a full CI/CD pipeline for OpenFaaS. You can replace the services with the ones you prefer (GitLab, Gogs, Jenkins, Circle CI, GoCD, Drone, etc., you name it) and use them seamlessly in the cloud or on-premise, anywhere you’d like. Needless to say, being able to take complete control of the pipeline is a huge advantage! Still, there are some things we could improve, including:

* Updating Docker at the beginning of every pipeline run just to get `docker-compose` file format `3.2` support takes a bit of time. I wish there were a way to simply switch Docker versions in Travis.
* The CI downloads the `faas-cli` every time.
Perhaps we could create a `faas-cli` container and reuse it in the pipeline, taking advantage of image caching.

If you have any thoughts, please feel free to share them with [me](https://twitter.com/kenfdev)! I’d love feedback, too!

Further reading:

* You can find the complete repo for this article’s project here: [https://github.com/kenfdev/faas-echo](https://github.com/kenfdev/faas-echo)
* If you haven’t already, you should definitely take a look at Alex’s “OpenFaaS: From Zero to Serverless in 60 Seconds Anywhere with Alex Ellis”:

_If you’re interested in OpenFaaS, please show support by giving a_ **_Star_** _to the_ [_GitHub repo_](https://github.com/openfaas/faas)_!_