At Datawire, all of our cloud services are developed and deployed on Kubernetes. When we started developing services, we noticed that getting code changes into Kubernetes was a fairly tedious process. Typically, we had to:

1. Build a new Docker image with the code change.
2. Push the image to a Docker registry.
3. Update the Kubernetes manifest with the new image, and kubectl apply the change.
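For a one-line change, even an automated version of that loop looked something like this (the image name and manifest path here are illustrative, not our exact commands):

$ docker build -t registry.example.com/hello:v2 .
$ docker push registry.example.com/hello:v2
$ kubectl apply -f k8s/deployment.yaml   # after bumping the image tag in the manifest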
We first automated these steps, but the latency this introduced for a two-line code change was still annoying (especially for those of us who were used to the live reload of interpreted languages).
So we took a step back and asked ourselves what we would like the development process to look like. We came up with two answers. First, we wanted consistency between our development and production environments. Second, we wanted zero latency when testing code changes during development.
We had experienced the pains of trying to figure out why a service that works in development doesn’t work in production or in our continuous integration system. Inevitably, these pains come down to environmental differences. We were keen to create environmental consistency to minimize the chances of this happening.
Luckily, containers provide a great solution for this problem. We create a standard Docker image that is used for both development and production. This image contains all the dependencies necessary to run the service. The Docker client also lets us mount a local directory into the container, so we can edit code with our favorite editor on the host while it runs inside the container.
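Concretely, the pattern is to build the image once and then bind-mount the working copy over the service code on every run; something like this, with an illustrative image name and container path:

$ docker build -t myservice-dev .
$ docker run --rm -it -v $(pwd):/service -p 8080:8080 myservice-dev

Because the source is mounted rather than baked in, edits on the host are visible inside the container immediately, with no rebuild.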
This approach gives us a fast feedback cycle during development, while creating consistency between different environments. Any developer working on the service is able to use the same image, which is also the same image that is run in production.
We loved the container approach for fast development. However, some of our services depend on other running services, and we wanted a way to develop multi-container applications as well.
We first started by experimenting with minikube, but decided it wasn’t a great fit because the container deployment process still added latency. Moreover, minikube required a substantial amount of RAM for some of our services (e.g., ones that required a JVM).
We also looked at Docker Compose, which was easy to try since we were already using containers. We decided not to use Compose because it fundamentally introduced a different runtime environment for our application (Docker) than production (Kubernetes/AWS). This meant we had to maintain two different environments for development and production. This problem became even more acute when we started to factor in applications we run in the cloud (e.g., AWS RDS).
We then experimented with a networking-oriented approach. We were already familiar with port forwarding as a way to access applications in a cluster, so we asked ourselves if there was a way to expand on this concept. We just needed to figure out a way for the local service to access the Kubernetes cluster, and vice versa.
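kubectl port-forward is the familiar half of that: it makes a single in-cluster port reachable from your laptop, but only in that one direction (substitute a real pod name for the placeholder):

$ kubectl port-forward <hello-pod-name> 8080:8080
$ curl http://localhost:8080

What it doesn't give you is the reverse path: cluster traffic reaching your local process, or your local process resolving in-cluster DNS names.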
We implemented this concept in Telepresence, which we open sourced early this year. Telepresence substitutes a two-way network proxy for your normal pod running in the Kubernetes cluster. This pod proxies data from your Kubernetes environment (e.g., environment variables, secrets, ConfigMap, TCP connections) to the local process. The local process has its networking transparently overridden so that DNS calls and TCP connections are routed through the proxy to the remote cluster.
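You can see the outbound side of this with Telepresence's shell mode. Assuming a service named hello is running in the cluster (we create exactly that below), in-cluster DNS names resolve directly from the proxied shell:

$ telepresence --run-shell
$ curl http://hello:8080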
Here’s an example. Clone the following repository:
$ git clone https://github.com/datawire/hello-world-python
This repository contains a simple Python application using the Flask web framework:
#!/usr/bin/python
import time

from flask import Flask

app = Flask(__name__)

START = time.time()

def elapsed():
    running = time.time() - START
    minutes, seconds = divmod(running, 60)
    hours, minutes = divmod(minutes, 60)
    return "%d:%02d:%02d" % (hours, minutes, seconds)

@app.route('/')
def root():
    return "Hello World (Python)! (up %s)\n" % elapsed()

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=8080)
It also contains a Dockerfile that specifies how to build the runtime container:
FROM python:3-alpine
WORKDIR /service
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . ./
EXPOSE 8080
ENTRYPOINT ["python3", "app.py"]
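Since the application only imports Flask, the requirements.txt that the Dockerfile installs can be as minimal as a single line (the repository may pin a specific version):

flask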
Let’s build the development environment locally:
$ cd hello-world-python
$ docker build -t hello-world-dev .
Get the service running in Kubernetes (we’re using the Datawire image so you don’t have to push to a Docker registry):
$ kubectl run hello --image=datawire/hello-world-python --port=8080 --expose
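Under the hood, kubectl run with --expose creates both a Deployment and a matching Service named hello; you can confirm with:

$ kubectl get deployment,service hello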
Now, let’s test this service out. In another terminal, let’s start a pod on the Kubernetes cluster to talk to the service.
$ kubectl run -i --tty alpine --image=alpine -- sh
$ wget -q -O - http://hello:8080
Hello World (Python)! (up 0:00:45)
Normally, when you’re coding this service, you have to go through a process of building your container, pushing it to the registry, and redeploying. Let’s see how this works with Telepresence. Make sure you’re in the hello-world-python directory, and type:
$ telepresence --swap-deployment hello --docker-run --rm -it -v $(pwd):/service hello-world-dev:latest
This command does three things:

1. It swaps out the hello deployment in the cluster for the two-way Telepresence proxy (--swap-deployment hello).
2. It runs the hello-world-dev image locally via docker run, mounting the current directory into the container at /service so the container runs your local code.
3. It bridges the local container's networking to the proxy, so requests to hello inside the cluster reach your local process, and your local process can transparently reach the cluster.
We can test this out by making a change to app.py. Open app.py in your preferred editor, and change the "Hello World" string to anything you'd like. Now, rerun the wget command from the remote Kubernetes pod:
$ wget -q -O - http://hello:8080
Hello New World (Python)! (up 0:03:12)
And there you have it: you edit your code locally, and changes are reflected immediately to clients inside the Kubernetes cluster without having to redeploy, create Docker images, and so on.
If you use a server that supports automatic reloading, Telepresence makes that feature useful again: the Flask app in this example runs with debug=True, so you can edit your server code, save, and immediately test the new behavior.
Telepresence has simplified our coding cycle. We’ve made it open source and created OS-native packages for Mac OS X and Linux. We’d love for you to try it out and see if it makes your life easier. For more information, visit https://www.telepresence.io.