Notes on moving from etcd2 to etcd3

by Michael Hausenblas, July 2nd, 2017

TL;DR: Moving from etcd v2 to v3 is, in general, well documented; there are, however, a few gotchas you might wanna be aware of.

I’m currently working on ReShifter, a tool for backing up and restoring Kubernetes clusters, and in the context of this work I came across a few etcd-related things that cost me some cycles to sort out, so I thought I’d share them here to spare you the pain ;)

In general, the etcd v2 to etcd v3 migration story is well documented; see this blog post as well as the official docs. Here are a couple of things to be aware of, both from a CLI perspective (i.e. when using etcdctl) and from an API perspective (i.e. moving from the Go client lib v2 to v3):

  • The v2 data model is a tree, that is, a key identifies either a directory (potentially serving as the root of a sub-tree) or a leaf in the tree, in which case the payload can actually be a value. A key cannot be both a leaf node and a directory at the same time. In v3, the data model has been flattened, that is, there is no hierarchy information available amongst entries anymore. So, while you can pretend that a mapping like the following holds, in v3 you are really dealing with (flat) key ranges:

/kubernetes.io/namespaces/kube-system -->

/kubernetes.io
└── namespaces
    └── kube-system

  • One consequence of the data model change is that code that queries and manipulates etcd2 and etcd3 looks different. In the former case you can, for example, utilize the hierarchy information to recursively traverse the tree; in the case of etcd3 you effectively determine the range (start and end key, pretty similar to what you’d do in HBase) and then iterate over the result set; see for example discovery.Visit2() and discovery.Visit3() in the ReShifter code base, and the quick etcdctl sketch right after this list.
  • There is a difference between the wire protocol used (HTTP vs. gRPC) and the API version/data model in use. For example, you might have an etcd3 server running but be using it in etcd2 mode. Be aware of how you’ve configured etcd and in which mode you’re communicating with it.
  • One thing that really caused me some pain: forgetting to set the environment variable ETCDCTL_API=3. This inconspicuous variable switches etcdctl from talking v2 to talking v3. Run etcdctl before and after setting the env variable and compare the commands you’ve got available, for example get/set in v2 vs. get/put in v3 (see also the screen shot at the top of this post, showing that ls is only available in the v2 API).
  • In an etcd3 server, the v2 and v3 data stores exist in parallel and are independent; see also the terminal session below.
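
To make the traversal difference concrete, here is a minimal etcdctl sketch of the same idea (the Go client equivalents live in discovery.Visit2() and discovery.Visit3() in ReShifter). The key prefix is just the example from above, so adjust it to whatever your cluster actually stores:

# v2: the keyspace is a tree, so you can list it recursively
$ ETCDCTL_API=2 etcdctl --endpoints=http://127.0.0.1:2379 \
    ls --recursive /kubernetes.io

# v3: the keyspace is flat, so you iterate over a key range instead;
# --prefix derives the range end from the given key for you
$ ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 \
    get --prefix /kubernetes.io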

Let’s have a look now at a simple interaction with etcd3 and how to use the v2 and v3 API. First we launch etcd3, containerized:

$ docker run --rm -p 2379:2379 --name test-etcd \
    --dns 8.8.8.8 quay.io/coreos/etcd:v3.1.0 /usr/local/bin/etcd \
    --advertise-client-urls http://0.0.0.0:2379 \
    --listen-client-urls http://0.0.0.0:2379 \
    --listen-peer-urls http://0.0.0.0:2380
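
To check that the container actually came up, and which server version you’re talking to, the version endpoint is a quick sanity check (the exact output depends on the image you pulled):

$ curl http://127.0.0.1:2379/version
{"etcdserver":"3.1.0","etcdcluster":"3.1.0"}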

Now, let’s put a value into etcd, using the v2 API:

$ curl http://127.0.0.1:2379/v2/keys/kubernetes.io/namespaces/kube-system \
    -XPUT -d value="value for v2"
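
To double-check that the write landed in the v2 store, you can read it straight back over the same v2 keys endpoint (response abbreviated here; the index fields will differ on your machine):

$ curl http://127.0.0.1:2379/v2/keys/kubernetes.io/namespaces/kube-system
{"action":"get","node":{"key":"/kubernetes.io/namespaces/kube-system","value":"value for v2", ...}}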

Next, we switch to the v3 API:

$ export ETCDCTL_API=3
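
A quick way to verify which API etcdctl now speaks is to ask it for its version; with the env variable set, a 3.x binary should report an API version of 3 (the output below is what I’d expect from a 3.1 build, yours may differ):

$ etcdctl version
etcdctl version: 3.1.0
API version: 3.1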

And now, we first check if we can read the value we’ve previously set using the v2 API:

$ etcdctl --endpoints=http://127.0.0.1:2379 get \
    /kubernetes.io/namespaces/kube-system

This returns nothing, so there is no way to write to the etcd2 data store and read it back via the v3 API. Now, let’s put something into etcd using the v3 API and query it right after to confirm the write:

$ etcdctl --endpoints=http://127.0.0.1:2379 put \
    /kubernetes.io/namespaces/kube-system "value for v3"

$ etcdctl --endpoints=http://127.0.0.1:2379 get \
    /kubernetes.io/namespaces/kube-system
/kubernetes.io/namespaces/kube-system
value for v3
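
And to see that the two data stores really are independent, read the original key back via the v2 API once more; the value we set at the very beginning is still there, untouched by the v3 put (here via v2 etcdctl, a curl against /v2/keys works just as well):

$ ETCDCTL_API=2 etcdctl --endpoints=http://127.0.0.1:2379 \
    get /kubernetes.io/namespaces/kube-system
value for v2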

With that I’ll wrap up this post and hope you’re successful in migrating from etcd v2 to etcd v3! If you’ve got additional insights or comments on the above, please do share them here, hit me up on Twitter (DMs are open), or come and join us on the Kubernetes Slack where I’m usually hanging out on #sig-cluster-lifecycle and #sig-apps.

Last but not least, I’d like to give Serg of CoreOS huge kudos: he patiently helped me through issues I experienced around using the v3 API. Thank you and I owe you one!