TL;DR: Moving from etcd v2 to v3 is in general well documented; however, there are a few gotchas you might want to be aware of.

I'm currently working on ReShifter—a tool for backing up and restoring Kubernetes clusters—and in the context of this work I came across a few things related to etcd that cost me some cycles to sort out, so I thought I'd share them here to spare you the pain ;)

In general, the etcd v2 to etcd v3 migration story is well documented; see this blog post as well as the official docs. Here are a couple of things to be aware of, both from a CLI perspective (i.e., when using etcdctl) and from an API perspective (i.e., moving from the v2 Go client lib to v3):

The v2 data model is a tree. That is, a key identifies either a directory (potentially serving as the root of a sub-tree) or a leaf in the tree, in which case the payload can actually be a value. A key can not be a leaf node and a directory at the same time. In v3, the data model has been flattened; there is no hierarchy information available amongst entries anymore. So, while you can pretend that /kubernetes.io/namespaces/kube-system denotes a path like

/kubernetes.io
└── namespaces
    └── kube-system

in v3 you are really dealing with (flat) key ranges.

One consequence of the data model change is that code that queries and manipulates etcd2 and etcd3 looks different. In the former case, you can, for example, utilize the hierarchy information to recursively traverse the tree; in the case of etcd3 you effectively determine the range (start and end key, pretty similar to what you'd do in HBase) and then iterate over the result set; see for example discovery.Visit2() and discovery.Visit3() in the ReShifter code base.

There is a difference between the wire protocol used (HTTP vs. gRPC) and the API version/data model in use. For example, you might have an etcd3 server running, but be using it in etcd2 mode.
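To make the "flat key ranges" point concrete, here is a minimal Go sketch of the v3-style iteration: instead of descending into directories, you scan a sorted keyspace over a half-open range [start, end). The keyspace, key names, and the `visit3` helper are all made up for illustration—this is a toy model, not the actual ReShifter code:

```go
package main

import (
	"fmt"
	"sort"
)

// visit3 mimics iterating a v3-style flat keyspace: there are no
// directories, only a sorted set of keys, so "everything under
// /kubernetes.io/namespaces/" becomes scanning the half-open
// range [start, end).
func visit3(keys []string, start, end string) []string {
	sort.Strings(keys)
	var out []string
	for _, k := range keys {
		if k >= start && k < end {
			out = append(out, k)
		}
	}
	return out
}

func main() {
	keys := []string{
		"/kubernetes.io/namespaces/default",
		"/kubernetes.io/namespaces/kube-system",
		"/kubernetes.io/pods/default/web",
		"/other/key",
	}
	// "0" is the byte right after "/", so this range covers exactly
	// the keys that start with "/kubernetes.io/namespaces/".
	for _, k := range visit3(keys, "/kubernetes.io/namespaces/", "/kubernetes.io/namespaces0") {
		fmt.Println(k)
	}
	// → /kubernetes.io/namespaces/default
	// → /kubernetes.io/namespaces/kube-system
}
```

Note there is no recursion anywhere: the "sub-tree" is purely an artifact of the key naming convention, which is exactly the mental shift the v2-to-v3 move requires.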
Be aware of how you’ve configured etcd and in which mode you’re communicating with it. One thing really caused me some pain: forgetting to set the environment variable ETCDCTL_API=3. This unremarkable switch causes etcdctl to switch from talking v2 to talking v3. Run etcdctl before and after setting the env variable and compare the commands you’ve got available, for example get/set in v2 vs. get/put in v3 (see also the screenshot at the top of this post, showing that ls is only available in the v2 API).

In an etcd3 server, the v2 and v3 data stores exist in parallel and are independent; see also the terminal session below. Let’s have a look now at a simple interaction with etcd3 and how to use the v2 and v3 API. First we launch etcd3, containerized:

$ docker run --rm -p 2379:2379 --name test-etcd \
  --dns 8.8.8.8 quay.io/coreos/etcd:v3.1.0 /usr/local/bin/etcd \
  --advertise-client-urls http://0.0.0.0:2379 \
  --listen-client-urls http://0.0.0.0:2379 \
  --listen-peer-urls http://0.0.0.0:2380

Now, let’s put a value into etcd, using the v2 API:

$ curl -XPUT -d value="value for v2" \
  http://127.0.0.1:2379/v2/keys/kubernetes.io/namespaces/kube-system

Next, we switch to the v3 API:

$ export ETCDCTL_API=3

And now, we first check if we can read the value we’ve previously set using the v2 API:

$ etcdctl --endpoints=http://127.0.0.1:2379 get \
  /kubernetes.io/namespaces/kube-system

Which returns empty, so there is no way to write to the etcd2 datastore and read it out via v3. Now, let’s put something into etcd using the v3 API and query it right after to confirm the write:

$ etcdctl --endpoints=http://127.0.0.1:2379 put \
  /kubernetes.io/namespaces/kube-system "value for v3"
$ etcdctl --endpoints=http://127.0.0.1:2379 get \
  /kubernetes.io/namespaces/kube-system
/kubernetes.io/namespaces/kube-system
value for v3

With that I’ll wrap up this post and hope you’re successful in migrating from etcd v2 to etcd v3!
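Related to the v3 get above: the v3 replacement for a recursive v2 listing is a prefix get, and under the hood a prefix is turned into a key range by incrementing the last byte of the prefix. Here is a minimal Go sketch of that computation (the function name `prefixRangeEnd` is mine; the etcd v3 Go client has an equivalent helper, but treat the edge-case handling here as an illustration rather than the canonical implementation):

```go
package main

import "fmt"

// prefixRangeEnd computes the end of the key range covering all keys
// that start with the given prefix: increment the last byte that is
// below 0xff and truncate after it. The half-open range [prefix, end)
// then replaces what a recursive directory listing did in v2.
func prefixRangeEnd(prefix string) string {
	end := []byte(prefix)
	for i := len(end) - 1; i >= 0; i-- {
		if end[i] < 0xff {
			end[i]++
			return string(end[:i+1])
		}
	}
	// Prefix is all 0xff bytes: no finite upper bound exists; "\x00"
	// is used here as a sentinel for "unbounded".
	return "\x00"
}

func main() {
	fmt.Println(prefixRangeEnd("/kubernetes.io/namespaces/"))
	// → /kubernetes.io/namespaces0
}
```

The "0" in the output is not a typo: it is the character one byte after "/", so every key starting with the prefix sorts strictly below it.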
If you’ve got additional insights or comments on the above, please do share them here, hit me up on Twitter (DMs are open), or come and join us on the Kubernetes Slack, where I’m usually hanging out in #sig-cluster-lifecycle and #sig-apps.

Last but not least, I’d like to give Serg of CoreOS huge kudos: he patiently helped me through issues I experienced around using the v3 API. Thank you, and I owe you one!