Running Kubernetes in production means taking inventory. A LOT. Are any of our pods running the version of the Ubuntu base image affected by the new CVE? Do we even use Alpine Linux anywhere? What versions of MySQL are we currently running (and where)? The standard output of kubectl get pods doesn't help to answer any of these questions. That's okay, though, because we have the custom-columns output format!

$ kubectl get pods -A -o \
    custom-columns=NAMESPACE:metadata.namespace,NAME:metadata.name
NAMESPACE     NAME
fail          fail-856f678c66-dn282
interactive   interactive-797dbc7d9-ch9bd
kube-system   calico-kube-controllers-dc6cb64cb-pfhqr
kube-system   calico-node-nk854
kube-system   coredns-5644d7b6d9-9776g
kube-system   coredns-5644d7b6d9-zccn5
kube-system   csi-linode-controller-0
kube-system   csi-linode-node-82qgt
kube-system   kube-proxy-xxpvf
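Each column is just a HEADER:field-path pair, so you can chain on as many as you like. Here's a quick sketch that also pulls in the node each pod was scheduled onto, via the standard spec.nodeName field (the NODE header is an arbitrary name):

$ kubectl get pods -A -o \
    custom-columns=NAMESPACE:metadata.namespace,NAME:metadata.name,NODE:spec.nodeName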
Whew! That's a fingerful to type. If you'd like, you can put the details of the format in a file, and reference that file via the custom-columns-file output format:

$ cat pods.fmt
NAMESPACE            NAME
metadata.namespace   metadata.name

$ kubectl get pods -A -o custom-columns-file=pods.fmt
NAMESPACE     NAME
fail          fail-856f678c66-dn282
interactive   interactive-797dbc7d9-ch9bd
kube-system   calico-kube-controllers-dc6cb64cb-pfhqr
kube-system   calico-node-nk854
kube-system   coredns-5644d7b6d9-9776g
kube-system   coredns-5644d7b6d9-zccn5
kube-system   csi-linode-controller-0
kube-system   csi-linode-node-82qgt
kube-system   kube-proxy-xxpvf
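Note that nothing in pods.fmt is actually pod-specific: metadata.namespace and metadata.name exist on every Kubernetes object, so the same file should work for other resource kinds as well. A minimal sketch:

$ kubectl get deployments -A -o custom-columns-file=pods.fmt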
Doing so allows you to swap out selectors, like namespaces and -l label filters, while re-using the same format (and not having to retype it!).
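For instance, the same pods.fmt can be pointed at a single namespace and a label selector. A rough sketch, assuming your CoreDNS pods carry the conventional k8s-app=kube-dns label (adjust to whatever labels your cluster actually uses):

$ # assumes the k8s-app=kube-dns label; not guaranteed on every cluster
$ kubectl get pods -n kube-system -l k8s-app=kube-dns \
    -o custom-columns-file=pods.fmt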
We can do so much more than just list off names and namespaces. We can go so far as to list out the images in use by the first container of each pod:
$ cat images.fmt
NAMESPACE            NAME            IMAGE
metadata.namespace   metadata.name   spec.containers[0].image

$ kubectl get pods -A -o custom-columns-file=images.fmt
NAMESPACE     NAME                                      IMAGE
fail          fail-856f678c66-dn282                     huntprod/run
interactive   interactive-797dbc7d9-ch9bd               huntprod/run
kube-system   calico-kube-controllers-dc6cb64cb-pfhqr   calico/kube-controllers:v3.9.2
kube-system   calico-node-nk854                         calico/node:v3.9.2
kube-system   coredns-5644d7b6d9-9776g                  k8s.gcr.io/coredns:1.6.2
kube-system   coredns-5644d7b6d9-zccn5                  k8s.gcr.io/coredns:1.6.2
kube-system   csi-linode-controller-0                   quay.io/k8scsi/csi-provisioner:v1.0.0
kube-system   csi-linode-node-82qgt                     quay.io/k8scsi/driver-registrar:v1.0-canary
kube-system   kube-proxy-xxpvf                          k8s.gcr.io/kube-proxy:v1.16.3
Now we're talkin'!
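This is already enough to start answering the "do we even use Alpine anywhere?" class of question. One sketch, leaning on kubectl's --no-headers flag plus standard POSIX tools, boils the listing down to a count of distinct (first-container) images:

$ kubectl get pods -A --no-headers -o \
    'custom-columns=IMAGE:spec.containers[0].image' | sort | uniq -c | sort -rn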
Keep in mind that the tag reported is whatever was in the pod spec – if it is missing, you can assume :latest, but that doesn't tell you much about the actual version in use. If you want anything more specific out of Kubernetes, you'll need to use the status object instead of the spec.

$ cat versions.fmt
NAMESPACE            NAME            IMAGE
metadata.namespace   metadata.name   status.containerStatuses[0].imageID
$ kubectl get pods -A -o custom-columns-file=versions.fmt
NAMESPACE     NAME                                      IMAGE
fail          fail-856f678c66-dn282                     docker-pullable://huntprod/run@sha256:1d8debb90a76fcc434cd5452e61eb9f55fb71d82b8fbbe2fd54ad423e17a996d
interactive   interactive-797dbc7d9-ch9bd               docker-pullable://huntprod/run@sha256:1d8debb90a76fcc434cd5452e61eb9f55fb71d82b8fbbe2fd54ad423e17a996d
kube-system   calico-kube-controllers-dc6cb64cb-pfhqr   docker-pullable://calico/kube-controllers@sha256:5d525a6c6cec7f1e9a2b35723ffc63223f9fd067619cc1db209339792927dd02
kube-system   calico-node-nk854                         docker-pullable://calico/node@sha256:ffbe7b00344065007154b81f50d0f3960ce35fc790cdba0c2e2f0ae60e08cae2
kube-system   coredns-5644d7b6d9-9776g                  docker-pullable://k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5
kube-system   coredns-5644d7b6d9-zccn5                  docker-pullable://k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5
kube-system   csi-linode-controller-0                   docker-pullable://quay.io/k8scsi/csi-attacher@sha256:e57bb6abf0d78e638f70d38bdb07ee30ffe42d423a14fb2f910c11afab3a5e01
kube-system   csi-linode-node-82qgt                     docker-pullable://linode/linode-blockstorage-csi-driver@sha256:6a466fea4f597f274d646839ff1f40363e6c5bac5871fb2f4e23ee0eaadb56ee
kube-system   kube-proxy-xxpvf                          docker-pullable://k8s.gcr.io/kube-proxy@sha256:6c09387bbee4e58eb923695da4fdfa3c37adec632862e79f419f0b5b16865f34
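With digests in hand, the CVE question from the top of the post becomes a grep. A rough sketch, matching on a digest prefix (here the coredns digest from the listing above; substitute whichever digest you actually care about):

$ kubectl get pods -A -o custom-columns-file=versions.fmt | grep 12eb885b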
Kubernetes tracks the raw SHA256 checksum of each image it executes in status.containerStatuses[n].imageID, and the resolved image name and its tag in the sibling .image field. Between the two of them, we get a much more complete picture of our running image inventory.
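A format that pulls both fields side by side makes that picture explicit. A minimal sketch (inventory.fmt is just an illustrative file name):

$ # inventory.fmt is an example name for this sketch
$ cat inventory.fmt
NAME            IMAGE                               IMAGEID
metadata.name   status.containerStatuses[0].image   status.containerStatuses[0].imageID

$ kubectl get pods -A -o custom-columns-file=inventory.fmt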
So far we've been using the [0]-th index, which limits us to the first container of any multi-container pods. We could instead use [*], and kubectl will separate multiple values with commas. This gets unwieldy fast, especially with the imageID fields, but it is there when you need it.

Pro Tip: Since we know that the only instance of a comma in our output will be to separate multiple image IDs, we can use column (a POSIX utility) to reformat the table to ease readability:
$ kubectl get pods -A -o \
'custom-columns=POD:metadata.name,IMAGE:spec.containers[*].image' | \
column -t -s,
POD                                       IMAGE
fail-856f678c66-dn282                     huntprod/run
interactive-797dbc7d9-ch9bd               huntprod/run
calico-kube-controllers-dc6cb64cb-pfhqr   calico/kube-controllers:v3.9.2
calico-node-nk854                         calico/node:v3.9.2
coredns-5644d7b6d9-9776g                  k8s.gcr.io/coredns:1.6.2
coredns-5644d7b6d9-zccn5                  k8s.gcr.io/coredns:1.6.2
csi-linode-controller-0                   quay.io/k8scsi/csi-provisioner:v1.0.0        quay.io/k8scsi/csi-attacher:v1.0.0  linode/linode-blockstorage-csi-driver:v0.1.3
csi-linode-node-82qgt                     quay.io/k8scsi/driver-registrar:v1.0-canary  linode/linode-blockstorage-csi-driver:v0.1.3
kube-proxy-xxpvf                          k8s.gcr.io/kube-proxy:v1.16.3
It's also important to note that the formats above completely ignore any and all init containers, in case those are important to you.
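If they do matter, the same trick extends to them, since init container specs live under spec.initContainers. A minimal sketch (pods without init containers should simply report <none> in that column):

$ kubectl get pods -A -o \
    'custom-columns=POD:metadata.name,INIT-IMAGES:spec.initContainers[*].image,IMAGES:spec.containers[*].image'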
If you like that, check out this video adaptation, wherein I go a bit more in-depth, and touch on some stuff you can do with files holding all your sweet, sweet custom output formats.
Previously published at https://starkandwayne.com/blog/silly-kubectl-trick-2-listing-images/