Kubernetes is shipped with many namespaces. Some of them are critical for Kubernetes to function correctly.
Messing around in one of these namespaces can damage the Kubernetes system.
And these are:

default
The home of the homeless.

kube-system
The namespace for objects created by the Kubernetes system.

kube-public
This namespace is created automatically and is readable by all users (including those not authenticated). It is mostly reserved for cluster usage, in case some resources should be visible and readable publicly throughout the whole cluster. This is useful for exposing any cluster information necessary to bootstrap components. It is primarily managed by Kubernetes itself.

kube-node-lease
This namespace holds Lease objects associated with each node. Node leases allow the kubelet to send heartbeats so that the control plane can detect node failure.
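You can list these namespaces on any cluster. The output below is illustrative; the exact set varies with your distribution, since add-ons often create namespaces of their own:

$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   30d
kube-node-lease   Active   30d
kube-public       Active   30d
kube-system       Active   30d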
If you delete one of these system namespaces accidentally, the Kubernetes components will try to regenerate it. But if you are unfortunate enough, the deletion might get stuck at the Terminating stage, leaving no way to regenerate the namespace.

So the cases below explain the importance of each namespace, so you know what the symptoms might look like.
default
The default namespace is where any objects you create without specifying a namespace end up.
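A quick way to see this behavior (a minimal sketch; nginx-test is just a hypothetical pod name):

$ kubectl run nginx-test --image=nginx    # no --namespace flag given
$ kubectl get pod nginx-test -n default   # the pod landed in default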
kube-system
The kube-system namespace holds objects and service accounts with high-level privileges within Kubernetes. The Kubernetes controllers run from this namespace, so losing it means trouble with the controllers and possibly with deploying new pods/deployments.
This namespace also contains other important objects, such as kube-dns and kube-proxy.

kube-dns
is the authoritative name server for the cluster domain (cluster.local), and it resolves external names recursively. Short names that are not fully qualified, such as myservice, are completed first with local search paths.
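You can observe the search-path completion from inside any pod (a sketch; dnsutils stands in for any pod that has nslookup installed):

$ kubectl exec -it dnsutils -- nslookup kubernetes
# the short name is completed via the search path to
# kubernetes.default.svc.cluster.local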
kube-proxy
manages the forwarding of traffic addressed to the virtual IP addresses (VIPs) of the cluster's Kubernetes Service objects to the appropriate backend pods.

Losing either of these means trouble with internal/external name resolution and service-to-service communication.
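A quick health check on these components (illustrative output; pod names, counts, and the DNS provider vary by cluster):

$ kubectl get pods -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-5d78c9869d-x7k2p   1/1     Running   0          30d
kube-proxy-9fhwm           1/1     Running   0          30d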
kube-public
The kube-public namespace contains a single ConfigMap object, cluster-info, that aids discovery and security bootstrap.
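On clusters bootstrapped with kubeadm you can read it without even authenticating, which is the whole point of the namespace (a sketch; the ConfigMap may be absent on other distributions):

$ kubectl get configmap cluster-info -n kube-public -o yaml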
If you try to delete any of the namespaces above, for example with kubectl delete namespace kube-public, the server will respond with:

Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Except for the lucky kube-node-lease, which was added in Kubernetes v1.14 and can be deleted the same as any normal namespace.
kube-node-lease
This namespace holds Lease objects associated with each node. Node leases allow the kubelet to send heartbeats so that the control plane (Node controller) can detect node failure.
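You can inspect the leases directly; each one is renewed by its node's kubelet every few seconds (illustrative output; the names match your node names):

$ kubectl get leases -n kube-node-lease
NAME     HOLDER   AGE
node-1   node-1   30d
node-2   node-2   30d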
So what would happen if we deleted kube-node-lease? Usually, Kubernetes will create another one, with a Lease object for each node, but sometimes the namespace removal gets stuck at the Terminating status.

By then we would have node Leases with outdated heartbeats, which might tell the Node controller that the nodes are not reachable and impact the overall communication between the nodes.
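One way to spot this symptom (a sketch; node-1 is a placeholder for one of your node names) is to compare a lease's renewTime with the current time:

$ kubectl get lease node-1 -n kube-node-lease -o jsonpath='{.spec.renewTime}'
# a stale timestamp here means the heartbeat is no longer being renewed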
Of course, you can try to figure out why the namespace is stuck at Terminating, but sometimes you can't, so here we go with the force delete.

First, dump the namespace to a file:

$ kubectl get namespace <terminating-namespace> -o json > tmp.json

Then edit tmp.json and remove "kubernetes" from the finalizers array under spec, leaving the spec empty; otherwise the finalize call below changes nothing. Start a proxy to the API server:

$ kubectl proxy
Starting to serve on 127.0.0.1:8001

And from another terminal, PUT the modified JSON to the namespace's finalize endpoint:

$ curl -k -H "Content-Type: application/json" -X PUT --data-binary @tmp.json http://127.0.0.1:8001/api/v1/namespaces/<terminating-namespace>/finalize
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "<terminating-namespace>",
    "selfLink": "/api/v1/namespaces/<terminating-namespace>/finalize",
    "uid": "b50c9ea4-ec2b-11e8-a0be-fa163eeb47a5",
    "resourceVersion": "1602981",
    "creationTimestamp": "2021-10-18T18:48:30Z",
    "deletionTimestamp": "2021-10-18T18:59:36Z"
  },
  "spec": {},
  "status": {
    "phase": "Terminating"
  }
}
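Afterwards you can confirm the namespace is really gone:

$ kubectl get namespace <terminating-namespace>
Error from server (NotFound): namespaces "<terminating-namespace>" not found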
Hope this helps!