
Kubernetes Explained Simply: Getting At Those Logs [Part 4]

by James Hunt, December 3rd, 2020

UNIX/Linux system administrators the world over regularly use log files to get to the bottom of outages and malfunctions. An indispensable tool in that regard is tail(1), particularly its follow-mode flag (-f). When we're in a Kubernetes world, we'd love to use something similar.
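
On a traditional box, that habit looks something like this (the log path is just an example):

# print the last 200 lines, then keep following as new entries arrive
tail -n 200 -f /var/log/syslog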

We're in luck.

The logs command has two flags that are most helpful for watching live log streams: the aptly-named --follow flag (-f for short) and the context-limiting --tail.

kubectl logs my-pod-name --follow --tail 10

This one-two combo is super helpful for tracing ongoing issues on a running system. Limiting the context with --tail prevents us from fixating on old problems that may have since been resolved. Streaming the log as it happens with --follow gives us the opportunity to poke and prod at the system, seeing new log entries appear as we do things in the application or service under scrutiny.
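
The same combo scales past a single pod, too. If the workload runs several replicas, a label selector lets us follow them all at once, and --prefix tags each line with the pod it came from (the app=my-app label is just an assumed example):

# follow the last 10 lines from every pod matching the label, prefixing each line with its source pod
kubectl logs -l app=my-app --follow --tail 10 --prefix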

Pro Tip: While tailing, you can hit <Enter> a few times to insert lots of blank lines, thus demarcating the stretches of time between trying things in the front-end.

When Pods Crash

By default, kubectl logs only shows output from the pod's current container. If the pod crashed and got restarted, all we get is the fresh replacement instance happily starting back up:

$ kubectl logs crashing-pod
[Mon Feb 10 16:29:19 UTC 2020] starting up...
[Mon Feb 10 16:29:34 UTC 2020] startup complete; initializing.
[Mon Feb 10 16:29:34 UTC 2020] - frobbing the data cache...
[Mon Feb 10 16:29:34 UTC 2020] - allocating grokbase fooblers...
[Mon Feb 10 16:29:34 UTC 2020] - reticulating splines...
[Mon Feb 10 16:29:34 UTC 2020] - reclaiming unused memory...
[Mon Feb 10 16:29:34 UTC 2020] - checking in on friends and family...
[Mon Feb 10 16:29:34 UTC 2020] initialization complete; entering main loop.
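
Before digging further, it's worth confirming that the pod really did fall over and get restarted; if so, kubectl get will show a non-zero RESTARTS count:

kubectl get pod crashing-pod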

What I really need is the end of the log buffer from the failed / crashed instance, and that's exactly what the -p (--previous) flag digs up:

$ kubectl logs -p crashing-pod
[Mon Feb 10  7:18:20 UTC 2020] starting up...
[Mon Feb 10  7:18:35 UTC 2020] startup complete; initializing.
[Mon Feb 10  7:18:35 UTC 2020] - frobbing the data cache...
[Mon Feb 10  7:18:35 UTC 2020] - allocating grokbase fooblers...
[Mon Feb 10  7:18:35 UTC 2020] - reticulating splines...
[Mon Feb 10  7:18:35 UTC 2020] - reclaiming unused memory...
[Mon Feb 10  7:18:35 UTC 2020] - checking in on friends and family...
[Mon Feb 10  7:18:35 UTC 2020] initialization complete; entering main loop.
[Mon Feb 10  7:18:35 UTC 2020] ...
[Mon Feb 10  7:18:35 UTC 2020] ...
[Mon Feb 10  7:18:35 UTC 2020] ...
[Mon Feb 10  7:18:35 UTC 2020] ...
[Mon Feb 10  7:18:35 UTC 2020] UNMITIGATED DISASTER!
[Mon Feb 10  7:18:35 UTC 2020] THREAD PANIC detected; no assurances given.
[Mon Feb 10  7:18:35 UTC 2020] IT'S A BAD SCENE, MAN!
[Mon Feb 10  7:18:35 UTC 2020] <crashed>
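
Since it's usually only the tail end of that buffer that matters, -p composes with the same --tail flag from earlier to skip the startup chatter (the line count here is arbitrary):

# last 8 lines from the previous, crashed instance of the pod
kubectl logs -p crashing-pod --tail 8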

With that information in hand, I can continue my diagnostic efforts and hopefully resolve this early morning nuisance.

Previously published at https://starkandwayne.com/blog/silly-kubectl-trick-4-getting-at-those-logs/