[EDIT: check out the awesome comments on reddit that add to and correct a lot of the following]
Open up a terminal. Run ‘type time’. You’ll be told that “time is a shell keyword”. Now run ‘which time’ and you’ll see ‘/usr/bin/time’, which looks like a path to a binary. Are they the same thing? Nope. In fact, one of them can give you a whole lot of interesting information that the other can’t.
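To see both answers side by side (the wording of ‘type’ varies by shell, so these invoke bash explicitly, and the path ‘which’ reports depends on your system):

```shell
# Ask bash what it will actually run when you type `time`:
bash -c 'type time'            # prints: time is a shell keyword
# ...versus what a plain $PATH search finds (if the binary is installed;
# `which` exits nonzero when it finds nothing, hence the || true):
bash -c 'which time' || true   # e.g. /usr/bin/time
```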
Try ‘man time’.
Huh, did you know time had options?
I have very little idea what those mean, but they probably won’t wipe my bootleg Hamilton shakycam recording in ~/Downloads, so let’s try one of them out:
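For example, the one flag POSIX actually guarantees for the time utility is -p, which just prints the familiar real/user/sys lines in a fixed format:

```shell
# -p: write real/user/sys to stderr in the POSIX-specified format.
# (Guarded in case the external binary isn't installed on this system.)
if [ -x /usr/bin/time ]; then
  /usr/bin/time -p sleep 1
fi
```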
I was expecting something more exciting. Let’s read over that man page a little more carefully:
Aha, maybe this is just a bash thing. Let’s try zsh, which, based on the number of stars on GitHub, looks to be more web scale.
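It turns out you can watch both shells special-case time without leaving the terminal (the zsh check is guarded in case it isn’t installed):

```shell
# bash's time is a keyword, and the keyword doesn't know about -l:
bash -c 'time -l ls' || true   # rejected with an "invalid option" style error
# zsh likewise treats time as a reserved word, not the binary on $PATH:
if command -v zsh >/dev/null 2>&1; then
  zsh -c 'whence -w time'      # prints: time: reserved
fi
```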
No dice! Last try: fish, the single-origin fixed-gear shell of choice.
Gee willikers, that sure is a lot of information.
So, the ‘time’ command I’ve been using all these years hasn’t actually been the one described in the man pages. I wonder why bash and zsh build time into the shell. (One reason: as a keyword, it can time an entire pipeline, which an external binary can’t do.) But before I dive into that, let’s look at what ‘time -l’ can tell us.
The man page for getrusage (a system call) has detailed descriptions of every bit of the output from ‘time -l’. E.g.
ru_inblock: the number of times the file system had to perform input
When I read in a 410KiB PDF:
It does one “block input operation”, so it read the whole thing in one go from my SSD. I tried it with a 120MB JSON file, and it also did it in one operation. Interesting.
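If you want to poke at this yourself, here’s a self-contained version of the experiment (the file is a generated throwaway; -l is the BSD time that ships with macOS, and GNU time’s -v is the closest Linux equivalent):

```shell
# Make a throwaway 1 MiB file to read (stands in for your own big file):
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1024 count=1024 2>/dev/null

if [ "$(uname)" = "Darwin" ]; then
  # BSD time on macOS: -l appends the getrusage counters.
  /usr/bin/time -l wc "$f"            # look for "block input operations"
else
  # GNU time's nearest equivalent reports "File system inputs".
  # (Guarded: some distros don't install the external binary.)
  /usr/bin/time -v wc "$f" 2>/dev/null || true
fi
rm -f "$f"
```

One thing to watch out for: if the file is already sitting in the OS page cache (say, because you just wrote it), you may see zero block input operations, since that counter only tracks reads that actually hit the disk.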
How about voluntary context switches?
the number of times a context switch resulted due to a process voluntarily giving up the processor before its time slice was completed (usually to await availability of a resource).
Okay, so that’s when wc tries to read a file from disk, but is told by the kernel to wait, either because the disk is being used by someone else, or it’s still trying to perform its operation (e.g. if the process is consuming the input faster than the disk can spit it out).
That doesn’t happen at all with the small PDF, but happens thrice in the case of the 120MB JSON file.
The “maximum resident set size” that headlines the stats we get back from time is complicated, but you can think of it as the peak amount of physical memory the process had in use at any one time, in bytes.
There’s barely any difference in memory usage between running wc on a big file or a small one, in case you were wondering. Which makes sense, because the only thing that wc needs to keep in its head while it’s streaming in a file is a few small counters (characters, lines, etc.), along with whichever part of the file is in its buffer.
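You can check that claim by grepping the resident-set-size line for a tiny input versus a larger one (throwaway files again, and the measurement itself is macOS-only here; on Linux, GNU time’s -v reports a “Maximum resident set size” in KiB instead):

```shell
# Throwaway inputs: a six-byte file and a 10 MiB one.
small=$(mktemp); big=$(mktemp)
printf 'hello\n' > "$small"
dd if=/dev/zero of="$big" bs=1024 count=10240 2>/dev/null

if [ "$(uname)" = "Darwin" ]; then
  # BSD time reports this counter in bytes.
  /usr/bin/time -l wc "$small" 2>&1 | grep 'maximum resident set size'
  /usr/bin/time -l wc "$big"   2>&1 | grep 'maximum resident set size'
fi
rm -f "$small" "$big"
```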
In any case, there you have it. time is not the time you thought it was, and if you want to get an idea of how your program is doing IO, or how much memory it’s using, try /usr/bin/time -l.