Measuring a command's execution time

Sometimes you need to find out how long a command takes to finish. By prefixing the command with time, you can measure its execution time.

As an example, let’s see how long it takes to create a 512MB file on our system:

$ time dd if=/dev/random of=/tmp/output.file bs=1 count=0 seek=536870912
.. output ..

real	0m2.021s
user	0m0.002s
sys	0m0.314s

The command returns three statistics:

  • real: The total wall-clock time elapsed from start to finish, including any time the process spent waiting.
  • user: The amount of CPU time spent in user-mode code (outside the kernel) by the process itself.
  • sys: The amount of CPU time spent in the kernel on behalf of the process.

In this case, I’m interested in the real statistic, which is 2.021 seconds in the example above.
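If you want to record that statistic rather than just read it from the terminal, note that bash's time keyword writes its report to stderr. A small sketch (the /tmp/timing.txt path is just an example): grouping the command with braces lets you redirect the timing to a file while the command's own output is untouched.

```shell
# The timing report goes to the shell's stderr, so redirect stderr
# of the whole group to capture it; the timed command's stdout is
# unaffected.
{ time sleep 1 ; } 2> /tmp/timing.txt
cat /tmp/timing.txt
```

This is handy in scripts where you want to log how long a step took without mixing the report into the step's normal output.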

Note that the sum of user and sys can be greater than real. If your system has multiple CPU cores and the process runs multiple threads, the total CPU time can exceed the wall-clock time.
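You can see this effect with a contrived sketch: run several CPU-bound loops in parallel. On a machine with four or more cores, the reported user time approaches the combined CPU time of all four loops, while real stays close to the duration of a single loop.

```shell
# Four busy loops running concurrently in background jobs.
# Each burns CPU on its own core, so user time can be roughly
# four times the real (wall-clock) time.
time bash -c 'for i in 1 2 3 4; do
  ( n=0; while [ $n -lt 200000 ]; do n=$((n+1)); done ) &
done; wait'
```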

Getting different results?

You might get different output on your machine when using the time command, depending on the shell you are using. The example above was run in a bash shell. If you ran the same command inside a zsh shell, the output would probably look like this:

$ /bin/zsh -c "time dd if=/dev/random of=/tmp/output.file bs=1 count=0 seek=536870912"
.. output ..

dd if=/dev/random of=/tmp/output.file bs=1 count=0 seek=536870912  0.00s user 0.37s system 12% cpu 2.989 total

That’s because both bash and zsh provide their own built-in time keyword, each with its own output format. There is also the external /usr/bin/time command, which is available on most systems. To determine which one will be used, you can check with type:

$ type -a time
time is a shell keyword
time is /usr/bin/time

This indicates that in my bash shell, time is a shell keyword, so the built-in version is the one that runs when I use the command.
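If you specifically want the external binary instead of the shell keyword, you can call it by its full path (assuming /usr/bin/time is installed; on some minimal systems it is a separate package). The -p flag requests the portable POSIX output format:

```shell
# Bypass the shell keyword by invoking the binary directly.
/usr/bin/time -p sleep 1

# In bash, escaping the name also skips the keyword:
# \time sleep 1
```

The external command is useful in scripts run by shells that lack a time keyword, and GNU versions of it offer extra reporting options beyond what the builtin prints.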