I wanted to use GNU time to measure the running time of some small C programs. The man page says:
-f FORMAT, --format FORMAT
Use FORMAT as the format string that controls the output of time. See below for more information.
Then in examples we have:
To run the command `ls -Fs' and show just the user, system, and total time:
time -f "%E real,%U user,%S sys" ls -Fs
But when I try to issue the command from the example, I get:
time -f '%E real,%U user,%S sys' ls -Fs
-f: command not found
real 0m0.134s
user 0m0.084s
sys 0m0.044s
I am wondering where the problem is; where am I making a mistake? I just want to show the user time, which is why I am toying with time's output format.
Bash, for one, has a shell keyword named time (often loosely called a builtin). One way to get past it is to type command time: command bypasses keywords and builtins and runs the time program from your $PATH. Another way is alias time=/usr/bin/time. On the other hand, the bash builtin respects the TIMEFORMAT environment variable, so its output format can also be customized.
The documentation also mentions env time to use the time command from the system (it uses /usr/bin/env or alike, so it should be independent of the shell).
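For example, any of these should bypass the builtin and run the external program (assuming it lives at the usual /usr/bin/time):
command time -f '%E real,%U user,%S sys' ls -Fs   # 'command' skips keywords and builtins
/usr/bin/time -f '%E real,%U user,%S sys' ls -Fs  # call the binary by its path
env time -f '%E real,%U user,%S sys' ls -Fs       # env looks time up in $PATH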
Various bash commands I use -- fancy diffs, build scripts, etc. -- produce lots of color output.
When I redirect this output to a file and then cat or less the file later, the colorization is gone, presumably because the act of redirecting the output stripped out the color codes that tell the terminal to change colors.
Is there a way to capture colorized output, including the colorization?
One way to capture colorized output is with the script command. Running script will start a bash session where all of the raw output is captured to a file (named typescript by default).
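A minimal session might look like this (typescript is script's default output name):
script                # start a recorded shell; raw output, colors included, goes to the file
ls --color            # run whatever colorized commands you want captured
exit                  # leave the recorded shell
cat typescript        # the terminal interprets the saved color codes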
Redirecting doesn't strip colors, but many commands will detect when they are sending output to a terminal, and will not produce colors by default if not. For example, on Linux ls --color=auto (which is aliased to plain ls in a lot of places) will not produce color codes if outputting to a pipe or file, but ls --color will. Many other tools have similar override flags to get them to save colorized output to a file, but it's all specific to the individual tool.
Even once you have the color codes in a file, you need a tool that leaves them intact to see them. less has a -r flag to show file data in "raw" mode, which displays the color codes. Edit: slightly newer versions also have a -R flag, which is specifically aware of color codes and displays them properly, with better support for things like line wrapping/trimming than raw mode, because less can tell which things are control codes and which are characters actually going to the screen.
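For example (assuming GNU ls and a reasonably recent less):
ls --color=auto > listing.txt   # no escape codes: ls sees a non-terminal
ls --color > listing.txt        # forces the escape codes into the file
less -R listing.txt             # display the file with colors interpreted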
Inspired by the other answers, I started using script. I had to use -c to get it working, though. None of the other answers, including tee and the various script examples, worked for me.
Context:
Ubuntu 16.04
running behavior tests with behave and starting shell commands during the tests with Python's subprocess.check_call()
Solution:
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"
Explanation for the switches:
--flush was needed because otherwise the output cannot be observed live; it arrives in big chunks
--quiet suppresses script's own output
-c, --command directly provides the command to execute; piping from my command into script did not work for me (no colors; see the sketch after this list)
--return makes script propagate the exit code of my command, so I know whether my command failed
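To illustrate the difference (my-ansible-command is the placeholder from the solution above):
# One plausible pipe-based attempt: my-ansible-command writes to a pipe, so it drops its colors.
my-ansible-command | script --flush --quiet /tmp/ansible-output.txt
# Wrapping it with --command instead gives it a pseudo-terminal, so the colors survive.
script --flush --quiet --return /tmp/ansible-output.txt --command "my-ansible-command"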
I found that using script to preserve colors when piping to less doesn't really work (less gets all messed up, and on exit bash is all messed up too) because less is interactive. script seems to really mess up input coming from stdin, even after exiting.
So instead of running:
script -q /dev/null cargo build | less -R
I redirect /dev/null to it before piping to less:
script -q /dev/null cargo build < /dev/null | less -R
So now script doesn't mess with stdin and gets me exactly what I want. It's the equivalent of command | less but it preserves colors while also continuing to read new content appended to the file (other methods I tried wouldn't do that).
Some programs remove colorization when they detect that their output is not a TTY (i.e. when you redirect them into another program). You can force some of them to emit color anyway, and tell the pager to interpret it, for example with less -R.
This question over on Super User helped me when my other answer (involving tee) didn't work. It involves using unbuffer to make the command think it's writing to an interactive terminal.
I installed it using sudo apt install expect tcl rather than sudo apt-get install expect-dev.
I needed to use this method when redirecting the output of apt, ironically.
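A rough sketch of the idea (unbuffer ships with the expect package):
unbuffer apt list --upgradable | tee apt.log   # apt believes it is writing to a terminal
less -R apt.log                                # view the captured colors later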
I use tee: pipe the command's output to tee filename and it'll keep the colour. And if you don't want to see the output on the screen (which is what tee is for: showing and redirecting output at the same time), just send the output of tee to /dev/null:
command | tee filename > /dev/null
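For example, with a command forced to emit color (the caveat from the other answers still applies: the command itself must not drop its colors when writing to a pipe):
ls --color | tee listing.txt               # colors on screen and in the file
ls --color | tee listing.txt > /dev/null   # file only, nothing on screen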
I have a perl script that calls external executables using system(). I would like to measure the CPU seconds taken by these external programs. Ideally, I would like to run them using the shell builtin time command (this is on a Linux system). Something like this:
system("time /path/to/command")
Now, time prints its output to stderr. This means that in order to capture time's output separately from the command's own stderr when running manually in the shell, you need to explicitly use a subshell and redirect the construct's stderr:
$ time ( command > command.log 2> command.er) 2> time.out
The file time.out will have the output of the time command, while command.er has the stderr of command. Unfortunately, the parentheses break Perl's system call:
$ time ( ls ) 2> er ## works
$ perl -e 'system("time (ls)")'
sh: 1: Syntax error: word unexpected (expecting ")")
And this means I can't capture the output of time. To make matters worse, this seems to be version dependent:
$ perl --version | head -n2
This is perl 5, version 18, subversion 2 (v5.18.2) built for x86_64-linux-gnu-thread-multi
But if I try the same thing with a newer version:
$ perl --version | head -n2
This is perl 5, version 24, subversion 1 (v5.24.1) built for x86_64-linux-thread-multi
$ perl -e 'system("time (ls)")'
file1
real 0m0.002s
user 0m0.000s
sys 0m0.000s
Unfortunately, I need this to run on a production machine, so upgrading Perl is not an option. So, how can I time a system call in Perl 5.18? I need the user and sys values, so simply recording the start and end times won't help. I am willing to use a dedicated module if necessary, although I would prefer a trick that lets me use the shell's time.
UPDATE: it turns out the difference in behavior is not because of the newer Perl version. Instead, it is because I tested it on an Arch system whose /bin/sh is bash, while the other commands were being run on Ubuntu systems whose /bin/sh is dash, a minimal shell that has no time keyword and therefore cannot time a parenthesized subshell.
You can use Capture::Tiny to capture the STDOUT and STDERR of pretty much anything in Perl.
use Capture::Tiny 'capture';
my ($stdout, $stderr, $exit) = capture { system "time ls" };
print $stderr;
For some reason the output is missing some whitespace on my system, but it is clear enough to parse out what you need.
0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 2272maxresident)k
0inputs+8outputs (0major+111minor)pagefaults 0swaps
You've tested the command with bash, but you passed it to sh.
system("time (ls)")
is short for
system("/bin/sh", "-c", "time (ls)")
but you want
system("/bin/bash", "-c", "time (ls)")
$ time ( ls ) 2> er ## works
$ perl -e 'system("time (ls)")'
sh: 1: Syntax error: word unexpected (expecting ")")
The problem is that in the first case your shell is probably /bin/bash, whereas in the second case it is /bin/sh. If you want to run your command with another shell, you can use the LIST form of system:
system("/bin/bash", "-c", "time(ls)")
Note 1: There's a PERL5SHELL environment variable, but that seems to take effect only on Win32.
Note 2: If you want to measure the CPU time of a child process, you could use the Unix::Getrusage or BSD::Resource modules.
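For completeness, core Perl can also do this without any modules: the built-in times function returns the cumulative user and system CPU time of reaped children once system() has returned. A minimal sketch:
perl -e 'system("ls > /dev/null"); my ($u, $s, $cu, $cs) = times; printf "child: %.2fs user, %.2fs sys\n", $cu, $cs'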
This is a reduced example of a makefile which illustrates my problem:
exec:
time (ls > ls.txt; echo $$? > code) 2> time.txt
make exec runs fine under one Linux installation:
Linux-2.6.32-642.4.2.el6.x86_64-x86_64-with-centos-6.8-Final
but it fails under my Ubuntu installation:
Linux-4.4.0-64-generic-x86_64-with-Ubuntu-16.04-xenial
and produces the message:
/bin/sh: 1: Syntax error: word unexpected (expecting ")")
No problems arise if I run the time command directly from the terminal.
Are there different versions of the command in different Linux installations? I need the version that allows a sequence of commands.
Make always invokes /bin/sh to run the recipe. On some systems, /bin/sh is an alias for bash which has a lot of extra extensions to the standard POSIX shell (sh). On other systems (like Ubuntu), /bin/sh is an alias for dash which is a smaller, simpler, closer to plain POSIX shell.
Bash has a built-in time operation which accepts an entire pipeline and shows the time taken for it (run help time at a bash shell command prompt to see documentation). Other shells like dash don't have a built-in time, so when you run it you get the program /usr/bin/time; run man time to see documentation. As a separate program it of course cannot time an entire pipeline (because a pipeline is a feature of the shell); it can only time one individual command.
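You can see the difference directly from a prompt, assuming both shells are installed:
bash -c 'time ( ls > /dev/null )'   # bash keyword: times the whole subshell
dash -c 'time ( ls > /dev/null )'   # dash: fails with the syntax error quoted above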
You have various options:
You can force your makefile to always use bash as its shell by adding:
SHELL := /bin/bash
to it. I recommend adding a comment there as well describing why bash specifically is needed.
Or you can modify your rule to work in a portable way by making the shell invocation explicit so that time only has one command to invoke:
exec:
time /bin/sh -c 'ls > ls.txt; echo $$? > code' 2> time.txt
Put a semicolon in front of "time". As is, make is trying to parse your command as a list of dependencies.
The only suggestion that worked was to force bash in my makefile:
SHELL := /bin/bash
I checked: on my Ubuntu machine, /bin/sh is really /bin/dash whereas on the CentOS machine it is /bin/bash!
Thanks!
I have the following script, created by a self-proclaimed bash expert:
SCRIPT_LOCATION="$(readlink -f $0)"
SCRIPT_DIRECTORY="$(dirname ${SCRIPT_LOCATION})"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"
That runs nicely on my local Ubuntu 16.04. Now I wanted to use it on our RH 7.2 servers, and there I got an error message from readlink about being called with bad parameters.
Then I figured out that on Ubuntu, $0 gives "bash", whereas on RH it gives "-bash".
EDIT: script is invoked as . ourscript.sh
Questions:
Any idea why that is?
When I change my script to use a hardcoded readlink -f bash, the whole thing works. Are there "better" ways of fixing this?
Feel free to also explain what readlink -f bash is actually doing ;-)
As the script is sourced, readlink -f $0 is pointless: it will just show you the command used to run the shell you are currently using.
To explain the difference in argument zero, let's look at the bash man page:
A login shell is one whose first character of argument zero is a -, or one started with the --login option.
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
So, going by that, the RH session is a login shell (hence the leading -), while the Ubuntu session is not.
As for readlink, we can again look at the man page
-f, --canonicalize
canonicalize by following every symlink in every component of the given name recursively; all but the last component must exist
Therefore it resolves every symlink and prints the canonical absolute path.
Using readlink -f with an unqualified name that doesn't exist just appends that last argument to your current working directory, which does not actually show where the script lives.
Try putting any random string instead of bash after it and you will see the script is unaffected.
E.g.
readlink -f dafsfdsf
Returns
/home/me/testscript/dafsfdsf
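If the script will always be sourced from bash specifically, a common alternative (my suggestion, not something from the original script) is BASH_SOURCE, which still points at the file even when $0 does not:
SCRIPT_LOCATION="$(readlink -f "${BASH_SOURCE[0]}")"
SCRIPT_DIRECTORY="$(dirname "${SCRIPT_LOCATION}")"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"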
Every time I use the "at" command, I get this message:
warning: commands will be executed using /bin/sh
What is it trying to warn me about? More importantly, how do I turn the warning off?
It serves as a good warning to those of us who don't use bash as our shell, because we'll forget that a feature of our day-to-day shell isn't going to be available when this code is run at the appointed time.
i.e.
username#hostname$ at 23:00
warning: commands will be executed using /bin/sh
at> rm **/*.pyc
at> <EOT>
job 1 at 2008-10-08 23:00
The use of '**' there is perfectly valid zsh, but not /bin/sh! It's easy to make these mistakes if you're used to a different shell, and it's your responsibility to remember to do the right thing.
Does the warning have any harmful effect aside from being annoying? The man page doesn't mention any way of turning it off, so I don't think you can stop it from being emitted without rebuilding your at from source.
Now, if you want to just not see it, you can use at [time] 2>/dev/null to send it off to oblivion, but, unfortunately, the at> prompts are printed to STDERR for some reason (a bug, IMO - they really should go to STDOUT), so they're also hidden by this.
It may be possible to work up some shell plumbing which will eliminate the warning without also eating the prompts, but
my attempt at this (at [time] 2>&1 | grep -v warning) doesn't work and
even if you can find a combination that works, it won't be suitable for aliasing (since the time goes in the middle rather than at the end), so you'll need to either type it in full each time you use it or else write a wrapper script around at to handle it.
So, unless it causes actual problems, I'd say you're probably best off just ignoring the warning like the rest of us.
The source code of at.c (from Debian's at version 3.1.20-3) contains an answer:
/* POSIX.2 allows the shell specified by the user's SHELL environment
variable, the login shell from the user's password database entry,
or /bin/sh to be the command interpreter that processes the at-job.
It also alows a warning diagnostic to be printed. Because of the
possible variance, we always output the diagnostic. */
fprintf(stderr, "warning: commands will be executed using /bin/sh\n");
You can work around it with shell redirection:
% echo "echo blah" | at now+30min 2>&1 | fgrep -v 'warning: commands will be executed using /bin/sh'
job 628 at Fri Mar 8 23:25:00 2019
Or you can even create a function for everyday use (for example, for interactive shells, in ~/.zshrc or ~/.profile):
at() {
/usr/bin/at "$@" 2>&1 | fgrep -v 'warning: commands will be executed using /bin/sh'
}
After that, it will no longer bother you with that specific warning (other warnings and error messages will still reach you).
If you wish to get around that message, have at run a script that invokes the interpreter you actually want, be it ksh, bash, csh, zsh, perl, etc.
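For example, to keep the zsh ** glob from the earlier example working, make the job line invoke zsh explicitly (a sketch; assumes zsh is installed):
echo 'zsh -c "rm **/*.pyc"' | at 23:00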
Addition: see the at man page (http://www.rt.com/man/at.1.html) for more information.
at and batch read commands from standard input or a specified file which are to be executed at a later time, using /bin/sh.