How to avoid Linux command messages in Linux

How can I avoid the messages that are shown in the shell after I enter a command in Ubuntu?
For example, the dd command outputs something like "n bytes copied ...", and I want to suppress these messages.
Any help?

You can redirect output with the '>' operator:
dd ... 1>/dev/null
1 is the standard output (stdout), 2 the standard error (stderr). Note that dd actually writes its "bytes copied" statistics to stderr, so to silence them you need 2>/dev/null (or, with GNU dd, the status=none operand).
See http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html for a complete explanation of IO redirection.
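A quick sketch of the two common cases (the file names here are only placeholders):

# dd reports its statistics on stderr, so this silences them:
dd if=/dev/zero of=/tmp/test.img bs=1M count=1 2>/dev/null

# Discard both stdout and stderr of any command:
some_command >/dev/null 2>&1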

Related

Why doesn't the Linux redirection operator capture the output of my command?

Context: I have a program (go-sigma-rule-engine by Markus Kont) on my EC2 instance that runs against a logfile and produces some output on the screen.
The command used to run this program is ./gsre/go-sigma-rule-engine run --rules-dir ./gsre/rules/ --sigma-input ./logs/exampleLog.json
The program produces output of the form:
INFO[2021-09-22T21:51:06Z] MATCH at offset 0 : [{[] Example Activity Found}]
INFO[2021-09-22T21:51:06Z] All workers exited, waiting on loggers to finish
INFO[2021-09-22T21:51:06Z] Stats logger done
INFO[2021-09-22T21:51:06Z] Done
Goal: I would like to capture this output and store it in a file.
Attempted Solution: I used the redirection operator to capture the output like so:
./gsre/go-sigma-rule-engine run --rules-dir ./gsre/rules/ --sigma-input ./logs/exampleLog.json > output.txt
Problem: The output.txt file is empty and didn't capture the output of the command invoking the rule engine.
Maybe the output you want to capture goes to standard error rather than standard output; the INFO[...] prefix is typical of logging libraries, which often write to stderr by default. Try using 2> instead of > to redirect stderr.
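Using the command from the question, either of these should work (assuming the logger does write to stderr):

# Capture only stderr:
./gsre/go-sigma-rule-engine run --rules-dir ./gsre/rules/ --sigma-input ./logs/exampleLog.json 2> output.txt

# Capture stdout and stderr together:
./gsre/go-sigma-rule-engine run --rules-dir ./gsre/rules/ --sigma-input ./logs/exampleLog.json > output.txt 2>&1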

Chronologically capturing STDOUT and STDERR

This may very well fall under the KISS (keep it simple) principle, but I am still curious and wish to be educated as to why I didn't get the expected results. So, here we go...
I have a shell script to capture STDOUT and STDERR without disturbing the original file descriptors, in hopes of preserving the original order of output (see test.pl below) as a user would see it on the terminal.
Unfortunately, I am limited to using sh instead of bash (but I welcome examples), as I am calling this from another suite, and I may wish to use it from cron in the future (I know cron has the SHELL environment variable).
wrapper.sh contains:
#!/bin/sh
stdout_and_stderr=$1
shift
command=$@
out="${TMPDIR:-/tmp}/out.$$"
err="${TMPDIR:-/tmp}/err.$$"
mkfifo ${out} ${err}
trap 'rm ${out} ${err}' EXIT
> ${stdout_and_stderr}
tee -a ${stdout_and_stderr} < ${out} &
tee -a ${stdout_and_stderr} < ${err} >&2 &
${command} >${out} 2>${err}
test.pl contains:
#!/usr/bin/perl
print "1: stdout1\n";
print STDERR "2: stderr1\n";
print "3: stdout2\n";
In the scenario:
sh wrapper.sh /tmp/xxx perl test.pl
STDOUT contains:
1: stdout1
3: stdout2
STDERR contains:
2: stderr1
All good so far...
/tmp/xxx contains:
2: stderr1
1: stdout1
3: stdout2
However, I was expecting /tmp/xxx to contain:
1: stdout1
2: stderr1
3: stdout2
Can anyone explain to me why STDOUT and STDERR are not appended to /tmp/xxx in the order that I expected? My guess would be that the backgrounded tee processes are blocking the /tmp/xxx resource from one another since they have the same "destination". How would you solve this?
related: How do I write stderr to a file while using "tee" with a pipe?
It is a feature of the C runtime library (and is probably imitated by other runtime libraries) that stderr is not buffered: as soon as it is written to, stderr pushes all of its characters to the destination device.
By default stdout is buffered: line-buffered when attached to a terminal, and block-buffered otherwise (historically a 512-byte buffer, often larger today).
The buffering of both stderr and stdout can be changed with the setbuf or setvbuf calls.
From the Linux man page for stdout:
NOTES: The stream stderr is unbuffered. The stream stdout is line-buffered when it points to a terminal. Partial lines will not appear until fflush(3) or exit(3) is called, or a newline is printed. This can produce unexpected results, especially with debugging output. The buffering mode of the standard streams (or any other stream) can be changed using the setbuf(3) or setvbuf(3) call. Note that in case stdin is associated with a terminal, there may also be input buffering in the terminal driver, entirely unrelated to stdio buffering. (Indeed, normally terminal input is line buffered in the kernel.) This kernel input handling can be modified using calls like tcsetattr(3); see also stty(1), and termios(3).
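That points to a simpler fix worth trying first: if GNU coreutils is available, stdbuf can force a child's stdout to be line-buffered even when it is writing to a FIFO. A sketch (it only affects programs that use C stdio buffering, which notably may not include Perl):

# In wrapper.sh, the final line could become:
stdbuf -oL ${command} >${out} 2>${err}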
After a little bit more searching, inspired by @wallyk, I made the following modification to wrapper.sh:
#!/bin/sh
stdout_and_stderr=$1
shift
command=$@
out="${TMPDIR:-/tmp}/out.$$"
err="${TMPDIR:-/tmp}/err.$$"
mkfifo ${out} ${err}
trap 'rm ${out} ${err}' EXIT
> ${stdout_and_stderr}
tee -a ${stdout_and_stderr} < ${out} &
tee -a ${stdout_and_stderr} < ${err} >&2 &
script -q -F 2 ${command} >${out} 2>${err}
Which now produces the expected:
1: stdout1
2: stderr1
3: stdout2
The solution was to prefix the $command with script -q -F 2, which makes script quiet (-q) and forces its output to be flushed immediately (-F). Because script runs the command under a pseudo-terminal, the command's stdout stays line-buffered just as it would on a real terminal, which is likely the real reason the ordering is preserved.
I am now researching how portable this is. I think -F pipe may be Mac and FreeBSD, and -f or --flush may be what other distros use...
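For comparison, util-linux script on Linux spells these options differently; something like the following should be roughly equivalent (a sketch; note that the pseudo-terminal merges the child's stdout and stderr, so the combined, correctly ordered stream arrives on script's stdout and ${err} stays empty):

script -qfc "${command}" /dev/null >${out}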
related: How to make output of any shell command unbuffered?

How can I avoid terminal messages screwing up vim?

Sometimes I use vim in TTY1/2/etc. I am experiencing a problem with this. Messages such as the following keep flooding my terminal:
[ 1050.29303] wlp3s0: failed to set TX queue parameters for AC 2
[ 1059.29340] wlp3s0: failed to set TX queue parameters for AC 2
[ 1020.12309] wlp3s0: failed to set TX queue parameters for AC 2
[ 1029.12899] something_else: some other logging message here
[ 1292.21300] yet_another_thing: hey look a distraction
This can be quite disruptive, especially when I'm using vim to work, and sometimes it even results in me screwing up my text without realizing it. Is there any way to eliminate messages like this, at least when using vim? Using :redraw, editing the messed up lines, etc. don't seem to make the messages disappear.
Your sample of lines looks like kernel messages.
You can turn off output of dmesg messages by typing in terminal
sudo dmesg -D
This is a temporary solution and will work until the system is rebooted. For permanent disabling, edit the /etc/sysctl.conf file to set the kernel.printk parameter:
kernel.printk = 1 4 1 3
I've set the first number (the console log level) to 1 because the third number (the minimum console log level) is 1. Read more about kernel.printk and klogctl(3) (see the description of the SYSLOG_ACTION_CONSOLE_OFF command).
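Putting those pieces together (a sketch using the values above):

# Silence console kernel messages until the next reboot:
sudo dmesg -D

# Apply the log-level change immediately, without rebooting:
sudo sysctl -w kernel.printk="1 4 1 3"

# Make it permanent:
echo 'kernel.printk = 1 4 1 3' | sudo tee -a /etc/sysctl.conf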
You can also redirect output to a file in a shell script.
In bash this is done with the redirection operator >.
If what you are trying to get rid of is standard output, the redirection arrow defaults to that. If it is error output, that is file descriptor 2, so the operator would be 2>.
For example, if I were going to run a Python script in the background while using vim, I could run the script like this:
$ python3 script.py >stdoutput.txt 2>errors.txt &

Difference between "command > log.txt" and "command 1>& log.txt" in Linux command shell?

When I run the command haizea -c simulated.conf > result.txt, the program (haizea) still prints its output to the screen. But when I try haizea -c simulated.conf 1>& result.txt, the output is now on the file result.txt. I'm quite confused about this situation. What is the difference between > and 1>&, then?
What you're seeing on the terminal is the standard error of your process. Both stdout and stderr are directed to the same terminal device by default (assuming no redirection is put into effect).
The redirection >&xyz redirects both standard output and standard error to the file xyz.
I've never used it, but I would think, by extension, that N>&xyz would redirect file handle N and standard error to your file. So 1>&xyz is equivalent to >&xyz, which is also equivalent to >xyz 2>&1.
The number before the > stands for the file descriptor:
Standard Input - 0
Standard Output - 1
Standard Error - 2
The & will direct both standard output and standard error.
http://linuxdevcenter.com/pub/a/linux/lpt/13_01.html#doc2ac15b1c13
> redirects standard output alone.
>& or &> or 1>& redirect both standard output and standard error.
Your program is printing on standard error, which is not redirected in the first case.
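You can see the difference with a pair of one-liners that write one line to each stream (log.txt is just a placeholder):

$ sh -c 'echo out; echo err >&2' > log.txt        # "err" still appears on the terminal
$ sh -c 'echo out; echo err >&2' > log.txt 2>&1   # both lines end up in log.txt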

Is it possible to make stdout and stderr output be of different colors in XTerm or Konsole?

Is it even achievable?
I would like the output from a command’s stderr to be rendered in a different color than stdout (for example, in red).
I need such a modification to work with the Bash shell in the Konsole, XTerm, or GNOME Terminal terminal emulators on Linux.
Here's a solution that combines some of the good ideas already presented.
Create a function in a bash script:
color() ( set -o pipefail; "$@" 2>&1>&3 | sed $'s,.*,\e[31m&\e[m,' >&2 ) 3>&1
Use it like this:
$ color command -program -args
It will show the command's stderr in red.
Keep reading for an explanation of how it works. There are some interesting features demonstrated by this command.
color()... — Creates a bash function called color.
set -o pipefail — This is a shell option that preserves the error return code of a command whose output is piped into another command. This is done in a subshell, which is created by the parentheses, so as not to change the pipefail option in the outer shell.
"$#" — Executes the arguments to the function as a new command. "$#" is equivalent to "$1" "$2" ...
2>&1 — Redirects the stderr of the command to stdout so that it becomes sed's stdin.
>&3 — Shorthand for 1>&3, this redirects stdout to a new temporary file descriptor 3. 3 gets routed back into stdout later.
sed ... — Because of the redirects above, sed's stdin is the stderr of the executed command. Its function is to surround each line with color codes.
$'...' — A bash construct that causes bash to interpret backslash-escaped characters.
.* — Matches the entire line.
\e[31m — The ANSI escape sequence that causes the following characters to be red.
& — The sed replacement token that expands to the entire matched string (the entire line in this case).
\e[m — The ANSI escape sequence that resets the color.
>&2 — Shorthand for 1>&2; this redirects sed's stdout to stderr.
3>&1 — Redirects the temporary file descriptor 3 back into stdout.
Here's an extension of the same concept that also makes STDOUT green:
function stdred() (
set -o pipefail;
(
"$#" 2>&1>&3 |
sed $'s,.*,\e[31m&\e[m,' >&2
) 3>&1 |
sed $'s,.*,\e[32m&\e[m,'
)
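A quick way to try it, with a throwaway command that writes one line to each stream:

$ stdred sh -c 'echo ok; echo bad >&2'

"ok" should come out green and "bad" red.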
You can also check out stderred: https://github.com/sickill/stderred
I can't see any way for the terminal emulator to do this.
The interface between the terminal emulator and the shell/app is a pseudo-tty, where the terminal emulator is on the master side and the shell/app on the other. The shell/app has both stdout and stderr connected to the same pty, so when the terminal emulator reads the shell/app's output from the pty, it can no longer tell which parts were written to stdout and which to stderr.
You will have to use one of the solutions that intercept the data between the application and the slave pty and insert escape codes to control the terminal output colo(u)r.
Here is a little Awk script that will print everything you pass it in red.
#!/usr/bin/awk -f
{ printf("%c[%dm%s%c[0m\n", 0x1B, 31, $0, 0x1B); fflush() }
It simply wraps each line it receives on stdin in the escape codes needed to display it in red, followed by an escape code that resets the terminal color.
(If you need a different color, change the second argument in the printf call above from 31 to the number corresponding to the desired color.)
Save it to colr.awk, do a chmod a+x, and use it like so:
$ my_program | ./colr.awk
It has the drawback that lines may not be displayed in order, because stderr goes directly to the console, while stdout is piped through an additional process.
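If it is stderr rather than stdout that you want colored, you can swap the streams before the pipe (a sketch, assuming an interactive terminal at /dev/tty):

$ my_program 2>&1 >/dev/tty | ./colr.awk

The 2>&1 sends stderr into the pipe, and >/dev/tty then points stdout back at the terminal, so only stderr passes through the colorizer.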
A simple solution to color stdout in red is to pipe it through grep:
program | grep .
This relies on grep's match highlighting (--color), which many distributions enable by default, and should not require installing anything, as grep is already installed everywhere.
Taken from Dennis’s comment on superuser.com.
I think you should use the standard escape sequences on stderr. Have a look at this.
Hilite will do this. It's a lightweight solution, but you have to invoke it for each command, e.g. hilite gcc myprog.c. A more radical approach is built into my experimental shell Gush, which shows stderr from all commands in red and stdout in black. Either way is very useful for software builds, where you have lots of output and a few error messages that could easily be missed if not highlighted.
