Output is not being captured in shell script - Linux

I am trying to capture the output of a command in a variable, but I am not able to do so. I tried the scenarios below:
verify=$(su - omc -c "ldapsearch -x -n -D "uid=rac3gp,ou=people,ou=accounts,dc=netact,dc=net" -w hee_120" 2> /dev/null)
When I do echo $verify, it displays blank output
su - omc -c "ldapsearch -x -n -D "uid=rac3gp,ou=people,ou=accounts,dc=netact,dc=net" -w hee_120" >>dd.txt
The output is not captured in a file either. The expected output is
ldap_bind: Invalid credentials (49)
which should be displayed after successful execution.

This sounds like an error to me:
ldap_bind: Invalid credentials (49)
So it is probably being printed to stderr. If you change the 2> /dev/null to 2>&1 in your first attempt to store it in a variable, that should work.

As you can see, that's an error message. Error messages are typically written to stderr.
So add a redirection before capturing: 2>&1 (and don't send it to /dev/null).
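A minimal sketch of the corrected capture (note the single quotes around the -c argument; the nested double quotes in the original only happen to work because the DN contains no spaces):
verify=$(su - omc -c 'ldapsearch -x -n -D "uid=rac3gp,ou=people,ou=accounts,dc=netact,dc=net" -w hee_120' 2>&1)
echo "$verify"    # should now print: ldap_bind: Invalid credentials (49)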

Related

Redirect both standard output and standard error to different file in the same command [duplicate]

I know this much:
$ command 2>> error
$ command 1>> output
Is there any way I can output the stderr to the error file and output stdout to the output file in the same line of bash?
Just add them in one line: command 2>> error 1>> output
However, note that >> appends if the file already has data, whereas > overwrites any existing data.
So, use command 2> error 1> output if you do not want to append.
Just for completeness, you can write 1> as just >, since the default file descriptor is stdout; 1> and > are the same thing.
So, command 2> error 1> output becomes command 2> error > output
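A quick way to see the split, using a stand-in command that writes to both streams:
{ echo "to stdout"; echo "to stderr" >&2; } 2>> error 1>> output
cat output    # to stdout
cat error     # to stderr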
Try this:
your_command 2>stderr.log 1>stdout.log
More information
The numbers 0 through 9 are file descriptors in bash.
0 stands for standard input, 1 stands for standard output, and 2 stands for standard error. 3 through 9 are spare for any other temporary usage.
Any file descriptor can be redirected to a file or to another file descriptor using the operator >. You can instead use the operator >> to append to a file rather than truncating it.
Usage:
file_descriptor>filename
file_descriptor>&file_descriptor
(Note there is no space between the descriptor number and >; 2> file works, but 2 > file passes 2 as an argument.)
Please refer to Advanced Bash-Scripting Guide: Chapter 20. I/O Redirection.
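As a small sketch of a spare descriptor (3 through 9) in use:
exec 3> extra.log             # open fd 3 for writing (file name is illustrative)
echo "logged via fd 3" >&3    # anything sent to fd 3 lands in extra.log
exec 3>&-                     # close fd 3 again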
Like that:
$ command >>output 2>>error
Or if you want to mix the outputs (stdout & stderr) into a single file, you may want to use:
command > merged-output.txt 2>&1
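Note that the order of the redirections matters; a brief sketch:
command > merged-output.txt 2>&1    # stdout goes to the file, then stderr is duplicated onto it: both captured
command 2>&1 > merged-output.txt    # stderr is duplicated onto the terminal first: stderr still prints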
Multiple commands' output can be redirected. This works on the command line or, most usefully, in a bash script. The -s option runs the target user's shell, which reads the commands from the here-document; the password prompt, if required, still goes to the terminal.
Here-document commands' stdout/stderr are sent to separate files and nothing to the display.
sudo -s -u username <<'EOF' 2>err 1>out
ls; pwd;
EOF
Here-document commands' stdout/stderr are sent to a single file and to the display.
sudo -s -u username <<'EOF' 2>&1 | tee out
ls; pwd;
EOF
Here-document commands' stderr is sent to a file, and stdout to a separate file and the display.
sudo -s -u username <<'EOF' 2>err | tee out
ls; pwd;
EOF
Depending on who you are (whoami) and the target username, a password may or may not be required.

How to redirect standard error to a file

In linux if I want to redirect standard error to a file, I can do this:
$ ls -l /bin/usr 2> ls-error.txt
But when I try:
$ foo=
$ echo ${foo:?"parameter is empty"} 2> ls-error.txt
The result in terminal is:
bash: foo: parameter is empty
It doesn't work!
Can somebody explain why?
I thought ${parameter:?word} would send the value of word to standard error.
echo ${foo:?"parameter is empty"} 2>ls-error.txt redirects the stderr of echo, but the error message is produced by the shell itself while expanding ${foo:?"parameter is empty"}, before echo ever runs.
You can get the result you want by redirecting a block (or a subshell) instead so that the shell's stderr is included in the redirection:
{ echo "${foo:?"parameter is empty"}"; } 2>ls-error.txt
Try this command:
(echo ${foo:?"parameter is empty"}) 2> ls-error.txt
In case you would like to redirect both standard and error output, AND still see these messages when executing your command, you can use the tee command:
echo ${foo:?"parameter is empty"} |& tee -a ls-error.txt
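|& is bash shorthand for 2>&1 |, so the line above is equivalent to the following; it catches the message because each pipeline segment runs in a subshell, and the expansion error is raised inside that subshell, whose stderr is already redirected:
echo ${foo:?"parameter is empty"} 2>&1 | tee -a ls-error.txt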

"stdin: is not a tty" from cronjob

I'm getting the following mail every time a specific cronjob executes. The called script runs fine when I call it directly, and even from cron. So the message I get is not an actual error, since the script does exactly what it is supposed to do.
Here is the cron.d entry:
* * * * * root /bin/bash -l -c "/opt/get.sh > /tmp/file"
and the get.sh script itself:
#!/bin/sh
#group and url
groups="foo"
url="https://somehost.test/get.php?groups=${groups}"
# encryption
pass='bar'
method='aes-256-xts'
pass=$(echo -n $pass | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
encrypted=$(wget -qO- ${url})
decoded=$(echo -n $encrypted | awk -F '#' '{print $1}')
iv=$(echo $encrypted | awk -F '#' '{print $2}' |base64 --decode | xxd -ps | sed 's/[[:xdigit:]]\{2\}/&/g')
# base64 decode input and save to file
output=$(echo -n $decoded | base64 --decode | openssl enc -${method} -d -nosalt -nopad -K ${pass} -iv ${iv})
if [ ! -z "${output}" ]; then
echo "${output}"
else
echo "Error while getting information"
fi
When I'm not using the bash -l syntax the script hangs during the wget process. So my guess would be that it has something to do with wget and putting the output to stdout. But I have no idea how to fix it.
You actually have two questions here.
Why does it print stdin: is not a tty?
This warning message is printed by bash -l. The -l (--login) option asks bash to start a login shell, e.g. the one which is usually started when you enter your password. In this case bash expects its stdin to be a real terminal (e.g. the isatty(0) call should return 1), and that is not true when it is run by cron, hence this warning.
Another easy way to reproduce this warning, and the very common one, is to run this command via ssh:
$ ssh user@example.com 'bash -l -c "echo test"'
Password:
stdin: is not a tty
test
It happens because ssh does not allocate a terminal when called with a command as a parameter (one should use -t option for ssh to force the terminal allocation in this case).
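For example, forcing a pseudo-terminal silences the warning:
$ ssh -t user@example.com 'bash -l -c "echo test"'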
Why did it not work without -l?
As correctly stated by @Cyrus in the comments, the list of files which bash loads on start depends on the type of the session. E.g. for login shells it will load /etc/profile, ~/.bash_profile, ~/.bash_login, and ~/.profile (see INVOCATION in the bash(1) manual), while for non-login shells it will only load ~/.bashrc. It seems you defined your http_proxy variable only in one of the files loaded for login shells, but not in ~/.bashrc. You moved it to ~/.wgetrc, which is correct, but you could also have defined it in ~/.bashrc and it would have worked.
In your .profile, change
mesg n
to
if tty -s; then
mesg n
fi
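An equivalent guard, sketched with bash's built-in test for a terminal on stdin:
if [ -t 0 ]; then
mesg n    # only run terminal-dependent commands when stdin is a tty
fi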
I ended up putting the proxy configuration in the wgetrc. There is now no need to execute the script on a login shell anymore.
This is not a real answer to the actual problem, but it solved mine.
If you run into this problem, check whether all the environment variables are set as you expect. Thanks to Cyrus for pointing me in the right direction.

how to redirect result of linux time command to some file

I'm running the following command (on Ubuntu)
time wget 'http://localhost:8080/upLoading.jsp' --timeout=0
and get a result in the command line
real 0m0.042s
user 0m0.000s
sys 0m0.000s
I've tried the following:
time -a o.txt wget 'http://localhost:8080/upLoading.jsp' --timeout=0
and get the following error
-a: command not found
I want to get the result to be redirected to some file. How can I do that?
-a is only understood by the time binary (/usr/bin/time). When just typing time you're using the bash built-in version, which does not process the -a option and hence tries to run it as a command.
/usr/bin/time -o foo.txt -a wget 'http://localhost:8080/upLoading.jsp' --timeout=0
Checking man time, I guess what you need is
time -o o.txt -a ...
(Note that you need both -a and -o.)
[EDIT:] If you are in bash, you must also take care to write
/usr/bin/time
(check the manpage for the explanation).
You can direct the stdout output of any command to a file using the > character.
To append the output to a file, use >>.
Note that unless done explicitly, output to stderr will still go to the console. To direct both stderr and stdout to the same output stream, use
command > outfile.txt 2>&1 (with bash)
or
command >& outfile.txt (with t/csh)
If you are working with bash All about redirection will give you more details and control about redirection.
\time 2> time.out.text command
\time -o time.out.text command
This answer is based on earlier comments. It is tested and works. The advantage of \ over /usr/bin/ is that you don't have to know the install directory of time.
These answers also capture only the time, not the command's other output.
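If you want the timing and the command's own streams in separate files, one sketch (file names are illustrative) combines /usr/bin/time's -o with ordinary redirections:
/usr/bin/time -o timings.txt wget 'http://localhost:8080/upLoading.jsp' --timeout=0 > wget-output.txt 2> wget-errors.txt
cat timings.txt    # only the timing summary lands here; wget's own chatter is in wget-errors.txt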
Exactly. GNU time writes its output to stderr, and if you want to redirect it to a file, you can use the --output=PATH parameter of time.
See this http://unixhelp.ed.ac.uk/CGI/man-cgi?time
And if you want to redirect stdout to some file, you can use > filename to create the file and fill it, or >> filename to append to an existing file, after the initial command.
And if you want to send output to stderr yourself, redirect it with >&2, e.g. echo "your_stderr_output" >&2.
Try to use /usr/bin/time since many shells have their own implementation of time which may or may not support the same flags as /usr/bin/time
so change your command to
/usr/bin/time -a -o foo.txt wget ....
How about your LANG?
$ time -ao o.txt echo 1
bash: -ao: command not found (printed in Japanese because LANG=ja_JP.utf8)
real 0m0.001s
user 0m0.000s
sys 0m0.000s
$ export|grep LANG
declare -x LANG="ja_JP.utf8"
$ LANG=C time -ao o.txt echo 1
1
$ cat o.txt
0.00user 0.00system 0:00.00elapsed 0%CPU (0avgtext+0avgdata 1984maxresident)k
0inputs+0outputs (0major+158minor)pagefaults 0swaps
(Note: prefixing the command with a variable assignment like LANG=C means time is no longer the first word, so bash does not treat it as its reserved word and runs the external /usr/bin/time instead; that, rather than the locale itself, is likely why this works.)
Try:
command 2> log.txt
and the real-time output from "command" can be seen in another console window with:
tail -f log.txt
This worked for me:
( time command ) |& tee output.txt
https://unix.stackexchange.com/questions/115980/how-can-i-redirect-time-output-and-command-output-to-the-same-pipe
You can do that with > if you want to redirect the output.
For example:
time wget 'http://localhost:8080/upLoading.jsp' --timeout=0 > output.txt 2>&1
2>&1 says to redirect STDERR to the same file.
This command will erase any existing output.txt file and create a new one with your output. If you use >>, it will append the output at the end of any existing output.txt file; if the file doesn't exist, it will create it.

How do I pipe or redirect the output of curl -v?

For some reason the output always gets printed to the terminal, regardless of whether I redirect it via 2> or > or |. Is there a way to get around this? Why is this happening?
Add the -s (silent) option to remove the progress meter, then redirect stderr to stdout to get the verbose output on the same fd as the response body:
curl -vs google.com 2>&1 | less
Your URL probably has ampersands in it. I had this problem too, and I realized that my URL was full of ampersands (from CGI variables being passed), so everything was getting sent to the background in a weird way and thus not redirecting properly. If you put quotes around the URL, it will fix it.
The answer above didn't work for me; what eventually did was this syntax:
curl https://${URL} &> /dev/stdout | tee -a ${LOG}
tee puts the output on the screen, but also appends it to my log.
If you need the output in a file you can use a redirect:
curl https://vi.stackexchange.com/ -vs >curl-output.txt 2>&1
Please be sure not to flip >curl-output.txt and 2>&1, which would not work because of the order in which bash processes redirections.
Just my 2 cents.
The below command should do the trick, as answered earlier
curl -vs google.com 2>&1
However if need to get the output to a file,
curl -vs google.com > out.txt 2>&1
should work.
I found the same thing: curl by itself would print to STDOUT, but could not be piped into another program.
At first, I thought I had solved it by using xargs to echo the output first:
curl -s ... <url> | xargs -0 echo | ...
But then, as pointed out in the comments, it also works without the xargs part, so -s (silent mode) is the key to preventing extraneous progress output to STDOUT:
curl -s ... <url> | perl -ne 'print $1 if /<sometag>([^<]+)/'
The above example grabs the content of a simple <sometag> element (one containing no embedded tags) from the XML output of the curl statement.
The following worked for me:
Put your curl statement in a script named abc.sh
Now run:
sh abc.sh 1>stdout_output 2>stderr_output
You will get your curl's results in stdout_output and the progress info in stderr_output.
This simple example shows how to capture curl output, and use it in a bash script
test.sh
#!/bin/bash
function main
{
\curl -vs 'http://google.com' 2>&1
# note: add -o /tmp/ignore.png if you want to ignore binary output by saving it to a file.
}
# capture output of curl to a variable
OUT=$(main)
# search output for something using grep.
echo
echo "$OUT" | grep 302
echo
echo "$OUT" | grep title
Solution = curl -vs google.com 2>&1 | less
BUT, if you want to redirect the output to a file and the output still appears on the screen, it is because the URL response contains a newline character \n, which messes up your shell.
To avoid this, put everything in a variable:
result=$(curl -v . . . . )
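Quoting the variable when printing it preserves those newlines; a small sketch with a placeholder URL:
result=$(curl -sv 'http://example.com' 2>&1)
echo "$result" > curl-output.txt    # the quotes keep the multi-line response intact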
