Python 3.5: subprocess 'sudo shutdown -h' returns empty response, other commands work fine

I am using the following code to run a Linux command with Python 3.5:
process = subprocess.run(['sudo shutdown -h'], check=True, stdout=subprocess.PIPE, shell=True)
output = process.stdout.decode('utf-8')
print("Response")
print(output)
It returns an empty string as the response:
Shutdown scheduled for Tue 2019-09-10 22:32:34 CEST, use 'shutdown -c' to cancel.
Response
But when I replace the command with something like
ls -l
or
sudo su
it works: it returns a string containing, e.g., the list of files in the directory, as it should.
Edit
Apparently,
Commands may send whatever they want to stdout or stderr, and this is completely unrelated to the exit status the command returns. While stderr is meant for diagnostic messages, it is up to the program to decide where it sends its output.
So one solution is to redirect stderr to stdout by adding stderr=subprocess.STDOUT:
subprocess.run(['sudo shutdown -h'], check=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
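You can also confirm from the shell that the schedule message really goes to stderr (a minimal sketch, assuming a systemd-based system like the one in the question):
sudo shutdown -h +1 2>/dev/null   # the schedule notice disappears, so it was on stderr
sudo shutdown -c                  # cancel the scheduled shutdown again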

Related

remote wget command with & symbol doesn't behave as expected

Here are some test results:
I run commands on my localhost and try to execute them on the remote host 11.160.48.88.
Command 1:
ssh 11.160.48.88 "wget https://raw.githubusercontent.com/mirror/wget/master/README -O wgetReadme"
expect:
The file is downloaded and saved as wgetReadme.
result:
works as expected
Command 2:
ssh 11.160.48.88 "wget https://raw.githubusercontent.com/mirror/wget/master/README -O wgetReadme&"
I simply add & at the end of the command, because I want it to run in the background.
result:
The file wgetReadme is empty on the remote server, and I don't know why.
Command 3:
To test whether the command from Command 2 can run on the remote server, I run it directly on 11.160.48.88:
wget https://raw.githubusercontent.com/mirror/wget/master/README -O wgetReadme &
result:
Some wget transfer messages are printed to stdout, and the file is downloaded to wgetReadme. Works correctly.
Command 4:
I want to figure out whether it is the SIGHUP signal that kills the subprocess, and I found two pieces of evidence proving it is not.
I found this question, and I try to run this on the remote server 11.160.48.88:
$ shopt | grep hup
huponexit off
So the subprocess will not receive SIGHUP when ssh exits
I run another command to prove it:
ssh 11.160.48.88 "wget https://raw.githubusercontent.com/mirror/wget/master/README -O - 2>&1 > wgetReadme&"
result:
The file is downloaded to the target file correctly.
My question is: why does Command 2 not work as expected?
Backgrounded jobs over ssh can cause the shell to hang on logout because of a race condition between the exiting shell and the still-running job. You can solve the problem by redirecting all three I/O streams, such as > /dev/null 2>&1 &. The nohup command is also useful in your case: it is a POSIX command that makes a process ignore the HUP (hangup) signal, which is, by convention, the way a terminal warns dependent processes of logout. So I would change your command as follows:
ssh -f 11.160.48.88 "sh -c 'nohup wget https://raw.githubusercontent.com/mirror/wget/master/README -O - > wgetReadme 2>&1 &'"
You can read more at https://en.wikipedia.org/wiki/Nohup
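The general pattern, as a sketch with placeholder names (user, host, some_command and the log path are not from the original question), is to detach the job and leave ssh nothing to wait for:
# redirect stdin, stdout and stderr so ssh can return immediately
ssh user@host "nohup some_command > /path/to/output.log 2>&1 < /dev/null &"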
& is a bash special character which makes a process run in the background. ssh will then no longer capture the output of the command when you run it remotely.
You should escape it with \ to be able to run your command.
In your example:
wget https://raw.githubusercontent.com/mirror/wget/master/README -O wgetReadme\&
regards

Shell script doesn't exit if output redirected to logger

I was looking for a way to route the output of my shell scripts to syslog, and found this article, which suggests putting the following line at the top of the script:
exec 1> >(logger -s -t $(basename $0)) 2>&1
I've tried this with the following simple script:
#!/bin/bash
exec 1> >(logger -s -t $(basename $0)) 2>&1
echo "testing"
exit 0
When I run this script from the shell, I do indeed get the message in the syslog, but the script doesn't seem to return--in order to continue interacting with the shell, I need to hit Enter or send a SIGINT signal. What's going on here? FWIW, I'm mostly using this to log the results of cron jobs, so in the wild I probably don't need it to work properly in an interactive shell session, but I'm nervous using something I don't really understand in production. I am mostly worried about spawning a bunch of processes that don't terminate cleanly.
I've tested this on Ubuntu 15.10, Ubuntu 16.04, and OSX, all with the same result.
Cutting a long story short: the shell script does exit and so does the logger, so there isn't actually a problem, but the output from the logger led to confusion.
Converting comments into an answer.
Superficially, given the symptoms you describe, what's going on is that Bash isn't exiting until all its child processes exit. You could try exec >/dev/null 2>&1 before exit 0 to see if that stops the logger — basically, the redirection closes its inputs, so it should terminate, allowing the script to exit.
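For concreteness, a sketch of that experiment applied to the test script above (same logger invocation; only the extra exec line is new):
#!/bin/bash
exec 1> >(logger -s -t $(basename $0)) 2>&1
echo "testing"
# point stdout/stderr elsewhere so the logger pipe gets EOF and can exit
exec >/dev/null 2>&1
exit 0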
However, when I try your script (bash logtest.sh) on macOS Sierra 10.12.2 (though I'd not expect it to change in earlier versions), the command exits promptly and produces a log message on the terminal like this (I use Osiris JL: as my prompt):
Osiris JL: bash logtest.sh
Osiris JL: Dec 26 12:23:50 logtest.sh[6623] <Notice>: testing
Osiris JL: ps
PID TTY TIME CMD
71792 ttys000 0:00.25 -bash
534 ttys002 0:00.57 -bash
543 ttys003 0:01.71 -bash
558 ttys004 0:00.44 -bash
Osiris JL:
I hit return on the blank line and got the prompt before the ps command.
Note that the message from logger arrived after the prompt.
When I ran bash logtest.sh (where logtest.sh contained your script), the only key I hit was the return to enter the command (which the shell read before running the command). I then got a prompt, the output from logger, and a blank line with the terminal waiting for input. That's normal. The logger was not still running — I could check that in other windows.
Try typing ls instead of just hitting return. The shell is waiting for input. It wrote its prompt, but the logger output confused the on-screen layout. For me, I got:
Osiris JL: bash logtest.sh
Osiris JL: Dec 26 13:28:28 logtest.sh[7133] <Notice>: testing
ls
README.md ix37.sql mq13.c sh11.o
Safe lib mq13.dSYM so-4018-8770
Untracked ll89 oddascevendesc so-4018-8770.c
ci11 ll89.cpp oddascevendesc.c so-4018-8770.dSYM
ci11.c ll89.dSYM oddascevendesc.dSYM sops
ci11.dSYM ll97 rav73 src
data ll97.c rav73.c tf17
doc ll97.dSYM rav73.dSYM tf17.cpp
es.se-36764 logtest.sh rd11 tf17.dSYM
etc mac-clock-get-time rd11.c tf19
fa37.sh mac-clock-get-time.c rd11.dSYM tf19.c
fileswap.sh mac-clock-get-time.dSYM rn53 tf19.dSYM
gm11 makefile rn53.c x-paste.c
gm11.c matrot13 rn53.dSYM xc19
gm11.dSYM matrot13.c sh11 xc19.c
inc matrot13.dSYM sh11.c xc19.dSYM
infile mq13 sh11.dSYM
Osiris JL:

Bash trigger wget command and don't wait for response, continue to the next command

Is there a way I can trigger a wget command via a bash script and continue to the next command in the script without waiting for a response from wget? I am executing a command which takes a lot of time, and I don't want the script to hold for a response, nor to re-trigger wget after the timeout limit is reached.
You should use the --background option: wget goes to the background and saves its output to a log file.
--background
Go to background immediately after startup. If no output file is specified via the -o option, output is redirected to wget-log.
Example:
$ wget http://cdimage.ubuntu.com/ubuntu-server/daily/current/wily-server-ppc64el.iso --background
Continuing in background, pid 79783.
Output will be written to ‘wget-log’.
$ cat wget-log
--2015-05-12 11:21:35-- http://cdimage.ubuntu.com/ubuntu-server/daily/current/wily-server-ppc64el.iso
Resolving cdimage.ubuntu.com (cdimage.ubuntu.com)... 91.189.92.164, 2001:67c:1360:8c01::1f
Connecting to cdimage.ubuntu.com (cdimage.ubuntu.com)|91.189.92.164|:80... connected.
....
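In a script this means control returns immediately (a minimal sketch; the URL is just a placeholder):
#!/bin/bash
wget --background http://example.com/big-file.iso
echo "wget keeps downloading in the background; the script continues here"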

Redirect output of Whatsapp bash script to file interactively for automation purpose

yowsup-cli is a tool that allows you to send messages to WhatsApp users, once authenticated.
With the command
yowsup-cli -a --interactive <PHONE_NUMBER_HERE> --wait --autoack --keepalive --config yowsup-master/src/yowsup-cli.config
I can interactively send or receive messages.
Once the command is executed, you get a prompt like
MY_PHONE_NUMBER#s.whatsapp.net [27-12-2014 18:33]:THIS IS MY MESSAGE,TYPED ON MY PHONE. OPEN DOOR GARAGE
Enter Message or command: (/available, /lastseen, /unavailable)
I'm a total beginner, but I would like to redirect this content that gets printed to the terminal into a file, to further analyze it or to write a script that searches the file for keywords such as "OPEN GARAGE DOOR", so I could automate something.
This file obviously has to stay in sync with the program's output, but I don't know how to do that.
yowsup-cli -a --interactive <PHONE_NUMBER_HERE> --wait --autoack --keepalive --config yowsup-master/src/yowsup-cli.config > /path/to/my_file
doesn't work
Running Ubuntu 12.04.
I know yowsup is a Python library, but I don't know that language. I'm beginning to learn C, and I would like to do this in Bash or, if that's not possible, in C.
Thanks
Pipe the output into tee instead of redirecting it into a file:
yowsup-cli -a --interactive <PHONE_NUMBER_HERE> --wait --autoack --keepalive --config yowsup-master/src/yowsup-cli.config 2>&1 | tee -a /path/to/my_file
The reason: with redirection you don't see the command's output, which makes interacting with it hard.
Piping into the tee command will echo all output to the terminal and append it to the given file.
Interestingly, with your command line (using redirection) you can still type blindly, or even according to the yowsup-cli output you read in another terminal with:
tail -f /path/to/my_file
tail with the -f option prints the last 10 lines of the file as well as any new output from the yowsup-cli command.
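From there, the automation the question describes could be a small watcher (a sketch; the file path and the triggered action are placeholders):
# react whenever the keyword appears in the synced log file
tail -f /path/to/my_file | while read -r line; do
    case "$line" in
        *"OPEN GARAGE DOOR"*) echo "trigger the garage action here" ;;
    esac
done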

Where does the output go when running as a background process?

My process outputs some log information to the console window. When I run it as a background process, where can I find the output logs?
It depends on the process and how you started it. If it writes to stdout (which is probable, given that the output normally goes to the terminal), you can redirect the output to a file with
command > logfile &
If you also want to log error messages from stderr, do
command > logfile 2> errorlogfile &
or
command > logfile 2>&1 &
to get everything in one file.
If it's a systemd service, you can run journalctl -u <service-name>
You can jump to the latest logs by pressing Shift+G in the pager.
Make sure systemd is installed: apt-get install systemd
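For example (nginx.service is just a placeholder name; -e makes journalctl open the pager at the end of the log, showing the latest entries):
journalctl -u nginx.service -e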
