How do I execute the following command using subprocess? - python-3.x

I want to execute the following command using subprocess:
sudo sh -c "echo nameserver 1.1.1.1 > /etc/resolv.conf"
In the shell it works fine.
This is what I did:
update_resolv_conf_cmd = (["sudo", "sh", "-c", '"echo nameserver 1.1.1.1 > /etc/resolv.conf"'])
subprocess.Popen(update_resolv_conf_cmd, stdout=subprocess.PIPE, shell=True)
However, this does not work.

This should work. When you pass a list together with shell=True, only the first element ("sudo") is actually run by the shell, and the extra double quotes around the last element would become part of the argument anyway. Pass the whole command as a single string instead:
import subprocess
subprocess.call("sudo sh -c 'echo nameserver 1.1.1.1 > /etc/resolv.conf'", shell=True)

Syntax error: redirection unexpected

I do this in a script:
read direc <<< $(basename `pwd`)
and I get:
Syntax error: redirection unexpected
on an Ubuntu machine:
/bin/bash --version
GNU bash, version 4.0.33(1)-release (x86_64-pc-linux-gnu)
while I do not get this error on another SUSE machine:
/bin/bash --version
GNU bash, version 3.2.39(1)-release (x86_64-suse-linux-gnu)
Copyright (C) 2007 Free Software Foundation, Inc.
Why the error?
Does your script reference /bin/bash or /bin/sh in its shebang line? The default system shell in Ubuntu is dash, not bash, so if you have #!/bin/sh then your script will be using a different shell than you expect. Dash does not have the <<< redirection operator (the here-string).
Make sure the shebang line is:
#!/bin/bash
or
#!/usr/bin/env bash
And run the script with:
$ ./script.sh
Do not run it with an explicit sh as that will ignore the shebang:
$ sh ./script.sh # Don't do this!
If you're using the following to run your script:
sudo sh ./script.sh
Then you'll want to use the following instead:
sudo bash ./script.sh
The reason for this is that Bash is not the default shell on Ubuntu, so if you use sh it will just use the default shell, which is actually Dash. This happens regardless of whether you have #!/bin/bash at the top of your script. As a result, you need to explicitly specify bash as shown above, and your script should then run as expected.
Dash doesn't support the same redirections as Bash (such as the <<< here-string).
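If you are not sure what /bin/sh actually is on a given machine, a quick check is to resolve the symlink (the /bin/dash result shown here is what you would typically see on Ubuntu; other distributions may differ):
$ readlink -f /bin/sh
/bin/dash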
Docker:
I was getting this problem in my Dockerfile because I had:
RUN bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
However, according to this issue, it can be solved by using the exec form of RUN:
The exec form makes it possible to avoid shell string munging, and
to RUN commands using a base image that does not contain /bin/sh.
Note
To use a different shell, other than /bin/sh, use the exec form
passing in the desired shell. For example,
RUN ["/bin/bash", "-c", "echo hello"]
Solution:
RUN ["/bin/bash", "-c", "bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)"]
Notice the quotes around each parameter.
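If several RUN lines need bash-only syntax such as < <(...), another option is to switch the default shell once with the SHELL instruction (a sketch; SHELL needs Docker 1.12+ and a base image that actually contains /bin/bash):
SHELL ["/bin/bash", "-c"]
RUN bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)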
You can get the output of that command, put it in a variable, and then use a heredoc instead of the <<< here-string. For example:
nc -l -p 80 <<< "tested like a charm";
can be written like:
nc -l -p 80 <<EOF
tested like a charm
EOF
and like this (this is what you want):
text="tested like a charm"
nc -l -p 80 <<EOF
$text
EOF
A practical example with BusyBox in a Docker container:
kasra@ubuntu:~$ docker run --rm -it busybox
/ # nc -l -p 80 <<< "tested like a charm";
sh: syntax error: unexpected redirection
/ # nc -l -p 80 <<EOL
> tested like a charm
> EOL
^Cpunt! => the socket was listening with no errors; the "punt!" after ^C is nc's reaction to the Ctrl+C signal.
/ # text="tested like a charm"
/ # nc -l -p 80 <<EOF
> $text
> EOF
^Cpunt!
Do it the simpler way:
direc=$(basename `pwd`)
Or use plain shell parameter expansion:
$ direc=${PWD##*/}
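For example (a quick check of the expansion; the directory path is made up):
$ cd /tmp/some/dir
$ echo "${PWD##*/}"
dir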
Another possible cause of the error: a cron job that updates a Subversion working copy and then tries to run a versioned script that was left in a conflicted state after the update.
On my machine, if I run a script directly, the default is bash.
If I run it with sudo, the default is sh.
That’s why I was hitting this problem when I used sudo.
In my case the error was because I had put ">>" twice:
mongodump --db=$DB_NAME --collection=$col --out=$BACKUP_LOCATION/$DB_NAME-$BACKUP_DATE >> >> $LOG_PATH
I just corrected it to:
mongodump --db=$DB_NAME --collection=$col --out=$BACKUP_LOCATION/$DB_NAME-$BACKUP_DATE >> $LOG_PATH
Before running the script, check the first line of the shell script for the interpreter.
E.g.:
If the script starts with #!/bin/bash, run it with:
bash script_name.sh
If the script starts with #!/bin/sh, run it with:
sh script_name.sh
./sample.sh will detect the interpreter from the first line of the script and run with it.
Different Linux distributions have different default shells.
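A quick way to check is to print the script's first line before deciding how to run it (assuming here that sample.sh starts with a bash shebang):
$ head -n1 sample.sh
#!/bin/bash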

VSTS SSH tasks uses STDERR

I am configuring a release step in VSTS to update a database, using the SSH task (https://learn.microsoft.com/fr-fr/vsts/build-release/tasks/deploy/ssh) to run our script that updates MongoDB.
The script works just fine, but somehow all of its output goes to STDERR.
Run: Inline Script
Arguments:
cd /home/ubuntu/Project/root/Deploy
dos2unix sync_mongo.sh
sh ./mongosync.sh
Here is beginning for step log:
2018-01-18T17:39:55.7603461Z dos2unix sync_mongo.sh
2018-01-18T17:39:55.7603748Z sh ./mongosync.sh
2018-01-18T17:39:55.7604695Z Trying to setup SSH connection to ********#****:22
2018-01-18T17:39:57.5259303Z Successfully connected.
2018-01-18T17:39:59.7115141Z tr -d '\015' <"./sshscript_1516297195734" > "./sshscript_1516297195734._unix"
2018-01-18T17:40:00.0197880Z chmod +x "./sshscript_1516297195734._unix"
2018-01-18T17:40:00.2617249Z "./sshscript_1516297195734._unix"
2018-01-18T17:40:00.5124617Z ##[error]dos2unix:
2018-01-18T17:40:00.5124929Z
2018-01-18T17:40:00.5125475Z ##[error]converting file sync_mongo.sh to Unix format ...
It turns out that many Unix commands use stderr to show progress.
Solution was to ignore stderr output:
dos2unix sync_mongo.sh 2> /dev/null
sh ./mongosync.sh 2> /dev/null
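If you would rather keep that progress output in the release log instead of throwing it away, a variation is to merge stderr into stdout so the SSH task no longer reports it as an error (a sketch of the same two commands):
dos2unix sync_mongo.sh 2>&1
sh ./mongosync.sh 2>&1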

Run cmd in background in shell script

How do I run a command in the background and then move on to the next line in a shell script?
test.sh:
bash --rcfile <(echo '. ~/.bashrc; ./cloud_sql_proxy -instances=connectionstring && exit')
mysql -u root -p --host 127.0.0.1
The first command gives this output:
2018/01/05 01:49:28 Listening on 127.0.0.1:3306 for connectionstring
2018/01/05 01:49:28 Ready for new connections
so the second command is never executed.
Basically, I want to run the first command in the background so that the second command gets executed.
I found a way to fix the issue:
cmd="./cloud_sql_proxy -instances=connectionstring"
echo "starting service"
$cmd </dev/null >proxy.out 2>proxy.err &
mysql -u root -p --host 127.0.0.1
Appending '&' to the end of a line runs it in the background.
e.g.:
$ ls -l &
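Putting it together, a sketch of how test.sh could look with the proxy backgrounded (the cloud_sql_proxy path and connection string are the placeholders from the question, and the sleep is a crude wait for the proxy to start listening):
#!/bin/bash
./cloud_sql_proxy -instances=connectionstring >proxy.out 2>proxy.err &  # start the proxy in the background
proxy_pid=$!  # remember its PID so we can stop it later
sleep 2  # crude: give the proxy a moment to start listening
mysql -u root -p --host 127.0.0.1  # this now runs while the proxy keeps serving
kill "$proxy_pid"  # stop the proxy when the mysql session ends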

how to daemonize a script

I am trying to use daemon on Ubuntu, but I am not sure how to use it even after reading the man page.
I have the following testing script foo.sh
#!/bin/bash
while true; do
echo 'hi' >> ~/hihihi
sleep 10
done
Then I tried this command but nothing happened:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- foo.sh
The file hihihi was not updated, and I found this in the errlog:
20161221 12:12:36 foo: client (pid 176193) exited with 1 status
How could I use the daemon command properly?
AFAIK, most daemon or daemonize programs change the current directory to / as part of the daemonization process. That means you must give the full path of the command:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- /path/to/foo.sh
If that still does not work, try specifying a shell explicitly:
daemon --name="foo" -b ~/daemon.out -l ~/daemon.err -v -- /bin/bash -c /path/to/foo.sh
It is not necessary to use the daemon command at all; you can daemonize your script manually in bash. For example:
#!/bin/bash
# At first you have to redirect stdout and stderr to /dev/null
exec >/dev/null
exec 2>/dev/null
# Fork and go to background
(
while true; do
echo 'hi' >> ~/hihihi
sleep 10
done
)&
# Parent process finished but child still working
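If all you need is for the script to survive your terminal closing, a simpler sketch is to detach it with setsid or nohup instead:
setsid /path/to/foo.sh </dev/null >~/daemon.out 2>~/daemon.err &
# or
nohup /path/to/foo.sh >~/daemon.out 2>&1 &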

SSH: Execute a command and keep the shell open [duplicate]

I find myself needing to log into various servers, set environment variables, and then work interactively.
e.g.
$ ssh anvil
jla@anvil$ export V=hello
jla@anvil$ export W=world
jla@anvil$ echo $V $W
hello world
How can I combine the first few commands, and then leave myself at a prompt?
Something like:
$ ssh anvil --on-login 'export V=hello; export W=world;'
jla@anvil$ echo $V $W
hello world
Obviously this is a model problem. What I am really asking is 'how do I ssh to a different machine, run some commands, and then continue as if I'd run them by hand?'
Probably the simplest thing is:
$ ssh -t host 'cmd1; cmd2; sh -i'
If you want to set variables, do:
$ ssh -t host 'cmd1; cmd2; FOO=hello sh -i'
Note that this is a terrible hack, and you would be much better off putting your desired initial commands in a script and doing:
$ scp setup host:~
$ ssh host
host$ . setup
You could also use the following expect script:
#!/usr/bin/expect -f
spawn ssh $argv
send "export V=hello\n"
send "export W=world\n"
send "echo \$V \$W\n"
interact
Turns out this is answered by this question:
How can I ssh directly to a particular directory?
to ssh:
ssh -t anvil "export V=hello; export W=world; bash"
followed by:
jla@anvil$ echo $V $W
hello world
It is worth noting that ssh -t can also be used to connect to one host via another host.
So, for example, if you want to execute a command on anvil, but anvil is only reachable from the host gateway (because of a firewall, etc.), you can do it like this:
ssh gateway -t 'ssh anvil -t "export V=hello; export W=world;bash -l";'
Exiting anvil will also log you out of gateway (if you want to stay on gateway after leaving anvil, just add another bash -l before closing the command).
Another approach is to execute this beast (which also gives me a colored shell):
ssh host -t "echo 'rm /tmp/initfile; source ~/.bashrc; cd foo/; git status' > /tmp/initfile; bash --init-file /tmp/initfile"
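With a reasonably recent OpenSSH client (RemoteCommand needs OpenSSH 7.6 or newer), a sketch of the same idea can live in ~/.ssh/config, so that a plain ssh anvil drops you into a shell with the variables already set:
Host anvil
    RequestTTY yes
    RemoteCommand export V=hello W=world; exec bash -l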
