Don't execute second command if first fails - linux

I have to run two scripts, but the second script should run only if the first script succeeds.
First script:
#!/bin/bash
server_name=$1
src_path=$2
dst_path=$3
dst_dir=$4
# move into the source directory, then push its contents to the remote share
cd "$src_path"
smbclient "//$server_name" -A ~/.smbclient -c "cd $dst_path; mkdir $dst_dir; cd $dst_dir; pwd; recurse; prompt; mput *; exit;"
The above script copies the contents of the source path into a directory on the remote server.
Second script:
#!/usr/bin/python
import sys
import os
import subprocess

project = sys.argv[1]
path = sys.argv[2]
print "\n inside second script"
fileName = "myfile.html"
file = open(fileName, "a")
if file:
    file.write('Release Locations:<br>\n' +
               '<ul>\n' +
               '<li>file://' + path + '</li>\n' +
               '<li>smb://' + path + '</li>\n' +
               '</ul>')
    file.close()
else:
    exit(1)
Command (control does not reach the second script):
./first_script.sh server_name/Shared /home/myname/work/target Software/target required_files && python second_script.py sony server_name/Shared/Software/target/required_files
Command (control does not reach the second script):
./first_script.sh server_name/Shared /home/myname/work/target Software/target required_files || python second_script.py sony server_name/Shared/Software/target/required_files
Command (control reaches the second script):
./first_script.sh server_name/Shared /home/myname/work/target Software/target required_files; python second_script.py sony server_name/Shared/Software/target/required_files
When the first script is chained to the second with && or ||, control never reaches the second script.
Exit status of first_script when run individually: 0 (checked with echo $?).
The first script copies the contents successfully, but prints some warnings.
If I run the second script individually, it runs.
Warnings:
WARNING: The "idmap uid" option is deprecated
WARNING: The "idmap gid" option is deprecated
Questions:
Are these warnings the reason the second script is not executing?
How can I make the second script execute?
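For reference, && runs the right-hand command only when the left-hand command exits with status 0, and a script's exit status is that of the last command it ran. A quick diagnostic sketch (same commands as above) is to print the status immediately after the first script:
./first_script.sh server_name/Shared /home/myname/work/target Software/target required_files
echo "first_script exit status: $?"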

Related

python & shell: how to capture a value outside scope in a shell script which has embedded python code

The shell script below embeds Python code: it passes a value from the shell into the Python code, the value is updated in the Python scope, and it should then be usable back in the shell script's scope. I am not getting the output value of the pvar variable on the last line:
#!/bin/bash
export ans=100
cat << EOF > pyscript.py
#!/usr/bin/python3 -tt
import subprocess
print('............This is Python')
pvar=$ans
pvar=pvar+400
print(" Updated value of pvar =", pvar)
EOF
chmod 770 pyscript.py
./pyscript.py
echo "The value of pvar in bash = " $pvar
===========
You need to use command substitution to get the results of the script into a bash variable:
pvar=$(./pyscript.py)
echo $pvar
# ............This is Python Updated value of pvar = 500
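Putting the two together, a corrected version of the wrapper script might look like this (a sketch; command substitution captures everything the Python script prints, so the script below prints only the value):
#!/bin/bash
export ans=100
cat << EOF > pyscript.py
#!/usr/bin/python3 -tt
pvar = $ans
pvar = pvar + 400
print(pvar)
EOF
chmod 770 pyscript.py
# command substitution captures the script's stdout into the bash variable
pvar=$(./pyscript.py)
echo "The value of pvar in bash = $pvar"
# The value of pvar in bash = 500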

Scons PreAction Command is printed but apparently not executed

I'm building a large project with SCons; for reasons outside the scope of this question (long story), I need to pass the object-file options to the final link command inside a file.
E.g.:
gcc -o program.elf @objects_file.txt -T linker_file.ld
This command works, since I've tested it manually. But now I need to run it embedded in the project's build files. My first approach has been to collect all the options into a file in the following way:
dbg_exe = own_env.Program('../' + target_path, components)
own_env.AddPreAction(dbg_exe, 'echo \'$SOURCES\' > objects_file.txt')
Note: $SOURCES contains all the object files I need.
The command seems to be executed as expected (I see it printed in the terminal), but apparently it never actually ran, since I can't find objects_file.txt anywhere.
Curiously, if I copy and paste the printed line into the same terminal, the command succeeds, so I suppose the constructed syntax is correct.
I also tried a shorter test:
own_env.AddPreAction(dbg_exe, 'ls -l > salida_ls.txt')
...and another surprise: this time I get an error in the console:
scons: done reading SConscript files.
scons: Building targets ...
ls -l > salida_ls.txt
ls: cannot access '>': No such file or directory
ls: cannot access 'salida_ls.txt': No such file or directory
A simple 'ls -l' works fine.
Any idea why these kinds of shell commands don't work as expected? Is the > redirection symbol confusing SCons?
Some possibly useful information:
OS: Windows 10
Terminal: mingw32
SCons: v2.3.1
After searching, I've found out that this is related to the redefinition of the SPAWN construction variable:
# SPAWN override found in the build files (Python 2 / SCons 2.x era code)
import string
import subprocess
import _subprocess
import SCons.Platform.win32

def w32api_spawn(sh, escape, cmd, args, e_env):
    print "CMD value"
    print sh
    print escape
    print cmd
    print args
    print e_env
    print " ********************************** "
    if cmd == "SHELL":
        return SCons.Platform.win32.spawn(sh, escape, args[1], args[1:], e_env)
    cmdline = cmd + ' ' + string.join(args[1:], ' ')
    startupinfo = subprocess.STARTUPINFO()
    startupinfo.dwFlags |= _subprocess.STARTF_USESHOWWINDOW
    proc = subprocess.Popen(
        cmdline,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        startupinfo=startupinfo,
        shell=False,
        env=None
    )
    data, err = proc.communicate()
    print data
    rv = proc.wait()
    if rv:
        print "====="
        print err
        print "====="
    return rv
Looks like you'll need to swap back to the default SPAWN for that Program().
Add this to the top of that SConscript:
from SCons.Platform.win32 import spawn
Then replace the logic you pasted above with:
dbg_exe = own_env.Program('../' + target_path, components, SPAWN=spawn)
own_env.AddPreAction(dbg_exe, 'echo \'$SOURCES\' > objects_file.txt')
This assumes that you're only building on win32. If that's not true, you'll need to conditionally pass SPAWN to your Program() only when you're on win32, as sketched below.
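A conditional version might look like this (a sketch of the suggestion above, untested):
import sys

if sys.platform == 'win32':
    # restore the stock win32 spawn for this one target
    from SCons.Platform.win32 import spawn
    dbg_exe = own_env.Program('../' + target_path, components, SPAWN=spawn)
else:
    dbg_exe = own_env.Program('../' + target_path, components)
own_env.AddPreAction(dbg_exe, 'echo \'$SOURCES\' > objects_file.txt')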
Finally, I found a workaround: running a native Python function to build the file I needed. Unfortunately, I cannot afford more time for this issue. I didn't find the real cause or solution, but it is clearly not related to normal SCons behavior; it comes from the trick performed in the SPAWN override.
scons_common.GenerateObjectsFile('../' + objects_file, components)
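GenerateObjectsFile is the poster's own helper, so its implementation isn't shown; a hypothetical sketch of such a function, using a plain Python function as the SCons action, could be:
def generate_objects_file(env, target_path, components):
    # action function: SCons calls it with (target, source, env)
    def write_objects_file(target, source, env):
        with open(str(target[0]), 'w') as out:
            out.write(' '.join(str(src) for src in source))
    return env.Command(target_path, components, write_objects_file)

# e.g. generate_objects_file(own_env, '../' + objects_file, components)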

Command doesn't work in subprocess or os.popen but works in terminal

I've tried lots of methods to run my shell script, but none of them works from a python3 script. The command is very simple and works in the terminal without problems. Here's what I've tried without success:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os

dat = os.popen('sh commonBash.sh').read()
print(dat)
if "no" in dat:
    print('Not found')
    status = 'Install'
else:
    print('Found')
    status = 'Remove'
When I run it in the terminal the output is correct, but when I run it from the python script it doesn't work.
Here's the shell script:
name="libreoffice" && dpkg-query -W $name && echo "$name"
The output of the python script is here:
dpkg-query: no packages found matching libreoffice # Here the $name is correct
# This is $name, which is an empty newline
Found # I don't know why, but the output is found
However, when I run the actual program, the output of the same part is somehow different. Here it is:
# Empty lines for print(dat) and echo "$name" also
# Why?
Found # And the result is still Found...
Ok, now it works with these changes:
Here's the shell script (commonBash.sh):
name=libreoffice
dat=$(dpkg-query -W $name)
echo "Checking for "$name": "
echo "$dat"
if [ "" == "$dat" ]; then
echo "No "$name". Setting up "$name"..."
else
echo $name" is installed already."
fi
Here's the python script:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os

dat = os.popen('bash commonBash.sh').read()
print(dat)
if "No" in str(dat):
    print('Not found')
    status = 'Install'
else:
    print('Found')
    status = 'Remove'
And here's the output (now it's correct):
dpkg-query: no packages found matching libreoffice
Checking for libreoffice:
No libreoffice. Setting up libreoffice...
Not found
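A likely explanation for the original confusion: dpkg-query prints its "no packages found" message to stderr, and os.popen() captures only stdout, so "no" never appeared in dat. A subprocess-based alternative that makes the stream handling explicit might look like this (a sketch, not from the original post; Python 3.7+):
#!/usr/bin/env python3
import subprocess

# run the script with bash explicitly and capture stdout as text;
# stderr is left alone, so diagnostic messages stay visible on the terminal
result = subprocess.run(['bash', 'commonBash.sh'], stdout=subprocess.PIPE, text=True)
dat = result.stdout
print(dat)
status = 'Install' if 'No' in dat else 'Remove'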

Can I use wild cards when executing a shell process in a different working directory?

This is my current directory structure:
/          <-- current working dir
/php/
    file1.php
    file2.php
    file3.txt
I am trying to execute the following Groovy commands:
def cp = 'cp -f *.php /tmp/'
def cpProc = cp.execute(null, new File('./php/'))
cpProc.waitFor()
log.info 'current exitvalue :' + cpProc.exitValue()
log.info 'current proc out : ' + cpProc.text
but I keep getting cp: cannot stat '*.php': No such file or directory. I've verified the files exist, and I've verified my current working directory.
If I execute log.info 'ls -la'.execute(null, new File('./php/')).text I see the .php and .txt files.
This seems like a long shot, but I think there might be a bug with using wildcards in commands executed in a specified working directory, unless there's something I'm missing.
I'm using groovy 1.7.5
This version works for me, just try it out:
#!/usr/bin/env groovy
command = ["sh", "-c", "cp -f *.php /tmp/"]
def cpProc = command.execute(null, new File('./php/'))
cpProc.waitFor()
print 'current exitvalue :' + cpProc.exitValue() + '\n'
print 'current proc out : ' + cpProc.text + '\n'
print 'ls -la'.execute(null, new File('/tmp/')).text
The first answer on this question explains why your version did not work: Groovy execute "cp *" shell command. In short, wildcards are expanded by the shell, and String.execute() runs the command directly without a shell, so cp receives a literal *.php argument; wrapping the command in sh -c, as above, restores glob expansion.
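If you'd rather not spawn a shell at all, you could expand the glob on the Groovy side instead (a sketch using standard Groovy file APIs, not from the original answer):
new File('./php/').eachFileMatch(~/.*\.php/) { f ->
    // copy each matching file into /tmp, overwriting any existing copy
    new File('/tmp', f.name).bytes = f.bytes
}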

How to write data to existing process's STDIN from external process?

I'm looking for ways to write data to an existing process's STDIN from external processes, and found the similar question How do you stream data into the STDIN of a program from different local/remote processes in Python? on Stack Overflow.
In that thread, @Michael says that we can get the file descriptors of an existing process under the path below, and that on Linux we are permitted to write data into them.
/proc/$PID/fd/
So I've created the simple script below to test writing data to the script's STDIN (and TTY) from an external process.
#!/usr/bin/env python
import os, sys

def get_ttyname():
    for f in sys.stdin, sys.stdout, sys.stderr:
        if f.isatty():
            return os.ttyname(f.fileno())
    return None

if __name__ == "__main__":
    print("Try commands below")
    print("$ echo 'foobar' > {0}".format(get_ttyname()))
    print("$ echo 'foobar' > /proc/{0}/fd/0".format(os.getpid()))
    print("read :: [" + sys.stdin.readline() + "]")
This test script prints the paths of its STDIN and TTY and then waits for someone to write to its STDIN.
I launched this script and got the messages below.
Try commands below
$ echo 'foobar' > /dev/pts/6
$ echo 'foobar' > /proc/3308/fd/0
So I executed echo 'foobar' > /dev/pts/6 and echo 'foobar' > /proc/3308/fd/0 from another terminal. After both commands, the message foobar is displayed twice on the terminal the test script is running on, but that's all; the line print("read :: [" + sys.stdin.readline() + "]") was not executed.
Are there any ways to write data from external processes to an existing process's STDIN (or other file descriptors), i.e. to trigger execution of the line print("read :: [" + sys.stdin.readline() + "]") from other processes?
Your code will not work: /proc/pid/fd/0 is a link to the /dev/pts/6 file, so both of the commands
$ echo 'foobar' > /dev/pts/6
$ echo 'foobar' > /proc/pid/fd/0
write to the terminal. The input goes to the terminal, not to the process.
It will work if stdin initially is a pipe.
For example, test.py is:
#!/usr/bin/python
import os, sys

if __name__ == "__main__":
    print("Try commands below")
    print("$ echo 'foobar' > /proc/{0}/fd/0".format(os.getpid()))
    while True:
        print("read :: [" + sys.stdin.readline() + "]")
Run this as:
$ (while [ 1 ]; do sleep 1; done) | python test.py
Now, from another terminal, write something to /proc/pid/fd/0 and it will arrive in test.py.
I want to leave an example here that I found useful. It's a slight modification of the while-true trick above, which failed intermittently on my machine.
# pipe cat to your long running process
( cat ) | ./your_server &
server_pid=$!
# send an echo to your cat process that will close cat and in my hypothetical case the server too
echo "quit\n" > "/proc/$server_pid/fd/0"
It was helpful to me because for particular reasons I couldn't use mkfifo, which is perfect for this scenario.
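For completeness, the mkfifo approach mentioned above might look like this (a sketch with hypothetical names; on Linux a FIFO reader sees EOF once its last writer closes, so a long-lived writer is parked on it first):
mkfifo /tmp/server_in
# park a writer on the FIFO so the server doesn't see EOF after each message
sleep infinity > /tmp/server_in &
./your_server < /tmp/server_in &
# any process can now feed the server's stdin:
echo "quit" > /tmp/server_in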
