I have a requirement: I have a Linux script that does my work, but I want to execute the script file with a simple keyword and get its output.
example:
sudo save
The above command should execute my script file, which is saved as abc.sh.
How can I do this on Linux or macOS?
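One common approach is to install a small wrapper script named after your keyword somewhere on your PATH. The sketch below simulates abc.sh in a temporary directory so it is self-contained; in practice you would point the wrapper at the real path of abc.sh and install it in /usr/local/bin:

```shell
# Demo using a temp dir; in practice, point the wrapper at the real
# location of abc.sh and install the wrapper in /usr/local/bin.
dir=$(mktemp -d)

# Stand-in for your abc.sh
printf '#!/bin/sh\necho "script ran"\n' > "$dir/abc.sh"
chmod +x "$dir/abc.sh"

# Wrapper named `save` that execs the script, forwarding any arguments
printf '#!/bin/sh\nexec "%s" "$@"\n' "$dir/abc.sh" > "$dir/save"
chmod +x "$dir/save"

PATH="$dir:$PATH" save    # prints: script ran
```

Note that sudo resets PATH by default (the sudoers secure_path setting), so for `sudo save` to work the wrapper should live in a directory such as /usr/local/bin.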
Both PHP and Python let me execute a single line of code straight from the shell, without entering a REPL or writing a script file.
This lets me do quick checks and tests, for example by timing them with the shell's time utility.
How do I do this with Node?
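For reference, the PHP and Python one-liners alluded to look like this (timed with the shell's time keyword):

```shell
# PHP equivalent (if php is installed): time php -r 'echo 1 + 1, "\n";'
time python3 -c 'print(1 + 1)'    # prints 2, plus timing info on stderr
```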
node -e "<code>"
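For example (assuming node is on your PATH); node also has -p, which evaluates an expression and prints its result:

```shell
node -e 'console.log(1 + 1)'   # prints 2
node -p '1 + 1'                # prints 2 (-p echoes the expression's value)
```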
I have a Python script which loops through a folder, creating a shell command for each file.
Each command is written to a shell script, and this script is then run using subprocess.Popen. (I need to do this because I also need to set up the environment beforehand for the commands to work.)
Here is some pseudocode:
import subprocess

def create_shell_script(self):
    '''loop through a folder, create a command for each file and write this to a shell script'''
    # command to run
    base_command = "run this"
    # list of commands
    command_list = []
    # loop through the files to create a command for each
    for file in my_folder:
        command_list.append(base_command + " " + file)
    # open the shell script
    scriptname = "shell.sh"
    shellscript = open(scriptname, 'w')
    # set the environment using '.' since bash is used below to run the shell script
    shellscript.write("#!/bin/bash\n. /etc/profile.d/set_environment\n")
    # loop through the commands and write them to the shell script
    for command in command_list:
        shellscript.write(command + "\n")
    # use subprocess to run the shell script; use bash to interpret the environment
    cmd = "bash " + scriptname
    proc = subprocess.Popen([cmd], stderr=subprocess.PIPE, stdout=subprocess.PIPE, shell=True)
When I run this python script only the first 6 commands within the shell script are executed. The error message from the command suggests the command is truncated as it is read by subprocess.
When I run the shell script manually all commands are executed as expected so I know the shell script is correct.
Each command is pretty much instantaneous, so I can't imagine speed being the issue.
I did try running a subprocess command for each file, but I ran into difficulties setting the environment, and I like the approach of creating a single .sh script since it also serves as a log file.
I have read the subprocess docs but haven't spotted anything and google hasn't helped.
You should close the shellscript file object after writing the commands to it and before running it via Popen. Otherwise, the file might not be written completely before you execute it.
The most elegant solution is to use a context manager, which closes the file automatically:
with open(scriptname, "w") as f:
    f.write(...)
Don't use Popen if you don't understand what it does. It creates a process, but it will not necessarily run to completion until you take additional steps.
You are probably simply looking for subprocess.check_call. Also, storing the commands in a file is unnecessary and somewhat fragile. Just run subprocess.check_call(['bash', '-c', string_of_commands]), where string_of_commands has the commands separated by newlines or semicolons.
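To illustrate at the shell level: bash -c takes a single string and runs it as a script, so newline-separated and semicolon-separated commands behave identically:

```shell
bash -c 'echo one; echo two'      # semicolon-separated
bash -c 'echo one
echo two'                         # newline-separated: same output
```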
If you do want or need to use Popen, you will need to call communicate() or at least wait() on the Popen object.
Finally, avoid shell=True if you are passing in a list of commands; the purpose of the shell is to parse a string argument, and it does nothing (useful) if that parsing has already been done or isn't necessary.
Here is an attempt at refactoring your script.
def create_shell_script(self):
    '''loop through a folder, create a command for each file and run them all in one bash process'''
    command_list = ['. /etc/profile.d/set_environment']
    for file in my_folder:
        command_list.append("run this " + file)
    # join with newlines so each command runs on its own line
    subprocess.check_call(['bash', '-c', '\n'.join(command_list)])
If you want check_call to catch an error in any individual command, pass the -e option to bash, or include set -e before the commands which must not fail (but be aware that many innocent-looking constructs technically produce an error, such as false or grep nomatch file).
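The difference is easy to demonstrate in the shell; with -e, bash aborts at the first failing command, which check_call would then see as a non-zero exit status:

```shell
bash -c  'false; echo reached'    # prints "reached"; overall exit status 0
bash -ec 'false; echo reached' || echo "aborted with status $?"    # echo never runs
```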
The majority of functions in the subprocess module are simple wrappers which use and manipulate a Popen object behind the scenes. You should only resort to the fundamental Popen if none of the other functions are suitable for your scenario.
If you genuinely need both stderr and stdout, perhaps you do need Popen, but then your question needs to describe how exactly your program manipulates the input and output of the shell.
I am trying to write a bash script that utilizes the command mkvirtualenv.
I can use it in the console without a problem, but as soon as I try to use it in a bash script I get ./run: line 1: mkvirtualenv: command not found
I am not aware of anything that would create such a situation.
Does anyone know why the bash script behaves like that?
The reason emerged from the comments below the question: mkvirtualenv is a function.
If you want the function to exist in the script, you can export it from your shell by
export -f mkvirtualenv
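A self-contained sketch of the mechanism, where greet is a stand-in for mkvirtualenv (export -f is bash-specific, and works here because the child process is also bash):

```shell
greet() { echo "hello from a function"; }   # stand-in for mkvirtualenv

bash -c 'greet' 2>/dev/null || echo "child shell: not found"   # not visible yet

export -f greet    # export the function definition to child bash processes
bash -c 'greet'    # prints: hello from a function
```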
I have a shell script file (run.sh) that contains the following:
#!/bin/bash
%JAVA_HOME%/bin/java -jar umar.jar
When I try to run it (./run.sh), it gives me the following:
umar/bin/run.sh: line 1: fg: no job control
However if I run same command directly on shell, it works perfectly.
What's wrong with the script file?
Thanks
%foo% is not how you do variable substitution in a Bourne/bash shell script; in a script, bash interprets the leading % as a job specification, hence the "fg: no job control" error. I assume you're running this from a Windows command line, which is why it works when you run it directly. Try using proper Bourne syntax:
${JAVA_HOME}/bin/java -jar umar.jar
Try turning on monitor mode
set -m
%JAVA_HOME% will substitute a Windows environment variable and is appropriate in a .bat file.
Try the following shell script, which should work on most UNIX-like systems.
#!/bin/bash
$JAVA_HOME/bin/java -jar umar.jar