There is, in a file, some multi-command line like this:
cd /home/user; ls
In a bash script, I would like to execute these commands, adding some arguments to the last one. For example:
cd /home/user; ls -l *.png
I thought it would be enough to do something like this:
#!/bin/bash
commandLine="$(cat theFileWithCommandInside) -l *.png"
$commandLine
exit 0
But it says:
/home/user;: No such file or directory
In other words, the ";" character no longer means "end of the command": the shell is trying to find a directory called "user;" in the home folder...
I tried to replace ";" with "&&", but the result is the same.
The point of your question is to execute commands stored in a string. There are thousands of ways to execute that indirectly, but eventually bash has to be involved.
So why not explicitly invoke bash to do the job?
bash -c "$commandLine"
from doc:
-c string
If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.
http://linux.die.net/man/1/bash
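A minimal sketch of the idea (the command string below is a stand-in for what `$(cat theFileWithCommandInside)` would produce):

```shell
# Stand-in for the file's contents plus the extra arguments:
commandLine="cd /tmp; echo done"
# A child bash re-parses the string, so the ";" separates the commands again:
bash -c "$commandLine"   # prints "done"
```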
Why don't you execute the commands themselves in the script, instead of "importing" them?
#!/bin/bash
cd /home/user; ls -l *.png
exit 0
Wrap the command into a function:
function doLS() {
    cd /home/user; ls "$@"
}
"$@" expands to all arguments passed to the function. If you (or the snippet authors) add functions expecting a predefined number of arguments, you may find the positional parameters $1, $2, ... useful instead.
As the maintainer of the main script, you will have to make sure that everyone providing such a snippet provides that "interface" your code uses (i.e. their code defines the functions your program calls and their functions process the arguments your program passes).
Use source or . to import the function into your running shell:
#!/bin/bash
source theFileWithCommandInside
doLS -l *.png
exit 0
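A self-contained sketch of this approach (the file name and paths here are just examples):

```shell
# Write a snippet file that defines the function (normally provided by others):
cat > /tmp/theFileWithCommandInside <<'EOF'
doLS() {
    cd /tmp && ls "$@"
}
EOF

# Import the function into the running shell, then call it with extra arguments:
source /tmp/theFileWithCommandInside
doLS -d .    # prints "."
```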
I'd like to add a few thoughts on the ; topic:
In other words, the ";" character no longer means "end of the
command": the shell is trying to find a directory called "user;" in
the home folder...
; is not used to terminate a statement as in C-style languages. Instead it is used to separate commands that should be executed sequentially inside a list. Example executing two commands in a subshell:
( command1 ; command2 )
If the list is inside a group, the last command must be followed by a ;:
{ command1 ; command2 ; }
In your example, the expanded $commandLine undergoes only word splitting and globbing (replacing the *); it is not re-tokenized, so the ; is never treated as a command separator and your code does not run successfully.
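You can see this with a small experiment: after variable expansion, the ; is just an ordinary character inside a word, not a command separator:

```shell
cmd="echo a; echo b"
$cmd            # prints "a; echo b" -- the ";" stays a literal argument
eval "$cmd"     # prints "a" and then "b" -- eval re-parses the string
```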
The key is: eval
Here is the fixed script (look at the third line):
#!/bin/bash
commandLine="$(cat theFileWithCommandInside) -l *.png"
eval "$commandLine"
exit 0
Using the <(...) form
sh <(sed 's/$/ *.png/' theFileWithCommandInside)
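For example (the snippet file here is hypothetical, and echo stands in for ls to keep the output predictable):

```shell
# Hypothetical snippet file ending in a command we want to extend:
printf 'cd /tmp; echo\n' > /tmp/theFileWithCommandInside
# sed appends the extra arguments to the line; sh parses and runs the result:
sh <(sed 's/$/ hello/' /tmp/theFileWithCommandInside)   # prints "hello"
```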
Related
I want to define a custom bash function, which gets an argument as a part of a dir path.
I'm new to bash scripts. The code examples provided online are somewhat confusing to me or don't work properly.
For example, the expected bash script looks like:
function my_copy() {
sudo cp ~/workspace/{$1} ~/tmp/{$2}
}
If I type my_copy a b,
then I expect the function executes sudo cp ~/workspace/a ~/tmp/b
in the terminal.
Thanks in advance.
If you have the below function in, say, a copy.sh file, and you source it (source copy.sh or . copy.sh), then the function call my_copy will work as expected.
$1 and $2 are positional parameters.
i.e. when you call my_copy a b, $1 will hold the first command-line argument, which is a in your case, and $2, the second command-line argument, will hold the value b. The function will work as expected.
Also, you have a logical error in the function: you have written {$1} instead of ${1}. It will expand to {a} instead of a, and the function will throw an error like cp: cannot stat '~/workspace/{a}': No such file or directory when you run it.
Additionally, braces are only required from the tenth positional parameter onwards; otherwise you can omit them. E.g. ${10} instead of $10, which would be parsed as ${1} followed by a literal 0.
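A quick demonstration of why the braces matter from the tenth parameter on:

```shell
set -- a b c d e f g h i j
echo "$9"      # prints "i"
echo "${10}"   # prints "j"
echo "$10"     # prints "a0" -- parsed as ${1} followed by a literal 0
```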
function my_copy() {
sudo cp ~/workspace/$1 ~/tmp/$2
}
So the above function will execute the statement sudo cp ~/workspace/a ~/tmp/b as expected.
To understand the concept, you can try echo $1, echo ${1}, echo {$1}, echo {$2}, echo ${2}, and echo $2 inside the script to see the resulting values. For more, see the special $ sign shell variables.
There is a syntax error in your code: you don't reference a variable as {$foo}. If $1 is a and $2 is b, then you execute
sudo cp ~/workspace/{$1} ~/tmp/{$2}
BASH is going to replace $1 with a and $2 with b, so, BASH is going to execute
sudo cp ~/workspace/{a} ~/tmp/{b}
That means that cp is going to fail because there is no file named {a}.
There are several ways to reference a variable:
echo $foo
echo ${foo}
echo "$foo"
echo "${foo}"
Otherwise, your code looks good and should work.
Take a look at these links, first and second; it's really important to quote your variables. If you want more information about BASH, or can't sleep at night, try the Official Manual: it has everything you must know about BASH, and it's a good soporific too ;)
PS: I know $1, $2, etc. are positional parameters; I called them variables because you treat them as variables, and my answer applies to both.
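A small example of why the quoting matters (unquoted expansions undergo word splitting):

```shell
foo="two  spaces"
echo $foo      # prints "two spaces"  -- word splitting collapses the whitespace
echo "$foo"    # prints "two  spaces" -- quoting preserves it
```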
Hi… Need a little help here…
I tried to emulate the DOS' dir command in Linux using bash script. Basically it's just a wrapped ls command with some parameters plus summary info. Here's the script:
#!/bin/bash
# default to current folder
if [ -z "$1" ]; then var=.;
else var="$1"; fi
# check file existence
if [ -a "$var" ]; then
    # list contents with color, folder first
    CMD="ls -lgG $var --color --group-directories-first"; $CMD;
    # sum all files size
    size=$(ls -lgGp "$var" | grep -v / | awk '{ sum += $3 }; END { print sum }')
    if [ "$size" == "" ]; then size="0"; fi
    # create summary
    if [ -d "$var" ]; then
        folder=$(find $var/* -maxdepth 0 -type d | wc -l)
        file=$(find $var/* -maxdepth 0 -type f | wc -l)
        echo "Found: $folder folders "
        echo " $file files $size bytes"
    fi
# error message
else
    echo "dir: Error \"$var\": No such file or directory"
fi
The problem is that when the argument contains an asterisk (*), the ls inside the script behaves differently compared to the same ls command given directly at the prompt: instead of returning the whole file list, the script returns only the first file. See the video below for the comparison in action. I don't know why it behaves like that.
Anyone knows how to fix it? Thank you.
Video: problem in action
UPDATE:
The problem has been solved. Thank you all for the answers. Now my script works as expected. See the video here: http://i.giphy.com/3o8dp1YLz4fIyCbOAU.gif
The asterisk * is expanded by the shell when it parses the command line. In other words, your script doesn't get a parameter containing an asterisk; it gets a list of files as arguments. Your script only works with $1, the first argument. It should work with "$@" instead.
This is because when you use $1 you assume the shell does NOT expand *.
In fact, when * (or another glob) matches, it is expanded into one word per matching filename, and those words are passed to the script as $1, $2, etc.
Your script then sees only the first match in $1 and silently ignores the rest.
Seriously, read this and especially this. Really.
And please don't do things like
CMD=whatever you get from user input; $CMD;
You are begging for trouble: don't execute an arbitrary string from the user.
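If you need to build a command dynamically, a safer pattern (a suggestion on my part, not from the original script) is a bash array: each element stays a separate argument, and no string is ever re-parsed:

```shell
var="/tmp"
cmd=(ls -d -- "$var")   # each element stays one argument, even with spaces
"${cmd[@]}"             # prints "/tmp"
```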
Both answers above already answered your question, so I'm going to be a bit more verbose.
Your terminal is (probably) running the bash interpreter. This is the program that parses your input line(s) and acts on them.
When you enter a line, bash starts the following workflow:
parsing and lexical analysis
expansion
brace expansion
tilde expansion
variable expansion
arithmetic and other substitutions
command substitution
word splitting
filename generation (globbing)
removing quotes
Only after all of the above does bash
will execute some external commands, like ls or dir.sh... etc.,
or perform some "internal" actions for the known keywords and builtins like echo, for, if, etc...
As you can see, the second to last step is filename generation (globbing). So, in your case, if test* matches some files, bash expands the wildcard characters (i.e. does the globbing).
So,
when you enter dir.sh test*,
and the test* matches some files
bash does the expansion first,
and afterwards executes the command dir.sh with the already expanded filenames,
e.g. the script gets executed (in your case) as: dir.sh test.pas test.swift
BTW, it works exactly the same way for your ls example:
bash expands ls test* to ls test.pas test.swift,
then executes ls with the above two arguments,
and ls prints the result for those two arguments.
In other words, ls never even sees the test* argument: whenever possible, bash expands the wildcard characters (* and ?).
Now back to your script: add the following line after the shebang:
echo "the $0 got these arguments: $@"
and you will immediately see the real arguments your script was executed with.
Also, in such cases it is good practice to try executing the script in debug mode, e.g.
bash -x dir.sh test*
and you will see exactly what the script does.
Also, you can do the same for your current interpreter, e.g. just enter into the terminal
set -x
and try running dir.sh test*; you will see how bash executes the dir.sh command. (To stop debug mode, just enter set +x.)
Everybody is giving you valuable advice which you should definitely follow!
But here is the real answer to your question.
To pass unexpanded arguments to any executable you need to single quote them:
./your_script '*'
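For example, with a hypothetical one-line script that just prints its first argument:

```shell
printf 'echo "got: $1"\n' > /tmp/show_arg.sh   # hypothetical helper script
bash /tmp/show_arg.sh '*'    # prints "got: *" -- the glob reaches the script intact
bash /tmp/show_arg.sh \*     # escaping the asterisk works too
```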
The best solution I have is to use the eval command, in this way:
#!/bin/bash
cmd="some command \"with_quotes_and_asterisk_in_it*\""
echo "$cmd"
eval "$cmd"
The eval command concatenates its arguments and evaluates the result as a shell command, exactly as the shell itself would.
This solves my problem when I need to call a command with asterisk '*' in it from a script.
I am installing an AMP server on OSX (much easier in Ubuntu) using the MacPorts method. I would like to add a bash script called apachectl to my path that refers to /opt/local/apache2/bin/apachectl. I have been able to do this, but I was wondering how I can pass parameters to my apachectl that it would then pass on to /opt/local/apache2/bin/apachectl.
e.g. apachectl -t >>> /opt/local/apache2/bin/apachectl -t
For those wondering why I don't just reorder my path: I was asking so that I could do the same thing with other commands, such as ls -l, which I currently have as ll (Ubuntu style) and which looks like
ls -l $1
in the file.
Is the only way to do this via positional parameters, such as what I have done above?
For what you want, you want to use "$@".
Explanation is from this answer that is in turn from this page
$@ -- Expands to the positional parameters, starting from one.
When the expansion occurs within double quotes, each parameter
expands to a separate word. That is, "$@" is equivalent to "$1"
"$2" ... If the double-quoted expansion occurs within a word, the
expansion of the first parameter is joined with the beginning part
of the original word, and the expansion of the last parameter is
joined with the last part of the original word. When there are no
positional parameters, "$@" and $@ expand to nothing (i.e., they are removed).
That would mean that you could call your ll script as follows:
ll -a /
"$@" will expand -a / into separate positional parameters, meaning that your script actually ran
ls -l -a /
You could also use a function:
apachectl() {
/opt/local/apache2/bin/apachectl "$#"
}
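To see why quoted "$@" is the right choice, a small sketch:

```shell
# Print each argument in brackets, so word boundaries are visible:
show() { printf '[%s]' "$@"; echo; }
show -a "two words"    # prints "[-a][two words]" -- each argument stays intact
```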
Is there a way to execute only a specified number of lines from a shell script? I will try copying them with head and putting them into a separate .sh, but I wonder if there's a shortcut...
Reorganize the shell script and create functions.
Seriously, put every line of code into a function.
Then (using ksh as an example), source the script with "." into an interactive shell.
You can now run any of the functions by name, and only the code within that function will run.
The following trivial example illustrates this. You can use this in two ways:
1) Link the script so you can call it by the name of one of the functions.
2) Source the script (with . script.sh) and you can then reuse the functions elsewhere.
function one {
print one
}
function two {
print two
}
(
progname=${0##*/}
case $progname in
(one|two)
$progname "$@"
esac
)
Write your own script, /tmp/headexecute for example:
#!/bin/ksh
trap 'rm -f /tmp/somefile' 0
head -n "$2" "$1" > /tmp/somefile
chmod 755 /tmp/somefile
/tmp/somefile
Call it with the name of the file and the number of lines to execute:
/tmp/headexecute /tmp/originalscript 10
Most shells have no such facility. You will have to do it the hard way.
This might work for you (GNU sed):
sed -n '1{h;d};H;2{x;s/.*/&/ep;q}' script
This executes the first two lines of a script.
x=starting line
y=number of lines to execute
eval "$(tail -n +$x script | head -$y)"
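A concrete sketch with a throwaway script (the path and line numbers are just examples):

```shell
printf 'echo one\necho two\necho three\necho four\n' > /tmp/demo.sh
x=2   # starting line
y=2   # number of lines to execute
eval "$(tail -n +$x /tmp/demo.sh | head -$y)"   # prints "two" and "three"
```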
I'm writing a small bash script and I'm trying to create a directory by this:
mkdir ~/deploy.$1
I would think it should produce deploy.scriptFoo or whatever the value of $1 is.
It's only producing "deploy." and leaving off the $1 variable. I have tested the $1 variable in the output and I am positive it is being passed into the script. Any ideas?
The Problem
The $1 positional parameter is the first argument to your script, not the name of the script itself.
The Solution
If you want the script name, use $0. For example, given this sample script stored in /tmp/param_test.sh:
#!/bin/bash
mkdir "/tmp/deploy.$(basename "$0" .sh)"
ls -d /tmp/deploy*
the script ignores any arguments, but correctly returns the following output:
/tmp/deploy.param_test
If you want to vary the name, then you have to use a positional parameter in your script and call the script with an argument. For example:
#!/bin/bash
mkdir "/tmp/deploy.$1"
ls -d /tmp/deploy*
On the command line, you pass the argument to your script. For example:
$ bash /tmp/param_test.sh foo
/tmp/deploy.foo
See Also
http://www.gnu.org/software/bash/manual/bashref.html#Positional-Parameters