I need to pass a string argument to a bash script that may contain a $ character. I don't want to force a \ to be inserted into the string outside of the script.
I tried to handle the escaping within the script instead, but couldn't figure out how to do this.
I had a similar issue at a later point in the script where I read in a string using "read". I could only get it to work by forcing the user to enter \$, which is not going to work for my application.
Any suggestions?
If you don't want to have to escape the $ with a backslash, then the only other alternative is to surround the argument in single quotes. It's not possible to pass a 'naked' $ into your script because the shell will try to expand it. Using single quotes prevents shell expansion and will preserve the $.
For example:
myscript.sh '$foo'
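A minimal sketch of what the script then sees, assuming a hypothetical myscript.sh that simply prints its first argument:
#!/bin/bash
# myscript.sh (hypothetical): print the first argument exactly as received
printf '%s\n' "$1"
Called as myscript.sh '$foo' it prints the literal text $foo, whereas myscript.sh $foo or myscript.sh "$foo" would let the calling shell expand $foo before the script ever sees it.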
I have a method that should receive a string, but it does not work as intended
receive_string()
{
    local string=$1
    echo "$string"
}
When I call it I get a seemingly unrelated error.
receive_string "Catch a string my friend!"
Returns:
4: local: friend!: bad variable name
Instead of
Catch a string my friend!
What is the problem, and how to solve it?
The problem is that the unquoted expansion of $1 is subject to word splitting, so
local string=$1 will behave as local string=Catch a string my friend!.
It is good practice to enclose your string variables in double quotes so they are not split apart during expansion. Try this:
local string="$1"
It won't add quotes to your string and it will keep it together.
For the behavior specific to !, check KamilCuk's answer.
You are not running your script with bash; you are running it under the dash shell. The behavior does not happen in bash: there, the local command is handled specially (like export, for example) and its arguments have the same semantics as in an assignment. Most probably the shebang of your script is #!/bin/sh and sh is linked to dash on your system. Use a bash shebang to run bash.
local string=$1
is expanding $1 so it becomes:
local string=Catch a string my friend!
which creates a variable string with the value Catch, creates empty variables a, string and my, and then fails on friend!, which is not a valid variable name.
As always, quote variable expansion.
local string="$1"
Research when to quote variables in shell. Check your scripts with http://shellcheck.net
Side note: the ! in "something!" triggers history expansion in bash, but only in an interactive shell with history expansion enabled. There you would quote it, e.g. with single quotes: "something"'!'.
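For reference, here is a minimal sketch of the corrected function, assuming the script is switched to a bash shebang as suggested above:
#!/bin/bash
receive_string()
{
    # quoting "$1" keeps the whole argument together, even under sh/dash
    local string="$1"
    echo "$string"
}
receive_string "Catch a string my friend!"    # prints: Catch a string my friend!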
I'm using WSL (Ubuntu 18.04) on Windows 10 and bash.
I have a file filename.gpg with the content:
export SOME_ENV_VAR='123'
Now I run the following commands:
$ $(gpg -d filename.gpg)
$ echo $SOME_ENV_VAR
'123' <-- with quotes
However, if I run it directly in the shell:
$ export SOME_ENV_VAR='123'
$ echo $SOME_ENV_VAR
123 <-- without quotes
Why does it behave like this? Why is there a difference between running a command using $() and running it directly?
Aside: I got it working using eval $(gpg -d filename), but I have no idea why this works.
Quotes in shell scripts do not behave differently from quotes in shell commands.
With the $(gpg -d filename.gpg) syntax, you are not executing a shell script, but a regular single command.
What your command does
It executes gpg -d filename.gpg
From the result, it takes the first (IFS-separated) word as the command to execute
It takes every other (IFS-separated) word, including words from additional lines, as its parameters
It executes the command
From the following practical examples, you can see how it differs from executing a shell script:
Remove the word export from filename.gpg: the command is then SOME_ENV_VAR='123' which is not understood as a variable assignment (you will get SOME_ENV_VAR='123': command not found).
If you add several lines, they won't be understood as separated command lines, but as parameters to the very first command (export).
If you change export SOME_ENV_VAR='123' to export SOME_ENV_VAR=$PWD, SOME_ENV_VAR will not contain the content of the variable PWD, but the literal string $PWD
Why is it so?
See how bash performs expansion when analyzing a command.
There are many steps. $(...) is called "command substitution" and is the fourth step. When it is done, none of the previous steps will be performed again. This explains why your command does not work when you remove the export word, and why variables are not substituted in the result.
Moreover "quote Removal" is the last step and the manual reads:
all unquoted occurrences of the characters ‘\’, ‘'’, and ‘"’ that did
not result from one of the above expansions are removed
Since the single quotes resulted from the "command substitution" expansion, they were not removed. That's why the content of SOME_ENV_VAR is '123' and not 123.
Why does eval work?
Because eval triggers another complete parsing of its parameters. The whole set of expansions is run again.
From the manual:
The arguments are concatenated together into a single command, which is then read and executed
Note that this means that you are still running one single command, and not a shell script. If your filename.gpg script has several lines, subsequent lines will be added to the argument list of the first (and only) command.
What should I do then?
Just use source along with process substitution.
source <(gpg -d filename.gpg)
Unlike eval, source is used to execute a shell script in the current context. Process substitution provides a pseudo-filename that contains the result of the substitution (i.e. the output of gpg).
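You can reproduce the difference without gpg at all. A minimal sketch, assuming a plain file env.txt containing the single line export SOME_ENV_VAR='123':
$(cat env.txt)           # word-splits the output; the single quotes are NOT removed
echo "$SOME_ENV_VAR"     # prints: '123'
source <(cat env.txt)    # re-parses the line as shell code, so quote removal happens
echo "$SOME_ENV_VAR"     # prints: 123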
I have one variable, which comes from somewhere else, like:
VAR1='hhgfhfghhgf"";2Ddgfsaj!!!$#^$\'&%*%~*)_)(_{}||\\/'
Now I have a command like this
./myscript.sh '$VAR1'
I am getting that $VAR1 from a different process, and when I display it, it looks exactly as shown above.
Now that command fails because there is already a single quote inside the variable; it gets expanded at that point in the process where I use it, which causes the error.
I have control over myscript.sh but not over the above command.
Is there any way I can get the variable inside my script?
What you describe should not fail just from being passed to your script. More likely your script (or a command that this argument is passed into) does not handle the value correctly. You can either use printf with the %q modifier to escape all special characters and then pass it to your script:
./myscript.sh "$(printf '%q\n' "$VAR1")"
... or do the same within your script before you pass it on to some other command:
VAR2="$(printf '%q\n' "$VAR1")"
I am writing a shell script whose parameter will be a path to a location. I am using the readlink -f command to get the absolute path of the path sent by the user. Suppose the path sent by the user has spaces, like
/home/stack over flow/location
I am expecting the user to send it with quotes, like
"/home/stack over flow/location"
I have 2 issues here,
1) Even if the path is passed with quotes, when I iterate over $# the quotes are suppressed and I get the path without quotes.
2) I did a workaround to check if the parameter contains spaces and add quotes explicitly, like
if [[ $1 = *\ * ]] ; then
temp=\"$1\"
fi
where I add the quotes " explicitly, but the problem I am facing now is that even with the quotes added around the value with spaces, readlink is not working.
When I do
full_path=`readlink -f ${temp}`
Its saying
usage: readlink [-n] [-f] symlink
If I execute it as a normal unix command in shell like
readlink -f "/home/stack over flow/location"
it works and I get the full path. Why is readlink not working in the shell script even though I add the quotes? Please help me out with this.
Well it makes sense that you get the path without quotes in the script parameters: the quotes are meant for the shell processing the call of your script, not for the script itself.
I assume you call the command like this:
./test "/home/stack over flow/location"
where 'test' is the script you implement. The quotes around the path make sure the shell that executes this command treats the path as one single argument, not as three separate strings as it would without the quotes. But the quotes are not treated as part of the parameter itself. So when the parameter is handed over to your script, you get one single parameter holding the whole path, not a parameter holding a modified string with the quotes added to it.
You can use that parameter without problems. Just put quotes around it again:
readlink -f "$1"
will protect the blanks contained in the specified path, just as in the original call.
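For completeness, a minimal sketch of such a script (hypothetical name resolve.sh) that quotes the parameter everywhere it is used:
#!/bin/bash
# resolve.sh (hypothetical): print the absolute path of each argument
for arg in "$@"; do
    full_path=$(readlink -f "$arg")    # the quotes keep a path with spaces as one argument
    printf '%s\n' "$full_path"
done
Called as ./resolve.sh "/home/stack over flow/location" it prints the resolved path on one line.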
I've run into a really silly problem with a Linux shell script. I want to delete all files with the extension ".bz2" in a directory. In the script I call
rm "$archivedir/*.bz2"
where $archivedir is a directory path. Should be pretty simple, shouldn't it? Somehow, it manages to fail with this error:
rm: cannot remove `/var/archives/monthly/April/*.bz2': No such file or directory
But there is a file in that directory called test.bz2 and if I change my script to
echo rm "$archivedir/*.bz2"
and copy/paste the output of that line into a terminal window the file is removed successfully. What am I doing wrong?
TL;DR
Quote only the variable, not the whole expected path with the wildcard
rm "$archivedir"/*.bz2
Explanation
In Unix, programs generally do not interpret wildcards themselves. The shell interprets unquoted wildcards, and replaces each wildcard argument with a list of matching file names.
(If $archivedir might contain spaces, an unquoted rm $archivedir/*.bz2 might not do what you want either; more on that below.)
You can disable this process by quoting the wildcard character, using double or single quotes, or a backslash before it. However, that's not what you want here - you do want the wildcard expanded to the list of files that it matches.
Be careful about writing rm $archivedir/*.bz2 (without quotes). The word splitting (i.e., breaking the command line up into arguments) happens after $archivedir is substituted. So if $archivedir contains spaces, then you'll get extra arguments that you weren't intending. Say archivedir is /var/archives/monthly/April to June. Then you'll get the equivalent of writing rm /var/archives/monthly/April to June/*.bz2, which tries to delete the files "/var/archives/monthly/April", "to", and all files matching "June/*.bz2", which isn't what you want.
The correct solution is to write:
rm "$archivedir"/*.bz2
Your original line
rm "$archivedir/*.bz2"
Can be re-written as
rm "$archivedir"/*.bz2
to achieve the same effect. In your existing setup the wildcard is inside the quotes, so it is never expanded. By closing the double quotes before the wildcard (which is perfectly legitimate) you avoid this.
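If you also want to be defensive about the case where no .bz2 files exist at all, here is a sketch using bash's nullglob option (assuming the script runs under bash and $archivedir is already set):
#!/bin/bash
shopt -s nullglob                  # an unmatched glob expands to nothing instead of itself
files=("$archivedir"/*.bz2)
if (( ${#files[@]} > 0 )); then
    rm "${files[@]}"               # quoting the array elements protects spaces in the names
fi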
Just to expand on this a bit, bash has fairly complicated rules for dealing with metacharacters in quotes. In general
almost nothing is interpreted in single-quotes:
echo '$foo/*.c' => $foo/*.c
echo '\\*' => \\*
shell substitution is done inside double quotes, but file metacharacters aren't expanded:
foo=hello; echo "$foo/*.c" => hello/*.c
everything inside backquotes is passed to a subshell, which interprets it. In the first command below, BAR=bye only sets BAR in the environment of echo, and the command substitution is expanded before that, while BAR is still unset; so the first command echoes blank, but the second and third echo "bye":
BAR=bye echo `echo $BAR`
BAR=bye; echo `echo $BAR`
export BAR=bye; echo `echo $BAR`
(And getting this to print the way you want it on SO apparently takes several tries...)
The quotes cause the string to be interpreted as a literal; try removing them.
I've seen similar errors when calling a shell script like
./shell_script.sh
from another shell script. This can be fixed by invoking it as
sh shell_script.sh
Why not just rm -rf */*.bz2? Works for me on OSX.