Expanding the path stored in a variable [duplicate] - linux

Say I have a folder called Foo located in /home/user/ (my /home/user also being represented by ~).
I want to have a variable
a="~/Foo" and then do
cd $a
I get
-bash: cd: ~/Foo: No such file or directory
However if I just do cd ~/Foo it works fine. Any clue on how to get this to work?

You can do (without quotes during variable assignment):
a=~/Foo
cd "$a"
But in this case the variable $a will not store ~/Foo but the expanded form /home/user/Foo. Or you could use eval:
a="~/Foo"
eval cd "$a"

You can use $HOME instead of the tilde (the tilde is expanded by the shell to the contents of $HOME).
Example:
dir="$HOME/Foo";
cd "$dir";

Although this question is merely asking for a workaround, this is listed as the duplicate of many questions that are asking why this happens, so I think it's worth giving an explanation. According to https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html#tag_18_06:
The order of word expansion shall be as follows:
Tilde expansion, parameter expansion, command substitution, and arithmetic expansion shall be performed, beginning to end.
When the shell evaluates the string cd $a, it first performs tilde expansion (which is a no-op, since $a does not contain a tilde), then it expands $a to the string ~/Foo, which is the string that is finally passed as the argument to cd.
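To make the ordering concrete, here is a small demonstration (assuming, as above, that your home directory is /home/user):
a='~/Foo'
echo ~/Foo     # tilde expansion applies here: /home/user/Foo
echo $a        # only parameter expansion happens; the output is the literal ~/Foo
cd $a          # fails: bash looks for a directory literally named "~/Foo"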

A much more robust solution would be to use something like sed or even better, bash parameter expansion:
somedir="~/Foo/test~/ing";
cd "${somedir/#\~/$HOME}"
or if you must use sed,
cd "$(echo "$somedir" | sed "s#^~#$HOME#")"
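As a quick check of both rewrites (again assuming $HOME is /home/user), only the leading tilde is replaced:
somedir='~/Foo/test~/ing'
echo "${somedir/#\~/$HOME}"             # /home/user/Foo/test~/ing
echo "$somedir" | sed "s#^~#$HOME#"     # same result via sed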

If you use double quotes, the ~ will be kept as a literal character in $a.
cd $a will not expand the ~, since the shell does not apply tilde expansion to the result of variable expansion.
The solution is:
eval "cd $a"

Related

Increment a variable name in ksh [duplicate]

It seems that the recommended way of doing indirect variable setting in bash is to use eval:
var=x; val=foo
eval $var=$val
echo $x # --> foo
The problem is the usual one with eval:
var=x; val=1$'\n'pwd
eval $var=$val # bad output here
(and since it is recommended in many places, I wonder just how many scripts are vulnerable because of this...)
In any case, the obvious solution of using (escaped) quotes doesn't really work:
var=x; val=1\"$'\n'pwd\"
eval $var=\"$val\" # fail with the above
The thing is that bash has indirect variable reference baked in (with ${!foo}), but I don't see any such way to do indirect assignment -- is there any sane way to do this?
For the record, I did find a solution, but this is not something that I'd consider "sane"...:
eval "$var='"${val//\'/\'\"\'\"\'}"'"
A slightly better way, avoiding the possible security implications of using eval, is
declare "$var=$val"
Note that declare is a synonym for typeset in bash. The typeset command is more widely supported (ksh and zsh also use it):
typeset "$var=$val"
In modern versions of bash, one should use a nameref.
declare -n var=x   # var is now a nameref to x
var=$val           # assigning through the nameref sets x
It's safer than eval, but still not perfect.
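As a minimal end-to-end sketch (variable names taken from the earlier examples; ref is a made-up nameref name), on bash 4.3 or newer:
var=x; val=foo
declare -n ref=$var    # ref refers to the variable whose name is stored in $var
ref=$val               # assigns through the nameref
echo "$x"              # --> foo
unset -n ref           # removes the nameref itself, leaving x untouched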
Bash has an extension to printf that saves its result into a variable:
printf -v "${VARNAME}" '%s' "${VALUE}"
This prevents all possible escaping issues.
If you use an invalid identifier for $VARNAME, the command will fail and return status code 2:
$ printf -v ';;;' '%s' foobar; echo $?
bash: printf: `;;;': not a valid identifier
2
eval "$var=\$val"
The argument to eval should always be a single string enclosed in either single or double quotes. All code that deviates from this pattern has some unintended behavior in edge cases, such as file names with special characters.
When the argument to eval is expanded by the shell, the $var is replaced with the variable name, and the \$ is replaced with a simple dollar. The string that is evaluated therefore becomes:
varname=$value
This is exactly what you want.
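As a small check of that claim, reusing the risky value from the question:
var=x; val=1$'\n'pwd
eval "$var=\$val"      # the string that gets evaluated is:  x=$val
printf '%s\n' "$x"     # prints "1" and "pwd" on two lines; pwd is never executed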
Generally, all expressions of the form $varname should be enclosed in double quotes, to prevent accidental expansion of filename patterns like *.c.
There are only two places where the quotes may be omitted since they are defined to not expand pathnames and split fields: variable assignments and case. POSIX 2018 says:
Each variable assignment shall be expanded for tilde expansion, parameter expansion, command substitution, arithmetic expansion, and quote removal prior to assigning the value.
This list of expansions is missing the pathname expansion and the field splitting. Sure, that's hard to see from reading this sentence alone, but that's the official definition.
Since this is a variable assignment, the quotes are not needed here. They don't hurt, though, so you could also write the original code as:
eval "$var=\"the value is \$val\""
Note that the second dollar is escaped using a backslash, to prevent it from being expanded in the first run. What happens is:
eval "$var=\"the value is \$val\""
The argument to the command eval is sent through parameter expansion and unescaping, resulting in:
varname="the value is $val"
This string is then evaluated as a variable assignment, which assigns the following value to the variable varname:
the value is value
The main point is that the recommended way to do this is:
eval "$var=\$val"
with the RHS done indirectly too. Since eval is used in the same environment, it will have $val bound, so deferring the expansion works, and at that point it is just a plain variable reference. Since the $val variable has a known name, there are no issues with quoting, and it could have even been written as:
eval $var=\$val
But since it's better to always add quotes, the former is better, or even this:
eval "$var=\"\$val\""
A better alternative in bash, already mentioned above, that avoids eval completely (and is not as subtle as declare etc.):
printf -v "$var" "%s" "$val"
Though this is not a direct answer to what I originally asked...
Newer versions of bash support something called "parameter transformation", documented in a section of the same name in bash(1).
"${value#Q}" expands to a shell-quoted version of "${value}" that you can re-use as input.
Which means the following is a safe solution:
eval="${varname}=${value#Q}"
Just for completeness I also want to suggest the possible use of the bash builtin read. I've also made corrections regarding -d '' based on socowi's comments.
But much care needs to be exercised when using read to ensure the input is sanitized (-d '' reads until NUL termination; note the space, since -d'' collapses to just -d and the next word would be taken as the delimiter; printf "...\0" terminates the value with a NUL), and that read itself is executed in the main shell where the variable is needed and not a sub-shell (hence the < <( ... ) syntax).
var=x; val=foo0shouldnotterminateearly
read -d '' -r "$var" < <(printf "$val\0")
echo $x # --> foo0shouldnotterminateearly
echo ${!var} # --> foo0shouldnotterminateearly
I tested this with \n, \t, \r, spaces, 0, etc., and it worked as expected on my version of bash.
The -r prevents read from treating backslashes specially, so if your value contains the two characters "\" and "n" rather than an actual newline, x will also contain those two characters "\" and "n".
This method may not be as aesthetically pleasing as the eval or printf solutions, but it is more useful when the value is coming from a file or another input file descriptor:
read -d '' -r "$var" < <(cat "$file")
And here are some alternative suggestions for the < <() syntax:
read -d '' -r "$var" <<< "$val"$'\0'
read -d '' -r "$var" < <(printf "$val") # Apparently I didn't even need the \0; the printf process ending was enough to trigger the read to finish.
read -d '' -r "$var" <<< $(printf "$val")
read -d '' -r "$var" <<< "$val"
read -d '' -r "$var" < <(printf "$val")
Yet another way to accomplish this, without eval, is to use "read":
INDIRECT=foo
read -d '' -r "${INDIRECT}" <<<"$(( 2 * 2 ))"
echo "${foo}" # outputs "4"

Newlines not quoted properly in ls -Q

Using ls -Q with --quoting-style=shell, newlines in file names (yes, I know...) are turned into ?. Is this a bug? Is there a way to get the file names in a format that's 100% compatible with a shell (sh or bash if possible)?
Example (bash):
$ touch a$'\n'b
$ for s in literal shell shell-always c c-maybe escape locale clocale ; do
ls -Q a?b --quoting-style=$s
done
a?b
'a?b'
'a?b'
"a\nb"
"a\nb"
a\nb
‘a\nb’
‘a\nb’
coreutils 8.25 has the new 'shell-escape' quoting style, and in fact enables it by default so that the output from ls is always usable and safe to copy and paste back into other commands.
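For example (a rough sketch, assuming coreutils 8.25 or newer; the exact quoting may differ slightly between versions), with the file from the question:
$ touch a$'\n'b
$ ls --quoting-style=shell-escape a?b
'a'$'\n''b'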
Maybe not quite what you are looking for, but the "escape" style seems to work well with the upcoming ${...@E} parameter expansion in bash 4.4.
$ touch $'a\nb' $'c\nd'
$ ls -Q --quoting-style=escape ??? | while IFS= read -r fname; do echo =="${fname@E}=="; done
==a
b==
==c
d==
Here is the relevant part of the man page (link is to the raw source):
${parameter@operator}
Parameter transformation. The expansion is either a transformation of the value of parameter or information about parameter itself, depending on the value of operator. Each operator is a single letter:
Q The expansion is a string that is the value of parameter quoted in a format that can be reused as input.
E The expansion is a string that is the value of parameter with backslash escape sequences expanded as with the $'...' quoting mechanism.
P The expansion is a string that is the result of expanding the value of parameter as if it were a prompt string (see PROMPTING below).
A The expansion is a string in the form of an assignment statement or declare command that, if evaluated, will recreate parameter with its attributes and value.
a The expansion is a string consisting of flag values representing parameter's attributes.
If parameter is @ or *, the operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the case modification operation is applied to each member of the array in turn, and the expansion is the resultant list.
The result of the expansion is subject to word splitting and pathname expansion as described below.
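For a quick feel of the Q and E operators outside of ls (bash 4.4 or newer; the sample strings are made up):
s=$'a\tb'
printf '%s\n' "${s@Q}"    # prints a quoted form such as $'a\tb', reusable as input
t='a\tb'
printf '%s\n' "${t@E}"    # expands the backslash escape, printing a<TAB>b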
From a bit of experimentation, it looks like --quoting-style=escape is compatible with being wrapped in $'...', with two exceptions:
it escapes spaces by prepending a backslash; but $'...' doesn't discard backslashes before spaces.
it doesn't escape single-quotes.
So you could perhaps write something like this (in Bash):
function ls-quote-shell () {
  ls -Q --quoting-style=escape "$@" \
    | while IFS= read -r filename ; do
        filename="${filename//'\ '/ }"    # unescape spaces
        filename="${filename//"'"/\'}"    # escape single-quotes
        printf "$'%s'\n" "$filename"
      done
}
To test this, I've created a directory with a bunch of filenames with weird characters; and
eval ls -l $(ls-quote-shell)
worked as intended . . . though I won't make any firm guarantees about it.
Alternatively, here's a version that uses printf to process the escapes followed by printf %q to re-escape in a shell-friendly manner:
function ls-quote-shell () {
  ls -Q --quoting-style=escape "$@" \
    | while IFS= read -r escaped_filename ; do
        escaped_filename="${escaped_filename//'\ '/ }"    # unescape spaces
        escaped_filename="${escaped_filename//'%'/%%}"    # escape percent signs
        # note: need to save in variable, rather than using command
        # substitution, because command substitution strips trailing newlines:
        printf -v filename "$escaped_filename"
        printf '%q\n' "$filename"
      done
}
but if it turns out that there's some case that the first version doesn't handle correctly, then the second version will most likely have the same issue. (FWIW, eval ls -l $(ls-quote-shell) worked as intended with both versions.)

how to pass asterisk into ls command inside bash script

Hi… Need a little help here…
I tried to emulate the DOS' dir command in Linux using bash script. Basically it's just a wrapped ls command with some parameters plus summary info. Here's the script:
#!/bin/bash
# default to current folder
if [ -z "$1" ]; then var=.;
else var="$1"; fi
# check file existence
if [ -a "$var" ]; then
# list contents with color, folder first
CMD="ls -lgG $var --color --group-directories-first"; $CMD;
# sum all files size
size=$(ls -lgGp "$var" | grep -v / | awk '{ sum += $3 }; END { print sum }')
if [ "$size" == "" ]; then size="0"; fi
# create summary
if [ -d "$var" ]; then
folder=$(find $var/* -maxdepth 0 -type d | wc -l)
file=$(find $var/* -maxdepth 0 -type f | wc -l)
echo "Found: $folder folders "
echo " $file files $size bytes"
fi
# error message
else
echo "dir: Error \"$var\": No such file or directory"
fi
The problem is that when the argument contains an asterisk (*), the ls within the script acts differently compared to the direct ls command given at the prompt. Instead of returning the whole file list, the script only returns the first file. See the video below for the comparison in action. I don't know why it behaves like that.
Does anyone know how to fix it? Thank you.
Video: problem in action
UPDATE:
The problem has been solved. Thank you all for the answers. Now my script works as expected. See the video here: http://i.giphy.com/3o8dp1YLz4fIyCbOAU.gif
The asterisk * is expanded by the shell when it parses the command line. In other words, your script doesn't get a parameter containing an asterisk, it gets a list of files as arguments. Your script only works with $1, the first argument. It should work with "$@" instead.
This is because when you retrieve $1 you assume the shell does NOT expand *.
In fact, when * (or another glob) matches, the shell expands it before your script runs, and each matching name is passed as a separate argument: $1, $2, etc.
You're lucky you simply got the first file. And because your script then uses $var unquoted, a file name containing spaces will be split further and cause errors.
Seriously, read this and especially this. Really.
And please don't do things like
CMD=whatever you get from user input; $CMD;
You are begging for trouble. Don't execute arbitrary strings coming from user input.
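If you want a rough idea of what to do instead, here is a hedged sketch (it reuses the ls flags from the script in the question; the loop over "$@" is the important part):
#!/bin/bash
for arg in "$@"; do                                    # every argument, spaces preserved
    cmd=(ls -lgG --color --group-directories-first -- "$arg")
    "${cmd[@]}"                                        # run the array without re-splitting it
done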
Both answers above already answer your question, so I'm just going to be a bit more verbose.
Your terminal is (probably) running the bash interpreter. This is the program that parses your input line(s) and does "things" based on your input.
When you enter a line, bash starts the following workflow:
parsing and lexical analysis
expansion
brace expansion
tilde expansion
variable expansion
arithmetic and other substitutions
command substitution
word splitting
filename generation (globbing)
removing quotes
Only after all of the above does bash
execute some external command, like ls or dir.sh, etc.,
or perform some "internal" action for the known keywords and builtins like echo, for, if, etc.
As you can see, the second to last step is filename generation (globbing). So, in your case, if test* matches some files, bash expands the wildcard characters (i.e. it does the globbing).
So,
when you enter dir.sh test*,
and test* matches some files,
bash does the expansion first
and only afterwards executes the command dir.sh with the already expanded filenames,
e.g. the script gets executed (in your case) as: dir.sh test.pas test.swift
BTW, it works exactly the same way for your ls example:
bash expands ls test* to ls test.pas test.swift,
then executes ls with the above two arguments,
and ls prints the result for the two arguments it received.
In other words, ls doesn't even see the test* argument; whenever possible, bash expands the wildcard characters (* and ?).
Now back to your script: add the following line after the shebang:
echo "$0 got these arguments: $@"
and you will immediately see which arguments your script was really executed with.
Also, in such cases it is good practice to try executing the script in debug mode, e.g.
bash -x dir.sh test*
and you will see exactly what the script does.
You can also do the same for your current interpreter, e.g. just enter this in the terminal:
set -x
and try running dir.sh test* again; you will see how bash executes the dir.sh command. (To stop debug mode, just enter set +x.)
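For illustration only (file names taken from this answer; the exact trace format can vary between bash versions), the trace looks roughly like:
$ set -x
$ dir.sh test*
+ dir.sh test.pas test.swift
The + line shows that dir.sh is already being called with the expanded file names.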
Everybody is giving you valuable advice which you should definitely follow!
But here is the real answer to your question.
To pass unexpanded arguments to any executable you need to single quote them:
./your_script '*'
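For example, assuming the same test.pas and test.swift files from the answer above:
$ ls
test.pas  test.swift
$ ./your_script test*      # the script receives: test.pas test.swift
$ ./your_script 'test*'    # the script receives the single literal argument: test*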
The best solution I have is to use the eval command, in this way:
#!/bin/bash
cmd="some command \"with_quetes_and_asterisk_in_it*\""
echo "$cmd"
eval $cmd
The eval command takes its arguments and evaluates them as a command, just as the shell does.
This solves my problem when I need to call a command with an asterisk '*' in it from a script.

How can I use a shell command parameter as an argument? [duplicate]

I often find myself using mv to rename a file. E.g.
mv app/models/keywords_builder.rb app/models/keywords_generator.rb
Doing so I need to write (ok, tab complete) the path for the second parameter. In this example it isn't too bad but sometimes the path is deeply nested and it seems like quite a bit of extra typing.
Is there a more efficient way to do this?
And another way: brace expansion.
mv app/models/keywords_{builder,generator}.rb
In general,
before{FIRST,SECOND}after
expands to
beforeFIRSTafter beforeSECONDafter
So it's also useful for other renames, e.g.
mv somefile{,.bak}
expands to
mv somefile somefile.bak
It works in bash and zsh.
More examples:
Eric Bergen > Bash Brace Expansion
Bash Brace Expansion | Linux Journal
You can use history expansion like this:
mv app/models/keywords_builder.rb !#^:h/keywords_generator.rb
! introduces history expansion.
# refers to the command currently being typed
^ means the first argument
:h is a modifier to get the "head", i.e. the directory without the file part
It's supported in bash and zsh.
Docs:
bash history expansion
zsh history expansion
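Put together, a worked reading of the example above (paths from the question; the comments are just annotations):
mv app/models/keywords_builder.rb !#^:h/keywords_generator.rb
# !#^  -> app/models/keywords_builder.rb   (the first argument typed so far on this line)
# :h   -> app/models                       (keep only the "head", i.e. the directory part)
# so the line expands to:
# mv app/models/keywords_builder.rb app/models/keywords_generator.rb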
One way is to type the first file name and a space, then press Ctrl+w to delete it. Then press Ctrl+y twice to get two copies of the file name. Then edit the second copy.
For example,
mv app/models/keywords_builder.rb <Ctrl+W><Ctrl+Y><Ctrl+Y><edit as needed>
or cd app/models && mv keywords_builder.rb keywords_generator.rb && cd -
Combined answers of Mikel and geekosaur with additional use of ":p"
use brace expansion to avoid first argument repetition:
mv -iv {,old_}readme.txt # 'readme.txt' -> 'old_readme.txt'
mv -iv file{,.backup} # 'file' -> 'file.backup'
use history expansion to avoid first argument repetition:
mv -iv "system file" !#$.backup # 'system file' -> 'system file.backup'
the resulting command can be printed, without executing it, using the ":p" modifier for further editing:
mv -iv "file with a long name" !#$:p
then press "↑" to edit the command

A way to get run directory name

I can't understand the following code in a bash script.
set `pwd` ; mfix=$1
It actually gets the run directory name, but I don't know how it works.
What does the set command do?
From the docs for set:
This builtin is so complicated that it deserves its own section. set
allows you to change the values of shell options and set the
positional parameters, or to display the names and values of shell
variables.
e.g.
set v1 v2 v3 ; echo $1
will print
v1
The command inside backticks is called "command substitution". From the docs:
Bash performs the expansion by executing command and replacing the
command substitution with the standard output of the command, with any
trailing newlines deleted.
In your example, it sets the 1st positional argument $1 to the result of executing the command inside the backticks (command substitution). The command is pwd, which prints the current working directory.
Anyway, if the path to the directory contains a space, $1 will get only the first part of the path, e.g.
$ pwd
/some/path with/space
$ set `pwd`
$ echo $1
/some/path
$ echo $2
with/space
Finally, all of the above is a strange design, because you can simply write:
mfix=$(pwd) #old school: mfix=`pwd`
It is better to use $(command) instead of backticks.
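A small sketch of why (the nested basename/dirname line is just an illustration, not from the original code):
mfix="$(pwd)"
echo "$mfix"                                  # the whole path, even if it contains spaces
parent="$(basename "$(dirname "$(pwd)")")"    # $(...) nests without any extra escaping
echo "$parent"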
This code in bash puts the result of the command pwd in the variable mfix.
You can print the value of the mfix variable by running
echo $mfix
