Escaping characters in bash - linux

I use thunar as a file manager
I want to use "custom actions" on certain files (Thunar --> Edit --> Configure Custom Actions...)
the command I use is: xfce4-terminal -e "md5sum '%F'" --hold
This works fine, except when the file path or file name contains a space. It then doesn't work as intended because the file cannot be found.
I think this is because the spaces in the file path are not automatically escaped
How do I solve this problem?
Thank you in advance

Seems like Thunar replaces %F with (potentially multiple) correctly quoted paths. Putting this inside quotes will ruin the already perfect quoting. From https://docs.xfce.org/xfce/thunar/custom-actions
Never quote field codes
You need a way to pass an argument list to a command running inside xfce4-terminal. Luckily man xfce4-terminal lists:
-x, --execute Execute the remainder of the command line inside the terminal
Therefore, try
xfce4-terminal --hold -x md5sum %F
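A quick way to see the difference is a throwaway function that prints each argument it receives (printargs is a name made up for this sketch):
printargs() { for a in "$@"; do printf '<%s>\n' "$a"; done; }
printargs "md5sum 'file one'"    # one argument: the whole string, quotes and all
printargs md5sum 'file one'      # two arguments: md5sum, then file one (space intact)
With -e "md5sum '%F'" everything collapses into a single string that has to be re-parsed, while with -x md5sum %F each substituted path stays a separate, intact argument.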

You can easily escape a character by placing a \ in front of it.

Related

Bash: How to split a command line into a list of arguments verbatim?

For example, when given a command line:
ls "aaa bbb"
I'd like to have a list of arguments:
args[0]='ls'
args[1]='"aaa bbb"'
Is this possible with Bash and common UNIX utilities (sed, awk, xargs, etc.)?
Please note that the arguments must be verbatim. Below is NOT a correct answer:
args[0]='ls'
args[1]='aaa bbb'
This is not possible because the shell always performs quote removal before it executes a command. In consequence, commands will never see (be able to access) the removed quotes.
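To see the quote removal happen, here is a minimal sketch using a helper that reports its argument count and each argument verbatim (showargs is a made-up name):
showargs() { printf 'argc=%d\n' "$#"; for a in "$@"; do printf '<%s>\n' "$a"; done; }
showargs ls "aaa bbb"
# argc=2
# <ls>
# <aaa bbb>    <- the double quotes are already gone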
This sounds like an XY-problem. Why do you think you need the original quoting? What is your actual problem?

Shell Script Not Executing Asterisk When Part Of Filename

So the use case is pretty simple, if I run the command in the terminal like this
myCommand -input file_prefix* -output outputFolder
It will grab all of my files with that prefix inside the directory, run the command, and put the result into the output folder. Pretty easy on its own. However, if I copy-paste that exact command into a shell script to be run from PHP or similar, it says file_prefix* no file found. This is because it is reading the asterisk as a literal character and not as the wildcard. I have tried wrapping the asterisk in single quotes, double quotes, etc. Any idea how this can be done? I have seen a few examples of people echoing the asterisk, but none using it the way I am, as part of a file prefix. Thanks for any help!
It looks like the proper answer to this was actually to put my file_prefix into quotes and to leave the asterisk alone. Final working copy
myCommand -input 'file_prefix'* -output outputFolder
my file_prefix was an input param, so the actual final code is
myCommand -input "$1"* -output "$2"
(note the double quotes rather than single quotes: $1 and $2 do not expand inside single quotes)
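For illustration, a minimal sketch of the three quoting variants (the prefix and file names are made up):
touch data_1.txt data_2.txt
prefix=data_
ls "$prefix"*    # expands the variable, then globs: lists both files
ls "$prefix*"    # glob quoted too: looks for a literal file named data_*
ls $prefix*      # works here, but would word-split if $prefix contained spaces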

Why control characters appended after bash command?

I used the following bash command to append several lines to multiple configuration files:
> for filename in *.ovpn; do
>   printf 'script-security 2\nup /etc/openvpn/update-resolv-conf\ndown /etc/openvpn/update-resolv-conf\n' >> "$filename";
> done
However, the control character ^M then appeared at the end of each line when I opened the configuration files in vim; before running the command, the files showed no ^M.
I am curious why ^M appears at the end of each line. Thanks.
It is Windows' carriage return; use dos2unix to convert the file. Vim recognizes the file format and displays it accordingly.
The ^M can also be removed via a regular expression in vim, if dos2unix isn't available:
:%s/^M//g
where the ^M is typed by pressing Ctrl+V and then Ctrl+M (i.e. Esc, then :%s/<Ctrl+V><Ctrl+M>//g).
Not sure why this occurred for you with just a simple printf command on a Linux system; maybe check that you're picking up the correct version of printf. I've given this a go on a Linux system, and the local printf keeps the correct line endings, as you would expect.
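If you want to verify whether a file really contains carriage returns, two standard checks (myconfig.ovpn is a placeholder name):
grep -c $'\r' myconfig.ovpn    # number of lines containing a CR; 0 means pure LF endings
od -c myconfig.ovpn | head     # CRLF endings show up as \r \n at the end of lines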

Bash script prints "Command Not Found" on empty lines

Every time I run a script using bash scriptname.sh from the command line in Debian, I get a Command Not Found message and then the result of the script.
The script works, but a Command Not Found message is printed for each blank line.
I am running the script from the /var folder.
Here is the script:
#!/bin/bash
echo Hello World
I run it by typing the following:
bash testscript.sh
Why would this occur?
Make sure your first line is:
#!/bin/bash
Enter your path to bash if it is not /bin/bash
Try running:
dos2unix script.sh
That will convert line endings etc. from Windows to Unix format, i.e. it strips \r (CR) from line endings to change them from \r\n (CR+LF) to \n (LF).
More details about the dos2unix command (man page)
Another way to tell if your file is in dos/Win format:
cat scriptname.sh | sed 's/\r/<CR>/'
The output will look something like this:
#!/bin/sh<CR>
<CR>
echo Hello World<CR>
<CR>
This will output the entire file text with <CR> displayed for each \r character in the file.
You can use bash -x scriptname.sh to trace it.
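For the two-line script above, the trace would look roughly like this (assuming the default PS4 prompt of +):
$ bash -x testscript.sh
+ echo Hello World
Hello World
A CR-terminated blank line, by contrast, shows up in the trace as a command of its own, typically rendered as $'\r', which points straight at the line-ending problem.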
I also ran into a similar issue; it turned out to be permissions. If you do an ls -l, you may see that your file does NOT have the execute bit turned on, which prevents the script from executing. :)
As @artooro added in a comment:
To fix that issue run chmod +x testscript.sh
This might be trivial and not related to the OP's question, but I often made this mistake at the beginning when I was learning scripting:
VAR_NAME = $(hostname)
echo "the hostname is ${VAR_NAME}"
This will produce a 'command not found' response. The correct way is to eliminate the spaces:
VAR_NAME=$(hostname)
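For context, bash parses the spaced version as a command invocation: VAR_NAME is taken as a program name, with = and the hostname as its arguments, so (assuming nothing called VAR_NAME is on the PATH) you get:
VAR_NAME = $(hostname)
# bash: VAR_NAME: command not found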
On Bash for Windows I've tried incorrectly to run
run_me.sh
without ./ at the beginning and got the same error.
For people with Windows background the correct form looks redundant:
./run_me.sh
If the script does its job (relatively) well, then it's running okay. Your problem is probably a single line in the file referencing a program that's either not on the path, not installed, misspelled, or something similar.
One way is to place a set -x at the top of your script or run it with bash -x instead of just bash. This will output the lines before executing them, and you usually just need to look at the output immediately before the error to see what's causing the problem.
If, as you say, it's the blank lines causing the problems, you might want to check what's actually in them. Run:
od -xcb testscript.sh
and make sure there's no "invisible" funny characters like the CTRL-M (carriage return) you may get by using a Windows-type editor.
use dos2unix on your script file.
To execute the script, you must provide its full path, for example:
/home/Manuel/mywrittenscript
Try chmod u+x testscript.sh
I know it from here:
http://www.linuxquestions.org/questions/red-hat-31/running-shell-script-command-not-found-202062/
If you have Notepad++ and you get this .sh error message: "command not found",
or this autoconf error message: "line 615:
../../autoconf/bin/autom4te: No such file or directory":
in Notepad++, go to Edit -> EOL Conversion and select Unix (LF).
This will convert the line endings of your files. I also encourage you to convert all affected files this way,
because otherwise such an error will soon occur again.
Had the same problem. Unfortunately
dos2unix winfile.sh
bash: dos2unix: command not found
so I did this to convert.
awk '{ sub("\r$", ""); print }' winfile.sh > unixfile.sh
and then
bash unixfile.sh
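If GNU sed is available, the same conversion can be done in place (the -i flag shown here is GNU sed; BSD/macOS sed wants an argument after -i):
sed -i 's/\r$//' winfile.sh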
Problems with running scripts may also be connected to bad formatting of multi-line commands, for example if you have a whitespace character after the line-continuation backslash ("\"). E.g. this:
./run_me.sh \
--with-some parameter
(note the extra space after "\") will cause problems, but when you remove that space, it will run perfectly fine.
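Trailing whitespace like that is easy to hunt down mechanically, for example (script.sh being whatever file you suspect):
grep -n '[[:blank:]]$' script.sh    # lists lines ending in a space or tab
cat -A script.sh                    # GNU coreutils: marks line ends with $, tabs with ^I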
I was also getting Cannot execute command errors. Everything looked correct, but in fact I had a non-breaking space right before my command, which was of course impossible to spot with the naked eye:
if [[ "true" ]]; then
    highlight --syntax js "var i = 0;"
fi
In Vim the line looked identical, since a non-breaking space renders just like a regular space. Only after running the Bash script checker shellcheck did I find the problem.
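Besides shellcheck, non-ASCII bytes such as a UTF-8 non-breaking space (0xC2 0xA0) can also be searched for directly, e.g. (assuming the file is UTF-8 encoded):
grep -n $'\xc2\xa0' script.sh    # prints any line containing a non-breaking space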
I ran into this today after absentmindedly copying the dollar sign of the command prompt ($, ahead of a command string) into the script.
Make sure you haven't overridden the PATH variable by mistake, like this:
#!/bin/bash
PATH="/home/user/Pictures/"; # do NOT do this
This was my mistake.
Add the current directory ( . ) to PATH to be able to execute a script that resides in the current directory just by typing its name:
PATH=.:$PATH
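A minimal sketch (testscript.sh assumed to be executable and in the current directory; note that putting . on PATH is generally discouraged for security reasons):
PATH=.:$PATH
testscript.sh    # now found without the ./ prefix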
You may want to update your .bashrc and .bash_profile files with aliases to recognize the command you are entering.
.bashrc and .bash_profile are hidden files in your home directory.

rm fails to delete files by wildcard from a script, but works from a shell prompt

I've run into a really silly problem with a Linux shell script. I want to delete all files with the extension ".bz2" in a directory. In the script I call
rm "$archivedir/*.bz2"
where $archivedir is a directory path. Should be pretty simple, shouldn't it? Somehow, it manages to fail with this error:
rm: cannot remove `/var/archives/monthly/April/*.bz2': No such file or directory
But there is a file in that directory called test.bz2 and if I change my script to
echo rm "$archivedir/*.bz2"
and copy/paste the output of that line into a terminal window the file is removed successfully. What am I doing wrong?
TL;DR
Quote only the variable, not the whole expected path with the wildcard
rm "$archivedir"/*.bz2
Explanation
In Unix, programs generally do not interpret wildcards themselves. The shell interprets unquoted wildcards, and replaces each wildcard argument with a list of matching file names.
You can disable this process by quoting the wildcard character, using double or single quotes, or a backslash before it. However, that's not what you want here - you do want the wildcard expanded to the list of files that it matches.
Be careful about writing rm $archivedir/*.bz2 (without quotes). The word splitting (i.e., breaking the command line up into arguments) happens after $archivedir is substituted. So if $archivedir contains spaces, then you'll get extra arguments that you weren't intending. Say archivedir is /var/archives/monthly/April to June. Then you'll get the equivalent of writing rm /var/archives/monthly/April to June/*.bz2, which tries to delete the files "/var/archives/monthly/April", "to", and all files matching "June/*.bz2", which isn't what you want.
The correct solution is to write:
rm "$archivedir"/*.bz2
Your original line
rm "$archivedir/*.bz2"
Can be re-written as
rm "$archivedir"/*.bz2
to achieve the same effect. The wildcard expansion is not taking place properly in your existing setup. By shifting the double-quote to the "front" of the file path (which is legitimate) you avoid this.
Just to expand on this a bit, bash has fairly complicated rules for dealing with metacharacters in quotes. In general
almost nothing is interpreted in single-quotes:
echo '$foo/*.c' => $foo/*.c
echo '\\*' => \\*
shell substitution is done inside double quotes, but file metacharacters aren't expanded:
foo=hello; echo "$foo/*.c" => hello/*.c
everything inside backquotes is run in a subshell. In the first command below, BAR=bye is a temporary assignment that only applies to the environment of echo itself, and the command substitution is expanded before that assignment takes effect, so it echoes blank; in the second and third, BAR is set in the shell before the substitution runs, so both echo "bye":
BAR=bye echo `echo $BAR`
BAR=bye; echo `echo $BAR`
export BAR=bye; echo `echo $BAR`
The quotes are causing the string to be interpreted as a string literal; try removing them.
I've seen similar errors when calling a shell script like
./shell_script.sh
from another shell script. This can be fixed by invoking it as
sh shell_script.sh
Why not just rm -rf */*.bz2? Works for me on OSX.
