On Linux systems, when you type a command in a shell like rm * -rf, the order of the * and the -rf doesn't matter; my shell interprets it the same way. Now, on my Mac, when I type rm -rf * everything works fine, but if I do rm * -rf I get an error: rm: -rf: No such file or directory
I tried this on macOS and on Linux, with both the fish and bash shells. Same results.
Does anyone have any idea why, on macOS, the -rf at the end of the command is not interpreted as options to the command?
It's not about the shell; it's about the commands.
Parsing command-line arguments is not a feature or responsibility of the shell, but of the actual commands.
On both systems the shell faithfully passes the command-line arguments in whatever order they were specified, and then it's up to the implementation of each command to parse them as it sees fit.
On Linux, the core utilities are typically the GNU implementations,
while on macOS, the core utilities are typically the BSD implementations.
The man page of each command should tell you which implementation it is.
For example, the last line of man rm on Linux is something like this:
GNU coreutils 8.21 March 2016 RM(1)
On macOS:
BSD January 28, 1999 BSD
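A quick way to see the GNU behavior in action (a sketch on a Linux box; POSIXLY_CORRECT is the documented GNU convention for disabling option permutation, and aa/bb are throwaway file names):
$ touch aa bb
$ rm aa -f                    # GNU rm permutes the arguments: -f is an option
$ POSIXLY_CORRECT=1 rm bb -f  # permutation disabled: -f is treated as a file name
rm: cannot remove '-f': No such file or directory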
Argument order has historically been significant in Unix.
Incidentally, rm even has a -- option to stop option parsing (so that you can remove files whose names start with -, for example; see below).
See the rm(1) and getopt(3) man pages.
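For instance, to remove a file literally named -rf (a hypothetical file, purely for illustration):
$ touch -- -rf   # create a file named -rf
$ rm -- -rf      # -- ends option parsing, so -rf is treated as a file name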
If the shell didn't respect the order of the arguments it is given, what would the result be of this sequence:
$ touch a b
$ mv a b
Which file would remain?
I need to execute rm -rf on a remote machine running Ubuntu in order to clear a specified folder.
If I use a command like the following, everything goes fine:
rm -rf "home/blahblah/blah"/*
But if I run the same command from PowerShell on Linux, ALL files get removed.
Is there any way to specify the path so that it is handled the same way in bash and PS? Thank you!
tl;dr
Unfortunately, as of PowerShell 7.0, you must do the following (if you want to use the external rm utility):
sh -c "rm -rf 'home/blahblah/blah'/*"
Note that I've switched to single-quoting ('...') around the path, so I could use it inside a double-quoted ("...") string. While the reverse ('... "..."/*') should work as-is, it currently requires additional escaping ('... \"...\"/*') - see this answer.
However, if the path preceding the /* doesn't actually need quoting (notably, if it doesn't contain spaces), you can simply call:
rm -rf home/blahblah/blah/*
You're seeing a very unfortunate difference between PowerShell (as of 7.0) and POSIX-like shells such as Bash with respect to the handling of string arguments composed of both quoted and unquoted parts.
PowerShell parses "home/blahblah/blah"/* as two arguments:
home/blahblah/blah becomes the first argument, as-is in this case.
/* is interpreted as the second argument; because it is unquoted and an external utility is being called, it triggers PowerShell's emulation of the globbing (filename expansion) behavior of POSIX-like shells, which means that all files and directories in the root directory are passed as individual arguments.
Therefore, the arguments that the rm utility actually receives are: -rf, home/blahblah/blah, /bin, /boot, /dev, ... - which is clearly not the intent.
This problematic behavior is discussed in this GitHub issue.
Passing the command to sh, the default shell on Unix-like platforms, instead, bypasses the problem.
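If you want to verify what the external utility actually receives, one debugging sketch (run from PowerShell; the embedded printf simply prints each argument it is handed on its own line) is:
sh -c 'printf "[%s]\n" "$@"' sh -rf "home/blahblah/blah"/*
Given the problem described above, this prints [-rf], [home/blahblah/blah], and then one bracketed entry per item in the root directory.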
I have the following script, created by some self-proclaimed bash expert:
SCRIPT_LOCATION="$(readlink -f "$0")"
SCRIPT_DIRECTORY="$(dirname "${SCRIPT_LOCATION}")"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"
That runs nicely on my local Ubuntu 16.04. Now I wanted to use it on our RH 7.2 servers, and there I got an error message from readlink about being called with bad parameters.
Then I figured out that on Ubuntu, $0 gives "bash", whereas on RH it gives "-bash".
EDIT: the script is invoked as . ourscript.sh
Questions:
Any idea why that is?
When I change my script to use a hardcoded readlink -f bash, the whole thing works. Are there "better" ways of fixing this?
Feel free to also explain what readlink -f bash is actually doing ;-)
As the script is sourced, readlink -f $0 is pointless: it will just show you the command that was used to start the shell you are currently running.
To explain the difference, let's look at the bash man page:
A login shell is one whose first character of argument zero is a -, or one started with the --login option.
When bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
So presumably on Ubuntu your shell was started as a non-login shell (argument zero is plain bash), while on RH it was started as a login shell (hence the leading -).
As for readlink, we can again look at the man page:
-f, --canonicalize
canonicalize by following every symlink in every component of the given name recursively; all but the last component must exist
Therefore it follows every symlink to resolve the canonical path.
Using readlink -f with any unqualified path will just append that argument to your current working directory, which does not actually show where the script is run from.
Try putting any random string after it instead of bash and you will see the script is unaffected, e.g.:
readlink -f dafsfdsf
Returns
/home/me/testscript/dafsfdsf
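If the goal is to locate the directory of the sourced script itself, here is a sketch of a sourcing-safe variant (bash-specific, since it relies on the BASH_SOURCE array rather than $0):
SCRIPT_LOCATION="$(readlink -f "${BASH_SOURCE[0]}")"
SCRIPT_DIRECTORY="$(dirname "${SCRIPT_LOCATION}")"
export PYTHONPATH="${PYTHONPATH}:${SCRIPT_DIRECTORY}/util"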
My shell script is failing on SUSE Linux because the stream-redirection operator I used (&>>) is not working there (it works fine in other distributions). How can I correct this? I would also like to know the standard way of doing the same thing that is supported by all distributions.
The operator you were using is a bash extension rather than standard Bourne shell syntax:
ls &>> file
This command appends both stdout and stderr to the end of file.
Another way to write it, which works in any Bourne-style shell, is:
ls >> file 2>&1
This second form is recognized by more shells; for instance, ksh recognizes it but not the first.
With csh or a csh-like shell you will need to use this syntax:
ls >>& file
Edit: I was confused because, depending on the shell, you can use &>> or >>&, which are not the same.
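A quick check of the portable form (the error wording below is GNU ls output; it varies by system):
$ ls /nonexistent >> file 2>&1
$ cat file
ls: cannot access '/nonexistent': No such file or directory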
After using zsh for some time along with the oh-my-zsh framework, I noticed that the which command behaves differently in zsh than in bash.
What I mean:
# on zsh
ilias@ilias-pc ~ ➜ which ls
ls: aliased to ls --color=auto
ilias@ilias-pc ~ ➜ which which
which: shell built-in command
ilias@ilias-pc ~ ➜ bash
[ilias@ilias-pc ~]$ which ls
/usr/bin/ls
[ilias@ilias-pc ~]$ which which
/usr/bin/which
[ilias@ilias-pc ~]$
Why does this happen and how can I "fix" it?
P.S. I can reproduce this on Arch Linux (not sure whether it matters, but I mention it).
$ zsh -c 'type which'
which is a shell builtin
$ bash -c 'type which'
which is /usr/bin/which
The solution is to not use which(1), which is a non-standard and not very useful command. The question of what you should use instead isn't the most straightforward due to the alternatives being poorly specified and inconsistently implemented, but they are better than which.
Depending on your requirements, you want command (see the -v option), type, or whence. See POSIX for the former two, or your shell manual for the latter. (Bash doesn't support whence, but it is supported by most other ksh derivatives, albeit inconsistently. It typically has the most features).
In zsh, which is equivalent to whence -c (which shows aliases and function definitions), not whence -p (which reports the path of the executable). If you want to change that, make an alias.
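For example, using the POSIX-specified command -v instead (shown here in an interactive bash session; the exact output format varies between shells):
$ command -v ls        # an alias prints as its definition
alias ls='ls --color=auto'
$ command -v rm        # a plain executable prints as a path
/usr/bin/rm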
Question: I get this error message:
export: bad interpreter: No such file or directory
when I execute this bash script:
#!/bin/bash
MONO_PREFIX=/opt/mono-2.6
GNOME_PREFIX=/opt/gnome-2.6
export DYLD_LIBRARY_PATH=$MONO_PREFIX/lib:$DYLD_LIBRARY_PATH
export LD_LIBRARY_PATH=$MONO_PREFIX/lib:$LD_LIBRARY_PATH
export C_INCLUDE_PATH=$MONO_PREFIX/include:$GNOME_PREFIX/include
export ACLOCAL_PATH=$MONO_PREFIX/share/aclocal
export PKG_CONFIG_PATH=$MONO_PREFIX/lib/pkgconfig:$GNOME_PREFIX/lib/pkgconfig
PATH=$MONO_PREFIX/bin:$PATH
PS1="[mono-2.6] \w # "
But the bash path seems to be correct:
asshat@IS1300:~/sources/mono-2.6# which bash
/bin/bash
asshat@IS1300:~# cd sources/
asshat@IS1300:~/sources# cd mono-2.6/
asshat@IS1300:~/sources/mono-2.6# ./mono-2.6-environment
export: bad interpreter: No such file or directory
asshat@IS1300:~/sources/mono-2.6# ls
download mono-2.4 mono-2.4-environment mono-2.6 mono-2.6-environment
asshat@IS1300:~/sources/mono-2.6# cp mono-2.6-environment mono-2.6-environment.sh
asshat@IS1300:~/sources/mono-2.6# ./mono-2.6-environment.sh
export: bad interpreter: No such file or directory
asshat@IS1300:~/sources/mono-2.6# ls
download mono-2.4-environment mono-2.6-environment
mono-2.4 mono-2.6 mono-2.6-environment.sh
asshat@IS1300:~/sources/mono-2.6# bash mono-2.6-environment
asshat@IS1300:~/sources/mono-2.6#
What am I doing wrong? Or is this a Lucid Lynx bug?
I did chmod +x.
The first line, #!/bin/bash, tells Linux where to find the interpreter. The script should also be executable with chmod +x script.sh, which it appears you did.
It is highly likely that you created this file with a Windows editor, which will place a <cr><lf> at the end of each line; this is the standard under DOS / Windows. (Classic Mac OS used a lone <cr> at the end of each line; modern macOS, like Unix / Linux, uses just a <lf>.)
Linux is therefore looking for a file called /bin/bash<cr> to interpret the script, where <cr> is a carriage return character, which is a valid file-name character under Linux. No such file exists. Hence the error.
Solution: Edit the file with an editor on Linux and get rid of the extra <cr> characters. One tool that usually works when the file was edited on Windows is dos2unix.
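A quick way to confirm and fix this (the exact wording printed by file varies by version):
$ file mono-2.6-environment
mono-2.6-environment: ASCII text, with CRLF line terminators
$ dos2unix mono-2.6-environment
$ sed -i 's/\r$//' mono-2.6-environment   # GNU sed alternative if dos2unix is not installed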
Could the script be using DOS newlines?
Try running dos2unix on it.
It looks like things have been configured to override the export builtin somehow. This can be done via an exported function or the enable builtin, for example. Try putting type export in the script to check. If you are setting BASH_ENV, you probably shouldn't.
If bash is called as sh, it enables POSIX mode and does not allow export to be overridden with a function, as required by POSIX. Likewise, most other shells installed as /bin/sh follow POSIX in this and/or do not allow the execution environment of a script to be messed up so strongly as through importing functions from the environment.
By the way, the script seems designed to be sourced, i.e. . ./mono-2.6-environment instead of ./mono-2.6-environment.
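A minimal check (in an unmodified bash, type export reports the builtin; any other output points to an override):
$ type export
export is a shell builtin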
Had the same problem. Used brute force:
/bin/sh /full/path/to/configure --options
and this did the trick.
(Of course, I'd like to know why.)
I encountered a similar error, but in my case I had forgotten the / before bin. I also tried installing the dos2unix package:
sudo apt-get install -y dos2unix
I was originally using this:
#! bin/bash (I was missing the / before bin)
Double-check the path as well.
This could be a case of a shebang containing homoglyphic Unicode characters. In other words, you may have invisible or look-alike characters in the shebang which don't actually represent the string #!/bin/bash. Try looking at the characters in a hex editor.
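For example, dump the first line as bytes (script.sh is a placeholder name; a clean shebang consists of exactly these ASCII bytes):
$ head -n 1 script.sh | hexdump -C
00000000  23 21 2f 62 69 6e 2f 62  61 73 68 0a              |#!/bin/bash.|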
What worked for me, when dos2unix wasn't on the system I was working with:
sed -i 's/^M$//' filename
where ^M is entered by typing Ctrl+V then Ctrl+M (a literal carriage return). With GNU sed you can write it as sed -i 's/\r$//' filename instead.
This happens sometimes when the file system goes funny.
Try to move or rename the file.
If you see a "Stale file handle" error, this is your problem.
This happened to us with a CentOS Docker container, for example:
$ ./test.sh
-bash: ./test.sh: /bin/bash: bad interpreter: Invalid argument
$ ls -alstr test.sh
20 -r-xr-xr-x 0 omen omen 17874 Jun 20 01:36 test.sh
$ cp test.sh testcopy.sh
$ ./testcopy.sh
Happy Days
$ mv test.sh footest.sh
mv: cannot move ‘test.sh’ to ‘footest.sh’: Stale file handle
$ rm test.sh
rm: cannot remove ‘test.sh’: Stale file handle
You can copy the file and read it.
But not move it!
Nor remove it.
Some weird Docker file-system thing, maybe.
Solution: re-create the Docker container, or maybe a file-system repair would help.
Or, of course, format c: :-D :-o