Question: I get this error message:
export: bad interpreter: No such file or directory
when I execute this bash script:
#!/bin/bash
MONO_PREFIX=/opt/mono-2.6
GNOME_PREFIX=/opt/gnome-2.6
export DYLD_LIBRARY_PATH=$MONO_PREFIX/lib:$DYLD_LIBRARY_PATH
export LD_LIBRARY_PATH=$MONO_PREFIX/lib:$LD_LIBRARY_PATH
export C_INCLUDE_PATH=$MONO_PREFIX/include:$GNOME_PREFIX/include
export ACLOCAL_PATH=$MONO_PREFIX/share/aclocal
export PKG_CONFIG_PATH=$MONO_PREFIX/lib/pkgconfig:$GNOME_PREFIX/lib/pkgconfig
PATH=$MONO_PREFIX/bin:$PATH
PS1="[mono-2.6] \w # "
But the bash path seems to be correct:
asshat#IS1300:~/sources/mono-2.6# which bash
/bin/bash
asshat#IS1300:~# cd sources/
asshat#IS1300:~/sources# cd mono-2.6/
asshat#IS1300:~/sources/mono-2.6# ./mono-2.6-environment
export: bad interpreter: No such file or directory
asshat#IS1300:~/sources/mono-2.6# ls
download mono-2.4 mono-2.4-environment mono-2.6 mono-2.6-environment
asshat#IS1300:~/sources/mono-2.6# cp mono-2.6-environment mono-2.6-environment.sh
asshat#IS1300:~/sources/mono-2.6# ./mono-2.6-environment.sh
export: bad interpreter: No such file or directory
asshat#IS1300:~/sources/mono-2.6# ls
download mono-2.4-environment mono-2.6-environment
mono-2.4 mono-2.6 mono-2.6-environment.sh
asshat#IS1300:~/sources/mono-2.6# bash mono-2.6-environment
asshat#IS1300:~/sources/mono-2.6#
What am I doing wrong? Or is this a Lucid Lynx bug?
I did chmod +x.
The first line, #!/bin/bash, tells Linux which interpreter to use. The script also needs to be executable (chmod +x script.sh), which it appears you did.
It is highly likely that you created this file with a Windows editor, which places a <cr><lf> at the end of each line. That is the standard under DOS / Windows. Classic Mac OS placed a <cr> at the end of each line. Under Unix / Linux, however, the standard is to put just a <lf> at the end of the line.
Linux is now looking for a file called /bin/bash<cr> to interpret the file,
where <cr> is a carriage-return character, which is a valid filename character under Linux. No such file exists, hence the error.
Solution: edit the file with an editor on Linux and get rid of the extra <cr>. One tool that usually works when the file was edited on Windows is dos2unix.
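For example, a quick check-and-fix, assuming either dos2unix or GNU sed is available (the file name is the one from the question):
head -n 1 mono-2.6-environment | cat -v    # a trailing ^M means the lines end in <cr><lf>
dos2unix mono-2.6-environment              # strips the <cr> characters in place
sed -i 's/\r$//' mono-2.6-environment      # equivalent fix with GNU sed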
Could the script be using DOS newlines?
Try running dos2unix on it.
It looks like things have been configured to override the export builtin somehow. This can be done via an exported function or the enable builtin, for example. Try putting type export in the script to check. If you are setting BASH_ENV, you probably shouldn't.
If bash is called as sh, it enables POSIX mode and does not allow export to be overridden with a function, as required by POSIX. Likewise, most other shells installed as /bin/sh follow POSIX here and/or do not allow a script's execution environment to be disrupted as badly as it is by importing functions from the environment.
By the way, the script seems designed to be sourced, i.e. . ./mono-2.6-environment instead of ./mono-2.6-environment.
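A minimal sketch of sourcing it so the exported variables persist in your current shell (paths taken from the question):
cd ~/sources/mono-2.6
. ./mono-2.6-environment       # or: source ./mono-2.6-environment
echo "$PKG_CONFIG_PATH"        # the exported values are now visible in this shell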
Had the same problem. Used brute force:
/bin/sh /full/path/to/configure --options
and this did the trick.
(Of course I'd like to know why.)
I encountered a similar error, but in my case I had forgotten the / before bin, which also produces the bad interpreter error. I also tried installing the dos2unix package with sudo apt-get install dos2unix -y.
I was originally using this:
#! bin/bash (I was missing the / before bin)
Double-check the path as well.
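For reference, the corrected first line is:
#!/bin/bash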
This could be a case of a shebang containing homoglyphic Unicode characters. In other words, you may have invisible or look-alike characters in the shebang that don't actually spell the string #!/bin/bash. Try looking at the characters in a hex editor.
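One way to inspect every byte of the shebang without a dedicated hex editor (a sketch; script.sh is a placeholder name):
head -n 1 script.sh | od -c    # prints each byte, so invisible or look-alike characters show up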
What worked for me when dos2unix wasn't on the system I was working with:
sed -i s/{ctrl+v}{ctrl+m}// filename
(here {ctrl+v}{ctrl+m} means pressing Ctrl-V and then Ctrl-M to insert a literal carriage-return character into the command).
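If typing the literal control character is awkward, tr does the same job (a sketch; filename is a placeholder):
tr -d '\r' < filename > filename.fixed && mv filename.fixed filename    # delete every carriage return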
This happens sometimes when the file system goes funny.
Try to move or rename the file.
If you see a "Stale file handle" error, this is your problem.
E.g. it happened to us with a CentOS Docker container:
$ ./test.sh
-bash: ./test.sh: /bin/bash: bad interpreter: Invalid argument
$ ls -alstr test.sh
20 -r-xr-xr-x 0 omen omen 17874 Jun 20 01:36 test.sh
$ cp test.sh testcopy.sh
$ ./testcopy.sh
Happy Days
$ mv test.sh footest.sh
mv: cannot move ‘test.sh’ to ‘footest.sh’: Stale file handle
$ rm test.sh
rm: cannot remove ‘test.sh’: Stale file handle
You can copy the file and read it.
But not move it!
Nor remove it.
Some weird Docker file-system thing, maybe.
Solution: re-create the Docker container, or maybe a file-system repair would help.
OR of course format c: :-D :-o
Related
I am very new to Linux and shell scripting.
I am trying to run a shell script over secure shell (ssh) on Linux using the following commands:
chmod +x path/to/mynewshell.sh
sh path/to/mynewshell.sh
I get this error:
path/to/mynewshell.sh: path/to/mynewshell.sh: cannot execute binary file.
Tried using this command:
bash path/to/mynewshell.sh
I get the same error.
Tried with this command: su - myusername sh path/to/mynewshell.sh
It is asking for my password and giving me this error: no such file or directory.
1. The result of cat -v path/to/mynewshell.sh is:
^#^#^#^#^#^#^#^#Rscript "$dir"/diver_script.R
done
2. When I tried less path/to/mynewshell.sh I got this on my terminal:
#!/bin/bash/Rscript^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#
^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#^#
for dir in /path/to/* ; do
^#^#^#^#^#^#^#^#Rscript "$dir"/myRscript.R
done
3. When I ran file path/to/mynewshell.sh I got this: "Bourne-Again shell script text executable".
Please give me any advice on how I can execute the shell script.
chmod -x removes execute permission from a file; chmod +x adds it back. Do this:
chmod +x path/to/mynewshell.sh
And run it with
/path/to/mynewshell.sh
As the error report says, your script is not actually a script; it's a binary file.
I was getting the same error running my shell script through a bash interpreter in PowerShell. I ran dos2unix myscript.sh on the shell script, and now it runs ok.
From a proposed duplicate:
run_me.sh.xz: run_me.sh.xz: cannot execute binary file
This is because the file is compressed, as indicated by the .xz extension. You need to remove the compression before the file can be used.
xz -d ./run_me.sh.xz
chmod +x ./run_me.sh # probably not necessary if you already did that before
./run_me.sh
Other compression schemes like gzip (.gz extension), bzip2 (.bz2 extension) etc behave similarly; you just have to know the name of the command to uncompress it, which is of course usually easy to google.
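For example, sketches for the other formats mentioned (file names are placeholders):
gunzip ./run_me.sh.gz      # decompress a gzip-compressed script
bunzip2 ./run_me.sh.bz2    # decompress a bzip2-compressed script
chmod +x ./run_me.sh && ./run_me.sh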
To anyone else having the problem I had:
I was trying to run a 16-bit Unicode (UTF-16) text file converted to a shell script. This doesn't work, because such files have a 0xFFFE byte-order marker at the start, which makes macOS reject the file and give the "cannot execute binary file" error.
Open the text file, click on "Format" at the top, go down to "Make Plain Text" and click it.
Open your terminal and type chmod 777 /path/to/file.sh
Put /path/to/file.sh in the terminal to run it.
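A quick way to confirm the encoding before converting (a sketch; the path is a placeholder):
file /path/to/file.sh                       # reports something like "Little-endian UTF-16 Unicode text"
head -c 2 /path/to/file.sh | od -An -tx1    # ff fe at the start is the UTF-16LE byte-order mark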
That script is simply not a shell script.
A shell script is usually readable and contains shell code.
The output of your cat command indeed looks like a binary of some sort.
As some have noted, it might be a file-conversion issue from copying, but it looks more like an actual binary to me.
You can check what it is identified as with the file command so:
file path/to/mynewshell.sh
Just start with a clean script and rewrite the code; it looks like you just want to run some R scripts in a directory?
Make sure the script points to the right Rscript executable.
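A minimal clean version, assuming the goal is simply to run myRscript.R in each subdirectory (the /path/to/* glob is the placeholder from the question):
#!/bin/bash
# loop over the subdirectories and run the R script found in each one
for dir in /path/to/*; do
    Rscript "$dir"/myRscript.R
done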
In my case I had a bash script that would not execute. The file had originally been generated from a find ... -print0 command, which left a \0 character in the script; removing that character solved my problem.
I tried to create a script in linux, on a Synology server over SSH
so I wrote a file test.sh
#!/bin/bash
echo "this is a test"
I saved the file.
after that I did
chmod 755 test.sh
Then I did
./test.sh
and got this error:
-ash "./test.sh" is not found
the file was created in
/root
I don't understand
Your shell (ash?) is trying to execute your script and is getting an ENOENT (no such file or directory) error code back. This can refer to the script itself, but in this case it refers to the interpreter named in the #! line.
That is, /bin/bash does not exist and that's why the script couldn't be started.
Workaround: Install bash or (if you don't need any bash specific features) change the first line to #!/bin/sh.
This is one of the quirks with hash-bang programs. If the interpreter (i.e. the program interpreting the script) is not found, you don't get a useful error like /bin/bash: no such file, but a completely useless and misleading test.sh: not found.
If this isn't in the Unix Hater's Handbook, it should be. :-)
You can either use #!/bin/sh or #!/path/to/bash or #!/usr/bin/env bash (which searches PATH for bash).
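A quick way to confirm the diagnosis, run in the same shell that reports the error (a sketch):
head -n 1 test.sh    # show the shebang line the kernel is trying to honour
ls -l /bin/bash      # "No such file or directory" here confirms the missing interpreter
command -v bash      # shows where bash actually lives, if it is installed at all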
I have a very simple shell script:
#!/bin/bash
cp -rf /var/www/ksite/app2/* /var/www/ksite/app
echo "----"
echo "done"
but it seems the cp command fails.
If I execute
cp -rf /var/www/ksite/app2/* /var/www/ksite/app
from a terminal, everything works OK. Can someone tell me how to include cp in a shell script?
Thanks
We seem to have doubt as to how this script fails. If there is no error message then this is a strange one. I suggest:
On the command line (which works), do a which cp
Whatever the reply, copy that and use it as the cp in the script (e.g. /bin/cp).
Check the wildcard expansion: run your script with bash -x script-name and see if you get what you expect.
echo $? after the copy in the script - if it is zero then it (thinks it) worked.
Do a ls -ld /var/www/ksite/app from your script, maybe someone set a symbolic link?
If it still fails, source the script from the command-line and see if that works . script-name
Double check that the copy did actually fail! (Maybe that should be step 1.) A combined debug version of the script is sketched below.
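A minimal debug sketch pulling these checks together (same paths as the question; purely illustrative):
#!/bin/bash
set -x                           # trace every command as it runs
which cp                         # confirm which cp the script sees
ls -ld /var/www/ksite/app        # is the target a real directory or a symlink?
cp -rf /var/www/ksite/app2/* /var/www/ksite/app
echo "cp exit status: $?"        # 0 means cp (thinks it) worked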
Make sure you really have bash at /bin/bash. I think a better hash bang is:
#!/usr/bin/env bash
This uses the env command to locate the bash binary and set the environment.
I had a similar problem. What helped me:
I used Windows and PuTTY to write the script, so I had \r\n at the end of the lines. Make sure you have only the \n character.
Also, when copying files, the only form that worked for me inside the script was cp <source_dir>/fileName <dest_dir>/fileName, whereas on the command line cp <source_dir>/fileName <dest_dir> worked fine too.
Just covering all the bases: do the permissions vary between the executions, i.e. do you execute one with sudo/root privileges and the other as a regular user? (Unlikely, but thought I'd ask since we don't know what the exact error is.)
Similar issue to Vladmir's, where the script was created in Windows. I created a new file "my_bash_script.sh" in the Linux environment using Vim, then read the contents of my script into the file:
:r file_made_in_windows.sh
Then I saved, closed, then set the file as executable:
chmod 744 my_bash_script.sh
From there, I ran the script:
./my_bash_script.sh
...and it worked. What a weird issue. I was confounded for a moment.
Every time I run a script using bash scriptname.sh from the command line in Debian, I get Command Not found and then the result of the script.
The script works, but a "command not found" message is printed on screen for each empty line.
I am running the script from the /var folder.
Here is the script:
#!/bin/bash
echo Hello World
I run it by typing the following:
bash testscript.sh
Why would this occur?
Make sure your first line is:
#!/bin/bash
Enter your path to bash if it is not /bin/bash
Try running:
dos2unix script.sh
That will convert line endings etc. from Windows to Unix format, i.e. it strips the \r (CR) from line endings, changing them from \r\n (CR+LF) to \n (LF).
More details about the dos2unix command (man page)
Another way to tell if your file is in dos/Win format:
cat scriptname.sh | sed 's/\r/<CR>/'
The output will look something like this:
#!/bin/sh<CR>
<CR>
echo Hello World<CR>
<CR>
This will output the entire file text with <CR> displayed for each \r character in the file.
You can use bash -x scriptname.sh to trace it.
I also ran into a similar issue. The issue was permissions: if you do an ls -l, you may find that your file does NOT have the execute bit turned on, which will not allow the script to execute. :)
As @artooro added in a comment:
To fix that issue run chmod +x testscript.sh
This might be trivial and not related to the OP's question, but I often made this mistake at the beginning when I was learning scripting:
VAR_NAME = $(hostname)
echo "the hostname is ${VAR_NAME}"
This will produce a 'command not found' response. The correct way is to eliminate the spaces:
VAR_NAME=$(hostname)
On Bash for Windows I incorrectly tried to run
run_me.sh
without ./ at the beginning and got the same error.
For people with a Windows background the correct form looks redundant:
./run_me.sh
If the script does its job (relatively) well, then it's running okay. Your problem is probably a single line in the file referencing a program that's either not on the path, not installed, misspelled, or something similar.
One way is to place a set -x at the top of your script or run it with bash -x instead of just bash; this will print each line before executing it, and you usually just need to look at the output immediately before the error to see what's causing the problem.
If, as you say, it's the blank lines causing the problems, you might want to check what's actually in them. Run:
od -xcb testscript.sh
and make sure there are no "invisible" funny characters like the Ctrl-M (carriage return) you may get from using a Windows-type editor.
use dos2unix on your script file.
To execute it you must provide the full path, for example:
/home/Manuel/mywrittenscript
Try chmod u+x testscript.sh
I know it from here:
http://www.linuxquestions.org/questions/red-hat-31/running-shell-script-command-not-found-202062/
If you use Notepad++ and you get this .sh error message: "command not found",
or this autoconf error message: "line 615:
../../autoconf/bin/autom4te: No such file or directory",
then in Notepad++ go to Edit -> EOL Conversion and select Unix (LF).
This converts your file's line endings. I also encourage checking all your files this way,
because such an error can easily occur again.
Had the same problem. Unfortunately
dos2unix winfile.sh
bash: dos2unix: command not found
so I did this to convert.
awk '{ sub("\r$", ""); print }' winfile.sh > unixfile.sh
and then
bash unixfile.sh
Problems with running scripts may also be connected to bad formatting of multi-line commands, for example a whitespace character after the line-continuation "\". E.g. this:
./run_me.sh \
--with-some parameter
(note the extra space after "\") will cause problems, but when you remove that space, it will run perfectly fine.
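One way to spot such a trailing space (a sketch; GNU cat's -A option marks each line end with a $):
cat -A run_me.sh    # a continuation line must end in \$ with nothing between the backslash and the $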
I was also getting a "Cannot execute command" error. Everything looked correct, but in fact I had a non-breaking space right before my command, which was of course impossible to spot with the naked eye:
if [[ "true" ]]; then
highlight --syntax js "var i = 0;"
fi
Which, in Vim, looked like:
if [[ "true" ]]; then
highlight --syntax js "var i = 0;"
fi
Only after running the Bash script checker shellcheck did I find the problem.
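One way to hunt for such characters directly (a sketch; script.sh is a placeholder, and the $'...' quoting assumes bash):
grep -n $'\xC2\xA0' script.sh    # lists lines containing a UTF-8 non-breaking space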
I ran into this today, absentmindedly copying the dollar command prompt $ (ahead of a command string) into the script.
Make sure you haven't overridden the 'PATH' variable by mistake, like this:
#!/bin/bash
PATH="/home/user/Pictures/"; # do NOT do this
This was my mistake.
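If the intent was to add a directory rather than replace the whole variable, prepend it instead (a sketch reusing the same example directory):
PATH="/home/user/Pictures:$PATH"    # keeps the existing PATH so standard commands are still found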
Add the current directory ( . ) to PATH to be able to execute a script that resides in the current directory just by typing its name:
PATH=.:$PATH
You may want to update your .bashrc and .bash_profile files with aliases so the command you are entering is recognized.
.bashrc and .bash_profile are hidden files located in your home directory.
I'm trying to build a piece of software called Slicer3 on Windows 7 which features a "super build".
It's an all-in-one Tcl script to check out and build Slicer3.
I ran Cygwin and navigated to the correct directory, then ran the script and got:
$ ./Slicer3-svn/Scripts/getbuildtest.tcl
couldn't read file "./Slicer3-svn/Scripts/getbuildtest.tcl": no such file or directory
Obviously I am sure that the file exists, and I gave it 777 permissions. I'm running Cygwin as admin.
The beginning of the Tcl file's content is:
#!/bin/sh
# the next line restarts using tclsh \
exec tclsh "$0" "$#"
So I tried commenting out line 3 and running directly
$ tclsh ./Slicer3-svn/Scripts/getbuildtest.tcl
but got the same error.
Any idea?
I will try to help troubleshoot as much as I can:
Determine if you have tclsh installed and it is in the PATH:
$ tclsh
Determine if tclsh works for a simple script:
$ echo puts hello > hello.tcl; tclsh hello.tcl
Determine if the script exists and readable:
$ cat ./Slicer3-svn/Scripts/getbuildtest.tcl
$ cd ./Slicer3-svn/Scripts
$ cat getbuildtest.tcl
As for line endings (DOS CRLF vs. Unix LF):
$ cd ./Slicer3-svn/Scripts
$ sed 's/\r$//' getbuildtest.tcl > getbuildtest_new.tcl
$ tclsh getbuildtest_new.tcl
This way we can narrow down the problem, should any of the steps fail.
Description: tclsh couldn't read the file: no such file or directory.
Possible reason: tclsh under Cygwin does not resolve Windows paths properly, and cannot find the file.
Workaround: move the script within the Cygwin path.
Just moving the script to a "simpler" path worked for me. I'm not sure about the reason, nor how to solve this misbehaviour properly.