I created a shell script "/etc/aaa" on OpenWrt which contains the following code:
#!/bin/sh
echo "Hello World!"
I also used this command to ensure the proper permissions:
chmod 777 /etc/aaa
When I execute it with either of the two commands below,
sh /etc/aaa
or
ash /etc/aaa
it works well and prints "Hello World". The problem occurs when I try to execute it with this command:
/etc/aaa
where I get this error:
-ash: /etc/aaa: not found
Can anyone please explain why this is happening? What am I missing here?
Thanks for Charles Duffy's comment. It really is the CR LF problem.
Even though I opened the file in my Linux virtual machine, it had been modified in Windows, so even on Linux the Sublime Text editor I use treats it as a Windows file and keeps CRLF as the line ending.
So, after replacing CRLF with LF, the problem was solved.
Also worth checking that your shebang is correct: either #!/bin/sh or #!/bin/bash
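To confirm and fix the line-ending diagnosis, here is a minimal sketch (assuming a typical Linux box; a stock OpenWrt/BusyBox install may lack some of these tools):
# A trailing \r shows up as ^M at the end of each line
cat -v /etc/aaa
# Strip the carriage returns in place, then try again
sed -i 's/\r$//' /etc/aaa
/etc/aaa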
Related
First and foremost, the script is a 'hello world' script and it's stored in ~/jared/bin. Here's the script:
#!bin/bash
echo "hello world"
Based on this question here, I tried:
Is the file executable? Yes, I used:
chmod 755 my_script
I ran the following commands, and here is the output:
which bash
/bin/bash
and finally,
echo $PATH
/home/jared/bin:/home/jared/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/java-8-oracle/bin:/usr/lib/jvm/java-8-oracle/db/bin:/usr/lib/jvm/java-8-oracle/jre/bin
The line endings are Unix, as I wrote the script in Vim. However, to be safe, I wrote additional 'hello world' scripts in Sublime and Atom and received the same error when I ran them.
I even tried running dos2unix on the file.
The script will execute when run directly from its directory, and will execute in any directory if I type bash hello_world.
However, it throws the error if I use ./
I don't know if this will help, but it seems relevant: I can use tab autocomplete on the script if I don't preface it with bash or ./
I fear I'm making a really stupid mistake here; hopefully someone is prepared to make me feel incredibly silly today. Thanks!
The first line of your script should be
#!/bin/bash
  ^
It needs to be #! followed by an absolute path to bash. Note the slash before bin.
The first line should look like:
#!/bin/bash
You forgot the '/' after '#!', so the system cannot find the correct (absolute) path to bash.
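As a quick sanity check, a sketch like this can verify both the shebang and the interpreter it points to (~/bin/hello_world is only a placeholder for wherever your script lives):
# The first line must be exactly #!/bin/bash
head -n 1 ~/bin/hello_world
# Confirm the interpreter named in the shebang actually exists
ls -l /bin/bash
# With the shebang fixed and the execute bit set, this should run
chmod +x ~/bin/hello_world
~/bin/hello_world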
I want to run a program from Perl by using the system command (or any other way).
system("samtools");
I think it should pass this to the shell, but it complains Can't exec "samtools": No such file or directory when I run it. I have tried many other programs, for example
system("velveth");
and it works properly, but not this one (samtools). Has any of you faced this problem before?
I am really puzzled.
You can give the full path to that file location.
example:
system( "/usr/bin/perl -de 1");
Putting the Linux command inside backticks (``) will work as well.
Did you modify $PATH for samtools in the current shell manually?
Since system starts a new sub-shell to run your command, you have to append the search path for samtools yourself if it isn't set in your .bashrc. Check it by running:
perl -e 'system("echo \$PATH")'
and
echo $PATH
to see if there's any difference.
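If the two differ, a minimal sketch of a fix (the directory /opt/samtools/bin is only a placeholder; use wherever samtools is actually installed) is to extend PATH before Perl spawns its sub-shell:
# Make samtools visible to the sub-shell that system() starts
export PATH="/opt/samtools/bin:$PATH"
perl -e 'system("samtools") == 0 or warn "samtools exited with $?"'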
I have a very simple shell script
#!/bin/bash
cp -rf /var/www/ksite/app2/* /var/www/ksite/app
echo "----"
echo "done"
but it seems the cp command fails.
If I execute
cp -rf /var/www/ksite/app2/* /var/www/ksite/app
from the terminal, everything works OK. Can someone tell me how to include cp in a shell script?
Thanks
We seem to have some doubt as to how this script fails. If there is no error message, then this is a strange one. I suggest:
On the command line (which works), do a which cp.
Whatever the reply, copy that and use it as the cp in the script (e.g. /bin/cp).
Check the wildcard expansion: run your script with bash -x script-name and see if you get what you expect.
echo $? after the copy in the script - if it is zero then it (thinks it) worked.
Do an ls -ld /var/www/ksite/app from your script; maybe someone set up a symbolic link?
If it still fails, source the script from the command line and see if that works: . script-name
Double-check that the copy did actually fail! (Maybe that should be step 1.) A combined sketch of these checks appears below.
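As a sketch only (paths copied from the question; /bin/cp is an assumption, so substitute whatever which cp reported):
#!/bin/bash
set -x                                       # trace each command as it runs
/bin/cp -rf /var/www/ksite/app2/* /var/www/ksite/app
echo "cp exit status: $?"                    # 0 means cp believes it succeeded
ls -ld /var/www/ksite/app                    # check the destination isn't a stray symlink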
Make sure you really have bash at /bin/bash. I think a better shebang is:
#!/usr/bin/env bash
This uses the env command to locate the bash binary through your PATH.
I had a similar problem. What helped me:
I used Windows and PuTTY to write the script, so I had \r\n at the end of lines. Be sure you have only the \n character.
I copied files, and the only way it worked for me in the script was cp <source_dir>/fileName <dest_dir>/fileName, whereas on the command line cp <source_dir>/fileName <dest_dir> worked well too.
Just covering all the bases: do the permissions vary between the executions, i.e. do you execute one with sudo/root privileges and the other as a user? (Unlikely, but thought I'd ask since we don't know what the exact error is.)
Similar issue to Vladmir's, where the script was created in Windows. I created a new file "my_bash_script.sh" in the Linux environment using Vim, then read the contents of my script into the file:
:r file_made_in_windows.sh
Then I saved, closed, then set the file as executable:
chmod 744 my_bash_script.sh
From there, I ran the script:
./my_bash_script.sh
...and it worked. What a weird issue. I was confounded for a moment.
Every time I run a script using bash scriptname.sh from the command line in Debian, I get Command Not found and then the result of the script.
The script works, but there is always a Command Not Found statement printed on screen for each empty line; every blank line results in a "command not found".
I am running the script from the /var folder.
Here is the script:
#!/bin/bash
echo Hello World
I run it by typing the following:
bash testscript.sh
Why would this occur?
Make sure your first line is:
#!/bin/bash
Enter your path to bash if it is not /bin/bash
Try running:
dos2unix script.sh
That will convert line endings from Windows to Unix format, i.e. it strips \r (CR) from line endings to change them from \r\n (CR+LF) to \n (LF).
More details about the dos2unix command (man page)
Another way to tell if your file is in dos/Win format:
cat scriptname.sh | sed 's/\r/<CR>/'
The output will look something like this:
#!/bin/sh<CR>
<CR>
echo Hello World<CR>
<CR>
This will output the entire file text with <CR> displayed for each \r character in the file.
You can use bash -x scriptname.sh to trace it.
I also ran into a similar issue. The issue seems to be permissions. If you do an ls -l, you may be able to identify that your file may NOT have the execute bit turned on. This will NOT allow the script to execute. :)
As @artooro added in a comment:
To fix that issue run chmod +x testscript.sh
This might be trivial and not related to the OP's question, but I often made this mistake at the beginning when I was learning scripting:
VAR_NAME = $(hostname)
echo "the hostname is ${VAR_NAME}"
This will produce a 'command not found' response. The correct way is to eliminate the spaces:
VAR_NAME=$(hostname)
On Bash for Windows, I incorrectly tried to run
run_me.sh
without ./ at the beginning and got the same error.
For people with a Windows background, the correct form looks redundant:
./run_me.sh
If the script does its job (relatively) well, then it's running okay. Your problem is probably a single line in the file referencing a program that's either not on the path, not installed, misspelled, or something similar.
One way is to place a set -x at the top of your script, or to run it with bash -x instead of just bash. This will output each line before executing it, and you usually just need to look at the command output immediately before the error to see what's causing the problem.
If, as you say, it's the blank lines causing the problems, you might want to check what's actually in them. Run:
od -xcb testscript.sh
and make sure there are no "invisible" funny characters like the CTRL-M (carriage return) you may get by using a Windows-type editor.
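For instance, on a script saved with Windows line endings, the character view of the od output might begin something like this (hypothetical run; the giveaway is the \r before each \n):
0000000   #   !   /   b   i   n   /   b   a   s   h  \r  \n   e   c   h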
Use dos2unix on your script file.
To execute it, you must provide the full path to it, for example:
/home/Manuel/mywrittenscript
Try chmod u+x testscript.sh
I learned this from here:
http://www.linuxquestions.org/questions/red-hat-31/running-shell-script-command-not-found-202062/
If you have Notepad++ and you get this .sh error message: "command not found",
or this autoconf error message: "line 615:
../../autoconf/bin/autom4te: No such file or directory",
then in Notepad++ go to Edit -> EOL Conversion and check Macintosh (CR).
This will edit your files. I also encourage checking all your files this way,
because otherwise such an error will soon occur again.
Had the same problem. Unfortunately
dos2unix winfile.sh
bash: dos2unix: command not found
so I did this to convert:
awk '{ sub("\r$", ""); print }' winfile.sh > unixfile.sh
and then
bash unixfile.sh
Problems with running scripts may also be connected to bad formatting of multi-line commands, for example if you have a whitespace character after the line-breaking "\". E.g. this:
./run_me.sh \
--with-some parameter
(please note the extra space after "\") will cause problems, but when you remove that space, it will run perfectly fine.
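A quick way to hunt for that kind of invisible breakage, as a sketch (run_me.sh is just the example name from above):
# List lines ending in a backslash followed by trailing blanks
grep -nE '\\[[:blank:]]+$' run_me.sh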
I was also getting the Cannot execute command error. Everything looked correct, but in fact I had a non-breaking space right before my command, which was of course impossible to spot with the naked eye:
if [[ "true" ]]; then
highlight --syntax js "var i = 0;"
fi
Which, in Vim, looked like:
if [[ "true" ]]; then
highlight --syntax js "var i = 0;"
fi
Only after running the Bash script checker shellcheck did I find the problem.
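Besides shellcheck, a direct way to look for the offending byte is a sketch like this (script.sh stands in for your file; the pattern is the UTF-8 encoding of a non-breaking space, and the $'...' quoting assumes Bash):
grep -n $'\xc2\xa0' script.sh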
I ran into this today after absentmindedly copying the dollar command prompt $ (from in front of a command string) into the script.
Make sure you haven't overridden the PATH variable by mistake, like this:
#!/bin/bash
PATH="/home/user/Pictures/"; # do NOT do this
This was my mistake.
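If you do need that directory on the path, prepend it instead of replacing the whole variable; a minimal sketch:
#!/bin/bash
PATH="/home/user/Pictures:$PATH"   # keep the standard directories so commands stay resolvable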
Add the current directory (.) to PATH to be able to execute a script that resides in the current directory just by typing its name:
PATH=.:$PATH
You may want to update your .bashrc and .bash_profile files with aliases to recognize the command you are entering.
.bashrc and .bash_profile are hidden files, probably located on your C: drive where you save your program files.
Question: I get this error message:
export: bad interpreter: No such file or directory
when I execute this bash script:
#!/bin/bash
MONO_PREFIX=/opt/mono-2.6
GNOME_PREFIX=/opt/gnome-2.6
export DYLD_LIBRARY_PATH=$MONO_PREFIX/lib:$DYLD_LIBRARY_PATH
export LD_LIBRARY_PATH=$MONO_PREFIX/lib:$LD_LIBRARY_PATH
export C_INCLUDE_PATH=$MONO_PREFIX/include:$GNOME_PREFIX/include
export ACLOCAL_PATH=$MONO_PREFIX/share/aclocal
export PKG_CONFIG_PATH=$MONO_PREFIX/lib/pkgconfig:$GNOME_PREFIX/lib/pkgconfig
PATH=$MONO_PREFIX/bin:$PATH
PS1="[mono-2.6] \w # "
But the bash path seems to be correct:
asshat@IS1300:~/sources/mono-2.6# which bash
/bin/bash
asshat@IS1300:~# cd sources/
asshat@IS1300:~/sources# cd mono-2.6/
asshat@IS1300:~/sources/mono-2.6# ./mono-2.6-environment
export: bad interpreter: No such file or directory
asshat@IS1300:~/sources/mono-2.6# ls
download mono-2.4 mono-2.4-environment mono-2.6 mono-2.6-environment
asshat@IS1300:~/sources/mono-2.6# cp mono-2.6-environment mono-2.6-environment.sh
asshat@IS1300:~/sources/mono-2.6# ./mono-2.6-environment.sh
export: bad interpreter: No such file or directory
asshat@IS1300:~/sources/mono-2.6# ls
download mono-2.4-environment mono-2.6-environment
mono-2.4 mono-2.6 mono-2.6-environment.sh
asshat@IS1300:~/sources/mono-2.6# bash mono-2.6-environment
asshat@IS1300:~/sources/mono-2.6#
What am I doing wrong? Or is this a Lucid Lynx bug?
I did chmod +x.
The first line, #!/bin/bash, tells Linux where to find the interpreter. The script should also be executable with chmod +x script.sh, which it appears you did.
It is highly likely that you created this file with a Windows editor, which will place a <cr><lf> at the end of each line. This is the standard under DOS / Windows. Classic Mac OS placed just a <cr> at the end of each line. However, under Unix / Linux, the standard is to put just a <lf> at the end of the line.
Linux is now looking for a file called /bin/bash<cr> to interpret the file,
where <cr> is a carriage return character, which is a valid filename character under Linux. Such a file doesn't exist. Hence the error.
Solution: Edit the file with an editor on Linux and get rid of the extra <cr>. One tool that usually works when the file is edited on Windows is dos2unix.
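As a concrete sketch of that check and fix (using the file name from the question; the file and dos2unix utilities may need to be installed first):
file mono-2.6-environment        # "CRLF line terminators" here confirms the diagnosis
dos2unix mono-2.6-environment    # strips the stray <cr> characters in place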
Could the script be using DOS newlines?
Try running dos2unix on it.
It looks like things have been configured to override the export builtin somehow. This can be done via an exported function or the enable builtin, for example. Try putting type export in the script to check. If you are setting BASH_ENV, you probably shouldn't.
If bash is called as sh, it enables POSIX mode and does not allow export to be overridden with a function, as required by POSIX. Likewise, most other shells installed as /bin/sh follow POSIX in this and/or do not allow the execution environment of a script to be messed up so strongly as through importing functions from the environment.
By the way, the script seems designed to be sourced, i.e. . ./mono-2.6-environment instead of ./mono-2.6-environment.
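A minimal sketch of what sourcing it looks like in practice (the echo is just illustrative, using a variable the script sets):
# Sourcing runs the script in the current shell, so its variables stick around
. ./mono-2.6-environment
echo "$MONO_PREFIX"    # now prints /opt/mono-2.6 in this shell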
Had the same problem. Used brute force:
/bin/sh /full/path/to/configure --options
and this did the trick
(Of course I'd like to know why)
I encountered a similar error, but in my case I had forgotten to add / before bin and was getting the bad interpreter error. I also tried installing the dos2unix package with
sudo apt-get install dos2unix -y
I was using this originally:
#! bin/bash   (I was missing the / before bin)
Double check the path as well.
This could be a case of a shebang with homoglyphic unicode characters. In other words, you may have invisible or look-alike characters in the shebang which don't actually represent the string #!/bin/bash. Try looking at the characters in a hex editor.
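One way to do that from the terminal, as a sketch (script.sh is a placeholder name):
# Dump the shebang line byte by byte; anything beyond the plain ASCII
# bytes of "#!/bin/bash" plus a newline points at a look-alike or invisible character
head -n 1 script.sh | hexdump -C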
What worked for me when dos2unix wasn't on the system I was working with:
sed -i s/{ctrl+v}{ctrl+m}// filename
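Here {ctrl+v}{ctrl+m} means typing Ctrl+V and then Ctrl+M to insert a literal carriage return into the command. An equivalent that avoids the literal keystroke, as a sketch (assuming GNU sed for -i and \r support):
sed -i 's/\r$//' filename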
This happens sometimes when the file system goes funny.
Try to move or rename the file.
If you see "Stale file handle" error this is your problem.
e.g. this happened to us with a CentOS Docker container:
$ ./test.sh
-bash: ./test.sh: /bin/bash: bad interpreter: Invalid argument
$ ls -alstr test.sh
20 -r-xr-xr-x 0 omen omen 17874 Jun 20 01:36 test.sh
$ cp test.sh testcopy.sh
$ ./testcopy.sh
Happy Days
$ mv test.sh footest.sh
mv: cannot move ‘test.sh’ to ‘footest.sh’: Stale file handle
$ rm test.sh
rm: cannot remove ‘test.sh’: Stale file handle
You can copy the file and read it.
But not move it!
Nor remove it.
Some weird Docker file-system thing, maybe.
Solution: re-create the Docker container, OR maybe a file system repair would help,
OR of course format c: :-D :-o