!#/bin/bash: No such file or directory - linux

First, I run
/bin/bash a.sh
It succeeds.
But when I want to run
/bin/bash a.sh > a_info.txt
It fails.
Error: line 1: !#/bin/bash: No such file or directory
It does not seem that it’s '^M' that’s causing this error.

The shebang pattern is #!/bin/your_shell, not !#/bin/your_shell. Just fix the typo.
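For example, with the shebang corrected, a minimal script runs the same way with or without redirection (the echo line is just illustrative):
#!/bin/bash
echo "hello"
Running /bin/bash a.sh > a_info.txt then writes the output to a_info.txt with no error.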

Check the file's encoding.
If you get this error, but you did not misspell the 'shebang', have a close look at the file's encoding. I was getting this error on a file with encoding 'UTF-8 with BOM'... I guess the BOM (Byte Order Mark) was causing the problem. Changing the encoding to ASCII fixed it (or you can use 'UTF-8 without BOM', which is effectively the same for files containing only ASCII characters).
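If you want to check for and strip a BOM from the command line, something along these lines should work (the EF BB BF bytes are the UTF-8 BOM; the in-place edit assumes GNU sed):
head -c 3 a.sh | od -An -tx1    # prints "ef bb bf" if a BOM is present
sed -i '1s/^\xEF\xBB\xBF//' a.sh    # strip the BOM in place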

Another possible cause of the "No such file or directory" error is if your shell script is using CRLF instead of LF line endings.
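A quick way to confirm that theory before reaching for dos2unix (covered in later answers), assuming the file utility and GNU sed are available:
file a.sh    # reports "... with CRLF line terminators" for DOS endings
sed -i 's/\r$//' a.sh    # convert CRLF to LF in place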

Related

Fatal: Failed to read password file: open ~/RPI/filename.sec: no such file or directory. But it exists [linux]

Trying to run a .sh file with --password "~/RPI.filename" in it. It should read the filename, but I'm getting an error even though the file exists.
error:
Fatal: Failed to read password file: open ~/RPI/filename.sec: no such file or directory
list of directories and files:
~/RPI$ l
g.json m1/ m2/ filename.sec startm1.sh*
I have used it before and I know the method works but no idea what on earth is happening here.
Maybe your script is running in sh instead of bash? I think "~" is bash syntax.
There are 1½ issues here...
First, quoting a tilde prevents tilde expansion; compare ls ~/RPI.filename and ls "~/RPI.filename"
=> leave the tilde unquoted if you don't want a literal '~'
Second, is it ~/RPI.filename or ~/RPI/filename?
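A quick demonstration of the quoting point (the expanded home directory below is just an example):
$ echo ~/RPI/filename.sec
/home/user/RPI/filename.sec
$ echo "~/RPI/filename.sec"
~/RPI/filename.sec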

Error found when loading /etc/profile

When I turn on my laptop (Ubuntu 14.04), this appears:
Error found when loading /etc/profile:
/etc/profile.d/myenv_vars.sh: line: LD command not found
What should I do? Can I delete profile.d?
Your file should have
LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
"space" is not a valid character for shell variable names.
To avoid the trailing colon:
LD_LIBRARY_PATH=/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
If LD_LIBRARY_PATH is null or unset, the new value will be just "/usr/local/lib"
If LD_LIBRARY_PATH has a value, the new value will be "/usr/local/lib:old_value"
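You can see both cases in an interactive shell (the /opt/lib value is just a placeholder for an existing setting):
$ unset LD_LIBRARY_PATH
$ echo /usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
/usr/local/lib
$ LD_LIBRARY_PATH=/opt/lib
$ echo /usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
/usr/local/lib:/opt/lib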
If the first line of the file contains:
#!/bin/sh
but the file has cr/lf line endings, you will get this error. Do this:
# dos2unix /etc/profile.d/env_vars.sh
and try logging in again:
$ exec ${SHELL} -l
Should be no errors.

syntax error near unexpected token ' - bash

I have written a sample script on my Mac:
#!/bin/bash
test() {
echo "Example"
}
test
exit 0
and this works fine by displaying Example
When I run this script on a RedHat machine, it says
syntax error near unexpected token '
I checked that bash is available using
cat /etc/shells
which bash shows /bin/bash
Did anyone come across the same issue ?
Thanks in advance !
It could be a file encoding issue.
I have encountered file type encoding issues when working on files between different operating systems and editors - in my case particularly between Linux and Windows systems.
I suggest checking your file's encoding to make sure it is suitable for the target Linux environment. I guess an encoding issue is less likely given you are using a Mac than if you had used a Windows text editor, but I think file encoding is still worth considering.
--- EDIT (Add an actual solution as recommended by @Potatoswatter)
To demonstrate how file type encoding could be this issue, I copy/pasted your example script into Notepad in Windows (I don't have access to a Mac), then copied it to a linux machine and ran it:
jdt@cookielin01:~/windows> sh ./originalfile
./originalfile: line 2: syntax error near unexpected token `$'{\r''
'/originalfile: line 2: `test() {
In this case, Notepad saved the file with carriage returns and linefeeds, causing the error shown above. The \r indicates a carriage return (Linux systems terminate lines with linefeeds \n only).
On the linux machine, you could test this theory by running the following to strip carriage returns from the file, if they are present:
cat originalfile | tr -d "\r" > newfile
Then try to run the new file sh ./newfile . If this works, the issue was carriage returns as hidden characters.
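Another check that does not modify anything, assuming the file utility is installed:
$ file originalfile
The output will usually end with "with CRLF line terminators" when carriage returns are present.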
Note: This is not an exact replication of your environment (I don't have access to a Mac), however it seems likely to me that the issue is that an editor, somewhere, saved carriage returns into the file.
--- /EDIT
To elaborate a little, operating systems and editors can have different file encoding defaults. Typically, applications and editors will influence the filetype encoding used, for instance, I think Microsoft Notepad and Notepad++ default to Windows-1252. There may be newline differences to consider too (In Windows environments, a carriage return and linefeed is often used to terminate lines in files, whilst in Linux and OSX, only a Linefeed is usually used).
A similar question and answer that references file encoding is here: bad character showing up in bash script execution
try something like
$ sudo apt-get install dos2unix
$ dos2unix offendingfile
An easy way to convert the example.sh file to UNIX format if you are working in Windows is to use Notepad++ (Edit > EOL Conversion > UNIX/OSX Format).
You can also set the default EOL in Notepad++ (Settings > Preferences > New Document/Default Directory > select Unix/OSX under the Format box).
Thanks @jdt for your answer.
Following that, and since I keep running into this carriage-return issue, I wrote this small script. Just run carriage_return and you'll be prompted for the file to "clean".
https://gist.github.com/kartonnade/44e9842ed15cf21a3700
alias carriage_return=remove_carriage_return
remove_carriage_return(){
    # cygwin throws errors like:
    #   syntax error near unexpected token `$'{\r''
    # due to carriage returns; this function strips them, i.e. it runs
    #   tr -d "\r" < originalfile > newfile
    read -p "File to clean ? " file_to_clean
    temp_file_to_clean="${file_to_clean}_"
    # file to clean => temporary clean file
    tr -d "\r" < "$file_to_clean" > "$temp_file_to_clean"
    # temporary clean file => back over the original file
    cat "$temp_file_to_clean" > "$file_to_clean"
    # remove temporary clean file
    rm "$temp_file_to_clean"
}
I want to add to the answer above how to check whether it is a carriage return issue in a Unix-like environment (I tested on macOS).
1) Using cat
cat -e my_file_name
If you see the lines ended with ^M$, then yes, it is the carriage return issue.
2) Find first line with carriage return character
grep -r $'\r' Grader.sh | head -1
3) Using vim
vim my_file_name
Then in vim, type
:set ff
If you see fileformat=dos, the file is from a DOS environment and contains carriage returns.
After finding out, you can use the above mentioned methods by other people to correct your file.
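If you are already in vim from step 3, you can also convert the file right there (my addition, not part of the answer above):
:set ff=unix
:w
This rewrites the file with Unix (LF-only) line endings.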
I had the same problem when I was working with Armbian Linux and Windows.
I was trying to copy my code from Windows to Armbian, and when I ran it this error popped up. I solved it this way:
1- Copy your files from Windows using WinSCP.
2- Make sure your file name does not contain () characters.

zip command not working

I am trying to zip a file using a shell script. I am using the following command:
zip ./test/step1.zip $FILES
where $FILES contains all the input files. But I am getting a warning as follows:
zip warning: name not matched: myfile.dat
One more thing I observed: the file that comes last in the list of files in the folder gets the above warning, and that file is not zipped.
Can anyone explain why this is happening? I am new to the shell scripting world.
zip warning: name not matched: myfile.dat
This means the file myfile.dat does not exist.
You will get the same error if the file is a symlink pointing to a non-existent file.
As you say, whatever file comes last in $FILES is not added to the zip, along with the warning. So I think something's wrong with the way you create $FILES. Chances are there is a newline, carriage return, space, tab, or other invisible character at the end of the last filename, resulting in a name that doesn't exist. Try this for example:
for f in $FILES; do echo :$f:; done
I bet the last line will be incorrect, for example:
:myfile.dat :
...or something like that instead of :myfile.dat: with no characters before the last :
UPDATE
If you say the script started working after running dos2unix on it, that confirms what everybody suspected already, that somehow there was a carriage-return at the end of your $FILES list.
od -c shows the \r carriage-return. Try echo $FILES | od -c
Another possible cause of a zip warning: name not matched: error is having one of zip's environment variables set incorrectly.
From the man page:
ENVIRONMENT
The following environment variables are read and used by zip as described.
ZIPOPT
contains default options that will be used when running zip. The contents of this environment variable will get added to the command line just after the zip command.
ZIP
[Not on RISC OS and VMS] see ZIPOPT
Zip$Options
[RISC OS] see ZIPOPT
Zip$Exts
[RISC OS] contains extensions separated by a : that will cause native filenames with one of the specified extensions to be added to the zip file with basename and extension swapped.
ZIP_OPTS
[VMS] see ZIPOPT
In my case, I was using zip in a script and had the binary location in an environment variable ZIP so that we could change to a different zip binary easily without making tonnes of changes in the script.
Example:
ZIP=/usr/bin/zip
...
${ZIP} -r folder.zip folder
This is then processed as:
/usr/bin/zip /usr/bin/zip -r folder.zip folder
And generates the errors:
zip warning: name not matched: folder.zip
zip I/O error: Operation not permitted
zip error: Could not create output file (/usr/bin/zip.zip)
The first because it's now trying to add folder.zip to the archive instead of using it as the archive. The second and third because it's trying to use the file /usr/bin/zip.zip as the archive which is (fortunately) not writable by a normal user.
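One way to avoid the clash (a sketch, not from the original script; ZIP_BIN is just an illustrative name) is to keep the binary path in a variable that zip does not read:
ZIP_BIN=/usr/bin/zip
...
${ZIP_BIN} -r folder.zip folder
Alternatively, unset ZIP in the script before calling zip.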
Note: This is a really old question, but I didn't find this answer anywhere, so I'm posting it to help future searchers (my future self included).
eebbesen hit the nail on the head in his comment for my case (but I cannot vote on a comment).
Another possible reason, missed in the other comments, is a file exceeding the 4 GB file size limit.
I converted my script for the Unix environment using the dos2unix command and executed it as ./myscript.sh instead of bash myscript.sh.
I just discovered another potential cause for this. If the permissions of the directory/subdirectory don't allow zip to find the file, it will report this error. In fact, if you run chmod -R 444 on the directory and then try to zip it, you will reproduce this error and also get a "stored 0%" report, like this:
zip warning: name not matched: borrar/enviar
adding: borrar/ (stored 0%)
Hence, try changing the permissions of the file. If you are trying to send files through email and the provider's filters (like Gmail's) refuse to send executables, don't forget that making permissions very strict before zipping can be the cause of the "name not matched" error you are reporting.
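A minimal sketch of the permissions fix, assuming the directory is called borrar as in the example above:
chmod -R u+rX borrar    # restore read access, and execute/search only on directories
zip -r borrar.zip borrar
The capital X grants execute permission only where it makes sense (directories), so zip can descend into them again.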
Spaces are not allowed: it will fail if there is more than one file in $FILES unless you put them in a loop.
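If the real problem is spaces inside the file names, quoting each name (here via a bash array, with made-up names for illustration) avoids the word splitting:
FILES=("my file.dat" "other file.dat")
for f in "${FILES[@]}"; do zip ./test/step1.zip "$f"; done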
I also encountered this issue. In my case, the line separator in my zip shell script was CRLF, which caused the problem. Using LF fixed it.

How to remove non UTF-8 characters from text file

I have a bunch of Arabic, English, and Russian files which are encoded in UTF-8. Trying to process these files using a Perl script, I get this error:
Malformed UTF-8 character (fatal)
Manually checking the content of these files, I found some strange characters in them.
Now I'm looking for a way to automatically remove these characters from the files.
Is there any way to do it?
This command:
iconv -f utf-8 -t utf-8 -c file.txt
will clean up your UTF-8 file, skipping all the invalid characters.
-f is the source format
-t the target format
-c skips any invalid sequence
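Note that iconv writes the cleaned text to standard output, so redirect it to keep the result (file names here are placeholders):
iconv -f utf-8 -t utf-8 -c file.txt > file_clean.txt
mv file_clean.txt file.txt    # replace the original, if that is what you want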
Your method would have to read byte by byte and fully understand the byte-wise construction of characters. The simplest approach is to use an editor that reads anything but only outputs UTF-8 characters. TextPad is one choice.
iconv can do it:
iconv -f cp1252 foo.txt
None of the methods here or on any other similar questions worked for me.
In the end what worked was simply opening the file in Sublime Text 2. Go to File > Reopen with Encoding > UTF-8. Copy the entire content of the file into a new file and save it.
This may not be the expected solution, but I'm putting it out here in case it helps anyone, since I've been struggling with this for hours.
