Test if a file exists - linux

I'm struggling with the examples given for testing if a file exists.
I want to check whether multiple files exist in order to perform further operations.
ls -al:
-rwxrwxrwx 1 root tomcat 6 Dec 16 11:25 documents_2019-12-12.tar.gz
echo < [ -e ./documents_2019-12-12.tar.gz ]:
bash: [: No such file or directory
Can somebody tell me what I'm doing wrong?
Edit:
I have a backup directory with two files:
database_date.sql
documents_date.tar.gz
I need to check if both files for a given date are available. The directory will contain these file pairs for several dates.

What you have here is a misunderstanding of where specific syntax is used. The [ -e ./documents_2019-12-12.tar.gz ] part of your command is test syntax that normally appears in an if statement in bash. Here's an example:
if [ -e ./documents_2019-12-12.tar.gz ]
then
    echo "File Exists!"
fi
The square brackets [] surround the check being performed, and the -e flag is one of the file test operators. More info here: http://tldp.org/LDP/Bash-Beginners-Guide/html/sect_07_01.html
To explain the error you're seeing: the < operator takes a file on its right and feeds its contents to the command on its left. In your case, < sees the [ as the thing on its right and tries to read it as a file. No such file exists, so bash helpfully tells you so (the bash: [: No such file or directory bit) and quits.
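Given your edit, the same kind of test chains naturally for checking a file pair. Here's a minimal sketch, assuming the date string and the backup directory path are whatever your script already has on hand:
date="2019-12-12"
backup_dir="."
if [ -e "$backup_dir/database_${date}.sql" ] && [ -e "$backup_dir/documents_${date}.tar.gz" ]
then
    echo "Both backup files for $date are available"
fi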

Why are spaces around the square brackets so important in Bash?

I was trying to write a Bash script that uses an if statement.
if[$CHOICE -eq 1];
The script was giving me errors until I gave a space before and after [ and before ] as shown below:
if [ $CHOICE -eq 1 ];
My question here is, why is the space around the square brackets so important in Bash?
Once you grasp that [ is a command, a whole lot becomes clearer!
[ is another way to spell "test".
help [
However, while they do exactly the same thing, test turns out to have a more detailed help page. Check
help test
...for more information.
Furthermore, note that I'm using, by intention, help test and not man test. That's because test and [ are shell builtin commands nowadays. Their feature set might differ from /bin/test and /bin/[ from coreutils, which are the commands described in the man pages.
From another question:
A bit of history: this is because '[' was historically not a shell built-in but a separate executable that received the expression as arguments and returned a result. If you didn't surround the '[' with spaces, the shell would be searching $PATH for a different filename (and not find it). – Andrew Medico Jun 24 '09 at 1:13
[ is a command and $CHOICE should be an argument, but by writing [$CHOICE (without any space between [ and $CHOICE) you are trying to run a command named [$CHOICE. The syntax for a command is:
command arguments separated by spaces
[ is a test command, so it requires the spaces.
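You can confirm this in an interactive bash session; the shell reports [ as a builtin, and once the spaces are in place the test works:
$ type [
[ is a shell builtin
$ CHOICE=1
$ [ "$CHOICE" -eq 1 ] && echo "matched"
matched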
It's worth noting that [ is also used in glob matching, which can get you into trouble: an unquoted [12345] is a glob pattern matching any single-character filename from 1 to 5, and if such a file exists the shell replaces the pattern with the filename.
$ echo [12345]
[12345]
$ echo oops >3
$ echo [12345]
3

If statement comparing variable to files in list

I am using a terminal emulator. I have a folder with save files in it and am trying to determine whether the entered text matches any file in the list.
I created a variable called saveFiles using ls, displaying only files containing .save and removing the extension from the output:
saveFiles=$(cd "${0%/*}"/save; ls *.save* | ls *.save*; cd "${0%/*}")
echo -n ">"
read -r "name"
So $saveFiles equals:
Savegame1 savegame2 savegame3
I'm trying to make an if statement that tests whether the entered variable equals any of the files in the folder.
The following script works, except when I type only the end of a file's name. So if one of the files is called savegame and I type game, it comes up with a match, because game.save is contained in the string.
if [[ $saveFiles = *"$name".save* ]]
then
scene=$(cat "save/$name".save)
fi
I need to find a way to test whether any of the strings in $saveFiles are equal to the entered variable $name.
To reiterate, files in folder:
Save1.save
Save2.save
...
Read $name
If $name matches any file in the list, load the scene; otherwise repeat.
I hope this isn't confusing. Please feel free to ask me to clarify further. Thank you.
Maybe I am not understanding the question correctly, but why don't you first request the file name and then query the file system with precisely that name, e.g.
read name
if [[ -f "save/${name}.save" ]]; then
    echo "Found the file save/${name}.save"
fi
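To get the "otherwise repeat" behaviour you describe, you can wrap that check in a loop. A sketch, assuming the save files live in a save/ subdirectory as in your cat command:
while true
do
    echo -n ">"
    read -r name
    if [[ -f "save/${name}.save" ]]
    then
        scene=$(cat "save/${name}.save")
        break
    fi
    echo "No save file named ${name}.save, try again."
done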

"read" command not executing in "while read line" loop [duplicate]

This question already has answers here:
Read user input inside a loop
(6 answers)
Closed 5 years ago.
First post here! I really need help on this one; I looked the issue up on Google but can't manage to find a useful answer. So here's the problem.
I'm having fun coding something like a framework in bash. Everyone can create their own module and add it to the framework. BUT. To know what arguments the script requires, I created an "args.conf" file that must be in every module, and it looks kinda like this:
LHOST;true;The IP the remote payload will connect to.
LPORT;true;The port the remote payload will connect to.
The first column is the argument name, the second defines if it's required or not, the third is the description. Anyway, long story short, the framework is supposed to read the args.conf file line by line to ask the user a value for every argument. Here's the piece of code:
info "Reading module $name argument list..."
while read line; do
echo $line > line.tmp
arg=`cut -d ";" -f 1 line.tmp`
requ=`cut -d ";" -f 2 line.tmp`
if [ $requ = "true" ]; then
echo "[This argument is required]"
else
echo "[This argument isn't required, leave a blank space if you don't wan't to use it]"
fi
read -p " $arg=" answer
echo $answer >> arglist.tmp
done < modules/$name/args.conf
tr '\n' ' ' < arglist.tmp > argline.tmp
argline=`cat argline.tmp`
info "Launching module $name..."
cd modules/$name
$interpreter $file $argline
cd ../..
rm arglist.tmp
rm argline.tmp
rm line.tmp
succes "Module $name execution completed."
As you can see, it's supposed to ask the user a value for every argument... But:
1) The read command seems to not be executing. It just gets skipped, and the argument gets no value.
2) Despite the fact that the args.conf file contains 3 lines, the loop seems to execute just a single time. All I see on the screen is "[This argument is required]" just once, and then the module launches (and crashes because it doesn't have the required arguments...).
I really don't know what to do here... I hope someone has an answer ^^'.
Thanks in advance!
(and sorry for any mistakes, I'm French)
Alpha.
As @that other guy pointed out in a comment, the problem is that all of the read commands in the loop are reading from the args.conf file, not the user. The way I'd handle this is by redirecting the conf file over a different file descriptor than stdin (fd #0); I like to use fd #3 for this:
while read -u3 line; do
...
done 3< modules/$name/args.conf
(Note: if your shell's read command doesn't understand the -u option, use read line <&3 instead.)
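Another common fix, and the usual answer to the linked duplicate, is to leave the loop's stdin alone and have the inner read talk to the terminal directly:
read -p " $arg=" answer < /dev/tty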
There are a number of other things in this script I'd recommend against:
Variable references without double-quotes around them, e.g. echo $line instead of echo "$line", and < modules/$name/args.conf instead of < "modules/$name/args.conf". Unquoted variable references get split into words (if they contain whitespace) and any wildcards that happen to match filenames will get replaced by a list of matching files. This can cause really weird and intermittent bugs. Unfortunately, your use of $argline depends on word splitting to separate multiple arguments; if you're using bash (not a generic POSIX shell) you can use arrays instead; I'll get to that.
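To see the kind of weird, intermittent bug unquoted references cause, compare the two forms on a filename containing a space (the filename is just an illustration; the error wording shown is GNU ls's):
$ f="my file.txt"
$ touch "$f"
$ ls $f          # word-split into two arguments, "my" and "file.txt"
ls: cannot access 'my': No such file or directory
ls: cannot access 'file.txt': No such file or directory
$ ls "$f"        # passed as a single argument; lists the file as intended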
You're using relative file paths everywhere, and cding in the script. This tends to be fragile and confusing, since file paths are different at different places in the script, and any relative paths passed in by the user will become invalid the first time the script cds somewhere else. Worse, you aren't checking for errors when you cd, so if any cd fails for any reason, the entire rest of the script will run in the wrong place and fail bizarrely. You'd be far better off figuring out where your module system's root directory is (as an absolute path), then referencing everything from it (e.g. < "$module_root/modules/$name/args.conf").
Actually, you're not checking for errors anywhere. It's generally a good idea, when writing any sort of program, to try to think of what can go wrong and how your program should respond (and also to expect that things you didn't think of will also go wrong). Some people like to use set -e to make their scripts exit if any simple command fails, but this doesn't always do what you'd expect. I prefer to explicitly test the exit status of the commands in my script, with something like:
command1 || {
    echo 'command1 failed!' >&2
    exit 1
}
if command2; then
    echo 'command2 succeeded!' >&2
else
    echo 'command2 failed!' >&2
    exit 1
fi
You're creating temp files in the current directory, which risks random conflicts (with other runs of the script at the same time, any files that happen to have names you're using, etc). It's better to create a temp directory at the beginning, then store everything in it (again, by absolute path):
module_tmp="$(mktemp -dt module-system.XXXXXX)" || {
    echo "Error creating temp directory" >&2
    exit 1
}
...
echo "$answer" >> "$module_tmp/arglist.tmp"
(BTW, note that I'm using $() instead of backticks. They're easier to read, and don't have some subtle syntactic oddities that backticks have. I recommend switching.)
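One concrete example of those oddities: $() nests cleanly, while backticks need escaping. The readlink trick here is just an illustration:
script_dir="$(dirname "$(readlink -f "$0")")"
# the backtick equivalent needs escaped inner backticks and is easy to get wrong:
# script_dir=`dirname \`readlink -f "$0"\``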
Speaking of which, you're overusing temp files; a lot of what you're doing can be done just fine with shell variables and built-in shell features. For example, rather than reading lines from the config file, then storing them in a temp file and using cut to split them into fields, you can simply echo to cut:
arg="$(echo "$line" | cut -d ";" -f 1)"
...or better yet, use read's built-in ability to split fields based on whatever IFS is set to:
while IFS=";" read -u3 arg requ description; do
(Note that since the assignment to IFS is a prefix to the read command, it only affects that one command; changing IFS globally can have weird effects, and should be avoided whenever possible.)
Similarly, storing the argument list in a file, converting newlines to spaces into another file, then reading that file... you can skip any or all of these steps. If you're using bash, store the arg list in an array:
arglist=()
while ...
    arglist+=("$answer") # or ("$arg=$answer")? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" "${arglist[@]}"
(That messy syntax, with the double-quotes, curly braces, square brackets, and at-sign, is the generally correct way to expand an array in bash).
If you can't count on bash extensions like arrays, you can at least do it the old messy way with a plain variable:
arglist=""
while ...
arglist="$arglist $answer" # or "$arglist $arg=$answer"? Not sure of your syntax.
done ...
"$module_root/modules/$name/$interpreter" "$file" $arglist
... but this runs the risk of arguments being word-split and/or expanded to lists of files.
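Putting all of these suggestions together, the read-and-launch section shrinks to something like this. A sketch, not drop-in code: it assumes $module_root has been set to the framework's root directory as described above, and it guesses the $arg=$answer argument format that the answer was unsure about:
info "Reading module $name argument list..."
arglist=()
while IFS=";" read -u3 arg requ description; do
    if [ "$requ" = "true" ]; then
        echo "[This argument is required]"
    else
        echo "[This argument isn't required, leave a blank space if you don't want to use it]"
    fi
    read -p " $arg=" answer
    arglist+=("$arg=$answer")
done 3< "$module_root/modules/$name/args.conf"
info "Launching module $name..."
"$module_root/modules/$name/$interpreter" "$file" "${arglist[@]}"
succes "Module $name execution completed."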

Linux move files without replacing if files exists

In Linux, how do I move files without replacing them if a particular file already exists in the destination?
I tried the following command:
mv --backup=t <source> <dest>
The file doesn't get replaced, but the issue is that the extension gets changed, because it puts "~" at the end of the filename.
Is there a way to preserve the extension so that only the base filename gets changed when moving?
E.g.
test~1.txt instead of test.txt~1
When the extension gets mangled, you can no longer view a file just by double-clicking on it.
If you want to do it in shell, without requiring atomicity (so if two shell processes are running the same code at the same time, you could be in trouble), you can simply do (using the builtin test(1) feature of your shell):
[ -f destfile.txt ] || mv srcfile.txt destfile.txt
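If your mv is GNU coreutils, there's also a built-in flag for exactly this check, which silently skips the move when the destination already exists:
mv -n srcfile.txt destfile.txt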
If you require atomicity (something that works when two processes are simultaneously running it), things are quite difficult, and you'll need to use system calls from C. Look into renameat2(2).
Perhaps you should consider using some version control system like git?
mv has an option:
-S, --suffix=SUFFIX
override the usual backup suffix
which you might use; however, as far as I know, mv doesn't have functionality to change part of the filename but not the extension. If you just want to be able to open the backup file with a text editor, you might consider something like:
mv --suffix=.backup.txt <source> <dest>
how this would work: suppose you have
-rw-r--r-- 1 chris users 2 Jan 25 11:43 test2.txt
-rw-r--r-- 1 chris users 0 Jan 25 11:42 test.txt
then after the command mv --suffix=.backup.txt test.txt test2.txt you get:
-rw-r--r-- 1 chris users 0 Jan 25 11:42 test2.txt
-rw-r--r-- 1 chris users 2 Jan 25 11:43 test2.txt.backup.txt
@aandroidtest: if you are able to rely upon a Bash shell script and the source directory (where the files reside presently) and the target directory (where you want them to move to) are on the same file system, I suggest you try out a script that I wrote. You can find it at https://github.com/jmmitchell/movestough
In short, the script allows you to move files from a source directory to a target directory while taking into account new files, duplicate (same file name, same contents) files, and file collisions (same file name, different contents), as well as replicating needed subdirectory structures. In addition, the script handles file collision renaming in three forms. As an example, if /some/path/somefile.name.ext was found to be a conflicting file, it would be moved to the target directory with a name like one of the following, depending on the deconflicting style chosen (via the -u= or --unique-style= flag):
default style : /some/path/somefile.name.ext-< unique string here >
style 1 : /some/path/somefile.name.< unique string here >.ext
style 2 : /some/path/somefile.< unique string here >.name.ext
Let me know if you have any questions.
I guess the mv command is quite limited when it comes to moving files with the same filename.
Below is a bash script that can be used to move files; if a file with the same filename already exists at the destination, it appends a number to the filename, and the extension is preserved for easier viewing.
I modified the script that can be found here:
https://superuser.com/a/313924
#!/bin/bash
source=$1
dest=$2
file=$(basename "$source")
basename=${file%.*}
ext=${file##*.}
if [[ ! -e "$dest/$basename.$ext" ]]; then
    mv "$source" "$dest"
else
    num=1
    while [[ -e "$dest/$basename$num.$ext" ]]; do
        (( num++ ))
    done
    mv "$source" "$dest/$basename$num.$ext"
fi
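Saved under some name, say mvsafe.sh (the name is arbitrary), and made executable, it would be used like this:
$ chmod +x mvsafe.sh
$ ./mvsafe.sh test.txt /path/to/dest      # moves to /path/to/dest/test.txt
$ ./mvsafe.sh test.txt /path/to/dest      # with a new test.txt: lands as test1.txt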

I can't get my bash script to run

This is the script that will not run, and I am hoping someone can help me figure out what the issue is. I am new to Unix.
#!/bin/bash
# cat copyit
# copies files
numofargs=$#
listoffiles=
listofcopy=
# Capture all of the arguments passed to the command, store all of the arguments, except
# for the last (the destination)
while [ "$#" -gt 1 ]
do
listoffiles="$listoffiles $1"
shift
done
destination="$1"
# If there are less than two arguments that are entered, or if there are more than two
# arguments, and the last argument is not a valid directory, then display an
# error message
if [ "$numofargs" -lt 2 -o "$numofargs" -gt 2 -a ! -d "$destination" ]
then
echo "Usage: copyit sourcefile destinationfile"
echo" copyit sourcefile(s) directory"
exit 1
fi
# look at each sourcefile
for fromfile in $listoffiles
do
# see if destination file is a directory
if [ -d "$destination" ]
then
destfile="$destination/`basename $fromfile`"
else
destfile="$destination"
fi
# Add the file to the copy list if the file does not already exist, or it
# the user
# says that the file can be overwritten
if [ -f "$destfile" ]
then
echo "$destfile already exist; overwrite it? (yes/no)? \c"
read ans
if [ "$ans" = yes ]
then
listofcopy="$listofcopy $fromfile"
fi
else
listofcopy="$listofcopy $fromfile"
fi
done
# If there is something to copy - copy it
if [ -n "$listofcopy" ]
then
cp $listofcopy $destination
fi
This is what I got; it seems that the script didn't execute even though I did invoke it. I am hoping that someone can help me.
[taniamack#localhost ~]$ chmod 555 tryto.txt
[taniamack#localhost ~]$ tryto.txt
bash: tryto.txt: command not found...
[taniamack#localhost ~]$ ./tryto.txt
./tryto.txt: line 7: $'\r': command not found
./tryto.txt: line 11: $'\r': command not found
./tryto.txt: line 16: $'\r': command not found
./tryto.txt: line 43: syntax error near unexpected token `$'do\r''
./tryto.txt: line 43: `do'
Looks like your file contains Windows newline formatting: "\r\n". On Unix, a newline is just "\n". You can use dos2unix (apt-get install dos2unix) to convert your files.
Also have a look at the chmod manual (man chmod).
Most of the time I just use chmod +x ./my_file to grant execute rights.
I see a few issues. First of all, a mode of 555 means that no one can write to the file. You probably want chmod 755. Second of all, the script isn't on your $PATH, which is why bash couldn't find it by name. In Windows you also have a %PATH%, but there the current directory . is always searched by default; on Unix, adding the current directory to $PATH is highly discouraged because of security concerns. The standard is to put your scripts under the $HOME/bin directory and make that directory the last entry in your $PATH.
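In practice that setup looks something like this (the script and file names are taken from this question; the .bashrc line assumes bash is your login shell):
$ mkdir -p "$HOME/bin"
$ mv tryto.txt "$HOME/bin/tryto"
$ chmod 755 "$HOME/bin/tryto"
$ echo 'export PATH="$PATH:$HOME/bin"' >> ~/.bashrc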
First of all: Indent correctly. When you enter a loop or an if statement, indent the lines by four characters (that's the standard). It makes it much easier to read your program.
Another issue is your line endings. It looks like some of the lines have a Windows line ending on them while most others have a Unix/Linux/Mac line ending. Windows ends each line with two characters, Carriage Return and Linefeed, while Unix/Linux/Mac end each line with just a Linefeed. The \r is used to represent the Carriage Return character. Use a program editor like vim or gedit; a good program editor will make sure that your line endings are consistent and correct.
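If you want to confirm the diagnosis first, the file utility reports CRLF endings, and a one-line GNU sed command performs the same conversion as dos2unix (filename taken from the question):
$ file tryto.txt
tryto.txt: Bourne-Again shell script, ASCII text executable, with CRLF line terminators
$ sed -i 's/\r$//' tryto.txt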
