Script to check the contents of a directory in Linux

How could I write a Linux shell script to check the contents of a directory to see if a file with the same name already exists?
The directory/location to be checked would be obtained from a file /root/TAM/store using the grep function.
The contents of store are the directories of the files I have moved to a dustbin in a previous script; it records the directory each file was in before the mv.
The input is just the name of the file in the dustbin that you want to restore to its original location. If a file with that name already exists there, the script should ask you to rename it or choose a new directory.

If the file /root/TAM/store has just one line with the directory to search, you can do the following:
if ls `cat /root/TAM/store` | grep -q filename_to_look_for; then
    echo "filename_to_look_for exists"
else
    echo "filename_to_look_for doesn't exist"
fi
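A minimal alternative sketch, not from the answer above: read the stored directory into a variable and test for the file directly, which avoids parsing ls output. The $filename variable is a hypothetical placeholder for the name the user types in.
#!/bin/bash
# assumes /root/TAM/store holds a single directory path on one line
dir=$(cat /root/TAM/store)
filename=$1    # hypothetical: the dustbin file name passed as the first argument

if [ -e "$dir/$filename" ]; then
    echo "$filename already exists in $dir - rename it or choose a new directory"
else
    echo "$filename does not exist in $dir"
fi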

Here's a pretty good example of using the if statement in a bash script to check if a file exists (under the 7.1.1.3 heading)

There are a lot of details missing from your requirements, but in general you can do this to check whether a specific file exists:
grep "pattern" location_file | xargs ls

Related

Deleting all files in a directory except the ones mentioned in a list [duplicate]

I have a directory called a00 containing 3000 files with the extension .SAC. I have a text file called gd.list containing the names of 88 of those 3000 files. I am trying to write a script that will delete all .SAC files except those mentioned in gd.list.
How to do that using shell/bash?
The rm command is commented out so that you can check and verify that it's working as needed. Then just un-comment that line.
The check directory section will ensure you don't accidentally run the script from the wrong directory and clobber the wrong files.
You can remove the echo deleting line to run silently.
#!/bin/bash
# change this to the directory you want to clean (a00 in the question)
cd /home/me/myfolder2tocleanup/
# Exit if the directory isn't found.
if (( $? > 0 )); then
    echo "Can't find work dir... exiting"
    exit 1
fi
# filelist.txt is the list of files to keep (gd.list in the question)
for i in *; do
    if ! grep -qxFe "$i" filelist.txt; then
        echo "Deleting: $i"
        # the next line is commented out. Test it first, then uncomment to remove the files.
        # rm "$i"
    fi
done
The original answer, by L. D. James, is here: https://askubuntu.com/questions/830776/remove-file-but-exclude-all-files-in-a-list
There are a few alternatives.
I'd prefer the NUL-delimited variant (find -print0 together with grep -z/-Z), as it demarcates the file names unambiguously:
find . -maxdepth 1 -name '*.SAC' -print0 | grep -x -z -Z -f gd.list | xargs -0 echo rm
Again, test this first. Perhaps sort the output and make sure it is unique versus the original file.
For a smaller list of filenames I would recommend just using find with -and -not -name and -delete, but with a larger list that can be tricky.
You could tag the files you want to keep as read-only, then delete the wildcard with the appropriate setting in rm or find to skip read-only files. That assumes you own the read-only flag. You could tag the files as executable, and use find, if the read-only flag is not for you.
Another option would be to move the matching files to a temp folder, delete the wildcard, then move the files you want to keep back. That is assuming you can afford for the files to disappear temporarily.
To make them disappear for a shorter time, move the kept files out to a temp directory, move the original directory out of the way, move the temp directory in, then delete the moved-out directory.
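A minimal sketch of the temp-folder approach described above, assuming gd.list holds one bare filename per line and that you run it from inside a00 (the ../keep_tmp path is just an illustrative choice):
# park the files to keep, delete the rest, then bring the keepers back
mkdir ../keep_tmp
xargs -a gd.list -I{} mv -- {} ../keep_tmp/
rm -- *.SAC
mv ../keep_tmp/* . && rmdir ../keep_tmp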
If you are feeling brave, try something like
ls *.SAC | fgrep -v -f gd.list | xargs echo rm
Note that I've put an echo in that xargs, just to make sure no one has a cut and paste accident.
Note also the limitations of this approach mentioned in the comments. As I said, if you are feeling brave...

Create directories and download files by reading input from a file

cat paste_output.txt | while read -r file_name path_name file;
do mkdir -p -- "$path_name";
wget "$file_name";
mv "$file" "$path_name";
done;
Hi! I have this piece of code that reads field by field from the file specified. What I am trying to do here is: I create the directory specified in the second field, download the file specified in the first field, and after the download I move that file into the directory specified in the second field.
Output: I get the desired directory structure and the files are downloaded, however the files end up in the directory I am executing the commands from.
How do I move the files into the desired directories?
You can use the -P flag of wget to put the file in the target directory.
If the directory doesn't exist, wget will create it,
so this also lets you skip the mkdir.
while read -r file_name path_name file; do
wget -P "$path_name" "$file_name"
done < paste_output.txt
I made some other improvements to the script:
The cat is useless, input redirection is better
The semicolons at end of lines are unnecessary
It's good to indent the body of loops, for readability
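A small usage sketch of the loop above. The field layout (URL, target directory, saved file name) is an assumption inferred from the read order of the original loop; the sample URL is hypothetical:
# hypothetical input: one line with URL, destination directory, file name
printf '%s\n' 'https://example.com/data/report.csv reports/2024 report.csv' > paste_output.txt

while read -r file_name path_name file; do
    wget -P "$path_name" "$file_name"   # -P saves the download into path_name, creating it if needed
done < paste_output.txt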

Move files in a for loop

I want a script that reads the content of a text file containing folder names and moves those folders from their directory to a specific folder. Here is my script:
#!/bin/bash
for i in $(cat /folder/collected/folders.txt)
do
mv /fromfilelocation/$i /folder/Collected/
done
This script is only partly working: it moves only the last folder in the text file, and for the other folders it gives an error like "No such file or directory", even though the folder is there and the path shown in the error is correct.
What should I do in order to make it work correctly?
You can use this (replace All_bins.txt and All_Good_Bins with your own list file and target directory):
#!/bin/bash
for sample in `awk '{print $1}' All_bins.txt`
do
    mv "$sample" All_Good_Bins
done
Use a while loop instead:
while read -r i; do
    mv /fromfilelocation/"$i" /folder/Collected/
done < /folder/collected/folders.txt
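If only the last line of the list works, one common cause (an assumption here, it is not stated in the question) is that folders.txt has Windows-style line endings; a minimal sketch that strips a trailing carriage return before the mv:
while IFS= read -r i; do
    i=${i%$'\r'}                                  # drop a trailing CR left by Windows editors
    mv /fromfilelocation/"$i" /folder/Collected/
done < /folder/collected/folders.txt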

Move files from one dir to another and add to each files name in the new directory

I need to move each *.lis file from its current directory to a new directory and add a prefix to the file's existing name so that an application can pick up the file under the new name.
For example:
Move /u01/vista/vmfiles/CompressGens.lis and /u01/vista/vmfiles/DeleteOnline.lis
to
/u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA."$(date '+%m%d%y%H%M%S')"CompressGens.lis
and
/u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA."$(date '+%m%d%y%H%M%S')"DeleteOnline.lis
What I started out with in my script:
cp -f /u01/vista/vmfiles/*.lis /u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA."$(date '+%m%d%y%H%M%S')"*.lis
There are multiple *.lis files in the /u01/vista/vmfiles/ directory, and depending on the system and day, the *.lis files will not always be the same. Sometimes it is "DeleteOnline.lis" and CompressGens.lis but not ArchiveGens.lis; the next day it will be CompressGens.lis and ArchiveGens.lis.
So I will need to get the *.lis filenames in the /u01/vista/vmfiles/ directory, and then move each one.
You need a loop, so that you can do one file at a time.
ls -1tr *.lis | while read File
do
    cp -p "$File" ../Migration_Logs/${File%.lis}.$(date '+%m%d%y%H%M%S').CompressGens.lis &&
    mv "$File" ../Migration_Logs/${File%.lis}.$(date '+%m%d%y%H%M%S').DeleteOnline.lis
done
${File%.lis} is the bash/korn means of stripping that suffix - see ksh or bash man page.
The "&&" idiom is in order only to mv the file to the 2nd archived name if the copy for the 1st archived file works.
@Abe Crabtree, thanks for the help in pointing me in the right direction. Below is the final code that worked.
ls -1tr *.lis | while read File
do
    mv "$File" /u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA.$(date '+%m%d%y%H%M%S').${File%.lis}.lis
done
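A minimal alternative sketch that avoids parsing ls output (which breaks on file names containing spaces); it assumes the same source directory, target directory, and naming prefix as above:
for File in /u01/vista/vmfiles/*.lis; do
    base=$(basename "$File" .lis)
    mv "$File" "/u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA.$(date '+%m%d%y%H%M%S').$base.lis"
done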

output to a file in script directory

This is probably quite basic, but I have spent the whole day looking for an answer without much success.
I have an executable script that resides in ~/Desktop/shell/myScript.sh
I want a single-line command to run this script from my terminal that outputs to a new directory in the same directory where the script is located, no matter what my present working directory is.
I was using:
mkdir -p tmp &&
./Desktop/shell/myScript.sh|grep '18x18'|cut -d":" -f1 > tmp/myList.txt
But it creates the new directory in the present working directory and not in the target location.
Any help would be appreciated.
Thanks!
You could solve it in one line if you pre-define a variable:
export LOC=$HOME/Desktop/shell
Then you can say
mkdir -p "$LOC/tmp" && "$LOC/myScript.sh" | grep '18x18' | cut -d":" -f1 > "$LOC/tmp/myList.txt"
But if you're doing this repeatedly it might be better long-term to wrap myScript.sh so that it creates the directory, and redirects the output, for you. The grep and cut parameters, as well as the output file name, would be passed as command-line arguments and options to the wrapper.
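A minimal sketch of such a wrapper, under the assumption that it lives next to myScript.sh; the wrapper name, argument order, and defaults are hypothetical:
#!/bin/bash
# runScript.sh (hypothetical name): run myScript.sh and filter its output
# into a tmp directory next to the script
LOC=$(dirname "$0")        # directory this wrapper lives in
pattern=${1:-18x18}        # grep pattern (default taken from the question)
field=${2:-1}              # cut field
outfile=${3:-myList.txt}   # output file name

mkdir -p "$LOC/tmp"
"$LOC/myScript.sh" | grep "$pattern" | cut -d":" -f"$field" > "$LOC/tmp/$outfile"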
How about this:
SCRIPTDIR="./Desktop/shell/" ; mkdir "$SCRIPTDIR/tmp" ; "$SCRIPTDIR/myScript.sh" | grep '18x18' | cut -d ":" -f 1 > "$SCRIPTDIR/tmp/myList.txt"
In your case you have to give the path to the script anyway. If you put the script in a directory that is automatically searched, e.g. $HOME/bin, so that you can just type myScript.sh without the directory prefix, you can use SCRIPTDIR=$( dirname $( which myScript.sh ) ).
Mixing binaries and data files in the same directory is usually a bad idea. For temporary files, /tmp is the place to go. Consider that your script might become famous, get installed by the administrator in /usr/bin, and be run by several people at the same time. For that reason, think of mktemp.
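A minimal sketch of the mktemp suggestion, keeping the grep/cut filter from the question; the final echo is just so you can find the result:
# write the filtered output into a unique, per-run temporary directory
tmpdir=$(mktemp -d) &&
    ~/Desktop/shell/myScript.sh | grep '18x18' | cut -d":" -f1 > "$tmpdir/myList.txt" &&
    echo "results in $tmpdir/myList.txt"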
Your script can do this for you with a few lines of code.
Instead of doing this manually from the command line (no matter where you later move the script to), add the following to the script itself.
[1] Find your script directory location using dirname
script_directory=`dirname "$0"`
The above code will find your script directory and save it in a variable.
[2] Create your "tmp" folder in your script directory
mkdir "$script_directory/tmp 2> /dev/null"
The above code will make a directory called "tmp" in your script directory. If the directory exist, mkdir will not overwrite any existing directory using this command line and gave an error. I hide all errors by "2> /dev/null"
[3] Open your script and modify it using "cut" and then redirect the output to a new file
cat "$0"|grep '18x18'|cut -d":" -f1 > "$script_directory"/tmp/myList.txt
