Create a symlink for every file in a folder tree [duplicate] - linux

This question already has answers here:
symlink-copying a directory hierarchy
(7 answers)
Closed 4 years ago.
I need a Bash script that will create a symlink for every *.mp3 file in folder X (and its subfolders), and store those symlinks in folder Y, without subfolders and probably skipping duplicates.
For the curious, this is to automate a radio station using Libretime.
And sorry if this is a dumb question; I've never written a Bash script before.

As in the comment: use find to create a list of the mp3 files:
find /top/dir/for/mp3s -name '*.mp3'
You will want to loop over that output, so:
find /top/dir/for/mp3s -name '*.mp3' | while read mp3file; do
    # do the linking
done
You want to create a link in a specific directory, probably with the same filename. You can get the filename with basename. So that would make it something like this:
find /top/dir/for/mp3s -name '*.mp3' | while read mp3file; do
    filename=$(basename $mp3file)
    ln -s $mp3file /dir/where/the/links/are/$filename
    echo Linked $mp3file to /dir/where/the/links/are/$filename
done
However, this will probably give you two types of error:
If the mp3 filename contains spaces, basename will not produce the correct filename and ln will complain. Solution: use correct quoting.
If you have duplicate filenames, ln will complain that the link already exists. Solution: test if the link exists.
Because you're not destroying anything, you can try it and actually see the problems. So, our next iteration would be:
find /top/dir/for/mp3s -name '*.mp3' | while IFS= read -r mp3file; do
    filename=$(basename "$mp3file")
    if [ ! -L "/dir/where/the/links/are/$filename" ]; then
        ln -s "$mp3file" "/dir/where/the/links/are/$filename"
        echo "Linked $mp3file to /dir/where/the/links/are/$filename"
    else
        echo "Not linked $mp3file; link exists"
    fi
done
That should give you a fairly good result. It also gives you a good starting point.
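If the file names might also contain leading blanks, backslashes, or even newlines, a null-delimited find pipeline is sturdier than relying on read's default line handling. A minimal sketch of the same loop in that style (assuming your find supports -print0, as GNU and BSD find do):
find /top/dir/for/mp3s -name '*.mp3' -print0 | while IFS= read -r -d '' mp3file; do
    filename=$(basename "$mp3file")
    # -L tests whether the link already exists before creating it
    if [ ! -L "/dir/where/the/links/are/$filename" ]; then
        ln -s "$mp3file" "/dir/where/the/links/are/$filename"
    fi
done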

Related

Writing a bash script to find all files in a directory that start with a, and do nothing if one exist [duplicate]

This question already has answers here:
Do not show results if directory is empty using Bash
(3 answers)
Closed 4 years ago.
So I have to find all the files in the directory that start with the letter a, and list them out. This is pretty easy by doing
cd some_directory
for file in a*; do
    echo "$file"
done
However I want that if there are no files present that match a*, then the for loop will not run at all. Currently, if this is the case then the shell will echo
a*
Is there a way to do this? Thank you
Your text is the opposite of your title; in my answer below I've assumed the text is your intention and the title is incorrect.
Globs can be made to act like this with the bash shell option nullglob:
shopt -s nullglob
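With nullglob set, an unmatched glob expands to nothing, so the loop body simply never runs. A minimal sketch putting it together:
shopt -s nullglob
cd some_directory || exit 1
for file in a*; do
    echo "$file"
done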
An alternative is to use find and ignore errors by piping stderr to /dev/null
for file in $(find a* 2>/dev/null); do
    echo "$file"
done

Deleting all files in a directory except the ones mentioned in a list [duplicate]

This question already has answers here:
Shell script: How to delete all files in a directory except ones listed in a file?
(2 answers)
Closed 2 years ago.
I have a directory called a00 containing 3000 files with extension .SAC. I have a text file called gd.list containing names of 88 of those 3000 files. I am trying to write a code that will delete all .SAC files except those mentioned in gd.list
How to do that using shell/bash?
The rm command is commented out so that you can check and verify that it's working as needed. Then just un-comment that line.
The check directory section will ensure you don't accidentally run the script from the wrong directory and clobber the wrong files.
You can remove the echo deleting line to run silently.
#!/bin/bash
cd /home/me/myfolder2tocleanup/
# Exit if the directory isn't found.
if (($? > 0)); then
    echo "Can't find work dir... exiting"
    exit 1
fi
for i in *.SAC; do
    if ! grep -qxFe "$i" gd.list; then
        echo "Deleting: $i"
        # the next line is commented out. Test it. Then uncomment to remove the files.
        # rm "$i"
    fi
done
You can find the answer here https://askubuntu.com/questions/830776/remove-file-but-exclude-all-files-in-a-list by L. D. James
There are a few alternatives.
I'd prefer a null-delimited pipeline, as NUL bytes unambiguously demarcate the file names. With GNU find, -printf '%f\0' emits just the basename so grep's whole-line match (-x) against gd.list works, and -v inverts the match so the listed files are kept:
find . -maxdepth 1 -name '*.SAC' -printf '%f\0' | grep -zZxvFf gd.list | xargs -0 echo rm --
Again, test this first. Perhaps sort the output and make sure it is unique versus the original file.
For a smaller list of filenames I would recommend just using find with -and -not -name and -delete, but with a larger list that can be tricky.
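As a sketch of that find form (the two kept names are hypothetical placeholders; -delete requires GNU or BSD find):
find . -maxdepth 1 -name '*.SAC' -not -name 'keep-me-1.SAC' -not -name 'keep-me-2.SAC' -delete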
You could tag the files you want to keep as read-only, then delete by wildcard with rm or find set to skip read-only files. That assumes you own the read-only flag. You could instead tag the files as executable and use find, if the read-only flag is not for you.
Another option would be to move the files you want to keep to a temp folder, delete by wildcard, then move them back. That is assuming you can afford for the files to disappear temporarily.
To make them disappear for a shorter time, move the kept files out to a temp directory, move the original directory out, move the temp directory in, then delete the moved-out directory.
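A sketch of that shuffle, assuming gd.list holds one bare file name per line and /tmp/keep is a scratch directory you can spare:
mkdir -p /tmp/keep
# move the keepers out of harm's way
while IFS= read -r f; do mv -- "$f" /tmp/keep/; done < gd.list
# delete everything that's left, then bring the keepers back
rm -- *.SAC
mv /tmp/keep/* . && rmdir /tmp/keep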
If you are feeling brave, try something like
ls *.SAC | fgrep -v -f gd.list | xargs echo rm
Note that I've put an echo in that xargs, just to make sure no one has a cut and paste accident.
Note also the limitations of this approach mentioned in the comments. As I said, if you are feeling brave...

How to recursively get all files filtered by multiple extensions within a folder including working folder without using find in Bash script

I have this question after quite a day of searching the net; perhaps I'm doing something wrong. Here is my script:
#!/bin/bash
shopt -s extglob
FILE_EXTENSIONS=properties\|xml\|sh\|sql\|ksh
SOURCE_FOLDER=$1
if [ -z "$SOURCE_FOLDER" ]; then
SOURCE_FOLDER=$(pwd)
fi # Set directory to current working folder if no input parameter.
for file in $SOURCE_FOLDER/**/*.*($FILE_EXTENSIONS)
do
echo Working with file: $file
done
Basically, I want to recursively get all files, filtered by a list of extensions, under a directory that is passed as an argument, including files in that directory itself.
I would like to know if there is a way of doing this and how without the use of the find command.
Imagine I have this file tree:
bin/props.properties
bin/xmls.xml
bin/source/sources.sh
bin/config/props.properties
bin/config/folders/moreProps.xml
My script, as it is right now and running from /bin, would echo:
bin/source/sources.sh
bin/config/props.properties
bin/config/folders/moreProps.xml
Leaving the ones in the working path aside.
P.S. I know this can be done with find but I really want to know if there's another way for the sake of learning.
Thanks!
You can use find with grep, just like this:
#!/bin/bash
SOURCE_FOLDER=$1
EXTENSIONS="properties|xml|sh|sql|ksh"
find "$SOURCE_FOLDER" | grep -E "\.(${EXTENSIONS})$"
# or even better
find "$SOURCE_FOLDER" -regextype posix-egrep -regex ".*\.(${EXTENSIONS})"
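Since the question explicitly asks for a find-free way: bash's globstar option (bash 4+) combined with extglob recurses and includes the top level. A sketch of the original script reworked under that assumption:
#!/bin/bash
shopt -s extglob globstar nullglob
SOURCE_FOLDER=${1:-$(pwd)}   # default to the current working folder
# ** matches any depth, @(...) matches exactly one of the listed extensions
for file in "$SOURCE_FOLDER"/**/*.@(properties|xml|sh|sql|ksh); do
    echo "Working with file: $file"
done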

How to make this (l)unix script dynamically accept directory name in for-loop?

I am teaching myself more (l)unix skills and wanted to see if I could begin to write a program that will eventually read all .gz files and expand them. However, I want it to be super dynamic.
#!/bin/bash
dir=~/derp/herp/path/goes/here
for file in $(find "$dir" -name '*.gz')
do
    echo "$file"
done
So when I execute this file, I simply go
bash derp.sh.
I don't like this. I feel the script is too brittle.
How can I rework my for loop so that I can say
bash derp.sh ~/derp/herp/path/goes/here (1)
I tried re-coding it as follows:
for file in $*
However, I don't want to have to type in bash derp.sh ~/derp/herp/path/goes/here/*.gz.
How could I rewrite this so I could simply type what is in (1)? I feel I must be missing something simple?
Note
I tried
for file in $*/*.gz and that obviously did not work. I appreciate your assistance; my sources have been a Wrox Unix text, carpentry v5, and man files. Unfortunately, I haven't found anything that will do what I want.
Thanks,
GeekyOmega
for dir in "$#"
do
for file in "$dir"/*.gz
do
echo $file
done
done
Notes:
In the outer loop, dir is assigned successively to each argument given on the command line. The special form "$@" is used so that directory names that contain spaces will be processed correctly.
The inner loop runs over each .gz file in the given directory. By placing $dir in double-quotes, the loop will work correctly even if the directory name contains spaces. This form will also work correctly if the gz file names have spaces.
#!/bin/bash
for file in $(find "$@" -name '*.gz')
do
    echo "$file"
done
You'll probably prefer "$@" instead of $*; if you were to have spaces in filenames, like with a directory named My Documents and a directory named Music, $* would effectively expand into:
find My Documents Music -name '*.gz'
where "$#" would expand into:
find "My Documents" "Music" -name '*.gz'
Requisite note: Using for file in $(find ...) is generally regarded as a bad practice, because it does tend to break if you have spaces or newlines in your directory structure. Using nested for loops (as in John's answer) is often a better idea, or using find -print0 and read as in this answer.
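For completeness, a sketch of that null-delimited form, which survives spaces and even newlines in paths:
#!/bin/bash
find "$@" -name '*.gz' -print0 | while IFS= read -r -d '' file; do
    echo "$file"
done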

Bash script - find directory, and do something in it. Or find a file, and do something with it

This is what I have so far:
for f in $(svn ls repository_dir); do
    svn checkout repository_dir/$f/trunk/dir1/dir2/dir3/dir4/needed_dir
done
This works great for the projects (100's of them) that have the needed_dir in the correct place. But some projects ($f) have their directory structure a little different. So "needed_dir" might be in a different location.
In the do loop, how can I tell my bash script to:
"Find "needed_dir". If found, check it out."
Or
"Find "needed_file.txt". If found, check it out."
Thank you for any help
EDIT: Sorry, this may be more of a superuser.com question. I actually meant to write it there. But, maybe someone can help me here too!
You can use the find command on Linux.
find will recursively search the directory whose name you give as an argument.
Suppose you have a directory structure like this:
parent
-dir1
--file1
--file2
--file3
-dir2
--file4
--file5
--file6
--dir3
Now you execute the following command:
find parent -name 'dir*'
You will get an output like this:
parent/dir1
parent/dir2
parent/dir2/dir3
For example, see the code below:
#!/bin/sh
parentdir=$1
tofind=$2
# * is appended so the name only needs to match as a prefix; adjust as needed
for i in $(find "$parentdir" -name "${tofind}*"); do
    # check if it is a directory
    if [ -d "$i" ]; then
        # do what you want to do
        echo "$i"
    else
        # do something else
        echo "$i"
    fi
done
This takes two inputs:
Your svn parent directory
Name of file/directory to be searched
Hope this helps. Let me know if you need further inputs.
You can also refer to this question.
Without going into great detail (as I don't have the time right now), here is a short example you can work off of.
#!/bin/bash
sudo updatedb
if [ ! -f testfile1.txt ]; then
    filelocation=$(locate awesomeo.txt)
    head -10 "$filelocation"
fi
What it does is update the file database; if the file you are looking for is not in the current directory, it locates the file and displays its first ten lines.
In your case you may want to do filelocation=$(locate "$forloopvariable") or whatever you are looking for; this would of course go in an else branch rather than the if, to match your needs.
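A sketch of that rearrangement, where somefile is a hypothetical variable holding the name you are after:
#!/bin/bash
sudo updatedb
if [ -f "$somefile" ]; then
    head -10 "$somefile"
else
    # not here; ask locate, taking the first hit (if any)
    filelocation=$(locate "$somefile" | head -n 1)
    [ -n "$filelocation" ] && head -10 "$filelocation"
fi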
If you have any other questions feel free to ask.
This snippet should work, I guess:
for f in $(svn ls repository_dir); do
    find "repository_dir/$f" -type d -name needed_dir | xargs -r svn checkout
done
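And for the "find a file, check it out" variant of the question, something along these lines should work, assuming as above that repository_dir is a path find can walk (dirname strips the file name to leave its directory):
for f in $(svn ls repository_dir); do
    found=$(find "repository_dir/$f" -type f -name 'needed_file.txt' | head -n 1)
    if [ -n "$found" ]; then
        svn checkout "$(dirname "$found")"
    fi
done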
