Need assistance creating a simple bash script - Linux

I've created this bash file containing a sequence of commands I often run for syncing files from my digital camera. The point is, it doesn't do ANYTHING! What am I missing?
Thank you!
code:
#!/bin/bash
#temporal
mkdir /tmp/canon
#copy files from camera
rsync -r /run/user/mango/gvfs/g*/DCIM /tmp/canon
cd /tmp/canon
#get files from subdirs
find ./ -name '*.JPG' -exec mv '{}' ./ \;
#remove dirs
ls -l | awk -F'[0-9][0-9]:[0-9][0-9]' '/^d/{print $NF}' | xargs -i rm -rf '{}'
#recreate folder structure with year|month pattern
jhead -n%Y/%m/%f *.JPG
#Sync with external HD
rsync -r --ignore-existing . /media/mango/WD/FOTOS/

If it does not even do the mkdir, then it sounds most likely that the version of the script you want is not the one running. Try using a qualified path, such as ./myscript, or an absolute path, like /home/joe/bin/myscript. The command type myscript will tell you from where the shell is running it.
Also, try running the script after adding set -x to the top of the script or using bash -x myscript; that will show every line as it is executed.
If this still doesn't help, there could be bash startup code, such as in .bashrc, getting in the way. That's much harder to diagnose. The same set -x can be used there too, but with great caution: mistakes in startup scripts can make it impossible to log in to the system, so make sure a second user can log in and edit this user's startup scripts first.
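For example (assuming the script is saved as myscript in the current directory):
type myscript        # shows whether the shell resolves it to an alias, a function, or a file, and which one
bash -x ./myscript   # prints each command as it is executed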

Try this
chmod +x yourscriptname
./yourscriptname
Make sure you are running the same script you made.

Related

Mkdir working for one folder but not the other in a bash script?

This is probably a simple fix, but I wrote a bash script to create two directories, one of which is a sub-directory of the other. I will link the script below. It creates /usr/local/sites just fine, but for some reason it won't create the A-upgrade directory below it. Any thoughts?
#!/bin/bash
DIRECTORY=/usr/local/sites/
SITE=A
sudo mkdir -p "$DIRECTORY"
sudo mkdir -p "$DIRECTORY/$SITE-upgrade/"
cd "$DIRECTORY/$SITE-upgrade/"
After help from the others in the comments, I stupidly realized that I had a cleanup function in my script that was deleting my directory, which is what it was supposed to do. Thanks again for the help, guys. Sometimes it helps to add a "-x". The cleanup function did the following and was deleting the directory I was searching for.
log "cleaning up folder"
log "cd up a directory"
cd ..
log "remove folder $SITE-upgrade"
find "$SITE-upgrade" -type d | xargs rm -rf
You have $SITE in the sudo statement instead of $SITES, which is the variable you assigned above the sudo statement.
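If the names really do differ like that, bash silently expands the unset one to an empty string, for example:
SITES=A                                     # the name actually assigned
sudo mkdir -p "$DIRECTORY/$SITE-upgrade/"   # $SITE is unset, so this creates $DIRECTORY/-upgrade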

Shell Script to Recursively Loop Through Directory and print location of important files

So I am trying to write a shell script that will recursively loop through a directory, all its files, and its sub-directories, looking for certain files, and then print the location of these files to a text file.
I know that this is possible using commands such as find and locate, combined with -exec and output redirection (>).
This is what I have so far:
find <top-directory> -name '*.class' -exec locate {} > location.txt \;
This does not work, though. Can any bash or shell scripting experts help me out, please?
Thank you for reading this.
The default behavior of find (if you don't specify any other action) is to print the filename. So you can simply do:
find <top-directory> -name '*.class' > location.txt
Or if you want to be explicit about it:
find <top-directory> -name '*.class' -print > location.txt
You can save the redirection by using find's -fprint option:
find <top-directory> -name '*.class' -fprint location.txt
From the man page:
-fprint file
[...] print the full file name into file file. If file does not exist when find is run, it is created; if it does exist, it is truncated.
A less preferred way to do it is to use ls:
ls -d $PWD/**/* | grep class
Let's break it down:
ls -d # lists the directory (returns `.`)
ls -d $PWD # lists the directory, but this time $PWD provides the full path
ls -d $PWD/** # lists the directory with the full path and every file directly under it (not recursively), due to the `/**` part
ls -d $PWD/**/* # same as the previous one, only now it recurses into the folders below (achieved by adding `/*` at the end)
A better way of doing it:
After reading this on Charles Duffy's recommendation, it appears to be a bad idea to parse the output of either ls or find (the article also says: "find is just as bad as ls in this context"). The reason it's a bad idea is that you can't control the output of ls: for example, you can't configure ls to terminate filenames with NUL. That's problematic because Unix allows all kinds of weird characters in a filename (newline, pipe, etc.), which will "break" the output of ls in ways you can't anticipate.
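If you do need to pipe filenames between tools, the usual escape hatch is NUL-terminated output, since NUL is the one character a filename cannot contain. A sketch with GNU find and xargs:
find . -name '*.class' -print0 | xargs -0 -I{} echo "found: {}"   # survives newlines and other odd characters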
Better to use a shell script for the task, and it's a pretty simple one, too:
Create a file my_script.sh, edit the file to contain:
#!/bin/bash
shopt -s globstar   # make ** match recursively (bash >= 4.0)
for i in **/*; do
    echo "$PWD/$i"
done
Give it execute permissions (by running: chmod +x my_script.sh).
Run it from the same directory with:
./my_script.sh
and you're good to go!

Running command recursively in linux

I'm trying to come up with a command that would run mp3gain FOLDER/SUBFOLDER/*.mp3 in each subfolder, but I'm having trouble understanding why this command doesn't work:
find . -type d -exec mp3gain \"{}\"/*.mp3 \;
When run, I get the error Can't open "./FOLDER/SUBFOLDER"/*.mp3 for reading for each folder and subfolder.
If I run command manually with mp3gain "./FOLDER/SUBFOLDER"/*.mp3 it works. What's going wrong?
If you have a fixed directory structure like
folder1/subfolder1/
folder1/subfolder2/
folder2/subfolder1/
[...]
and are using zsh, or bash version >= 4.0 with shopt -s globstar enabled, you could try
mp3gain **/*.mp3
But to make sure, check the output of
ls **/*.mp3
before getting serious with mp3gain.
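Concretely, in bash the recursive ** has to be switched on first (zsh recurses with ** out of the box):
shopt -s globstar   # bash >= 4.0
ls **/*.mp3         # dry run: inspect the list first
mp3gain **/*.mp3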
When you run mp3gain "./FOLDER/SUBFOLDER"/*.mp3 from your shell, the *.mp3 is expanded by the shell before being passed to mp3gain. When find runs it, there is no shell involved, and the *.mp3 is passed literally to mp3gain. The latter has no idea how to deal with wildcards (because normally it doesn't have to).
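One way around this is to have find start a shell per directory, so the glob is expanded where a shell is available. A sketch (substitute echo for mp3gain first as a dry run):
find . -type d -exec sh -c 'mp3gain "$1"/*.mp3' _ {} \;   # in directories with no .mp3 files the pattern is passed through literally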
Hmmm. Just tried this to test how the directory is parsed by replacing mp3gain with echo and it works:
find . -type d -exec echo {}\/\*.mp3 \;
Try running your version of the command but with echo to see the file output for yourself:
find . -type d -exec echo \"{}\"/*.mp3 \;
Seems the quotes get in the way in your original command.
This works:
find /music -name '*.mp3' -exec mp3gain -r -k {} \;

Find not working in script, working in terminal prompt

I'm trying to run a bash script on Linux (Ubuntu, but also Fedora), but the find command won't work.
search=\"*${exten[iterext]}\"
find $direc{iterdir} $r_option -iname $search exec -rm {} \\\;
Now to explain the variables:
exten is an array of file extensions read from a text file (no problem here).
direc is also an array, of directories read from the command line.
iterdir and iterext are integer loop variables.
Now I have two problems:
1 - This find command will not delete (or display, for that matter) if I run it inside a script; however, if I put an echo before the find and copy-paste the output into a command prompt, find works fine. I've tried the script under Ubuntu and Fedora, so I assume it's not a bash configuration issue. I should note that the issue seems to be $search: if I replace $search with a hardcoded string (like "*.txt") it works inside the script, so it seems to be a quotation issue.
2 - I run that entire find command and also get: find: missing argument to '-exec'
Please help :-( it's driving me insane.
Start simple by placing everything in the find command, then worry about parameterizing it.
${exten[iterext]} should be ${exten[$iterext]}
$direc{iterdir} should be ${direc[$iterdir]}
exec -rm should be -exec rm
\\\; should be \;
Quote your variables to prevent word splitting
The following will perform a dry run thanks to the echo. Simply remove the echo when you are satisfied with the output to perform the deletions.
find "${direc[$iterdir]}" "$r_option" -name "*${exten[$iterext]}" -exec echo rm {} \;
Your use of quotes seems a little odd to me. Try this:
find "${direc[$iterdir]}" $r_option -iname "*${exten[$iterext]}" -exec rm "{}" ";"
Oh, and run your shell script with the -x option. This will print every command line before it is executed.
set -x
find "$direc{iterdir}" $r_option -iname "*${exten[iterext]}" -exec -rm "{}" ";"
set +x

How do I exclude a folder when performing file operations (e.g. cp, mv, rm, chown) in Linux

How do you exclude a folder when performing file operations, e.g. cp?
I currently use the wildcard * to apply the file operation to everything, but I need to exclude one single folder.
The command I actually want to use is chown, to change the owner of all the files in a directory, but I need to exclude one subdirectory.
If you're using bash and enable extglob via shopt -s extglob then you can use !(<pattern>) to exclude the given pattern.
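For example (a sketch; dir_to_exclude is a placeholder, and Camsoft is the owner name used elsewhere in this thread):
shopt -s extglob
chown -R Camsoft !(dir_to_exclude)   # everything in the current directory except dir_to_exclude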
find dir_to_start -name dir_to_exclude -prune -o -print0 | xargs -0 chown owner
find dir_to_start -not -name "file_to_exclude" -print0 | xargs -0 chown owner
for file in *; do
    if [ "$file" != "file_I_dont_want_to_chown" ]; then
        chown -R Camsoft "$file"
    fi
done
Combine multiple small, sharp Unix tools.
To exclude the folder "foo":
% ls -d * | grep -v foo | xargs -d "\n" chown -R Camsoft
For this situation I would recommend using find. You can specify paths to exclude using -not -iwholename 'PATH'. Then, using -exec, you execute the command you want.
find . -not -iwholename './var/foo*' -exec chown www-data '{}' \;
Although the above probably does help for your situation, I have also seen scripts set the immutable flag. Make sure you remove the flag when you're done; you should use trap for this, just in case the script is killed early (note: run from a script, the trap code runs when the bash session exits). A lot of trouble in my opinion, but it's good in some situations.
cd /var
trap 'chattr -R -i foo > /dev/null 2>&1' 0
chattr -R +i foo
chown -R www-data *
Another option might be to temporarily remove permissions on that file/folder.
In Unix you need 'x' permission on a directory to enter it.
Edit: obviously this isn't going to work if you are backing up a live production database, but for excluding your 'interesting images' collection when copying documents to a USB key it's reasonable.
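A sketch of that approach (the folder and destination names are illustrative; note that root bypasses permission checks, so this only helps for non-root operations):
chmod a-x interesting_images   # nobody can descend into the folder now
cp -R * /media/usb/docs/       # cp complains about that folder and copies the rest
chmod a+x interesting_images   # restore access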
