I rsync my photos from one (Linux) disk partition to another (a backup location) using a shell script.
The problem is that I need to re-scale all photos that are saved on the backup location, for example with mogrify.
Is it possible to post-process every file that is synced/copied by rsync?
That is, to execute mogrify on every synced file?
Another way could be to use rsync only to generate the list of files that have to be synced, and as a next step run a loop that mogrifies every list entry and writes the scaled photo to the backup location.
The problem is that I would have to create all the directories and child directories first, to keep the original folder structure before saving the photo.
Using rsync would handle the folder creation "on the fly".
So: is it possible to execute a command on every file synced with rsync?
rsync has a -i/--itemize-changes flag to output a summary of what it does with each file.
I suggest you play a bit with it; I'm seeing it output lines like >f+++++++++ file1 for a new file, >f..T...... file1 for an unchanged file, >f.sT...... file1 for an update, etc.
Having that, you can read the output into a variable and parse it with grep and cut:
#!/bin/bash
log=$(rsync -i rsync-client/* rsync-server/)
# keep only the lines for newly created files and extract the file name
newFiles=$(echo "$log" | grep '>f+++++++++' | cut -d' ' -f2)
for file in $newFiles
do
    echo "Added file $file"
done
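Once you have that list, you can feed it into whatever post-processing you need. For the mogrify case from the question, a rough, untested sketch along the same lines (the source/destination paths and the 1600x1600 resize geometry are assumptions):

#!/bin/bash
src=/photos/
dst=/backup/photos/
log=$(rsync -ai "$src" "$dst")
# '^>f' matches every regular file that was actually transferred (new or updated)
echo "$log" | grep '^>f' | cut -d' ' -f2- | while IFS= read -r file
do
    # re-scale the copy at the backup location, never the original
    mogrify -resize '1600x1600>' "$dst/$file"
done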
After an HD problem and some recovery work, I have a bunch of files with names like "f1234", "f1235", etc.
My goal is to sort these files according to their file type. For example, I want to move all the PDF files into a "pdfs" directory.
For one file, I can do "file f1234", and if it's a PDF, I can "mv f1234 pdfs/". But I have thousands of files... Can you help me with a bash or zsh command to sort all the PDFs in one pass? Thanks
The hard part here is reliably turning the output of file into a directory name. I think the best candidate for that is the MIME type of the file rather than the human-readable output of file. I'd use something like:
mkdir sorted
for f in f*
do
    # use the MIME type (e.g. application/pdf) as the directory name, with / turned into -
    d=$(file -b --mime-type "$f" | tr / -)
    mkdir -p "sorted/$d"
    mv "$f" "sorted/$d/"
done
Obviously I'd test that out a bit before running it on your files, but something pretty close to that should work.
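If you really only want the PDFs moved into a pdfs directory, as in the question, a shorter variant of the same idea (a sketch, not tested beyond a quick look):

mkdir -p pdfs
for f in f*
do
    # move only the files whose MIME type is application/pdf
    [ "$(file -b --mime-type "$f")" = "application/pdf" ] && mv "$f" pdfs/
done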
I have several .vcf.gz files:
subset_file1.vcf.vcf.gz
subset_file2.vcf.vcf.gz
subset_file3.vcf.vcf.gz
I want to gunzip these files and rename them (remove subset_ and the redundant .vcf extension) in one go, to get these files:
file1.vcf
file2.vcf
file3.vcf
This is the script I have tried:
iFILES=/file/path/*.gz
for i in $iFILES;
do gunzip -k $i > /get/in/this/dir/"${i##*/}"
done
Since you have to do three operations on the output path name:
1. remove the directory part
2. remove the subset_ prefix
3. remove the redundant .vcf extension
it's hard to accomplish with only one command.
Following is a modified version. Be CAREFUL when trying it; I didn't test it thoroughly on my computer.
for i in /file/path/*.gz
do
    # build the output file name: strip the directory, the subset_ prefix and the extra .vcf
    o=$(echo "${i##*/}" | sed 's/.*_\(.*\)\(\.[a-z]\{3\}\)\{2\}.*/\1\2/g')
    # -c writes the decompressed data to stdout so it can be redirected to the new name
    gunzip -c "$i" > /get/in/this/dir/"$o"
done
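An alternative sketch that avoids sed entirely and builds the name with plain parameter expansion (untested; it assumes the names all follow the subset_*.vcf.vcf.gz pattern shown above):

for i in /file/path/*.gz
do
    o=${i##*/}          # strip the directory part
    o=${o#subset_}      # strip the subset_ prefix
    o=${o%.vcf.gz}      # strip the trailing .vcf.gz, leaving e.g. file1.vcf
    gunzip -c "$i" > /get/in/this/dir/"$o"
done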
I need to move each *.lis file from its current directory to a new directory, adding to the file's existing filename so that an application can pick up the file under its new name.
For example:
Move /u01/vista/vmfiles/CompressGens.lis and /u01/vista/vmfiles/DeleteOnline.lis
to
/u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA."$(date '+%m%d%y%H%M%S')"CompressGens.lis
and
/u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA."$(date '+%m%d%y%H%M%S')"DeleteOnline.lis
What I started out with in my script:
cp -f /u01/vista/vmfiles/*.lis /u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA."$(date '+%m%d%y%H%M%S')"*.lis
There are multiple *.lis in the /u01/vista/vmfiles/ directory, and depending on the system and day, the *.lis files will not always be the same. Sometimes it is "DeleteOnline.lis" and CompressGens.lis but not ArchiveGens.lis. Then the next day will be CompressGens.lis and ArchiveGens.lis.
So I will need to get the *.lis filenames in the /u01/vista/vmfiles/ directory, and then move each one.
You need a loop, so that you can do one file at a time.
ls -1tr *.lis | while read File
do
    cp -p "$File" "../Migration_Logs/${File%.lis}.$(date '+%m%d%y%H%M%S').CompressGens.lis" &&
    mv "$File" "../Migration_Logs/${File%.lis}.$(date '+%m%d%y%H%M%S').DeleteOnline.lis"
done
${File%.lis} is the bash/ksh way of stripping that suffix - see the bash or ksh man page.
The "&&" idiom is there so that the file is only moved to the 2nd archived name if the copy to the 1st archived name succeeds.
@Abe Crabtree, thanks for the help in pointing me in the right direction. Below is the final code that worked.
ls -1tr *.lis | while read File
do
    mv "$File" "/u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA.$(date '+%m%d%y%H%M%S').${File%.lis}.lis"
done
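For what it's worth, the same thing can be done without parsing ls, which is safer if a filename ever contains spaces; a small sketch built on the same idea, using the absolute paths from the question:

for File in /u01/vista/vmfiles/*.lis
do
    base=${File##*/}
    mv "$File" "/u01/vista/Migration_Logs/LIS.BHM.P.MIGRATION_LOGS.FBA.$(date '+%m%d%y%H%M%S').${base%.lis}.lis"
done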
I'm trying to back up just one file that is generated by another application in dynamically named folders.
for example:
parent_folder/
back_01 -> file_blabla.zip (timestamp 2013.05.12)
back_02 -> file_blabla01.zip (timestamp 2013.05.14)
back_03 -> file_blabla02.zip (timestamp 2013.05.22)
and I need to get the latest generated zip, just that one; it doesn't matter what the file is called, as long as it is the latest zip inside "parent_folder".
Also, when I do the rsync, the folder structure plus the file name is recreated, and I want to avoid that: I want to back that file up into one folder and under one fixed name, so I always know where the latest one is and it is always named the same.
Right now I'm doing this with a Perl script that gets the latest generated folder with
"ls -tAF | grep '/$' | head -1"
and then performs the rsync, but it brings over the last zip with the folder structure that I don't want, because then it doesn't overwrite my latest zip file.
rsync -rvtW --prune-empty-dirs --delay-updates --no-implied-dirs --modify-window=1 --include='*.zip' --exclude='*.*' --progress /source/ /myBackup/
It would also be great if I could do the rsync without needing to use Perl or any other script.
Thanks.
The file names will differ each time?
That would be hard for any type of syncing to handle.
What you could do is:
create a new folder outside of where the file is found, then:
1. Before you start, remove the last symlinked file in that folder.
2. When the latest folder is found, i.e. ls -tAF | grep '/$' | head -1 ..., symlink the newest zip inside it into this folder.
3. Then rsync/ssh/unison the file across to the new node.
If the symlink name is file-latest.zip then it will always be this one file sent across.
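As a rough sketch of that symlink approach (the staging path, the parent_folder path and the remote host name below are placeholders, and the rsync flags are just one reasonable choice):

#!/bin/bash
staging=/path/to/staging              # folder outside parent_folder
mkdir -p "$staging"
rm -f "$staging/file-latest.zip"      # drop the previous symlink

cd /path/to/parent_folder || exit 1
latest_dir=$(ls -tAF | grep '/$' | head -1)        # newest sub-folder, as in the question
latest_zip=$(ls -t "$latest_dir"*.zip | head -1)   # newest zip inside it

ln -s "$PWD/$latest_zip" "$staging/file-latest.zip"
# -L copies the file the symlink points to, so the remote side always gets file-latest.zip
rsync -Lvt "$staging/file-latest.zip" remote_host:/myBackup/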
But why do all that when you can just scp? You can take a look here:
https://github.com/vahidhedayati/definedscp
for a more long-winded approach - not written for this exact situation, but it uses the real file date/time stamp and converts it to seconds, so it might be useful if you wish to do the stat in a different way.
Using stat to find the latest file and then simply scp it across, here is something to get you started:
One liner:
scp $(find /path/to/parent_folder -name \*.zip -exec stat -t {} \;|awk '{print $1" "$13}'|sort -k2nr|head -n1|awk '{print $1}') remote_server:/path/to/name.zip
A more long-winded way, maybe of use to understand what the above is doing:
#!/bin/bash
FOUND_ARRAY=()
cd parent_folder
for file in $(find . -name \*.zip); do
    # modification time in seconds since the epoch (field 13 of stat -t)
    ptime=$(stat -t "$file" | awk '{print $13}')
    FOUND_ARRAY+=("$file $ptime")
done
IFS=$'\n'
# sort by the time column, newest first, and keep just the file name
FOUND_FILE=$(echo "${FOUND_ARRAY[*]}" | sort -k2nr | head -n1 | awk '{print $1}')
scp "$FOUND_FILE" remote_host:/backup/new_name.zip
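If GNU find is available, the stat loop can be skipped entirely; a shorter sketch of the same idea (the remote path is again a placeholder):

newest=$(find /path/to/parent_folder -name '*.zip' -printf '%T@ %p\n' | sort -nr | head -n1 | cut -d' ' -f2-)
scp "$newest" remote_host:/backup/new_name.zip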
I have a few files waiting to be processed by a daily cron job:
file1
file2
file3
I want the job to take the first file and then rename the rest. file1 should be deleted. file2 should be renamed to file1, and file3 should be renamed to file2.
I'm looking for a solution that would work with any number of files.
Is there a simple way to do this with a script? Or, taking a step back, is there a standard Linux technique for handling a queue of files?
It looks like you are trying to implement a simple queueing mechanism for processing work on an arbitrary number of files, treating the filenames as queue positions (so that file1 is "head"). I think you're taking the queue metaphor a bit too literally into the filesystem space, however, as doing renames for all those files is extremely expensive in terms of filesystem operations and race-condition prone to boot (what if more files are added to the queue as you are renaming the previous ones?). What you should do instead is simply track the filenames to be operated on in a side file (e.g. don't traverse the filesystem looking for work, but traverse your "queue file") and lock that file whenever you're removing or adding an entry. A nice side-effect of that approach is that your filenames can then have any names you like, they don't have to be "file1, file2, ..."
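As a rough sketch of that queue-file idea (the queue and lock paths, and the process command, are made up for illustration), using flock to serialize access between the cron job and whatever adds work:

#!/bin/bash
queue=/var/spool/myjob/queue.txt      # one filename per line, head of the queue first
lock=/var/spool/myjob/queue.lock

# Pop the head of the queue under an exclusive lock
next=$(
    exec 9>"$lock"
    flock -x 9
    IFS= read -r first < "$queue" || exit 0                        # queue is empty
    tail -n +2 "$queue" > "$queue.tmp" && mv "$queue.tmp" "$queue"
    printf '%s\n' "$first"
)

[ -n "$next" ] && process "$next"     # 'process' stands in for the real daily job

# Producers append new work under the same lock:
#   ( exec 9>"$lock"; flock -x 9; printf '%s\n' "$newfile" >> "$queue" )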
You can use a simple bash script as follows. It first lists the files in "folder1" into data.txt, ordered by the timestamp at which they were created. Then the first file is removed. Finally, each remaining file is renamed to the name of the file that came before it.
#!/bin/bash
# List the files in folder1, oldest first
ls -tr folder1/ > data.txt
# Remove the first file in the queue
rm -f "folder1/$(head -n 1 data.txt)"
# Rename each remaining file to the name of the file before it
while IFS= read -r file
do
    if [ -f "folder1/$file" ]; then
        mv "folder1/$file" "folder1/$newFile"
    fi
    newFile="$file"
done < data.txt