Restore from backup files recursively in shell (Linux)

I have made a mistake with a shell script and I have backup files that I want to restore.
The code I have to restore my files (which works perfectly) is:
for f in *.html~; do mv $f ${f%\~}; done
(The backup files end in .html~).
How do I do this recursively through folders?
Thanks in advance for your help.

You could alternatively use rsync
rsync -a /path/to/backup /path/to/restored/folder
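Note that rsync treats a trailing slash on the source specially: /path/to/backup copies the backup directory itself into the destination, while /path/to/backup/ copies only its contents.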

find . -type f -name "*.html~" |
while IFS= read -r f; do
mv "$f" "${f%\~}"
done
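If filenames might contain newlines (which would break the read loop above), a sketch that avoids the pipe entirely, using find's -exec with an inline shell (the trailing _ {} + passes the found files to the inline shell as positional parameters):
find . -type f -name '*.html~' -exec sh -c '
for f; do
mv "$f" "${f%\~}"
done' _ {} +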

Related

Bash script to sort files into sub folders based on extension

I have the following structure:
FolderA
    Sub1
    Sub2
    filexx.csv
    filexx.doc
FolderB
    Sub1
    Sub2
    fileyy.csv
    fileyy.doc
I want to write a script that will move the .csv files into the folder Sub1 (and, similarly, the .doc files into Sub2) for each parent directory (FolderA, FolderB, and so on), giving me the following structure:
FolderA
    Sub1
        filexx.csv
    Sub2
        filexx.doc
FolderB
    Sub1
        fileyy.csv
    Sub2
        fileyy.doc
This is what I have so far, but I get the error mv: cannot stat *.csv: No such file or directory:
for f in */*/*.csv; do
mv -v "$f" */*/Sub1;
done
for f in */*/*.doc; do
mv -v "$f" */*/Sub2;
done
I am new to bash scripting so please forgive me if I have made a very obvious mistake. I know I can do this in Python as well, but it would be lengthier, which is why I would like a solution using Linux commands.
find . -name "*.csv" -type f -execdir mv '{}' Sub1/ \;
Using find, search for all files with the .csv extension; when one is found, -execdir runs the move from within the directory containing the file, moving it into Sub1.
find . -name "*.doc" -type f -execdir mv '{}' Sub2/ \;
Follow the same principle for files with the .doc extension, but this time move them to Sub2.
I believe you are getting this error because no file matched your wildcard. When that happens, the for loop gives $f the value of the wildcard itself, so you are effectively trying to move a file literally named *.csv, which does not exist. (Note also that */*/Sub1 as a destination is itself a wildcard and would expand to every matching Sub1 directory.)
To prevent this behavior, you can add shopt -s nullglob at the top of your script. With it, if no file matches, the loop is simply skipped.
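A quick way to see the difference, in a directory with no csv files:
shopt -s nullglob
for f in *.csv; do echo "found: $f"; done # prints nothing; without nullglob it prints: found: *.csv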
My advice is: make sure you run your script from the correct location when using wildcards like this. But maybe what you meant by writing */*/*.csv was to recursively match all the csv files; if that's what you intended, this is not the right way to do it.
To recursively match all csv/doc/etc. files using native bash, you can add shopt -s globstar to the top of your script and use **/*.csv as the wildcard:
#!/bin/bash
shopt -s globstar nullglob
for f in **/*.csv; do
mv "$f" Destination/ # Note that $f is surrounded by "" to handle whitespaces in filenames
done
You could also use the find(1) utility to achieve this. But if you're planning to do more processing on the files than just moving them, a for loop may be cleaner, as you won't have to inline everything in a single command.
Side note: "Linux commands", as you say, are actually not Linux commands; they are part of the GNU utilities (https://www.gnu.org/gnu/linux-and-gnu.en.html).
If the csv files you want to move are in the top-level directories (as seen from the current directory), but not in their subdirectories, then simply:
#!/bin/bash
for dir in */; do
mv -v "$dir"*.csv "${dir}Sub1/"
mv -v "$dir"*.doc "${dir}Sub2/"
done
If the files in all subdirectories should be moved similarly, then:
shopt -s globstar
for file in **/*.csv; do
mv -v "$file" "${file%/*}/Sub1/"
done
for file in **/*.doc; do
mv -v "$file" "${file%/*}/Sub2/"
done
Note that the directories Sub1 and Sub2 are relative to the directory where each csv or doc file resides.
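One caveat with the globstar loops above: ** can also match zero directories, so a csv file sitting directly in the current directory would leave ${file%/*} unchanged (there is no slash to strip) and the mv would fail. If that can happen in your layout, a pattern like */**/*.csv (an assumption worth testing on your bash version) restricts matches to files at least one directory deep.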

How to gunzip and copy files in a loop

I have a situation where I have to read a list of gzipped files (for example test.gz, test[2020]*.gz), gunzip them, and move them to a different folder (temp). I am using the Linux bash shell.
So far I've done this:
for f in *.gz
do
gunzip $f
done
When I run the script, the files are successfully gunzipped as test.csv and test[2020].csv respectively.
After that I don't know how to copy the gunzipped files (the csv files) to the "temp" folder.
Should I open another loop after this code? Or can I gunzip and copy the files in a single loop?
I also want to pause for a few minutes between each copy to the "temp" folder.
Any help is much appreciated.
Thanks
Remove the .gz suffix from the variable and copy the file with that name.
for f in *.gz
do
gunzip "$f"
cp "${f%.gz}" temp
sleep 60 # sleep 1 minute
done
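If you would rather keep the original .gz files untouched, gunzip -c (decompress to standard output) can write the result straight into temp in a single step; a sketch under that assumption:
for f in *.gz
do
gunzip -c "$f" > "temp/${f%.gz}" # leaves $f in place
sleep 60
done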
I think what you want is not to copy the *.gz files themselves into the temp dir. Is that right?
Using find, that can work well:
for f in *.gz
do
gunzip "$f"
done
# then move every remaining non-.gz regular file in the current directory to temp
find . -maxdepth 1 -type f ! -name "*.gz" -exec mv {} temp/ \;

Using find to rename files recursively with random chars

I have an IP camera that takes snapshots and nests those snapshots into multiple directories. The subdirectories look something like this.
/cam_folder
|--Date
|----Hour
|------Minute
|-------->file1
|-------->file2...etc
|------Minute
|-------->file1...etc
There are a ton of subdirectories because of the way it stores files, since it places those snapshots within a Minute directory inside the Date/Hour hierarchy.
At any rate, there are other types of files mixed in, but I know how to use find to find all the .jpgs I need:
find /cam_folder/ -type f -name '*.jpg'
But what I need to do is rename all the .jpg files to random characters. I was able to find this, which works on a single directory in a bash script:
for file in *.jpg; do
new_file="$(mktemp XXXXXXXX.jpg)"
mv -f -- "$file" "$new_file"
done
My problem is how to tie these together? I need to use find to feed these into a bash script I guess?
Is there an easier way to just walk a directory recursively renaming as I go?
find /cam_folder/ -type f -name '*.jpg' -exec sh -c '
for f; do
mv -f -- "$f" "${f%/*}/$(mktemp -u XXXXXXXX.jpg)"
done' _ {} +
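Two notes on the answer above: the terminating + packs many found files into each invocation of the inline shell (cheaper than one sh per file), and mktemp -u only generates a name without creating the file, so there is a small race window if something else writes files to the same directory at the same time.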
References: find(1); Shell Command Language § The for Loop; Shell Command Language § Parameter Expansions.
I think this meets your needs:
while IFS= read -r file ; do
new_file="$(mktemp "${file%/*}/XXXXXXXX.jpg")" # create the random name in the file's own directory, not in the current one
mv -f -- "$file" "$new_file"
done < <(find /cam_folder/ -type f -name '*.jpg')

Bash script to move files yy/mm/dd

Wondering if anyone can help with a bash script for the following.
I have a folder blah/ which includes *.txt files that get updated daily.
I need to move the txt files daily into an /archive/yy/mm/dd folder structure.
Use the following script:
d=/archive/$(date +%Y/%m/%d) # use %y instead of %Y if you really want two-digit years
mkdir -p "$d"
find ./blah -type f -name '*.txt' -exec mv {} "$d" \;
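Since the move has to happen daily, one option (assuming a hypothetical script path) is to run it from cron:
0 1 * * * /path/to/archive-txt.sh # every day at 01:00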

Backup files with dir structure bash script

I'm making a bash script that should back up all files and the directory structure to another dir.
I made the following code to do that:
find . -type f -exec cp {} $HOME/$bdir \; -o -type d -exec mkdir -p {} $HOME/$bdir \; ;
The problem is that this only copies the files and not the directory structure.
NOTE: I may not use cp -r, cp -R or something like it because this code is part of an assignment.
I hope somebody can put me in the right direction. ;)
Joeri
EDIT:
I changed it to:
find . -type d -exec mkdir -p $HOME/$bdir/{} \; ;
find . -type f -exec cp {} $HOME/$bdir/{} \; ;
And it works! Ty guys ;)
This sounds like a job for rsync.
You mention that this is an assignment. What are your restrictions? Are you limited to only using find? Does it have to be a single command?
One way to do this is to do it in two find calls. The first call only looks for directories. When a directory is found, mkdir the corresponding directory in the destination hierarchy. The second find call would look for files, and would use a cp command like you currently have.
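A sketch of that two-pass approach (embedding {} inside a longer argument, as here, is a GNU find extension; $bdir is assumed to be set as in the question):
find . -type d -exec mkdir -p "$HOME/$bdir/{}" \;
find . -type f -exec cp {} "$HOME/$bdir/{}" \;
This is essentially what the edit in the question arrived at.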
You can also take each filename, transform the path manually, and use that with the cp command. Here's an example of how to generate the destination filename:
> find . -type f | sed -e "s|^\./|/new/dir/|"
/new/dir/file1.txt
/new/dir/file2.txt
/new/dir/dir1/file1_1.txt
/new/dir/dir1/file1_2.txt
For your purposes, you could write a short bash script that takes the source file as input, uses sed to generate the destination filename, and then passes those two paths to cp. The dirname command returns the directory portion of a filename, so mkdir -p "$(dirname "$destination_path")" will ensure that the destination directory exists before you call cp. Armed with a script like that, you can simply have find execute the script for every file it finds, as sketched below.
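A sketch of such a helper, assuming the /new/dir destination from the example above and a hypothetical script name copy-one.sh:
#!/bin/bash
# copy-one.sh: copy one source file into the mirrored /new/dir tree
src="$1"
dest="$(printf '%s\n' "$src" | sed -e "s|^\./|/new/dir/|")"
mkdir -p "$(dirname "$dest")" # make sure the destination directory exists
cp "$src" "$dest"
Then have find run it for every file:
find . -type f -exec ./copy-one.sh {} \;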
cd olddir; tar c . | (cd newdir; tar xp)
(The first tar archives the current directory to stdout; the second extracts it in newdir, recreating the directory structure, with p preserving permissions.)
Can you do your find with -type d and exec a mkdir -p first, followed by your find that copies the files, rather than having it all in one command? It should probably also be mkdir -p $HOME/$bdir/{}.
