I have a situation where I have to read a list of gzipped files (e.g. test.gz, test[2020]*.gz), gunzip them, and move the results to a different folder (temp). I am using the Linux bash shell.
So far I've done this:
for f in *.gz
do
gunzip "$f"
done
When I run the script, the files are successfully gunzipped as test.csv and test[2020].csv respectively.
After that I don't know how to copy the gunzipped files (the CSV files) to the "temp" folder.
Should I open another loop after this code, or can I gunzip and copy the files in a single loop?
I also want to pause for a few minutes between each copy to the "temp" folder.
Any help is much appreciated.
Thanks
Remove the .gz suffix from the variable and copy the file with that name.
for f in *.gz
do
gunzip "$f"
cp "${f%.gz}" temp
sleep 60 # sleep 1 minute
done
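If the original .gz files should stay where they are, a variant (an untested sketch) decompresses straight into temp with gunzip -c, so no separate copy step is needed:
mkdir -p temp
for f in *.gz
do
gunzip -c "$f" > temp/"${f%.gz}"   # -c writes decompressed data to stdout; the .gz stays put
sleep 60                           # pause 1 minute between files
done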
I think what you want is to move the decompressed files, not the *.gz files, into the temp dir. Is that right?
If you use find, that can work well:
for f in *.gz
do
gunzip "$f"
find . -maxdepth 1 -type f ! -name '*.gz' -exec mv {} temp \;
done
I'm new to bash scripting, and I'm finding it hard to solve this one.
I have a parent folder containing a mixture of sub directories and zipped sub directories.
Within those sub directories are also more nested zip files.
Not only are there .zip files, but also .rar and .7z files which also contain nested zips/rars/7zs.
I want to unzip, unrar and un7z all my nested subdirectories recursively until the parent folder no longer contains any .zip, .rar, or .7z files (these eventually need to be removed once they have been extracted). There could be thousands of subdirectories at different nesting depths, and the archives may contain either folders or single files.
However, I want to retain my folder structure, so the unzipped folders must stay in the same place where they were unzipped.
I have tried this script that works for unzipping, but it does not retain the file structure.
#!/bin/bash
while [ "`find . -type f -name '*.zip' | wc -l`" -gt 0 ]
do
find . -type f -name "*.zip" -exec unzip -- '{}' \; -exec rm -- '{}' \;
done
I want for example:
folder 'a' contains the zipped folder 'b.zip', which contains a zipped text file 'pear.zip' (pear.txt zipped into pear.zip), i.e. a/b.zip(/pear.zip).
I would like folder 'a' to contain 'b', which in turn contains pear.txt: 'a/b/pear.txt'.
The script above extracts 'b' (which ends up empty) and pear.txt both into folder 'a' where the script is executed, i.e. 'a/b' and 'a/pear.txt', which is not what I want.
You could try this:
#!/bin/bash
while :; do
    mapfile -td '' archives \
        < <(find . -type f \( -name '*.zip' -o -name '*.7z' \) -print0)
    [[ ${#archives[@]} -eq 0 ]] && break
    for i in "${archives[@]}"; do
        case $i in
            *.zip) unzip -d "$(dirname "$i")" -- "$i";;
            *.7z) 7z x "-o$(dirname "$i")" -- "$i";;
        esac
    done
    rm -rf "${archives[@]}" || break
done
Every archive is listed by find. Each archive in that list is extracted into its own directory and then the archives are removed. This repeats until zero archives are found.
You can add an equivalent unrar command (I'm not familiar with it).
Add -o -name '*.rar' to find, and another case to case. If there's no option to specify a target directory with unrar, you could use cd "$(dirname "$i")" && unrar "$i".
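For example (a sketch only; unrar builds vary, so check your unrar's syntax), the two changes could look like:
# extend the find expression:
find . -type f \( -name '*.zip' -o -name '*.7z' -o -name '*.rar' \) -print0
# and add a branch to the case statement; with a typical unrar,
# x extracts with paths, and a trailing-slash argument names the target directory:
*.rar) unrar x "$i" "$(dirname "$i")/";;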
There are some issues with this script. In particular, if extraction fails, the archive is still removed. Otherwise it would cause an infinite loop. You can use unzip ... || exit 1 to exit if extraction fails, and deal with that manually.
It's possible to both avoid removal and also an infinite loop, by counting files which aren't removed, but hopefully not necessary.
I couldn't test this properly. YMMV.
Let's say I have a bunch of *.tar.gz files located in a hierarchy of folders. What would be a good way to find those files, and then execute multiple commands on them?
I know if I just need to execute one command on the target file, I can use something like this:
$ find . -name "*.tar.gz" -exec tar xvzf {} \;
But what if I need to execute multiple commands on the target file? Must I write a bash script here, or is there any simpler way?
Samples of commands that need to be executed on an A.tar.gz file:
$ tar xvzf A.tar.gz # assume it untars to folder logs
$ mv logs logs_A
$ rm A.tar.gz
Here's what works for me (thanks to Etan Reisner's suggestions):
#!/bin/bash
# the target folder (to search for tar.gz files) is parsed from the command line
find "$1" -name "*.tar.gz" -print0 | while IFS= read -r -d '' file; do
    # each tar.gz file is read into the shell variable `file`
    echo "$file"                      # now we can do everything with `file`
    tar xvzf "$file"
    # mv untar_folder "$file".suffix  # untar_folder is the name of the folder after untarring
    rm "$file"
done
As suggested, the array way is unsafe if a file name contains spaces, and it also doesn't seem to work properly in this case.
Writing a shell script is probably easiest. Take a look at sh for loops. You could store the output of a find command in an array, and then loop over that array to perform a set of commands on each element.
For example,
arr=( $(find . -name "*.tar.gz") )   # caution: word-splitting breaks on names with spaces
for i in "${arr[@]}"; do
    # $i now holds each of the filenames output by find
    tar xvzf "$i"                           # assume it untars to folder "logs"
    mv logs "logs_$(basename "$i" .tar.gz)"
    rm "$i"
    # etc., etc.
done
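To make the array approach safe for arbitrary filenames, a null-delimited read (bash 4.4+) can replace the word-split command substitution; an untested sketch, with the same placeholder commands as above:
mapfile -d '' arr < <(find . -name "*.tar.gz" -print0)
for i in "${arr[@]}"; do
    tar xvzf "$i"
    # ...further commands on "$i" go here...
    rm "$i"
done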
I'm new to Unix scripting, so sorry if this question sounds stupid. I have a script that copies files from Landingzone to an Archive directory. Now I want to write a script that checks for a test.txt file (which acts as a trigger file), and only if it is found copies all files that arrived before test.txt from Landingzone to Archive. Please let me know how to do this. I mention a script because I have a couple more commands apart from the copying.
This should work
ARCHIVE=... # archive directory
cd Landingzone
if [ -f test.txt ]; then
find . -maxdepth 1 -type f ! -name test.txt ! -newer test.txt -exec cp {} "$ARCHIVE" ';'
fi
explanation:
thanks to if [ -f ... ]; everything is performed only if test.txt exists. Then we call find to
search for normal files only (-type f)
exclude subdirectories (-maxdepth 1)
exclude test.txt itself (! -name test.txt)
exclude files that are newer than test.txt (! -newer test.txt)
copy all files found to the archive directory (-exec ...)
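Since the question mentions running a couple more commands besides the copy, here is a sketch that feeds the same find expression into a loop, so each matching file can receive several commands (ARCHIVE and the extra commands are placeholders):
cd Landingzone
if [ -f test.txt ]; then
    find . -maxdepth 1 -type f ! -name test.txt ! -newer test.txt -print0 |
    while IFS= read -r -d '' f; do
        cp "$f" "$ARCHIVE"
        # ...any further commands on "$f" go here...
    done
fi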
I have a directory. It has about 500K .gz files.
How can I extract all .gz in that directory and delete the .gz files?
This should do it:
gunzip *.gz
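One caveat: with roughly 500K files the glob may exceed the kernel's argument-length limit ("Argument list too long"). A sketch that sidesteps this by letting find batch the arguments itself:
find . -maxdepth 1 -name '*.gz' -exec gunzip {} +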
@techedemic is correct but is missing '.' to name the current directory, and this command goes through all subdirectories:
find . -name '*.gz' -exec gunzip '{}' \;
There's more than one way to do this obviously.
# This will find files recursively (you can limit it by using some 'find' parameters;
# see the man pages)
# The final backslash is required for the exec example to work
find . -name '*.gz' -exec gunzip '{}' \;

# This will do it only in the current directory
for a in *.gz; do gunzip "$a"; done
I'm sure there's other ways as well, but this is probably the simplest.
And to remove them, just do an rm -f *.gz in the applicable directory (gunzip itself already deletes each .gz after a successful decompression).
Extract all gz files in current directory and its subdirectories:
find . -name "*.gz" -print0 | xargs -0 gunzip
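With this many files, decompression can also be parallelized; a sketch using xargs -P (GNU xargs) to run four gunzip processes at a time:
find . -name '*.gz' -print0 | xargs -0 -P 4 gunzip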
If you want to extract a single file use:
gunzip file.gz
It will extract the file and remove the .gz file.
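If you want to keep the compressed original as well, newer versions of GNU gzip (1.6 and later) support -k:
gunzip -k file.gz   # extracts the file but keeps file.gz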
# note: this assumes each .gz is actually a compressed tar archive; tar xf auto-detects the compression
for foo in *.gz
do
tar xf "$foo"
rm "$foo"
done
Try:
ls -1 | grep -E "\.tar\.gz$" | xargs -n 1 tar xvfz
Then Try:
ls -1 | grep -E "\.tar\.gz$" | xargs -n 1 rm
This will untar all .tar.gz files in the current directory and then delete all the .tar.gz files. If you want an explanation, the "|" takes the stdout of the command before it and uses it as the stdin of the command after it. Use "man command" (without the quotes) to figure out what those commands and arguments do, or research online.
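Note that parsing ls output breaks on unusual filenames. A find-based sketch does both steps in one pass, and because find only evaluates the next -exec clause when the previous one succeeded, rm runs only if tar exited cleanly:
find . -maxdepth 1 -name '*.tar.gz' -exec tar xvzf {} \; -exec rm {} \;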
I have made a mistake with a shell script and I have backup files that I want to restore.
The code I have to restore my files (which works perfectly) is:
for f in *.html~; do mv "$f" "${f%\~}"; done
(The backup files end in .html~).
How do I do this recursively through folders?
Thanks in advance for your help.
You could alternatively use rsync
rsync -a /path/to/backup /path/to/restored/folder
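Mind rsync's trailing-slash rule: a slash on the source copies the directory's contents rather than the directory itself. For example:
rsync -a /path/to/backup/ /path/to/restored/folder   # contents of backup/ land inside folder/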
find . -type f -name "*.html~" |
while IFS= read -r f; do
    mv "$f" "${f%\~}"
done
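If you are on bash 4 or later, a globstar loop avoids find entirely and handles any filename; a small sketch:
shopt -s globstar              # let ** match files at any depth (bash 4+)
for f in **/*.html~; do
    mv "$f" "${f%\~}"
done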