Gunzip multiple files in folder and continue on error - linux

I am unzipping multiple files in a folder like this:
gunzip -f -k *.gz
Some of the .gz files are broken, which causes the command to abort.
What is a nice way of unzipping all files while ignoring the broken ones?

The original answer gives an error because the shell tries to run the gzipped filenames as a command: the backticks wrapped around *.gz are command substitution. If you remove them, it works.
I cannot edit the original answer, because SO requires an edit to change at least 6 characters, so here's a new one.
for file in *.gz
do
    gunzip -f -k "$file"
done

Use this loop as suggested by @Gene:
for file in *.gz
do
    gunzip -f -k "$file"
done
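If you also want to know which archives were broken, gzip's -t flag tests integrity without extracting. A small sketch along the same lines (assuming GNU gzip; the log message is just illustrative):
for file in *.gz
do
    if gzip -t "$file" 2>/dev/null; then
        gunzip -f -k "$file"
    else
        # report the corrupt archive and move on
        echo "skipping broken archive: $file" >&2
    fi
done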

Related

Get wget to download only new items from a list

I've got a file that contains a list of file paths. I’m downloading them like this with wget:
wget -i cram_download_list.txt
However the list is long and my session gets interrupted. I’d like to look at the directory for which files already exist, and only download the outstanding ones.
I’ve been trying to come up with an option involving comm, but can’t work out how to loop it in with wget.
File contents look like this:
ftp://ftp.sra.ebi.ac.uk/vol1/run/ERR323/ERR3239280/NA07037.final.cram
ftp://ftp.sra.ebi.ac.uk/vol1/run/ERR323/ERR3239286/NA11829.final.cram
ftp://ftp.sra.ebi.ac.uk/vol1/run/ERR323/ERR3239293/NA11918.final.cram
ftp://ftp.sra.ebi.ac.uk/vol1/run/ERR323/ERR3239298/NA11994.final.cram
I’m currently trying to do something like this:
ls *.cram | sed 's/^/ftp:\/\/ftp.sra.ebi.ac.uk\/vol1\/run\/ERR323\/ERR3239480\//' > downloaded.txt
comm -3 <(sort cram_download_list.txt) <(sort downloaded.txt) | tr -d " \t" > to_download.txt
wget -i to_download.txt
I’d like to look at the directory for which files already exist, and only download the outstanding ones.
To get such behavior you can use the -nc (alias --no-clobber) flag, which skips downloads whose target files already exist rather than overwriting them. So in your case:
wget -nc -i cram_download_list.txt
Beware that this solution does not handle partially downloaded files.
wget -c -i <(find -type f -name '*.cram' -printf '%f$\n' |\
grep -vf - cram_download_list.txt )
This finds files ending in .cram and prints each basename followed by a $ and a newline. That output is used as an inverted regex match list for your download list, i.e. it removes any lines ending in one of the existing file names.
Added:
-c for finalizing incomplete files (i.e. resuming downloads)
Note: does not handle spaces or newlines in file names well, but these are ftp-URLs so that should not be a problem in the first place.
If you also want to handle partially transferred files, you always need to pass in the complete set of filenames so that wget is able to check their lengths. Which means that for this scenario the only way is:
wget -c -i cram_download_list.txt
Files which are already complete will only be checked and skipped.
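For completeness, the comm idea from the question can also be made to work by comparing basenames instead of rebuilding full URLs. A sketch, reusing the file names from the question (untested against a real mirror):
# basenames present locally vs. basenames wanted by the list
comm -13 <(ls *.cram | sort) \
         <(sed 's|.*/||' cram_download_list.txt | sort) |
    grep -Ff - cram_download_list.txt > to_download.txt
wget -c -i to_download.txt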

Sort files according to their filetype

After an HD problem and some work, I have a bunch of files with names like "f1234", "f1235", etc.
My goal is to sort these files according to their filetype. For example, I want to move all the PDF files into the "pdfs" directory.
For one file, I can do "file f1234", and if it's a PDF, I can "mv f1234 pdfs/". But I have thousands of files... Can you help me with a bash or zsh command to sort all the PDFs in one pass? Thanks
The hard part here is reliably turning the output of file into a directory name. I think probably the best candidate for that is the mime-type of the file rather than the human readable output of file. I'd use something like:
mkdir -p sorted
for f in f*
do
    # e.g. application/pdf becomes application-pdf, a safe directory name
    d=$(file -b --mime-type "$f" | tr / -)
    mkdir -p "sorted/$d"
    mv "$f" "sorted/$d/"
done
Obviously I'd test that out a bit before running it on your files, but something pretty close to that should work.
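If you only care about PDFs, as in the original question, a minimal variant of the same idea (again, test on a copy of your files first):
mkdir -p pdfs
for f in f*
do
    # move a file only when file(1) identifies it as a PDF
    [ "$(file -b --mime-type "$f")" = "application/pdf" ] && mv "$f" pdfs/
done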

Loop in bash script

I have a directory containing gzipped data files. I want to run each file through the script est_abundance.py, but first I need to unzip them. So I have this bash script:
for file in /home/doy.user/scratch1/Secoutput/; do
cd "$file"
gunzip *kren.gz
python analysis1.py -i /Secoutput/*kren -k gkd_output -o /bracken_output/$(basename *kren).txt
wait
done
The problem is, the bash script keeps unzipping all of the data files; it does not continue to the next command after unzipping one file.
Can you help me correct this? I just want every command to be done for every file.
Use the loop below. Notice that you should use the $file variable, and you can get the name of the file after unzipping by stripping the .gz suffix with ${file%.gz}:
for file in /home/doy.user/scratch1/Secoutput/*kren.gz; do
    gunzip "$file"
    python analysis1.py -i "${file%.gz}" -k gkd_output -o "/bracken_output/$(basename "${file%.gz}").txt"
done
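As an aside, ${file%.gz} is plain parameter expansion, stripping the shortest trailing match of .gz. A quick demonstration with a made-up path:
file=/home/doy.user/scratch1/Secoutput/sample.kren.gz
echo "${file%.gz}"          # /home/doy.user/scratch1/Secoutput/sample.kren
basename "${file%.gz}"      # sample.kren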

Bash loop to gunzip file and remove file extension and file prefixes

I have several .vcf.gz files:
subset_file1.vcf.vcf.gz
subset_file2.vcf.vcf.gz
subset_file3.vcf.vcf.gz
I want to gunzip these files and rename them (removing subset_ and the redundant .vcf extension) in one go, to get these files:
file1.vcf
file2.vcf
file3.vcf
This is the script I have tried:
iFILES=/file/path/*.gz
for i in $iFILES;
do gunzip -k $i > /get/in/this/dir/"${i##*/}"
done
Since you have to do three operations on your output path name:
1. remove the directory part
2. remove the prefix subset_
3. remove the redundant extension .vcf
it's hard to accomplish with only one command.
Following is a modified version. Be CAREFUL when trying it; I didn't test it thoroughly on my machine.
for i in /file/path/*.gz
do
    # build the output name: strip the directory, the subset_ prefix, and one
    # of the two repeated three-letter extensions: subset_file1.vcf.vcf.gz -> file1.vcf
    o=$(echo "${i##*/}" | sed 's/.*_\(.*\)\(\.[a-z]\{3\}\)\{2\}.*/\1\2/g')
    # -c writes the decompressed data to stdout and leaves the .gz file intact
    gunzip -c "$i" > /get/in/this/dir/"$o"
done
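Since the three steps are all simple prefix/suffix removals, they can also be done with parameter expansion alone, avoiding sed. A sketch under the same untested caveat:
for i in /file/path/*.gz
do
    b=${i##*/}         # strip the directory part
    b=${b#subset_}     # strip the subset_ prefix
    o=${b%.vcf.gz}     # strip the redundant .vcf plus .gz -> file1.vcf
    gunzip -c "$i" > /get/in/this/dir/"$o"
done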

How to check for an exploding zip file in bash?

I have a bash shell script that unzips a zip file and manipulates the resulting files. Because of the process, I expect all the content I am interested in to be within a single folder, like so:
file.zip
/file
/contentFolder1
/contentFolder2
stuff1.txt
stuff2.txt
...
I've noticed users on Windows typically don't create a subfolder but instead submit an exploding zip file that looks like:
file.zip
/contentFolder1
/contentFolder2
stuff1.txt
stuff2.txt
...
How can I detect these exploding zips, so that I may handle them accordingly? Is it possible without unzipping the file first?
If you want to check, unzip -l will print the contents of the zip file without extracting them. You'll have to massage the output a bit, though, since it's printing all sorts of additional crud.
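For example, unzip -Z1 (zipinfo mode) prints one member name per line, which makes counting top-level entries straightforward. A sketch, assuming the archive is named file.zip:
# count the distinct first path components inside the archive
top=$(unzip -Z1 file.zip | cut -d/ -f1 | sort -u | wc -l)
if [ "$top" -eq 1 ]; then
    echo "single wrapper folder"
else
    echo "exploding zip"
fi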
Unzip to a directory first, and then remove the extra layer if the zip is not a bomb.
tempdir=$(mktemp -d)
unzip -d "$tempdir" file.zip
if [ "$(ls "$tempdir" | wc -l)" -eq 1 ]; then
    # a single top-level entry: hoist it out and drop the wrapper
    mv "$tempdir"/* .
    rmdir "$tempdir"
else
    # exploded zip: keep everything under one folder
    mv "$tempdir" file
fi
I wouldn't try to detect it. I'd just force unzip to do what I want. With InfoZip:
$ unzip -j -d unzip-output-dir FileFromUntrustedSource.zip
-j makes it ignore any directory structure within the file, and -d tells it to put files in a particular directory, creating it if necessary.
If there are two files with the same name but in different subdirectories, the above command will make unzip ask if you want to overwrite the first with the second. You can add -o to force it to overwrite without asking, or -f to only overwrite if the second file is newer.
