I have a lot of folders with a zip file in each. Most of the zip files in the folders have been opened already. I just want to unzip those which have not been opened, which I know all have the same date.
I'm trying to use the following, but I'm just getting hit back with unzip's usage message. The first part finds all the files I need, but piping the results to unzip, as I have done, isn't enough.
find *2019-01-05* | unzip
You can try using xargs to take the results of find and unzip them:
find *2019-01-05* | xargs unzip
That's:
find -type f -name \*2019-01-05\*.zip -exec unzip {} +
-type f for good measure, in case there are similarly named directories.
I have a /folder with over a half million files created in the last 10 years. I'm restructuring the process so that in the future there are subfolders based on the year.
For now, I need to backup all files modified within the last year. I tried
zip -r /backup.zip $(find /folder -type f -mtime -365)
but get the error: Argument list too long.
Is there any alternative to get the files compressed and archived?
Zip has an option to read the file list from stdin. Below is from the zip man page:
-@ file lists. If a file list is specified as -@ [Not on MacOS],
zip takes the list of input files from standard input instead of
from the command line. For example,
zip -@ foo
will store the files listed one per line on stdin in foo.zip.
This should do what you need
find /folder -type f -mtime -365 | zip -@ /backup.zip
Note that I've removed the -r option because it isn't doing anything here: you are already selecting regular files explicitly with the find command (-type f).
You'll have to switch from passing all the files at once to piping the files one at a time to the zip command.
find /folder -type f -mtime -365 | while IFS= read -r FILE; do zip -r /backup.zip "$FILE"; done
You can also work with the -exec parameter in find, like this:
find /folder -type f -mtime -365 -exec zip -r /backup.zip {} \;
(or whatever your command is). For every file, the given command is executed with the file passed as a last parameter.
Find the files and then execute the zip command on as many files as possible using + as opposed to ;
find /folder -type f -mtime -365 -exec zip -r /backup.zip '{}' +
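The difference between \; and + can be seen with a harmless command in a scratch directory (an illustrative sketch; echo stands in for zip):

```shell
# -exec cmd {} \; runs the command once per file,
# -exec cmd {} + batches many files into one invocation.
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b" "$tmp/c"

# One echo per file -> three output lines
find "$tmp" -type f -exec echo {} \;

# All files passed to a single echo -> one output line
find "$tmp" -type f -exec echo {} +

rm -r "$tmp"
```

With zip, the + form matters because each separate invocation has to reopen and update the archive, while the batched form adds many files per call.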
I am writing a house-keeping script and have files within a directory that I want to clean up.
I want to move files from a source directory to another. There are many sub-directories, so there could be files that are the same. What I want to do is either use the cmp command or md5sum on each file; if there are no duplicates, move them, and if some are the same, only move one copy.
So I have the move part working correctly as follows:
find /path/to/source -name "IMAGE_*.JPG" -exec mv '{}' /path/to/destination \;
I am assuming that I will have to loop through my directory, so I am thinking:
for files in /path/to/source
do
if -name "IMAGE_*.JPG"
then
md5sum (or cmp) $files
...stuck here (I am worried about how this method will be able to compare all the files against each other and how I would filter them out)...
then just do the mv to finish.
Thanks in advance.
find . -type f -exec md5sum {} \; | sort | uniq -w32 -d
That'll spit out the md5 hashes that have duplicates (the -w32 makes uniq compare only the 32-character digest, not the filename that follows it). Then it's just a matter of figuring out which file(s) produced those duplicate hashes.
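To list the offending files rather than just the hashes, GNU uniq can compare only the digest column and print every member of each duplicate group (a sketch; assumes GNU coreutils):

```shell
# Scratch demo: two files with identical content, one unique
tmp=$(mktemp -d)
printf 'same\n'      > "$tmp/a.txt"
printf 'same\n'      > "$tmp/b.txt"
printf 'different\n' > "$tmp/c.txt"

# -w32 compares only the md5 digest; -D prints all lines in each
# duplicate group, so you see the filenames, not just the hashes
find "$tmp" -type f -exec md5sum {} \; | sort | uniq -w32 -D

rm -r "$tmp"
```

From there you could keep the first file of each group and mv only that one.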
There's a tool designed for this purpose: fdupes.
fdupes -r dir/
dupmerge is another such tool...
I tried to search for files and zip them with the following command
find . regexpression -exec zip {} \;
However it is not working. How can I do this?
The command you use will run zip on each file separately, try this:
find . -name <name> -print | zip newZipFile.zip -@
The -@ tells zip to read file names from standard input. From man zip(1),
-@ file lists. If a file list is specified as -@ [Not on MacOS], zip takes the list of input files from standard input instead of from the command line.
Your response is close, but this might work better:
find -regex 'regex' -exec zip filename.zip {} +
That will put all the matching files in one zip file called filename.zip. You don't have to worry about special characters in the filename (like a line break), which you would if you piped the results.
You can also provide the names as the result of your find command:
zip name.zip `find . -name <name> -print`
This is a feature of the shell you are using. You can search for "backticks" to determine how your shell handles this.
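One caveat worth noting with backticks: command substitution word-splits on whitespace, so a filename with a space arrives as two separate arguments. A small demonstration, with -print0 | xargs -0 as the safe alternative:

```shell
tmp=$(mktemp -d)
touch "$tmp/has space.txt"

# Unquoted command substitution splits the name into two words
set -- `find "$tmp" -name '*.txt'`
echo "$#"    # 2 -- "has" and "space.txt" became separate arguments

# NUL-delimited handoff keeps the name intact (one line of output)
find "$tmp" -name '*.txt' -print0 | xargs -0 -n1 echo

rm -r "$tmp"
```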
I have a directory "mapnik" with hundreds of sub-directories, each containing more than 10000 files. I would like to zip "mapnik" recursively, preserving the folder-structure but only adding files greater than 103 Byte to the archive.
How can I accomplish this? I tried using find and pipes, but with the wrong syntax and the huge number of files, "trial and error" is not the best way to get it done ;)
Thanks for your help guys!
How about
find -size +103c -print0 | xargs -0 zip -r outname.zip
Delan's suggestion produced some kind of zip error with files of the same name. But it got me on the right track. This is what worked for me:
cd mapnik
find . -size +103c -print | zip archive.zip -@
I have got an archive of many fonts, but I have trouble extracting them all into one folder. I tried to write a long script for 3 hours now; it somehow breaks on a path issue. I tried piping, like find . -name *.zip | unzip -d ~/fonts, but it doesn't work. I changed so much in the script I wrote that it is not really presentable :(.
Each font file is supposedly (I didn't check all, there are really many) inside a rar archive, which together with a readme is in a zip archive, which together with another readme is in its own folder. Can this be done in one line?
Try changing the one line like this
find . -name "*.zip" | xargs unzip -d ~/fonts
Try this
find . -name "*.zip" -exec unzip -d ~/fonts {} \;