How to zip VERY LONG list of files in environment variable: *** buffer overflow detected ***: zip terminated - linux

I am copying files over from one machine to another. I am only interested in files that are more than N days old, so I have used find to create a list of filenames as follows:
DAYS_OLD=7
FILEs=`find /some/path -mtime -$DAYS_OLD | xargs`
Now I want to zip the files into one archive:
ZIPFILE='myfiles.zip'
I run the following command:
zip -r $ZIPFILE "${FILEs}"
I get the following error:
*** buffer overflow detected ***: zip terminated
How can I zip the files (in the ${FILEs} environment variable) into a zip archive?

One way:
find /some/path -mtime -$DAYS_OLD | xargs zip -r $SOMEDIR/$ZIPFILE
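If any of the file names contain spaces, plain xargs will split them apart; a null-delimited sketch (assuming GNU find and xargs, with -type f added so only regular files are listed) avoids that, and because zip adds to an existing archive it stays safe even when xargs has to split the list across several zip invocations:
DAYS_OLD=7
ZIPFILE=myfiles.zip
find /some/path -type f -mtime "-$DAYS_OLD" -print0 | xargs -0 zip "$ZIPFILE"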

Use find /some/path -mtime "-$DAYS_OLD" -exec zip -r "$ZIPFILE" {} + to work with any valid file name, including those with newlines in their names.

Try with
zip $ZIPFILE -i@<(find /some/path -mtime -$DAYS_OLD)
the -i@file form tells zip to read the list of files to include from a file. The file is simulated on the fly by process substitution with the output of the find command.
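Note that <(...) is process substitution, a bash/zsh feature. If your shell lacks it, zip's -@ option, which the answers below also use, reads the file list directly from standard input:
find /some/path -mtime -$DAYS_OLD | zip "$ZIPFILE" -@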

Related

Create ZIP of hundred thousand files based on date newer than one year on Linux

I have a /folder with over a half million files created in the last 10 years. I'm restructuring the process so that in the future there are subfolders based on the year.
For now, I need to backup all files modified within the last year. I tried
zip -r /backup.zip $(find /folder -type f -mtime -365)
but I get the error: Argument list too long.
Is there any alternative to get the files compressed and archived?
Zip has an option to read the filelist from stdin. Below is from the zip man page
-@ file lists. If a file list is specified as -@ [Not on MacOS],
zip takes the list of input files from standard input instead of
from the command line. For example,
zip -@ foo
will store the files listed one per line on stdin in foo.zip.
This should do what you need
find /folder -type f -mtime -365 | zip -@ /backup.zip
Note that I've removed the -r option because it isn't doing anything: you are explicitly selecting regular files with the find command (-type f).
You'll have to switch from passing all the files at once to piping the files one at a time to the zip command.
find /folder -type f -mtime -365 | while read -r FILE; do zip /backup.zip "$FILE"; done
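A more robust variant of the same loop (a sketch assuming GNU find and bash): NUL-delimited output keeps names with spaces or even newlines intact, and IFS= with read -r stops leading whitespace and backslashes being mangled:
find /folder -type f -mtime -365 -print0 | while IFS= read -r -d '' FILE; do zip /backup.zip "$FILE"; done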
You can also work with the -exec parameter in find, like this:
find /folder -type f -mtime -365 -exec zip -r /backup.zip {} \;
(or whatever your command is). For every file, the given command is executed with the file passed as a last parameter.
Find the files and then execute the zip command on as many files as possible using + as opposed to ;
find /folder -type f -mtime -365 -exec zip -r /backup.zip '{}' +
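Because zip updates an archive that already exists, it doesn't matter if find has to invoke zip more than once for a very long file list; every batch lands in the same /backup.zip. A quick way to sanity-check the result:
unzip -l /backup.zip | tail -n 3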

How to zip a directory in linux excluding a single file?

I would like to zip a directory and I am able to do so with
zip -r zip_file_name directory
however, I would like exclude a single file in the directory from the zip file. How would I go about doing this?
Enter the directory which you want to zip. Then:
find . -not -name "file_to_exclude" | zip zip_file_name -@
The command above will create zip_file_name.zip in the directory itself.
To create the zip at a particular path, again enter the directory you want to zip, then:
find . -not -name "file_to_exclude" | zip ~/ParticularPath/zip_file_name -@
From the Linux man page for zip:
-@ file lists. If a file list is specified as -@ [Not on MacOS], zip takes the list of input files from standard input instead of from the command line. For example,
zip -@ foo
will store the files listed one per line on stdin in foo.zip.
Under Unix, this option can be used to powerful effect in conjunction with the find command. For example, to archive all the C source files in the current directory and its subdirectories:
find . -name "*.[ch]" -print | zip source -@
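As an alternative to filtering with find at all, zip has its own -x option to exclude entries; a sketch, assuming the file sits directly inside the directory being zipped (the -x pattern is matched against the stored path, so quote it):
zip -r zip_file_name directory -x "directory/file_to_exclude"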

Zip together all HTML files under current directory

I am looking to zip together *.html files recursively under the current directory.
My current command is:
zip all-html-files.zip *.html
But this doesn't work recursively. Nor does adding the -r option it seems. Can anybody advise? I want to zip all html files under the current directory, including those underneath subdirectories, but zip the HTML files only, not their file folders.
Thanks!
What about this?
find /your/path/ -type f -name "*.html" | xargs zip all_html_files.zip
This looks for all .html files under /your/path (change it to your path), then pipes the result to xargs, which invokes zip to build the archive.
To junk the paths, add -j option:
find /your/path/ -type f -name "*.html" | xargs zip -j all_html_files.zip
find . -name "*.html" -print | zip all-html-files.zip -@
Try
find . -type f -name "*.html" | xargs zip all-html-files
You can also say
find . -type f -name "*.html" | zip all-html-files -@
If you do not want to preserve the directory structure, specify the -j option:
find . -type f -name "*.html" | zip -j all-html-files -@
man zip says:
-@ file lists. If a file list is specified as -@ [Not on MacOS], zip
takes the list of input files from standard input instead of from the
command line. For example,
zip -@ foo
will store the files listed one per line on stdin in foo.zip.
Under Unix, this option can be used to powerful effect in conjunction
with the find (1) command. For example, to archive all the C source
files in the current directory and its subdirectories:
find . -name "*.[ch]" -print | zip source -@
(note that the pattern must be quoted to keep the shell from expanding
it).
-j
--junk-paths
Store just the name of a saved file (junk the path), and do not
store directory names. By default, zip will store the full path
(relative to the current directory).
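If any of the HTML file names might contain spaces, a null-delimited pipeline (assuming GNU find and xargs) is the safer variant of the xargs answers above:
find . -type f -name "*.html" -print0 | xargs -0 zip all-html-files.zip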

How can I search for files and zip them in one zip file

I tried to search for files and zip them with the following command
find . regexpression -exec zip {} \;
however, it is not working. How can I do this?
The command you used runs zip on each file separately; try this:
find . -name <name> -print | zip newZipFile.zip -@
The -@ tells zip to read the file names from standard input. From man zip(1),
-@ file lists. If a file list is specified as -@ [Not on MacOS], zip takes the list of input files from standard input instead of from the command line.
Your command is close, but this might work better:
find . -regex 'regex' -exec zip filename.zip {} +
That will put all the matching files in one zip file called filename.zip. You don't have to worry about special characters in the filename (like a line break), which you would if you piped the results.
You can also provide the names as the result of your find command:
zip name.zip `find . -name <name> -print`
This is a feature of the shell you are using. You can search for "backticks" to determine how your shell handles this.
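In modern shells, $(...) is the clearer equivalent of backticks and nests better, though both forms still word-split the output, so they break on names containing whitespace:
zip name.zip $(find . -name <name> -print)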

Unzipping from a folder of unknown name?

I have a bunch of zip files, and I'm trying to make a bash script to automate the unzipping of certain files from it.
Thing is, although I know the name of the file I want, I don't know the name of the folder it's in; it is one folder deep inside the archive.
How can I extract these files, preferably discarding the folder?
Here's how to unzip any given file at any depth and junk the folder paths on the way out:
unzip -j somezip.zip "*somefile.txt"
The -j option junks any folder structure in the zip file, and the asterisk is a wildcard that matches along any path (quote it so the shell doesn't expand it first).
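To preview which entries the wildcard matches before extracting anything (the pattern rules are the same), list the archive first:
unzip -l somezip.zip "*somefile.txt"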
If you're in:
some_directory/
and the zip files are in any number of subdirectories, say:
some_directory/foo
find ./ -name myfile.zip -exec unzip {} -d /directory \;
Edit: As for the second part, removing the directory that contained the zip file I assume?
find ./ -name myfile.zip -exec unzip {} -d /directory \; -exec sh -c 'echo rm -rf "$(dirname "$1")"' _ {} \;
Notice the "echo": that's a sanity check. I always echo first when executing something destructive like rm -rf in a loop/iterative sequence like this. (The dirname call has to run inside sh -c; a plain backtick expression would be expanded by your own shell once, before find ever runs.) Good luck!
Have you tried unzip somefile.zip "*/blah.txt"?
You can use find to find the file that you need to unzip, and xargs to call unzip:
find /path/to/root/ -name 'zipname.zip' -print0 | xargs -0 -n1 unzip
-print0 makes the command work with files or paths that contain white space; -0 is the matching option to xargs. The -n1 runs unzip once per archive found, since unzip would treat a second archive name on the same command line as a member pattern rather than another zip file.
