Sync zip file with folder? - linux

I know that specifying a zip file in the command adds the files to the zip file itself, but how can I truly sync a folder with an existing zip file, so that it adds new files, replaces modified files, and deletes files that are no longer in the folder?

The command-line "zip" tool has the -FS (or --filesync) flag that does exactly this (updates existing, adds new, deletes removed).
See http://www.info-zip.org/mans/zip.html.
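For example, a minimal sketch (the folder and archive names here are hypothetical):
cd /path/containing/myfolder      # run from the directory the archive was created in
zip -FS -r myfolder.zip myfolder  # adds new files, refreshes changed ones, drops deleted ones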

The command-line zip tool has options for --freshen (update existing files only), --update (update and add files), and --delete to remove entries, but I couldn't find a way to combine them all into one command.
Since the zip file probably needs to be completely rewritten during processing anyway, why not just delete the old zip file and create a new one from scratch?
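A minimal sketch of that approach (names hypothetical); rm -f avoids an error if the archive does not exist yet:
rm -f myfolder.zip            # -f: no complaint if the archive is missing
zip -r myfolder.zip myfolder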

Related

How to use zip to generate a new archive but not just refresh files in the archive and add files into it, on linux?

Here (on Linux),
there is an existing archive named A.zip, which includes File1 and File2:
A.zip:
File1
File2
and if I run the command zip A.zip File1 File3, then the archive A.zip becomes:
A.zip:
File1
File2
File3
However, what I really want is a brand-new archive A.zip, like:
A.zip:
File1
File3
I know it can be done by running rm A.zip and then zip A.zip File1 File3, but that is not elegant, and if I put these commands into a shell script, A.zip may not exist yet, and removing a non-existent file is not elegant either.
Are there any options to get this done?
Use these options to make it work:
zip -FSr A.zip File1 File3
OPTIONS
-FS
Synchronize the contents of an archive with the files on the OS. Normally when an archive is updated, new files are added and changed files are updated but files that no longer exist on the
OS are not deleted from the archive. This option enables a new mode that checks entries in the archive against the file system. If the file time and file size of the entry matches that of
the OS file, the entry is copied from the old archive instead of being read from the file system and compressed. If the OS file has changed, the entry is read and compressed as usual. If
the entry in the archive does not match a file on the OS, the entry is deleted. Enabling this option should create archives that are the same as new archives, but since existing entries are
copied instead of compressed, updating an existing archive with -FS can be much faster than creating a new archive. Also consider using -u for updating an archive.
For this option to work, the archive should be updated from the same directory it was created in so the relative paths match. If few files are being copied from the old archive, it may be
faster to create a new archive instead.
Note that the timezone environment variable TZ should be set according to the local timezone in order for this option to work correctly. A change in timezone since the original archive was
created could result in no times matching and recompression of all files.
This option deletes files from the archive. If you need to preserve the original archive, make a copy of the archive first or use the --out option to output the updated archive to a new
file. Even though it may be slower, creating a new archive with a new archive name is safer, avoids mismatches between archive and OS paths, and is preferred.
-r
Travel the directory structure recursively; for example:
zip -r foo.zip foo
or more concisely
zip -r foo foo
In this case, all the files and directories in foo are saved in a zip archive named foo.zip, including files with names starting with ".", since the recursion does not use the shell's file-name substitution mechanism. If you wish to include only a specific subset of the files in directory foo and its subdirectories, use the -i option to specify the pattern of files to be included. You should not use -r with the name ".*", since that matches ".." which will attempt to zip up the parent directory (probably not what was intended).
Multiple source directories are allowed as in
zip -r foo foo1 foo2
which first zips up foo1 and then foo2, going down each directory.
Note that while wildcards to -r are typically resolved while recursing down directories in the file system, any -R, -x, and -i wildcards are applied to internal archive pathnames once the directories are scanned. To have wildcards apply to files in subdirectories when recursing on Unix and similar systems where the shell does wildcard substitution, either escape all wildcards or put all arguments with wildcards in quotes. This lets zip see the wildcards and match files in subdirectories using them as it recurses.
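As the -FS description above notes, this option deletes entries from the archive. If the original archive must be preserved, zip 3.0's --out option can write the synced result to a new file instead; a sketch, using the file names from the question:
zip -FSr A.zip File1 File3 --out A_synced.zip   # A.zip is left untouched; A_synced.zip gets the synced contents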

How can we specify the unzip or 7za command in linux to extract multiple zip files into one folder while keeping all duplicates?

I currently have about 10 zip files I would like to extract into one folder. Each zip file contains around 1000 images. As a result, lots of the names of the images are duplicated. For example, in the first zip file, we have things like Img.jpg, Img(1).jpg, Img(2).jpg. I know that to extract multiple zip files into a single folder, I would do something like:
unzip '*.zip'
However, when it tries to put a file from the first zip file that has the same name as a file in the second zip file, it starts to ask:
replace duplicatefile.mp4? [y]es, [n]o, [A]ll, [N]one, [r]ename:
At this point, what do I do if I want to keep ALL files, including the duplicates, and possibly have them named to image(1).jpg instead?
In short, is there a way to call the unzip command on all the zip files, have them extracted into a single folder, without losing any files due to same names?
Thanks.
Invoke unzip --help for details.
But it appears unzip -n '*.zip' should do the trick?
(Make sure it does what you expect before going ahead!)
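Note that the wildcard has to be quoted so that unzip, not the shell, expands it, and that -n keeps the first extracted copy of each name and skips later duplicates rather than renaming them. A sketch (the target folder name is hypothetical):
unzip -n '*.zip' -d all_images   # extract every zip into all_images/, never overwriting existing files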

Remove .svn files from zip file

I would like to remove all .svn directories from a zip file. I cannot find the correct pattern. This is the pattern I tried: ".svn/*"
I was able to remove all .class files.
Found pattern. Should work for any directory.
"/.svn/"
Avoid the problem in the first place. Don't create your zip file directly from your working copy. Just use svn export to create a new directory without the .svn directories, and without any unversioned files, and zip that instead.
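A sketch of both approaches (the archive and directory names are hypothetical):
# strip the .svn directories from an existing archive
zip -d archive.zip "*/.svn/*"
# or build the archive from a clean export instead
svn export /path/to/working-copy exported-copy
zip -r archive.zip exported-copy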

Unzip archive to an existing directory structure

I'm looking at bringing the content of one WP blog over to another, as I will be using WPML to serve regional content instead of multiple sites. So it's not strictly a WP question, more a command-line one.
This may seem an obvious or stupid question, but if I bring over the other site's 'uploads' folder as a zip and unzip it into the wp-content folder, will the contents merge into the existing folders or overwrite what is already there?
If it's the latter, is there a switch I can append to ensure files are merged?
Thanks in advance,
Tom
When unzip finds a file that already exists in the destination, it will ask whether you want to overwrite it. You can then type y to overwrite it, A to overwrite all files, N if you don't want to overwrite any of them, and so on.
Example:
$ unzip archive.zip
Archive: archive.zip
replace foo? [y]es, [n]o, [A]ll, [N]one, [r]ename:
Try using
unzip -o <filepath/zipfile.zip> -d <path where you want files>
For the full list of options, see unzip --help.
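Applied to the merge described above (paths hypothetical), this drops the uploads into the existing tree, overwriting duplicates and leaving everything else in place:
unzip -o uploads.zip -d /path/to/wp-content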

Can you use tar to apply a patch to an existing web application?

Patches are frequently released for my CMS system. I want to be able to extract the tar file containing the patched files for the latest version directly over the full version on my development system. When I extract a tar file it puts it into a folder with the name of the tar file. That leaves me to manually copy each file over to the main directory. Is there a way to force the tar to extract the files into the current directory and overwrite any files that have the same filenames? Any directories that already exist should not be overwritten, but merged...
Is this possible? If so, what is the command?
Check out the --strip-components (or --strip-path, in older versions) argument to tar; it might be what you're looking for.
EDIT: you might want to throw --keep-newer-files into the mix, so any locally modified files aren't overwritten. And I would suggest testing new releases on a development server, then using rsync or subversion to carry over the changes.
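A sketch of that combination (archive name and paths hypothetical), assuming the tarball wraps everything in a single top-level directory that --strip-components=1 removes:
cd /path/to/app
tar -xzf /tmp/patch.tar.gz --strip-components=1 --keep-newer-files   # extract over the app, dropping the top-level folder and keeping locally newer files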
I tried getting --strip-components to work and, while I didn't try that hard, I didn't get it working. It kept flattening the directory structure. In searching, I came across the following command that seems to do exactly what I want:
pax -r -f patch.tar -s'/patch\///'
It's not tar, but hey, it works... Replace the word "patch" with whatever your tar file name is.
The option '--strip-components' allows you to trim parts of the embedded filenames. With that it is possible to do what you want.
For more help check out http://www.gnu.org/software/tar/manual/html_section/transform.html
I have just done:
tar -xzf patch.tar.gz
And it overwrites all the files that the patch contains.
I.e., if the patch was created for the contents of the app folder, I would extract it there. Results would be like this:
tar.gz contains: oldfolder/someoldfile.txt, oldfolder/newfolder/newfile.txt
before app looks like:
app/oldfolder/someoldfile.txt
Afterwards, app looks like
app/oldfolder/someoldfile.txt
app/oldfolder/newfolder/newfile.txt
And the "someoldfile.txt" is actually updated to what was in the tar.gz
Maybe this doesn't work with regular tar, only tar.gz. But I doubt it. I think it should work for everything, as long as user has write permissions.

Resources