Zip function junk directory paths - linux

I'm creating a zip file by relatively specifying file locations. Here is an example of the command I'm running:
zip priv/purchases/test.zip priv/audio/5001.mp3 priv/audio/5002.mp3
When the file compresses it maintains the relative paths of the files. Thus I get a file hierarchy of:
/priv
  /audio
    /5001.mp3
    /5002.mp3
Instead, I'd like the files to end up at the root of the archive when it's uncompressed. I've read the man page and I guess I should be using the -j flag.
-j seems to work but it ALSO includes the file structure. Why?

Well, don't I feel silly. Apparently, if you don't remove the previously created zip file, zip just adds the new entries to it alongside the old ones. Shoot! Haha.
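For the record, a sketch of the fixed sequence, reusing the paths from the question (the rm just guarantees zip starts from a fresh archive instead of updating the old one):
rm -f priv/purchases/test.zip
zip -j priv/purchases/test.zip priv/audio/5001.mp3 priv/audio/5002.mp3
With -j the entries are stored as 5001.mp3 and 5002.mp3, with no priv/audio/ prefix.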

Related

GZip an entire directory

I used the following:
gzip -9 -c -r <some_directory> > directory.gz
How do I decompress this directory?
I have tried
gunzip directory.gz
I am just left with a single file and not a directory structure.
As others have already mentioned, gzip is a file compression tool and not an archival tool. It cannot work with directories. When you run it with -r, it will find all files in a directory hierarchy and compress them, i.e. replacing path/to/file with path/to/file.gz. When you pass -c the gzip output is written to stdout instead of creating files. You have effectively created one big file which contains several gzip-compressed files.
Now, you could look for the gzip file header/magic number, which is 1f8b and then reconstruct your files manually.
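If you would rather script that reconstruction than do it by hand, here is a rough sketch in Python (assuming the concatenated output was saved as directory.gz; the member_000, member_001, ... output names are made up for illustration). It walks the concatenated gzip members via zlib rather than searching for the magic bytes, and it recovers file contents but not names or directories; the binwalk approach described below does better on names.

import zlib

data = open("directory.gz", "rb").read()
index = 0
while data:
    d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)  # expect a gzip header
    chunk = d.decompress(data)                          # inflate one member
    with open("member_%03d" % index, "wb") as out:
        out.write(chunk)
    data = d.unused_data   # bytes left after this member, i.e. the next gzip stream
    index += 1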
The sensible thing to do now is to create backups (if you haven't already). Backups always help (especially with problems such as yours). Create a backup of your directory.gz file now. Then read on.
Fortunately, there's an easier way than manually reconstructing all files: using binwalk, a forensics utility which can be used to extract files from within other files. I tried it with a test file, which was created the same way as yours. Running binwalk -e file.gz will create a folder with all extracted files. It even manages to reconstruct the original file names. The hierarchy of the directories is probably lost. But at least you have your file contents and their names back. Good luck!
Remember: backups are essential.
(For completeness' sake: What you probably intended to run: tar czf directory.tar.gz directory and then tar xf directory.tar.gz)
gzip will compress one or more files, but it is not meant to function as an archive utility. The posted command line yields N compressed file images concatenated to stdout and redirected to the named output file; unfortunately the directory structure is not recorded (the individual gzip headers may still carry the original file names, as the answer above notes). A pair like this should work:
(create)
tar -czvf dir.tar.gz <some-dir>
(extract)
tar -xzvf dir.tar.gz

Python extract zip file avoiding junk paths

In ubuntu, when extracting a zip file, there is an option -j that:
junk paths. The archive's directory structure is not recreated; all files are
deposited in the extraction directory (by default, the current one).
I am searching for this kind of option in Python but did not find anything successful for my code:
with zipfile.ZipFile(my_file) as zip_ref:
    zip_ref.extractall(my_path)
How can I handle this situation?
Thank you very much for your information!
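There is no junk-paths flag on extractall, but a minimal sketch along these lines should give the same effect (reusing the question's my_file and my_path names): extract each entry by hand, keeping only its base name and skipping directory entries.

import os
import shutil
import zipfile

with zipfile.ZipFile(my_file) as zip_ref:
    for member in zip_ref.infolist():
        name = os.path.basename(member.filename)
        if not name:  # a directory entry, nothing to write
            continue
        target = os.path.join(my_path, name)
        with zip_ref.open(member) as src, open(target, "wb") as dst:
            shutil.copyfileobj(src, dst)

Note that entries from different folders that share a base name will overwrite each other, exactly as unzip -j would.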

How to use the zip command in Linux so the archive has short paths?

I used the zip command in Linux (Red Hat); this is my command:
zip -r /home/username/folder/compress/zip.zip /home/username/folder/compressed/*
Then, when I open zip.zip, I see that the archive reproduces the directory structure of the full path to the compressed folder.
I want the zip to contain only the list of *.txt files.
Because I run this command from a crontab script, I can't use cd to change into the folder before running zip.
Please help me.
I skimmed the zip man page and this is what I have found. There is no option to archive files relative to a different directory. The closest I have found is zip -j, which removes the entire path and stores the files directly in the zip rather than in subdirectories. I do not know what happens in the case of file name conflicts, such as if /home/username/folder/compressed/a.txt and /home/username/folder/compressed/subdir/a.txt both exist. If this is not a problem for you, you can use this option, but I am concerned because you did specify the -r option, indicating that you expect zip to traverse subfolders.
I also thought of the possibility that your script could somehow call zip with a different working directory, but I took a look at this unix stack exchange page and it looks like their options use cd.
I have to admit I do not understand why you cannot use cd and I am very curious about it. You said something about using crontab, but I have never heard of anything wrong with changing directories in a crontab script.
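For what it's worth, a sketch of the "different working directory" idea mentioned above, assuming the crontab entry runs through a POSIX shell: wrap the cd in a subshell so it affects only that one command and leaves the rest of the script untouched.
(cd /home/username/folder/compressed && zip /home/username/folder/compress/zip.zip *.txt)
The entries are then stored relative to the compressed directory, so zip.zip contains just the *.txt file names.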
I used the -j option with the zip command:
zip -jr /home/username/folder/compress/zip.zip /home/username/folder/compressed/*
and that settled the problem, thanks.

zip command skip errors

zip -r file.zip folder/
This is the typical command I use to zip a directory; however, it runs on an active site, so images are constantly deleted or updated. That leads to the command failing because a file is there when the process starts but gone by the time zip actually gets to compressing it (at least from what I can see).
I have no option to stop the editing of the files in this case, so my only hope is to just skip them; the number of images being edited is insignificant compared to the sheer size of the directory. So 2-3 files changing out of 100,000 is nothing, but the error stops the compression altogether.
I tried to look for a way around this but have had no luck. I could just be looking in the wrong direction, but I feel there is no way this is impossible.
Here is an example error:
zip I/O error: No such file or directory
zip error: Input file read failure (was zipping uploads/2010/03/file.jpg)
Is there some way to use the zip command or something similar to zip a folder, but if it runs into an error when it hits a file, it just skips it?
tar is always a good option for compression on Linux. Beware that zip may also have file size limit issues.
tar vcfz file.tar.gz folder
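If you do go the tar route with GNU tar, it already keeps going when a file disappears mid-run, and the --ignore-failing-read option additionally stops unreadable files from turning into a failure exit status. A sketch, untested against your layout:
tar --ignore-failing-read -czvf file.tar.gz folder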

Can you use tar to apply a patch to an existing web application?

Patches are frequently released for my CMS system. I want to be able to extract the tar file containing the patched files for the latest version directly over the full version on my development system. When I extract a tar file it puts it into a folder with the name of the tar file. That leaves me to manually copy each file over to the main directory. Is there a way to force the tar to extract the files into the current directory and overwrite any files that have the same filenames? Any directories that already exist should not be overwritten, but merged...
Is this possible? If so, what is the command?
Check out the --strip-components (or, on older versions, --strip-path) argument to tar; it might be what you're looking for.
EDIT: you might want to throw --keep-newer-files into the mix, so any locally modified files aren't overwritten. And I would suggest testing new releases on a development server, then using rsync or subversion to carry over the changes.
I tried getting --strip-components to work and, while I didn't try that hard, I didn't get it working. It kept flattening the directory structure. In searching, I came across the following command that seems to do exactly what I want:
pax -r -f patch.tar -s'/patch///'
It's not tar, but hey, it works... Replace the words "patch" with whatever your tar file name is.
The option '--strip-components' allows you to trim parts of the embedded filenames. With that it is possible to do what you want.
For more help check out http://www.gnu.org/software/tar/manual/html_section/transform.html
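A sketch of that approach, assuming the patch archive has a single top-level folder (patch/, say) whose contents should land in the current directory:
tar -xzf patch.tar.gz --strip-components=1
Run from inside the application directory, an entry such as patch/oldfolder/someoldfile.txt is then written to oldfolder/someoldfile.txt, overwriting what is already there; add --keep-newer-files if locally modified files should survive.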
I have just done:
tar -xzf patch.tar.gz
And it overwrites all the files that the patch contains.
I.e., if the patch was created for the contents of the app folder, I would extract it there. Results would be like this:
tar.gz contains: oldfolder/someoldfile.txt, oldfolder/newfolder/newfile.txt
before app looks like:
app/oldfolder/someoldfile.txt
Afterwards, app looks like
app/oldfolder/someoldfile.txt
app/oldfolder/newfolder/newfile.txt
And the "someoldfile.txt" is actually updated to what was in the tar.gz
Maybe this doesn't work with regular tar, only tar.gz, but I doubt it. I think it should work for everything, as long as the user has write permissions.
