Getting files names inside a rar/zip file without unzip - linux

Does anyone know if it is possible to get the names of the files inside a rar/zip archive without having to unrar/unzip it? And if so, is there a way to block this, or at least make it difficult?
Thanks

The file names in a zip file are visible even if the data is encrypted. If you want to hide the names, the easy solution is to zip the zip file encrypted.
Later versions of PKZip do have an option to encrypt the file names as well, -cd=encrypt (cd stands for central directory).

The -l flag to unzip(1) does just that:
-l
list archive files (short format). The names, uncompressed file sizes and modification dates and times of the specified files are printed, along with totals for all files specified.
unrar(1) has the l option:
l
List archive content.

Related

How to create a sub-zip based on the contents of another zip?

I would like to create a zip that contains just a subset based on another zip file. Is there a smarter way to do this than just extracting the specific files and then re-zipping them under a new name?
I'm looking for an efficient way to do this, as the original file will contain thousands and thousands of files and will have a rough size of ~30 GB.
One option is to copy the archive and delete the unwanted entries from the copy, e.g.:
zip --delete file.zip "junk/*"
A second option is to list the zip's contents with
zipinfo -1 file.zip > list.files
edit the list down to the files you want to include, and then extract just those:
unzip file.zip $(cat list.files)
before creating the new zip from the extracted files. The same list can of course also be used as an exclusion list for zip --delete, or generated on the fly, for instance in combination with grep.
Check also https://bitbucket.org/agalanin/fuse-zip

Importing list of files to compress/zip

Does anybody know of a way (on Windows) to create an archive (zip, rar, ...) and add files to it by importing a list of the files to be added (say from a CSV or text file, or simply pasted)? Say I have a list of 1,000 files scattered across multiple directories that I want to add to an archive; this would be a much simpler method than adding each file individually. I also do not want to archive the entire directory tree, as it is absolutely massive.
eg:
c:\somedir\file1.php
c:\somedir\somesubdir\file2.php
c:\someotherdir\file3.php
...
And no I do not want to import all files in certain directories, the hundreds of files are scattered across tens of directories which also contain lots of other files that I do not want to archive.
Thanks
rar.exe from WinRAR has the following option:
n@<list> Include files listed in specified list file

Overwrite files when copying if their content is not the same

I have a lot of files in one place (A) and a lot of other files in another place (B).
I'm copying A to B; many of the files are the same, but their content can differ!
Usually I used mc (Midnight Commander) for this and selected "Overwrite if different size".
But there are cases where the size is the same and the content is different. In that case mc keeps the file in B and does not overwrite it.
The mc overwrite dialog also has an "Update" button; I don't know what it does, and the help has no information on it. Maybe that is the solution?
So I'm looking for a solution that copies all files from A to B and overwrites a file in B every time a file with the same name exists there and its content differs from the one in A.
Do you know any solution?
I'd use rsync as this will not rely on the file date but actually check whether the content of the file has changed. For example:
rsync -cr <directory to copy FROM> <directory to copy TO>
Rsync copies files either to or from a remote host, or locally on the current host (it does not support copying files between two remote hosts).
-c, --checksum skip based on checksum, not mod-time & size
-r, --recursive recurse into directories
See man rsync for more options and details.
Have you tried the command line:
cp -ru A/* B/
This should recursively copy all changed files (those with a more recent timestamp) from directory A to directory B.
You can also use -a instead of -r in the command line, depending on what you want to do. See the cp man page.
You might want to keep some sort of 'index' file that holds the SHA-1 hash of the files, which you create when you write them. You can then calculate the 'source' hash and compare it against the 'destination' hash from the index file. This will only work if this process is the only way files are written to the destination.
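A sketch of that index idea using sha1sum (all paths and file names here are made up for illustration):

```shell
# Record destination hashes in an index, then copy only when hashes differ.
mkdir -p /tmp/hash-demo/src /tmp/hash-demo/dst
echo old > /tmp/hash-demo/dst/f.txt
echo new > /tmp/hash-demo/src/f.txt
( cd /tmp/hash-demo/dst && sha1sum f.txt > index.sha1 )   # written at write-time

src_hash=$(cd /tmp/hash-demo/src && sha1sum f.txt | cut -d' ' -f1)
dst_hash=$(cut -d' ' -f1 /tmp/hash-demo/dst/index.sha1)
if [ "$src_hash" != "$dst_hash" ]; then
    cp /tmp/hash-demo/src/f.txt /tmp/hash-demo/dst/f.txt
fi
```

The advantage over hashing on every run is that the destination side is never re-read; the caveat, as noted above, is that the index goes stale if anything else writes to the destination.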
http://linux.math.tifr.res.in/manuals/man/mc.html
The replace dialog is shown when you attempt to copy or move a file on top of an existing file. The dialog shows the dates and sizes of both files. Press the Yes button to overwrite the file, the No button to skip the file, the alL button to overwrite all the files, the nonE button to never overwrite, and the Update button to overwrite only if the source file is newer than the target file. You can abort the whole operation by pressing the Abort button.

How to change a file inside an archive (.ear) file without extracting entire file

I have an .ear file (an archive, like tar/zip) containing a file I want to change.
For example, myfile.ear contains 1.txt; I want to rename 1.txt to 2.txt and possibly also change some of the content inside 1.txt (the way sed does).
I really want to avoid having to extract myfile.ear, change the file, and compress it again.
Does anyone know a way to achieve this on Linux?
And if it's not possible, I would also like to know why.
Thanks.
EAR files are just JAR files, which are just ZIP files. The ZIP format, IIRC, stores metadata and data interleaved, so a replacement file (which might be larger or smaller than the one it replaces) might not fit, or might leave a gap; in practical terms the archive must be rewritten when doing modifications.

Linux - Restoring a file

I've written a very basic shell script that moves a specified file into the dustbin directory. The script is as follows:
#!/bin/bash
# move items to dustbin directory
mv "$@" ~/dustbin/
echo "File moved to dustbin"
This works fine for me, any file I specify gets moved to the dustbin directory. However, what I would like to do is create a new script that will move the file in the dustbin directory back to its original directory. I know I could easily write a script that would move it back to a location specified by the user, but I would prefer to have one that would move it to its original directory.
Is this possible?
I'm using Mac OS X 10.6.4 and Terminal
You will have to store where the original file came from, then. Maybe in a separate file, a database, or in the file's attributes (metadata).
Create a logfile with 2 columns:
The complete filename in the dustbin
The complete original path and filename
You will need this logfile anyway: what will you do when a user deletes 2 files in different directories, but with the same name? /home/user/.wgetrc and /home/user/old/.wgetrc?
What will you do when a user deletes a file, makes a new one with the same name, and then deletes that too? You'll need versions or timestamps or something.
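A sketch of the two-column logfile approach. The dustbin path is hardcoded for illustration, readlink -f is the GNU form (OS X may need another way to get the absolute path), and the duplicate-name cases above are handled only as "the most recent entry wins":

```shell
#!/bin/bash
# Delete: record "name-in-dustbin <TAB> original absolute path", then move.
# Restore: look the name up in the log and move the file back.
DUSTBIN=/tmp/dustbin-demo

trash() {
    mkdir -p "$DUSTBIN"
    for f in "$@"; do
        printf '%s\t%s\n' "$(basename "$f")" "$(readlink -f "$f")" >> "$DUSTBIN/.log"
        mv "$f" "$DUSTBIN/"
    done
}

restore() {
    # use the most recent log entry for this name
    orig=$(awk -F'\t' -v n="$1" '$1 == n { p = $2 } END { print p }' "$DUSTBIN/.log")
    if [ -n "$orig" ]; then
        mv "$DUSTBIN/$1" "$orig"
    fi
}
```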
You need to store the original location somewhere, either in a database or in an extended attribute of the file. A database is definitely the easiest way to do it, though an extended attribute would be more robust. Looking in ~/.Trash/ I see some, but not all files have extended attributes, so I'm not sure how Apple does it.
You need to somehow encode the source directory in the file. I think the easiest way is to change the filename in the dustbin directory, so that /home/user/music/song.mp3 becomes ~/dustbin/song.mp3|home_user_music
When you copy it back, your script needs to parse the file name and reconstruct the path from the portion after the |.
Another approach would be to let the filesystem be your database.
A file moved from /some/directory/somewhere/filename would be moved to ~/dustbin/some/directory/somewhere/filename, and you'd run find ~/dustbin -name "$file" to find it based on its basename (from user input). Then you'd just trim "~/dustbin" from the output of find and you'd have the destination ready to use. If more than one file is returned by find, you can list the candidates for user selection. You could use ~/dustbin/$deletiondate if you wanted to make it possible to roll back to earlier versions.
You could do a cron job that would periodically remove old files and the directories (if empty).
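A minimal sketch of mirroring the original path under the dustbin (all paths are illustrative):

```shell
# "Delete": recreate the file's directory structure under the dustbin.
file=/tmp/fsdb-demo/some/dir/note.txt
mkdir -p "$(dirname "$file")" && echo data > "$file"

dustbin=/tmp/fsdb-dustbin
mkdir -p "$dustbin$(dirname "$file")"
mv "$file" "$dustbin$file"

# "Restore": find by basename, strip the dustbin prefix to get the target.
found=$(find "$dustbin" -name note.txt)
mv "$found" "${found#"$dustbin"}"
```

Because the full original path is encoded in the dustbin's directory tree, no separate logfile is needed, and files with the same basename in different directories never collide.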
