Hi, I have to unzip a file that could contain a directory, and I want to exclude everything within that directory. I tried a lot of options and looked here as well, but I can't seem to find a good solution.
These are the contents of the zip file:
Please note that the depth of the EXCLUDE folder is unknown, but we have to exclude everything inside it.
$ unzip -l patch2.zip
Archive: patch2.zip
Length Date Time Name
--------- ---------- ----- ----
0 2013-10-29 17:42 EXCLUDE/
0 2013-10-29 17:24 EXCLUDE/inner/
0 2013-10-29 17:24 EXCLUDE/inner/inner1.txt
0 2013-10-29 15:45 EXCLUDE/file.txt
0 2013-10-29 15:44 patch.jar
0 2013-10-29 15:44 system.properties
--------- -------
0 6 files
I tried this command, but it only excluded the file directly inside the folder, not the folder itself or its nested contents:
$ unzip -l patch2.zip -x EXCLUDE/*
Archive: patch2.zip
Length Date Time Name
--------- ---------- ----- ----
0 2013-10-29 17:42 EXCLUDE/
0 2013-10-29 17:24 EXCLUDE/inner/
0 2013-10-29 17:24 EXCLUDE/inner/inner1.txt
0 2013-10-29 15:44 patch.jar
0 2013-10-29 15:44 system.properties
--------- -------
0 5 files
Thanks for the help.
You need to quote the exclude pattern so that it is passed to unzip intact. Otherwise it is expanded by the shell before unzip ever sees it.
Try:
unzip patch2.zip -x "EXCLUDE/*"
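With the pattern quoted, unzip itself matches it against every entry in the archive, and its * wildcard also crosses directory separators, so all nested paths under EXCLUDE/ are skipped. Based on the archive above, the listing should then look like this (illustrative):
$ unzip -l patch2.zip -x "EXCLUDE/*"
Archive: patch2.zip
Length Date Time Name
--------- ---------- ----- ----
0 2013-10-29 15:44 patch.jar
0 2013-10-29 15:44 system.properties
--------- -------
0 2 files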
@dogbane's answer is right.
But I'll still add another (I hope) interesting option, since you are on Linux:
mc
(aka: Midnight Commander)
Start it, and then: in the right panel, navigate to where you want your files to end up; in the left panel, navigate "inside" the ZIP file, and at that first level select + copy the things you need (i.e. select all, and unselect the EXCLUDE folder, for example).
mc is VERY flexible and nice to use, especially to tar/untar/zip/move/delete/rename files... (on Windows, an equivalent is TotalCommander, and I use its "synchronise" option very often to keep backups and origin in sync). It allows you to navigate archives as if they were uncompressed (trying to minimize the actual decompression to just the "navigating" part, so you don't uncompress them twice).
We have complaints "from the field" (i.e. from sysadmins installing software) that Cygwin "messes up" Windows permissions on NTFS (Windows 7/10/2008/2012, etc.).
Problem Use Case
The general use case is this:
Sysadmin launches some 'software installer' from the Cygwin bash command line
Installer runs fine
Sysadmin tries to start Windows services
Result:
Service fails to start
Workaround Steps
These steps seem to get past the problem:
Sysadmin resets NTFS permissions with the Windows ICACLS command (in this example, "acme" is the newly created directory; this command sets acme and its children to re-inherit permissions from the folder "d:\instances"):
d:\instances> icacls acme /RESET /T /C /Q
Sysadmin starts service
Result:
Windows service starts
Question
What makes Cygwin handle permissions for newly-written files differently than PowerShell? Is it a matter of a wrong umask value?
Can the sysadmin take steps in advance to ensure cygwin sets up permissions properly?
Thanks in advance.
I found the answer here; it refers to this mailing-list message.
You need to edit Cygwin's /etc/fstab and add "noacl" to the list of mount-options.
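The relevant line (quoted in full in the next answer) looks like this:
none /cygdrive cygdrive binary,noacl,posix=0,user 0 0
After saving it, close all Cygwin terminals and open a new one so the mount options take effect.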
To add to ulathek's answer, here is a copy-paste of the two URLs:
First:
How to fix incorrect Cygwin permission in Windows 7
Cygwin started to behave quite strangely after recent updates. I was not able to edit files in vim, because it was complaining that the files were read-only. Even cp -r didn't work correctly. Permissions on a new directory were broken and I was not able to remove it. Pretty weird behavior.
E.g. the output of ls -l:
total 2
----------+ 1 georgik None 34 Jul 14 18:09 index.jade
----------+ 1 georgik None 109 Jul 14 17:40 layout.jade
Hm. It is clear that something is wrong with the permissions. Even the owner has no permissions on those files.
Output of mount command:
C: on /cygdrive/c type ntfs (binary,posix=0,user,noumount,auto)
I found a solution at the Cygwin forum. It's quite easy to fix.
Open /etc/fstab and enter the following line:
none /cygdrive cygdrive binary,noacl,posix=0,user 0 0
Save it. Close all Cygwin terminals and start a new terminal.
Output of mount:
C: on /cygdrive/c type ntfs (binary,noacl,posix=0,user,noumount,auto)
Output of ls -l:
total 2
-rw-r--r-- 1 georgik None 34 Jul 14 18:09 index.jade
-rw-r--r-- 1 georgik None 109 Jul 14 17:40 layout.jade
Second:
7/14/2010 10:57 AM
>> Drive Y is a mapping to a network location. Interestingly, ls -l
>> /cygdrive returns:
>> d---------+ 1 ???????? ???????? 24576 2010-07-09 11:18 c
>> drwx------+ 1 Administrators Domain Users 0 2010-07-14 06:58 y
>>
>> The c folder looks weird, the y folder looks correct.
>>
> Try ls -ln /cygdrive. The user and group ownerships on the root of the
> C: drive are most likely not found in your passwd and group files. The
> -n option for ls will print the user and group IDs rather than try to
> look up their names. Unfortunately, I can't think of any way offhand to
> generate the passwd and group entries given only user and group IDs.
> Maybe someone else can comment on that.
>
I think your answer is correct:
$ ls -ln /cygdrive
total 24
d---------+ 1 4294967295 4294967295 24576 2010-07-09 11:18 c
drwx------+ 1 544 10513 0 2010-07-14 11:45 y
I edited my /etc/fstab file (it contained only commented lines) and
added this line at the end of the file:
none /cygdrive cygdrive binary,noacl,posix=0,user 0 0
I closed all my Cygwin processes, opened a new terminal and did an ls -l
on visitor.cpp again:
-rw-r--r-- 1 cory Domain Users 3236 2010-07-11 22:37 visitor.cpp
Success!!! The permissions are now reported as 644 rather than 000 and I
can edit the file with Cygwin vim and not have bogus read-only issues.
Thank you Jeremy.
cory
I created a backup folder on the FTP server, and sent all my .tar.gz files into the /backup folder
using (put file.tar.gz backup).
When I retrieve the backup, I get the backup folder as plain backup files. How do I convert the files back to a folder?
ftp server
ls
227 Entering Passive Mode (10,21,131,105,76,56)
150 Accepted data connection
drwxr-xr-x 6 100 ftpgroup 7 Oct 20 19:57 .
drwxr-xr-x 6 100 ftpgroup 7 Oct 20 19:57 ..
-r-------- 1 100 ftpgroup 84 Oct 21 11:15 .banner
drwxrwxrwx 3 100 ftpgroup 4 Oct 20 18:28 backup
drwxrwxrwx 2 100 ftpgroup 3 Oct 20 19:45 dailybackup
drwxrwxr-x 2 100 ftpgroup 3 Oct 20 19:57 hi5songs
drwxrwxr-x 2 100 ftpgroup 3 Oct 20 19:49 whole
226-Options: -a -l
226 7 matches total
I tried:
ftp> mget backup
mget .? y
227 Entering Passive Mode (10,21,131,105,62,8)
550 I can only retrieve regular files
mget ..? y
Warning: embedded .. in .. (changing to !!)
227 Entering Passive Mode (10,21,131,105,46,39)
550 Can't open !!: No such file or directory
mget backup? y
227 Entering Passive Mode (10,21,131,105,72,24)
550 I can only retrieve regular files
mget cpanelbackup? y
227 Entering Passive Mode (10,21,131,105,73,69)
550 Can't open cpanelbackup: No such file or directory
When I use (get backup home), it retrieves successfully, but as the files shown below.
server:
root@azar [/home]# ls
./ backup.2* .cpan/ dailybackup hi5songs.4 oldeserver
../ backup.3* cPanelInstall/ hi5songs/ hi5songs.5 oldserver/
0_README_BEFORE_DELETING_VIRTFS backup.4* .cpanm/ hi5songs.1 home quota.user
backup/ backup.5* .cpcpan/ hi5songs.2 latest virtfs/
backup.1* .banner cpeasyapache/ hi5songs.3 lost+found/ whole
I got that backup as green-colored executable files like backup.1* (note: I can't open or extract those files). What should I do?
How do I get my .tar.gz files back?
Please guide me.
Thanks in advance.
Updated Answer
If you want to get all files from /some/place on your server, to /home/here on your local machine, you would either do this:
cd /home/here # change directory before starting FTP
ftp server ... # connect
cd /some/place # go to desired folder on server
bi # ensure no funny business with line-endings
mget * # get all files
or you can change directory locally, within FTP like this:
ftp server ... # connect
cd /some/place # go to desired folder on server
lcd /home/here # LOCALLY change directory to where you want the files to 'land'
bi # ensure no funny business
mget * # get all files
Original Answer
I cannot understand your question at all, but you are doing some things wrong.
You cannot use GET or MGET to get a folder (directory) like you are trying to do with mget backup. You can only GET a file. Now your file may be a tar-file with more than one file in it, but it is still a file.
If you are getting tar-files and binary files, you should use BINARY mode to ensure that line-end characters, which may occur in binary files, are not translated between Windows and Unix line-endings. So, as a matter of course, you should issue the BI command before you get files.
If you have several files in your backup directory, you should probably do cd backup, then bi, then mget *.
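Putting that together for your backup directory, the whole session would look something like this (a sketch; folder and file names are taken from your listing):
ftp server     # connect and log in
cd backup      # the folder on the server
bi             # binary mode, no line-ending mangling
prompt         # optional: don't ask y/n for every file
mget *.tar.gz  # fetch all the tar archives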
Sorry if this makes no sense, but I will try to give all the information needed!
I would like to use rsync to copy a range of sequentially numbered files from one folder to another.
I am archiving a DCDM (it's a film thing) and it contains on the order of 600,000 individually numbered, sequential .tif image files (~10 MB each).
I need to break this up to properly archive onto LTO6 tapes. And I would like to use rsync to prep the folders such that my simple bash .sh file can automate the various folders and files that I want to back up to tape.
The command I normally use when running rsync is:
sudo rsync -rvhW --progress --size-only <src> <dest>
I use sudo if needed, and I always test the outcome first with --dry-run
The only way I’ve got anything to work (without kicking out errors) is by using the * wildcard. However, this only handles files matching a fixed prefix (e.g. 01* will only move files in the range 010000 - 019999), and I would have to repeat it for 02, 03, 04, etc.
I've looked on the internet, and am struggling to find an answer that works.
This might not be possible, and with 600,000 .tif files, I can't write an exclude for each one!
Any thoughts as to how (if at all) this could be done?
Owen.
You can check for the file name starting with a digit by using pattern matching:
for file in [0-9]*; do
# do something to $file name that starts with digit
done
Or, you could enable the extglob option and loop over all file names that contain only digits. This could eliminate any potential unwanted files that start with a digit but contain non-digits after the first character.
shopt -s extglob
for file in +([0-9]); do
# do something to $file name that contains only digits
done
+([0-9]) expands to one or more occurrences of a digit
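For example, in a directory containing the hypothetical files 000001, 12a and file.txt, only the all-digit name matches:
$ shopt -s extglob
$ echo +([0-9])
000001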
Update:
Based on the file name pattern in your recent comment:
shopt -s extglob
for file in legendary_dcdm_3d+([0-9]).tif; do
# do something to $file
done
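If the goal is to stage those files into fixed-size folders before taping, the same pattern can drive the split. A minimal sketch, assuming the file name prefix from your comment and a chunk size of 10,000 files (both are assumptions to adjust):
shopt -s extglob
for file in legendary_dcdm_3d+([0-9]).tif; do
    num=${file#legendary_dcdm_3d}   # strip the prefix
    num=${num%.tif}                 # strip the extension
    chunk=$(( 10#$num / 10000 ))    # 10# forces base 10 despite leading zeros
    mkdir -p "chunk_$chunk"
    mv -- "$file" "chunk_$chunk/"
done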
Globbing is the shell feature that expands a wildcard to a list of matching file names. You have already used it in your question.
For the following explanations, I will assume we are in a directory with the following files:
$ ls -l
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 file.txt
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 funny_cat.jpg
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-1.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-2.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-3.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-4.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2014-1.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2014-2.pdf
The simplest case is to match all files. The following makes for a poor man's ls.
$ echo *
file.txt funny_cat.jpg report_2013-1.pdf report_2013-2.pdf report_2013-3.pdf report_2013-4.pdf report_2014-1.pdf report_2014-2.pdf
If we want to match all reports from 2013, we can narrow the match:
$ echo report_2013-*.pdf
report_2013-1.pdf report_2013-2.pdf report_2013-3.pdf report_2013-4.pdf
We could, for example, have left out the .pdf part but I like to be as specific as possible.
You have already come up with a solution to use this for selecting a range of numbered files. For example, we can match reports by quater:
$ for q in 1 2 3 4; do echo "$q. quarter: " report_*-$q.pdf; done
1. quarter: report_2013-1.pdf report_2014-1.pdf
2. quarter: report_2013-2.pdf report_2014-2.pdf
3. quarter: report_2013-3.pdf
4. quarter: report_2013-4.pdf
If we are too lazy to type 1 2 3 4, we could have used $(seq 4) instead. This invokes the program seq with the argument 4 and substitutes its output (1 2 3 4 in this case).
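For example:
$ echo $(seq 4)
1 2 3 4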
Now back to your problem: If you want chunk sizes that are a power of 10, you should be able to extend the above example to fit your needs.
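For example, with six-digit numbering, every two-digit prefix selects a block of 10,000 files, so a loop over the prefixes covers the whole set. A sketch, assuming the files run from 000000 to 599999 and one destination folder per chunk (names are assumptions):
for p in $(seq -w 0 59); do
    rsync -rvhW --progress --size-only "$p"*.tif /archive/chunk_"$p"/
done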
Old question, I know, but someone may find this useful. The above examples for expanding a range also work with rsync. For example, to copy files starting with a, b and c, but not d and e, from the directory /tmp/from_here to the directory /tmp/to_here:
$ rsync -avv /tmp/from_here/[a-c]* /tmp/to_here
sending incremental file list
delta-transmission disabled for local transfer or --whole-file
alice/
bob/
cedric/
total: matches=0 hash_hits=0 false_alarms=0 data=0
sent 89 bytes received 24 bytes 226.00 bytes/sec
total size is 0 speedup is 0.00
If you are writing to LTO6 tapes, you should consider adding "--inplace" to your command. --inplace is meant for writing to linear filesystems such as LTO.
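For example, reusing the transfer above (the tape mount point is an assumption):
$ rsync -avv --inplace /tmp/from_here/[a-c]* /mnt/lto6/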
Earlier, I tried to put the KML file on the host; my question about that is here: Not showing the path in KML. Now I have created a new KMZ file, following Google's recommendations. The file is here: http://tourist-sweden.se/transport/map/sthlm/t-11-1.kmz . The map that calls it: http://tourist-sweden.se/transport/map/sthlm/t-11-bana.html
Now the map shows only the path, but does not show my icons. What is my mistake this time? Are there any robust and simple alternatives to KML?
Your KMZ file is not correct. If you zip up the contents of the t-11 directory, so the KML sits at the top level of the archive, it works.
It currently looks like this:
[lross@JJ kmz]$ unzip -l t-11-1.kmz
Archive: t-11-1.kmz
Length Date Time Name
-------- ---- ---- ----
0 07-07-13 12:06 t-11/
3655 07-07-13 12:43 t-11/t-11-bana.kml
0 07-07-13 12:08 t-11/files/
1039 07-04-13 21:21 t-11/files/subway-blue.png
If you create it from the t-11 directory so it looks like this, it works:
[lross@JJ t-11]$ unzip -l t-11-1a.kmz
Archive: t-11-1a.kmz
Length Date Time Name
-------- ---- ---- ----
0 07-07-13 12:08 files/
3655 07-07-13 12:43 t-11-bana.kml
1039 07-04-13 21:21 files/subway-blue.png
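To produce that layout, run zip from inside the t-11 directory so the KML ends up at the top level of the archive (a sketch; assumes the Info-ZIP zip tool is installed):
[lross@JJ t-11]$ zip -r ../t-11-1a.kmz t-11-bana.kml files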
http://www.geocodezip.com/v3_GoogleEx_layer-kml_linktoB.html?filename=http://www.geocodezip.com/geoxml3_test/kmz/t-11-1a.kmz
Can someone please advise why I'm getting "no changes found" at the end?
Also, I'm getting an annoying message: "Username not specified in .hg/hgrc. Keyring will not be used."
Versioning tool: latest version of Hg
Server: Linux
Workspace: ~/2012WS
LinuxServer123:~/2012WS # hg clone http://LinuxServer123/hg/GigaTest/
Username not specified in .hg/hgrc. Keyring will not be used.
http authorization required
realm: Mercurial Repositories
user: u123456
password:
destination directory: GigaTest
requesting all changes
adding changesets
adding manifests
adding file changes
added 14 changesets with 585 changes to 575 files (+1 heads)
2 files updated, 0 files merged, 0 files removed, 0 files unresolved
updating to branch default
0 files updated, 0 files merged, 0 files removed, 0 files unresolved
LinuxServer123:~/2012WS #
LinuxServer123:~/2012WS # cd GigaTest/
LinuxServer123:~/2012WS/GigaTest # ls -tlr
total 12
-rw-r--r-- 1 root root 25 Jan 10 16:36 hello.py
-rw-r--r-- 1 root root 25 Jan 10 16:36 HELLO-UP.PY
drwxr-xr-x 4 root root 4096 Jan 10 16:36 .hg
LinuxServer123:~/2012WS/GigaTest # vi hello.py
LinuxServer123:~/2012WS/GigaTest # ls -l > new.txt
LinuxServer123:~/2012WS/GigaTest # hg add new.txt
LinuxServer123:~/2012WS/GigaTest #
LinuxServer123:~/2012WS/GigaTest #
LinuxServer123:~/2012WS/GigaTest # hg stat
M hello.py
A new.txt
LinuxServer123:~/2012WS/GigaTest #
LinuxServer123:~/2012WS/GigaTest # hg out
comparing with http://LinuxServer123/hg/GigaTest/
Username not specified in .hg/hgrc. Keyring will not be used.
http authorization required
realm: Mercurial Repositories
user: u123456
password:
searching for changes
no changes found
LinuxServer123:~/2012WS/GigaTest #
Thanks in advance.
You have to do hg commit first.
hg stat shows the changes made to the working directory (since the last commit), and hg out shows the commits in your local repository that will be pushed out on hg push.
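A minimal sequence (the commit message is illustrative):
$ hg commit -m "edit hello.py, add new.txt"
$ hg out    # should now list the new changeset
$ hg push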
And the message "Username not specified in .hg/hgrc" means that your username is not specified in the .hg/hgrc file. Keyring is an extension I'm not familiar with; presumably it will take your username and do something with a key.
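As for the keyring warning: one common fix is to embed the username in the repository URL in the clone's .hg/hgrc, so the keyring extension knows which credentials to look up (username taken from your session; details may vary by extension version):
[paths]
default = http://u123456@LinuxServer123/hg/GigaTest/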