I have two questions regarding unzip.
I am trying to extract everything in a zip file below the first parent directory (i.e. strip the top-level directory).
I am trying this and it's not working:
unzip -d /var/www/html/data/ file-gpl.zip */*
The other question: in one of my old files someone used the unzip -W flag, which I could not find in the man page. What does it do?
I tried this:
[20:59][user1@core:/var/www/html/build]$ unzip -W -d test file.zip
UnZip 6.00 of 20 April 2009, by Info-ZIP. Maintained by C. Spieler. Send
bug reports using http://www.info-zip.org/zip-bug.html; see README for details.
Usage: unzip [-Z] [-opts[modifiers]] file[.zip] [list] [-x xlist] [-d exdir]
Default action is to extract files in list, except those in xlist, to exdir;
file[.zip] may be a wildcard. -Z => ZipInfo mode ("unzip -Z" for usage).
-p extract files to pipe, no messages -l list files (short format)
-f freshen existing files, create none -t test compressed archive data
-u update files, create if necessary -z display archive comment only
-v list verbosely/show version info -T timestamp archive to latest
-x exclude files that follow (in xlist) -d extract files into exdir
modifiers:
-n never overwrite existing files -q quiet mode (-qq => quieter)
-o overwrite files WITHOUT prompting -a auto-convert any text files
-j junk paths (do not make directories) -aa treat ALL files as text
-U use escapes for all non-ASCII Unicode -UU ignore any Unicode fields
-C match filenames case-insensitively -L make (some) names lowercase
-X restore UID/GID info -V retain VMS version numbers
-K keep setuid/setgid/tacky permissions -M pipe through "more" pager
See "unzip -hh" or unzip.txt for more help. Examples:
unzip data1 -x joe => extract all files except joe from zipfile data1.zip
unzip -p foo | more => send contents of foo.zip via pipe into program more
unzip -fo foo ReadMe => quietly replace existing ReadMe if archive file newer
1. unzip -d ./new/destination example.zip
2. The description of the -W flag from the man page on my box is below:
-W
[only when WILD_STOP_AT_DIR compile-time option enabled] modifies the pattern matching routine so that both ‘?’ (single-char wildcard) and ‘*’ (multi-char wildcard) do not match the directory separator character ‘/’. (The two-character sequence ‘**’ acts as a multi-char wildcard that includes the directory separator in its matched characters.) Examples:
"*.c" matches "foo.c" but not "mydir/foo.c"
"**.c" matches both "foo.c" and "mydir/foo.c"
"*/*.c" matches "bar/foo.c" but not "baz/bar/foo.c"
"??*/*" matches "ab/foo" and "abc/foo" but not "a/foo" or "a/b/foo"
This modified behaviour is equivalent to the pattern matching style used by the shells of some of UnZip’s supported target OSs (one example is Acorn RISC OS). This option may not be available on systems where the Zip archive’s internal directory separator character ‘/’ is allowed as a regular character in native operating system filenames. (Currently, UnZip uses the same pattern matching rules for both wildcard zipfile specifications and zip entry selection patterns in most ports. For systems allowing ‘/’ as a regular filename character, the -W option would not work as expected on a wildcard zipfile specification.)
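As a hedged illustration of the -W behaviour described above (file-gpl.zip and the target directory are placeholders taken from the question; the flag only works if your unzip build has WILD_STOP_AT_DIR enabled):
# Without -W, "*/*" matches any entry containing a slash, at any depth.
# With -W, "*" stops at "/", so this selects only entries exactly one directory deep:
unzip -W -d /var/www/html/data/ file-gpl.zip "*/*"
# "**" keeps matching across "/", so this selects .c entries at any depth:
unzip -W -d /var/www/html/data/ file-gpl.zip "**.c"
Note that selecting entries does not strip the leading directory from the extracted paths; -j (junk paths, listed in the usage output above) discards all directory components rather than only the first.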
I have directory /logos which contains approximately 10000 png images. Can you please suggest some script to make two new folders /logos-1 and /logos-2 each one with half of the images from initial folder?
Thank you in advance <3
One approach could be to iterate over the files in the folder, keep a counter, and move each file into one of the two directories in turn:
counter=0
mkdir -p logos-1
mkdir -p logos-2
for file in logos/*
do
[ -e "$file" ] || continue
echo mv "$file" "logos-$((counter++ % 2 + 1))/"
done
Remove the echo if the mv commands look appropriate.
You can use rename, a.k.a. Perl rename and prename for that. I assume you don't really want the leading slashes and you aren't really working in the root directory - put them back if you are.
rename --dry-run -p -N 01 '$_ = join "", "logos-", $N++%2+1, "/$_"' *.png
Sample Output
'1.png' would be renamed to 'logos-2/1.png'
'10.png' would be renamed to 'logos-1/10.png'
'2.png' would be renamed to 'logos-2/2.png'
'3.png' would be renamed to 'logos-1/3.png'
'4.png' would be renamed to 'logos-2/4.png'
'5.png' would be renamed to 'logos-1/5.png'
'6.png' would be renamed to 'logos-2/6.png'
'7.png' would be renamed to 'logos-1/7.png'
'8.png' would be renamed to 'logos-2/8.png'
'9.png' would be renamed to 'logos-1/9.png'
You can remove the --dry-run if the output looks good. The -p means it will create any necessary directories/paths for you. If you aren't familiar with Perl that means:
"Set N=1. For each PNG file, make the new name (which we must store in special variable $_) equal to the result of joining the word logos- with a number alternating between 1 and 2, with a slash followed by whatever it was before ($_)."
You may find this alternative way of writing it easier:
rename --dry-run -N 01 '$_ = sprintf("logos-%d/$_", $N%2+1)' *.png
Using this tool confers several benefits:
you can do dry runs
you can calculate any replacement you like
you don't need to create directories
it will not clobber files if multiple inputs rename to the same output
On macOS, use homebrew and install with:
brew install rename
I'm trying to ftp a folder using the command line ftp client, but so far I've only been able to use 'get' to get individual files.
You could rely on wget, which usually handles FTP retrieval properly (at least in my own experience). For example:
wget -r ftp://user:pass@server.com/
You can also use -m which is suitable for mirroring. It is currently equivalent to -r -N -l inf.
If you have special characters in the credentials, you can specify the --user and --password arguments to get it to work. Example with a login containing special characters:
wget -r --user="user@login" --password='Pa$$wo|^D' ftp://server.com/
As pointed out by @asmaier, watch out: even though -r enables recursion, it has a default maximum depth of 5:
-r
--recursive
Turn on recursive retrieving.
-l depth
--level=depth
Specify recursion maximum depth level depth. The default maximum depth is 5.
If you don't want to miss any subdirectories, it is better to use the mirroring option, -m:
-m
--mirror
Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite
recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf
--no-remove-listing.
Just to complement the answer given by Thibaut Barrère.
I used:
wget -r -nH --cut-dirs=5 -nc ftp://user:pass@server//absolute/path/to/directory
Note the double slash after the server name. If you don't put an extra slash the path is relative to the home directory of user.
-nH avoids the creation of a directory named after the server name
-nc avoids creating a new file if it already exists on the destination (it is just skipped)
--cut-dirs=5 takes the content of /absolute/path/to/directory and puts it in the directory where you launch wget. The number 5 filters out the 5 components of the path; the double slash counts as an extra (empty) component.
ncftp -u <user> -p <pass> <server>
ncftp> mget directory
If lftp is installed on your machine, use mirror dir and you are done; mirror downloads the remote directory recursively.
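A minimal lftp sketch (the host, credentials, and paths here are placeholders, not from the question):
lftp -u user,pass -e 'mirror /remote/dir /local/dir; bye' ftp.example.com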
If you can use scp instead of ftp, the -r option will do this for you. I would check to see whether you can use a more modern file transfer mechanism than FTP.
Use WGet instead. It supports HTTP and FTP protocols.
wget -r ftp://mydomain.com/mystuff
Good Luck!
reference: http://linux.about.com/od/commands/l/blcmdl1_wget.htm
There is ncftp, which is available for installation on Linux. It works over the FTP protocol and can be used to download files and folders recursively; it has been used and works fine for recursive folder/file transfers.
Check this link... http://www.ncftp.com/
If you can, I strongly suggest you tar and bzip (or gzip, whatever floats your boat) the directory on the remote machine—for a directory of any significant size, the bandwidth savings will probably be worth the time to zip/unzip.
If you want to stick to command line FTP, you should try NcFTP. Then you can use get -R to recursively get a folder. You will also get completion.
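A hedged example session (host, credentials, and the directory name are placeholders):
ncftp -u user -p pass server.example.com
ncftp> cd /remote/parent
ncftp> get -R directory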
wget -r ftp://url
Works perfectly on Red Hat and Ubuntu.
You should not use ftp. Like telnet, it does not use secure protocols, and passwords are transmitted in clear text. This makes it very easy for third parties to capture your username and password.
To copy remote directories remotely, these options are better:
rsync is the best-suited tool if you can log in via ssh, because it copies only the differences and can easily restart in the middle if the connection breaks (a minimal example is sketched after the links below).
scp -r is the second-best option to recursively copy directory structures.
To fetch files recursively, you can use a script like this:
https://gist.github.com/flibbertigibbet/8165881
See:
rsync man page
ssh man page
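As a minimal sketch of the rsync-over-ssh approach mentioned above (user, host, and paths are placeholders):
rsync -avz -e ssh user@server.example.com:/remote/dir/ /local/dir/
Here -a preserves permissions and timestamps, -v is verbose, and -z compresses data during transfer.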
Toggle interactive prompting off with the prompt command, then use mget.
Usage:
ftp> cd /to/directory
ftp> prompt
ftp> mget *
I have many WebP-format images in a folder, but with a .jpg extension, like
abc-test.jpg
It's a WebP image. I want to convert it to .png format with the same name. For that I used this command and it worked:
find . -name "*.jpg" -exec dwebp {} -o {}.png \;
It converted all the WebP images to .png, but the issue is that it saves the images like this:
abc-test.jpg.png
But my requirement is to save them without the .jpg extension, like
abc-test.png
If you have many to convert/rename, I would recommend you use GNU Parallel: you not only get them converted faster by doing them in parallel, but you can also take advantage of its ability to modify filenames.
The command you want is:
parallel dwebp {} -o {.}.png ::: *.jpg
where the {.} means "the filename without the original extension".
If you want to recurse into subdirectories too, you can use:
find . -name "*.jpg" -print0 | parallel -0 dwebp {} -o {.}.png
If you want a progress meter, or an "estimated time of arrival", you can add --progress or --eta after the parallel command.
If you want to see what GNU Parallel would run, without actually running anything, add --dry-run.
I commend GNU Parallel to you in this age where CPUs are getting "fatter" (more cores) rather than faster.
How to convert .webp images to .png on Linux
Tested on Linux Ubuntu 20.04
This question is the top hit for the Google search of "linux convert .webp image to png". Therefore, for anyone stumbling here and just wanting that simple answer, here it is:
# 1. Install the `webp` tool
sudo apt update
sudo apt install webp
# 2. Use it: convert in.webp to out.png
dwebp in.webp -o out.png
Done! You now have out.png.
References
I learned about dwebp from the question itself
I did it with a short one-liner that does not require parallel to be installed on the system:
for x in *.jpg; do dwebp "$x" -o "${x%.*}.png"; done
And this works for the current directory only.
I would try to amend the @mark-setchell recursive solution so it would look like this:
for x in $(find . -name "*.jpg"); do dwebp "$x" -o "${x%.*}.png"; done
The ${x%.*} part is the one requiring a word of explanation here: it tells bash to strip the final dot and everything after it from the x variable. The % operator removes the shortest matching suffix, so for names containing several dots only the last extension is removed.
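A quick illustration of that expansion (the file names here are made up for demonstration):
x="abc-test.jpg"; echo "${x%.*}"   # prints abc-test
x="a.b.c.jpg";    echo "${x%.*}"   # prints a.b.c (only the last extension is removed)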
If the problem is with Linux image viewers (that being the reason for converting), then I found this:
"Add WebP support to GNOME Image Viewer in Ubuntu and Other Linux
By default, the photo viewer does not support WebP images files. However, you can add WebP support by installing webp-pixbuf-loader library. Not only it allows you to open WebP files in GNOME Image Viewer, it also displays thumbnails for WebP files in the file explorer.
On Ubuntu-based Linux distributions, you can install this library using a PPA. Use the following commands one by one:"
sudo add-apt-repository ppa:krifa75/eog-ordissimo
sudo apt update
sudo apt install webp-pixbuf-loader
A good thing to do is to use sed along with mv. sed matches the old extension and replaces it with the new one.
for file in *.jpg;
do
mv "$file" "`echo $file | sed s/.jpg/.png/`"
done
If you want to retain the old files, use cp instead of mv.
Is there a specific Unix/Linux command that would show when a file was renamed? I have a file in a Linux box and want to know when the file got renamed. I tried ls -ltr but that command only shows the date when it was created and not when it was last modified.
Is there a specific Unix/Linux command to do this?
You can use
stat filename
Check Access for when it was last read.
Check Modify for when its contents were last updated.
Check Change for when its metadata last changed (which, on some file systems, includes a rename).
Plus other file info like blocks, inodes, etc.
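A minimal sketch with GNU coreutils stat (filename is a placeholder):
stat filename                       # full human-readable output
stat -c '%x  %y  %z  %n' filename   # just the access, modify, and change times plus the name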
The behaviour is going to depend on the file system type. On classic Unix file systems (and, amongst others, the Mac OS X HFS file system in particular), the interesting thing about a file rename within a file system is that it does not affect the inode of the file at all; it only affects the directory (or directories) where the file was moved from/to. So, within a single file system, the only timestamps that change for a rename are the modify time of the directory holding the file. For example*:
$ fl -a -c -m -n xxx.pl
2013-07-17 16:17:17 2013-07-17 16:17:17 2013-07-17 16:17:17 xxx.pl
$ mv xxx.pl yyy.pl
$ fl -a -c -m -n yyy.pl
2013-07-17 16:17:17 2013-07-17 16:17:17 2013-07-17 16:17:17 yyy.pl
$
This means that on such file systems, you cannot tell, after the fact, when the file was renamed.
* fl is a home-brew command; the invocation is equivalent to the Mac OS X stat invocation stat -f '%Sa %Sm %Sc %N'. This is only marginally related to the GNU/Linux stat command.
On the other hand, empirical evidence shows that a Linux ext4 file system does record the name change with a change of the modification time of the inode itself.
$ stat -c '%x %y %z %n' test.dat
2013-07-19 10:31:31.803842672 -0700 2013-07-19 10:31:31.803842672 -0700 2013-07-19 10:31:31.803842672 -0700 test.dat
$ mv test.dat test.data
$ stat -c '%x %y %z %n' test.data
2013-07-19 10:31:31.803842672 -0700 2013-07-19 10:31:31.803842672 -0700 2013-07-22 09:11:49.074339525 -0700 test.data
$
So, what you can do depends on the O/S on which you are running and which file system type your file is on.
If you have the inotify service available and running, you could record the event that is the file rename, but if you don't record the event, there is no 'post mortem' way to know when the file was renamed.
The modify time on the directory is not a good indication of when a specific file was changed; it changes whenever any file in it is created, moved, or deleted. If there is only a single file in the directory, then the change time probably indicates when that file was created or renamed, but single-file directories are the exception and not the rule.
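As a hedged illustration of that caveat (GNU stat; mydir and the file names are placeholders):
stat -c '%y  %n' mydir              # directory modify time before the rename
mv mydir/old.txt mydir/new.txt
stat -c '%y  %n' mydir              # now reflects the rename, or any other change in mydir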
You can use the inotify library (man page) to be notified when something happens on a file.
It's likely that there are bindings to scripting languages.
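For example, a rough sketch using inotifywait from the inotify-tools package (the watched path is a placeholder, and the package must be installed):
inotifywait -m -e moved_from -e moved_to --timefmt '%F %T' --format '%T %e %w%f' /path/to/watch
Each rename inside the watched directory is reported as a MOVED_FROM/MOVED_TO pair with a timestamp.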
On AIX, you can use the equivalent of stat:
istat filename
It shows the file's last updated, modified, and accessed times down to the second, along with some more information.
This script lists the names of the files (in a tar archive) that contain a pattern.
tar tf test.tar | while read -r FILE
do
if tar xf test.tar "$FILE" -O | grep "pattern" ;then
echo "found pattern in : $FILE"
fi
done
My question is:
Where is this feature documented, where $FILE is one of the files in the archive:
tar xf test.tar $FILE
This is usually documented in man pages, try running this command:
man tar
Unfortunately, Linux does not have the best set of man pages. There is an online copy of the tar man page from this OS: http://linux.die.net/man/1/tar and it is terrible. But it links to the info tar command, which uses the "info" documentation system widely used in the GNU world (many programs in Linux user space are from GNU projects, for example gcc). Here is a direct link to the section of the online info tar manual about extracting specific files: http://www.gnu.org/software/tar/manual/html_node/extracting-files.html#SEC27
I can also recommend the documentation from BSD (e.g. FreeBSD) or opengroup.org. The utilities can differ in detail but behave the same in general.
For example, there is a rather old but good man page from opengroup (XCU means 'Commands and Utilities' of the Single UNIX Specification, Version 2, 1997):
http://pubs.opengroup.org/onlinepubs/7908799/xcu/tar.html
tar key [file...]
The following operands are supported:
key --
The key operand consists of a function letter followed immediately by zero or more modifying letters. The function letter is one of the following:
x --
Extract the named file or files from the archive. If a named file matches a directory whose contents had been written onto the archive, this directory is (recursively) extracted. If a named file in the archive does not exist on the system, the file is created with the same mode as the one in the archive, except that the set-user-ID and set-group-ID modes are not set unless the user has appropriate privileges. If the files exist, their modes are not changed except as described above. The owner, group, and modification time are restored (if possible). If no file operand is given, the entire content of the archive is extracted. Note that if several files with the same name are in the archive, the last one overwrites all earlier ones.
And to fully understand command tar xf test.tar $FILE you should also read about f option:
f --
Use the first file operand (or the second, if b has already been specified) as the name of the archive instead of the system-dependent default.
So, test.tar in your command is used by the f key as the archive name; then x uses the second operand ($FILE) as the name of the file or directory to extract from the archive.
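As a hedged illustration of that documented form (docs/readme.txt is just a placeholder member name):
tar xf test.tar docs/readme.txt       # extract only this member, recreating its path
tar xf test.tar docs/readme.txt -O    # write the member's contents to stdout instead of a file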