Convert WEBP images to PNG by Linux command [closed]

I have many WebP-format images in a folder, but they have a .jpg extension, like
abc-test.jpg
It's a WebP image. I want to convert it to .png format with the same name. For that I have used this command and it worked:
find . -name "*.jpg" -exec dwebp {} -o {}.png \;
It converted all the WebP images to .png, but the issue is that it saves the images like this:
abc-test.jpg.png
But my requirement is to save them without the .jpg extension, like
abc-test.png
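
As an aside, a quick way to confirm that a mis-named file really is WebP is the file utility, which inspects the content rather than the extension. A minimal check might be:

# `file` looks at the contents and should report WebP image data despite the .jpg name
file abc-test.jpg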

If you have many to convert/rename, I would recommend GNU Parallel: not only will the images be converted faster by doing them in parallel, but you can also take advantage of its ability to modify filenames.
The command you want is:
parallel dwebp {} -o {.}.png ::: *.jpg
where the {.} means "the filename without the original extension".
If you want to recurse into subdirectories too, you can use:
find . -name "*.jpg" -print0 | parallel -0 dwebp {} -o {.}.png
If you want a progress meter, or an "estimated time of arrival", you can add --progress or --eta after the parallel command.
If you want to see what GNU Parallel would run, without actually running anything, add --dry-run.
I commend GNU Parallel to you in this age where CPUs are getting "fatter" (more cores) rather than faster.
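
Putting those options together, a dry run over a whole directory tree (a sketch combining the commands above) might look like:

find . -name "*.jpg" -print0 | parallel -0 --dry-run dwebp {} -o {.}.png

Drop --dry-run (and optionally add --eta) to actually run the conversions.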

How to convert .webp images to .png on Linux
Tested on Linux Ubuntu 20.04
This question is the top hit for the Google search of "linux convert .webp image to png". Therefore, for anyone stumbling here and just wanting that simple answer, here it is:
# 1. Install the `webp` tool
sudo apt update
sudo apt install webp
# 2. Use it: convert in.webp to out.png
dwebp in.webp -o out.png
Done! You now have out.png.
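
If you have a whole folder of .webp files to convert, a minimal batch sketch (assuming the files actually end in .webp) is:

# convert every .webp in the current directory, keeping the base name
for f in *.webp; do dwebp "$f" -o "${f%.webp}.png"; done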
References
I learned about dwebp from the question itself

I did it with a short one-liner that does not require parallel to be installed on the system:
for x in *.jpg; do dwebp "$x" -o "${x%.*}.png"; done
This works for the current directory.
To amend @Mark Setchell's recursive solution, it would look like this:
for x in $(find . -name "*.jpg"); do dwebp "$x" -o "${x%.*}.png"; done
The ${x%.*} part is the one requiring a word of explanation here: it tells bash to strip the final dot and everything after it from the x variable.
Because % removes the shortest matching suffix, only the last extension is stripped, so names containing extra dots are handled correctly. The loop does, however, break on filenames containing whitespace; see the sketch below for a variant that avoids that.
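
A whitespace-safe sketch lets find spawn a small shell for each file, so no word-splitting happens:

find . -name "*.jpg" -exec sh -c 'dwebp "$1" -o "${1%.*}.png"' _ {} \;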

If the problem is with Linux image viewers (that being the reason for the conversion in the first place), then I found this:
"Add WebP support to GNOME Image Viewer in Ubuntu and Other Linux
By default, the photo viewer does not support WebP images files. However, you can add WebP support by installing webp-pixbuf-loader library. Not only it allows you to open WebP files in GNOME Image Viewer, it also displays thumbnails for WebP files in the file explorer.
On Ubuntu-based Linux distributions, you can install this library using a PPA. Use the following commands one by one:"
sudo add-apt-repository ppa:krifa75/eog-ordissimo
sudo apt update
sudo apt install webp-pixbuf-loader

A good thing to do is to use sed along with mv for the renaming part: it matches the old extension and replaces it with the new one. (Note that this only renames the files; run dwebp first if you still need to convert them.)
for file in *.jpg
do
mv "$file" "$(echo "$file" | sed 's/\.jpg$/.png/')"
done
If you want to retain the old files, use cp instead of mv.

Related

split directory with 10000 files into 2 directories [closed]

I have a directory /logos which contains approximately 10000 PNG images. Can you please suggest a script to make two new folders, /logos-1 and /logos-2, each with half of the images from the initial folder?
Thank you in advance <3
One approach could be to iterate over the files in the folder, keep a counter, and move each file into one of the two directories depending on the counter:
counter=0
mkdir -p logos-1
mkdir -p logos-2
for file in logos/*
do
[ -e "$file" ] || continue
# alternate between logos-1 and logos-2
echo mv "$file" "logos-$((counter++ % 2 + 1))/"
done
Remove the echo if the mv commands look appropriate.
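
An alternative sketch, if you prefer to move the first half of the sorted listing into one folder and the rest into the other, uses a bash array (assumes the folder is non-empty and filenames contain no newlines):

mkdir -p logos-1 logos-2
files=(logos/*)                       # sorted glob expansion
half=$(( (${#files[@]} + 1) / 2 ))    # round up for an odd count
mv "${files[@]:0:half}" logos-1/
mv "${files[@]:half}" logos-2/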
You can use rename, a.k.a. Perl rename and prename for that. I assume you don't really want the leading slashes and you aren't really working in the root directory - put them back if you are.
rename --dry-run -p -N 01 '$_ = join "", "logos-", $N++%2+1, "/$_"' *.png
Sample Output
'1.png' would be renamed to 'logos-2/1.png'
'10.png' would be renamed to 'logos-1/10.png'
'2.png' would be renamed to 'logos-2/2.png'
'3.png' would be renamed to 'logos-1/3.png'
'4.png' would be renamed to 'logos-2/4.png'
'5.png' would be renamed to 'logos-1/5.png'
'6.png' would be renamed to 'logos-2/6.png'
'7.png' would be renamed to 'logos-1/7.png'
'8.png' would be renamed to 'logos-2/8.png'
'9.png' would be renamed to 'logos-1/9.png'
You can remove the --dry-run if the output looks good. The -p means it will create any necessary directories/paths for you. If you aren't familiar with Perl that means:
"Set N=1. For each PNG file, make the new name (which we must store in special variable $_) equal to the result of joining the word logos- with a number alternating between 1 and 2, with a slash followed by whatever it was before ($_)."
You may find this alternative way of writing it easier:
rename --dry-run -N 01 '$_ = sprintf("logos-%d/$_", $N%2+1)' *.png
Using this tool confers several benefits:
you can do dry runs
you can calculate any replacement you like
you don't need to create directories
it will not clobber files if multiple inputs rename to the same output
On macOS, use homebrew and install with:
brew install rename

How to get folders and subfolders with a specific name from FTP [duplicate]

I'm trying to ftp a folder using the command line ftp client, but so far I've only been able to use 'get' to get individual files.
You could rely on wget which usually handles ftp get properly (at least in my own experience). For example:
wget -r ftp://user:pass@server.com/
You can also use -m which is suitable for mirroring. It is currently equivalent to -r -N -l inf.
If you have special characters in the credentials, you can pass the --user and --password arguments instead of embedding them in the URL. Example with a login containing special characters:
wget -r --user='user@login' --password='Pa$$wo|^D' ftp://server.com/
As pointed out by @asmaier, watch out: even though -r is for recursion, it has a default maximum depth of 5:
-r
--recursive
Turn on recursive retrieving.
-l depth
--level=depth
Specify recursion maximum depth level depth. The default maximum depth is 5.
If you don't want to miss out subdirs, better use the mirroring option, -m:
-m
--mirror
Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite
recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf
--no-remove-listing.
Just to complement the answer given by Thibaut Barrère.
I used
wget -r -nH --cut-dirs=5 -nc ftp://user:pass@server//absolute/path/to/directory
Note the double slash after the server name. If you don't put the extra slash, the path is relative to the home directory of the user.
-nH avoids the creation of a directory named after the server name
-nc avoids creating a new file if it already exists on the destination (it is just skipped)
--cut-dirs=5 takes the content of /absolute/path/to/directory and puts it in the directory where you launch wget. The number 5 strips the 5 leading components of the path; the double slash counts as an extra (empty) component.
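
As an illustration (a sketch with made-up paths), this is roughly where a retrieved file would land under the different options:

# remote file:  ftp://server//absolute/path/to/directory/file.txt
# plain -r:                server/absolute/path/to/directory/file.txt
# with -nH:                absolute/path/to/directory/file.txt
# with -nH --cut-dirs=5:   file.txt   (the empty "//" component plus 4 named directories = 5 cut)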
ncftp -u <user> -p <pass> <server>
ncftp> mget directory
If lftp is installed on your machine, use mirror dir. And you are done. See the comment by Ciro below if you want to recursively download a directory.
If you can use scp instead of ftp, the -r option will do this for you. I would check to see whether you can use a more modern file transfer mechanism than FTP.
Use WGet instead. It supports HTTP and FTP protocols.
wget -r ftp://mydomain.com/mystuff
Good Luck!
reference: http://linux.about.com/od/commands/l/blcmdl1_wget.htm
There is 'ncftp', which is available for installation on Linux. It works over the FTP protocol and can be used to download files and folders recursively; it has been used and works fine for recursive folder/file transfers.
Check this link... http://www.ncftp.com/
If you can, I strongly suggest you tar and bzip (or gzip, whatever floats your boat) the directory on the remote machine—for a directory of any significant size, the bandwidth savings will probably be worth the time to zip/unzip.
If you want to stick to command line FTP, you should try NcFTP. Then you can use get -R to recursively get a folder. You will also get completion.
wget -r ftp://url
Works perfectly on Red Hat and Ubuntu.
You should not use ftp. Like telnet, it does not use secure protocols, and passwords are transmitted in clear text. This makes it very easy for third parties to capture your username and password.
To copy remote directories, these options are better:
rsync is the best-suited tool if you can log in via ssh, because it copies only the differences and can easily restart in the middle if the connection breaks.
scp -r is the second-best option to recursively copy directory structures.
To fetch files recursively, you can use a script like this:
https://gist.github.com/flibbertigibbet/8165881
See:
rsync man page
ssh man page
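
A sketch of the two alternatives (host and path names are placeholders):

# rsync over ssh: transfers only the differences and can safely be re-run
rsync -avz user@server:/remote/dir/ local-dir/

# scp: simple recursive copy, no delta transfer
scp -r user@server:/remote/dir local-dir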
Toggle interactive prompting with the prompt command, then mget the directory contents.
Usage:
ftp>cd /to/directory
ftp>prompt
ftp>mget *

Having issues with using unzip in linux [closed]

I have two questions regarding unzip.
1. I am trying to extract everything in the zip file below the first parent directory. I am trying this and it's not working:
unzip -d /var/www/html/data/ file-gpl.zip */*
2. In one of my old files someone has used the unzip -W flag, which I could not find in the man page. What does it do? I tried this:
[20:59][user1#core:/var/www/html/build]$ unzip -W -d test file.zip
UnZip 6.00 of 20 April 2009, by Info-ZIP. Maintained by C. Spieler. Send
bug reports using http://www.info-zip.org/zip-bug.html; see README for details.
Usage: unzip [-Z] [-opts[modifiers]] file[.zip] [list] [-x xlist] [-d exdir]
Default action is to extract files in list, except those in xlist, to exdir;
file[.zip] may be a wildcard. -Z => ZipInfo mode ("unzip -Z" for usage).
-p extract files to pipe, no messages -l list files (short format)
-f freshen existing files, create none -t test compressed archive data
-u update files, create if necessary -z display archive comment only
-v list verbosely/show version info -T timestamp archive to latest
-x exclude files that follow (in xlist) -d extract files into exdir
modifiers:
-n never overwrite existing files -q quiet mode (-qq => quieter)
-o overwrite files WITHOUT prompting -a auto-convert any text files
-j junk paths (do not make directories) -aa treat ALL files as text
-U use escapes for all non-ASCII Unicode -UU ignore any Unicode fields
-C match filenames case-insensitively -L make (some) names lowercase
-X restore UID/GID info -V retain VMS version numbers
-K keep setuid/setgid/tacky permissions -M pipe through "more" pager
See "unzip -hh" or unzip.txt for more help. Examples:
unzip data1 -x joe => extract all files except joe from zipfile data1.zip
unzip -p foo | more => send contents of foo.zip via pipe into program more
unzip -fo foo ReadMe => quietly replace existing ReadMe if archive file newer
1. unzip -d ./new/destination example.zip
2. The description of the -W flag from the man page on my box is below:
-W [only when WILD_STOP_AT_DIR compile-time option enabled] modifies the pattern matching routine so that both '?' (single-char wildcard) and '*' (multi-char wildcard) do not match the directory separator character '/'. (The two-character sequence "**" acts as a multi-char wildcard that includes the directory separator in its matched characters.) Examples:
"*.c" matches "foo.c" but not "mydir/foo.c"
"**.c" matches both "foo.c" and "mydir/foo.c"
"*/*.c" matches "bar/foo.c" but not "baz/bar/foo.c"
"??*/*" matches "ab/foo" and "abc/foo" but not "a/foo" or "a/b/foo"
This modified behaviour is equivalent to the pattern matching style used by the shells of some of UnZip's supported target OSs (one example is Acorn RISC OS). This option may not be available on systems where the Zip archive's internal directory separator character '/' is allowed as a regular character in native operating system filenames. (Currently, UnZip uses the same pattern matching rules for both wildcard zipfile specifications and zip entry selection patterns in most ports. For systems allowing '/' as a regular filename character, the -W option would not work as expected on a wildcard zipfile specification.)
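
For the first part of the question (extracting while dropping the leading directory), unzip itself has no strip option; a workaround sketch, assuming bsdtar (from the libarchive package) is available, is:

# bsdtar can read zip archives and drop leading path components
bsdtar -xf file-gpl.zip --strip-components=1 -C /var/www/html/data/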

please explain the linux command: find /tmp -name core -type f -print | xargs /bin/rm -f [closed]

I have been searching all over Google for an explanation of commands such as:
find /tmp -name core -type f -print | xargs /bin/rm -f
I got the command from the net, and an explanation was given there. By now I know that this command finds files named 'core' in the /tmp directory and deletes them. I have used and checked this and it works perfectly.
My problem is that I could not understand the terms used in this command: what do -type f and xargs do?
Also, how do I construct such commands according to my needs (obviously I cannot unless I understand them properly), and the biggest problem is what to type into Google to get help with this, i.e. under what topic should I look for these?
Please help.
Regards.
This is a string of unix commands:
find // name of command (in this case, "find")
arguments to 'find':
/tmp // where to look
-name core // name to look for, "core" (optional argument)
-type f // regular files only (not, e.g., directories or symlinks)
// (optional argument)
-print // output the list to standard output (STDOUT)
| // the pipe operator (not a command itself):
// the shell connects the output of the
// previous process (here, the output from
// 'find') to the input of the next process
xargs // name of command ('xargs') the program
// that will accept the output from 'print'
// as input (directed by 'pipe'). It provides
// a robust way to process indefinitely
// long lists by breaking them into smaller
// lists and passing each sublist through
// to its command argument
/bin/rm // name of command for xargs to execute
// on its input list ('rm' = remove)
-f // argument to rm: force removal; ignore nonexistent files and never prompt
That's how unix works: it's made up of lots of little single-purpose programs with obscure two-letter names, each devoted to a single task. You string these together to achieve more complex tasks.
The correct way to find out what any one command does is to use the 'man' command with the command name in question as argument, eg
man find
man xargs
man rm
You will get pages of detailed info describing each input option and output possibilities. A name like 'xargs' is also easy to google but understandably 'find' is not (maybe try 'unix find'). Many of them have Wikipedia pages...
Perhaps you should get a decent intro guide to unix ...
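
As a side note (a sketch, not part of the original explanation), filenames containing spaces can trip up the plain xargs form; a null-delimited pipeline, or find's own -delete, avoids that:

# null-delimited: safe for filenames containing spaces or newlines
find /tmp -name core -type f -print0 | xargs -0 /bin/rm -f

# or let find delete the matches itself
find /tmp -name core -type f -delete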

Sync without scanning individual files? [closed]

Consider two directories:
/home/user/music/flac
/media/MUSIC/flac
I would like the second directory (destination; a USB drive) to contain the same files and structure as the first directory (master). There are 3600+ files (59G in total). Every file is scanned using unison, which is painfully slow. I would rather it compare based on file name, size, and modification time.
I think rsync might be better but the examples from the man pages are rather cryptic, and Google searches did not reveal any simple, insightful examples. I would rather not accidentally erase files in the master. ;-)
The master list will change over time: directories reorganized, new files added, and existing files updated (e.g., re-tagging). Usually the changes are minor; taking hours to complete a synchronization strikes me as sub-optimal.
What is the exact command to sync the destination directory with the master?
The command should copy new files, reorganize moved files (or delete then copy), and copy changed files (based on date). The destination files should have their timestamp set to the master's timestamp.
You can use rsync this way:
rsync --delete -r -u /home/user/music/flac/* /media/MUSIC/flac
It will delete files in /media/MUSIC/flac (never on master), and update based on file date.
There are more options, but I think this way is sufficient for you. :-)
(I just did simple tests! Please test better!)
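
A variant sketch that also preserves timestamps (which the question asks for) and avoids the shell glob is archive mode with a trailing slash on the source:

# -a preserves modification times, permissions, etc.; the trailing slash means "the contents of flac"
# add -n first to see what would happen without changing anything
rsync -a --delete /home/user/music/flac/ /media/MUSIC/flac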
You can use plain old cp to copy new & changed files (as long as your filesystems have working timestamps):
cp -dpRuv /home/user/music/flac /media/MUSIC/
To delete files from the destination that don't exist at the source, you'll need to use find. Create a script /home/user/bin/remover.sh like so:
#!/bin/bash
# $1 is the filename passed in by find -execdir; $PWD is the directory containing it
CANONNAME="$PWD/$(basename "$1")"
RELPATH=$(echo "$CANONNAME" | sed -e "s#/media/MUSIC/flac/##")
SOURCENAME="/home/user/music/flac/$RELPATH"
# remove the destination file if it no longer exists in the master tree
if [ ! -f "$SOURCENAME" ]; then
echo "Removing $CANONNAME"
rm "$CANONNAME"
fi
Make it executable, then run it from find:
find /media/MUSIC/flac -type f -execdir /home/user/bin/remover.sh "{}" \;
The only thing this won't do is remove directories from the destination that have been removed in the source - if you want that too you'll have to make a third pass, with a similar find/script combination.
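
Since the remover script leaves such directories empty, the third pass can be a sketch as simple as deleting empty directories under the destination (assuming none are meant to stay empty):

find /media/MUSIC/flac -type d -empty -delete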
