I've just recently switched to Linux and I want to change a load of files to have different extensions. For example, I want to change .doc/.docx to .txt and images to .jpg, and so on. Is there a shell script that would cover any extension, or would I have to write a new one for each filetype?
I have this so far, but I'm not sure if it will actually work. Any help is much appreciated!
#!/bin/bash
# usage: pass the old extension as $1 and the new extension as $2
for f in *."$1"
do
    # only rename regular files; strip the old extension and append the new one
    [ -f "$f" ] && mv -v "$f" "${f%$1}$2"
done
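In case it matters, this is how I was planning to invoke it (changext.sh is just whatever I end up calling the script):
./changext.sh doc txt    # rename every *.doc in the current directory to *.txt
./changext.sh png jpg    # note: this only renames; it does not convert the image data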
No need to reinvent the wheel:
http://linux.die.net/man/1/rename
rename .doc .txt *.doc
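One caveat, depending on your distribution: there are two common rename tools on Linux. The syntax above is for the util-linux rename; the Perl-based rename shipped as the default on Debian/Ubuntu expects a substitution expression instead, roughly:
rename 's/\.doc$/.txt/' *.doc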
You need proper programs to convert the file formats (a rough example chain follows below):
Use wvWare to convert doc to html
Use ImageMagick to convert png to jpg
Use html2text to convert html to txt
That would do the rename; keep in mind that renaming a Word document won't cause it to become text, though.
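If you actually want conversion rather than renaming, a rough chain using the tools above might look like this (it assumes wvHtml, html2text and ImageMagick are installed; the filenames are only examples):
wvHtml report.doc report.html        # doc -> html
html2text report.html > report.txt   # html -> plain text
convert picture.png picture.jpg      # png -> jpg with ImageMagick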
I am trying to rename all files in a directory (recursively), appending a specific metadata field to the end of each PNG file name.
The metadata field name is "aesthetic_score", with a value range from 1.0 to 9.0.
When I type:
exiftool -Aesthetic_score -G1 -s testn.png
the result is:
[PNG] Aesthetic_score : 7.0
This is the value I would like to append to the PNG file names, recursively within a directory.
Note: I would like to swap out the word "aesthetic" for the word "chad" in the appended name, and not all files will have this data field:
input file:
filename001.png (metadata aesthetic_score:7.0)
output:
filename001-chad-score-70.png
I tried to use Digikam and JExifToolGui-2.01, without success.
I am trying to perform this task in the cmd line, although other solutions are welcome. Thank you for your help.
So, this might work for you; I can't really test it. Note that you would need to get rid of the echo before the mv for it to actually do something (rename, rather than just show what it would do).
while read name
do
newname=$(exiftool -G1 -s "$name"|awk '$2~/FileName/{name=$4}; $2~/Aesthetic_score/{basename=gensub(/^(.+)\....$/,"\\1","1",name);ext=gensub(/^.*\.(...)$/,"\\1","1",name);gsub(/\./,"",$4);print basename"."$4"."ext}')
echo mv "$name" "$newname"
done <<< "$(find . -iname '*.png')"
Basically, the find at the very end finds all the PNG files.
The while loop takes every name find throws at it, passes each file through exiftool (using your specs), and parses the output with awk, which prints the new name; that gets captured in the shell variable newname.
And finally the mv (without the echo) renames the files.
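If the awk parsing feels opaque, here is another rough sketch along the same lines, equally untested: ask exiftool for just the tag value (-s3 prints the value only) and build the new name in the shell. It skips files that don't carry the field, and again the echo guards the mv:
find . -iname '*.png' -print0 | while IFS= read -r -d '' name; do
    score=$(exiftool -s3 -Aesthetic_score "$name")
    [ -z "$score" ] && continue                                  # no Aesthetic_score tag: leave this file alone
    echo mv "$name" "${name%.png}-chad-score-${score/./}.png"    # 7.0 becomes 70; remove the echo to really rename
done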
I need to convert all the images in a folder from svg to png. Let's say that the images are called test1.svg, test2.svg, ... , testn.svg. Using the following script:
for i in *.svg
do
convert "$i" PNG24:"$i".png
done
does the job correctly, and the images are called test1.svg.png, test2.svg.png, ... , testn.svg.png. My questions are:
1) Is it possible to make the output images be called test1.png, test2.png, ... , testn.png, essentially removing the 'svg' part from the name?
2) Is it possible to send them directly into some other directory?
Thanks!
Yes. You can make another directory and send them there like this:
mkdir other
for i in *.svg; do
    convert "$i" PNG24:other/"${i%svg}png"
done
If you have lots of images to do, and you are on macOS or Linux, I would recommend GNU Parallel to get the job done faster:
mkdir other
parallel convert {} PNG24:other/{.}.png ::: *.svg
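If you want to see what Parallel would run before letting it loose, --dry-run prints the commands without executing them:
parallel --dry-run convert {} PNG24:other/{.}.png ::: *.svg
The {.} placeholder is the input filename with its extension removed, which is what gives you test1.png instead of test1.svg.png.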
I came across a problem in Bash when trying to open only images based upon the information stored in .txt files about them. I am trying to sort a number of images by size or height and display them in the sorted order, but if there is a .jpg in the folder without a .txt file of the same name, the script should not process it.
I have the sorting piece done, and am trying to figure out how to open only the images that have a .jpg extension WITH a matching .txt file.
I figured a solution would look like putting every .jpg's name (without extension) in a list, then iterating through the list and running something like:
if [ -f "$filename.txt" ]; then ~~~
but I came across the problem of iterating through without a for-loop, or else all the pictures would open multiple times. My attempt was:
for i in *.jpg; do
y=$y ${i%.jpg}
done
if [ -f "$y".txt ]; then
(sorting parts)
This only looked at the last filename in y, as it should, but I am trying to figure out a way to look at each separate filename and see whether the corresponding text file exists, in order to include it in the sorting.
Thanks so much for your help!
Collecting a list of file names in a single variable is an antipattern. You want to collect them in an array instead.
a=()
for f in *.jpg; do
    if [ ! -e "${f%.jpg}".txt ]; then
        continue    # no matching .txt, so skip this image
    fi
    a+=("$f")
done
# now do things with "${a[@]}"
Frequently, you don't really need to collect the files in an array -- just do everything you were doing inside the for loop to each individual file as you traverse the files.
(And actually y=$y ${i%.jpg} doesn't append to y -- it sets y to itself for the duration of attempting to execute a file whose name is $i without the .jpg extension, which would fail in the vast majority of cases.)
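For completeness, the no-array variant suggested above might look like this (the actual sorting/opening logic is left as a placeholder since it isn't shown in the question):
for f in *.jpg; do
    [ -e "${f%.jpg}.txt" ] || continue   # skip any .jpg without a matching .txt
    # ... sort/open "$f" here ...
done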
I would do the file check first, so that find only reports files that have a corresponding text file. The following snippet displays just the .jpg files that have a matching .txt file:
find . -maxdepth 1 -name "*.jpg" -exec /bin/bash -c '[ -e "${0%.*}.txt" ] && echo "$0";' {} \;
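If you then need those names collected for the sorting step, one way (bash 4+, and assuming no newlines in the filenames) is to read the output of that find into an array:
mapfile -t matched < <(find . -maxdepth 1 -name "*.jpg" -exec /bin/bash -c '[ -e "${0%.*}.txt" ] && echo "$0";' {} \;)
# "${matched[@]}" now holds only the .jpg files that have a .txt partner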
I want to extract some files, e.g. test.zip, to /path/to/folder. Using Archive::Extract and specifying a "to" in extract, I can extract it to /path/to/folder, but it extracts to /path/to/folder/test. The same goes for using the system unzip/gunzip.
I don't want to unzip -j, I want to keep the subdirectories.
Is there a way to do this that does not involve browsing to /path/to/folder/test and cp -rf * ../? Either by system command or in perl...
Thanks for reading. :)
You might prefer Archive::Zip:
Archive::Zip->new( 'test.zip' )->extractTree( '', '/path/to/folder' );
I've written a program to process a bunch of PNG files that are generated by a separate process. The capture mostly works; however, there are times when the process dies and is restarted, which leaves a corrupted image. I have no way to detect when the process dies or which file it dies on (there are ~3000 PNG files).
Is there a good way to check for a corrupted png file?
I know this is a question from 2010, but I think this is a better solution: pngcheck.
Since you're on a Linux system you probably already have Python installed.
An easy way would be to try loading and verifying the files with PIL (Python Imaging Library) (you'd need to install that first).
from PIL import Image
v_image = Image.open(file)   # 'file' is the path to the PNG being checked
v_image.verify()             # raises an exception if the file is broken
(taken verbatim from my own answer in this thread)
A different possible solution would be to slightly change how your processor processes the files: Have it always create a file named temp.png (for example), and then rename it to the "correct" name once it's done. That way, you know if there is a file named temp.png around, then the process got interrupted, whereas if there is no such file, then everything is good.
(A variant naming scheme would be to do what Firefox's downloader does -- append .partial to the real filename to get the temporary name.)
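As a rough illustration of that pattern (capture_frame is purely hypothetical here, standing in for whatever your generating process does):
capture_frame > frame_0001.png.partial && mv frame_0001.png.partial frame_0001.png   # capture_frame is a made-up placeholder
Since mv within the same filesystem is atomic, readers only ever see complete .png files; anything still ending in .partial was interrupted.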
Kind of a hack, but it works.
If you are running on Linux or something like it, you might have the "convert" command:
$ convert --help
Version: ImageMagick 5.5.6 04/01/03 Q16 http://www.imagemagick.org
Copyright: Copyright (C) 2003 ImageMagick Studio LLC
Usage: convert [options ...] file [ [options ...] file ...] [options ...] file
If you make an invalid png, and then try to convert, you'll get an error:
$ date> foo.png
$ convert foo.png foo.gif
convert: NotAPNGImageFile (foo.png).
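A rough batch version of the same trick (assumes ImageMagick is installed; the decoded data is written to /dev/null just to force a full read of each file):
for f in *.png; do
    convert "$f" png:/dev/null 2>/dev/null || echo "possibly corrupted: $f"
done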
Find all non-PNG files:
find . -type f -print0 | xargs -0 file --mime | grep -vF image/png
Find all corrupted PNG files:
find . -type f -print0 | xargs -0 -P0 sh -c 'magick identify +ping "$@" > /dev/null' sh
The file command only checks the magic number. Having the PNG magic number doesn't mean the file is a well-formed PNG.
magick identify is a tool from ImageMagick. By default it only checks the headers of the file, for better performance. Here we use +ping to disable that behaviour and make identify read the whole file.
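An equivalent, easier-to-read one-file-at-a-time version (GNU find and ImageMagick 7 assumed) that prints only the names of files identify cannot fully read:
find . -type f -name '*.png' -exec sh -c 'magick identify +ping "$1" > /dev/null 2>&1 || echo "corrupt: $1"' sh {} \;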