imagemagick RAW file identify/convert - no such file or directory (tmp file) - linux

I'm trying to use ImageMagick 7 (CLI) on Ubuntu to identify and convert RAW images to PNG format. I am using the dcraw binary as the delegate for identifying and converting RAW images.
I updated the dng:decode delegate in delegates.xml as follows:
<delegate decode="dng:decode" command=""dcraw" "%i"" />
When I run this command: magick identify test.dng, I get the following error:
identify: unable to open image '/tmp/magick-24332a6nW8lcwejNJ.ppm': No such file or directory # error/blob.c/OpenBlob/3489.
The same error is given for magick convert. I noticed that ImageMagick generates a temporary intermediate file in my /tmp directory whose name differs from the name it expects to find there. For example, it generates this file - magick-24332P6aVDePOFeCn.ppm - but expects the file it generated to have this name - magick-24332a6nW8lcwejNJ.ppm - which is why it's not finding it.
Note: I tested the same thing on OS X and it works perfectly fine (with the same delegates.xml configuration).
Is this a bug in imagemagick's implementation for unix systems or am I doing something wrong? Any thoughts would be greatly appreciated!

You were close; the right command to use in the delegate is:
<delegate decode="dng:decode" command=""dcraw" -c "%i" > "%u.ppm"" />

Almost! You need to use the %o placeholder to tell the delegate manager where the output file will be written. However, the dcraw utility doesn't have an output-destination option -- at least from what I can tell from the man page. It does have a stdout option (-c), so you should be able to pipe/redirect the stream to the output location.
dcraw -c %i > %o
or, in the delegate XML:
<delegate decode="dng:decode" command=""dcraw" -c "%i" > "%o"" />
Update
Some tricks I've observed with custom delegates:
Use full paths to binaries. This is usually needed when a tool is installed outside the system's PATH, typically under /usr/local/bin or /opt.
command="/usr/local/bin/dcraw ...
Use mv over pipes. If you're not comfortable debugging unix streams and such, just use a basic copy/move command. We can rewrite the above command as something like...
dcraw %i; mv %i.ppm %o
Ignore escaping quotes until you know it's working. Quoting arguments is important and keeps everything safe, but \"%i and &quot;%i are hard to read.
<delegate decode="dng:decode" command="dcraw %i; mv %i.ppm %o" />
... verify things are called correctly ... then probably escape paths.
<delegate decode="dng:decode" command="dcraw "%i"; mv "%i".ppm "%o"" />
As pointed out previously, use identify -list Delegate to verify that the command is loaded correctly, and -verbose to verify that it was called correctly during runtime.
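For example, a quick sketch (the grep pattern and the sample filename are assumptions):
magick identify -list delegate | grep -i dng
magick identify -verbose test.dng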

For anyone else who experiences this problem, my solution ended up being to switch from ImageMagick to GraphicsMagick, which is configured to use dcraw by default (of course, you need to have dcraw installed and on your PATH).
http://www.graphicsmagick.org/
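For reference, a minimal sketch of the GraphicsMagick equivalents, assuming dcraw is installed and on PATH as noted above:
gm identify test.dng
gm convert test.dng test.png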

Related

add a permitted path to ghostscript's running configuration

I use a program that creates a PostScript file, then ps2pdf to turn it into a readable PDF. I've written a program that adds some strings to overlay the company's new logo. (The first program can't import image files itself.)
I add the strings before the second-to-last line of the file (" showpage").
While running my program to add the logo, there is no error.
With the option -dNOSAFER everything is fine, but by default it's set to -dSAFER, and an invalidfileaccess error pops up. The files in question are 6 JPG images alone in their directory.
I don't want to run it with the -dNOSAFER option on, as that fully opens up the file system.
In the documentation I've seen that there is a "permitted path" setting, but I can't find anywhere to set it up. Is it just a command-line option to pass in the command launching the program, or is there a config file for Ghostscript / ps2pdf where I can add the path to this directory as a permitted path?
In this documentation:
http://www.ghostscript.com/doc/current/Use.htm
I only find
-dTTYPAUSE
Causes Ghostscript to read a character from /dev/tty, rather than
standard input, at the end of each page. This may be useful if input
is coming from a pipe. Note that -dTTYPAUSE overrides -dNOPAUSE. Also
note that -dTTYPAUSE requires opening the terminal device directly,
and may cause problems in combination with -dSAFER. Permission errors
can be avoided by adding the device to the permitted reading list
before invoking safer mode
gs -dTTYPAUSE -dDELAYSAFER -c '<< /PermitFileReading [ (/dev/tty)] >> setuserparams .locksafe' -dSAFER
The quote is just for context, but is this the way to set the permitted path?
Since gs automatically launches with the full system as read-only, would there be no difference? There is no other search result for PermitFile on this page.
Try adding the required path to the search path with -I (Include). See Use.htm, section 8, "How Ghostscript finds files". This should only be a problem if you are using 'run' or similar to read files from another location.
The section on TTYPAUSE is not relevant.
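A sketch of what that might look like, assuming the logo JPEGs live under /path/to/logos (a hypothetical directory); newer Ghostscript releases should also treat -I directories as readable under -dSAFER, but verify against your version:
gs -dSAFER -I/path/to/logos -sDEVICE=pdfwrite -o output.pdf input.ps
ps2pdf -I/path/to/logos input.ps output.pdf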

Need more clarity on file command usage in linux?

I have built a Linux image for ARM on Ubuntu. I was curious to use the file command on the image files created in the arch/arm/boot directory. When I execute the following commands:
balaji@balaji-virtual-machine:~/meraj/linux-stable/arch/arm/boot$ ls
bootp compressed dts Image install.sh Makefile zImage
balaji@balaji-virtual-machine:~/meraj/linux-stable/arch/arm/boot$ file Image
Image: data
balaji@balaji-virtual-machine:~/meraj/linux-stable/arch/arm/boot$ file zImage
zImage: data
balaji@balaji-virtual-machine:~/meraj/linux-stable/arch/arm/boot$
It doesn't give much information. I would like to know whether this is expected behaviour or not.
From the file manpage:
The type printed will usually contain one of the words...
... "data" meaning anything else (data is usually 'binary' or non-printable).
Exceptions are well-known file formats (core files, tar archives) that
are known to contain binary data.
Also...
Any file that cannot be identified as having been written in any of
the character sets listed above is simply said to be 'data'.
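For comparison, a quick sketch (paths assume you are at the top of the kernel build tree): the uncompressed ELF kernel is identified in detail, while raw boot images fall back to "data" with older file magic databases (newer ones do recognize an ARM zImage):
file vmlinux                  # e.g. "ELF 32-bit LSB executable, ARM, ..."
file arch/arm/boot/zImage     # "data" with an older magic database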

zip command not working

I am trying to zip a file using a shell script. I am using the following command:
zip ./test/step1.zip $FILES
where $FILES contains all the input files. But I am getting a warning as follows:
zip warning: name not matched: myfile.dat
One more thing I observed: the file that comes last in the list of files in the folder gets the above warning, and that file is not getting zipped.
Can anyone explain why this is happening? I am new to the shell scripting world.
zip warning: name not matched: myfile.dat
This means the file myfile.dat does not exist.
You will get the same error if the file is a symlink pointing to a non-existent file.
As you say, whatever file is last in $FILES will not be added to the zip, and will produce the warning. So I think something's wrong with the way you create $FILES. Chances are there is a newline, carriage return, space, tab, or other invisible character at the end of the last filename, resulting in a name that doesn't exist. Try this, for example:
for f in $FILES; do echo :$f:; done
I bet the last line will be incorrect, for example:
:myfile.dat :
...or something like that instead of :myfile.dat: with no characters before the last :
UPDATE
If you say the script started working after running dos2unix on it, that confirms what everybody suspected already, that somehow there was a carriage-return at the end of your $FILES list.
od -c shows the \r carriage return. Try echo "$FILES" | od -c
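If editing the script with dos2unix isn't convenient, a small sketch that strips carriage returns from the variable itself before calling zip:
FILES=$(printf '%s' "$FILES" | tr -d '\r')
zip ./test/step1.zip $FILES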
Another possible cause of the zip warning: name not matched: error is having any of zip's environment variables set incorrectly.
From the man page:
ENVIRONMENT
The following environment variables are read and used by zip as described.
ZIPOPT
contains default options that will be used when running zip. The contents of this environment variable will get added to the command line just after the zip command.
ZIP
[Not on RISC OS and VMS] see ZIPOPT
Zip$Options
[RISC OS] see ZIPOPT
Zip$Exts
[RISC OS] contains extensions separated by a : that will cause native filenames with one of the specified extensions to be added to the zip file with basename and extension swapped.
ZIP_OPTS
[VMS] see ZIPOPT
In my case, I was using zip in a script and had the binary location in an environment variable ZIP so that we could change to a different zip binary easily without making tonnes of changes in the script.
Example:
ZIP=/usr/bin/zip
...
${ZIP} -r folder.zip folder
This is then processed as:
/usr/bin/zip /usr/bin/zip -r folder.zip folder
And generates the errors:
zip warning: name not matched: folder.zip
zip I/O error: Operation not permitted
zip error: Could not create output file (/usr/bin/zip.zip)
The first because it's now trying to add folder.zip to the archive instead of using it as the archive. The second and third because it's trying to use the file /usr/bin/zip.zip as the archive which is (fortunately) not writable by a normal user.
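A quick sketch to check for (and clear) such variables before running the script:
env | grep -E '^(ZIP|ZIPOPT)='   # shows any zip-related variables that are set
unset ZIP ZIPOPT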
Note: This is a really old question, but I didn't find this answer anywhere, so I'm posting it to help future searchers (my future self included).
eebbesen hit the nail on the head in his comment for my case (but I cannot vote on a comment).
Another possible reason, missed in the other comments, is a file exceeding the file size limit (4 GB).
I converted my script for the unix environment using the dos2unix command and executed my script as ./myscript.sh instead of bash myscript.sh.
I just discovered another potential cause for this. If the permissions of the directory/subdirectory don't allow zip to find the file, it will report this error. In fact, if you run chmod -R 444 on the directory and then try to zip it, you will reproduce this error and also get a "stored 0%" report, like this:
zip warning: name not matched: borrar/enviar
adding: borrar/ (stored 0%)
Hence, try changing the permissions of the file. If you are trying to send files through email, and those email services (like Gmail) apply silly filters that block executables, don't forget that making permissions very strict before zip compression can be the cause of the "name not matched" error you are reporting.
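A sketch of the fix, using the borrar directory from the output above (u+rX restores read access plus directory traversal):
chmod -R u+rX borrar
zip -r borrar.zip borrar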
Spaces are not allowed:
it will fail if there is more than one file in $FILES unless you put them in a loop
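A minimal sketch of that loop, assuming the names in $FILES contain no embedded spaces:
for f in $FILES; do
    zip ./test/step1.zip "$f" || echo "skipped: $f"
done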
I also encountered this issue. In my case, the line separator was CRLF in my zip shell script, which caused the problem. Switching to LF fixed it.

"Unable to open image" error when using ImageMagick's Filename References

I'm using ImageMagick to do some image processing from the commandline, and would like to operate on a list of files as specified in foo.txt. From the instructions here: http://www.imagemagick.org/script/command-line-processing.php I see that I can use Filename References from a file prefixed with #. When I run something like:
montage #foo.txt output.jpg
everything works as expected, as long as foo.txt is in the current directory. However, when I try to access bar.txt in a different directory by running:
montage /some_directory/#bar.txt output2.jpg
I get:
montage: unable to open image /some_directory/#bar.txt: No such file or directory # blob.c/OpenBlob/2480.
I believe the issue is my syntax, but I'm not sure what to change it to. Any help would be appreciated.
Quite an old entry but it seems relatively obvious that you need to put the # before the full path:
montage #/some_directory/bar.txt output2.jpg
As of ImageMagick 6.5.4-7 2014-02-10, paths are not supported with # syntax. The # file must be in the current directory and identified by name only.
I haven't tried directing IM to pull the list of files from a file, but I do specify multiple files on the command line like this:
gm -sOutputFile=dest.ext -f file1.ppm file2.ppm file3.ppm
Can you pull the contents of that file into a variable, and then let the shell expand that variable?
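A sketch of that approach, assuming bar.txt lists one filename per line and none of the names contain spaces:
FILES=$(cat /some_directory/bar.txt)
montage $FILES output2.jpg    # deliberately unquoted so the shell splits the list into arguments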

An efficient way to detect corrupted png files?

I've written a program to process a bunch of PNG files that are generated by a separate process. The capture mostly works; however, there are times when the process dies and is restarted, which leaves a corrupted image. I have no way to detect when the process dies or which file it died on (there are ~3000 PNG files).
Is there a good way to check for a corrupted png file?
I know this is a question from 2010, but I think this is a better solution: pngcheck.
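A sketch of typical usage, assuming pngcheck is installed (for example via the distribution's package manager); -q prints only files with errors:
pngcheck -q *.png
find . -name '*.png' -print0 | xargs -0 pngcheck -q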
Since you're on a Linux system you probably already have Python installed.
An easy way would be to try loading and verifying the files with PIL (Python Imaging Library) (you'd need to install that first).
from PIL import Image
v_image = Image.open(file)   # 'file' holds the path of the PNG to check
v_image.verify()             # raises an exception if the image data is broken
(taken verbatim from my own answer in this thread)
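To scan a whole directory of captures with this approach, a shell sketch (assumes python3 and Pillow are installed; the glob pattern is an assumption):
for f in *.png; do
    python3 -c 'import sys; from PIL import Image; Image.open(sys.argv[1]).verify()' "$f" \
        2>/dev/null || echo "corrupt: $f"
done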
A different possible solution would be to slightly change how your processor processes the files: Have it always create a file named temp.png (for example), and then rename it to the "correct" name once it's done. That way, you know if there is a file named temp.png around, then the process got interrupted, whereas if there is no such file, then everything is good.
(A variant naming scheme would be to do what Firefox's downloader does -- append .partial to the real filename to get the temporary name.)
Kind of a hack, but works
If you are running on Linux or something like it, you might have the "convert" command:
$ convert --help
Version: ImageMagick 5.5.6 04/01/03 Q16 http://www.imagemagick.org
Copyright: Copyright (C) 2003 ImageMagick Studio LLC
Usage: convert [options ...] file [ [options ...] file ...] [options ...] file
If you make an invalid png, and then try to convert, you'll get an error:
$ date> foo.png
$ convert foo.png foo.gif
convert: NotAPNGImageFile (foo.png).
Find all non-PNG files:
find . -type f -print0 | xargs -0 file --mime | grep -vF image/png
Find all corrupted PNG files:
find . -type f -print0 | xargs -0 -P0 sh -c 'magick identify +ping "$@" > /dev/null' sh
The file command only checks the magic number. Having the PNG magic number doesn't mean it is a well-formed PNG file.
magick identify is a tool from ImageMagick. By default, it only checks the header of the file for better performance. Here we use +ping to disable that shortcut and make identify read the whole file.
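A single-file sketch of the same idea (suspect.png is a placeholder name); -regard-warnings promotes warnings to errors so the exit status flags damaged files:
magick identify +ping -regard-warnings suspect.png > /dev/null 2>&1 || echo "suspect.png looks corrupted"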
