I'm looking for ways to determine the quality of a photograph (JPEG). The first thing that came to mind was to compare the file size to the number of pixels stored within. Are there any other ways, for example checking the amount of noise in a JPEG? Does anyone have a good reading link on this topic, or any experience? By the way, the project I'm working on is written in C# (.NET 3.5) and I use the Aurigma Graphics Mill for image processing.
Thanks in advance!
I'm not entirely clear what you mean by "quality". If you mean the quality setting in the JPEG compression algorithm, you may be able to extract it from the EXIF tags of the image (this relies on the capture device putting them in and no one else overwriting them). For your library, see here:
http://www.aurigma.com/Support/DocViewer/30/JPEGFileFormat.htm.aspx
If you mean any other sort of "quality" then you need to come up with a better definition of quality. For example, over-exposure may be a problem in which case hunting for saturated pixels would help determine that specific sort of quality. Or more generally you could look at statistics (mean, standard deviation) of the image histogram in the 3 colour channels. The image may be out of focus, in which case you could look for a cutoff in the spatial frequencies of the image Fourier transform. If you're worried about speckle noise then you could try applying a median filter to the image and comparing back to the original image (more speckle noise would give a larger change) - I'm guessing a bit here.
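As a rough sketch of a couple of those checks (in Python with Pillow and NumPy purely for illustration, even though the question targets C# and Graphics Mill):

```python
# Rough sketch of the saturated-pixel check, histogram statistics, and a crude
# speckle-noise estimate based on a median-filtered copy.
import numpy as np
from PIL import Image, ImageFilter

def quality_stats(path):
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)

    # Fraction of pixels fully saturated in any channel (possible over-exposure).
    saturated = np.mean(np.any(img >= 255, axis=2))

    # Per-channel mean and standard deviation of the image histogram.
    means = img.reshape(-1, 3).mean(axis=0)
    stds = img.reshape(-1, 3).std(axis=0)

    # Crude speckle-noise estimate: average difference from a median-filtered copy.
    gray = Image.open(path).convert("L")
    median = gray.filter(ImageFilter.MedianFilter(size=3))
    noise = np.mean(np.abs(np.asarray(gray, dtype=np.float64) -
                           np.asarray(median, dtype=np.float64)))

    return {"saturated_fraction": saturated, "means": means, "stds": stds, "noise": noise}
```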
If by "quality" you mean aesthetic properties of composition etc then - good luck!
The 'quality' of an image is not directly measurable, because it doesn't correspond to any single value.
If you take it to mean the number of pixels in an image of a given size, that isn't accurate either. You might describe a photograph taken in bad light conditions as being of 'bad quality', even though it has exactly the same number of pixels as another image taken in good light. The term is usually used to talk about the overall effect of an image, rather than its technical specifications.
I wanted to do something similar, but went for the "Soylent Green" option and used people to rank images by performing comparisons. See the question responses here.
I think you're asking about how to determine the quality of the compression process itself. This can be done by converting the JPEG to a BMP and comparing that BMP to the original bitmap from which the JPEG was created. You can iterate through the bitmaps pixel by pixel and calculate a pixel-to-pixel "distance" by summing the differences between the R, G and B values of each pair of pixels (i.e. the pixel in the original and the pixel in the JPEG) and dividing by the total number of pixels. This will give you a measure of the average difference between the original and the JPEG.
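A minimal sketch of that distance measure (Python with Pillow/NumPy for brevity rather than C#):

```python
# Sum the absolute R, G, B differences between the original bitmap and the
# decoded JPEG, then divide by the number of pixels.
import numpy as np
from PIL import Image

def mean_pixel_distance(original_path, jpeg_path):
    a = np.asarray(Image.open(original_path).convert("RGB"), dtype=np.int64)
    b = np.asarray(Image.open(jpeg_path).convert("RGB"), dtype=np.int64)
    if a.shape != b.shape:
        raise ValueError("Images must have the same dimensions")
    return np.abs(a - b).sum() / (a.shape[0] * a.shape[1])
```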
Reading the number of pixels in the image can tell you the "megapixel" size (#pixels / 1,000,000), which can serve as a crude programmatic quality check, but it won't tell you whether the photo is properly focused, assuming it is supposed to be focused (think fast-moving objects, like trains), nor whether there is anything in the picture worth looking at; that will require a human, or a pigeon if you prefer.
I'm reading/watching anything I can about color management/color science and something that's not making sense to me is the scene-referred and display-referred workflows. Isn't everything display-referred, because your monitor is converting everything you see into something it can display?
While reading this article, I came across this image:
So, if I understand this right, to follow a linear workflow I should apply an inverse power function to any imported JPG/PNG/etc. files that contain color data, to make their gamma linear. I then work on the image, and when I'm ready to export, say to sRGB, and save it as a PNG, the export will bake the original transfer function back in.
But even while it's linear and I'm working on it, isn't my monitor converting everything I see to what it can display? Isn't it basically applying its own LUT? Isn't there already a gamma curve that the monitor itself is applying?
Also, from input to output, how many color space conversions take place, say if I'm working in the ACEScg color space? If I import a JPG texture, I linearize it and bring it into the ACEScg color space. I work on it, and when I render it out, the renderer applies a view transform to convert it from ACEScg to sRGB, and then what I'm seeing is my monitor converting that from sRGB to my monitor's own ICC profile, right (which is always happening, since everything I'm seeing goes through my monitor's ICC profile)?
Finally, if I add a tone-mapping s curve, where does that conversion sit on that image?
I'm not sure your question is about programming, and it doesn't have much relevance to the title.
In any case:
Light (photons) behaves linearly: the intensity of two lights combined is the sum of the intensities of each light. For this reason a lot of image manipulation is done in linear space. Note: camera sensors often have a near-linear response.
Eyes respond roughly like a gamma exponent of 2, so gamma encoding is useful for compression (less visible noise with fewer bits of information). By coincidence, CRT phosphors had a similar response (otherwise the engineers would have found some other method; in the past such things were settled by a lot of experiments and user feedback across many settings).
Screens expect images with a standardized gamma correction (nowadays it depends on the port, settings, and image format), and some can cope with many different colour spaces. Note: we no longer have CRTs, so the screen converts data from the expected gamma to the monitor's own gamma (possibly with a different value for each channel): effectively a kind of LUT (it may be done purely electronically, so without an actual table). Screens are set up so that a standard signal produces the expected light. There are standards (test images and measurement methods) for the expected behaviour, so there is some implicit gamma correction applied on top of the already gamma-corrected values. It was always so: on old electronic monitors/TVs a technician might have internal knobs to adjust individual colours, general settings, etc.
Note: professionals outside computer graphics often speak of the opto-electronic transfer function (OETF) of the camera (light to signal) and its inverse, the electro-optical transfer function (EOTF), used when converting an (electrical) signal back to light, e.g. in the screen. I find these names show quickly what "gamma" really is: just a conversion between an analogue electrical signal and light intensity.
The input image has its own colour space. You assume a JPEG here, but often you have much more information (RAW or log encodings, S-Log, ...). You then convert to your working colour space (which may be linear, as in our example). If you show the working image directly, the colours will look distorted. But you may not even be able to display it, because you will probably use more than 8 bits per channel: 16 or 32 bits is common, often as half-float or single-float.
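As a minimal sketch of that linearization step, assuming the input uses a plain power-law gamma of 2.2 (a real sRGB or log-encoded input needs its exact transfer function, plus a matrix for the primaries when moving into a space like ACEScg):

```python
import numpy as np
from PIL import Image

GAMMA = 2.2  # assumed simple power law; not the exact sRGB curve

def decode_to_linear(path):
    """Load an 8-bit image and convert it to linear light as float32."""
    encoded = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return encoded ** GAMMA

def encode_for_display(linear):
    """Re-apply the transfer function and quantize back to 8 bits."""
    encoded = np.clip(linear, 0.0, 1.0) ** (1.0 / GAMMA)
    return (encoded * 255.0 + 0.5).astype(np.uint8)
```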
In short: you can calibrate the monitor in two ways. The best way (if you have a monitor that can be "hardware calibrated") is to modify the tables inside the monitor itself, so it is nearly all transparent (the internal gamma function is simply adapted to give better colours); you still get an ICC profile, but for other reasons. The easier calibration is where the bytes of an image are transformed on your computer to give better colours (in a program, or nowadays often by the operating system, either directly or by telling the video card to do it). You should check carefully that only one component does the colour correction.
Note: in your program you would normally save the image as sRGB (or Adobe RGB), i.e. with a standard ICC profile, and practically never with your screen's ICC profile, for consistency with other images. It is then the OS, or soft-proofing, etc. that converts for your screen; if the image did carry your screen's ICC profile, the OS colour management would simply see that the image-to-output conversion is trivial (just copying the values).
So take into account that at every step there is an expected colour space and gamma. Every program expects it, and it may be changed later. This can mean unnecessary calculation, but it makes things simpler: you don't have to track everybody's expectations.
And there are many more details. The ICC profile is also used to characterize your monitor (its achievable gamut), which can be used for other colour-management tasks. Rendering intents are just the method by which colour correction is done when the image has out-of-gamut colours: either keep the nearest colour (you lose shading but gain accuracy), or scale all colours (and expect your eyes to adapt, which they do if you see just one image at a time). The devil is in such details.
Goal: I want to grab the best frame from an animated GIF and use it as a static preview image. I believe the best frame is one that shows the most content - not necessarily the first or last frame.
Take this GIF for example:
This is the first frame:
Here is the 28th frame:
It's clear that frame 28 represents the entire GIF well.
How could I programmatically determine if one frame has more pixel/content over another? Any thoughts, ideas, packages/modules, or articles that you can point me to would be greatly appreciated.
One straightforward way this could be accomplished would be to estimate the entropy of each image and choose the frame with maximal entropy.
In information theory, entropy can be thought of as the "randomness" of the image. An image of a single color is very predictable; the flatter the distribution of pixel values, the more random the image. This is closely related to the compression method described by Arthur-R, as entropy is the lower bound on how much the data can be losslessly compressed.
Estimating Entropy
One way to estimate the entropy is to approximate the probability mass function for pixel intensities using a histogram. To generate the plot below I first convert the image to grayscale, then compute the histogram using a bin spacing of 1 (for pixel values from 0 to 255). Then, normalize the histogram so that the bins sum to 1. This normalized histogram is an approximation of the pixel probability mass function.
Using this probability mass function we can easily estimate the entropy of the grayscale image which is described by the following equation
H = E[-log(p(x))]
Where H is entropy, E is the expected value, and p(x) is the probability that any given pixel takes the value x.
Programmatically H can be estimated by simply computing -p(x)*log(p(x)) for each value p(x) in the histogram and then adding them together.
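For concreteness, a small sketch of that computation (Python with Pillow/NumPy here rather than Ruby):

```python
# Estimate entropy from the grayscale histogram: 256 bins, normalized to a
# probability mass function, then H = -sum(p * log2(p)) in bits per pixel.
import numpy as np
from PIL import Image

def frame_entropy(image):
    gray = np.asarray(image.convert("L"))
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # treat 0 * log(0) as 0
    return -np.sum(p * np.log2(p))

def best_frame(gif_path):
    gif = Image.open(gif_path)
    entropies = []
    for i in range(gif.n_frames):
        gif.seek(i)
        entropies.append(frame_entropy(gif))
    return int(np.argmax(entropies))  # index of the highest-entropy frame
```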
Plot of entropy vs. frame number for your example, with frame 21 (the 22nd frame) having the highest entropy.
Observations
The entropy computed here is not equal to the true entropy of the image, because it assumes that each pixel is independently sampled from the same distribution. To get the true entropy we would need to know the joint distribution of the image, which we can't know without understanding the underlying random process that generated the images (which would include human interaction). However, I don't think the true entropy would be very useful, and this measure should give a reasonable estimate of how much content is in the image.
This method will fail if some not-so-interesting frame contains much more noise (randomly colored pixels) than the most interesting frame, because noise results in high entropy. For example, the following image is pure uniform noise and therefore has maximum entropy (H = 8 bits), i.e. no compression is possible.
Ruby Implementation
I don't know Ruby, but it looks like one of the answers to this question refers to a package for computing the entropy of an image.
From m. simon borg's comment
FWIW, using Ruby's File.size() returns 1904 bytes for the 28th frame image and 946 bytes for the first frame image – m. simon borg
File.size() should be roughly proportional to entropy.
As an aside, if you check the size of the 200x200 noise image on disk you will see that the file is 40,345 bytes even after compression, but the uncompressed data is only 40,000 bytes. Information theory tells us that no compression scheme can ever losslessly compress such images on average.
There are a couple ways I might go about this. My first thought (this may not be the most practical solution, but it seems theoretically interesting!) would be to try losslessly compressing each frame, and in theory, the frame with the least repeatable content (and thus the most unique content) would have the largest size, so you could then compare the size in bytes/bits of each compressed frame. The accuracy of this solution would probably be highly dependent on the photo passed in.
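A rough sketch of that idea, using zlib on the raw RGB bytes as a stand-in for whatever lossless compressor you prefer:

```python
# Rank frames by how large they are after lossless compression: a larger
# compressed size suggests less repetition, i.e. more unique content.
import zlib
from PIL import Image

def compressed_sizes(gif_path):
    gif = Image.open(gif_path)
    sizes = []
    for i in range(gif.n_frames):
        gif.seek(i)
        raw = gif.convert("RGB").tobytes()
        sizes.append(len(zlib.compress(raw, 9)))
    return sizes
```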
A more realistic/practical solution might be to grab the predominant color in the GIF (so in the example, the background color), and then iterate through each pixel and increment a counter each time the color of the current pixel doesn't match the color of the background.
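A small sketch of that counting approach (Python/NumPy, assuming the most frequent color really is the background):

```python
# Find the most common color (assumed background) and count pixels that differ from it.
import numpy as np
from PIL import Image

def non_background_count(frame):
    rgb = np.asarray(frame.convert("RGB")).reshape(-1, 3)
    colors, counts = np.unique(rgb, axis=0, return_counts=True)
    background = colors[np.argmax(counts)]
    return int(np.sum(np.any(rgb != background, axis=1)))
```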
I'm thinking about some more optimized/sample-based solutions, and will edit my response to include them a little later, if performance is a concern for you.
I think you could use an API, such as a RESTful web service, to do this, because doing it without one is quite hard.
For example, these are some well-known APIs:
https://cloud.google.com/vision/
https://www.clarifai.com/
https://vize.ai
https://azure.microsoft.com/en-us/services/cognitive-services/computer-vision/
https://imagga.com
I am working with the GD library, and I'm looking for a way to detect the pixel nearest to the center of each shape, as well as the total area used by each shape, in a monochrome black-and-white image.
I'm having difficulty coming up with an efficient algorithm to do this. If you have done something similar to this in the past, I'd be grateful for any solution that would help.
Check out the binary image library
Essentially, Otsu threshold to separate out foreground from background, then label connected components. That particular image looks very clean but you might need morph ops to clean it up a bit and get rid of small holes and other artifacts.
Then you have the area trivially (count the pixels in the component) or almost as trivially (use the weighted area function that penalises edge pixels). The centre is just the mean of the component's pixel coordinates.
http://malcolmmclean.github.io/binaryimagelibrary/
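For illustration, here is roughly what that pipeline looks like using scikit-image (my assumption; the linked library or any other package with Otsu thresholding and connected-component labelling would work the same way):

```python
# Otsu threshold, label connected components, then report area and centroid per blob.
import numpy as np
from skimage import io, filters, measure, morphology

def blob_stats(path):
    gray = io.imread(path, as_gray=True)
    mask = gray < filters.threshold_otsu(gray)   # dark shapes on a light background
    mask = morphology.remove_small_holes(mask)   # clean up small holes/artifacts
    labels = measure.label(mask)
    for region in measure.regionprops(labels):
        cy, cx = region.centroid                 # mean of the component's pixel coordinates
        print(f"blob {region.label}: area={region.area}, centre=({cx:.1f}, {cy:.1f})")
```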
@MalcolmMcLean is right, but there are remaining difficulties (if you are after maximum accuracy).
If you threshold with Otsu, there are a few pairs of "kissing" dots which will form a single blob using connected component analysis.
In addition, Otsu thresholding will discard some of the partially filled edge pixels, so the weighted averages will be inaccurate. A cure would be to increase the threshold (up to 254 is possible), but that worsens the problem of the kissing dots.
A workaround is to keep a low threshold and dilate the blobs individually to obtain masks that cover all the edge pixels. Even so, slight inaccuracies will remain in the vicinity of the kissing points.
Blob splitting by the watershed transform is also possible, but more care is required to handle the shared pixels. I doubt that a perfect solution is possible.
An alternative is the use of subpixel edge detection and least-squares circle fitting (after blob detection with a very low threshold to separate the dots). By avoiding the edge pixels common to two circles, you can probably achieve excellent results.
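A sketch of just the least-squares circle fit (the algebraic Kåsa fit), assuming the subpixel edge points of one dot have already been extracted as x/y arrays:

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares circle fit to point arrays x, y (Kåsa algebraic fit)."""
    # Solve  a*x + b*y + c = -(x^2 + y^2)  in the least-squares sense,
    # where centre = (-a/2, -b/2) and radius^2 = cx^2 + cy^2 - c.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r
```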
I want to get a screenshot of a x11 window and find the location of smaller images in it. I've had no experiences with working with images, I searched a lot, but I don't get much helpful results.
The images come from files and can be loaded in whatever format is easiest to use.
Getting the screenshot is easy, using XGetImage. But then the question is which format to use: XYPixmap or ZPixmap? What's the difference? How is each pixel represented?
And then what about the images? Which file format is easiest to use? And how is each pixel represented in that format?
And which algorithm should I use to find the location of the images in the screenshot?
I'm really lost here. I need a push in the right direction and see some example code that can help me to understand what I'm dealing with. Couldn't find any similar work.
The language, frameworks or tools don't really matter to me, as long as I get it working on my Ubuntu machine. I can work in C, C++, Haskell, Python or JavaScript.
With XYPixmap, each image plane is a separate bitmap (one bit per pixel, with padding at the end each scanline). If you have 24-bit color, you get 24 separate bitmaps. To retrieve pixel value at some (x,y) coordinates, you need to fetch one bit from each of the bitmaps at these coordinates, and pack these bits into a pixel.
With ZPixmap, pixels are represented as sequences of bits, with padding at the end of each scanline. If you have 24-bit color, every 3 bytes is a pixel.
In both cases, there may be padding at the end and sometimes at the beginning of each scanline. It is all described here.
I would not use either format directly. Convert your pixmap to a simple 1, 2, or 4 bytes-per-pixel 2D array, and do the same with the patterns you want to search for. If you want to find exact matches, you can use a slightly modified string-search algorithm like KMP. Fuzzy matches are tricky; I don't know of any methods that work well.
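As a starting point, a brute-force exact-match search over plain 2D arrays could look like this (a slow but simple sketch, before reaching for anything cleverer):

```python
# Slide the pattern over the screenshot and compare blocks directly.
import numpy as np

def find_subimage(screen, pattern):
    """screen and pattern are 2D (or HxWx3) arrays of packed pixel values."""
    sh, sw = screen.shape[:2]
    ph, pw = pattern.shape[:2]
    for y in range(sh - ph + 1):
        for x in range(sw - pw + 1):
            if np.array_equal(screen[y:y+ph, x:x+pw], pattern):
                return x, y          # top-left corner of the match
    return None
```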
The luminance of pixels on a computer screen is not usually linearly related to the digital RGB triplet values of a pixel. The nonlinear response of early CRTs required a compensating nonlinear encoding, and we continue to use such encodings today.
Usually we produce images on a computer screen and consume them there as well, so it all works fine. But when we antialias, the nonlinearity — called gamma — means that we can't just add an alpha value of 0.5 to a 50% covered pixel and expect it to look right. An alpha value of 0.5 is only 0.5^2.2=22% as bright as an alpha of 1.0 with a typical gamma of 2.2.
Is there any widely established best practice for antialiasing gamma compensation? Do you have a pet method you use from day to day? Has anyone seen any studies of the results and human perceptions of the quality of the graphic output with different techniques?
I've thought of doing standard X^(1/2.2) compensation but that is pretty computationally intense. Maybe I can make it faster with a 256 entry lookup table, though.
Lookup tables are used quite often for work like that. They're small and fast.
But whether look-up or some formula, if the end result is an image file, and the format permits, it's best to save a color profile or at least the gamma value in the file for later viewing, rather than try adjusting RGB values yourself.
The reason: for typical byte-valued R, G, B channels, you have 256 unique values in each channel at each pixel. That's almost good enough to look good to the human eye (I wish "byte" had been defined as nine bits!) Any kind of math, aside from trivial value inversion, would map many-to-one for some of those values. The output won't have 256 values to pick from for each pixel for R, G, or B, but far fewer. That can lead to contouring, jaggies, color noise and other badness.
Precision issues aside, if any kind of decent quality is desired, all compositing, mixing, blending, color correction, fake lens flare addition, chroma-keying and whatever else should be done in linear RGB space, where the values of R, G and B are proportional to physical light intensity. The image math then mimics the physics of light. But where ultimate speed is vital, there are ways to cheat.
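To make that concrete, here is a rough sketch of alpha blending in linear light with a 256-entry decode table, assuming a plain 2.2 gamma rather than the exact sRGB curve:

```python
import numpy as np

GAMMA = 2.2
DECODE = (np.arange(256) / 255.0) ** GAMMA          # encoded byte -> linear intensity

def blend(src, dst, alpha):
    """Alpha-blend two uint8 RGB arrays in linear space, return a uint8 result."""
    lin = DECODE[src] * alpha + DECODE[dst] * (1.0 - alpha)
    return np.clip(lin ** (1.0 / GAMMA) * 255.0 + 0.5, 0, 255).astype(np.uint8)
```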
Jim Blinn's book "Dirty Pixels" outlines a fast and accurate compositing calculation using 16-bit math plus lookup tables to convert accurately to and from linear color space. This guy worked on NASA's visualisations; he knows his stuff.
I'm trying to answer, though mainly for reference now, to the actual questions:
First, there are the recommendations from ITU (http://www.itu.int/rec/T-REC-H.272-200701-I/en) which can be applied to programming (but you have to know your stuff).
Jim Blinn's "Notation, Notation, Notation", Chapter 9, has a very detailed mathematical and perceptual error analysis, although he only covers compositing (many other graphics tasks are affected too).
The notation he establishes can also be used to derive a way of dealing with gamma, or to check whether a given way of doing so is actually correct. Very handy; it's my pet method (mainly because I discovered it independently but later found his book).
When generating images, one typically works in a linear color space (like linear RGB or one of the CIE color spaces) and then converts to a non-linear RGB space at the end. That conversion can be accelerated in hardware or via lookup tables or even through tricky math. (See the other answers' references.)
When performing an alpha blend (e.g., render this icon onto this background), this kind of precision is often elided in favor of speed. The results are computed directly in the non-linear RGB-space by lerping with the alpha as the parameter. This is not "correct", but it's good enough in most cases. Especially for things like icons on desktops.
If you're trying to do more correct blending, you treat it like an original render. Work in linear space (which may require an initial conversion) and then convert to your non-linear display space at the end.
A lot of graphics nowadays use sRGB as the non-linear display color space. If I recall correctly, sRGB is very similar to a gamma of 2.2, but there are adjustments made to values at the low end.
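For reference, the piecewise sRGB transfer function looks like this (close to a power curve, but with a short linear segment near black):

```python
import numpy as np

def srgb_to_linear(c):
    c = np.asarray(c, dtype=np.float64)          # encoded values in [0, 1]
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(l):
    l = np.asarray(l, dtype=np.float64)          # linear light in [0, 1]
    return np.where(l <= 0.0031308, l * 12.92, 1.055 * l ** (1.0 / 2.4) - 0.055)
```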