I saved a *.dds image using the NVIDIA texture tools plugin for Photoshop, and I chose the D3DFMT_A32R32G32B32F format. Because it's a floating point format, I thought the data was saved in Linear color space. However, when I loaded the file in my code, I found I needed to perform a conversion from sRGB to Linear, to get the result I expected.
Is this correct? Is D3DFMT_A32R32G32B32F an sRGB format? Or, is it a bug in the Photoshop plugin? Or, is the format agnostic in terms of color space?
On a related note, I chose the legacy D3DFMT_A32R32G32B32F format because the plugin doesn't seem to support the newer DXGI formats.
Float data could technically be in any color space. In DirectXTex I generally consider float formats to be linear space, but the old plug-in could certainly be treating everything as sRGB.
The traditional DDSURFACEDESC2 DDS file doesn't explicitly encode a color space. There were some hacks by NVIDIA to add an sRGB vs. linear flag, but they were never widely adopted. With the DX10 extended header that uses DXGI_FORMAT, only a few formats explicitly indicate sRGB, so even there you can technically have data in various color spaces.
See this blog post
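To make "only a few formats" concrete, here is a minimal sketch of a check for the explicit _SRGB enumeration values (the helper name is mine; DirectXTex also ships an IsSRGB() utility along these lines, if I remember correctly):

```cpp
// Returns true only for DXGI formats that explicitly declare sRGB encoding
// via the _SRGB suffix; every other format is colorspace-agnostic as far as
// the format enum itself is concerned.
#include <dxgiformat.h>

bool HasExplicitSRGBSuffix(DXGI_FORMAT format)
{
    switch (format)
    {
    case DXGI_FORMAT_R8G8B8A8_UNORM_SRGB:
    case DXGI_FORMAT_B8G8R8A8_UNORM_SRGB:
    case DXGI_FORMAT_B8G8R8X8_UNORM_SRGB:
    case DXGI_FORMAT_BC1_UNORM_SRGB:
    case DXGI_FORMAT_BC2_UNORM_SRGB:
    case DXGI_FORMAT_BC3_UNORM_SRGB:
    case DXGI_FORMAT_BC7_UNORM_SRGB:
        return true;
    default:
        return false;
    }
}
```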
I'm reading/watching anything I can about color management/color science and something that's not making sense to me is the scene-referred and display-referred workflows. Isn't everything display-referred, because your monitor is converting everything you see into something it can display?
While reading this article, I came across this image:
So, if I understand this right: to follow a linear workflow, I should apply an inverse power function to any imported jpg/png/etc. files that contain color data, to make their gamma linear. I then work on the image, and when I'm ready to export, say to sRGB saved as a png, the export bakes the original transfer function back in.
But even while it's linear and I'm working on it, isn't my monitor converting everything I see into what it can display? Isn't it basically applying its own LUT? Isn't there already a gamma curve that the monitor itself is applying?
Also, from input to output, how many color space conversions take place, say if I'm working in the ACEScg color space? If I import a jpg texture, I linearize it and bring it into the ACEScg color space. I work on it, and when I render it out, the renderer applies a view transform to convert it from ACEScg to sRGB, and then what I'm seeing is my monitor converting that from sRGB to my monitor's own ICC profile, right (which is always happening, since everything I see goes through my monitor's ICC profile)?
Finally, if I add a tone-mapping S-curve, where does that conversion sit in that chain?
I'm not sure your question is about programming, and it doesn't have much relevance to the title.
In any case:
Light (photons) behaves linearly: the intensity of two lights together is the sum of the intensity of each light. For this reason a lot of image manipulation is done in linear space. Note: camera sensors often have a near-linear response.
Our eyes respond roughly like a gamma exponent of 2, so gamma encoding is useful for compression (less visible noise with fewer bits of information). By coincidence, CRT phosphors had a similar response (otherwise the engineers would have found some other method; in the past such things were worked out with a lot of experimentation and user feedback across many settings).
Screens expect images with a standardized gamma correction (nowadays it depends on the port, the settings, and the image format), and some can handle many different colour spaces. Note: we no longer have CRTs, so the screen converts the data from the expected gamma to the monitor's own gamma (possibly with a different value per channel), a sort of LUT (it may be done purely electronically, so without the T for table). Screens are set up so that a standard signal produces the expected light. (There are standards, both test images and measurement methods, for the expected behaviour, so there is effectively some implicit gamma correction applied to the already gamma-corrected values. It was always so: on old electronic monitors/TVs, technicians had internal knobs to adjust individual colours, general settings, etc.)
Note: professionals outside computer graphics often use the terms opto-electronic transfer function (OETF) for the camera side (light to signal) and electro-optical transfer function (EOTF) for converting a signal back to light, e.g. in the screen. I find these names for "gamma" show quickly what gamma really is: just a conversion between an electrical signal and light intensity.
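For reference, here is a minimal sketch of the sRGB transfer functions being discussed (the piecewise curves from IEC 61966-2-1; function names are mine):

```cpp
#include <cmath>

// sRGB decode (roughly the display EOTF): encoded value in [0,1] -> linear light.
float SrgbToLinear(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// sRGB encode (the inverse): linear light in [0,1] -> encoded value.
float LinearToSrgb(float c)
{
    return (c <= 0.0031308f) ? c * 12.92f
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}
```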
The input image has its own colour space. You assume a JPEG here, but often you have much more information (RAW, log, S-Log, ...). You then convert to your working colour space (which may be linear, as in our example). If you display the working image directly, the colours will look distorted. But you may not even be able to display it directly, because you will probably use more than 8 bits per channel; 16 or 32 bits is common, often as half-float or single-precision float.
And I lost part of my answer (after the last autosave). The rest was also complex, but the answer is already too long, so in short: you can calibrate the monitor in two ways. The best way, if you have a monitor that can be "hardware calibrated", is to modify the tables inside the monitor itself. That is nearly transparent: the monitor's internal gamma function is simply adapted to give better colours. You still get an ICC profile, but for other reasons. The alternative is the easy calibration, where the bytes of an image are transformed on your computer to get better colours (by a program, or nowadays often by the operating system, either directly or by telling the video card to do it). You should check carefully that only one component is doing the colour correction.
Note: in your program you would save the image as sRGB (or AdobeRGB), i.e. with a standard ICC profile, and practically never with your screen's ICC profile, for consistency with other images. It is then the OS, soft-proofing, etc. that converts for your screen; if the image did carry your screen's ICC profile, the OS colour management would simply see that the image-ICC-to-output-ICC conversion is trivial (just copying the values).
So take into account that at every step there is an expected colour space and gamma. Every program expects this, and it may be changed later. There may be some unnecessary computation, but it makes things simpler: you do not have to track the expectations yourself.
And there are many more details. The ICC profile is also used to characterize your monitor (the gamut it is capable of), which can be used for some colour management tasks. The rendering intents are just the methods by which colour correction is done when an image has out-of-gamut colours: either keep the nearest in-gamut colour (you lose shades but gain accuracy), or scale all colours (and expect your eyes to adapt; they do if you view just one image at a time). The devil is in such details.
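To tie the steps together, here is a rough sketch of where each conversion sits for the simple case of an sRGB-encoded 8-bit input and an sRGB display. It is only a sketch: the working-space primary conversion (e.g. to ACEScg) is left as a placeholder comment, and the Reinhard curve stands in for whatever tone-mapping s-curve you use.

```cpp
#include <algorithm>
#include <cmath>

struct Rgb { float r, g, b; };

// sRGB decode: encoded [0,1] -> linear light.
static float SrgbDecode(float c)
{
    return (c <= 0.04045f) ? c / 12.92f
                           : std::pow((c + 0.055f) / 1.055f, 2.4f);
}

// sRGB encode: linear light -> encoded [0,1].
static float SrgbEncode(float c)
{
    c = std::clamp(c, 0.0f, 1.0f);
    return (c <= 0.0031308f) ? c * 12.92f
                             : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

// A simple Reinhard-style tone curve, applied to linear values.
static float Tonemap(float x) { return x / (1.0f + x); }

Rgb ProcessPixel(unsigned char r8, unsigned char g8, unsigned char b8)
{
    // 1) Undo the input transfer function (jpg/png assumed sRGB-encoded).
    Rgb lin{ SrgbDecode(r8 / 255.0f), SrgbDecode(g8 / 255.0f), SrgbDecode(b8 / 255.0f) };

    // 2) (Optional) convert primaries into the working space and do the
    //    actual compositing / grading math here, while the data is linear.

    // 3) Tone map, still on linear values.
    Rgb toned{ Tonemap(lin.r), Tonemap(lin.g), Tonemap(lin.b) };

    // 4) Output / view transform: back to sRGB encoding for display or for
    //    saving to an 8-bit file.
    return Rgb{ SrgbEncode(toned.r), SrgbEncode(toned.g), SrgbEncode(toned.b) };
}
```

The point to take away is that the tone curve operates on linear values, before the final display encoding.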
What color space do web browsers in general, and Chrome in particular, assume for the WebGL drawing buffer? That is, in what color space should a shader output pixel values?
I can't find anything in the WebGL specs about what color space to assume for the WebGL drawing buffer.
Our experiments indicate that Chrome assumes sRGB during compositing. An image element tagged with Prophoto is displayed correctly on a wide gamut monitor, but the same image rendered on a WebGL canvas is displayed as if it was tagged with sRGB.
Does this mean that it is currently not possible to do color correct rendering on a wide gamut display in WebGL, consistently on different browsers?
sRGB is the Standard
The standard for all web content is defined as sRGB by the W3C, and this applies through CSS3.
CSS level 4 will be introducing additional colorspaces, but sRGB is still the default standard.
When no colorspace is defined in the content (such as an untagged image), the user agent should assume sRGB. Most devices are sRGB and do not use color management, so non-sRGB content may display incorrectly on them.
WebGL
WebGL itself does not do color management. Note, however, that there is work going on in this area, particularly as HDR and multiple colorspaces begin to take hold in developing standards.
Accessibility
Also, sRGB is the standard web colorspace for accessibility, and this will likely remain for the foreseeable future, as the red primary in sRGB is still fairly visible to protanopia color vision deficiency.
Majority Today
sRGB is the standard used for the majority of monitors and devices. Rec709, the standard for HDTV, uses the exact same primaries but has a slightly different transfer curve specification more suitable for viewing content in a darker environment.
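To make "slightly different transfer curve" concrete, here is a hedged sketch of the two encoding functions side by side (constants from IEC 61966-2-1 and ITU-R BT.709; function names are mine):

```cpp
#include <cmath>

// Encoding side only: linear [0,1] -> non-linear signal.
// The primaries and white point are identical; only these curves differ.
float SrgbEncode(float x)      // IEC 61966-2-1 (sRGB)
{
    return (x <= 0.0031308f) ? 12.92f * x
                             : 1.055f * std::pow(x, 1.0f / 2.4f) - 0.055f;
}

float Rec709Encode(float x)    // ITU-R BT.709 OETF
{
    return (x < 0.018f) ? 4.5f * x
                        : 1.099f * std::pow(x, 0.45f) - 0.099f;
}
```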
Please let me know if you have additional questions.
The official documentation of DXGI_FORMAT tells us that only formats with the _SRGB enumeration postfix are in the sRGB color space. I thought all other formats without this postfix were in linear space. But I found a very strange behavior in the format conversion function of the DirectXTex library. (You can download it from http://directxtex.codeplex.com/ )
First, I exported a texture as DXGI_FORMAT_R32G32B32A32_FLOAT using the NVIDIA Photoshop DDS plugin. Then I loaded this file with the LoadFromDDSFile() function and converted its format to DXGI_FORMAT_R16G16B16A16_UNORM with the Convert() function. (Both functions are provided by the DirectXTex library.)
Guess what? After the image was converted to DXGI_FORMAT_R16G16B16A16_UNORM, the brightness of every pixel changed as well; the whole image became brighter than before.
If I manually convert the pixel values from sRGB space to linear space after the image has been converted to DXGI_FORMAT_R16G16B16A16_UNORM, the resulting pixel values are the same as the input. Therefore, I suppose that the DirectXTex library treats DXGI_FORMAT_R32G32B32A32_FLOAT as a linear-space format and DXGI_FORMAT_R16G16B16A16_UNORM as an sRGB-space format, and so it performed a color space conversion from linear to sRGB. (I tried to find out why the Convert() function also converts the color space, but it is implemented with WIC, and there is no source code for it.)
So, is there a bug in the DirectXTex library, or is this really the standard for DXGI_FORMATs? If some DXGI_FORMATs imply different color spaces, please tell me where I can find the specification for that.
Any help will be appreciated. Thanks!
By convention float RGB values are linear, and integer RGB values are gamma-compressed. There is no particular benefit to gamma-compressing floats since the reason for gamma is to use more bits where it is perceptually needed, and floats have sufficient (perhaps excessive) number of bits throughout and are already pseudo-log encoded (using the exponent). (source)
Note that the colorspace of integer RGB textures in DXGI which are not specifically *_SRGB is not sRGB; it is driver dependent, and usually has a fixed gamma of 0.5.
The DirectXTex library does appear to be behaving correctly. However, please note that you are also relying on the behavior of whatever software you use to both capture and display the DDS files. A better test for just DirectXTex is simply to do a round-trip conversion float->int->float in the library and compare the results numerically rather than visually.
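Here is a rough sketch of that round trip, based on my reading of the current DirectXTex headers (older releases used slightly different filter-flag types, so treat the exact signatures as approximate):

```cpp
#include <DirectXTex.h>

using namespace DirectX;

bool RoundTrip(const wchar_t* path)   // path to the original FLOAT .dds
{
    ScratchImage srcFloat;
    if (FAILED(LoadFromDDSFile(path, DDS_FLAGS_NONE, nullptr, srcFloat)))
        return false;

    // float -> 16-bit UNORM
    ScratchImage asUnorm;
    if (FAILED(Convert(srcFloat.GetImages(), srcFloat.GetImageCount(),
                       srcFloat.GetMetadata(), DXGI_FORMAT_R16G16B16A16_UNORM,
                       TEX_FILTER_DEFAULT, TEX_THRESHOLD_DEFAULT, asUnorm)))
        return false;

    // ... and back to float
    ScratchImage backToFloat;
    if (FAILED(Convert(asUnorm.GetImages(), asUnorm.GetImageCount(),
                       asUnorm.GetMetadata(), DXGI_FORMAT_R32G32B32A32_FLOAT,
                       TEX_FILTER_DEFAULT, TEX_THRESHOLD_DEFAULT, backToFloat)))
        return false;

    // Compare srcFloat and backToFloat numerically here (e.g. per-channel
    // maximum absolute difference) instead of judging brightness by eye.
    return true;
}
```

If you do want the integer intermediate to be treated as sRGB-encoded, the TEX_FILTER_SRGB* flags are, as far as I know, the knobs that control that behavior.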
I have a PNG image with alpha values and need to reduce the number of colors. I need to have no more than 256 colors in the image, and so far everything I've tried (from Paint Shop Pro to Leptonica, etc.) strips the image of its alpha channel and makes it unusable. Is there anything out there that does what I want?
Edit: I do not want to use an 8-bit palette. I just need to reduce the number of colors so that my own program can process the image.
Have you tried ImageMagick?
http://www.imagemagick.org/script/index.php
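If you would rather drive it from code, here is a hedged sketch using ImageMagick's Magick++ binding (API names as I recall them, so check your version's headers; "input.png" and "output.png" are placeholders). Writing with the PNG32: prefix keeps a full RGBA image, just with at most 256 distinct colors, which matches the "no palette, just fewer colors" requirement in the edit above:

```cpp
#include <Magick++.h>
#include <iostream>

int main(int argc, char** argv)
{
    Magick::InitializeMagick(argc > 0 ? argv[0] : nullptr);
    try
    {
        Magick::Image image("input.png");   // placeholder file name
        image.quantizeColors(256);          // target number of colors
        image.quantizeDither(true);         // optional dithering
        image.quantize();                   // perform the reduction
        image.write("PNG32:output.png");    // force truecolor RGBA output, not indexed
    }
    catch (const Magick::Exception& e)
    {
        std::cerr << e.what() << '\n';
        return 1;
    }
    return 0;
}
```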
8-bit PNGs with alpha transparency will only render the alpha correctly in newer web browsers.
Here are some tools and a website that do the conversion:
free pngquant
Adobe Fireworks
and website: http://www.8bitalpha.com/
Also, see similar question
The problem you describe is inherent in the PNG format. See the entry at Wikipedia and notice there's no entry in the color options table for Indexed & alpha. There's an ability to add an alpha value to each of the 256 colors, but typically only one palette entry will be made fully transparent and the rest will be fully opaque.
Paint Shop Pro has a couple of options for blending or simulating partial transparency in a paletted PNG - I know because I wrote it.
How can I see the color space of my image with OpenCV?
I would like to be sure it is RGB before converting it to another color space using the cvCvtColor() function.
Thanks.
Unfortunately, OpenCV doesn't provide any sort of indication as to the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you if an image should be interpreted as HSV vs. RGB - it's all just a bunch of bytes to the machine (should this be HSV or RGB?). I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information. If you're really desperate and you're dealing only with a certain type of images (outdoor scenes, offices, faces, etc.) you could try computing some statistics on your images (e.g. build histogram statistics for natural RGB images and some for natural HSV images), and then try to classify your totally unknown image by comparing which color space your image is closer to.
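For the wrapper idea, a minimal sketch (the struct and enum are hypothetical, not part of OpenCV):

```cpp
#include <opencv2/core/core_c.h>   // IplImage (legacy C API)

// The color space is whatever you last declared it to be; OpenCV itself
// cannot tell you.
enum class ColorSpace { Unknown, BGR, RGB, HSV, Lab, Gray };

struct TaggedImage
{
    IplImage*  image = nullptr;
    ColorSpace space = ColorSpace::Unknown;
};
```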
txandi makes an interesting point. OpenCV has a BGR colorspace which is used by default. This is similar to the RGB colorspace except that the B and R channels are physically switched in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display, and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src,dst,CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer or will be saving the images out to file, you should make sure they are in BGR color space.
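For reference, the conversion mentioned above in both the legacy C API used in this thread and the current C++ API (the wrapper function names are mine):

```cpp
#include <opencv2/imgproc/imgproc.hpp>    // cv::cvtColor
#include <opencv2/imgproc/imgproc_c.h>    // cvCvtColor, CV_BGR2RGB

// Run your filters in OpenCV's default BGR order, then convert right before
// handing the image to a non-OpenCV viewer.
void BgrToRgbForViewer(const cv::Mat& bgr, cv::Mat& rgb)
{
    cv::cvtColor(bgr, rgb, cv::COLOR_BGR2RGB);   // modern C++ API
}

void BgrToRgbForViewerLegacy(const IplImage* bgr, IplImage* rgb)
{
    cvCvtColor(bgr, rgb, CV_BGR2RGB);            // legacy C API
}
```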
The IplImage struct has a field named colorModel consisting of 4 chars. Unfortunately, OpenCV ignores this field. But you can use this field to keep track of different color models.
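A small sketch of how that could look (OpenCV never reads these fields, so the convention is entirely yours; colorModel and channelSeq are 4-char arrays on IplImage):

```cpp
#include <cstring>
#include <opencv2/core/core_c.h>   // IplImage

// Tag an image as HSV using the otherwise-ignored colorModel field.
void TagAsHsv(IplImage* img)
{
    std::memcpy(img->colorModel, "HSV\0", 4);
    std::memcpy(img->channelSeq, "HSV\0", 4);
}

bool IsTaggedHsv(const IplImage* img)
{
    return std::strncmp(img->colorModel, "HSV", 3) == 0;
}
```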
I basically split the channels and display each one to figure out the color space of the image I'm using. It may not be the best way, but it works for me.
For a detailed explanation, you can refer to the link below.
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/
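A rough sketch of that split-and-look approach using the C++ API (the file name and window titles are placeholders):

```cpp
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("test.png");    // imread returns BGR order by default
    if (img.empty())
        return 1;

    std::vector<cv::Mat> channels;
    cv::split(img, channels);                // one single-channel Mat per plane

    // Eyeball each plane: e.g. in HSV the first plane (hue) looks very
    // different from a typical blue channel.
    for (size_t i = 0; i < channels.size(); ++i)
        cv::imshow("channel " + std::to_string(i), channels[i]);

    cv::waitKey(0);
    return 0;
}
```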