How can I see the color space of my image with OpenCV?
I would like to be sure it is RGB before converting it to another color space using the cvCvtColor() function.
Thanks.
Unfortunately, OpenCV doesn't provide any sort of indication as to the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you whether an image should be interpreted as HSV vs. RGB - it's all just a bunch of bytes to the machine (should this be HSV or RGB?).

I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information (see the sketch below).

If you're really desperate and you're dealing only with a certain type of image (outdoor scenes, offices, faces, etc.), you could try computing some statistics on your images (e.g. build histogram statistics for natural RGB images and some for natural HSV images), and then try to classify your totally unknown image by checking which color space it is closer to.
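To illustrate the wrapper idea, here is a minimal sketch using the legacy C API from the question. The TaggedImage struct, the ColorSpace enum, and convertToHsv are made-up names for illustration, not OpenCV types:

#include <opencv/cv.h>

/* hypothetical tag for tracking how an image's channels are encoded */
enum ColorSpace { CS_BGR, CS_RGB, CS_HSV, CS_GRAY };

/* carry the color space alongside the raw IplImage */
struct TaggedImage
{
    IplImage*  image;
    ColorSpace colorSpace;
};

/* route all conversions through one place so the tag can never go stale */
void convertToHsv(TaggedImage& img)
{
    if (img.colorSpace == CS_BGR)
    {
        IplImage* hsv = cvCreateImage(cvGetSize(img.image), img.image->depth, 3);
        cvCvtColor(img.image, hsv, CV_BGR2HSV);
        cvReleaseImage(&img.image);
        img.image = hsv;
        img.colorSpace = CS_HSV;
    }
}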
txandi makes an interesting point. OpenCV uses a BGR color space by default. This is the same as the RGB color space except that the B and R channels are physically swapped in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
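For completeness, a short sketch of that call with the destination image allocated first (defaultBGR is assumed to be an 8-bit, 3-channel image you already loaded):

/* allocate a destination of the same size/depth, then swap B and R */
IplImage* imageRGB = cvCreateImage(cvGetSize(defaultBGR), defaultBGR->depth, 3);
cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB);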
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display, and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src,dst,CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer or will be saving the images out to file, you should make sure they are in BGR color space.
The IplImage struct has a field named colorModel consisting of 4 chars. Unfortunately, OpenCV ignores this field, but you can use it yourself to keep track of different color models.
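For example (a sketch; the tag strings are arbitrary since OpenCV itself never reads this field, and myImage is assumed to be an IplImage* you own):

#include <string.h>

/* stash your own tag in the otherwise-ignored 4-char field... */
strncpy(myImage->colorModel, "HSV", 4);

/* ...and check it later before converting */
if (strncmp(myImage->colorModel, "HSV", 4) == 0)
{
    /* already HSV, no conversion needed */
}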
I basically split the channels and display each one to figure out the color space of the image I'm using. It may not be the best way, but it works for me.
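In the C API, that looks something like this sketch (img is assumed to be an 8-bit, 3-channel IplImage*):

/* split the interleaved image into three single-channel planes */
IplImage* c1 = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
IplImage* c2 = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
IplImage* c3 = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
cvSplit(img, c1, c2, c3, NULL);

/* for BGR data, c1 is blue; for HSV data, c1 is hue - eyeballing
   each plane usually makes it obvious which encoding you have */
cvShowImage("channel 1", c1);
cvShowImage("channel 2", c2);
cvShowImage("channel 3", c3);
cvWaitKey(0);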
For a detailed explanation, you can refer to the link below.
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/
Related
I am making a C++ program that has buttons, button containers, chat boxes, etc., and I want to wrap them in textures.
I want to generate smooth edges for all the rectangles I made, and I don't want to do it by plotting extra vertices, since that consumes more CPU and doesn't look that good, and I don't know whether it can work with texture coordinates (i.e. glTexCoord2f(u, v) with glVertex2f(x, y), of which there should only be four since it is a quad).
I currently load textures using SDL_LoadBMP(), which can only load the .bmp format (I'm not completely sure, but the name does say LoadBMP).
So my questions are:
Can the .bmp format handle transparency? If so, how?
Can you show me some code samples using SOIL to load a .gif image, or any other format that can handle transparency?
Can a quad display an irregular/polygonal shape like a hexagon or a star without drawing its background?
An additional question:
How can I make a primitive textbox rendered through C++ OpenGL, so that I can copy the text in it to the clipboard? This is for the chat sessions in my program.
I made my own library that draws text using GL_POINTS, and it doesn't look good when the window is resized because the points get spread out. It takes a const char* for the text to avoid #include <string>, because I want my program to not depend on core functions of C++.
So the better solution is to draw it using bitmaps.
Some suggest drawing it using images, so I really need the transparency to work because I want to draw using quads only.
Yes, the bitmap format does support transparency.
It depends on the compression method: the default RGB method supports 24-bit color, but BITFIELDS compression supports 32-bit color (24-bit + alpha channel).
You can read more at http://en.wikipedia.org/wiki/BMP_file_format
I am using it successfully in the Lime project, here is an implementation written in Haxe: https://github.com/openfl/lime/blob/4adb7d638d15612e2542ae3a0ef581a786d21183/src/lime/_internal/format/BMP.hx
This answer only addresses whether a BMP file can handle transparency and how to load a PNG file using SOIL; I think if you look a little further you can infer how to load a GIF file as well. According to this Wikipedia article, BMP is one file type that supports transparency through an alpha channel, but according to this SO article it doesn't. In my own experience I have not found a way to make BMP transparency work. So theoretically 32-bit BMP files are supposed to support transparency, but I doubt it. (Maybe I will eat my words?)
OK, from the SOIL website, this code shows how to load a PNG file, which handles transparency:
/* load an image file directly as a new OpenGL texture */
GLuint tex_2d = SOIL_load_OGL_texture
(
    "img.png",
    SOIL_LOAD_AUTO,
    SOIL_CREATE_NEW_ID,
    SOIL_FLAG_MIPMAPS | SOIL_FLAG_INVERT_Y | SOIL_FLAG_NTSC_SAFE_RGB |
    SOIL_FLAG_COMPRESS_TO_DXT
);
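SOIL returns 0 on failure, so it's worth checking the result, and the alpha channel only has a visible effect if blending is enabled when the quad is drawn. A sketch of the follow-up code:

/* 0 means the load failed; SOIL can tell you why */
if (tex_2d == 0)
    printf("SOIL loading error: '%s'\n", SOIL_last_result());

/* transparency only shows up if blending is on when you draw the quad */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex_2d);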
BMP transparency is possible in Photoshop, at least with a 24-bit depth.
It doesn't appear as an option of "imwrite" in MATLAB.
I want to take a screenshot of an X11 window and find the locations of smaller images in it. I have no experience working with images; I have searched a lot, but I don't get many helpful results.
The images come from files and can be loaded in whatever format is easiest to use.
Taking the screenshot is easy, using XGetImage. But then the question is which format to use: XYPixmap or ZPixmap? What's the difference? How is each pixel represented?
And then, what about the images? Which file format is easiest to use, and how is each pixel represented in that format?
And which algorithm should I use to find the locations of the images within the screenshot?
I'm really lost here. I need a push in the right direction, and some example code would help me understand what I'm dealing with. I couldn't find any similar work.
The language, frameworks, or tools don't really matter to me as long as I get it working on my Ubuntu machine. I can work in C, C++, Haskell, Python, or JavaScript.
With XYPixmap, each image plane is a separate bitmap (one bit per pixel, with padding at the end of each scanline). If you have 24-bit color, you get 24 separate bitmaps. To retrieve the pixel value at some (x, y) coordinates, you need to fetch one bit from each of the bitmaps at those coordinates and pack the bits into a pixel.
With ZPixmap, pixels are represented as sequences of bits, with padding at the end of each scanline. If you have 24-bit color, every 3 bytes is a pixel.
In both cases, there may be padding at the end, and sometimes at the beginning, of each scanline. It is all described here.
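For example, with a ZPixmap you can let Xlib do the unpacking for you via XGetPixel. A sketch (display, window, width, height, x, and y are assumed to be set up already, and error handling is omitted):

#include <X11/Xlib.h>
#include <X11/Xutil.h>

/* grab the window contents as a ZPixmap image */
XImage* img = XGetImage(display, window, 0, 0, width, height,
                        AllPlanes, ZPixmap);

/* read one pixel and split it using the server-reported channel masks */
unsigned long p = XGetPixel(img, x, y);
unsigned long r = p & img->red_mask;
unsigned long g = p & img->green_mask;
unsigned long b = p & img->blue_mask;

XDestroyImage(img);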
I would not use either format directly. Convert your pixmap to a simple 1-, 2-, or 4-bytes-per-pixel 2D array, and do the same with the patterns you want to search for. If you want to find exact matches, you can use a slightly modified string-search algorithm like KMP. Fuzzy matching is tricky; I don't know of any methods that work well.
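As a starting point, here is a sketch of the brute-force exact match over two such arrays (32 bits per pixel assumed); a KMP-style speedup can be layered on later:

#include <cstdint>
#include <vector>

/* Returns true and sets (outX, outY) if pat (pw x ph) occurs in img (iw x ih).
   Both images are row-major arrays of packed 32-bit pixels. */
bool findSubimage(const std::vector<uint32_t>& img, int iw, int ih,
                  const std::vector<uint32_t>& pat, int pw, int ph,
                  int& outX, int& outY)
{
    for (int y = 0; y + ph <= ih; ++y)
        for (int x = 0; x + pw <= iw; ++x)
        {
            bool match = true;
            for (int py = 0; py < ph && match; ++py)
                for (int px = 0; px < pw; ++px)
                    if (img[(y + py) * iw + (x + px)] != pat[py * pw + px])
                    {
                        match = false;
                        break;
                    }
            if (match) { outX = x; outY = y; return true; }
        }
    return false;
}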
The official documentation of DXGI_FORMAT tells us that only formats with the _SRGB enumeration postfix are in the sRGB color space. I thought all formats without this postfix were in linear space. But I found some very strange behavior in the format conversion function of the DirectXTex library. (You can download it from http://directxtex.codeplex.com/)
First, I exported a texture file as DXGI_FORMAT_R32G32B32A32_FLOAT using the NVIDIA Photoshop DDS plugin. Then I loaded this file with the LoadFromDDSFile() function and converted its format to DXGI_FORMAT_R16G16B16A16_UNORM with the Convert() function. (Both functions are provided by the DirectXTex library.)
Guess what? After the image was converted to DXGI_FORMAT_R16G16B16A16_UNORM, the brightness of every pixel changed: the whole image became brighter than before.
If I manually convert the pixel values from sRGB space to linear space after the image has been converted to DXGI_FORMAT_R16G16B16A16_UNORM, the resulting pixel values are the same as the input. Therefore, I suppose that the DirectXTex library treats DXGI_FORMAT_R32G32B32A32_FLOAT as a linear-space format and DXGI_FORMAT_R16G16B16A16_UNORM as an sRGB-space format, and that it therefore performed a linear-to-sRGB color space transformation. (I tried to find out why the Convert() function also converts the color space, but it is implemented with WIC, and there is no source code for it.)
So, is this a bug in the DirectXTex library, or is this the actual standard for DXGI_FORMATs? If different color spaces apply to certain DXGI_FORMATs, please tell me where I can find the specification for that.
Any help will be appreciated. Thanks!
By convention, float RGB values are linear and integer RGB values are gamma-compressed. There is no particular benefit to gamma-compressing floats, since the reason for gamma is to use more bits where they are perceptually needed; floats have a sufficient (perhaps excessive) number of bits throughout and are already pseudo-log encoded (via the exponent). (source)
Note that the color space of integer RGB textures in DXGI that are not specifically *_SRGB is not sRGB; it is driver-dependent, and usually has a fixed gamma of 0.5.
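For reference, the textbook sRGB transfer function that relates the two representations is sketched below (this is the standard curve; it is not a claim about what any particular driver does):

#include <cmath>

/* encode a linear intensity in [0, 1] to sRGB */
float linearToSrgb(float c)
{
    return (c <= 0.0031308f)
        ? 12.92f * c
        : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

/* decode an sRGB value in [0, 1] back to linear */
float srgbToLinear(float c)
{
    return (c <= 0.04045f)
        ? c / 12.92f
        : std::pow((c + 0.055f) / 1.055f, 2.4f);
}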
The DirectXTex library does appear to be behaving correctly. However, please note that you are also relying on the behavior of whatever software you use to both capture and display the DDS files. A better test for just DirectXTex is simply to do a round-trip conversion float->int->float in the library and compare the results numerically rather than visually.
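A sketch of such a round-trip test; note that the exact flag names and signatures have varied between DirectXTex releases, so treat this as an outline rather than exact code:

#include "DirectXTex.h"
using namespace DirectX;

TexMetadata meta;
ScratchImage original, asUnorm, roundTrip;

/* float -> 16-bit UNORM -> float, all inside DirectXTex */
LoadFromDDSFile(L"test.dds", DDS_FLAGS_NONE, &meta, original);
Convert(*original.GetImage(0, 0, 0), DXGI_FORMAT_R16G16B16A16_UNORM,
        TEX_FILTER_DEFAULT, TEX_THRESHOLD_DEFAULT, asUnorm);
Convert(*asUnorm.GetImage(0, 0, 0), DXGI_FORMAT_R32G32B32A32_FLOAT,
        TEX_FILTER_DEFAULT, TEX_THRESHOLD_DEFAULT, roundTrip);

/* now compare original and roundTrip pixel-by-pixel numerically;
   a systematic brightening would show up as a gamma-shaped difference */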
I have a PNG image with alpha values and need to reduce the number of colors. I need no more than 256 colors for all the colors in the image, and so far everything I have tried (from Paint Shop to Leptonica, etc.) strips the image of its alpha channel and makes it unusable. Is there anything out there that does what I want?
Edit: I do not want to use an 8-bit palette. I just need to reduce the number of colors so that my own program can process the image.
Have you tried ImageMagick?
http://www.imagemagick.org/script/index.php
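For example, something along these lines should quantize to 256 colors while keeping a full 8-bit alpha channel instead of a palette (a sketch; check the options against your ImageMagick version):

convert input.png -colors 256 PNG32:output.png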
8-bit PNGs with alpha transparency will only render alpha on newer web browsers.
Here are some tools and websites that do the conversion:
free pngquant
Adobe Fireworks
and a website: http://www.8bitalpha.com/
Also, see this similar question.
The problem you describe is inherent in the PNG format. See the entry at Wikipedia and notice there's no entry in the color options table for Indexed & alpha. There's the ability to add an alpha value to each of the 256 palette colors, but typically only one palette entry will be made fully transparent and the rest fully opaque.
Paint Shop Pro has a couple of options for blending or simulating partial transparency in a paletted PNG - I know because I wrote it.
I am trying to read an image with ITK and display it with VTK.
But there is a problem that has been haunting me for quite some time.
I read the images using the classes itkGDCMImageIO and itkImageSeriesReader.
After reading, I can do two different things:
1.
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer, and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with the colors. Some of them are shown very bright, others are so dark you can barely see them.
2.
The second scenario is the registration pipeline. Here, I read the image as before, then use the classes shown in the ITK Software Guide chapter on registration. Then I resample the image and use itkImageSeriesWriter.
And that's when the problem appears. After writing the image to a file, I compare this new image with the image I used as input in the XMedcon software. If the image I wrote was shown too bright in my software, there are no changes when I compare the two in XMedcon. Otherwise, if the image was too dark in my software, it appears all messed up in XMedcon.
I noticed when comparing the two images (the original and the new one) that, in both cases, there are changes in modality, pixel dimensions, and glmax.
I suppose the problem is with glmax, since the major changes occur with the darker images.
I really don't know what to do. Does this have something to do with color level/window? The strangest thing is that all the images are very similar, with identical tags, and only some of them display errors when shown/written.
I'm not familiar with the particulars of VTK/ITK specifically, but it sounds to me like the problem is more general than that. Medical images have a high dynamic range and often the images will appear very dark or very bright if the window isn't set to some appropriate range. The DICOM tags Window Center (0028, 1050) and Window Width (0028, 1051) will include some default window settings that were selected by the modality. Usually these values are reasonable, but not always. See part 3 of the DICOM standard (11_03pu.pdf is the filename) section C.11.2.1.2 for details on how raw image pixels are scaled for display. The general idea is that you'll need to apply a linear scaling to the images to get appropriate pixel values for display.
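In code, the linear scaling from that section of the standard looks roughly like this sketch (mapping stored pixel values to 8-bit display values; center and width come from tags (0028,1050) and (0028,1051)):

/* DICOM C.11.2.1.2 linear window/level, mapped to [0, 255] */
unsigned char applyWindow(double x, double center, double width)
{
    double c = center - 0.5;
    double w = width - 1.0;
    if (x <= c - w / 2.0) return 0;    /* below the window -> black */
    if (x >  c + w / 2.0) return 255;  /* above the window -> white */
    return (unsigned char)(((x - c) / w + 0.5) * 255.0);
}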
What pixel type do you use? In most cases it's simpler to use a floating-point type while using ITK, but raw medical images are often stored as short, so that could be your problem.
You should also write the image to disk after each step (in MHD format, for example) and inspect it with a viewer that's known to work properly, such as vv (http://www.creatis.insa-lyon.fr/rio/vv). You could also post the images here, as well as your code, for further review.
Good luck!
For what you describe as your first issue:
I can convert the ITK image to vtkImageData using itkImageToVTKImageFilter and then use vtkImageReslice to get all three axes. Then I use the classes vtkImageMapper, vtkActor2D, vtkRenderer, and QVTKWidget to display the image.
In this case, when I display the images, there are several problems with the colors. Some of them are shown very bright, others are so dark you can barely see them.
I suggest the following: check your window/level in VTK; they probably aren't adequate for your images. If they are abdominal tomographies, window = 350 and level = 50 should be a good setting.
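With vtkImageMapper, that is set directly on the mapper, roughly like this sketch:

/* vtkImageMapper exposes window/level directly */
vtkImageMapper* mapper = vtkImageMapper::New();
mapper->SetColorWindow(350);  /* width of the gray-value range shown */
mapper->SetColorLevel(50);    /* center of that range (HU, for CT) */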