Does Java ME support the 0xAARRGGBB color format? That is, can I change the opacity of an image by modifying the alpha values of its pixels? I work with the Graphics and Image classes.
This will undoubtedly vary between devices. Why not just try it and see?
Edit: OK, so I guess you want to know how to actually do it. The method you will need is Image.getRGB() -- as per the docs:
"Obtains ARGB pixel data from the specified region of this image and stores it in the provided array of integers. Each pixel value is stored in 0xAARRGGBB format, where the high-order byte contains the alpha channel and the remaining bytes contain color components for red, green and blue, respectively. The alpha channel specifies the opacity of the pixel, where a value of 0x00 represents a pixel that is fully transparent and a value of 0xFF represents a fully opaque pixel."
Modify the pixels you want, then use the data to create a new image with Image.createRGBImage().
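For example, here is a rough sketch that fades an image to 50% opacity (assuming a MIDP 2.0 device; the class and method names are just for illustration):

    import javax.microedition.lcdui.Image;

    public final class AlphaFade {
        // Returns a copy of source faded to 50% opacity.
        public static Image fade(Image source) {
            int w = source.getWidth();
            int h = source.getHeight();
            int[] argb = new int[w * h];
            source.getRGB(argb, 0, w, 0, 0, w, h);   // read 0xAARRGGBB pixels
            for (int i = 0; i < argb.length; i++) {
                int alpha = (argb[i] >>> 24) / 2;    // halve the alpha channel
                argb[i] = (alpha << 24) | (argb[i] & 0x00FFFFFF);
            }
            return Image.createRGBImage(argb, w, h, true); // true = process alpha
        }
    }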
Edit: The spec says that all devices should allow alpha manipulation.
One way to tell whether they do is to create a mutable image, get its Graphics object, write some ARGB data to it, read it back, and see whether the alpha data is still present; if not, you can bet alpha transparency is not supported.
The methods I've linked to should be all you need to do this; the actual method I leave as an exercise ;)
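For what it's worth, one rough sketch of how that exercise might look (untested; note that the MIDP spec makes a new mutable image white and fully opaque, so this variant checks whether the half-transparent pixel was actually blended, rather than inspecting the alpha byte directly):

    import javax.microedition.lcdui.Graphics;
    import javax.microedition.lcdui.Image;

    public final class AlphaProbe {
        // Draw 50%-alpha red over a white 1x1 mutable image and read it back.
        public static boolean alphaBlendingWorks() {
            Image probe = Image.createImage(1, 1);   // mutable, starts out white
            Graphics g = probe.getGraphics();
            int[] in = { 0x80FF0000 };               // 50% alpha, pure red
            g.drawRGB(in, 0, 1, 0, 0, 1, 1, true);   // true = apply the alpha
            int[] out = new int[1];
            probe.getRGB(out, 0, 1, 0, 0, 1, 1);
            int green = (out[0] >> 8) & 0xFF;        // ~0x80 if blended with white
            return green > 0x20 && green < 0xE0;     // rough thresholds, tune as needed
        }
    }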
While creating a Vulkan swapchain, I've gotten stuck on the imageColorSpace and imageFormat parameters in VkSwapchainCreateInfoKHR. I understand that imageFormat is how the actual pixel is encoded and stored in memory, and that if its color isn't linear, Vulkan converts it during sampling and writing operations in shaders. My hardware supports only VK_COLOR_SPACE_SRGB_NONLINEAR_KHR, which I suppose means that color in the swapchain image should be stored in a non-linear fashion, but VK_FORMAT_B8G8R8A8_SRGB results in gamma-0.44-corrected colors, while VK_FORMAT_B8G8R8A8_UNORM works fine. How does imageFormat relate to imageColorSpace, and what does imageColorSpace specify?
Vulkan is colorspace agnostic. Though at some point before the bits are converted to photons, the colorspace must become known. That's why it is asked for in the swapchain creation.
imageFormat specifies what the image format is (same as it does in vkCreateImage()). imageColorSpace is how the swapchain/display interprets the values when the image is presented.
So if you have UNORM, that means an unspecified color space: good ol' raw bits. Or rather, it means that you are in charge of the color space (if the image even stores color, and not something else entirely). If you use VK_COLOR_SPACE_SRGB_NONLINEAR_KHR with a *_UNORM format, the swapchain takes your bits and assumes they are already sRGB-encoded when presented. If it is *_SRGB, that is no different for the swapchain itself, but the created VkImages are of an *_SRGB format, which means that when you use them the colors are linearized on reads and re-encoded on writes for you (same as would happen if you created a non-swapchain VkImage).
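That encode-on-write is also why *_SRGB looked "gamma 0.44 corrected" in the question: data that was already sRGB-encoded got encoded a second time. A rough illustration of the two sides of the sRGB transfer function (plain Java, no Vulkan; the constants are from the sRGB spec):

    // A *_SRGB image format applies encode() when a shader writes and
    // decode() when a shader reads; a *_UNORM format applies neither.
    public final class Srgb {
        // Linear light -> sRGB-encoded value, both in [0, 1].
        static double encode(double linear) {
            return linear <= 0.0031308
                    ? 12.92 * linear
                    : 1.055 * Math.pow(linear, 1.0 / 2.4) - 0.055;
        }

        // sRGB-encoded value -> linear light, both in [0, 1].
        static double decode(double encoded) {
            return encoded <= 0.04045
                    ? encoded / 12.92
                    : Math.pow((encoded + 0.055) / 1.055, 2.4);
        }

        public static void main(String[] args) {
            double alreadyEncoded = encode(0.2);
            System.out.println(alreadyEncoded);         // what UNORM presents as-is
            System.out.println(encode(alreadyEncoded)); // double-encoded: washed out
        }
    }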
All other possible values of VkColorSpaceKHR specify a recognised colour space encoding, and I assume that creating a swapchain with those values would mean that the final pixel data submitted by the application to the presentation queue would be transformed, somewhere down the line, into a decoded space, through a known EOTF.
The specification defines VK_COLOR_SPACE_PASS_THROUGH_EXT as follows:
"VK_COLOR_SPACE_PASS_THROUGH_EXT specifies that color components are used 'as is'. This is intended to allow applications to supply data for color spaces not described here."
What would this mean for my application? Let's say (entirely hypothetically) that I was building some fancy spectrum renderer and I was using a custom system for representing colours, and then that would all be transformed into a final RGB format, with just the three channels of intensity values, independent of colour space or any encoding.
Would my image really just be blitted to the screen "as is", as the specification claims? If so, how does the implementation know what to do with the given pixel data? Would it result in every pixel on my display being set to those raw RGB intensities, with no transfer functions being applied?
Thanks, and as always, I do not really know what I'm talking about so please bear with me.
Pass-through means pass it through. In terms of display presentation, the Vulkan implementation is an intermediary between the user and the display engine (i.e., the part of the OS that deals with how a window accesses and manipulates the screen). Pass-through means for Vulkan to pass the data through as-is; the user is providing the data in the color space that the display engine expects it to be in.
From a large collection of jpeg images, I want to identify those more likely to be simple logos or text (as opposed to camera pictures). One identifying characteristic would be low color count. I expect most to have been created with a drawing program.
If a jpeg image has a palette, it's simple to get a color count. But I expect most files to be 24-bit color images. There's no restriction on image size.
I suppose I could create an array of 2^24 (16M) integers, iterate through every pixel, and increment the count for that 24-bit color. Yuck. Then I would count the non-zero entries. But if the JPEG compression messes with the original colors, I could end up counting a lot of unique pixels, which might be hard to distinguish from a photo. (Maybe I could convert each pixel to YUV colorspace and keep fewer counts.)
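For concreteness, a minimal sketch of that brute-force count; since only the non-zero entries matter, a 2^24-bit BitSet (about 2 MB) can stand in for the 16M-integer table (the file name is assumed):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.util.BitSet;
    import javax.imageio.ImageIO;

    public class ColorCount {
        public static void main(String[] args) throws Exception {
            BufferedImage img = ImageIO.read(new File("input.jpg")); // hypothetical file
            BitSet seen = new BitSet(1 << 24);  // one bit per 24-bit color: 2 MB
            for (int y = 0; y < img.getHeight(); y++)
                for (int x = 0; x < img.getWidth(); x++)
                    seen.set(img.getRGB(x, y) & 0xFFFFFF); // drop alpha, keep RGB
            System.out.println("distinct colors: " + seen.cardinality());
        }
    }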
Any better ideas? Library suggestions? Humorous condescensions?
Sample 10000 random coordinates and make a histogram, then analyze the histogram.
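A sketch of that approach (the 4-bits-per-channel quantization absorbs some JPEG noise; the sample size and the logo/photo threshold are made-up numbers to tune):

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.util.BitSet;
    import java.util.Random;
    import javax.imageio.ImageIO;

    public class LogoGuess {
        public static boolean looksLikeLogo(BufferedImage img) {
            BitSet bins = new BitSet(1 << 12);  // 4 bits per channel -> 4096 bins
            Random rnd = new Random();
            for (int i = 0; i < 10000; i++) {
                int rgb = img.getRGB(rnd.nextInt(img.getWidth()),
                                     rnd.nextInt(img.getHeight()));
                int bin = ((rgb >> 12) & 0xF00)   // top 4 bits of red
                        | ((rgb >> 8)  & 0x0F0)   // top 4 bits of green
                        | ((rgb >> 4)  & 0x00F);  // top 4 bits of blue
                bins.set(bin);
            }
            return bins.cardinality() < 64;       // threshold: an assumption to tune
        }

        public static void main(String[] args) throws Exception {
            System.out.println(looksLikeLogo(ImageIO.read(new File(args[0]))));
        }
    }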
Does anyone know how I can change the brightness and contrast of a color image? I know about vtkImageMapToWindowLevel, but after setting the level or window of the image in that class, the color image becomes grayscale.
Thanks for answers.
By definition, a color image is already color mapped, and you cannot change the brightness/contrast of the image without decomposition and recomposition.
First, define a pair of numbers called brightness and contrast in whatever way you want. Normally, I'd take brightness as the maximum value, and contrast as the ratio between maximum and minimum. Similarly, if you want to use Window/Level semantics, "level" is the minimum scalar value, and window is the difference between maximum and minimum.
Next, you find the scalar range - the minimum and maximum values in your desired output image, using the brightness and contrast. If you're applying brightness/contrast, the scalar range is:
Maximum = brightness
Minimum = Maximum / contrast
Assume a color lookup table (LUT) with a series of colors at different proportional values, say, in the range of 0 to 1. Now, since we know the brightness and contrast, we can set up the LUT with the lower value (range 0) mapping to "minimum" and the upper value (range 1) mapping to "maximum". When this is done, a suitable class, like vtkImageMapToColors, can take the single-component input and map it to a 3- or 4-component image.
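A minimal sketch of that setup, using VTK's Java wrappers (the method names mirror the C++ API; the gray-ramp LUT and the input image are assumptions, and the native VTK libraries must already be loaded):

    import vtk.vtkImageData;
    import vtk.vtkImageMapToColors;
    import vtk.vtkLookupTable;

    public final class BrightnessContrastLut {
        static vtkImageMapToColors buildMapper(vtkImageData singleComponent,
                                               double brightness, double contrast) {
            double max = brightness;           // as defined above
            double min = max / contrast;

            vtkLookupTable lut = new vtkLookupTable();
            lut.SetRange(min, max);            // scalars in [min, max] span the table
            lut.SetHueRange(0.0, 0.0);         // assumption: a simple gray ramp
            lut.SetSaturationRange(0.0, 0.0);
            lut.SetValueRange(0.0, 1.0);
            lut.Build();

            vtkImageMapToColors map = new vtkImageMapToColors();
            map.SetLookupTable(lut);
            map.SetInputData(singleComponent); // single-component input only
            map.SetOutputFormatToRGB();        // 3-component output
            return map;
        }
    }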
Now, all this can happen only for a single-component image, as the color LUT classes (vtkScalarsToColors and related classes) are defined only on single-component images.
If you have access to the original one-component image, and you're using vtkImageMapToColors or some similar class, I'd suggest handling it at that stage.
If you don't, there is one way I can think of (a sketch follows the list):
Extract the three channels as three different images using vtkImageExtractComponents (you'll need three instances, each with the original image as input).
Independently scale the 3 channels using vtkImageShiftScale (shift by brightness, scale by contrast)
Combine the channels back using vtkImageAppendComponents
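A rough sketch of that three-step pipeline in VTK's Java wrappers (the shift and scale values are whatever you derived from your brightness/contrast definitions):

    import vtk.vtkImageAppendComponents;
    import vtk.vtkImageData;
    import vtk.vtkImageExtractComponents;
    import vtk.vtkImageShiftScale;

    public final class PerChannelShiftScale {
        static vtkImageAppendComponents build(vtkImageData colorImage,
                                              double shift, double scale) {
            vtkImageAppendComponents append = new vtkImageAppendComponents();
            for (int c = 0; c < 3; c++) {
                vtkImageExtractComponents extract = new vtkImageExtractComponents();
                extract.SetInputData(colorImage);
                extract.SetComponents(c);          // R, then G, then B

                vtkImageShiftScale shiftScale = new vtkImageShiftScale();
                shiftScale.SetInputConnection(extract.GetOutputPort());
                shiftScale.SetShift(shift);        // "shift by brightness"
                shiftScale.SetScale(scale);        // "scale by contrast"
                shiftScale.ClampOverflowOn();      // avoid wrap-around on overflow

                append.AddInputConnection(shiftScale.GetOutputPort());
            }
            return append;
        }
    }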
Another possibility is to use vtkImageMagnitude, which will convert the image back to grayscale (by taking the magnitude of the three channels together), and then re-apply the color table using vtkImageMapToColors and any of the vtkScalarsToColors classes as your lookup table.
The first method is better if your image is a real photograph or something similar, where the colors are from some 3-component source, and the second would work better if your input image is already using false colors (say an image from an IR camera, or some procedurally generated fractal that's been image mapped).
How can I see the color space of my image with OpenCV? I would like to be sure it is RGB before converting it to another one using the cvCvtColor() function.
Thanks.
Unfortunately, OpenCV doesn't provide any sort of indication as to the color space in the IplImage structure, so if you blindly pick up an IplImage from somewhere there is just no way to know how it was encoded. Furthermore, no algorithm can definitively tell you if an image should be interpreted as HSV vs. RGB - it's all just a bunch of bytes to the machine (should this be HSV or RGB?). I recommend you wrap your IplImages in another struct (or even a C++ class with templates!) to help you keep track of this information. If you're really desperate and you're dealing only with a certain type of images (outdoor scenes, offices, faces, etc.) you could try computing some statistics on your images (e.g. build histogram statistics for natural RGB images and some for natural HSV images), and then try to classify your totally unknown image by comparing which color space your image is closer to.
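That wrapping idea, translated to the newer Java bindings just for illustration (the class and enum here are made up, not OpenCV API):

    import org.opencv.core.Mat;

    // Hypothetical wrapper: OpenCV itself will not track this for you.
    public class TaggedMat {
        public enum ColorSpace { BGR, RGB, HSV, GRAY }

        public final Mat mat;
        public final ColorSpace space;

        public TaggedMat(Mat mat, ColorSpace space) {
            this.mat = mat;
            this.space = space;
        }
    }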
txandi makes an interesting point. OpenCV has a BGR colorspace which is used by default. This is similar to the RGB colorspace except that the B and R channels are physically switched in the image. If the physical channel ordering is important to you, you will need to convert your image with this function: cvCvtColor(defaultBGR, imageRGB, CV_BGR2RGB).
As rcv said, there is no method to programmatically detect the color space by inspecting the three color channels, unless you have a priori knowledge of the image content (e.g., there is a marker in the image whose color is known). If you will be accepting images from unknown sources, you must allow the user to specify the color space of their image. A good default would be to assume RGB.
If you modify any of the pixel colors before display, and you are using a non-OpenCV viewer, you should probably use cvCvtColor(src,dst,CV_BGR2RGB) after you have finished running all of your color filters. If you are using OpenCV for the viewer or will be saving the images out to file, you should make sure they are in BGR color space.
The IplImage struct has a field named colorModel consisting of 4 chars. Unfortunately, OpenCV ignores this field. But you can use this field to keep track of different color models.
I basically split the channels and display each one to figure out the color space of the image I'm using. It may not be the best way, but it works for me.
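A small sketch of that, using the OpenCV Java bindings (file name assumed; HighGui is OpenCV's bundled Java viewer):

    import java.util.ArrayList;
    import java.util.List;
    import org.opencv.core.Core;
    import org.opencv.core.Mat;
    import org.opencv.highgui.HighGui;
    import org.opencv.imgcodecs.Imgcodecs;

    public class SplitChannels {
        public static void main(String[] args) {
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // load native OpenCV
            Mat img = Imgcodecs.imread("input.jpg");      // hypothetical file; imread loads BGR
            List<Mat> channels = new ArrayList<Mat>();
            Core.split(img, channels);                    // one single-channel Mat per channel
            for (int i = 0; i < channels.size(); i++) {
                HighGui.imshow("channel " + i, channels.get(i));
            }
            HighGui.waitKey(0); // eyeball which channel is which
        }
    }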
For a detailed explanation, you can refer to the link below.
https://dryrungarage.wordpress.com/2018/03/11/image-processing-basics/