When I load an image in OpenCV it is always darker than the original. Why?

So I load a color .png file taken with an iPhone using cvLoadImage. After it's been loaded, when I immediately display it in my X11 terminal, the image is definitely darker than the original PNG file.
I currently use this to load the image:
IplImage *img3 = cvLoadImage( "bright.png", 1);
For the second parameter I have tried all of the following:
CV_LOAD_IMAGE_UNCHANGED
CV_LOAD_IMAGE_GRAYSCALE
CV_LOAD_IMAGE_COLOR
CV_LOAD_IMAGE_ANYDEPTH
CV_LOAD_IMAGE_ANYCOLOR
but none of these worked. Grayscale definitely made the image grayscale. But as suggested by http://www.cognotics.com/opencv/docs/1.0/ref/opencvref_highgui.htm, even using CV_LOAD_IMAGE_ANYDEPTH | CV_LOAD_IMAGE_ANYCOLOR to load the image as faithfully as possible resulted in a darker image being displayed in the terminal.
Does anyone have any ideas on how to get the original image to display properly?
Thanks a lot in advance.

Yes, OpenCV does not apply Gamma correction.
#include <cmath> // for pow()

// from: http://gegl.org/
// value: 0.0-1.0; qreal is Qt's floating-point typedef (double on most platforms)
static inline qreal
linear_to_gamma_2_2 (qreal value)
{
    if (value > 0.0030402477)
        return 1.055 * pow (value, (1.0 / 2.4)) - 0.055;
    return 12.92 * value;
}

// from: http://gegl.org/
static inline qreal
gamma_2_2_to_linear (qreal value)
{
    if (value > 0.03928)
        return pow ((value + 0.055) / 1.055, 2.4);
    return value / 12.92;
}
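
If the missing gamma step does turn out to be the problem, the curve above can be applied to an 8-bit image with a lookup table. A minimal sketch, assuming an 8-bit BGR cv::Mat holding linear values; applyGamma22 is a hypothetical helper, not an OpenCV function:

#include <cmath>
#include <opencv2/opencv.hpp>

// Hypothetical helper: applies the linear -> gamma 2.2 curve above to every
// channel of an 8-bit image via a 256-entry lookup table.
static cv::Mat applyGamma22(const cv::Mat &src)
{
    cv::Mat lut(1, 256, CV_8U);
    for (int i = 0; i < 256; ++i)
    {
        double v = i / 255.0; // normalize to the 0.0-1.0 range the curve expects
        v = (v > 0.0030402477) ? 1.055 * std::pow(v, 1.0 / 2.4) - 0.055
                               : 12.92 * v;
        lut.at<uchar>(i) = cv::saturate_cast<uchar>(v * 255.0);
    }
    cv::Mat dst;
    cv::LUT(src, lut, dst); // per-channel table lookup over the whole image
    return dst;
}

Using a 256-entry LUT means pow() runs 256 times instead of once per pixel, which matters on full-size iPhone photos.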

Does it only happen when you load it in OpenCV? Does opening it with any other viewer show a difference?
I can't confirm this without a few tests, but I believe the iPhone display gamma is 1.8 (source: http://www.colorwiki.com/wiki/Color_on_iPhone#The_iPhone.27s_Display). Your X11 monitor is probably adjusted for 2.2 (like the rest of the world).
If this theory holds, yes, images are going to appear darker on X11 than on the iPhone. You may change your monitor calibration or do some image processing to account for the difference.
Edit:
I believe OpenCV really does not apply gamma correction. My reference to this is here:
http://permalink.gmane.org/gmane.comp.lib.opencv.devel/837
You might want to implement it yourself or "correct" it with ImageMagick. This page instructs you on how to do so:
http://www.4p8.com/eric.brasseur/gamma.html
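
If you would rather test the theory before writing any code, ImageMagick's -gamma operator applies this kind of compensation from the command line. A rough sketch, where the 1.22 factor (roughly 2.2/1.8) is only a guess at the iPhone-to-X11 mismatch:

convert bright.png -gamma 1.22 bright_corrected.png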

I usually load an image with:
cvLoadImage("file.png", CV_LOAD_IMAGE_UNCHANGED);
One interesting test you could do to detect whether OpenCV is really messing with the image data is simply creating another image with cvCreateImage(), copying the data to this newly created image, and saving it to another file with cvSaveImage().
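A minimal sketch of that round-trip test, using the same C API as the question (compile as C++ so cvSaveImage's optional third parameter can be left out):

#include <opencv/highgui.h>

int main()
{
    // Load without any conversion, copy into a freshly allocated image, and
    // write it back out. If "copy.png" matches "bright.png" in an external
    // viewer, OpenCV is not altering the pixel data; the darkening happens
    // at display time.
    IplImage *src = cvLoadImage("bright.png", CV_LOAD_IMAGE_UNCHANGED);
    if (!src)
        return 1; // file not found or unreadable
    IplImage *copy = cvCreateImage(cvGetSize(src), src->depth, src->nChannels);
    cvCopy(src, copy, NULL);
    cvSaveImage("copy.png", copy);
    cvReleaseImage(&src);
    cvReleaseImage(&copy);
    return 0;
}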
Maybe it's just a display error. Of course, I would suggest updating to the most recent version of OpenCV.

Related

Vulkan output RGBA color to BGRA color attachment

I wrote a simple Vulkan demo with BGRA images queried from the swapchain. If I output RGBA pixels to it, the R and B channels in the final result are swapped.
But when I checked the demo in the Vulkan SDK directory, I found that it also uses a BGRA image view for color output (I checked this in the code and in RenderDoc), yet the final result is correct!
So, when did the conversion happen? Did I miss something?
[Screenshots omitted: the Vulkan SDK demo rendering correctly to a BGRA output, versus my result with swapped channels.]
Edit:
Sorry about the lack of code. The first answer gave me a hint to find the real problem: the image was loaded from disk with FreeImage, which stores images in BGRA format by default on Windows.
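In case it helps others: one fix is to create the Vulkan image and view with a matching VK_FORMAT_B8G8R8A8_* format; the other is to swizzle the buffer before uploading it. A minimal sketch of the swizzle, assuming tightly packed 4-byte pixels (the function and parameter names are mine, not FreeImage's):

#include <cstddef>
#include <cstdint>
#include <utility>

// Swap the B and R bytes of every pixel in place, turning a BGRA buffer into
// RGBA so it can be uploaded to a VK_FORMAT_R8G8B8A8_* image.
void bgraToRgba(std::uint8_t *pixels, std::size_t width, std::size_t height)
{
    for (std::size_t i = 0; i < width * height * 4; i += 4)
        std::swap(pixels[i], pixels[i + 2]); // B <-> R; G and A stay put
}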
Thanks for your help and advice, @opa and @solidpixel.
You're either setting up your texture reads incorrectly, or you're setting up your swapchain writes incorrectly. If I had to guess, you're uploading data in the wrong texture format, but without a complete example it's hard to tell.

Vuforia video playback with fixed dimension

I am using the Vuforia video playback demo with cloud recognition.
I have combined both projects and they work properly. But currently the video dimensions follow the detected object, and I need a fixed width and height when the video plays.
Can anyone help me?
Thanks in advance.
Well, apparently Vuforia fixes the width and height at the start of the game no matter what the size of the object is. I could not find exactly when this operation is performed, but it is done at the beginning of your game. When you change the size of the ImageTarget at runtime it is no longer fixed. Add these lines to the OnTrackingFound function of your DefaultTrackableEventHandler.cs:
if (this.name == "WhateverTheNameOfYourRelatedImageTarget" && !isScaled)
{
    // Increase the size however you want; I just added 1 to each dimension
    this.transform.localScale += Vector3.one;
    // Set isScaled so we don't rescale every time the target is found;
    // initially it should be false
    isScaled = true;
}
Good luck!
What I usually do, instead of video playback, is play the video on a canvas object and hook that object to the DefaultTrackableEventHandler script. When the target is found I call gameObject.SetActive(true), and when it is lost, gameObject.SetActive(false). With this method the size of the GameObject is fixed and it stays in a fixed location.
I just made an example you can get here (you have to import it into a project and open the scene Assets/VideoExample/Examples). You can see a bit more clearly there what ScreenSpace - Overlay does; it might be better to just switch to ScreenSpace - Camera in general.

Is there a way in node gm (GraphicsMagick) to crop by shape?

I'm trying to crop a picture using a shape image, and tried this:
let imageDoc = gm(filePath).resize(100, 100);
imageDoc.mask(`${shapesPath}/hexagon.svg`);
It acts like nothing was done, but the resize works correctly.
I also tried using a PNG file instead of an SVG, but then there is no result at all. Is there some way to debug it, or am I doing something wrong?
According to user Pirijan:
Mask doesn't do anything on its own; it's pretty useless really. It merely takes the supplied mask image and uses it to write-protect the masked pixels from subsequent alteration if additional processing / drawing is performed on the image.
So it seems that .mask() is only useful when used together with another command.
The documentation for GraphicsMagick can be quite confusing, and I'm sure there are multiple ways of masking an image. Here's how I do it:
function mask(img, mask) {
    gm()
        .command("composite")
        .compose("CopyOpacity")
        .in(img, mask, "-matte")
        .write(img, function (err) {
            if (err) {
                console.log(err);
            } else {
                console.log("Success! Image " + img + " was masked with mask " + mask);
            }
        });
}
However, this doesn't use the alpha channel from mask; instead it works with a black-and-white mask with no alpha channel. It also requires both img and mask to have identical dimensions.
It works by copying the value of each pixel in mask to the alpha channel of img. The -matte option tells gm to create an alpha channel on img if it doesn't already have one.
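For example, applying it to the resized output from the question (the file names here are just assumptions):

mask("resized.png", "hexagon-mask.png");

Remember that this only succeeds if both files already have identical dimensions, e.g. after resizing each to 100x100 first.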
Since node-gm uses the debug library from visionmedia, you can turn on debug output to the console by setting the environment variable DEBUG=gm, like so (in Unix/OS X):
DEBUG=gm node index.js
This will print the exact commands that node-gm invokes.

OpenCV store image in Object (OOP)

I'm programming in C++ using OpenCV in an object-oriented approach. Basically I have an array of objects called People[8]. For each element, I want to assign an image by taking a picture with the webcam. I did something like this:
for (int i = 0; i < 8; i++)
{
    cvWaitKey(0); // wait for input, then take a picture
    Mat grabbed = cam1.CamCapture();
    People[i].setImage(grabbed);
    imshow("picture", grabbed);
    cvWaitKey(1);
}
I face two problems here:
1) imshow does not display the latest image captured; it displays the previously taken image, i.e. (i-1) instead of i.
2) When I display all the images together, 8 windows appear and all of them display the last image captured by the camera.
I do not have any clue what is wrong. Could anyone please advise? Thank you in advance.
"all of them are displaying the last image captured on the camera."
the images you get from the capture point to driver memory. so the former image gets overwritten by the latter.
you need to store a clone() of the mat you get, like:
People[i].setImage( grabbed.clone() );
I have not worked with OpenCV for a while, but I would move cvWaitKey(1) around, and I would not have two calls to it; from what I remember it is similar to glFlush(). Also, I would change the 1 to 10; for some reason I remember 1 not working.
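Putting both answers together, a runnable sketch of the corrected loop; cv::VideoCapture stands in for the asker's cam1.CamCapture(), and Person is a minimal stand-in for the People class:

#include <opencv2/opencv.hpp>

// Stand-in for the asker's People class; only the method used above.
struct Person
{
    cv::Mat image;
    void setImage(const cv::Mat &m) { image = m; }
};

int main()
{
    cv::VideoCapture cap(0); // stand-in for cam1.CamCapture()
    Person People[8];
    for (int i = 0; i < 8; i++)
    {
        cv::waitKey(0); // wait for a key press, then grab
        cv::Mat grabbed;
        cap >> grabbed;
        People[i].setImage(grabbed.clone()); // deep copy, not driver memory
        cv::imshow("picture", grabbed);
        cv::waitKey(10); // give HighGUI time to repaint the window
    }
    return 0;
}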

How to get moving object's mask using OpenCV BackgroundSubtractorMOG2

I want to mask the moving objects from a video.
I found that OpenCV has some built-in BackgroundSubtractors which could possibly save me a lot of time. However, according to the official reference, the function:
void BackgroundSubtractorMOG2::operator()(InputArray image, OutputArray fgmask, double learningRate=-1)
should output a mask, fgmask, but it doesn't. After invoking the above method, the fgmask variable contains only the contour of the mask. That's weird. All I want is a simple closed region filled with white (for example) to represent the moving objects. How can I do that?
Any reply or recommendation would be much appreciated. Thanks a lot.
Here's my code:
#include <opencv2/opencv.hpp>

int main(int argc, char *argv[])
{
    // history = 30, varThreshold = 16, shadow detection disabled
    cv::BackgroundSubtractorMOG2 bg(30, 16.0, false);
    cv::VideoCapture cap(0);
    cv::Mat frame, fmask;
    cv::namedWindow("mask", CV_WINDOW_AUTOSIZE);
    for (;;)
    {
        cap >> frame;
        bg(frame, fmask, -1); // update the model and fetch the foreground mask
        cv::imshow("mask", fmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}
[A snapshot of the output video appeared here, showing only contours in the mask.]
P.S. My working environment is OpenCV 2.4.3 on OS X 10.8 and Xcode 4.5.2 with the Apple LLVM 4.1 compiler.
If you want to get whole objects filled with white pixels in the foreground, then I would ask you to tell me something about your experience first.
My question is: with the code you mentioned above, do you get more white pixels when you generate more motion in front of your camera?
If yes, then there are two parameters to learn about for your requirement.
The first is the history parameter, which you have configured as 30 in the constructor BackgroundSubtractorMOG2(30, 16.0, false). You can test this parameter by increasing it, say to 300. It maintains the motion history of the object in the foreground, so if you have moved completely away from your starting location within those 300 frames, you will get the whole object covered with white pixels, as you want. But it will be erased gradually, so it cannot give you a 100% solution.
The second parameter is the learning rate. In the code you mentioned, bg(frame, fmask, -1), the -1 is your learning rate; you can set it between 0.0 and 1.0, and the default is -1. When you set it to 0, you will get what you want for objects that were not part of the frame at the start of the video. You could call these "foreign objects"; they will be covered with white pixels, as in the sketch below.
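Putting both parameters together, a minimal sketch using the same API as the code in the question, with the history raised to 300 and the learning rate pinned to 0:

#include <opencv2/opencv.hpp>

int main()
{
    // History raised from 30 to 300 frames; threshold and shadow flag as before.
    cv::BackgroundSubtractorMOG2 bg(300, 16.0, false);
    cv::VideoCapture cap(0);
    cv::Mat frame, fmask;
    for (;;)
    {
        cap >> frame;
        if (frame.empty()) break;
        // learningRate = 0: the background model stops adapting, so a
        // "foreign" object entering the scene stays white in the mask.
        bg(frame, fmask, 0);
        cv::imshow("mask", fmask);
        if (cv::waitKey(30) >= 0) break;
    }
    return 0;
}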
Experiment with the settings above and share your experience.
