How to read jpg using FreeImage - visual-c++

I am new to FreeImage. I just want to read a JPEG image and display it in my MFC dialog. How do I do that? Using ImageStone I did it like this:
img.Load(blob.data, size, IMG_JPG);
img.Draw(hdc, DC);
Now, how do I do the same thing using FreeImage?

I tried it and it worked exactly the way you described. I used SHCreateMemStream and fed the stream to the overloaded Load method of CImage. Everything worked perfectly.
Thank you so much,
Makoto
CImage im;
IStream* is = SHCreateMemStream(Blob.pData, nSize);  // wrap the in-memory JPEG in a COM stream
HRESULT hr = im.Load(is);                            // CImage decodes JPEG/PNG/GIF/BMP via GDI+
is->Release();                                       // the stream is no longer needed after Load
RECT rect = { 0, 0, 500, 500 };
BOOL b = im.Draw(hdc, rect);
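For reference, since the original question asked about FreeImage: a memory blob can be decoded through a FIMEMORY stream and the resulting DIB blitted straight to the dialog's device context. The following is only a rough, untested sketch, reusing the Blob.pData, nSize and hdc names from above and skipping all error checks:
FIMEMORY* mem = FreeImage_OpenMemory((BYTE*)Blob.pData, nSize);
FREE_IMAGE_FORMAT fif = FreeImage_GetFileTypeFromMemory(mem, 0);  // should detect FIF_JPEG
FIBITMAP* dib = FreeImage_LoadFromMemory(fif, mem, 0);
FreeImage_CloseMemory(mem);
// FreeImage stores images as DIBs, so StretchDIBits can draw them directly.
StretchDIBits(hdc, 0, 0, 500, 500,
              0, 0, FreeImage_GetWidth(dib), FreeImage_GetHeight(dib),
              FreeImage_GetBits(dib), FreeImage_GetInfo(dib),
              DIB_RGB_COLORS, SRCCOPY);
FreeImage_Unload(dib);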

Related

Issues when writing an image

I'm working on a J2ME project right now where I need to select an image and write it to a particular folder, say somewhere on the memory card, with a desired file name. I am able to select the image and display it, but when trying to save it I run into trouble. When I try to save, an image file is created, but its size is 0.0 KB and when I click on the image it says "File format not supported".
This is my code:
fileCon = (FileConnection)Connector.open(path + "Contacts/contactImages/" + FIRST_NAME + ".png", Connector.READ_WRITE);
if (!fileCon.exists())
{
    fileCon.create();
}
int h = contactImage.getHeight();
int w = contactImage.getWidth();
int[] size = new int[w*h];
contactImage.getRGB(size, 0, w, 0, 0, w, h);
ByteArrayOutputStream baos = new ByteArrayOutputStream();
DataOutputStream dos = new DataOutputStream(baos);
for (int i = 0; i < size.length; i++)
{
    dos.writeInt(size[i]);
}
But you are writing the pixel data into an (in-memory) ByteArrayOutputStream, and not into a file stream. Shouldn't there be something like
DataOutputStream dos = fileCon.openDataOutputStream();
And of course the output stream should be closed to make sure all data is flushed.
Another thing is that you are saving raw ARGB data and not an encoded PNG image, so the .png extension may confuse some image viewers. Perhaps .bmp would be better.
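Something along these lines, reusing fileCon and contactImage from the question (a rough sketch only; IOException handling is omitted, and the data written is still raw ARGB rather than a real image format):
DataOutputStream dos = fileCon.openDataOutputStream();
try {
    int h = contactImage.getHeight();
    int w = contactImage.getWidth();
    int[] pixels = new int[w * h];
    contactImage.getRGB(pixels, 0, w, 0, 0, w, h);
    for (int i = 0; i < pixels.length; i++) {
        dos.writeInt(pixels[i]);   // raw ARGB ints, not an encoded image
    }
    dos.flush();
} finally {
    dos.close();       // closing flushes the data and releases the file
    fileCon.close();
}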

Converting 24bit RGB image to 8bit grayscale with opencv-node in node.js

I'm trying to capture images within node.js from a webcam connected to a Raspberry Pi. Capturing works fine, but now when I want to transmit the images I have some serious problems with frame rate and lag.
My first idea was to convert the RGB images to 8-bit grayscale, which should improve performance by a factor of 3 (I hope).
I'm using node.js and opencv-node for this. Here is a code snippet of what I have at the moment:
var startT = new Date().getTime();
var capture = new cv.VideoCapture();
var frame = new cv.Mat;
var grey = new cv.Mat;
var imgPath = __dirname + "/ramdisk/";
var frame_number = 0;
capture.open(0);
if (!capture.isOpened())
{
    console.log("aCapLive could not open capture!");
    return;
}
function ImgCap()
{
    var elapsed = (new Date().getTime() - startT) / 1000;
    capture.read(frame);
    cv.imwrite(imgPath + ".bmp", frame);
    id = setImmediate(ImgCap);
}
ImgCap();
I tried to use something like
cv.cvtColor(frame, grey, "CV_BGR2GRAY");
after reading the image, but I only get a TypeError saying that argument 2 has to be an integer. I don't know what to do at the moment. I referred to http://docs.opencv.org/doc/tutorials/introduction/load_save_image/load_save_image.html#explanation for converting an RGB image to grayscale.
Besides, I'm still not sure whether this just gives me a 24-bit grayscale image instead of 8-bit?!
Thanks for any help in advance! :)
Use CV_BGR2GRAY instead of "CV_BGR2GRAY".
The former is an integral constant, the latter a char*.
And, no fear, that will be 8-bit grayscale.
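A minimal sketch of the corrected call; whether the constant is exposed as cv.CV_BGR2GRAY depends on the opencv-node build, otherwise the raw OpenCV value (CV_BGR2GRAY is 6) can be passed directly:
capture.read(frame);
cv.cvtColor(frame, grey, cv.CV_BGR2GRAY);   // pass the integral constant, not the string "CV_BGR2GRAY"
cv.imwrite(imgPath + ".bmp", grey);         // grey is now a single-channel 8-bit Mat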

UIImage resizing. OK on simulator, not on device

In an iPhone app, I wrote some code to resize an image taken from the photo album, in order to use it as background for the app.
The code is strongly inspired by what I could find on the net, namely here:
http://forrst.com/posts/UIImage_simple_resize_and_crop_image-sUG#comment-land
It works fine on the simulator. But it looks like no resizing at all is happening on the device. Why is that? Any idea?
Thank you for any tip.
Hi Michel,
I don't know more about your code, but I am using the code below; you can try it and you should find your solution.
You need to pass the image and the required width and height to this function, and you will get back another image with that new size.
-(UIImage *)resizeImage:(UIImage *)image3 width:(int)width height:(int)height {
    CGImageRef imageRef = [image3 CGImage];
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    //if (alphaInfo == kCGImageAlphaNone)
    alphaInfo = kCGImageAlphaNoneSkipLast;
    // Draw the source image into a bitmap context of the requested size.
    CGContextRef bitmap = CGBitmapContextCreate(NULL, width, height, CGImageGetBitsPerComponent(imageRef), 4 * width, CGImageGetColorSpace(imageRef), alphaInfo);
    CGContextDrawImage(bitmap, CGRectMake(0, 0, width, height), imageRef);
    // Wrap the resized bitmap in a UIImage and release the Core Graphics objects.
    CGImageRef ref = CGBitmapContextCreateImage(bitmap);
    UIImage *result = [UIImage imageWithCGImage:ref];
    CGContextRelease(bitmap);
    CGImageRelease(ref);
    return result;
}
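A possible call site, e.g. when setting the picked photo as the app background (pickedImage and backgroundImageView are just placeholder names, not from the question):
UIImage *background = [self resizeImage:pickedImage width:320 height:480];
self.backgroundImageView.image = background;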

System.Drawing.Graphics.Transform has no effect in Mono under Ubuntu

I have some very simple code that draws an image on a bitmap; the image must be drawn in the lower right corner. I use TranslateTransform to move the image. This works fine when run under Windows; however, TranslateTransform has no effect when run in Mono under Linux.
byte[] imageBytes = File.ReadAllBytes(@"/home/alexey/Downloads/test.png");
using (Bitmap bmp = new Bitmap(500, 500))
{
    using (Graphics gr = Graphics.FromImage(bmp))
    {
        ImageAttributes attr = null;
        using (Image image = Image.FromStream(new MemoryStream(imageBytes)))
        {
            GraphicsUnit srcGU = GraphicsUnit.Pixel;
            RectangleF srcRect = image.GetBounds(ref srcGU);
            RectangleF bounds = new RectangleF(0, 0, 100, 100);
            // Destination points specify the bounding parallelogram.
            PointF[] dstPoints = new PointF[]
            { bounds.Location,
              new PointF(bounds.X + bounds.Width, bounds.Y),
              new PointF(bounds.X, bounds.Y + bounds.Height) };
            // The image must end up in the lower right corner, and it does when the code runs under Windows.
            // But when the code runs under Linux, the image is in the upper left corner.
            gr.TranslateTransform(400, 400);
            gr.DrawImage(image, dstPoints, srcRect, srcGU, attr);
        }
    }
    bmp.Save(@"/home/alexey/Downloads/out.png", ImageFormat.Png);
}
Of course, the code is a simplified version of the real code, which must work in both Windows and Linux environments. I narrowed the code down and found that the problem under Linux occurs because Graphics.Transform has no effect in Mono under Linux. Any ideas?
I think the easiest solution would be to simply add 400 to the X and Y components of your dstPoints.
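A sketch of that workaround with the snippet from the question: drop the TranslateTransform call and bake the offset into bounds instead, so the destination points already land in the lower right corner:
RectangleF bounds = new RectangleF(400, 400, 100, 100);
PointF[] dstPoints = new PointF[]
{
    bounds.Location,
    new PointF(bounds.X + bounds.Width, bounds.Y),
    new PointF(bounds.X, bounds.Y + bounds.Height)
};
// No gr.TranslateTransform(400, 400) needed; the offset is in the points themselves.
gr.DrawImage(image, dstPoints, srcRect, srcGU, attr);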

Brightness method shows "Out of memory" exception

To change the brightness of an image in C# .NET 4 I have used the following method.
public void SetBrightness(int brightness)
{
    imageHandler.RestorePrevious();
    if (brightness < -255) brightness = -255;
    if (brightness > 255) brightness = 255;
    ColorMatrix cMatrix = new ColorMatrix(CurrentColorMatrix.Array);
    cMatrix.Matrix40 = cMatrix.Matrix41 = cMatrix.Matrix42 = brightness / 255.0F;
    imageHandler.ProcessBitmap(cMatrix);
}
internal void ProcessBitmap(ColorMatrix colorMatrix)
{
    Bitmap bmap = new Bitmap(_currentBitmap.Width, _currentBitmap.Height);
    ImageAttributes imgAttributes = new ImageAttributes();
    imgAttributes.SetColorMatrix(colorMatrix);
    Graphics g = Graphics.FromImage(bmap);
    g.InterpolationMode = InterpolationMode.NearestNeighbor;
    g.DrawImage(_currentBitmap, new Rectangle(0, 0, _currentBitmap.Width,
        _currentBitmap.Height), 0, 0, _currentBitmap.Width,
        _currentBitmap.Height, GraphicsUnit.Pixel, imgAttributes);
    _currentBitmap = (Bitmap)bmap.Clone();
}
If the brightness is changed several times, an "Out of memory" exception is thrown. I have tried to use a using block but it was in vain.
Any ideas?
Please see the link
http://www.codeproject.com/Articles/227016/Image-Processing-using-Matrices-in-Csharp
and suggest whether any optimization is possible in the methods (rotation, brightness, crop and undo).
I have downloaded the project from CodeProject and I have fixed the memory leak. You need to dispose the Graphics object and the _currentBitmap image before you overwrite it. Also, you need to stop using .Clone().
If you replace the contents of the ProcessBitmap function with this code, the memory leak is gone:
internal void ProcessBitmap(ColorMatrix colorMatrix)
{
    Bitmap bmap = new Bitmap(_currentBitmap.Width, _currentBitmap.Height);
    ImageAttributes imgAttributes = new ImageAttributes();
    imgAttributes.SetColorMatrix(colorMatrix);
    using (Graphics g = Graphics.FromImage(bmap))
    {
        g.InterpolationMode = InterpolationMode.NearestNeighbor;
        g.DrawImage(_currentBitmap, new Rectangle(0, 0, _currentBitmap.Width, _currentBitmap.Height), 0, 0, _currentBitmap.Width, _currentBitmap.Height, GraphicsUnit.Pixel, imgAttributes);
    }
    _currentBitmap.Dispose();
    _currentBitmap = bmap;
}
Also, here are some tips for further optimization:
Stop using .Clone(). I have seen the code, and it uses .Clone() everywhere. Don't clone objects unless it is really necessary. In image processing you need a lot of memory to store large image files, so do as much processing as you can in place.
You can pass Bitmap objects by reference between methods; you can improve performance and reduce the memory cost that way (see the sketch after this list).
Always use using blocks when working with Graphics objects.
Call .Dispose() on Bitmap objects when you're sure you don't need them anymore.
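For instance, a rough sketch of what passing the bitmap by reference and disposing it looks like in practice (ApplyMatrix is a hypothetical helper, not part of the CodeProject code):
static void ApplyMatrix(ref Bitmap bitmap, ColorMatrix matrix)
{
    Bitmap result = new Bitmap(bitmap.Width, bitmap.Height);
    using (Graphics g = Graphics.FromImage(result))
    using (ImageAttributes attributes = new ImageAttributes())
    {
        attributes.SetColorMatrix(matrix);
        g.DrawImage(bitmap, new Rectangle(0, 0, bitmap.Width, bitmap.Height),
                    0, 0, bitmap.Width, bitmap.Height, GraphicsUnit.Pixel, attributes);
    }
    bitmap.Dispose();   // release the old image before swapping in the new one
    bitmap = result;    // the caller's reference now points at the processed bitmap
}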
