LoadImage unexpected size LR_DEFAULTSIZE

If I am reading the documentation correctly, the following call should return a bitmap with dimensions cx, cy. However, it does not. Am I doing something wrong?
IDB_BITMAP1 is 16x16 pixels.
HBITMAP hBmp = (HBITMAP)LoadImage(AfxGetInstanceHandle(), MAKEINTRESOURCE(IDB_BITMAP1), IMAGE_BITMAP, 0, 0, LR_DEFAULTSIZE);
BITMAP bmp = {0};
::GetObject(hBmp, sizeof(BITMAP), &bmp);
int cx = GetSystemMetrics(SM_CXICON);
int cy = GetSystemMetrics(SM_CYICON);
results: cx=32, cy=32, bmp.bmWidth=16, bmp.bmHeight=16
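A minimal sketch for comparison, assuming the goal is an icon-sized bitmap (this is not the original code): passing the desired dimensions explicitly makes LoadImage stretch the bitmap, whereas LR_DEFAULTSIZE is documented in terms of the icon and cursor system metrics.
int cxIcon = GetSystemMetrics(SM_CXICON);   // typically 32
int cyIcon = GetSystemMetrics(SM_CYICON);
// Requesting the size explicitly makes LoadImage return a stretched copy of the 16x16 resource.
HBITMAP hBmpScaled = (HBITMAP)LoadImage(AfxGetInstanceHandle(), MAKEINTRESOURCE(IDB_BITMAP1), IMAGE_BITMAP, cxIcon, cyIcon, 0);
BITMAP bmpScaled = {0};
::GetObject(hBmpScaled, sizeof(BITMAP), &bmpScaled);   // bmpScaled.bmWidth == cxIcon, bmpScaled.bmHeight == cyIcon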

Drawing sprite with additional alpha channel

I can draw an image with an alpha channel fine, but I can't modify the alpha channel via the color parameter.
I tried this:
d3dImage->Begin(D3DXSPRITE_ALPHABLEND|D3DXSPRITE_SORT_DEPTH_BACKTOFRONT|D3DXSPRITE_DO_NOT_ADDREF_TEXTURE);
I'm using the sprite to render rectangles and images:
void DrawRect(float x, float y, int width, int height, DWORD color)
{
    imgPosition.x = x;
    imgPosition.y = y;
    imgSize.left = 0;
    imgSize.right = width;
    imgSize.top = 0;
    imgSize.bottom = height;
    d3dImage->Draw(texWhite, &imgSize, NULL, &imgPosition, color);
}
Works with 8-bit textures only.
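For reference, a hedged usage sketch of the DrawRect call above, passing alpha through the color argument with the standard D3DCOLOR_ARGB macro (it assumes texWhite has an alpha-capable format and that the Begin flags shown above are in effect):
// 50% transparent white rectangle; D3DCOLOR_ARGB packs alpha into the high byte.
DrawRect(10.0f, 10.0f, 100, 50, D3DCOLOR_ARGB(128, 255, 255, 255));
// Fully opaque red rectangle for comparison.
DrawRect(10.0f, 70.0f, 100, 50, D3DCOLOR_ARGB(255, 255, 0, 0));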

DrawText doesn't work, but Graphics::DrawString is ok

I am creating a bitmap in memory that combines an image and text. My code is:
HDC hdcWindow = GetDC();
HDC hdcMemDC = CreateCompatibleDC(hdcWindow);
HBITMAP hbmDrag = NULL;
if (!hdcMemDC) {
    ReleaseDC(hdcWindow);
    return NULL;
}
RECT clientRect = {0};
GetClientRect(&clientRect);
hbmDrag = CreateCompatibleBitmap(hdcWindow, 256, 256);
if (hbmDrag) {
    SelectObject(hdcMemDC, hbmDrag);
    FillRect(hdcMemDC, &clientRect, mSelectedBkgndBrush);
    Graphics graphics(hdcMemDC);
    // Draw the icon
    graphics.DrawImage(mImage, 100, 100, 50, 50);
#if 1
    CRect desktopLabelRect(0, y, clientRect.right, y);
    HFONT desktopFont = mNameLabel.GetFont();
    HGDIOBJ oldFont = SelectObject(hdcMemDC, desktopFont);
    SetTextColor(hdcMemDC, RGB(255,0,0));
    DrawText(hdcMemDC, mName, -1, desktopLabelRect, DT_CENTER | DT_END_ELLIPSIS | DT_CALCRECT);
#else
    // Set font
    Font font(hdcMemDC, mNameLabel.GetFont());
    // Set RECT
    int y = DEFAULT_ICON_HEIGHT + mMargin;
    RectF layoutRect(0, y, clientRect.right, y);
    // Set display format
    StringFormat format;
    format.SetAlignment(StringAlignmentCenter);
    // Set brush
    SolidBrush blackBrush(Color(255, 0, 0, 0));
    // Draw the label
    int labelWide = DEFAULT_ICON_WIDTH + mMargin;
    CString labelName = GetLayOutLabelName(hdcMemDC, labelWide, mName);
    graphics.DrawString(labelName, -1, &font, layoutRect, &format, &blackBrush);
#endif
}
DeleteDC(hdcMemDC);
ReleaseDC(hdcWindow);
return hbmDrag;
The image is output to the bitmap successfully.
For the text, if I use DrawText, it is not shown in the bitmap, although the return value is correct;
but Graphics::DrawString outputs the text successfully.
I don't know the reason. Can anybody please tell me?
Thanks a lot.
You are passing the DT_CALCRECT flag to DrawText(). This flag is documented as follows (note the final sentence):
Determines the width and height of the rectangle. If there are multiple lines of text, DrawText uses the width of the rectangle pointed to by the lpRect parameter and extends the base of the rectangle to bound the last line of text. If the largest word is wider than the rectangle, the width is expanded. If the text is less than the width of the rectangle, the width is reduced. If there is only one line of text, DrawText modifies the right side of the rectangle so that it bounds the last character in the line. In either case, DrawText returns the height of the formatted text but does not draw the text.
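In other words, with DT_CALCRECT the call only measures; nothing is painted. A hedged sketch of the usual pattern, reusing the names from the question (measure first, then draw without the flag):
// First call only computes the bounding rectangle (no output).
DrawText(hdcMemDC, mName, -1, desktopLabelRect, DT_CENTER | DT_END_ELLIPSIS | DT_CALCRECT);
// Second call actually renders the text into the rectangle computed above.
DrawText(hdcMemDC, mName, -1, desktopLabelRect, DT_CENTER | DT_END_ELLIPSIS);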

OpenCL image2d_t writing mostly zeros

I am trying to use OpenCL and image2d_t objects to speed up image convolution. When I noticed that the output was a blank image of all zeros, I simplified the OpenCL kernel to a basic read from the input and write to the output (shown below). With a little bit of tweaking, I got it to write a few scattered pixels of the image into the output image.
I have verified that the image is intact up until the call to read_imageui() in the OpenCL kernel. I wrote the image to GPU memory with CommandQueue::enqueueWriteImage() and immediately read it back into a brand new buffer in CPU memory with CommandQueue::enqueueReadImage(). The result of this call matched the original input image. However, when I retrieve the pixels with read_imageui() in the kernel, the vast majority of the pixels are set to 0.
C++ source:
int height = 112;
int width = 9216;
unsigned int numPixels = height * width;
unsigned int numInputBytes = numPixels * sizeof(uint16_t);
unsigned int numDuplicatedInputBytes = numInputBytes * 4;
unsigned int numOutputBytes = numPixels * sizeof(int32_t);
cl::size_t<3> origin;
origin.push_back(0);
origin.push_back(0);
origin.push_back(0);
cl::size_t<3> region;
region.push_back(width);
region.push_back(height);
region.push_back(1);
std::ifstream imageFile("hri_vis_scan.dat", std::ifstream::binary);
checkErr(imageFile.is_open() ? CL_SUCCESS : -1, "hri_vis_scan.dat");
uint16_t *image = new uint16_t[numPixels];
imageFile.read((char *) image, numInputBytes);
imageFile.close();
// duplicate our single channel image into all 4 channels for Image2D
cl_ushort4 *imageDuplicated = new cl_ushort4[numPixels];
for (int i = 0; i < numPixels; i++)
    for (int j = 0; j < 4; j++)
        imageDuplicated[i].s[j] = image[i];
cl::Buffer imageBufferOut(context, CL_MEM_WRITE_ONLY, numOutputBytes, NULL, &err);
checkErr(err, "Buffer::Buffer()");
cl::ImageFormat inFormat;
inFormat.image_channel_data_type = CL_UNSIGNED_INT16;
inFormat.image_channel_order = CL_RGBA;
cl::Image2D bufferIn(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, inFormat, width, height, 0, imageDuplicated, &err);
checkErr(err, "Image2D::Image2D()");
cl::ImageFormat outFormat;
outFormat.image_channel_data_type = CL_UNSIGNED_INT16;
outFormat.image_channel_order = CL_RGBA;
cl::Image2D bufferOut(context, CL_MEM_WRITE_ONLY, outFormat, width, height, 0, NULL, &err);
checkErr(err, "Image2D::Image2D()");
int32_t *imageResult = new int32_t[numPixels];
memset(imageResult, 0, numOutputBytes);
cl_int4 *imageResultDuplicated = new cl_int4[numPixels];
for (int i = 0; i < numPixels; i++)
    for (int j = 0; j < 4; j++)
        imageResultDuplicated[i].s[j] = 0;
std::ifstream kernelFile("convolutionKernel.cl");
checkErr(kernelFile.is_open() ? CL_SUCCESS : -1, "convolutionKernel.cl");
std::string imageProg(std::istreambuf_iterator<char>(kernelFile), (std::istreambuf_iterator<char>()));
cl::Program::Sources imageSource(1, std::make_pair(imageProg.c_str(), imageProg.length() + 1));
cl::Program imageProgram(context, imageSource);
err = imageProgram.build(devices, "");
checkErr(err, "Program::build()");
cl::Kernel basic(imageProgram, "basic", &err);
checkErr(err, "Kernel::Kernel()");
basic.setArg(0, bufferIn);
basic.setArg(1, bufferOut);
basic.setArg(2, imageBufferOut);
queue.finish();
cl_ushort4 *imageDuplicatedTest = new cl_ushort4[numPixels];
for (int i = 0; i < numPixels; i++)
{
    imageDuplicatedTest[i].s[0] = 0;
    imageDuplicatedTest[i].s[1] = 0;
    imageDuplicatedTest[i].s[2] = 0;
    imageDuplicatedTest[i].s[3] = 0;
}
double gpuTimer = clock();
err = queue.enqueueReadImage(bufferIn, CL_FALSE, origin, region, 0, 0, imageDuplicatedTest, NULL, NULL);
checkErr(err, "CommandQueue::enqueueReadImage()");
// Output from above matches input image
err = queue.enqueueNDRangeKernel(basic, cl::NullRange, cl::NDRange(height, width), cl::NDRange(1, 1), NULL, NULL);
checkErr(err, "CommandQueue::enqueueNDRangeKernel()");
queue.flush();
err = queue.enqueueReadImage(bufferOut, CL_TRUE, origin, region, 0, 0, imageResultDuplicated, NULL, NULL);
checkErr(err, "CommandQueue::enqueueReadImage()");
queue.flush();
err = queue.enqueueReadBuffer(imageBufferOut, CL_TRUE, 0, numOutputBytes, imageResult, NULL, NULL);
checkErr(err, "CommandQueue::enqueueReadBuffer()");
queue.finish();
OpenCL kernel:
__kernel void basic(__read_only image2d_t input, __write_only image2d_t output, __global int *result)
{
    const sampler_t smp = CLK_NORMALIZED_COORDS_TRUE | //Natural coordinates
                          CLK_ADDRESS_NONE |           //Clamp to zeros
                          CLK_FILTER_NEAREST;          //Don't interpolate
    int2 coord = (get_global_id(1), get_global_id(0));
    uint4 pixel = read_imageui(input, smp, coord);
    result[coord.s0 + coord.s1 * 9216] = pixel.s0;
    write_imageui(output, coord, pixel);
}
The coordinates in the kernel are currently mapped to (x, y) = (width, height).
The input image is a single-channel greyscale image with 16 bits per pixel, which is why I had to duplicate the channels to fit into OpenCL's Image2D. The output after convolution will be 32 bits per pixel, which is why numOutputBytes is set the way it is. Also, although the width and height look odd, the input image's dimensions are 9216x7824, so I'm only taking a portion of it while testing the code so it doesn't take forever.
I added in a write to global memory after reading from the image in the kernel to see if the issue was reading the image or writing the image. After the kernel executes, this section of global memory also contains mostly zeros.
Any help would be greatly appreciated!
The documentation for read_imageui states that
Furthermore, the read_imagei and read_imageui calls that take integer coordinates must use a sampler with normalized coordinates set to CLK_NORMALIZED_COORDS_FALSE and addressing mode set to CLK_ADDRESS_CLAMP_TO_EDGE, CLK_ADDRESS_CLAMP or CLK_ADDRESS_NONE; otherwise the values returned are undefined.
But you're creating a sampler with CLK_NORMALIZED_COORDS_TRUE (but seem to be passing in non-normalized coords :S ?).
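A hedged sketch of the kernel with the sampler changed to satisfy that requirement (non-normalized coordinates and a permitted addressing mode). It also builds the coordinate with an (int2)(...) vector literal; the original (get_global_id(1), get_global_id(0)) without the cast is a comma expression in OpenCL C, not a two-component vector:
__kernel void basic(__read_only image2d_t input, __write_only image2d_t output, __global int *result)
{
    // Integer-coordinate reads require non-normalized coords and CLAMP / CLAMP_TO_EDGE / NONE addressing.
    const sampler_t smp = CLK_NORMALIZED_COORDS_FALSE |
                          CLK_ADDRESS_CLAMP_TO_EDGE |
                          CLK_FILTER_NEAREST;
    // Vector literal, not a comma expression.
    int2 coord = (int2)(get_global_id(1), get_global_id(0));
    uint4 pixel = read_imageui(input, smp, coord);
    result[coord.s0 + coord.s1 * 9216] = pixel.s0;
    write_imageui(output, coord, pixel);
}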

unsigned char* buffer to System::Drawing::Bitmap

I'm trying to create a tool/asset converter that rasterises a font to a texture page for an XNA game using the FreeType2 engine.
Below, the first image is the direct output from the FreeType2 engine. The second image is the result after attempting to convert it to a System::Drawing::Bitmap.
Target: http://www.freeimagehosting.net/uploads/fb102ee6da.jpg
Current result: http://www.freeimagehosting.net/uploads/9ea77fa307.jpg
Any hints/tips/ideas on what is going on here would be greatly appreciated. Links to articles explaining byte layout and pixel formats would also be helpful.
FT_Bitmap *bitmap = &face->glyph->bitmap;
int width = (face->bitmap->metrics.width / 64);
int height = (face->bitmap->metrics.height / 64);
// must be aligned on a 32 bit boundary or 4 bytes
int depth = 8;
int stride = ((width * depth + 31) & ~31) >> 3;
int bytes = (int)(stride * height);
// as *.bmp
array<Byte>^ values = gcnew array<Byte>(bytes);
Marshal::Copy((IntPtr)glyph->buffer, values, 0, bytes);
Bitmap^ systemBitmap = gcnew Bitmap(width, height, PixelFormat::Format24bppRgb);
// create bitmap data, lock pixels to be written.
BitmapData^ bitmapData = systemBitmap->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, bitmap->PixelFormat);
Marshal::Copy(values, 0, bitmapData->Scan0, bytes);
systemBitmap->UnlockBits(bitmapData);
systemBitmap->Save("Test.bmp");
Update. Changed PixelFormat to 8bppIndexed.
FT_Bitmap *bitmap = &face->glyph->bitmap;
// stride must be aligned on a 32 bit boundary or 4 bytes
int depth = 8;
int stride = ((width * depth + 31) & ~31) >> 3;
int bytes = (int)(stride * height);
target = gcnew Bitmap(width, height, PixelFormat::Format8bppIndexed);
// create bitmap data, lock pixels to be written.
BitmapData^ bitmapData = target->LockBits(Rectangle(0, 0, width, height), ImageLockMode::WriteOnly, target->PixelFormat);
array<Byte>^ values = gcnew array<Byte>(bytes);
Marshal::Copy((IntPtr)bitmap->buffer, values, 0, bytes);
Marshal::Copy(values, 0, bitmapData->Scan0, bytes);
target->UnlockBits(bitmapData);
Ah ha. Worked it out.
FT_Bitmap is an 8-bit image, so the correct PixelFormat was Format8bppIndexed, which resulted in this output:
Rows not aligned to a 32-bit boundary: http://www.freeimagehosting.net/uploads/dd90fa2252.jpg
System::Drawing::Bitmap rows need to be aligned on a 32-bit boundary.
I was calculating the stride but was not padding the rows when writing the bitmap. I copied the FT_Bitmap buffer to a byte[] and then wrote that to a MemoryStream, adding the necessary padding.
int stride = ((width * pixelDepth + 31) & ~31) >> 3;
int padding = stride - (((width * pixelDepth) + 7) / 8);
array<Byte>^ pad = gcnew array<Byte>(padding);
array<Byte>^ buffer = gcnew array<Byte>(size);
Marshal::Copy((IntPtr)source->buffer, buffer, 0, size);
MemoryStream^ ms = gcnew MemoryStream();
for (int i = 0; i < height; ++i)
{
    ms->Write(buffer, i * width, width);
    ms->Write(pad, 0, padding);
}
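As a quick check of those two formulas with a hypothetical glyph (not one from the post): for width = 13 and pixelDepth = 8, width * pixelDepth = 104 bits, so stride = ((104 + 31) & ~31) >> 3 = 16 bytes and padding = 16 - 13 = 3 bytes appended to each row.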
Pinned the memory so the GC would leave it alone.
// pin memory and create bitmap
GCHandle handle = GCHandle::Alloc(ms->ToArray(), GCHandleType::Pinned);
target = gcnew Bitmap(width, height, stride, PixelFormat::Format8bppIndexed, handle.AddrOfPinnedObject());
ms->Close();
As there is no greyscale variant of Format8bppIndexed, the image was still not correct with the Bitmap's default palette:
http://www.freeimagehosting.net/uploads/8a883b7dce.png
Then changed the bitmap palette to grey scale 256.
// 256-level greyscale palette
ColorPalette^ palette = target->Palette;
for (int i = 0; i < palette->Entries->Length; ++i)
    palette->Entries[i] = Color::FromArgb(i, i, i);
target->Palette = palette;
Result with the greyscale palette: http://www.freeimagehosting.net/uploads/59a745269e.jpg
Final solution.
error = FT_Load_Char(face, ch, FT_LOAD_RENDER);
if (error)
    throw gcnew InvalidOperationException("Failed to load and render character");
FT_Bitmap *source = &face->glyph->bitmap;
int width = (face->glyph->metrics.width / 64);
int height = (face->glyph->metrics.height / 64);
int pixelDepth = 8;
int size = width * height;
// stride must be aligned on a 32 bit boundary or 4 bytes
// padding is the number of bytes to add to make each row a 32bit aligned row
int stride = ((width * pixelDepth + 31) & ~31) >> 3;
int padding = stride - (((width * pixelDepth) + 7) / 8);
array<Byte>^ pad = gcnew array<Byte>(padding);
array<Byte>^ buffer = gcnew array<Byte>(size);
Marshal::Copy((IntPtr)source->buffer, buffer, 0, size);
MemoryStream^ ms = gcnew MemoryStream();
for (int i = 0; i < height; ++i)
{
    ms->Write(buffer, i * width, width);
    ms->Write(pad, 0, padding);
}
// pin memory and create bitmap
GCHandle handle = GCHandle::Alloc(ms->ToArray(), GCHandleType::Pinned);
target = gcnew Bitmap(width, height, stride, PixelFormat::Format8bppIndexed, handle.AddrOfPinnedObject());
ms->Close();
// 256-level greyscale palette
ColorPalette^ palette = target->Palette;
for (int i = 0; i < palette->Entries->Length; ++i)
    palette->Entries[i] = Color::FromArgb(i, i, i);
target->Palette = palette;
FT_Done_FreeType(library);
Your "depth" value doesn't match the PixelFormat of the Bitmap. It needs to be 24 to match Format24bppRgb. The PF for the bitmap needs to match the PF and stride of the FT_Bitmap as well, I don't see you take care of that.

Problems when using GetTextExtentExPoint to calculate the extent of a string

I created a font object and selected it into the device context, then calculated the extent of a string using the Win32 API GetTextExtentExPoint, but the extent I got is the extent for the system default font.
For example, with the system default font the extent of the string is 36 pixels wide and 16 pixels high, and with the font I created it is 72 pixels wide and 24 pixels high. But I always get 36 pixels, no matter whether I use the system default font or the font I created.
What's the problem with my code?
Code:
HDC hDC = GetDC();
ATLASSERT(hDC);
HFONT _hFontTitle = 0;
HFONT hSysFont = (HFONT)GetCurrentObject(hDC, OBJ_FONT);
ATLASSERT(hSysFont);
LOGFONT lf;
if (0 == GetObject(hSysFont, sizeof(LOGFONT), &lf))
    _hFontTitle = CreateFont(16, 12, 0, 0, FW_BOLD, FALSE, FALSE, FALSE, DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS, PROOF_QUALITY, FIXED_PITCH|FF_DONTCARE, _T("Fixedsys"));
else {
    lf.lfHeight = 16;
    lf.lfWidth = 12;
    lf.lfWeight = FW_BOLD;
    _hFontTitle = CreateFontIndirect(&lf);
    ATLASSERT(_hFontTitle);
}
HFONT _hFontContent = 0;
HFONT hSysFont = (HFONT)GetCurrentObject(hDC, OBJ_FONT);
LOGFONT lf;
if (0 == GetObject(hSysFont, sizeof(LOGFONT), &lf))
    _hFontContent = CreateFont(12, 9, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE, DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS, PROOF_QUALITY, FIXED_PITCH|FF_DONTCARE, _T("Fixedsys"));
else {
    lf.lfHeight = 12;
    lf.lfWidth = 9;
    lf.lfWeight = FW_NORMAL;
    _hFontContent = CreateFontIndirect(&lf);
    ATLASSERT(_hFontContent);
}
SIZE sizeTitle = TextMetricsHelper::GetTextLayout(hDC, _szTitle.c_str(), _szTitle.size(), _hFontTitle);
SIZE sizeContent = TextMetricsHelper::GetTextLayout(hDC, _szContent.c_str(), _szContent.size(), _hFontContent);
where GetTextLayout is:
SIZE GetTextLayout(HDC hDC, LPCTSTR lpszText, unsigned int cbText, HFONT hFont)
{
    //RECT rcText = {0, 0, 8, 10};
    HFONT hOldFont = (HFONT)SelectObject(hDC, (HGDIOBJ)hFont);
    SIZE textSize;
    GetTextExtentPoint32(hDC, lpszText, cbText, &textSize);
    //GetTextExtentExPoint(hDC, lpszText, cbText, 0, 0, 0, &sizeOfTitle);
    //DrawText(hDC, lpszText, cbText, &rcText, DT_CALCRECT);
    SelectObject(hDC, hOldFont);
    return textSize;
}
Don't know much about creating fonts and so on, but I notice that both your CreateFont lines assign their result to the same variable _hFontTitle. Is this the intention?
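For reference, a minimal measurement sketch (hypothetical font values, using the same window-member GetDC/ReleaseDC style as the question) showing the select/measure/restore pattern the helper is aiming for; the two SIZE values should differ if the created font is really selected:
HDC hDC = GetDC();
HFONT hBig = CreateFont(24, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE, DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS, PROOF_QUALITY, FIXED_PITCH | FF_DONTCARE, _T("Fixedsys"));
ATLASSERT(hBig);
SIZE sizeDefault = {0}, sizeBig = {0};
GetTextExtentPoint32(hDC, _T("Hello"), 5, &sizeDefault);   // extent under the currently selected font
HFONT hOld = (HFONT)SelectObject(hDC, hBig);               // select the created font...
GetTextExtentPoint32(hDC, _T("Hello"), 5, &sizeBig);       // ...and measure again; cx/cy should now differ
SelectObject(hDC, hOld);
DeleteObject(hBig);
ReleaseDC(hDC);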
