CImg draw_text memory leak when changing font size

I am using CImg to draw a simple rectangle containing some text of a user defined size.
To do this I use draw_text on an empty CImg as it allocates an image of the exact size to contain the text.
However, it leaks memory each time I change the font size. Any ideas why it is leaking?
CImg<unsigned char> imgtext;
const unsigned char textColor[] = { foreground.r, foreground.g, foreground.b };
const unsigned char textBackgroundColor[] = { background.r, background.g, background.b };
// drawing onto an empty image makes CImg allocate it to exactly fit the text
imgtext.draw_text(0, 0, label.c_str(), textColor, textBackgroundColor, 1.0F, fontSize);

Related

Is a cursor greater than 512x512 pixels in size possible?

Goal:
I'm trying to create a cursor file which can cover the whole screen with a flashlight effect on a full hd (1920x1080) screen. For that, the cursor image resolution would need to be at 4K (3840x2160) along with having an alpha channel (32bpp). Axialis Cursor Workshop is the only cursor creation program I've tried which goes above the usual 256² pixel limit, but still caps at 512² pixels...
File format analysis:
Looking at the file format specifications, the usual upper bound of 256² pixels might be caused by the CUR/ICO format working with 8 bits for width and height fields each. ANI format looks more promising since it has 32 bits reserved for those. On the flip side, it seems to have no hotspot fields, and itself uses CUR/ICO format for the animation frames, unless the IconFlag bit is set to FALSE. Looking at a cursor file produced by Axialis CW, I see the flag set to TRUE weirdly enough.
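For reference, the 'anih' chunk layout (as commonly documented; field names vary between sources) is where the 32-bit dimensions and that icon flag live. A sketch:
struct ANIHeader {            // 36-byte 'anih' chunk inside the RIFF ACON file
    uint32_t cbSize;          // size of this structure (36)
    uint32_t nFrames;         // number of stored frames
    uint32_t nSteps;          // number of steps in the animation sequence
    uint32_t iWidth, iHeight; // 32-bit dimensions (0 if frames are icons/cursors)
    uint32_t iBitCount;       // bits per pixel
    uint32_t nPlanes;         // color planes
    uint32_t iDispRate;       // default display rate in jiffies (1/60 s)
    uint32_t bfAttributes;    // bit 0 = frames are ICO/CUR data (the IconFlag above)
};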
Hex edit approach:
I've tried inserting raster data from a (converted) bmp of the same size (512²) by means of hex editing. Then I tried to insert raster data from a 1024² bmp, updating the image dimensions and the file size in the headers, which only kind of works, I guess.
I'd appreciate any help or pointers in the right direction.
Related things, in no particular order:
install cursor scheme.inf (Creates a certain cursor scheme from cur/ani files)
Set Cursor.ps1 (Applies a certain cursor scheme & size)
File format specification index (For the technical details)
PNG to BMP Converter (Properly converts png to 32bpp bmp files)
Axialis CursorWorkshop (Can create ani files up to 512² pixels at 32bpp)
Got it working with Hex Editor Neo and a binary template I put together for the ico/cur file format:
// ico.h
#pragma once
#pragma byte_order(LittleEndian)
#include "stddefs.h"
#include "bitmap.h"

struct ICONDIRENTRY;
struct ICONFILE;

public struct ICONDIR {
    [description("")]
    uint16 Reserved;
    $assert(Reserved == 0);
    [description("Specifies image type: 1 for icon (.ICO) image, 2 for cursor (.CUR) image. Other values are invalid.")]
    uint16 Type;
    [description("Specifies number of images in the file.")]
    uint16 Count;
    [description("")]
    ICONDIRENTRY Entries[Count];
};

struct ICONDIRENTRY {
    var entryIndex = array_index;
    [description("Cursor Width")]
    uint8 Width;
    [description("Cursor Height (added height of XORbitmap and ANDbitmap). A negative value would indicate pixel order being top to bottom")]
    int8 Height;
    [description("Specifies number of colors in the color palette. Should be 0 if the image does not use a color palette.")]
    uint8 ColorCount;
    [description("")]
    uint8 Reserved;
    $assert(Reserved == 0);
    [description("In ICO format: Specifies color planes. Should be 0 or 1. In CUR format: Specifies the horizontal coordinates of the hotspot in number of pixels from the left.")]
    uint16 XHotspot;
    [description("In ICO format: Specifies bits per pixel. In CUR format: Specifies the vertical coordinates of the hotspot in number of pixels from the top.")]
    uint16 YHotspot;
    [description("Size of (InfoHeader + ANDBitmap + XORBitmap)")]
    uint32 SizeInBytes;
    [description("FilePos, where InfoHeader starts")]
    uint32 FileOffset as ICONFILE*;
};

struct ICONFILE {
    BITMAPINFO Info;
    // no idea why this isn't working
    /*var bmiv1header = BITMAPINFOHEADER(Info.bmiHeader);
    var size = bmiv1header.biSizeImage;
    if(size == 0) {
        size = Entries[entryIndex].SizeInBytes - bmiv1header.biSize;
    }
    uint8 RawData[size];*/
    uint8 __firstPixel;
};
With the template applied, the cursor file I created successfully parses as expected. The trick was to set the image height field in the BITMAPINFOHEADER structure to twice the actual pixel height. The reason is that two separate pixel arrays are expected, which are applied using bitwise XOR and AND respectively. I was surprised that it already worked in the preview without my even adding an AND pixel array, so apparently that can be omitted.
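For reference, here is a minimal C++ sketch of writing such an oversized 32bpp CUR file with the doubled-height trick. The struct layouts follow the format fields described above; the function name and struct names are my own, the AND mask is omitted as observed, pixel rows are assumed to be bottom-up BGRA, and an entry width/height of 0 conventionally means 256 (the 8-bit fields cannot represent larger sizes anyway, so the real dimensions live only in the DIB header):
#include <cstdint>
#include <cstdio>
#include <vector>

#pragma pack(push, 1)
struct IconDir      { uint16_t reserved, type, count; };
struct IconDirEntry {
    uint8_t  width, height;        // 8-bit; 0 conventionally means 256
    uint8_t  colorCount, reserved;
    uint16_t xHotspot, yHotspot;
    uint32_t sizeInBytes, fileOffset;
};
struct BmpInfoHeader {
    uint32_t biSize; int32_t biWidth, biHeight;
    uint16_t biPlanes, biBitCount;
    uint32_t biCompression, biSizeImage;
    int32_t  biXPelsPerMeter, biYPelsPerMeter;
    uint32_t biClrUsed, biClrImportant;
};
#pragma pack(pop)

bool WriteCur(const char* path, const std::vector<uint8_t>& bgra,
              int32_t w, int32_t h, uint16_t hotX, uint16_t hotY)
{
    IconDir dir = { 0, 2, 1 };                 // type 2 = cursor, one image
    BmpInfoHeader bih = {};
    bih.biSize      = sizeof(bih);
    bih.biWidth     = w;
    bih.biHeight    = h * 2;                   // the trick: XOR + AND heights combined
    bih.biPlanes    = 1;
    bih.biBitCount  = 32;
    bih.biSizeImage = (uint32_t)bgra.size();

    IconDirEntry e = {};
    e.width       = (w >= 256) ? 0 : (uint8_t)w;
    e.height      = (h >= 256) ? 0 : (uint8_t)h;
    e.xHotspot    = hotX;
    e.yHotspot    = hotY;
    e.sizeInBytes = sizeof(bih) + (uint32_t)bgra.size();
    e.fileOffset  = sizeof(dir) + sizeof(e);   // InfoHeader starts right after the entry

    FILE* f = std::fopen(path, "wb");
    if (!f) return false;
    std::fwrite(&dir, sizeof(dir), 1, f);
    std::fwrite(&e,   sizeof(e),   1, f);
    std::fwrite(&bih, sizeof(bih), 1, f);
    std::fwrite(bgra.data(), 1, bgra.size(), f);
    return std::fclose(f) == 0;
}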

Computing on variable length arrays in OpenCL

I am using OpenCL (Xcode, Intel GPU), and I am trying to implement a kernel that calculates moving averages and deviations. I want to pass several double arrays of varying lengths to the kernel. Is this possible to implement, or do I need to pad smaller arrays with zeroes so all the arrays are the same size?
I am new to OpenCL and GPGPU, so please forgive my ignorance of any nomenclature.
You can pass any buffer to the kernel; the kernel does not need to use all of it.
For example, if your kernel reduces a buffer, it can query the number of work items (items to reduce) at run time using get_global_size(0). You then call the kernel with the proper global size for each pass.
An example (unoptimized):
__kernel void reduce_step(__global float* data)
{
    int id = get_global_id(0);
    int size = get_global_size(0);
    int size2 = size / 2;            // number of pairs to fold
    int size2p = (size + 1) / 2;     // stride; with an odd size the middle element stays put
    if(id < size2)                   // only reduce up to size2, the odd element remains in place
        data[id] += data[id + size2p];
}
Then you can call it like this:
void reduce_me(std::vector<cl_float>& data){
    size_t size = data.size();
    // Copy to a buffer already created, of equal or bigger size than the data
    // ... TODO, check sizes of buffer or change the buffer set to the kernel args.
    queue.enqueueWriteBuffer(buffer, CL_FALSE, 0, sizeof(cl_float)*size, data.data());
    // Reduce until 1024 elements remain
    while(size > 1024){
        queue.enqueueNDRangeKernel(reduce_kernel, cl::NullRange, cl::NDRange(size), cl::NullRange);
        size = (size + 1) / 2;  // must match size2p in the kernel, or odd sizes lose an element
    }
    // Read out and trim
    queue.enqueueReadBuffer(buffer, CL_TRUE, 0, sizeof(cl_float)*size, data.data());
    data.resize(size);
}
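For the original question (several arrays of different lengths), you do not need zero padding: pass the true length of each array as a kernel argument, launch with a rounded-up global size, and let out-of-range work items exit early. A minimal sketch; the kernel name and parameters are mine, and double requires the device to support the cl_khr_fp64 extension:
__kernel void moving_average(__global const double* data,
                             const int n,        // true length of this array
                             const int window,
                             __global double* out)
{
    int i = (int)get_global_id(0);
    if (i >= n) return;                  // padding work items do nothing
    int start = max(i - window + 1, 0);  // clamp the window at the array start
    double sum = 0.0;
    for (int j = start; j <= i; ++j)
        sum += data[j];
    out[i] = sum / (i - start + 1);
}
Each array then gets its own enqueue with its own n, all against one shared buffer sized for the largest array.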

Read and convert Monochrome bitmap file into CByteArray MFC

In my MFC project, I need to read and convert a monochrome bitmap file into a CByteArray. When reading the bitmap file using the CFile class in read mode, it seems to give a greater length than the original image data.
My MFC code:
CFile ImgFile;
CFileException FileExcep;
CByteArray* pBinaryImage = NULL;
CString strFilePath;
strFilePath.Format("%s", "D:\\Test\\Graphics0.bmp");
if(!ImgFile.Open((LPCTSTR)strFilePath, CFile::modeRead, &FileExcep))
{
    return NULL;
}
pBinaryImage = new CByteArray();
pBinaryImage->SetSize(ImgFile.GetLength());
// get the byte array's underlying buffer pointer
LPVOID lpvDest = pBinaryImage->GetData();
// perform a massive copy from the file to the byte array
if(lpvDest)
{
    ImgFile.Read(lpvDest, pBinaryImage->GetSize());
}
ImgFile.Close();
Note: the byte array's size is set from the file length.
I checked against C# with the following sample:
Bitmap bmpImage = (Bitmap)Bitmap.FromFile("D:\\Test\\Graphics0.bmp");
ImageConverter ic = new ImageConverter();
byte[] ImgByteArray = (byte[])ic.ConvertTo(bmpImage, typeof(byte[]));
Comparing the sizes of "pBinaryImage" and "ImgByteArray", they are not the same, and I guess the "ImgByteArray" size is the correct one, since from that array's values I can get my original bitmap back.
As I noted in the comments, by reading the whole file with CFile you are also reading the bitmap headers, which corrupts your data.
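If you do want to keep the raw CFile approach, a minimal sketch (assuming a standard BMP, where BITMAPFILEHEADER::bfOffBits gives the offset of the pixel data) would skip past the headers before copying:
BITMAPFILEHEADER bfh;
ImgFile.Read(&bfh, sizeof(bfh));
ImgFile.Seek(bfh.bfOffBits, CFile::begin);  // jump to the start of the pixel data
UINT cbPixels = (UINT)(ImgFile.GetLength() - bfh.bfOffBits);
pBinaryImage->SetSize(cbPixels);
ImgFile.Read(pBinaryImage->GetData(), cbPixels);
That still leaves you to interpret the DIB pixel layout yourself, though.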
Here is an example function, showing how to load a monochrome bitmap from file, wrap it in MFC's CBitmap object, query the dimensions etc. and read the pixel data into an array:
void LoadMonoBmp(LPCTSTR szFilename)
{
    // load bitmap from file
    HBITMAP hBmp = (HBITMAP)LoadImage(NULL, szFilename, IMAGE_BITMAP, 0, 0,
                                      LR_LOADFROMFILE | LR_MONOCHROME);
    // wrap in a CBitmap for convenience
    CBitmap *pBmp = CBitmap::FromHandle(hBmp);
    // get dimensions etc.
    BITMAP pBitMap;
    pBmp->GetBitmap(&pBitMap);
    // allocate a buffer for the pixel data
    unsigned int uBufferSize = pBitMap.bmWidthBytes * pBitMap.bmHeight;
    unsigned char *pPixels = new unsigned char[uBufferSize];
    // load the pixel data
    pBmp->GetBitmapBits(uBufferSize, pPixels);
    // ... do something with the data ....
    // release pixel data
    delete [] pPixels;
    pPixels = NULL;
    // free the bmp
    DeleteObject(hBmp);
}
The BITMAP structure will give you information about the bitmap (MSDN here) and, for a monochrome bitmap, the bits will be packed into the bytes you read. This may be another difference with the C# code, where it is possible that each bit is unpacked into a whole byte. In the MFC version, you will need to interpret this data correctly.
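For example, a hypothetical helper (the name and signature are mine) to test a single pixel in the packed 1bpp buffer, using bmWidthBytes from the BITMAP structure above:
// Returns true if pixel (x, y) is set; the leftmost pixel is the most significant bit.
bool GetMonoPixel(const unsigned char *pPixels, int widthBytes, int x, int y)
{
    unsigned char packed = pPixels[y * widthBytes + x / 8];
    return ((packed >> (7 - (x % 8))) & 1) != 0;
}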

How do you count registers in HLSL?

With shader model 2.0, you can have 256 constant registers. I have been looking at various shaders, and trying to figure out what constitutes a single register?
For example, in my instancing shader, I have the following variables declared at the top, outside of functions:
float4x4 InstanceTransforms[40];
float4 InstanceDiffuses[40];
float4x4 View;
float4x4 Projection;
float3 LightDirection = normalize(float3(-1, -1, -1));
float3 DiffuseLight = 1;
float3 AmbientLight = 0.66;
float Alpha;
texture Texture;
How many registers have I consumed? How do I count them?
Each constant register is a float4.
float3, float2 and float will each allocate a whole register. float4x4 will use 4 registers. Arrays will simply multiply the number of registers allocated by the number of elements. And the compiler will probably allocate a few registers itself to use as constants in various calculations.
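Applying those rules to the declarations in your shader:
float4x4 InstanceTransforms[40]: 4 × 40 = 160 registers
float4 InstanceDiffuses[40]: 40 registers
float4x4 View: 4 registers
float4x4 Projection: 4 registers
float3 LightDirection: 1 register
float3 DiffuseLight: 1 register
float3 AmbientLight: 1 register
float Alpha: 1 register
That totals 212 of the 256 available, before any constants the compiler allocates for itself; the two arrays clearly dominate.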
The only way to really tell what the shader is using is to disassemble it. To that end you may be interested in this question that I asked a while ago: HLSL: Enforce Constant Register Limit at Compile Time
You might also find this one worth a look: HLSL: Index to unaligned/packed floats. It explains why an array of 40 floats will use 40 registers, and how you can make it use 10 instead.
Your texture will use a texture sampler (you have 16 of these), not a constant register.
For reference, here are the lists of ps_2_0 registers and vs_2_0 registers.

Is there a way to change the text's font in CImg?

I wanted to know whether I can draw text with the CImg graphics library's draw_text function and change the font of the text to a different one.
You can't load your own fonts in CImg directly, but see https://github.com/tttzof351/CImgAndFreetype for an example of loading custom fonts with FreeType and rendering text onto a bitmap using CImg.
No. CImg's text drawing is very simplistic.
CImg<T>& draw_text(const int x0, const int y0,
                   const char *const text,
                   const tc *const foreground_color, const tc *const background_color,
                   const float opacity, const CImgList<t>& font, ...)
font is just a CImgList of letters (i.e. font[letter-'a'] is an image of "letter"). Either make your own or use one of the built-in options:
static const CImgList<T>& font(const unsigned int font_height,
const bool variable_size=true);
or
static CImgList<T> _font(const unsigned int *const font,
const unsigned int w, const unsigned int h,
const bool variable_size)
where font here is one of the predefined fonts at the top of CImg.h such as font12x24.
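For instance, a minimal sketch using the simpler overload that just takes a built-in font height (24 here, which maps to font12x24 mentioned above; the canvas size and file name are my own choices):
#include "CImg.h"
using namespace cimg_library;

int main() {
    CImg<unsigned char> img(320, 80, 1, 3, 0);   // black RGB canvas
    const unsigned char white[] = { 255, 255, 255 };
    const unsigned char black[] = { 0, 0, 0 };
    img.draw_text(10, 10, "Hello, CImg!", white, black, 1.0f, 24);
    img.save("text.bmp");
    return 0;
}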
Assuming you mean this CImg library, a couple of the overloads of draw_text take parameters named "font". Those seem like a reasonable starting point...
