"Invalid Handle" Create CGBitmapContext - xamarin.ios

I've got a problem with CGBitmapContext.
I get an error while creating the CGBitmapContext with the message "Invalid Handle".
Here is my code:
var previewContext = new CGBitmapContext(null,
    (int)ExportedImage.Size.Width, (int)ExportedImage.Size.Height,
    8, (int)ExportedImage.Size.Height * 4,
    CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst);
Thank you.

That is because you are passing null as the first parameter. CGBitmapContext is for drawing directly into a memory buffer. The first parameter in all of the constructor overloads is (from the Apple docs):
data
A pointer to the destination in memory where the drawing is to be rendered. The size of this memory block should be at least
(bytesPerRow*height) bytes.
In MonoTouch, we get two overloads that accept a byte[] for convenience. So you should use it like this:
// note that bytes per row should be based on the width, not the height
int bytesPerRow = (int)ExportedImage.Size.Width * 4;
byte[] ctxBuffer = new byte[bytesPerRow * (int)ExportedImage.Size.Height];
var previewContext = new CGBitmapContext(ctxBuffer,
    (int)ExportedImage.Size.Width, (int)ExportedImage.Size.Height,
    8, bytesPerRow, CGColorSpace.CreateDeviceRGB(), CGImageAlphaInfo.PremultipliedFirst);

This error can also occur if the width or height parameter passed to the constructor has a value of 0.

Related

What may be wrong about my use of SetGraphicsRootDescriptorTable in D3D12?

For the 7 meshes that I would like to draw, I load 7 textures and create the corresponding SRVs in a descriptor heap. Then there's another SRV for ImGui. There are also 3 CBVs, for triple buffering. So the heap layout should be: | srv x7 | srv x1 | cbv x3 |.
The problem is that when I call SetGraphicsRootDescriptorTable on range 0, which should be an SRV (the texture, in fact), something goes wrong. Here's the code:
ID3D12DescriptorHeap* ppHeaps[] = { pCbvSrvDescriptorHeap, pSamplerDescriptorHeap };
pCommandList->SetDescriptorHeaps(_countof(ppHeaps), ppHeaps);
pCommandList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
pCommandList->IASetIndexBuffer(pIndexBufferViewDesc);
pCommandList->IASetVertexBuffers(0, 1, pVertexBufferViewDesc);
CD3DX12_GPU_DESCRIPTOR_HANDLE srvHandle(pCbvSrvDescriptorHeap->GetGPUDescriptorHandleForHeapStart(), indexMesh, cbvSrvDescriptorSize);
pCommandList->SetGraphicsRootDescriptorTable(0, srvHandle);
pCommandList->SetGraphicsRootDescriptorTable(1, pSamplerDescriptorHeap->GetGPUDescriptorHandleForHeapStart());
If indexMesh is 5, SetGraphicsRootDescriptorTable causes the following error, though the rendered output still looks fine. When indexMesh is 6, the same error still occurs, along with a second, otherwise identical error in which the offset 8 becomes 9.
D3D12 ERROR: CGraphicsCommandList::SetGraphicsRootDescriptorTable: Specified GPU Descriptor Handle (ptr = 0x400750000002c0 at 8 offsetInDescriptorsFromDescriptorHeapStart) of type CBV, for Root Signature (0x0000020A516E8BF0:'m_rootSignature')'s Descriptor Table (at Parameter Index [0])'s Descriptor Range (at Range Index [0] of type D3D12_DESCRIPTOR_RANGE_TYPE_SRV) have mismatching types. All descriptors of descriptor ranges declared STATIC (not-DESCRIPTORS_VOLATILE) in a root signature must be initialized prior to being set on the command list. [ EXECUTION ERROR #646: INVALID_DESCRIPTOR_HANDLE]
That is really weird, because I suppose the only thing that could cause this is that cbvSrvDescriptorSize is not right. It is 64, and it is set by m_device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV); which I think should work. Besides, if I set it to another value such as 32, the application crashes.
So if cbvSrvDescriptorSize is right, why would a correct indexMesh produce the wrong descriptor handle offset? The consequence of this error is that it seems to affect my CBV, which breaks the rendered output. Any suggestion would be appreciated, thanks!
Thanks to Chuck's suggestion, here's the code for the root signature:
CD3DX12_DESCRIPTOR_RANGE1 ranges[3];
ranges[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 4, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC);
ranges[1].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SAMPLER, 1, 0);
ranges[2].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_DATA_STATIC);
CD3DX12_ROOT_PARAMETER1 rootParameters[3];
rootParameters[0].InitAsDescriptorTable(1, &ranges[0], D3D12_SHADER_VISIBILITY_PIXEL);
rootParameters[1].InitAsDescriptorTable(1, &ranges[1], D3D12_SHADER_VISIBILITY_PIXEL);
rootParameters[2].InitAsDescriptorTable(1, &ranges[2], D3D12_SHADER_VISIBILITY_ALL);
CD3DX12_VERSIONED_ROOT_SIGNATURE_DESC rootSignatureDesc;
rootSignatureDesc.Init_1_1(_countof(rootParameters), rootParameters, 0, nullptr, D3D12_ROOT_SIGNATURE_FLAG_ALLOW_INPUT_ASSEMBLER_INPUT_LAYOUT);
ComPtr<ID3DBlob> signature;
ComPtr<ID3DBlob> error;
ThrowIfFailed(D3DX12SerializeVersionedRootSignature(&rootSignatureDesc, featureData.HighestVersion, &signature, &error));
ThrowIfFailed(m_device->CreateRootSignature(0, signature->GetBufferPointer(), signature->GetBufferSize(), IID_PPV_ARGS(&m_rootSignature)));
NAME_D3D12_OBJECT(m_rootSignature);
And here are some declarations from the pixel shader:
Texture2DArray g_textures : register(t0);
SamplerState g_sampler : register(s0);
cbuffer cb0 : register(b0)
{
    float4x4 g_mWorldViewProj;
    float3 g_lightPos;
    float3 g_eyePos;
    ...
};
It's not very often I come across the exact problem I'm experiencing (my code is almost verbatim) and it's an in-progress post! Let's suffer together.
My problem turned out to be the calls to CreateConstantBufferView()/CreateShaderResourceView() - I was passing srvHeap->GetCPUDescriptorHandleForHeapStart() as the destDescriptor handle. These need to be offset to match your table layout (the offsetInDescriptorsFromTableStart param of CD3DX12_DESCRIPTOR_RANGE1).
I found it easier to maintain a single D3D12_CPU_DESCRIPTOR_HANDLE into the heap and increment handle.ptr after every call to CreateSomethingView() that uses that heap.
CD3DX12_DESCRIPTOR_RANGE1 rangesV[1] = {{}};
CD3DX12_DESCRIPTOR_RANGE1 rangesP[1] = {{}};
// Vertex
rangesV[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_CBV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_NONE, 0); // b0 at desc offset 0
// Pixel
rangesP[0].Init(D3D12_DESCRIPTOR_RANGE_TYPE_SRV, 1, 0, 0, D3D12_DESCRIPTOR_RANGE_FLAG_NONE, 1); // t0 at desc offset 1
CD3DX12_ROOT_PARAMETER1 rootParameters[2] = {{}};
rootParameters[0].InitAsDescriptorTable(1, &rangesV[0], D3D12_SHADER_VISIBILITY_VERTEX);
rootParameters[1].InitAsDescriptorTable(1, &rangesP[0], D3D12_SHADER_VISIBILITY_PIXEL);
D3D12_CPU_DESCRIPTOR_HANDLE srvHeapHandle = srvHeap->GetCPUDescriptorHandleForHeapStart();
// ----
device->CreateConstantBufferView(&cbvDesc, srvHeapHandle);
srvHeapHandle.ptr += device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
// ----
device->CreateShaderResourceView(texture, &srvDesc, srvHeapHandle);
srvHeapHandle.ptr += device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV);
Perhaps an enum would help keep it tidier and more maintainable, though. I'm still experimenting.
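For instance, a minimal sketch of that enum idea, matching the two-slot layout above (the HeapSlot names and the CpuHandleFor helper are hypothetical, not part of the code above):
// Hypothetical layout enum: one entry per descriptor slot in the CBV/SRV heap.
enum HeapSlot : UINT
{
    HeapSlot_CBV = 0,   // b0 at descriptor offset 0
    HeapSlot_SRV = 1,   // t0 at descriptor offset 1
    HeapSlot_Count
};
// Returns the CPU handle for a given slot; assumes the same device and srvHeap as above.
inline D3D12_CPU_DESCRIPTOR_HANDLE CpuHandleFor(ID3D12Device* device,
                                                ID3D12DescriptorHeap* srvHeap,
                                                HeapSlot slot)
{
    return CD3DX12_CPU_DESCRIPTOR_HANDLE(
        srvHeap->GetCPUDescriptorHandleForHeapStart(),
        (INT)slot,
        device->GetDescriptorHandleIncrementSize(D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV));
}
// Usage, mirroring the calls above:
// device->CreateConstantBufferView(&cbvDesc, CpuHandleFor(device, srvHeap, HeapSlot_CBV));
// device->CreateShaderResourceView(texture, &srvDesc, CpuHandleFor(device, srvHeap, HeapSlot_SRV));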

Read and convert Monochrome bitmap file into CByteArray MFC

In my MFC project, I need to read a monochrome bitmap file and convert it into a CByteArray. When I read the bitmap file using the CFile class in read mode, it seems to give a greater length than the original image data.
My MFC code:
CFile ImgFile;
CFileException FileExcep;
CByteArray* pBinaryImage = NULL;
strFilePath.Format("%s", "D:\\Test\\Graphics0.bmp");
if(!ImgFile.Open((LPCTSTR)strFilePath, CFile::modeReadWrite, &FileExcep))
{
    return NULL;
}
pBinaryImage = new CByteArray();
pBinaryImage->SetSize(ImgFile.GetLength());
// get the byte array's underlying buffer pointer
LPVOID lpvDest = pBinaryImage->GetData();
// perform a massive copy from the file to byte array
if(lpvDest)
{
    ImgFile.Read(lpvDest, pBinaryImage->GetSize());
}
ImgFile.Close();
Note: the file length is used to set the size of the byte array object.
I checked against C# with the following sample:
Bitmap bmpImage = (Bitmap)Bitmap.FromFile("D:\\Test\\Graphics0.bmp");
ImageConverter ic = new ImageConverter();
byte[] ImgByteArray = (byte[])ic.ConvertTo(bmpImage, typeof(byte[]));
Comparing the sizes of "pBinaryImage" and "ImgByteArray", they are not the same, and I guess the "ImgByteArray" size is the correct one, since from that array I can get my original bitmap back.
As I noted in comments, by reading the whole file with CFile, you are also reading the bitmap headers, which will corrupt your data.
Here is an example function, showing how to load a monochrome bitmap from file, wrap it in MFC's CBitmap object, query the dimensions etc. and read the pixel data into an array:
void LoadMonoBmp(LPCTSTR szFilename)
{
    // load bitmap from file
    HBITMAP hBmp = (HBITMAP)LoadImage(NULL, szFilename, IMAGE_BITMAP, 0, 0,
                                      LR_LOADFROMFILE | LR_MONOCHROME);
    // wrap in a CBitmap for convenience
    CBitmap *pBmp = CBitmap::FromHandle(hBmp);
    // get dimensions etc.
    BITMAP pBitMap;
    pBmp->GetBitmap(&pBitMap);
    // allocate a buffer for the pixel data
    unsigned int uBufferSize = pBitMap.bmWidthBytes * pBitMap.bmHeight;
    unsigned char *pPixels = new unsigned char[uBufferSize];
    // load the pixel data
    pBmp->GetBitmapBits(uBufferSize, pPixels);
    // ... do something with the data ....
    // release pixel data
    delete [] pPixels;
    pPixels = NULL;
    // free the bmp
    DeleteObject(hBmp);
}
The BITMAP structure will give you information about the bitmap (MSDN here) and, for a monochrome bitmap, the bits will be packed into the bytes you read. This may be another difference with the C# code, where it is possible that each bit is unpacked into a whole byte. In the MFC version, you will need to interpret this data correctly.
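As an illustration, here is a minimal sketch of reading one pixel from the packed buffer filled in above (the IsPixelSet helper is hypothetical, not part of the code above):
// Returns true if pixel (x, y) is set in a packed 1-bpp buffer where each row
// occupies widthBytes bytes (bmWidthBytes above) and the most significant bit
// of each byte is the leftmost pixel.
bool IsPixelSet(const unsigned char *pPixels, int widthBytes, int x, int y)
{
    unsigned char packed = pPixels[y * widthBytes + (x / 8)];
    return ((packed >> (7 - (x % 8))) & 1) != 0;
}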

c# - byte array improper conversion to MB

The file is about 24 MB, and it's held in a database, so I convert it to a byte array. Then, after multiple suggestions, I used BitConverter.ToSingle(), and it is giving me bad results. Here's my code:
byte[] imgData = prod.ImageData;
float myFloat = BitConverter.ToSingle(imgData, 0);
float mb = (myFloat / 1024f) / 1024f;
When I debug, I get these results:
byte[24786273]
myFloat = 12564.0361
mb = 0.0119819986
What is weird is that the size of the array is exactly what the file size should be. How do I correctly convert this to a float so that it shows the size in MB?
EDIT: I tried setting myFloat to imgData.Length, and then the size is correct. However, is this a correct way to do it, and can it cause a problem in the future with bigger values?
You are taking the first four bytes of the image and converting them to an IEEE floating-point value. I'm not an expert on image files, so I'm not sure whether the first four bytes are always the length; even if that were the case, it would still not be correct (see the specification). However, the length of the file is already known through the length of the array, so an easier way to get the size is:
byte[] imgData = prod.ImageData;
float mb = (imgData.Length / 1024f) / 1024f;
To address your concerns: this will still work for large files; consider a 24 TB example.
var bytes = 24L * 1024 * 1024 * 1024 * 1024;
var inMb = (bytes / 1024.0F / 1024.0F);

Memory leak in CoreGraphics while switching sprite texture. Code inside

EDIT: I tried using the push/pop thing, but now it crashes.
I have a feeling what I'm attempting to do is way off. Is there any way to just get Core Graphics drawing showing up on the screen? I need something that can be updated every frame, like drawing a line between two points that are always moving around.
Even if someone knows of a completely different alternative, I'll try it.
in .h
CGContextRef context;
in .m
in init method
int width = 100;
int height = 100;
void *buffer = calloc(1, width * height * 4);
context = CreateBitmapContextWithData(width, height, buffer);
CGContextSetRGBFillColor(context, 1, 1, 1, 1);
CGContextAddRect(context, CGRectMake(0, 0, width, height));
CGContextFillPath(context);
CGImageRef image = CGBitmapContextCreateImage(context);
hud_sprite = [CCSprite spriteWithCGImage:image key:@"hud_image1"];
free(buffer);
free(image);
hud_sprite.anchorPoint = CGPointMake(0, 0);
hud_sprite.position = CGPointMake(0, 0);
[self addChild:hud_sprite z:100];
in a method I call when I want to update it.
int width = 100;
int height = 100;
UIGraphicsPushContext(context);
CGContextClearRect(context, CGRectMake(0, 0, width, height)); //<-- crashes here. bad access...
CGContextSetRGBFillColor(context, random_float(0, 1),random_float(0, 1),random_float(0, 1), .8);
CGContextAddRect(context, CGRectMake(0, 0, width, height));
CGContextFillPath(context);
CGImageRef image = CGBitmapContextCreateImage(context);
UIGraphicsPopContext();
//CGContextRelease(ctx);
[[CCTextureCache sharedTextureCache] removeTextureForKey:@"hud_image1"];
[hud_sprite setTexture:[[CCTextureCache sharedTextureCache] addCGImage:image forKey:@"hud_image1"]];
free(image);
You are calling UIGraphicsPushContext(context). You must balance this with UIGraphicsPopContext(). Since you're not calling UIGraphicsPopContext(), you are leaving context on UIKit's graphics context stack, so it never gets deallocated.
Also, you are calling UIGraphicsBeginImageContext, which creates a new graphics context that you later release (correctly) by calling UIGraphicsEndImageContext. But you never use this context. You would access the context by calling UIGraphicsGetCurrentContext, but you never call that.
UPDATE
Never call free on a Core Foundation object!
You are getting a CGImage (which is a Core Foundation object) with this statement:
CGImageRef image = CGBitmapContextCreateImage(context);
Then later you are calling free on it:
free(image);
You must never do that.
Go read the Memory Management Programming Guide for Core Foundation. When you are done with a Core Foundation object, and you have ownership of it (because you got it from a Create function or a Copy function), you must release it with a Release function. In this case you can use either CFRelease or CGImageRelease:
CGImageRelease(image);
Furthermore, you are allocating buffer using calloc, then passing it to CreateBitmapContextWithData (which I guess is your wrapper for CGBitmapContextCreateWithData), and then freeing buffer. But context keeps a pointer to the buffer. So after you free(buffer), context has a dangling pointer. That's why you're crashing. You cannot free buffer until after you have released context.
The best way to handle this is to let CGBitmapContextCreateWithData take care of allocating the buffer itself by passing NULL as the first (data) argument. There is no reason to allocate the buffer yourself in this case.
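For instance, a minimal sketch of that approach, assuming the same 100x100 RGBA layout used in the init method above. It uses CGBitmapContextCreate, which behaves the same as CGBitmapContextCreateWithData when data is NULL but needs no release callback, and it stands in for the CreateBitmapContextWithData wrapper, whose implementation isn't shown:
// Pass NULL for the data argument so Core Graphics allocates and owns the pixel buffer.
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
context = CGBitmapContextCreate(NULL, width, height,
                                8,           // bits per component
                                width * 4,   // bytes per row (RGBA)
                                colorSpace,
                                (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
// ... draw into context and create images with CGBitmapContextCreateImage as before ...
// When you are finished with the context, release it; its buffer is freed with it.
CGContextRelease(context);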

GetDIBits: bitmap modifications, but crashes out?

I'm trying to modify the bitmap with GetDIBits, but I'm not sure how to go about it. I tried lpvBits, but it crashes in the comparison in the "pig" loop. How should I do this? Thanks.
LPVOID lpvBits=NULL; // pointer to bitmap bits array
BITMAPINFO bi;
ZeroMemory(&bi.bmiHeader, sizeof(BITMAPINFOHEADER));
bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
if (!GetDIBits(dc, m_bmp, 0, 400, lpvBits, &bi, DIB_RGB_COLORS))
AfxMessageBox("1");
char *pig = (char*)lpvBits;
for (int m = 0; m < 100; m++)
{
    if (pig[m] > 100)
    {
        pig[m] = 250;
    }
}
SetDIBits(dc, m_bmp, 0, 400, (void *)pig, &bi, DIB_RGB_COLORS);
http://msdn.microsoft.com/en-us/library/dd144879(v=vs.85).aspx
lpvBits [out]
A pointer to a buffer to receive the bitmap data. If this parameter is NULL, the function passes the dimensions and format of the bitmap to the BITMAPINFO structure pointed to by the lpbi parameter.
Examples can be found here:
http://msdn.microsoft.com/en-us/library/dd183402(v=vs.85).aspx
http://msdn.microsoft.com/en-us/library/ms969901.aspx
http://www.codeproject.com/KB/graphics/drawing2bitmap.aspx
http://www.cplusplus.com/forum/general/28469/
Read the documentation for GetDIBits carefully: the lpvBits pointer is not returned to you. You need to allocate enough memory for the bitmap data you want to fetch and pass it to GetDIBits, which fills it in with image data.
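A minimal sketch of that two-call pattern, assuming the same dc and m_bmp as in the question (std::vector from <vector> is used for the buffer):
BITMAPINFO bi;
ZeroMemory(&bi, sizeof(bi));
bi.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
// First call: lpvBits is NULL, so GetDIBits only fills in bi.bmiHeader with the
// bitmap's dimensions and format.
if (!GetDIBits(dc, m_bmp, 0, 0, NULL, &bi, DIB_RGB_COLORS))
    return;
// Ask for an uncompressed 32-bpp layout so the buffer size is easy to compute.
bi.bmiHeader.biBitCount = 32;
bi.bmiHeader.biCompression = BI_RGB;
int bytesPerRow = bi.bmiHeader.biWidth * 4;
int height = abs(bi.bmiHeader.biHeight);
std::vector<BYTE> bits(bytesPerRow * height);
// Second call: the buffer is now large enough for GetDIBits to fill with pixel data.
if (!GetDIBits(dc, m_bmp, 0, height, bits.data(), &bi, DIB_RGB_COLORS))
    return;
// ... modify the pixel data in bits ...
SetDIBits(dc, m_bmp, 0, height, bits.data(), &bi, DIB_RGB_COLORS);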
