+ (CFArrayRef)getLinesForText:(NSAttributedString *)text width:(CGFloat)width {
    UIBezierPath *path = [UIBezierPath bezierPathWithRect:CGRectMake(0, 0, width, 10000)];
    CTFramesetterRef frameSetterRef = CTFramesetterCreateWithAttributedString((__bridge CFAttributedStringRef)text);
    CTFrameRef frameRef = CTFramesetterCreateFrame(frameSetterRef, CFRangeMake(0, 0), path.CGPath, nil);
    CFArrayRef lines = CFArrayCreateCopy(NULL, CTFrameGetLines(frameRef));
    CFRelease(frameRef);
    CFRelease(frameSetterRef);
    return lines;
}
When using Core Text, I have some code like this. After calling the method, I use CFRelease to release the returned lines. But when I profile with the Leaks instrument, this code still leaks. How can this happen?
It was my fault: I need to use CFAutorelease when the method returns, rather than making the caller release the returned lines.
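For reference, a minimal sketch of the corrected helper, written here as a plain C function using a CGPath in place of the UIBezierPath (the original is an Objective-C class method, but the Core Foundation calls are the same; the function name is illustrative):

#include <CoreText/CoreText.h>

// Sketch: return an autoreleased copy of the lines, so a caller that follows
// the "get" naming convention (and therefore does not release) won't leak.
static CFArrayRef GetLinesForText(CFAttributedStringRef text, CGFloat width) {
    CGPathRef path = CGPathCreateWithRect(CGRectMake(0, 0, width, 10000), NULL);
    CTFramesetterRef framesetter = CTFramesetterCreateWithAttributedString(text);
    CTFrameRef frame = CTFramesetterCreateFrame(framesetter, CFRangeMake(0, 0), path, NULL);
    CFArrayRef lines = CFArrayCreateCopy(kCFAllocatorDefault, CTFrameGetLines(frame));
    CFRelease(frame);
    CFRelease(framesetter);
    CGPathRelease(path);
    return (CFArrayRef)CFAutorelease(lines); // hand ownership to the autorelease pool
}

Alternatively, keeping the caller's CFRelease and renaming the method to follow the Core Foundation Create/Copy ownership convention (e.g. copyLinesForText:width:) would also be consistent.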
I am learning DirectX 12 from this guide here, and one thing that has me confused is how they resize the depth buffer. This is how they do it below:
void Tutorial2::ResizeDepthBuffer(int width, int height)
{
    if (m_ContentLoaded)
    {
        // Flush any GPU commands that might be referencing the depth buffer.
        Application::Get().Flush(); // this waits for any pending command lists to finish executing on the command queue

        width = std::max(1, width);
        height = std::max(1, height);

        auto device = Application::Get().GetDevice();

        // Resize screen dependent resources.
        // Create a depth buffer.
        D3D12_CLEAR_VALUE optimizedClearValue = {};
        optimizedClearValue.Format = DXGI_FORMAT_D32_FLOAT;
        optimizedClearValue.DepthStencil = { 1.0f, 0 };

        ThrowIfFailed(device->CreateCommittedResource(
            &CD3DX12_HEAP_PROPERTIES(D3D12_HEAP_TYPE_DEFAULT),
            D3D12_HEAP_FLAG_NONE,
            &CD3DX12_RESOURCE_DESC::Tex2D(DXGI_FORMAT_D32_FLOAT, width, height,
                1, 0, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL),
            D3D12_RESOURCE_STATE_DEPTH_WRITE,
            &optimizedClearValue,
            IID_PPV_ARGS(&m_DepthBuffer)
        ));

        // Update the depth-stencil view.
        D3D12_DEPTH_STENCIL_VIEW_DESC dsv = {};
        dsv.Format = DXGI_FORMAT_D32_FLOAT;
        dsv.ViewDimension = D3D12_DSV_DIMENSION_TEXTURE2D;
        dsv.Texture2D.MipSlice = 0;
        dsv.Flags = D3D12_DSV_FLAG_NONE;

        device->CreateDepthStencilView(m_DepthBuffer.Get(), &dsv,
            m_DSVHeap->GetCPUDescriptorHandleForHeapStart());
    }
}
Do we have to release old depth/stencil views? In the above code they just overwrite the descriptor in the descriptor heap with a new view (CreateDepthStencilView) without releasing the old one. Is that a leak?
This is the GitHub link to the code.
(If the view lives in a descriptor in a descriptor heap, versus being just a stack-based view description, do I need to deallocate both of them, and if so, how?)
The SRV, CBV, UAV, RTV, and DSV "views" in DirectX 12 live in memory 'owned' by the heaps they are allocated into. You can just reuse those slots if you want: the Create*View methods merely fill out data in that memory, and the memory itself is freed when the associated heap is freed.
Vertex Buffer and Index Buffer Views are just simple structures as well.
The ref-counted objects you need to make sure you release are the ID3D12Resource and ID3D12Heap objects.
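To make those ownership rules concrete, here is a minimal sketch under the same assumptions as the tutorial (a ComPtr-held resource and the d3dx12.h helpers); the function and parameter names are illustrative, not the tutorial's:

#include <d3d12.h>
#include <wrl/client.h>
#include "d3dx12.h" // CD3DX12_* helpers from the DirectX samples
using Microsoft::WRL::ComPtr;

// Illustrative sketch: recreating a depth buffer on resize without leaking.
void RecreateDepthBuffer(ID3D12Device* device,
                         ID3D12DescriptorHeap* dsvHeap,
                         ComPtr<ID3D12Resource>& depthBuffer,
                         UINT width, UINT height)
{
    // (Flush the GPU first, as the tutorial does, so no in-flight command
    // list still references the old resource.)
    D3D12_CLEAR_VALUE clearValue = {};
    clearValue.Format = DXGI_FORMAT_D32_FLOAT;
    clearValue.DepthStencil = { 1.0f, 0 };

    CD3DX12_HEAP_PROPERTIES heapProps(D3D12_HEAP_TYPE_DEFAULT);
    CD3DX12_RESOURCE_DESC desc = CD3DX12_RESOURCE_DESC::Tex2D(
        DXGI_FORMAT_D32_FLOAT, width, height,
        1, 0, 1, 0, D3D12_RESOURCE_FLAG_ALLOW_DEPTH_STENCIL);

    // ReleaseAndGetAddressOf() drops the ComPtr's reference to the old
    // ID3D12Resource; that is the only "release" step a resize needs.
    HRESULT hr = device->CreateCommittedResource(
        &heapProps, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_DEPTH_WRITE, &clearValue,
        IID_PPV_ARGS(depthBuffer.ReleaseAndGetAddressOf()));
    if (FAILED(hr)) { /* handle the error, e.g. the tutorial's ThrowIfFailed */ }

    // Overwriting the descriptor slot is fine: a view is plain data inside
    // the heap's memory, so there is nothing to free for the old view.
    device->CreateDepthStencilView(depthBuffer.Get(), nullptr,
                                   dsvHeap->GetCPUDescriptorHandleForHeapStart());
}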
In addition to that tutorial, you may want to take a look at DirectX Tool Kit for DX12.
In my application's InitInstance function, I have the following code to rewrite the location of the CHM Help Documentation:
CString strHelp = GetProgramPath();
strHelp += _T("MeetSchedAssist.CHM");
free((void*)m_pszHelpFilePath);
m_pszHelpFilePath = _tcsdup(strHelp);
It is all functional but it gives me a code analysis warning:
C26408 Avoid malloc() and free(), prefer the nothrow version of new with delete (r.10).
When you look at the official documentation for m_pszHelpFilePath it does state:
If you assign a value to m_pszHelpFilePath, it must be dynamically allocated on the heap. The CWinApp destructor calls free() with this pointer. You may want to use the _tcsdup() run-time library function to do the allocating. Also, free the memory associated with the current pointer before assigning a new value.
Is it possible to rewrite this code to avoid the code analysis warning, or must I add a __pragma?
You could (should?) use a smart pointer to wrap your reallocated m_pszHelpFilePath buffer. This is not entirely trivial, but it can be accomplished without too much trouble.
First, declare an appropriate std::unique_ptr member in your derived application class:
class MyApp : public CWinApp // Presumably
{
    // Add this member...
public:
    std::unique_ptr<TCHAR[]> spHelpPath;
    // ...
};
Then, you will need to modify the code that constructs and assigns the help path as follows (I've changed your C-style cast to an arguably better C++ cast):
// First three (almost) lines as before ...
CString strHelp = GetProgramPath();
strHelp += _T("MeetSchedAssist.CHM");
free(const_cast<TCHAR*>(m_pszHelpFilePath));

// Next, allocate the smart pointer's data and copy the string...
size_t strSize = static_cast<size_t>(strHelp.GetLength() + 1);
spHelpPath = std::make_unique<TCHAR[]>(strSize);
_tcscpy_s(spHelpPath.get(), strSize, strHelp.GetString()); // Use the "_s" 'safe' version!

// Now, we can use the embedded raw pointer for m_pszHelpFilePath ...
m_pszHelpFilePath = spHelpPath.get();
So far, so good. The data allocated in the smart pointer will be automatically freed when your application object is destroyed, and the code analysis warnings should disappear. However, there is one last modification we need to make: preventing the MFC framework from attempting to free our assigned m_pszHelpFilePath pointer. This can be done by setting it to nullptr in the MyApp class override of ExitInstance:
int MyApp::ExitInstance()
{
    // <your other exit-time code>
    m_pszHelpFilePath = nullptr;
    return CWinApp::ExitInstance(); // Call base class
}
However, this may seem like much ado about nothing and, as others have said, you may be justified in simply suppressing the warning.
Technically, you can take advantage of the fact that new/delete map to plain malloc/free by default in Visual C++, and just go ahead and replace them. Portability won't suffer much, as MFC is not portable anyway. You can of course use unique_ptr<TCHAR[]> instead of direct new/delete, like this:
CString strHelp = GetProgramPath();
strHelp += _T("MeetSchedAssist.CHM");

std::unique_ptr<TCHAR[]> str_old(const_cast<LPTSTR>(m_pszHelpFilePath));
auto str_new = std::make_unique<TCHAR[]>(strHelp.GetLength() + 1);
_tcscpy_s(str_new.get(), strHelp.GetLength() + 1, strHelp.GetString());
m_pszHelpFilePath = str_new.release();
str_old.reset();
For robustness against a replaced operator new, and by the principle of least surprise, you should keep free/_tcsdup instead.
If you replace several of those CWinApp strings, consider writing a function for them, so that there is a single place holding the free/_tcsdup pair and its suppressed warnings, as sketched below.
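Such a helper could look something like this (the name is illustrative; it assumes an MFC build where <afxwin.h> and <tchar.h> are already included). CWinApp's destructor calls free() on these members, so the allocation must stay malloc-based:

// One central place for the free/_tcsdup pair and the warning suppression.
void ReplaceAppString(LPCTSTR& memberPath, const CString& newValue)
{
#pragma warning(suppress : 26408) // r.10: CWinApp's contract requires malloc/free
    free(const_cast<LPTSTR>(memberPath));
    memberPath = _tcsdup(newValue.GetString()); // CWinApp's destructor will free() this
}

// Usage in InitInstance:
//   ReplaceAppString(m_pszHelpFilePath, GetProgramPath() + _T("MeetSchedAssist.CHM"));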
EDIT: I tried using the push/pop thing, but now it crashes.
I have a feeling that what I'm attempting to do is way off. Is there any way to just get Core Graphics drawing showing up on the screen? I need something that can be updated every frame, like a line drawn between two points that keep moving around. Even if someone knows of a completely different alternative, I'll try it.
In the .h:
CGContextRef context;
In the .m, in the init method:
int width = 100;
int height = 100;

void *buffer = calloc(1, width * height * 4);
context = CreateBitmapContextWithData(width, height, buffer);

CGContextSetRGBFillColor(context, 1, 1, 1, 1);
CGContextAddRect(context, CGRectMake(0, 0, width, height));
CGContextFillPath(context);

CGImageRef image = CGBitmapContextCreateImage(context);
hud_sprite = [CCSprite spriteWithCGImage:image key:@"hud_image1"];
free(buffer);
free(image);

hud_sprite.anchorPoint = CGPointMake(0, 0);
hud_sprite.position = CGPointMake(0, 0);
[self addChild:hud_sprite z:100];
In a method I call when I want to update it:
int width = 100;
int height = 100;

UIGraphicsPushContext(context);

CGContextClearRect(context, CGRectMake(0, 0, width, height)); // <-- crashes here. bad access...
CGContextSetRGBFillColor(context, random_float(0, 1), random_float(0, 1), random_float(0, 1), .8);
CGContextAddRect(context, CGRectMake(0, 0, width, height));
CGContextFillPath(context);

CGImageRef image = CGBitmapContextCreateImage(context);
UIGraphicsPopContext();
//CGContextRelease(ctx);

[[CCTextureCache sharedTextureCache] removeTextureForKey:@"hud_image1"];
[hud_sprite setTexture:[[CCTextureCache sharedTextureCache] addCGImage:image forKey:@"hud_image1"]];
free(image);
You are calling UIGraphicsPushContext(context). You must balance this with UIGraphicsPopContext(). Since you're not calling UIGraphicsPopContext(), you are leaving context on UIKit's graphics context stack, so it never gets deallocated.
Also, you are calling UIGraphicsBeginImageContext, which creates a new graphics context that you later release (correctly) by calling UIGraphicsEndImageContext. But you never use this context. You would access the context by calling UIGraphicsGetCurrentContext, but you never call that.
UPDATE
Never call free on a Core Foundation object!
You are getting a CGImage (which is a Core Foundation object) with this statement:
CGImageRef image = CGBitmapContextCreateImage(context);
Then later you are calling free on it:
free(image);
You must never do that.
Go read the Memory Management Programming Guide for Core Foundation. When you are done with a Core Foundation object, and you have ownership of it (because you got it from a Create function or a Copy function), you must release it with a Release function. In this case you can use either CFRelease or CGImageRelease:
CGImageRelease(image);
Furthermore, you are allocating buffer using calloc, then passing it to CreateBitmapContextWithData (which I guess is your wrapper for CGBitmapContextCreateWithData), and then freeing buffer. But context keeps a pointer to the buffer. So after you free(buffer), context has a dangling pointer. That's why you're crashing. You cannot free buffer until after you have released context.
The best way to handle this is to let CGBitmapContextCreateWithData take care of allocating the buffer itself by passing NULL as the first (data) argument. There is no reason to allocate the buffer yourself in this case.
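A minimal sketch of that approach (the function name is illustrative):

#include <CoreGraphics/CoreGraphics.h>

// Pass NULL for the data parameter and let Core Graphics allocate (and, on
// CGContextRelease, free) the pixel buffer, so no buffer of ours can dangle.
static CGContextRef CreateHudBitmapContext(size_t width, size_t height) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL,   // CG owns the backing store
                                                 width, height,
                                                 8,      // bits per component
                                                 0,      // bytes per row: 0 = let CG choose
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    return context; // balance with CGContextRelease when you are done with it
}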
I built a TTF-to-D3D-texture function using FreeType2 (2.3.9) to generate grayscale maps from fonts. It works great under native Win32; however, on WoW64 it just explodes (well, FT_Done_Face and FT_Load_Glyph do). From some debugging, it seems to be a problem with HeapFree as called by free from FT_Free.
I know it should work, as games like WCIII, which to the best of my knowledge use FreeType2, run fine. Here is my code, stripped of the D3D code (which causes no problems on its own):
FT_Face pFace = NULL;
FT_Error nError = 0;
FT_Byte* pFont = static_cast<FT_Byte*>(ARCHIVE_LoadFile(pBuffer, &nSize));

if((nError = FT_New_Memory_Face(pLibrary, pFont, nSize, 0, &pFace)) == 0)
{
    FT_Set_Char_Size(pFace, nSize << 6, nSize << 6, 96, 96);
    for(unsigned char c = 0; c < 95; c++)
    {
        if(!FT_Load_Glyph(pFace, FT_Get_Char_Index(pFace, c + 32), FT_LOAD_RENDER))
        {
            FT_Glyph pGlyph;
            if(!FT_Get_Glyph(pFace->glyph, &pGlyph))
            {
                LOG("GET: %c", c + 32);
                FT_Glyph_To_Bitmap(&pGlyph, FT_RENDER_MODE_NORMAL, 0, 1);
                FT_BitmapGlyph pGlyphMap = reinterpret_cast<FT_BitmapGlyph>(pGlyph);
                FT_Bitmap* pBitmap = &pGlyphMap->bitmap;
                const size_t nWidth = pBitmap->width;
                const size_t nHeight = pBitmap->rows;
                //add to texture atlas
            }
        }
    }
}
else
{
    FT_Done_Face(pFace);
    delete pFont;
    return FALSE;
}

FT_Done_Face(pFace);
delete pFont;
return TRUE;
}
ARCHIVE_LoadFile returns blocks allocated with new.
As a secondary question, I would like to render a font using pixel sizes. I came across FT_Set_Pixel_Sizes, but I'm unsure whether it stretches the font to fit the size or bounds the font to the size. What I would like to do is render all the glyphs at, say, 24px (the MS Word size here), then turn them into a signed distance field in a 32px area.
Update
After much fiddling, I got a test app to work, which leads me to think the problems arise from threading, as my code runs in a secondary thread. I have compiled FreeType into a static lib using the multithreaded DLL runtime, and my app uses the multithreaded libs. I'm going to see if I can set up a multithreaded test.
I also updated to 2.4.4, to see if the problem was a known but already-fixed bug; that didn't help, however.
Update 2
After some more fiddling, it turns out I wasn't using the correct lib for 2.4.4. After fixing that, the test app works 100%, but the main app still crashes when FT_Done_Face is called; it still seems to be a crash in the Windows heap management. Is it possible that there is a bug in FreeType2 that makes it blow up under user-created threads?
I've got a strange problem and really don't understand what's going on.
I made my application multi-threaded using the MFC multithreading classes.
Everything works well so far, but now:
Somewhere in the beginning of the code I create the threads:
m_bucketCreator = new BucketCreator(128, 128, 32);
CEvent* updateEvent = new CEvent(FALSE, FALSE);
CWinThread** threads = new CWinThread*[numThreads];
for(int i = 0; i < 8; i++){
    threads[i] = AfxBeginThread(&MyClass::threadfunction, updateEvent);
    m_activeRenderThreads++;
}
This creates 8 threads that work on this function:
UINT MyClass::threadfunction(LPVOID params) // executed in new thread
{
    Bucket* bucket = m_bucketCreator.getNextBucket();
    // ...do something with bucket...
    delete bucket;
}
m_bucketCreator is a static member. Now I get a thread error in the destructor of Bucket on the attempt to delete the buffer (however, the way I understand it, this buffer should be in the memory of this thread, so I don't get why there is an error). On the attempt to delete[] buffer, the error happens in _CrtIsValidHeapPointer() in dbgheap.c.
Visual Studio outputs the message that it has triggered a breakpoint, and that this can be either due to heap corruption or because the user pressed F12 (I didn't ;)).
class BucketCreator {
public:
    BucketCreator();
    ~BucketCreator(void);

    void init(int resX, int resY, int bucketSize);

    Bucket* getNextBucket(){
        Bucket* bucket = NULL;

        //enter critical section
        CSingleLock singleLock(&m_criticalSection);
        singleLock.Lock();

        int height = min(m_resolutionY - m_nextY, m_bucketSize);
        int width = min(m_resolutionX - m_nextX, m_bucketSize);
        bucket = new Bucket(width, height);

        //leave critical section
        singleLock.Unlock();
        return bucket;
    }

private:
    int m_resolutionX;
    int m_resolutionY;
    int m_bucketSize;
    int m_nextX;
    int m_nextY;

    //multithreading:
    CCriticalSection m_criticalSection;
};
and class Bucket:
class Bucket : public CObject{
    DECLARE_DYNAMIC(RenderBucket)

public:
    Bucket(int a_resX, int a_resY){
        resX = a_resX;
        resY = a_resY;
        buffer = new float[3 * resX * resY];
        int buffersize = 3 * resX * resY;
        for (int i = 0; i < buffersize; i++){
            buffer[i] = 0;
        }
    }

    ~Bucket(void){
        delete[] buffer;
        buffer = NULL;
    }

    int getResX(){return resX;}
    int getResY(){return resY;}
    float* getBuffer(){return buffer;}

private:
    int resX;
    int resY;
    float* buffer;

    Bucket& operator = (const Bucket& other) { /*..*/ }
    Bucket(const Bucket& other) {/*..*/}
};
Can anyone tell me what could be the problem here?
Edit: this is the other static function I'm calling from the threads. Is this safe to do?
static std::vector<Vector3> generate_poisson(double width, double height, double min_dist, int k, std::vector<std::vector<Vector3> > existingPoints)
{
    CSingleLock singleLock(&m_criticalSection);
    singleLock.Lock();

    std::vector<Vector3> samplePoints = std::vector<Vector3>();
    // ...fill the vector...

    singleLock.Unlock();
    return samplePoints;
}
All the previous replies are sound. For the copy constructor, make sure that it doesn't just copy the buffer pointer: it needs to allocate a new buffer, not duplicate the pointer value, otherwise two objects own the same buffer and that will cause an error in 'delete'. But I don't get the impression that the copy constructor gets called in your code.
I've looked at the code and I am not seeing any error in it as is. Note that the thread synchronization isn't even necessary in this getNextBucket code, since it's returning a local variable, and those are per-thread.
Errors in ValidateHeapPointer occur because something has corrupted the heap, which happens when a pointer writes past a block of memory. Often it's a for() loop that goes too far, a buffer that wasn't allocated large enough, etc.
The error is reported during a call to 'delete' because that's when the heap is validated in debug mode. However, the error occurred before that time; it just happens that the heap is checked only in 'new' and 'delete'. Also, it isn't necessarily related to the 'Bucket' class.
What you need to find this bug, short of using tools like BoundsChecker or HeapValidator, is to comment out sections of your code until the error goes away; then you'll have found the offending code.
There is another method to narrow down the problem: in debug mode, include <crtdbg.h> in your code, and sprinkle calls to _CrtCheckMemory() at various points of interest, as sketched below. That will generate the error when the heap is corrupted. Simply move the calls around in your code to narrow down at what point the corruption begins to occur.
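A minimal sketch of the technique (the function name is illustrative; this has an effect in debug builds only):

#include <crtdbg.h>

// _CrtCheckMemory() walks the debug heap and returns FALSE at the first sign
// of corruption, so asserting on it pinpoints where the overwrite first
// becomes visible.
void SomeSuspectCodePath()
{
    _ASSERTE(_CrtCheckMemory());   // heap still intact before the suspect work?
    // ... suspect allocations / buffer writes ...
    _ASSERTE(_CrtCheckMemory());   // did the code above corrupt the heap?
}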
I don't know which version of Visual C++ you are using. If you're using an earlier one like VC++ 6.0, make sure that you are using the Multithreaded DLL version of the C Run-Time Library in the compiler options.
You're constructing a RenderBucket. Are you sure you're calling the 'Bucket' class's constructor from there? It should look like this:
class RenderBucket : public Bucket {
public:
    RenderBucket(int a_resX, int a_resY)
        : Bucket(a_resX, a_resY)
    {
    }
};
Initializers in the Bucket class to set the buffer to NULL are a good idea. Also, making the default constructor and copy constructor private will help make doubly sure those aren't being used. Remember: the compiler will create these automatically if you don't:
Bucket();                                 // <-- default constructor
Bucket(int a_resx = 0, int a_resy = 0);   // <-- another way to make your default constructor
Bucket(const class Bucket &B);            // <-- copy constructor
You haven't made a private copy constructor, or any default constructor. If class Bucket is constructed via one of these implicitly-defined methods, buffer will either be uninitialized, or it will be a copied pointer made by a copy constructor.
The copy constructor for class Bucket is Bucket(const Bucket &B) -- if you do not explicitly declare a copy constructor, the compiler will generate a "naive" copy constructor for you.
In particular, if this object is assigned, returned, or otherwise copied, the copy constructor will copy the pointer to a new object. Eventually, both objects' destructors will attempt to delete[] the same pointer and the second attempt will be a double deletion, a type of heap corruption.
I recommend you make class Bucket's copy constructor private, which will cause attempted copy construction to generate a compile error. As an alternative, you could implement a copy constructor which allocates new space for the copied buffer.
Exactly the same applies to the assignment operator, operator=.
The need for a copy constructor is one of the 55 tips in Scott Meyers' excellent book, Effective C++: 55 Specific Ways to Improve Your Programs and Designs:
This book should be required reading for all C++ programmers.
If you add:
class Bucket {
    /* Existing code as-is ... */

private:
    Bucket() { buffer = NULL; }              // No default construction
    Bucket(const Bucket &B) { ; }            // No copy construction
    Bucket& operator= (const Bucket &B) {;}  // No assignment
};
and re-compile, you are likely to find your problem.
There is also another possibility: If your code contains other uses of new and delete, then it is possible these other uses of allocated memory are corrupting the linked-list structure which defines the heap memory. It is common to detect this corruption during a call to delete, because delete must utilize these data structures.