Cairo.Surface is leaking... How to debug it with MonoDevelop? - memory-leaks

I have several questions related to Cairo and GTK# (which run on .NET and Mono). I'm developing a GTK# application for MS Windows and Linux. I'm using GTK# 2.12 on .NET right now while I work on the application.
I've created a custom widget that uses Cairo.ImageSurface and Cairo.Context objects. As far as I know, I'm calling the Dispose method of every ImageSurface object and every Context object I create inside the widget code.
The widget responds to the "MouseOver" event, redrawing some parts of its DrawingArea.
The (first) problem:
almost every redraw operation increases the amount of used memory slightly. Once the used memory has grown by 3 or 4 KB, the MonoDevelop trace log panel shows the following message:
Cairo.Surface is leaking, programmer is missing a call to Dispose
Set MONO_CAIRO_DEBUG_DISPOSE to track allocation traces
The code that redraws a part of the widget is something like:
// SRGB is a custom struct, not from Gdk nor Cairo
void paintSingleBlock(SRGB color, int i)
{
    using (Cairo.Context g = CairoHelper.Create (GdkWindow)) {
        paintSingleBlock (g, color, i);
        // We do this to avoid memory leaks. Cairo does not work well with the GC.
        g.GetTarget ().Dispose ();
        g.Dispose ();
    }
}

void paintSingleBlock(Cairo.Context g, SRGB color, int i)
{
    var scale = Math.Pow (10.0, TimeScale);
    g.Save ();
    g.Rectangle (x(i), y(i), w(i), h(i));
    g.ClosePath ();
    g.Restore ();
    // We don't directly use stb.Color because in some cases we need more flexibility
    g.SetSourceRGB (color.R, color.G, color.B);
    g.LineWidth = 0;
    g.Fill ();
}
The (second) problem: OK, MonoDevelop tells me that I should set MONO_CAIRO_DEBUG_DISPOSE to "track allocation traces" (in order to find the leak, I suppose)... but I don't know how to set this environment variable on Windows. I've tried using bash and executing something like:
MONO_CAIRO_DEBUG_DISPOSE=1 ./LightCreator.exe
But nothing appears on stderr or stdout... (not even the messages that appear in MonoDevelop's application trace panel). I also don't know how to get the debugging messages that I see inside MonoDevelop, but without MonoDevelop.
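One idea I have not tried yet is setting the variable from inside the program itself, before any Cairo object is created; I'm not sure whether Mono.Cairo reads the variable early enough for this to work, but something like this at the top of Main:

// Untested idea: set the variable in-process before any Gdk/Cairo type is used,
// hoping Mono.Cairo's dispose tracking picks it up when its static initializers run.
public static void Main (string[] args)
{
    Environment.SetEnvironmentVariable ("MONO_CAIRO_DEBUG_DISPOSE", "1");
    Gtk.Application.Init ();
    // ... create and show the main window here ...
    Gtk.Application.Run ();
}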
Is there anyone with experience debugging GTK# or Cairo# memory leaks?
Thanks in advance.

Just wanted to throw my 2c here, as I was fighting a similar leak problem in Cairo with surfaces. What I noticed is that if I create a Surface object, its ReferenceCount property becomes 1, and if I attach this surface to a Context it becomes not 2 but 3. After disposing the Context, the ReferenceCount comes back down, but only to 2.
So I used some reflection to call the native methods in Cairo to decrease the ReferenceCount when I really want to Dispose a surface. I use this code:
public static void HardDisposeSurface (this Surface surface)
{
    var handle = surface.Handle;
    long refCount = surface.ReferenceCount;
    surface.Dispose ();
    refCount--;
    if (refCount <= 0)
        return;

    var asm = typeof (Surface).Assembly;
    var nativeMethods = asm.GetType ("Cairo.NativeMethods");
    var surfaceDestroy = nativeMethods.GetMethod ("cairo_surface_destroy", BindingFlags.Static | BindingFlags.NonPublic);
    for (long i = refCount; i > 0; i--)
        surfaceDestroy.Invoke (null, new object [] { handle });
}
After using it I still have some leaks, but they seem to be related to other parts of Cairo, not to the surfaces.
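For reference, here is a minimal usage sketch (it assumes the extension method above sits in a static class that is in scope; the surface size and drawing are just placeholders):

// Draw to an image surface, then force-release the native surface.
var surface = new Cairo.ImageSurface (Cairo.Format.Argb32, 200, 100);
using (var cr = new Cairo.Context (surface)) {
    cr.SetSourceRGB (1, 0, 0);
    cr.Paint ();
}
// Instead of a plain surface.Dispose(), drop every remaining native reference.
surface.HardDisposeSurface ();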

I have found that a context created with CairoHelper.Create() will have a reference count of two.
A call to Dispose() reduces the reference count by one; thus the context is never freed, and it keeps its target alive, too.
The native objects have manual reference counting, but the Gtk# wrappers want to keep a native object alive as long as there is a C# instance referencing it.
If a native object is created for a C# wrapper instance it does not need to increment the reference count because the wrapper instance 'owns' the native object and the reference count has the correct value of one. But if a wrapper instance is created for an already existing native object the reference count of the native object needs to be manually incremented to keep the object alive.
This is decided by a bool parameter when a wrapper instance is created.
Looking at the code for CairoHelper.Create() shows something like this:
public static Cairo.Context Create(Gdk.Window window) {
    IntPtr raw_ret = gdk_cairo_create(window == null ? IntPtr.Zero : window.Handle);
    Cairo.Context ret = new Cairo.Context (raw_ret, false);
    return ret;
}
Even though the native context was just created, 'owned' will be false and the C# wrapper will increment the reference count.
There is no fixed version right now; it can only be corrected by patching the source and building Gtk# yourself.
CairoHelper is an auto-generated file; to change the parameter to true, this attribute must be included in gdk/Gdk.metadata:
<attr path="/api/namespace/class[@cname='GdkCairo_']/method[@name='Create']/return-type" name="owned">true</attr>
Everything needed to build Gtk# can be found here:
https://github.com/mono/gtk-sharp
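If rebuilding Gtk# is not practical, something along these lines may work as an application-side substitute for CairoHelper.Create. This is only a sketch: the class name is mine, it assumes the Cairo.Context(IntPtr, bool) constructor is public in your Gtk# version, and the native library name in the DllImport may need adjusting for your platform.

using System;
using System.Runtime.InteropServices;

public static class OwnedCairoHelper
{
    // Same native entry point the generated CairoHelper uses (GTK+ 2 library name shown).
    [DllImport ("libgdk-win32-2.0-0.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern IntPtr gdk_cairo_create (IntPtr window);

    public static Cairo.Context Create (Gdk.Window window)
    {
        IntPtr raw = gdk_cairo_create (window == null ? IntPtr.Zero : window.Handle);
        // Pass 'true' so the wrapper owns the native context: no extra reference
        // is taken, and Dispose() actually frees the context together with its target.
        return new Cairo.Context (raw, true);
    }
}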

Related

Getting an Animation's Frame as a Texture using C#

I have this code that I want to use to generate SpriteFrames resources from a reference SpriteFrames resource, so that these new resources have the same animations but use different (same size and layout) sprite sheets.
public void UpdateTexture(Texture _texture){
    SpriteFrames _referenceFrames = Sprite.Frames;
    SpriteFrames _updatedFrames = new SpriteFrames();
    foreach (string _anim in _referenceFrames.GetAnimationNames()){
        GD.Print(_anim);
        if (_anim != "default"){
            _updatedFrames.AddAnimation(_anim);
            _updatedFrames.SetAnimationSpeed(_anim, _referenceFrames.GetAnimationSpeed(_anim));
            _updatedFrames.SetAnimationLoop(_anim, _referenceFrames.GetAnimationLoop(_anim));
            for (int i = 0; i < _referenceFrames.GetFrameCount(_anim); i++)
            {
                AtlasTexture _updatedTexture = _referenceFrames.GetFrame(_anim, i).Duplicate();
                _updatedTexture.Atlas = _texture;
                _updatedFrames.AddFrame(_anim, _updatedTexture);
            }
        }
    }
    _updatedFrames.RemoveAnimation("default");
    _referenceFrames = _updatedFrames;
}
Using this code, I get an error; apparently, calling _referenceFrames.GetFrame(_anim, i).Duplicate(); returns an object of type Resource, not Texture. What can I do to get this Frame as a Texture so that the code properly executes?
Cast it.
What happens is that _referenceFrames.GetFrame(_anim, i) is a Texture... but Duplicate is defined in Resource (which is a base class of Texture), and it returns Resource. It is like old-school .NET Clone(), which returns Object. But what it returns is actually of the same type as the object you are calling it on.
So cast it. For example:
_referenceFrames.GetFrame(_anim, i).Duplicate() as Texture
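Applied to the loop in the question it might look like this; since the next line sets the Atlas property, the duplicate is presumably an AtlasTexture, so that is what I cast to here:

// Inside the for loop: Duplicate() returns a Resource, so cast it back
// to the concrete type before using it.
AtlasTexture _updatedTexture = _referenceFrames.GetFrame(_anim, i).Duplicate() as AtlasTexture;
_updatedTexture.Atlas = _texture;
_updatedFrames.AddFrame(_anim, _updatedTexture);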

Java Graphics synchronisation issues drawing dynamic data

I have a program where I want to draw data that is constantly being updated (it is microphone line data, incidentally). The data is an 8000-element array of doubles; I don't really care about 'losing' data that is overwritten between invocations of the paint method.
In my naive implementation it became obvious that there are synchronisation issues, where the audio data is updated while the painting routine is under way.
I'm also aware I'm slightly out of date on Java and its Concurrency package, but my first response was to just put synchronised blocks around the shared-data code. Unsurprisingly this blocks the graphics thread sometimes, so I'm thinking there's probably a much better way of doing this.
Essentially I just don't have much experience with synchronisation and am screwing things up a bit somewhere. I wonder if someone with a better understanding of these matters might be able to suggest a more elegant solution that doesn't block the graphics thread?
My naive code:
Object lock = new Object();
double[] audio = new double[8000]; // array size is always exactly 8000

public void update( double[] audio ) {
    synchronized( lock ) {
        this.audio = audio; // and some brief processing
    }
    repaint();
}

public void paint( Graphics g ) {
    synchronized( lock ) {
        // draw the contents of this.audio
    }
}
Self-answer, unless someone more intelligent can offer something better: I just save a reference to the audio array at the beginning of the paint routine and draw from that; any updates to the audio buffer do their calculations in a separate array and then assign this.audio to the new array in a single step.
It appears to work. Although the paint routine does get an occasional flicker, it is nothing like it was before, when it was flashing very noticeably about 10% of the time due to synchronised blocking. The audio data does not update halfway through the drawing routine either. So... problem solved. Probably.
double[] audio = new double[8000]; // array size is always exactly 8000

public void update( double[] audio ) {
    // do any brief processing
    this.audio = audio; // the reference is re-assigned in one step
    repaint();
}

public void paint( Graphics g ) {
    double[] audioNow = this.audio; // save the reference
    // draw the contents of audioNow (not this.audio)
}

CCTexture2D leaks of memory

Can anybody answer this? I need to change a sprite's image, and I do it using my function
-(void) openKeyWithSprite:(id) sender withSpriteName:(NSString*)spriteName
Can this lead to memory leaks, or is it OK?
In init:
_spriteBonus = [CCSprite spriteWithFile:@"monstr_1_1.png"];
In a scheduled method:
-(void) openKeyWithSprite:(id)sender withSpriteName:(NSString*)spriteName
{
    CCTexture2D* tex = [[CCTextureCache sharedTextureCache] addImage:spriteName];
    [_spriteBonus setTexture:tex];
}
The code you've shown here should be fine and won't leak any memory. setTexture releases the old texture reference and retains the new one.
However, assuming you're not using ARC: [CCSprite spriteWithFile:@"monstr_1_1.png"]; is in the autorelease pool, so make sure you add it to a parent CCNode (before exiting whatever function you're creating it in) in order to keep it retained in memory.
If you want to create a frame animation, use the CCAnimate action. For example:
id yourAnimation = [CCAnimation animationWithFrames: arrayOfFrames];
id animateAction = [CCAnimate actionWithDuration: animationDuration
animation: yourAnimation
restoreOriginalFrame: YES];
[yourSprite runAction: animateAction];
If you need to repeat your animation, use a CCRepeat or CCRepeatForever action, for example:
id resultAction = [CCRepeatForever actionWithAction: animateAction];
[yourSprite runAction: resultAction];

Freeing a BSTR using ::SysFreeString(). More Platform Dependent?

I am writing a COM server which has plenty of interfaces and methods. Most of the methods have BSTRs as parameters and as local variables used for the return values. A snippet looks like:
Update 5:
The real code. This fetches a bunch of data from the DB, based on a specific condition, to populate an array of objects.
STDMETHODIMP CApplication::GetAllAddressByName(BSTR bstrParamName, VARIANT *vAdddresses)
{
    AFX_MANAGE_STATE(AfxGetStaticModuleState())

    // Check the database server connection
    COleSafeArray saAddress;
    HRESULT hr;

    // Prepare the SQL strings and query the DB
    long lRecCount = table.GetRecordCount();
    if (lRecCount > 0)
    {
        // Create a one-dimensional safe array for putting the details
        saAddress.CreateOneDim(VT_DISPATCH, lRecCount);
        IAddress *pIAddress = NULL;

        // Retrieve details
        for (long iRet = table.MoveFirst(), iCount = 0; !iRet; iRet = table.MoveNext(), iCount++)
        {
            CComObject<CAddress> *pAddress;
            hr = CComObject<CAddress>::CreateInstance(&pAddress);
            if (SUCCEEDED(hr))
            {
                BSTR bstrStreet = ::SysAllocString(table.m_pRecordData->Street);
                pAddress->put_StreetName(bstrStreet);
                BSTR bstrCity = ::SysAllocString(table.m_pRecordData->City);
                pAddress->put_CityName(bstrCity);
            }
            hr = pAddress->QueryInterface(IID_IAddress, (void**)&pIAddress);
            if (SUCCEEDED(hr))
            {
                saAddress.PutElement(&iCount, pIAddress);
            }
        }
        *vAdddresses = saAddress.Detach();
    }
    table.Close();
    return S_OK;
}

STDMETHODIMP CAddress::put_CityName(BSTR bstrCityName)
{
    AFX_MANAGE_STATE(AfxGetStaticModuleState())
    // m_sCityName is of CComBSTR type
    m_sCityName.Empty(); // free the old string
    m_sCityName = ::SysAllocString(bstrCityName); // create the memory for the new string
    return S_OK;
}
The problem lies in the memory-freeing part. The code works perfectly fine on Windows XP machines, but on Windows Server 2008 R2 and Windows 7 it crashes, pointing to ::SysFreeString() as the culprit. The MSDN documentation is not adequate for finding a solution.
Can anyone please help in finding the right solution?
Thanks a lot in advance :)
Update 1:
I have tried using CComBSTR, as suggested, in place of raw BSTR, initialized directly from CStrings, and I excluded the SysFreeString(). But to my dismay, on going out of scope the system calls SysFreeString(), which again causes the crash :(
Update 2:
With the same CComBSTR I tried allocating using SysAllocString(); the problem remains the same :(
Update 3:
I am tired of trying all the options, and am left with only one question in mind:
Is it necessary to free a BSTR through SysFreeString() if it was allocated using SysAllocString()/string.AllocSysString()?
Update 4:
I forgot to provide information about the crash. When I tried to debug, the COM server crashed with an error saying "Possible Heap Corruption". Please help me out here.. :(
// Now All Things are packed in to the Object
obj.Name = bstrName;
obj.Name2 = bstrname2;
I don't quite understand what you mean by saying that things are packed, since you're just copying pointers to the strings, and the moment you call SysFreeString, obj.Name and obj.Name2 will point to an invalid block of memory. Although this code is not safe, it looks as if the source of your problem is the class CFoo. You should show us more details of your code.
I suggest you use the CComBSTR class, which will take responsibility for releasing the memory.
UPDATE
#include <atlbase.h>
using namespace ATL;
...
{
    CComBSTR bstrname(_T("Some Name"));
    CComBSTR bstrname2(_T("Another Name"));
    // Here one may work with these variables if needed
    ...
    // Copy the local values to the Obj's member variables
    bstrname.CopyTo(&obj.Name);
    bstrname2.CopyTo(&obj.Name2);
}
UPDATE2
First of all, one should free bstrStreet and bstrCity with SysFreeString, or use CComBSTR instead, within this block:
if (SUCCEEDED(hr))
{
    BSTR bstrStreet = ::SysAllocString(table.m_pRecordData->Street);
    pAddress->put_StreetName(bstrStreet);
    BSTR bstrCity = ::SysAllocString(table.m_pRecordData->City);
    pAddress->put_CityName(bstrCity);
    // SysFreeString(bstrStreet)
    // SysFreeString(bstrCity)
}
Consider extending the loop's condition !iRet with iCount < lRecCount:
for(...; !iRet /* && (iCount < lRecCount) */; ...)
Also, here:
m_sCityName = ::SysAllocString(bstrCityName);
you allocate memory but never release it, since internally CComBSTR's operator=(LPCOLESTR) allocates new storage itself. One should rewrite it as follows:
m_sCityName = bstrCityName;
Everything else looks good to me.
UPDATE3
Well, heap corruption is often a consequence of writing values outside of an allocated memory block. Say you allocate an array of length 5 and put some value in the 6th position.
Finally, I have found the real reason for the heap corruption that happened in the code.
The put_StreetName/put_CityName methods of IAddress/CAddress are designed in the following way:
STDMETHODIMP CAddress::put_CityName(BSTR bstrCityName)
{
    AFX_MANAGE_STATE(AfxGetStaticModuleState())
    m_sCityName.Empty();
    TrimBSTR(bstrCityName);
    m_sCityName = ::SysAllocString(bstrCityName);
    return S_OK;
}

BSTR CAddress::TrimBSTR(BSTR bstrString)
{
    CString sTmpStr(bstrString);
    sTmpStr.TrimLeft();
    sTmpStr.TrimRight();
    SysReAllocString(&bstrString, sTmpStr); // The Devilish Line
}
That devilish line of code is the real culprit that caused the memory to go to hell.
What caused the trouble?
In this line of code, the BSTR passed as a parameter comes from another application, and the real memory lives in another realm. So the system tries to reallocate the string. Whether or not that succeeds, the same string is later cleared from memory in the original application/realm, thus causing the crash.
What is still unsolved?
Why did the same piece of code not crash a single time on Windows XP and older systems? :(
Thanks to all who took the time to answer and solve my problem :)

iOS 5 + GLKView: How to access pixel RGB data for colour-based vertex picking?

I've been converting my own personal OGLES 2.0 framework to take advantage of the functionality added by the new iOS 5 framework GLKit.
After pleasing results, I now wish to implement the colour-based picking mechanism described here. For this, you must access the back buffer to retrieve a touched pixel RGBA value, which is then used as a unique identifier for a vertex/primitive/display object. Of course, this requires temporary unique coloring of all vertices/primitives/display objects.
I have two questions, and I'd be very grateful for assistance with either:
1. I have access to a GLKViewController, GLKView, CAEAGLLayer (of the GLKView) and an EAGLContext. I also have access to all OGLES 2.0 buffer-related commands. How do I combine these to identify the color of a pixel in the EAGLContext I'm tapping on-screen?
2. Given that I'm using Vertex Buffer Objects to do my rendering, is there a neat way to override the colour provided to my vertex shader which firstly doesn't involve modifying buffered vertex (colour) attributes, and secondly doesn't involve the addition of an IF statement into the vertex shader?
I assume the answer to (2) is "no", but for reasons of performance and non-arduous code revamping I thought it wise to check with someone more experienced.
Any suggestions would be gratefully received. Thank you for your time
UPDATE
Well I now know how to read pixel data from the active frame buffer using glReadPixels. So I guess I just have to do the special "unique colours" render to the back buffer, briefly switch to it and read pixels, then switch back. This will inevitably create a visual flicker, but I guess it's the easiest way; certainly quicker (and more sensible) than creating a CGImageContextRef from a screen snapshot and analyzing that way.
Still, any tips as regards the back buffer would be much appreciated.
Well, I've worked out exactly how to do this as concisely as possible. Below I explain how to achieve this and list all the code required :)
In order to allow touch interaction to select a pixel, first add a UITapGestureRecognizer to your GLKViewController subclass (assuming you want tap-to-select-pixel), with the following target method inside that class. You must make your GLKViewController subclass a UIGestureRecognizerDelegate:
@interface GLViewController : GLKViewController <GLKViewDelegate, UIGestureRecognizerDelegate>
After instantiating your gesture recognizer, add it to the view property (which in GLKViewController is actually a GLKView):
// Inside GLKViewController subclass init/awakeFromNib:
[[self view] addGestureRecognizer:[self tapRecognizer]];
[[self tapRecognizer] setDelegate:self];
Set the target action for your gesture recognizer; you can do this when creating it using a particular init... however I created mine using Storyboard (aka "the new Interface Builder in Xcode 4.2") and wired it up that way.
Anyway, here's my target action for the tap gesture recognizer:
-(IBAction)onTapGesture:(UIGestureRecognizer*)recognizer {
    const CGPoint loc = [recognizer locationInView:[self view]];
    [self pickAtX:loc.x Y:loc.y];
}
The pick method called in there is one I've defined inside my GLKViewController subclass:
-(void)pickAtX:(GLuint)x Y:(GLuint)y {
    GLKView *glkView = (GLKView*)[self view];
    UIImage *snapshot = [glkView snapshot];
    [snapshot pickPixelAtX:x Y:y];
}
This takes advantage of a handy new method snapshot that Apple kindly included in GLKView to produce a UIImage from the underlying EAGLContext.
What's important to note is a comment in the snapshot API documentation, which states:
This method should be called whenever your application explicitly needs the contents of the view; never attempt to directly read the contents of the underlying framebuffer using OpenGL ES functions.
This gave me a clue as to why my earlier attempts to invoke glReadPixels in attempts to access pixel data generated an EXC_BAD_ACCESS, and the indicator that sent me down the right path instead.
You'll notice in my pickAtX:Y: method defined a moment ago I call a pickPixelAtX:Y: on the UIImage. This is a method I added to UIImage in a custom category:
@interface UIImage (NDBExtensions)
-(void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y;
@end
Here is the implementation; it's the final code listing required. The code came from this question and has been amended according to the answer received there:
@implementation UIImage (NDBExtensions)

- (void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {
    CGImageRef cgImage = [self CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    if ((x < width) && (y < height))
    {
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        const UInt8* data = CFDataGetBytePtr(bitmapData);
        size_t offset = ((width * y) + x) * 4;
        UInt8 b = data[offset+0];
        UInt8 g = data[offset+1];
        UInt8 r = data[offset+2];
        UInt8 a = data[offset+3];
        CFRelease(bitmapData);
        NSLog(@"R:%i G:%i B:%i A:%i", r, g, b, a);
    }
}

@end
I originally tried some related code found in an Apple API doc entitled "Getting the pixel data from a CGImage context", which required two method definitions instead of this one, but it required much more code and involved data of type void * for which I was unable to work out the correct interpretation.
That's it! Add this code to your project, then upon tapping a pixel it will output it in the form:
R:24 G:46 B:244 A:255
Of course, you should write some means of extracting those RGBA int values (which will be in the range 0 - 255) and using them however you want. One approach is to return a UIColor from the above method, instantiated like so:
UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];
