XNA Xbox frame drops when GC kicks in - garbage-collection

I'm developing an app (an XNA game) for the Xbox, which is a pretty simple app. The start page contains tiles with animated GIF images. Those GIFs are actually sets of PNG images, which get loaded once by every tile and put in an array. Then, using a defined delay, these images are played (using a counter which increases every time the delay passes).
This all works well, however, I noticed some small lag every x seconds in the movement of the GIF images. I then started to add some benchmarking stuff:
http://gyazo.com/f5fe0da3ff81bd45c0c52d963feb91d8
As you can see, the FPS is pretty low for such a simple program (this is in debug; when running the app from the Xbox itself, I get an average of 62 FPS).
Two important settings:
Graphics.SynchronizeWithVerticalRetrace = false;
IsFixedTimeStep = false;
Changing IsFixedTimeStep to true increases the lag. The settings tile has wheels which rotate, and you can see the wheels jump back a little every x seconds. The same goes for SynchronizeWithVerticalRetrace: enabling it also increases the lag.
I noticed a connection between the lag and the moments the garbage collector kicks in: every time it runs, there is a lag...
Don't mind the MAX HMU (heap memory usage), as it reflects the value at startup; the average is more realistic.
Here is another screenshot, from the performance monitor. I don't understand much of this tool (it's the first time I'm using it), but I hope it helps:
http://gyazo.com/f70a3d400657ac61e6e9f2caaaf17587

After a little research I found the culprit.
I have custom components that all derive from GameComponent, and which get added to the Components list of the main Game class.
This was one of two major problems: it caused everything to be updated, even components that didn't need an update. (The Draw method was the only one that kept the page state in mind, and only drew when needed.)
I fixed this by using different "screens" (or pages, as I called them), which are now the only components that derive from GameComponent.
Then I only update the page which is active, and the custom components on that page also get updated. Problem fixed.
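The "only update the active page" idea can be sketched roughly as follows (Python is used here for brevity; the original code is C#/XNA, and all names below are illustrative, not from the actual project):

```python
# Illustrative sketch: only the active page (and its children) gets
# updated each frame, instead of every registered component.

class Page:
    def __init__(self, name):
        self.name = name
        self.components = []     # child components of this page
        self.update_count = 0    # for illustration only

    def update(self, dt):
        # Called only while this page is the active one.
        self.update_count += 1
        for component in self.components:
            component.update(dt)

class Game:
    def __init__(self, pages):
        self.pages = pages
        self.active_page = pages[0]

    def update(self, dt):
        # Inactive pages are skipped entirely, so their
        # components cost nothing per frame.
        self.active_page.update(dt)
```

Switching screens then amounts to reassigning active_page; everything else stays registered but dormant.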
The second big problem is the following:
I made a class which helps me position things on the screen relatively, with percentages, parent containers, aligns & v-aligns, etc.
That class had properties for sizes & vectors, but instead of saving the calculated value in a backing field, I recalculated it every time the property was accessed. Calculating complex things like that involves references (to parent & child containers, for example), which was very hard on the CLR because it had a lot of work to do.
I have now rebuilt the whole positioning class into a fully optimized one, with different flags for recalculating only when necessary, and instead of drops to 20 FPS, I now get an average of 170+ FPS!
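The dirty-flag caching described above can be sketched like this (Python for brevity; the original is a C# class, and every name below is made up for illustration):

```python
# Cache the computed value in a backing field and recalculate only
# when a dependency actually changed, instead of on every access.

class Container:
    def __init__(self, parent=None, width_percent=100.0):
        self.parent = parent
        self._width_percent = width_percent
        self._cached_width = None   # backing field
        self._dirty = True          # "needs recalculation" flag
        self.recalc_count = 0       # for illustration only

    @property
    def width_percent(self):
        return self._width_percent

    @width_percent.setter
    def width_percent(self, value):
        self._width_percent = value
        self._dirty = True          # invalidate the cached value

    @property
    def width(self):
        # Walk the parent chain only when flagged dirty.
        if self._dirty:
            base = self.parent.width if self.parent else 1280.0
            self._cached_width = base * self._width_percent / 100.0
            self.recalc_count += 1
            self._dirty = False
        return self._cached_width
```

A full version would also mark children dirty when a parent changes; that bookkeeping is omitted here to keep the sketch short.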

Related

How to reduce GIF generation time

I have a web page with an IMG tag pointing to a link like my-server.com/animationID.gif.
When someone opens a web page, my server generates a new GIF animation which appears on the web page.
I'm using the gifencoder node package to generate dynamic 60-frame animations.
The animation updates every second, so I don't really see a good way to cache it...
It takes 1-3 seconds to generate the animation which is very slow.
A few years ago I used services like countdownmail and mailtimers, which generate 60-frame countdown timers. Somehow, they manage to generate them very fast, in less than 0.5-1 second.
After some debugging, it seems that the addFrame method takes most of the time (and it's called 60 times).
encoder.addFrame(ctx);
Is there a way to increase the generation speed or cache the animation?

Direct2D/DirectWrite: does the text layout created by IDWriteFactory::CreateTextLayout need to be released every time the text is updated?

I am using Direct2D to show some text (FPS, resolution, etc.) on a Direct3D surface. In my window class there is a method called CalculateFrameStats() which runs every loop iteration: it calculates the FPS and related information and uses IDWriteFactory::CreateTextLayout to create a new text layout with the latest FPS strings. The BeginDraw(), DrawTextLayout(), and EndDraw() calls happen in the 3DFrameDraw() function, and after that I never release the text layout pointer. On the next iteration, CalculateFrameStats() calls CreateTextLayout again with the newly updated strings, 3DFrameDraw() draws the text layout again, and so on, over and over. The weird thing is that when I run the program there seem to be no memory leaks at all; the memory usage stays low and constant.
But when I put IDWriteFactory::CreateTextLayout in the 3DFrameDraw() function instead, so that at the beginning of every frame I create a new text layout with the updated FPS string, do some 3D manipulations, and then call BeginDraw(), DrawTextLayout(), and EndDraw() before the D3D Present (drawing in the same area as before), the memory leaks: I can see the usage keep growing as time elapses. If I add textLayout->Release() after BeginDraw(), DrawTextLayout(), EndDraw(), the leak is gone.
I don't really understand why, in the first scenario, the text layout pointer is never released until the program closes and yet the memory never leaks. Does a text layout need to be released every time/frame its text string is updated?

Why do GBuffers need to be created for each frame in D3D12?

I have experience with D3D11 and want to learn D3D12. I am reading the official D3D12 multithread example and don't understand why the shadow map (generated in the first pass as a DSV, consumed in the second pass as SRV) is created for each frame (actually only 2 copies, as the FrameResource is reused every 2 frames).
The code that creates the shadow map resource is here, in the FrameResource class, instances of which are created here.
There is actually another resource that is created for each frame: the constant buffer. I kind of understand the constant buffer: because it is written by the CPU (D3D11 dynamic usage) and needs to remain unchanged until the GPU finishes using it, there need to be 2 copies. However, I don't understand why the shadow map needs the same treatment, because it is only modified by the GPU (D3D11 default usage), and there are fence commands to separate reading from writing of that texture anyway. As long as the GPU respects the fence, a single texture should be enough for the GPU to work correctly. Where am I wrong?
Thanks in advance.
EDIT
According to the comment below, the "fence" I mentioned above should more accurately be called "resource barrier".
The key issue is that you don't want to stall the GPU for best performance. Double-buffering is a minimal requirement, but typically triple-buffering is better for smoothing out frame-to-frame rendering spikes, etc.
FWIW, the default behavior of DXGI Present is to stall only after you have submitted THREE frames of work, not two.
Of course, there's a trade-off between triple-buffering and input responsiveness, but if you are maintaining 60 Hz or better, it's likely not noticeable.
With all that said, you typically don't need to double-buffer depth/stencil buffers for rendering, although if you wanted the initial write of the depth buffer to overlap with reads of the previous frame's depth buffer, then you would want distinct buffers per frame for performance and correctness.
The 'writes' are all complete before the 'reads' in DX12 because of the injection of the 'Resource Barrier' into the command-list:
void FrameResource::SwapBarriers()
{
    // Transition the shadow map from writeable to readable.
    m_commandLists[CommandListMid]->ResourceBarrier(
        1,
        &CD3DX12_RESOURCE_BARRIER::Transition(
            m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_DEPTH_WRITE,
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE));
}

void FrameResource::Finish()
{
    // Transition the shadow map back from readable to writeable.
    m_commandLists[CommandListPost]->ResourceBarrier(
        1,
        &CD3DX12_RESOURCE_BARRIER::Transition(
            m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE,
            D3D12_RESOURCE_STATE_DEPTH_WRITE));
}
Note that this sample is a port/rewrite of the older legacy DirectX SDK sample MultithreadedRendering11, so it may be just an artifact of convenience to have two shadow buffers instead of just one.

Folium + Bokeh: Poor performance and massive memory usage

I'm using Folium and Bokeh together in a Jupyter notebook. I'm looping through a dataframe, and for each row inserting a marker on the Folium map, pulling some data from a separate dataframe, creating a Bokeh chart out of that data, and then embedding the Bokeh chart in the Folium map popup in an IFrame. Code is as follows:
map = folium.Map(location=[36.710021, 35.086146], zoom_start=6)
for i in range(0, len(duty_station_totals)):
    popup_table = station_dept_totals.loc[station_dept_totals['Duty Station'] == duty_station_totals.iloc[i, 0]]
    chart = Bar(popup_table,
                label=CatAttr(columns=['Department / Program'], sort=False),
                values='dept_totals',
                title=duty_station_totals.iloc[i, 0] + ' Staff',
                xlabel='Department / Program', ylabel='Staff',
                plot_width=350, plot_height=350)
    hover = HoverTool(point_policy='follow_mouse')
    hover.tooltips = [('Staff', '@height'),
                      ('Department / Program', '@{Department / Program}'),
                      ('Duty Station', duty_station_totals.iloc[i, 0])]
    chart.add_tools(hover)
    html = file_html(chart, INLINE, "my plot")
    iframe = folium.element.IFrame(html=html, width=400, height=400)
    popup = folium.Popup(iframe, max_width=400)
    marker = folium.CircleMarker(duty_station_totals.iloc[i, 2],
                                 radius=duty_station_totals.iloc[i, 1] * 150,
                                 color=duty_station_totals.iloc[i, 3],
                                 fill_color=duty_station_totals.iloc[i, 3])
    marker.add_to(map)
    folium.Marker(duty_station_totals.iloc[i, 2],
                  icon=folium.Icon(color='black', icon_color=duty_station_totals.iloc[i, 3]),
                  popup=popup).add_to(map)
map
This loop runs extremely slowly, and adds approx. 200 MB to the memory usage of the associated Python 3.5 process per run of the loop! In fact, after running the loop a couple of times my entire MacBook slows to a crawl; even the mouse lags. The associated map also lags heavily when scrolling and zooming, and the popups are slow to open. In case it isn't obvious, I'm pretty new to the Python analytics and web visualization world, so maybe there is something clearly very inefficient here.
I'm wondering why this is, and whether there is a better way of having Bokeh charts appear in the map popups. From some basic experiments I've done, the issue doesn't seem to be the calls to Bar; the memory usage really skyrockets when I include the calls to file_html, and gets worse as the calls to folium.element.IFrame are added. It seems like there is some sort of memory leak, given the increasing memory usage on re-running the same code.
If anyone has ideas as to how to achieve the same effect (Bokeh charts opening when clicking a Folium marker) in a more efficient manner I would really appreciate it!
Update following some experimentation
I've run through the loop step by step and observed the changes in memory usage as more steps are added in, to try to isolate which piece of code drives this issue. On the Bokeh side, the biggest culprit seems to be the call to file_html(): running the loop through this step adds about 5 MB of memory usage to the associated Python 3.5 process per run (the loop creates 18 charts), even when including bokeh.io.curdoc().clear().
The bigger issue, however, seems to be driven by Folium. Running the whole loop, including the creation of the Folium IFrames with the Bokeh-generated HTML and the map markers linked to the IFrames, adds 25-30 MB to the memory usage of the Python process per run.
So I guess this is turning into more of a Folium question. Why is this structure so memory intensive, and is there a better way? By the way, saving the resulting Folium map with map.save('map.html') creates a huge, 22 MB HTML file.
There are lots of different use-cases, and some of them come with unavoidable trade-offs. In order to make some other use-cases very simple and convenient, Bokeh has an implicit "current document" and keeps accumulating things there. For the particular use-case of generating a bunch of plots sequentially in a loop, you will want to call bokeh.io.reset_output() in between each, to prevent this accumulation.

cairo surface flushes at only 1 fps

I have constructed a cairo (v1.12.16) image surface with:
surface = cairo_image_surface_create (CAIRO_FORMAT_ARGB32, size.width, size.height);
and, aiming for 60 fps, cleared it, drew stuff, and flushed it with:
cairo_surface_flush(surface);
then, got the resulting canvas with:
unsigned char * data = cairo_image_surface_get_data(surface);
but the resulting data variable was only modified (approximately) once every second, not 60 times a second. I got the same (unexpected) result even when using cairo's quartz backend... Are there any flush/refresh rate settings in cairo that I am not (yet) aware of?
Edit: I am just trying to draw some filled (random and/or calculated) rectangles; I tested 100 to 10K rects per frame. All related code runs in the same (display?) thread. I am not caching the 'data' variable. I even modified one corner of it to flicker, and I could see the flicker at 60 fps (for 100 rects) and 2-3 fps (for 10K rects), meaning the 'data' variable returned is not refreshed!?
Edit 2: The culprit turned out to be the time() function; when used in srand(time(NULL)) it was producing the same seed (and thus the same "random" values) within the same second. I used srand(std::clock()) instead. Thanks for the quick response/reply (and it still answers my question!!)..
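The pitfall from that Edit 2 can be shown in a few lines (Python used here for brevity; the original code is C/C++ with srand/time): seeding the RNG with a whole-second timestamp reseeds it identically for every frame rendered within the same second, so all of those frames draw the same "random" rectangles.

```python
import random

def rects_bad_seed(now):
    # Mirrors srand(time(NULL)): the seed only changes once per
    # second, so every frame in that second draws identical values.
    random.seed(int(now))
    return [random.random() for _ in range(3)]

def rects_good_seed(clock):
    # Mirrors srand(std::clock()): a fine-grained, ever-changing
    # seed yields different values on every frame.
    random.seed(clock)
    return [random.random() for _ in range(3)]
```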
No, there are no such flush/refresh rate settings. Cairo draws everything you tell it to and then just returns control.
I have two ideas:
Either cairo is drawing fast enough and something else is slowing things down (e.g. you're copying the result of the drawing somewhere). You should measure the time that elapses between when you begin drawing and your call to cairo_surface_flush().
Or you are drawing something really, really complex and cairo really does need a second to render it (however, I have no idea how one could accidentally cause such a complex rendering).
