I am having an issue with a large memory leak in Perl/Tk running on Unix. I am unable to post most of the code due to certain restrictions, but I have been able to identify what is causing the leak and create a small program that shows the same symptoms. Basically I am running a GUI where a frame is destroyed and repopulated regularly. This causes a memory leak that keeps growing. I thought the destroy command would get rid of all traces of the widget in memory, but that does not seem to be the case, and I am not too familiar with garbage collection in Perl. The program will be running for days or weeks at a time, so a memory leak is not acceptable. Does anyone have any ideas on how to stop it? I am restricted to using Perl, so unfortunately I cannot simply port the code to Python or something else. Thanks in advance for any help.
#!/opt/project/COTS/bin/perl
use strict;
use warnings;
use Tk;
$Tk::mw = MainWindow->new;
$Tk::mw->repeat(10, \&build);
my $status;
&build;
sub build {
    $status->destroy() if ($status);
    $status = $Tk::mw->Frame->pack(-side => 'top');
    my $left_frame = $status->Frame(
        -relief      => 'sunken',
        -borderwidth => 1,
    )->pack(-side => 'left', -anchor => 'nw', -padx => 4, -pady => 5);
    my $right_frame = $status->Frame(
        -relief      => 'sunken',
        -borderwidth => 1,
    )->pack(-side => 'left', -anchor => 'nw', -padx => 4, -pady => 5);
}
MainLoop;
Yes, this is a known issue. There are some leftovers in the Perl/Tk system when a widget is destroyed (it is not really Perl's fault, but the Perl/Tk layer's).
The best approach is to reuse widgets. Most of the time this is possible, since you can re-configure all of their options. If some subwidgets need to vanish and reappear, you can use packForget() to temporarily remove them from the display and pack() to make them visible again.
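For example, here is a minimal sketch of the reuse approach applied to the program above (the refresh callback and variable names are mine; the frames are created once, and the repeated callback only re-packs or re-configures them instead of destroying anything):

#!/usr/bin/perl
use strict;
use warnings;
use Tk;

my $mw = MainWindow->new;

# Create the widgets once, up front.
my $status      = $mw->Frame->pack(-side => 'top');
my $left_frame  = $status->Frame(-relief => 'sunken', -borderwidth => 1);
my $right_frame = $status->Frame(-relief => 'sunken', -borderwidth => 1);

$mw->repeat(10, \&refresh);

sub refresh {
    # Instead of destroying $status, hide its children and show them again.
    $_->packForget for $status->children;
    $_->pack(-side => 'left', -anchor => 'nw', -padx => 4, -pady => 5)
        for ($left_frame, $right_frame);
    # Re-configure options here as needed, e.g.:
    # $left_frame->configure(-borderwidth => 1);
}

MainLoop;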
I have experience with D3D11 and want to learn D3D12. I am reading the official D3D12 multithread example and don't understand why the shadow map (generated in the first pass as a DSV, consumed in the second pass as SRV) is created for each frame (actually only 2 copies, as the FrameResource is reused every 2 frames).
The code that creates the shadow map resource is here, in the FrameResource class, instances of which are created here.
There is actually another resource that is created per frame: the constant buffer. I kind of understand the constant buffer: it is written by the CPU (D3D11 dynamic usage) and needs to remain unchanged until the GPU finishes using it, so there have to be two copies. However, I don't understand why the shadow map needs the same treatment, because it is only modified by the GPU (D3D11 default usage), and there are fence commands separating reads from writes to that texture anyway. As long as the GPU respects the fence, a single texture should be enough for it to work correctly. Where am I wrong?
Thanks in advance.
EDIT
According to the comment below, the "fence" I mentioned above should more accurately be called "resource barrier".
The key issue is that you don't want to stall the GPU for best performance. Double-buffering is a minimal requirement, but typically triple-buffering is better for smoothing out frame-to-frame rendering spikes, etc.
FWIW, the default behavior of DXGI Present is to stall only after you have submitted THREE frames of work, not two.
Of course, there's a trade-off between triple-buffering and input responsiveness, but if you are maintaining 60 Hz or better then it's likely not noticeable.
With all that said, you typically don't need to double-buffer depth/stencil buffers for rendering, although if you wanted the initial write of the depth buffer to overlap with the read of the previous frame's depth-buffer passes, then you would want distinct buffers per frame for performance and correctness.
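To illustrate the per-frame-resource pattern in isolation (this is not code from the sample; FrameContext, gFrames, kFrameCount and BeginFrame are made-up names, and error handling is omitted), a ring of frame contexts guarded by a fence might look roughly like this:

#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

static const UINT kFrameCount = 3; // triple buffering

struct FrameContext
{
    Microsoft::WRL::ComPtr<ID3D12CommandAllocator> allocator;      // per-frame allocator
    Microsoft::WRL::ComPtr<ID3D12Resource>         constantBuffer; // per-frame CPU-written data
    UINT64                                         fenceValue = 0; // fence value signaled when this slot's work was submitted
};

FrameContext gFrames[kFrameCount];

// Called at the start of each frame: block only if the GPU has not yet finished
// the work that last used this slot; after that it is safe to reuse its resources.
FrameContext& BeginFrame(ID3D12Fence* fence, HANDLE fenceEvent, UINT64 frameIndex)
{
    FrameContext& ctx = gFrames[frameIndex % kFrameCount];
    if (fence->GetCompletedValue() < ctx.fenceValue)
    {
        fence->SetEventOnCompletion(ctx.fenceValue, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE);
    }
    ctx.allocator->Reset(); // safe now: the GPU is done with this slot
    return ctx;
}

With two slots the CPU stalls as soon as it gets one frame ahead; with three it can absorb an occasional slow frame before it has to wait, which is the smoothing-out mentioned above.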
The 'writes' are all complete before the 'reads' in DX12 because of the injection of the 'Resource Barrier' into the command-list:
void FrameResource::SwapBarriers()
{
    // Transition the shadow map from writeable to readable.
    m_commandLists[CommandListMid]->ResourceBarrier(1,
        &CD3DX12_RESOURCE_BARRIER::Transition(m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_DEPTH_WRITE,
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE));
}

void FrameResource::Finish()
{
    m_commandLists[CommandListPost]->ResourceBarrier(1,
        &CD3DX12_RESOURCE_BARRIER::Transition(m_shadowTexture.Get(),
            D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE,
            D3D12_RESOURCE_STATE_DEPTH_WRITE));
}
Note that this sample is a port/rewrite of the older legacy DirectX SDK sample MultithreadedRendering11, so it may be just an artifact of convenience to have two shadow buffers instead of just one.
I get this every time I try to create an account to ask this on Stack Overflow:
Oops! Something Bad Happened!
We apologize for any inconvenience, but an unexpected error occurred while you were browsing our site.
It’s not you, it’s us. This is our fault.
That's the reason I post it here. I literally cannot ask this on Stack Overflow, even after spending hours of my day (on and off) repeating my attempts and solving a million reCAPTCHA puzzles. Can you maybe fix this error soon?
With no meaningful or complete examples, and basically no documentation whatsoever, I've been trying to use the "shmop" part of PHP for many years. Now I must find a way to send data between two different CLI PHP scripts running on the same machine without abusing the database for it, so I'm trying to use shmop, but it doesn't work at all:
$shmopid = shmop_open(1, 'w', 0644, 99999);
// I have no idea what the "key" is supposed to be. The manual says: "System's id for the
// shared memory block. Can be passed as a decimal or hex.", so I've given it a 1 and also
// tried 123. It gave an error when I set the size to 64, so I increased it to 99999.
// That's when the error changed to the one I now get below.
shmop_write($shmopid, 'meow 123', 0); // Write "meow 123" to the shared variable.
while (1)
{
$shared_string = shmop_read($shmopid, 0, 8); // Read the "meow 123", even though it's the same script right now (since this is an example and minimal test).
var_dump($shared_string);
sleep(1);
}
I get the error for the first line:
shmop_open(): unable to attach or create shared memory segment 'No error':
What does that mean? What am I doing wrong? Why is the manual so insanely cryptic for this? Why isn't this just a built-in "superarray" that can be accessed across the scripts?
About CLI:
It cannot work in standalone CLI processes, as an answer here says:
https://stackoverflow.com/a/34533749
The master process is the one to hold the shared memory block, so you will have to use php-fpm or mod_php or some other web/service-running version, and maybe even start/request/stop it all from a CLI php script.
About shmop usage itself:
Use "c" mode in shmop_open() for creating the segment before it can be used with "a" or "w".
I stumbled on this error in a different scenario, where shared memory is completely optional and only speeds up some repeated operations, so I wanted to try reading first without knowing the memory size and only allocate from the actual data when needed. In my case I had to call it as @shmop_open() to suppress this error output.
About shmop on Windows:
PHP 7 crashed the Apache worker process (causing a restart with status 3221225477) when trying to re-allocate a segment with the same predefined (arbitrary number) key but a different size, even after shmop_delete(). As a workaround, I took the filemtime() of the source file containing the data to be stored in memory and used that timestamp as the key in shmop_open(). It still was not flawless IIRC, and I don't know whether it would cause memory leaks, but it was enough to test my code, which would mainly run on Linux anyway.
Finally, as of PHP version 8.0.7, shmop seems to work fine with Apache 2.4.46 and mod_php in Windows 10.
I am trying to get rid of a memory leak, but my understanding in this area is pretty low and I have nobody to ask for help except you guys. My script is killing the server's RAM and I can't figure out what is wrong with my approach.
I have this function:
function getPages(params) {
    gmail.users.messages.list(params, (err, resp) => {
        for (var message of resp.messages) {
            message['ownerEmail'] = currentUser;
            getMessage(message); // this does something with it later
            var message = null;
        }
        if (resp.nextPageToken) {
            params.pageToken = resp.nextPageToken;
            getPages(params);
        } else {
            // resolve end here...
        }
    }); // gmail.users.messages.list
} // getPages
getPages(params);
Basically it gets messages from the API and should do something with them afterwards. It calls itself for as long as there is more data to fetch (as long as nextPageToken exists in the response).
Now I ran this command:
$ free -lm
total used free shared buff/cache available
Mem: 11935 1808 7643 401 2483 9368
Low: 11935 4291 7643
High: 0 0 0
Swap: 6062 0 6062
While the script is running, buff/cache is constantly increasing.
What is the buff/cache thing actually and how is it related to my Node script?
How do I manage what is buffered/cached and how do I kill/clear such stuff?
How do I optimize function above to forget everything that is already processed?
How do I make sure that script takes absolutely zero resources once it is finished? (I even tried process.exit at the end of the script)
How do I debug and monitor RAM usage from my Node.js script?
I don't think there is a memory leak. I think you are in an infinite loop with the recursion: gmail.users.messages.list returns the response with resp.nextPageToken present (I suppose), and then you call getPages(params) again. Can you put a console.log just before the getPages(params) call? Something like this:
if (resp.nextPageToken) {
params.pageToken = resp.nextPageToken;
console.log('token', params.pageToken)
getPages(params);
}
and check how many times this gets printed and whether you ever get out of the recursion. Also, why do you set message to null inside the iteration? That re-declares the loop variable.
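For reference, here is a minimal sketch of the loop body without that re-declaration (using a block-scoped binding; the rest of the function is unchanged):

for (const message of resp.messages) {
    message['ownerEmail'] = currentUser;
    getMessage(message); // this does something with it later
}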
You can use N|Solid (it's free for development); you launch your app inside its wrapper. It's quite easy to use and it allows you to take a full profile of where the leak occurs.
You can also do it manually with the built-in debugger, checking memory consumption at each step.
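To answer the monitoring question specifically: for a quick manual check without extra tooling, you can log Node's own view of its memory from inside the script. A minimal sketch using the built-in process.memoryUsage():

// Print heap and RSS figures every 5 seconds while the script runs.
setInterval(() => {
    const { rss, heapTotal, heapUsed, external } = process.memoryUsage();
    const mb = (n) => (n / 1024 / 1024).toFixed(1) + ' MB';
    console.log(`rss=${mb(rss)} heapTotal=${mb(heapTotal)} heapUsed=${mb(heapUsed)} external=${mb(external)}`);
}, 5000);

If heapUsed keeps climbing while the workload stays the same, something is being retained; if only the OS-level buff/cache number grows, that is just the kernel's disk cache (see the answer below).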
Just to answer one of the questions within the post:
How do I make sure that script takes absolutely zero resources once it is finished? (I even tried process.exit at the end of the script)
There has been a misunderstanding:
http://www.linuxatemyram.com/
Don't Panic! Your ram is fine!
What's going on? Linux is borrowing unused memory for disk caching.
This makes it looks like you are low on memory, but you are not!
Everything is fine!
I'm using gloss to create an RTS game in Haskell, but I've noticed that even a very simple program will occupy more and more memory as it runs. The following program, for example, gradually increases its memory use (by roughly 0.025 MB per second).
module Main (main) where

import Graphics.Gloss
import Graphics.Gloss.Interface.IO.Game

main =
    playIO (InWindow "glossmem" (500, 500) (0,0)) white 10 0
        (\world -> return (translate (-250) 0 (text $ show world)))
        (\event -> (\world -> return world))
        (\timePassed -> (\world -> return $ world + timePassed))
I've tried limiting the heap size at runtime, but that just causes the program to crash when it hits the limit. I'm concerned this behaviour will become a performance issue once I have a more complex world. Is there a way to use gloss so that this won't be an issue, or am I using the wrong tool for the job?
Thanks, I fixed this in gloss-1.7.7.1. It was a typical laziness-induced space leak in the code that manages the frame timing for animations. Your example program now runs in constant space.
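For anyone stuck on an older gloss version, or with a similar leak in their own world-stepping code, a common workaround is to force the new world before handing it back, so thunks don't pile up between frames. A minimal sketch against the example above (only the step function changes; $! forces the value to weak head normal form, which is enough for a plain numeric world):

module Main (main) where

import Graphics.Gloss
import Graphics.Gloss.Interface.IO.Game

main =
    playIO (InWindow "glossmem" (500, 500) (0,0)) white 10 0
        (\world -> return (translate (-250) 0 (text $ show world)))
        (\event -> (\world -> return world))
        (\timePassed -> (\world -> return $! world + timePassed))

For a record-like world you would typically reach for bang patterns or deepseq instead, since weak head normal form only forces the outermost constructor.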
I'm finding that Cg appears to have a memory leak. I submitted a report via nvidia.com, but if you try this here:
If you remove the line that says
cgD3D11SetTextureParameter( g.theTexture, g.sharedTex ) ;
The leak stops.
Does Cg 3.0 really leak?
Using ATI Radeon 5850 GPU / Windows 7 64-bit.
Yes, it leaks. Internally it creates a ShaderResourceView on every call and never releases it. I think the API is ill-designed: they should have taken a ShaderResourceView* as the parameter to this function instead of just a Resource*.
I posted about this on the NVIDIA forums about six months ago and never got a response.
Is your report posted publicly? Or some kind of private support ticket?
Yes, Cg 3.0 leaks every time you call cgD3D11SetTextureParameter(), causing your application's memory usage to climb. Unfortunately this makes Cg 3.0 with D3D11 completely unusable. One symptom is that, after your application has been running for a while, it stops rendering and the screen just goes black. I wasted a lot of time trying to determine the cause of this before discovering the Cg bug.
If anybody is wondering why this isn't apparent in the Cg D3D11 demos, it's because the few that actually use textures are so simple that they can get away with calling cgD3D11SetTextureParameter() only once at the start.
This same bug remains with Cg Toolkit 3.1 (April 2012).
jmp [UPDATE] ;; skip obsolete text segment
Could it be that Cg is being destroyed after D3D, so it doesn't release its reference in time, or vice versa? For example, the function acquires the texture but doesn't release it before D3D shuts down; when you set a texture on a shader, the texture is held until the shader resources are released somehow. You are destroying the D3D context here:
SAFE_RELEASE( g.d3d );
SAFE_RELEASE( g.gpu );
Later on, you free the shaders in CleanupCg():
cgDestroyProgram( g.v_vncShader );
checkForCgError( "destroying vertex program" );
cgDestroyProgram( g.px_vncShader );
checkForCgError( "destroying fragment program" );
Try changing the order of the calls so that you first release all resources from both Cg and D3D. Also, cgD3D11SetDevice( g.cgContext, NULL ); should be called before releasing the D3D context, just in case.
UPDATE:
This should be different inside WinMain():
initD3D11() ; // << FIRST you init D3D
initCg() ; // << SECOND you init CG with the D3D pointers
initD2D1() ; //
initVBs() ;
// Main message loop
while( WM_QUIT != msg.message ){ /* loop code */ }
CleanupDevice(); //// << FIRST you release all of D3D, while Cg is still referencing it (why?).
CleanupCg();     //// << SECOND: if anything in the Cg runtime depends on the D3D context you just destroyed, it will crash, leak, or do whatever it wants.
so you should swap them to ensure Cg frees any D3D pointers it holds:
CleanupCg(); //// << FIRST release Cg to ensure it's not referencing D3D anymore.
CleanupDevice(); //// << SECOND D3D isn't either referencing or being referenced by Cg, so just release it all
You could also provide the debugger output and other info, as I asked below, because you're basically saying "Cg seems to be broken, here is the whole code, look at line ###, is it broken?" There are more than a thousand lines (1012) of C, C++ and shader code in your file, and you provide almost no information, yet you readily point to a Cg bug (based on... what?). And if you're so sure the code is fine, why would anyone look at it? It isn't fine, by the way. Not that I dislike it, but it has little things such as the call ordering, silly mistakes that can make debugging a real hell, and that one is a clear bug. If I found a bug just by looking at WinMain, well, there is a long way up from there to the render call and the Cg interaction, isn't there? I can't run the app on WinXP, but these errors are in the most predictable places :)
So... when your code is clean of any bug... ohh! look! what I've just found..
~VertexBuffer()
{
    SAFE_RELEASE( vb );
    SAFE_RELEASE( layout );
}
It turns out that in the VertexBuffer constructor you call iD3D->GetImmediateContext( &gpu ); and store the pointer in a private member, so... shouldn't you add:
SAFE_RELEASE( gpu ); // ? there are 3 VertexBuffer instances, so that's another memory leak.
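In other words, the destructor would become something like this (a sketch, assuming gpu is the ID3D11DeviceContext* obtained from GetImmediateContext()):

~VertexBuffer()
{
    SAFE_RELEASE( vb );
    SAFE_RELEASE( layout );
    SAFE_RELEASE( gpu ); // release the immediate-context reference taken in the constructor
}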
OK, so there are some things in your code that cause memory leaks and that you should fix, and I found them after just a quick look, so you didn't really try. On the other hand, your code is clear and full of explanations, and I need to learn some DX11, so actually I should thank you for it. The downvote was somewhat rude though :P, especially because I'm probably right, and it will make other people avoid reading your code as soon as the page displays.