I have some conceptual questions about Chromium. I would appreciate it if you could help me.
Garbage collection outside of V8 & Blink?
I know that Blink has the 'Oilpan' garbage collector and V8 has another GC mechanism.
(Maybe they'll be integrated into a 'Unified GC'.)
But what about the browser process, the renderer's compositor thread, the GPU process, etc.?
Is there any GC mechanism for them?
Unified GC: https://www.youtube.com/watch?v=9CukfHGuadc&=&index=26&=&list=PL9ioqAuyl6UJ2KrDYYQwdHfmi28PeLQJS&=&t=0s
V8 Orinoco GC project: https://v8.dev/blog/trash-talk
I know that the compositing(?) part of the browser process is being moved to the 'Viz' service.
But it seems to be an experimental feature. So the question is:
Which thread does the DisplayCompositor (which aggregates CompositorFrames) live in now?
The I/O thread of the browser process?
Sincerely,
Thanks to danakj@chromium.org for the answer below:
Outside of Blink/V8 renderer code we use explicit malloc/free. I don't know of any GC usage in the browser or GPU process, no.
As the text in about:flags says, the display compositor is in the gpu process (on a compositor thread). It is not in the browser process.
This has been enabled on most platforms now, so I think experimental is not the correct qualifier at this point. :)
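For illustration only (this is not actual Chromium code; FrameSink and CompositorFrame are just stand-in names here), "explicit malloc/free" in modern C++ browser/GPU-process code usually takes the form of scoped ownership, so object lifetime is deterministic and no collector is involved:

```cpp
#include <memory>
#include <vector>

struct CompositorFrame {
  std::vector<int> quads;  // stand-in for real frame data
};

class FrameSink {
 public:
  void SubmitFrame(std::unique_ptr<CompositorFrame> frame) {
    // Ownership transfers into the sink; the previous frame is destroyed
    // deterministically right here, with no garbage collector involved.
    last_frame_ = std::move(frame);
  }

 private:
  std::unique_ptr<CompositorFrame> last_frame_;
};
```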
Related
I am reading CUDA By Example and I found that when they introduced events, they called cudaEventDestroy for each event they created.
However, I noticed that some later examples neglect this cleanup call. Are there any undesirable side effects of forgetting to destroy created events and streams (e.g., something like a memory leak when you forget to free allocated memory)?
Any resources the app is still holding at the time it exits will be automatically freed by the OS / drivers. So, if the app creates only a limited number of events, it is not strictly necessary to free them manually. Still, deliberately letting the app exit without freeing all resources is bad practice, because it becomes hard to distinguish genuine leaks from "on purpose" leaks.
You have identified bugs in the book's sample code.
CUDA events are lightweight, but a resource leak is a resource leak. Over time, if you leak enough of them, you won't be able to create them anymore.
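For reference, a minimal host-side C++ sketch of the pattern the book's later examples skip: every cudaEventCreate is paired with a cudaEventDestroy. The timed work here is just a placeholder cudaMemset standing in for a real kernel launch.

```cpp
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    // Some device work to time; a memset stands in for a real kernel.
    void* buf = nullptr;
    cudaMalloc(&buf, 1 << 20);

    cudaEventRecord(start, 0);
    cudaMemset(buf, 0, 1 << 20);
    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("elapsed: %.3f ms\n", ms);

    // The cleanup the later examples omit: destroy what you create,
    // just as you would free allocated memory.
    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(buf);
    return 0;
}
```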
I've written an app that handles videos. As we know, video processing takes a huge amount of memory when dealing with HD resolution. My app keeps crashing, but I am 100% sure there is no memory leak in my code; Instruments shows no leaks.
At startup I bring up one OpenGL ES view and the video engine. For a very short time memory consumption is high, but it falls back to a normal level once initialization is done. I always get memory warnings during this period. Normally this is no problem, but if I have a lot of apps running in suspended mode, the app seems to crash. Looking at the crash log and using the debugger shows that I am simply running out of memory.
My customers are flooding my support mail with "app is crashing" messages. I do know that they have too many apps running in the background, so there is no memory left. But telling the customer to close background tasks before running the app feels like bad practice.
According to this post this is a common problem.
My question is: is it possible to tell the OS that the app needs a lot of memory, so that the OS terminates some suspended apps? This memory issue drives me crazy, because it is not a bug I can fix.
No. It is not possible to affect anything outside of your sandbox without API calls. None exist for affecting other processes in the public API.
Have you tried to minimize your memory usage? In my experience, once a memory warning is thrown, an app is more likely to have problems after it goes to the background, even when memory usage drops.
If you are using OpenGL ES and textures, and you haven't already, compress your textures. What is the specific cause of your memory allocation spike?
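As a rough sketch of the texture-compression suggestion (assuming OpenGL ES 2.0 on iOS and PVRTC data prepared offline; the function name is made up): a 1024x1024 RGBA8888 texture occupies 4 MB in memory, while the same texture as PVRTC 4 bpp is 512 KB.

```cpp
#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

// Upload an already-compressed PVRTC texture instead of raw RGBA data.
GLuint uploadCompressedTexture(const void* pvrtcData, GLsizei dataSize,
                               GLsizei width, GLsizei height) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // PVRTC 4 bits-per-pixel RGBA, the hardware-compressed format on iOS GPUs.
    glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                           GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG,
                           width, height, 0, dataSize, pvrtcData);
    return tex;
}
```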
I'm debugging a multi-threaded Delphi app.
We are having trouble: after connecting to the server, the client app uses 100% of the CPU.
Is there a way for me to debug this and find out which thread is doing that?
Process Explorer will give you usage details down to the thread level for any process.
Run your app
Run Process Explorer (after downloading it ;-)
Double click on your executable in the process list
Select the Threads tab and there you will see:
The Thread ID
CPU Usage
Cycles Delta
And the start address
The TID ought to be enough to nail down your CPU-hogging thread.
As Paul Sasik suggests, Process Explorer is probably what you want. If your debugging strategy involves monitoring code inside your application itself, use GetThreadTimes.
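If you go the GetThreadTimes route, a minimal Win32 sketch (written in C++ here; the same API is callable from Delphi, e.g. with handles from TThread.Handle) looks roughly like this:

```cpp
#include <windows.h>
#include <cstdio>

// Convert a FILETIME to 100-nanosecond ticks.
static ULONGLONG ToTicks(const FILETIME& ft) {
    ULARGE_INTEGER v;
    v.LowPart = ft.dwLowDateTime;
    v.HighPart = ft.dwHighDateTime;
    return v.QuadPart;
}

// Accumulated CPU time (kernel + user) for a thread, in 100 ns units.
ULONGLONG ThreadCpuTime100ns(HANDLE thread) {
    FILETIME creation, exit, kernel, user;
    if (!GetThreadTimes(thread, &creation, &exit, &kernel, &user))
        return 0;
    return ToTicks(kernel) + ToTicks(user);
}

int main() {
    // Example: sample the current thread; in a real watchdog you would
    // sample each of your worker threads periodically and flag whichever
    // one's CPU time keeps climbing.
    ULONGLONG before = ThreadCpuTime100ns(GetCurrentThread());
    for (volatile int i = 0; i < 50000000; ++i) {}  // burn some CPU
    ULONGLONG after = ThreadCpuTime100ns(GetCurrentThread());
    printf("CPU used: %.1f ms\n", (after - before) / 10000.0);
    return 0;
}
```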
I want to provide a way for users to upload plugins (assemblies) to a site for scripting purposes. Through Mono.Cecil I can analyse those assemblies and limit access to a predefined list of functions, but I also need to limit memory usage and execution time, and kill the thread if it overdraws those resources.
I think I can monitor the memory usage via the profiler API, but as far as I know there are no tools to abort a thread with a guarantee. Is there any way to abort a thread with a guarantee? Maybe I should run the code using embedded Mono and control the thread's execution from the native part of the application; is that possible?
You could use Thread.Abort() as long as you don't allow the plugin code to ResetAbort().
Thread-level control is not practical IMHO (has anyone done that in the past?). Typically you should consider process-level control of memory usage, or control at the application domain level.
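To illustrate the process-level idea (a POSIX C++ sketch, not Mono-specific; the plugin-runner executable and the particular limits are assumptions): the host forks a child with hard memory and CPU limits, execs the plugin runner in it, and simply reaps the child if the OS kills it for overdrawing.

```cpp
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

// Run an untrusted plugin host as a child process with hard limits,
// so the OS enforces memory/CPU caps and the parent survives either way.
int run_plugin_host(char* const argv[]) {
    pid_t pid = fork();
    if (pid == 0) {
        // Child: cap address space at 256 MB and CPU time at 10 seconds.
        rlimit mem{256 * 1024 * 1024, 256 * 1024 * 1024};
        rlimit cpu{10, 10};
        setrlimit(RLIMIT_AS, &mem);
        setrlimit(RLIMIT_CPU, &cpu);
        execv(argv[0], argv);   // e.g. a hypothetical "mono plugin-runner.exe" wrapper
        _exit(127);             // exec failed
    }
    int status = 0;
    waitpid(pid, &status, 0);   // parent reaps the child even if it was killed
    return status;
}
```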
My VPS account has been occasionally running out of memory. It's using Apache on Linux. Support says it's a slow memory leak and has enabled MaxRequestsPerChild to deal with it.
I have a few questions about this. When a child process dies, will it cause my scripts to lose session data? Does anyone have advice on how I can track down this memory leak?
Thanks
No, when a child process dies you will not lose any data unless it was in the middle of a request at the time (which should not happen if it exits due to MaxRequestsPerChild).
You should try to reproduce the memory leak using an identical software stack on your test system. You can use tools such as Valgrind to try to detect it.
You can also try a debug build of your web server and its modules, which will enable you to detect what's going on.
It's difficult to reproduce the behaviour of production systems in non-production ones. If you have auto-test coverage of your web application, you could try running your full auto-test suite, but in practice this is unlikely to cover every code path and may therefore miss the leaky one.
When a child process dies, will it cause my scripts to lose session data?
Without knowing what scripting language and session handler you are using (and the actual code), it is rather hard to say.
In most cases, when scripting languages are used in modules or via [Fast]CGI, it is very unlikely that the session data would actually be lost, although if the process dies in the middle of processing a request it may not get the chance to write the updated session back to whatever is storing it. And in the very unlikely event that it dies during the write-back, it may corrupt the session data. These are quite exceptional circumstances.
OTOH, if your application logic is implemented via a daemon (e.g. a Java container), then it's quite probable that memory leaks could accumulate (although these would be reported against a different process).
Note that if the problem is alleviated by setting MaxRequestsPerChild then it implies that the problem is occurring in an Apache module.
The production releases of Apache itself are, in my experience, very stable and free of memory leaks. However, I've not used all the modules. I'm not sure whether ExtendedStatus gives a breakdown of memory usage by module - it might be worth checking.
I've previously seen problems with the memory management of modules loaded by the PHP module not respecting PHP's memory limits - these did clear down at the end of the request though.
C.