Which resources are automatically cleaned up by the OS when an application quits?

From this answer: When is a C++ terminate handler the Right Thing(TM)?
It would be nice to have a list of resources that 'are' and 'are not' automatically cleaned up by the OS when an application quits. In your answer, please specify the OS/resource and preferably a link to some documentation (if appropriate).
The obvious one:
Memory: yes, automatically cleaned up.
Question: are there any exceptions?

There are some obscure resources that Windows does not clean up when an app crashes or exits without explicitly releasing them, mostly because the OS doesn't know if they're important to leave around or not.
Temporary files -- as others have mentioned.
Globally registered WNDCLASSes ("No window classes registered by a DLL are unregistered when the DLL is unloaded. A DLL must explicitly unregister its classes when it is unloaded." MSDN) If your global window class also has a class DC, then that DC will leak as well.
Global ATOMs (a relatively limited resource).
Window message IDs created with RegisterWindowMessage. These are designed to leak, since there's no UnregisterWindowMessage.
Semaphores and Events aren't technically leaked, but when the owning application goes away without signalling them, other processes can hang. This is not true for a Mutex: if the owning application goes away, other processes waiting on that Mutex are released with WAIT_ABANDONED (see the sketch after this list).
There may be some residual weirdness on Windows XP and earlier if you don't unregister a hot key before exiting. Other applications may be unable to register the same hot key.
On Windows XP and earlier, it's not uncommon to have a zombie console window live on after a process crashes. (Specifically, a GUI application that also creates a console window.) It shows up on the task bar. All you can do is minimize, restore, or move the window.
Buggy drivers can be aggravated by apps that don't explicitly release resources when they exit. Non-paged pool leaks are fairly common.
Data copied to the clipboard. I guess that doesn't really count because it's owned by the OS at that point, not the application that put it there.
Globally installed hooks aren't unloaded when the installing process crashes before removing the hook.
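To make the Mutex point concrete, here is a minimal sketch (Win32 C++; the mutex name is invented for illustration) of a waiter detecting that the previous owner exited without releasing the mutex:

#include <windows.h>
#include <cstdio>

int main() {
    // Create (or open) a named mutex that another process may already own.
    // The name "ExampleAppMutex" is purely illustrative.
    HANDLE hMutex = CreateMutexW(nullptr, FALSE, L"ExampleAppMutex");
    if (!hMutex) {
        std::printf("CreateMutex failed: %lu\n", GetLastError());
        return 1;
    }

    DWORD result = WaitForSingleObject(hMutex, INFINITE);
    if (result == WAIT_ABANDONED) {
        // The previous owner exited (or crashed) without calling ReleaseMutex.
        // We still acquire ownership, but any state the mutex protected may be
        // inconsistent. An Event left unsignalled would simply keep us waiting.
        std::printf("Mutex was abandoned by its previous owner.\n");
    } else if (result == WAIT_OBJECT_0) {
        std::printf("Mutex acquired normally.\n");
    }

    ReleaseMutex(hMutex);
    CloseHandle(hMutex);
    return 0;
}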

Temporary files are a good example of something that will not be cleaned up: the handle is released, but the file isn't deleted.

In Windows, just about anything you can get a handle to is in fact managed by the OS; that's why you only get a handle. This includes, but is not limited to, the following (list copied from the MSDN docs for the CloseHandle() API):
Communications device
Console input
Console screen buffer
Event
File
File mapping
Job
Mailslot
Mutex
Named pipe
Process
Semaphore
Socket
Thread
Token
All of these should be recovered by the OS when an application closes, though possibly not immediately, depending on their use by other processes.
Other operating systems work the same way. It's hard to imagine an OS worth its name (I exclude embedded systems etc.) where this is not the case: resource management is the #1 raison d'être of an operating system.
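As a rough illustration of the point above (plain Win32 C++, nothing here is application-specific): kernel objects are reference-counted by the OS, so even if a process never calls CloseHandle, process termination closes every entry in its handle table and the objects are destroyed once the last reference anywhere goes away.

#include <windows.h>

int main() {
    // Both calls return handles to kernel objects from the CloseHandle list.
    HANDLE hEvent   = CreateEventW(nullptr, TRUE, FALSE, nullptr);       // unnamed manual-reset event
    HANDLE hMapping = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,  // pagefile-backed file mapping
                                         PAGE_READWRITE, 0, 4096, nullptr);

    // Well-behaved code closes its handles explicitly...
    CloseHandle(hMapping);
    CloseHandle(hEvent);

    // ...but if it didn't, the OS would close them at process exit, keeping the
    // underlying objects alive only while other processes still hold references.
    return 0;
}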

Any exception is a bug - applications can and do crash and do contain leaks. An OS needs to be reliable and not exhaust resources even in the face of poorly written applications. This also applies to non-OS resources. Services that hand out resources to processes need to free those resources when the process exits. If they don't it is a bug that needs to be fixed.
If you're looking for program artifacts which can persist beyond process exit, on Windows you have at least the following (a sketch of the two flags appears after this list):
Registry keys created without REG_OPTION_VOLATILE
Files created without FILE_FLAG_DELETE_ON_CLOSE
Event log entries
Paper that was used for print jobs
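A rough sketch of the two flags mentioned in the list above (Win32 C++; the key path and file name are made up), showing how to opt out of that persistence:

#include <windows.h>

int main() {
    // A volatile registry key is kept in memory only and vanishes when the
    // system restarts. The key path is just an example.
    HKEY hKey = nullptr;
    if (RegCreateKeyExW(HKEY_CURRENT_USER, L"Software\\ExampleApp\\Scratch",
                        0, nullptr, REG_OPTION_VOLATILE, KEY_WRITE,
                        nullptr, &hKey, nullptr) == ERROR_SUCCESS) {
        RegCloseKey(hKey);
    }

    // A file created with FILE_FLAG_DELETE_ON_CLOSE is removed when its last
    // handle is closed -- which includes the implicit close at process exit.
    HANDLE hFile = CreateFileW(L"C:\\Temp\\example-scratch.tmp",
                               GENERIC_READ | GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_DELETE, nullptr,
                               CREATE_ALWAYS,
                               FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
                               nullptr);
    if (hFile != INVALID_HANDLE_VALUE) {
        CloseHandle(hFile); // the file disappears here
    }

    return 0;
}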

Related

Is there a node.js API that lets you store the currently running node process and resume it later?

Googling for it turns up many “how to persist data in a node app” results, but I’m looking for a way to store the program counter, memory state, event loop, call stack etc. in persistent storage and resume it later.
Benefits: if you see that the runtime (a server, container, serverless function) is about to terminate, instead of using business logic to pause and resume (custom work), handle it the same way operating systems handle multiple processes/threads: store everything, then resume it later from different infrastructure (but with identical specs).
I’m sure something like this exists, but I probably just can’t find the right search term.
PS: this might be an OS feature I’m looking for rather than something Node-specific, but if it can be done from within Node’s API (e.g. V8 internals) I could basically get an unlimited / long-running lambda ;) (which is a bad idea, but I want to know if it’s possible).
(V8 developer here.)
V8 definitely doesn't support this.
What V8 does support is taking a heap snapshot, and deserializing that on renewed process startup (and I believe Node is making use of this functionality). That's quite different from freezing an entire running process though.
I'm not sure what you mean by "the same way operating systems handle multiple processes / threads". Operating systems don't usually let you snapshot a process and transfer it to a different machine.
On the same machine, you could literally just let the OS do it: pause the process (e.g. press Ctrl+Z if you started it at a Linux command line, or use equivalent Task Manager functionality if your OS provides it, or similar), and resume it later. If the process itself doesn't fire any repeated tasks/timers, then that's almost equivalent to simply doing nothing: a process that executes no work won't get scheduled by the kernel anyway; a server that isn't serving any requests can just sit around waiting.
If you actually need to transfer a running process to another machine, your best bet may be a VM which you can snapshot, transfer, resume.
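For completeness, on Linux the OS-level pause/resume described above can also be driven from another process; a minimal sketch (the target PID is whatever node process you want to suspend):

#include <signal.h>    // kill(), SIGSTOP, SIGCONT (POSIX)
#include <sys/types.h> // pid_t
#include <cstdio>
#include <cstdlib>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::printf("usage: pauser <pid>\n");
        return 1;
    }
    pid_t pid = static_cast<pid_t>(std::atoi(argv[1]));

    // SIGSTOP unconditionally suspends the process (Ctrl+Z sends the similar,
    // catchable SIGTSTP); SIGCONT later resumes it exactly where it left off.
    kill(pid, SIGSTOP);
    // ... later, possibly from a different invocation ...
    kill(pid, SIGCONT);
    return 0;
}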

Stop/abort/terminate required (loaded) module

Is there a possible way to stop/abort/terminate a required/loaded module?
I found something here (https://stackoverflow.com/a/6677355/5781499):
var name = require.resolve('moduleName');
delete require.cache[name];
But this does not stop/abort a running timer or similar.
It just keeps doing what the script does.
The reason for me to need this, I want to implement a plugin system where you can start & stop plugins.
"Starting" is easy, just load with require(...) the code.
But what would be the best way to stop everything the plugin is doing?
I have thought about a VM, but in node there is no way to abort a vm execution either.
The next thing that came to my mind was "Worker Threads". They provide a .terminate method, which does what I need. (But now I have to deal with inter-process communication, which makes it very complex to keep everything synced.)
Would be awesome if someone could give me a hint/tip.
Nodejs does not provide any feature to do what you want, so you will have to do a bunch of things manually. As you've discovered, deleting the module from the module cache only affects what happens if you try to load the code again; it does not affect the already loaded code at all.
If you're going to keep the plug-ins in the same process, then you can implement a required method in your plug-ins called something like "shutdown" where the plug-in shuts itself down manually (stops timers, unregisters event handlers, etc...). Implemented correctly, this should disconnect it entirely from anything in your nodejs program. If you then delete the module from the require cache, you can then load a new module in its place. The one downside to this is that nodejs does not ever unload the original code - it just stays in memory. If you're not accessing that original module handle, that code never gets used again, but it isn't freed or GCed by nodejs.
A bit more robust system would be to put each plug-in in its own child process or worker thread and just communicate with it via the built-in interprocess communication between parent and child process, which is essentially just messaging. As long as you don't have to send large amounts of data between parent and child/worker or have super-high-bandwidth data, the messaging is pretty simple to use and works well.
If using a separate child process, you can then kill the child process at anytime and the OS will reclaim all resources used by the process (not quite so true for a workerThread). This has its own downsides in that it will likely use a lot more memory since a whole new nodejs process or workerThread in nodejs is a much heavier weight thing than just loading a single module into your existing nodejs process.
Running it in a child process has the advantage that your main process is much more protected from errant code in the plug-in (either accidental or malicious) since they are different processes and the plug-in can't directly mess with the parent process. But, don't fool yourself here, unless you run it in a sandboxed VM, the plug-in can still wreak some havoc on the system since it has access to many resources on the system (disk, network, other peripherals, etc...).

How to list threads when debugging in Visual Studio Express 2010

I am trying to track down the reason why my WPF application is not ending cleanly while debugging. By 'cleanly' I mean that all the windows are closed, I can see various messages in the Output window showing that the app has ended but the process is still active and the 'Stop' button in the debugger is still active.
I call the Shutdown() method but something is stopping the application from ending. I am pretty sure it has something to do with the ethernet connection to an IO device but cannot see what I am doing wrong. (When I comment out the call to connect the device the app can exit cleanly)
I was wondering if VSE 2010 can list all active threads as this might give a clue as to what is still 'alive' after the main program ends. Or is there an external tool that might help here?
You should be able to use the Visual Studio Threads window to see which threads are still active. I'm not entirely sure this window is available in the Express edition (the documentation doesn't mention such a limitation), but should you not have it, then you can also use WinDbg to list all threads. WinDbg is part of the debugging tools for Windows. You might need to install the latest version of the Windows SDK to get it.
Use the debugger first. Debug + Break All, Debug + Windows + Threads to see what threads are still running. You can double-click one and use Debug + Windows + Call Stack to see what it is doing. The typical case is a thread you started but forgot to tell to terminate. The Thread.IsBackground property is a way to let the CLR abort a thread automatically for you.
Technically it is possible to have a problem with a device that prevents a process from shutting down. The Threads window would then typically show only one thread with an otherwise impenetrable stack trace. If you use Task Manager, Processes tab, View + Select Columns, tick Handles, then you may see only one handle still in use. The diagnostic then is that you have a lousy device driver on your machine that doesn't properly support I/O cancellation. Which could leave a kernel thread running that doesn't quit, preventing the process from terminating. Very unusual, look for the reasons given in the first paragraph first.

Should CUDA events and streams always be destroyed?

I am reading CUDA By Example and I found that when they introduced events, they called cudaEventDestroy for each event they created.
However, I noticed that some later examples neglected this cleanup function. Are there any undesirable side effects of forgetting to destroy created events and streams (e.g. something like a memory leak when you forget to free allocated memory)?
Any resources the app is still holding at the time it exits will be automatically freed by the OS / drivers. So, if the app creates only a limited number of events, it is not strictly necessary to free them manually. Still, letting the app exit without freeing all resources on purpose is bad practice, because it becomes hard to distinguish between genuine leaks and "on purpose" leaks.
You have identified bugs in the book's sample code.
CUDA events are lightweight, but a resource leak is a resource leak. Over time, if you leak enough of them, you won't be able to create them anymore.
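A minimal sketch of the pattern the examples should follow, with every cudaEventCreate/cudaStreamCreate paired with its destroy call (CUDA runtime API from C++ host code; error checking omitted for brevity):

#include <cuda_runtime.h>
#include <cstdio>

int main() {
    cudaEvent_t start, stop;
    cudaStream_t stream;

    // Create the resources.
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaStreamCreate(&stream);

    // Time some work on the stream (a memset stands in for a kernel launch).
    void* buf = nullptr;
    cudaMalloc(&buf, 1 << 20);
    cudaEventRecord(start, stream);
    cudaMemsetAsync(buf, 0, 1 << 20, stream);
    cudaEventRecord(stop, stream);
    cudaEventSynchronize(stop);

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    std::printf("elapsed: %.3f ms\n", ms);

    // Pair every create with a destroy/free; the driver would reclaim these at
    // process exit anyway, but leaking them makes real leaks harder to spot.
    cudaFree(buf);
    cudaStreamDestroy(stream);
    cudaEventDestroy(stop);
    cudaEventDestroy(start);
    return 0;
}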

Watchdog win service to watch another win service

I want to make a Windows service that monitors another Windows service and makes sure that it is working.
Sometimes the service that I want to watch stays in memory (it appears in Task Manager, so it is considered a running service), but in fact it is doing nothing; it is dead, its timer is not firing for some reason (which is not the subject of this question).
What I need is to make a watchdog service that somehow reads a value in memory that the watched service is periodically writing.
I thought about using named pipes, but I don't want to add communication issues to my services. I want to know if there is a way to create such shared memory between two applications (possibly using a named, system-wide Mutex?).
Since you have to deal with detecting a zombie service I don't think using a kernel object like a mutex will help, you need to detect activity. A semaphore isn't a good fit either.
My personal preference would be a named pipe sending small heartbeat messages (since that could be detected across a network as well), but if you want to avoid the complexity of pipe comms - which I guess is understandable - then you could update a DWORD in a predetermined registry key. If both services run under LocalSystem you could write a key/value into HKEY_LOCAL_MACHINE. Run a pump-up timer and watch for changes to the key every so often (watch out for counter wrap-around). You won't have a normal window/message pump so SetTimer is off-limits, but you can still use timeSetEvent or waitable timers.
HKLM won't be available if one of the services runs under a non-admin account, but that's a pretty rare situation for services. Of course all this assumes you have access to the code of both services. Watching a third-party service would severely limit your options.
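A rough sketch of the registry heartbeat idea above (Win32 C++; the key path and value name are invented), with the watched service bumping a DWORD tick count and the watchdog flagging it as dead once the value goes stale:

#include <windows.h>
#include <cstdio>

// Hypothetical location shared by both services. Writing to HKLM assumes both
// run as LocalSystem (see the caveat above).
static const wchar_t* kKeyPath = L"SOFTWARE\\ExampleCompany\\Watchdog";
static const wchar_t* kValue   = L"Heartbeat";

// Called by the watched service from its timer, e.g. every few seconds.
void WriteHeartbeat() {
    HKEY hKey;
    if (RegCreateKeyExW(HKEY_LOCAL_MACHINE, kKeyPath, 0, nullptr, 0,
                        KEY_SET_VALUE, nullptr, &hKey, nullptr) == ERROR_SUCCESS) {
        DWORD ticks = GetTickCount(); // wraps after ~49.7 days; compare with care
        RegSetValueExW(hKey, kValue, 0, REG_DWORD,
                       reinterpret_cast<const BYTE*>(&ticks), sizeof(ticks));
        RegCloseKey(hKey);
    }
}

// Called by the watchdog service from its own timer.
bool LooksAlive(DWORD maxAgeMs) {
    HKEY hKey;
    DWORD ticks = 0, size = sizeof(ticks), type = 0;
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, kKeyPath, 0, KEY_QUERY_VALUE, &hKey) != ERROR_SUCCESS)
        return false;
    LSTATUS st = RegQueryValueExW(hKey, kValue, nullptr, &type,
                                  reinterpret_cast<BYTE*>(&ticks), &size);
    RegCloseKey(hKey);
    if (st != ERROR_SUCCESS || type != REG_DWORD)
        return false;
    // Unsigned subtraction tolerates a single wrap-around of GetTickCount().
    return (GetTickCount() - ticks) <= maxAgeMs;
}

int main() {
    WriteHeartbeat();
    std::printf("alive: %d\n", LooksAlive(30000));
    return 0;
}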
