I'm using node-cache to cache data in a CLI application that observes changes in files, so I can avoid reprocessing the data.
The problem is that the cache is destroyed on each command: every time the tool is invoked in the terminal, a new instance is created and the old one is destroyed, and presumably the data along with it.
I need to keep, for a specific TTL, two things in cache/memory, even if the process ends:
the processed data
the specific fs.FSWatcher instance, which watches the files and executes the caching operations
The question is: how do I do it? I've been searching the internet for days and trying alternatives, and I can't find a solution.
I need to keep ... things in cache/memory, even if the process ends
That's, pretty much by definition, not possible. When a process terminates, all its resources are freed up for use by something else (barring a memory-leak bug in the OS itself).
It sounds like you need to refactor your app into a service that runs in the background and a separate front end that communicates with it.
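For illustration, here is a minimal sketch of that split, assuming the service owns both node-cache and the watcher and exposes results over a local HTTP port (the port, the watched file path, and the processFile helper are all hypothetical stand-ins):

// daemon.js - long-running service that owns the cache and the watcher.
// Start it once (node daemon.js &); the CLI then queries it over HTTP.
const http = require('http');
const fs = require('fs');
const NodeCache = require('node-cache');

const cache = new NodeCache({ stdTTL: 600 }); // entries expire after 10 minutes
const WATCHED_FILE = './data.json';           // hypothetical file to observe

// The watcher lives as long as the daemon does, surviving any number of CLI calls.
fs.watch(WATCHED_FILE, () => {
  cache.set('data', processFile(WATCHED_FILE));
});

http.createServer((req, res) => {
  let data = cache.get('data');
  if (data === undefined) {       // cache miss or TTL expired: recompute
    data = processFile(WATCHED_FILE);
    cache.set('data', data);
  }
  res.end(JSON.stringify(data));
}).listen(8765, '127.0.0.1');     // hypothetical local port

function processFile(path) {
  // Stand-in for the real processing step.
  return { length: fs.readFileSync(path, 'utf8').length };
}

The CLI then becomes a thin client that fetches from http://127.0.0.1:8765 instead of recomputing on every invocation.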
Related
Googling for it turns up many “how to persist data in a Node app” results, but I’m looking for a way to store the program counter, memory state, event loop, call stack, etc. in persistent storage, and resume it later.
Benefits: if you see that the runtime (a server, container, or serverless function) is about to terminate, instead of pausing and resuming via business logic (custom work), you could handle it the same way operating systems handle multiple processes/threads: store everything, then resume it later from different infrastructure (but with identical specs).
I’m sure something like this exists; I probably just can’t find the right search term.
P.S. This might be an OS feature I’m looking for rather than something Node-specific, but if it can be done from within Node’s API (e.g. V8 internals), I could basically get an unlimited/long-running lambda ;) (which is a bad idea, but I want to know if it’s possible).
(V8 developer here.)
V8 definitely doesn't support this.
What V8 does support is taking a heap snapshot, and deserializing that on renewed process startup (and I believe Node is making use of this functionality). That's quite different from freezing an entire running process though.
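For illustration, recent Node versions expose this experimentally through the --build-snapshot flag and the v8.startupSnapshot API; a rough sketch follows (the lookup table is a made-up example, and the feature's flags and availability may vary by Node version):

// snapshot.js
// Build the blob:   node --snapshot-blob snapshot.blob --build-snapshot snapshot.js
// Restore from it:  node --snapshot-blob snapshot.blob
const { startupSnapshot } = require('node:v8');

// Expensive work whose result gets baked into the serialized heap.
const table = {};
for (let i = 0; i < 100000; i++) table['k' + i] = i * i;

startupSnapshot.setDeserializeMainFunction(() => {
  // Entry point when the process starts from the deserialized snapshot;
  // `table` is already populated, without re-running the loop above.
  console.log('restored', Object.keys(table).length, 'entries');
});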
I'm not sure what you mean by "the same way operating systems handle multiple processes / threads". Operating systems don't usually let you snapshot a process and transfer it to a different machine.
On the same machine, you could literally just let the OS do it: pause the process (e.g. press Ctrl+Z if you started it at a Linux command line, or use equivalent Task Manager functionality if your OS provides it, or similar), and resume it later. If the process itself doesn't fire any repeated tasks/timers, then that's almost equivalent to simply doing nothing: a process that executes no work won't get scheduled by the kernel anyway; a server that isn't serving any requests can just sit around waiting.
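On POSIX systems you can even drive that pause/resume from Node itself; a tiny hypothetical sketch (pass the target pid as an argument):

// pause.js - pause another process for 10 seconds, then resume it.
const pid = Number(process.argv[2]);

process.kill(pid, 'SIGSTOP');   // kernel stops scheduling the target process
setTimeout(() => {
  process.kill(pid, 'SIGCONT'); // resume it, as if it was never paused
}, 10000);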
If you actually need to transfer a running process to another machine, your best bet may be a VM which you can snapshot, transfer, resume.
Is there a possible way to stop/abort/terminate a required/loaded module?
I found something here (https://stackoverflow.com/a/6677355/5781499):
var name = require.resolve('moduleName');
delete require.cache[name];
But this does not stop/abort a running timer or similar.
It just keeps doing what the script does.
The reason I need this: I want to implement a plugin system where you can start & stop plugins.
"Starting" is easy, just load with require(...) the code.
But what would be the best way to stop everything the plugin is doing?
I have thought about a VM, but in Node there is no way to abort a vm execution either.
The next thing that came to mind was "Worker Threads". They provide a .terminate method, which does what I need. (But then I have to deal with inter-process communication, and keeping everything synced is very complex.)
Would be awesome if someone could give me a hint/tip.
Nodejs does not provide any feature to do what you want, so you will have to do a bunch of things manually. As you've discovered, deleting the module from the module cache only affects what happens if you try to load the code again; it does not affect the already-loaded code at all.
If you're going to keep the plug-ins in the same process, then you can have each plug-in implement a required method, called something like "shutdown", in which the plug-in shuts itself down manually (stops timers, unregisters event handlers, etc...). Implemented correctly, this should disconnect it entirely from anything in your nodejs program. If you then delete the module from the require cache, you can load a new module in its place. The one downside is that nodejs never unloads the original code - it just stays in memory. If you're not accessing that original module handle, that code never gets used again, but it isn't freed or GCed by nodejs.
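A minimal sketch of that contract (the start/shutdown names are just a convention for your plug-ins to follow, not a nodejs API):

// pluginHost.js - load/shutdown/reload cycle for same-process plug-ins.
function loadPlugin(name) {
  const plugin = require(name);
  plugin.start();                // plugin registers its timers, listeners, etc.
  return plugin;
}

function unloadPlugin(name, plugin) {
  plugin.shutdown();             // plugin stops timers, unregisters handlers
  delete require.cache[require.resolve(name)]; // next require(name) loads fresh code
}

// A conforming plug-in might look like:
// let timer;
// module.exports = {
//   start() { timer = setInterval(doWork, 1000); },
//   shutdown() { clearInterval(timer); },
// };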
A more robust system would be to put each plug-in in its own child process or worker thread and just communicate with it via the built-in interprocess communication between parent and child, which is essentially just messaging. As long as you don't have to send large amounts of data between parent and child/worker, or have super-high-bandwidth data, the messaging is pretty simple to use and works well.
If using a separate child process, you can then kill the child process at any time and the OS will reclaim all resources used by the process (not quite so true for a workerThread). This has its own downsides in that it will likely use a lot more memory, since a whole new nodejs process or workerThread is a much heavier-weight thing than just loading a single module into your existing nodejs process.
Running it in a child process has the advantage that your main process is much more protected from errant code in the plug-in (either accidental or malicious), since they are different processes and the plug-in can't directly mess with the parent process. But don't fool yourself here: unless you run it in a sandboxed VM, the plug-in can still wreak some havoc, since it has access to many resources on the system (disk, network, other peripherals, etc...).
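A rough sketch of the child-process variant (plugin path and message shape are hypothetical):

// Run the plug-in in its own process; killing it reclaims all its resources.
const { fork } = require('child_process');

const child = fork('./plugin.js');   // plug-in runs in a separate nodejs process
child.on('message', (msg) => console.log('from plugin:', msg));
child.send({ cmd: 'start' });        // parent and child talk via messaging

// To stop the plug-in unconditionally at any time:
// child.kill('SIGTERM');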
In Node.js cluster mode, if multiple jobs exist in the event loop for one process and the current job crashes the process, what happens to the remaining jobs?
I'm assuming the remaining jobs in the event loop would go unfulfilled or return a server error. My question is: why is this an acceptable risk? Why would someone opt to use Node.js cluster mode in production rather than something like PHP, where there is no risk of this because PHP handles each request in its own process?
Edit:
Obviously this doesn't just apply to Node.js cluster mode; it can happen on a single instance too, in which case the end user would just get a server error. Cluster mode just happens to be my personal use case.
I'm looking for a way to pick a job back up from the queue should a previous job cause the process to exit before the subsequent job gets a chance to be fulfilled. I am currently reading about how you can use a tool like RabbitMQ to handle your job queue outside of the Node.js cluster, with each cluster instance just pulling jobs from the RabbitMQ queue (roughly as sketched below). If anyone has any input on that, it would also be greatly appreciated.
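For reference, the pattern I've been reading about looks roughly like this with the amqplib client (queue name and job handler are hypothetical); acking only after the work finishes means an unacked job is redelivered if the worker crashes:

// worker.js - each cluster worker pulls jobs from RabbitMQ.
const amqp = require('amqplib');

(async () => {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('jobs', { durable: true }); // queue survives broker restarts
  ch.prefetch(1);                                  // one unacked job per worker

  ch.consume('jobs', async (msg) => {
    await handleJob(JSON.parse(msg.content.toString()));
    ch.ack(msg); // acked only once done; a crash before this requeues the job
  }, { noAck: false });
})();

async function handleJob(job) {
  // Stand-in for the real work.
  console.log('processing', job);
}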
If multiple jobs exist in the event loop for one process, what happens to the remaining jobs if the current job crashes the process?
If a node.js process crashes, the same thing happens to it that happens to any other process. All open sockets get automatically disconnected and the client will receive an immediate close on their socket (socket connection dropped essentially).
If you were using a Java server that was in the middle of handling 10 requests (perhaps in threads) and it crashed, the consequences would be the same. All 10 socket connections would get dropped.
If process isolation from one request to another is your #1 criterion for selecting a server environment, then I guess you wouldn't pick any environment that ever serves multiple requests from the same process. But you would give up a lot to get that. One of the reasons for the node.js design is that it scales really, really well for a high number of concurrent connections that are all doing mostly I/O things (disk, networking, database stuff, etc...), which describes most web servers. Whereas a design that fires up a new process for every incoming connection does not scale as well for a large number of concurrent connections, because a process is a much heavier-weight thing in the eyes of the operating system (memory usage, other system resource usage, task-switching overhead, etc...) than the way node.js does things.
And, there are obviously hundreds of other considerations too when choosing a server environment. So, you kind of have to look at the whole picture of what you're designing for and make the best set of tradeoffs.
In general, I wouldn't put this issue anywhere on the radar for why you should choose one over the other unless you expect to be running risky code (perhaps out of your control) that crashes a lot and this issue is therefore more important in your deployment than all the other differences. And, if that was the case, I'd probably isolate the risky code to its own process (even when using nodejs) to alleviate any pain from that crash. You could have a process pool waiting to process risky things. For example, if you were running code submitted by a user, I might run that code in its own isolated VM.
If you're just worried about your own code crashing a lot, then you probably have bigger problems and need more extensive unit testing, more robust error handling, and other tools such as a linter and code-analysis tools to find potential problem areas. With proper design, implementation and error handling, you should be able to keep a single incoming request from harming anything other than itself. That's certainly the philosophy that every server environment serving multiple requests from the same process advises, and that the people/companies deploying those servers follow.
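As an illustration of containing that damage in cluster mode, a common pattern is to have the primary respawn any worker that dies, so a crash only loses the requests in flight on that one worker (a minimal sketch):

// server.js - crashed workers are replaced, so the service stays up.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) {  // cluster.isMaster on older Node versions
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', (worker, code) => {
    console.log(`worker ${worker.process.pid} died (code ${code}), restarting`);
    cluster.fork();       // replace the dead worker
  });
} else {
  http.createServer((req, res) => res.end('ok')).listen(8000);
}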
I need to test how my code will handle the failure of a web role instance in a development environment.
How do I terminate one of the instances? I can't see any option in the UI for this. Seems like a strange omission.
Update
The issue relates to a distributed cache layer (I know that Azure offers its own).
I want to be able to test how the system reacts to a missing or additional node, etc.
Perhaps my real question is:
how up to date is RoleEnvironment.CurrentRoleInstance.Role.Instances?
The need to simulate ungraceful exits in the dev emulator usually arises because you are doing something in your web role that is stateful or long-running. That is generally discouraged, but sometimes it is unavoidable.
I suspect the best way to simulate a failure is to kill processes. If you open Task Manager (or better, Process Explorer), you will see "WatDebugger" hosting either "WaIISHost" or "WaWorkerHost". If you kill this process, I think it will simulate a failure.
Honestly, it is easier to test this one in the cloud however. You can RDP into one of the instances and kill the 'WaAppAgent' process. That will kill your RoleEntryPoint and fabric controller agent. That will be a true ungraceful failure.
By failure, do you mean becoming unavailable? It should be seamless because the next request would simply be handled by one of the other instances. As long as there is one instance available Azure will route calls to that instance.
This is the nature of a highly available system: requests are handled by the available instances. This is why you have multiple instances in the first place, to handle requests in the case of failure in one or more instances.
This is why you need to always be watchful of how your application handles state. State needs to be maintained outside of the instance, either in queues or in a database. This ensures that any process can pick up a piece of work and execute against it.
There is another question dealing with Session State that should help: How does Microsoft Azure handle Session State?
By terminating an instance, do you mean reducing the instance count and seeing which one gets killed? I like Ryan's view about ungraceful exits, but if it's a forced kill by the fabric, it'll be a different ball game.
I have a site which sometimes takes particularly long to process a request (and that's not a defect). 99% of the time it's pretty quick, because it does almost no processing.
I want to show a message that says "Loading" when the site takes long to process the request. My site uses mod_wsgi and Apache. The way I see it, I would respond with 'Loading' before completing the processing, and right before that do one of two things:
- spawn a (daemon) thread to take care of the processing, or
- communicate through a socket with another process and tell it to take care of the processing (most likely by sending a request to http://localhost:8080/do_processing).
What are the pros and cons of one approach vs the other?
Using a separate process is better. It does not have to be hard at all, as suggested in another answer, since you can use an existing system built for exactly that, such as Celery (http://celeryproject.org/). Relying on in-process threads is not necessarily a good idea unless you are going to implement an internal job-queueing system of your own to prevent blowing out the number of threads. Also, in a multiprocess server configuration you can't be guaranteed that a request comes back to the same process, so it is not easy to get the status of a running operation. Finally, the web server processes can get killed off, and thus your background task could also be killed before it finishes. You would need a mechanism for holding state that can survive such an event, if that matters. Far easier to use something like Celery.
The process route requires quite a bit of system processing. Creating a separate process is relatively expensive and slow. However, if the process crashes it doesn't affect your main governing process (you will receive the exit status code and have an opportunity to respawn a new worker process). You will also need some sort of interprocess-communication layer (a socket, pipe, shared memory, etc...), which adds to the complexity of your project.
Threads are lightweight and cheap. All you need to do is manage concurrent access to shared resources. So it really depends on the task you have in mind; threads will more likely be the appropriate way to implement it.