I've been working on a program and it requires restarting a certain application with Node.js.
I know Python is able to close another app, but can Node.js do the same?
For example, when a function is triggered I need it to close Spotify from the Node.js process.
Thanks!
https://nodejs.org/api/process.html#process_process_kill_pid_signal
The process.kill(pid[, signal]) method sends the signal to the process identified by pid.
Of course, this assumes you already know the pid of the process you want to end. If you don't, you'll have to find it first.
In VS Code there are two ways to launch the debug console for Node. One is "launch", which executes node and passes in your script. The script executes and node exits, which I don't want to happen. The other is "attach": you launch node yourself using --inspect, then attach VS Code to the debugger. Then I have to go to the node console and type ".load myscript". This keeps the node console open after the script has finished.
What I want is the ease of use of the "launch" method, but with the node console kept open at the end like the "attach" approach, so I can then type further commands or view the contents of variables. There must be a way to do this, but I can't find out how. Can anyone advise how I could achieve this? I would even be happy to use only the "launch" method if I could somehow add a breakpoint at the end of the code so that it would keep node open.
A Node.js process will not exit as long as there are events pending. A simple way to keep it alive at the end of your script is to start a server that does nothing:
const net = require('net');
net.createServer(() => {}).listen(0);
Setting the port to 0 will cause the OS to give you a random available port so you don't need to think about what port to use.
This is generally safe if you are on a local network. However, if you are worried about other software connecting to your bogus server, you can simply close incoming connections as soon as you receive them:
net.createServer((socket) => socket.end()).listen(0);
I have built a service that takes a screenshot of a URL, using Node and PhantomJS.
My Node app works as follows:
A simple app that receives an API request to indicate which URL to load and take a screenshot of
The app spawns a child Phantom process which takes the screenshot and saves it to a temp file on the server
The main process uploads the image to S3
The main process fires an API request back to the initial website, including the image's URL, to say the image is uploaded
The temporary file is deleted
This works fine for a single request, no problem. However, when I throw multiple, consecutive requests at this service I get strange results. Each request received by the service spawns a Phantom JS process and a screenshot is taken, but the data in the API request sent back to the main website is often not correct. Regularly the system will send back the image URL from a screenshot created by another child process.
My hunch is that when a spawned process exits, it sends the API request to the original website with whatever data it has just received, rather than the data for the process that just completed.
I feel like this should be an easy thing to manage, but I haven't quite found the right approach. Does anyone have any tips or tricks for managing child processes created with spawn, especially when they exit? I would like to perform another task based on the process that exited.
My initial thought was to keep an array of the child process PIDs along with the related data, and do a lookup in this array when a child process exits. This didn't seem to fix the problem, though: I still had incorrect data being sent back to the main website. I do wonder if I implemented this correctly. I defined the array inside each API request handler in the service, so, thinking about it, it would have been recreated on each request, I think.
Another thought was that I should be using fork instead of spawn. I think this would allow me to communicate with the child process, but as far as I can see it can only run a JS file, not an executable like PhantomJS. Is this correct?
I feel a bit like I’m reinventing the wheel at this point but any tips would be much appreciated, thank you.
Why doesn't a Node.js app created as a web server via http.createServer exit when it reaches the end of the script, the way a simple console.log() app does?
Is it because there is a forever while (true) {} loop somewhere in the http module?
Deep in the internals of Node.js there is bookkeeping being done: the number of active event listeners is being counted. Events and the event-driven programming model are what make Node.js special. Events are also the lifeblood that keeps a Node.js program alive.
A Node.js program will keep running as long as there are active event listeners present. After the last event listener has finished or otherwise terminated, the Node.js program will also terminate.
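A quick way to see this bookkeeping in action is with a timer, which counts as an active handle until it fires:

```javascript
// A pending timer counts as an active handle, so the process stays alive
// until it fires; only then does Node run out of work and exit.
const t = setTimeout(() => {
  console.log('timer fired; no handles left, so the process exits');
}, 100);

// Calling t.unref() here instead would drop this handle from the count,
// letting the process exit immediately without waiting for the timer.
```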
For more details, see the Node.js documentation on the event loop.
This is core to Node: while waiting for new connections it does not exit, and no busy loop is involved.
There are many other ways to keep Node running without a forever while loop. For example:
setTimeout(function () {}, 10000000);
(Note that window does not exist in Node; setTimeout is available as a global.)
In the parent process I have started the tiny-lr (livereload) server, then spawned a child process which watches for changes to the CSS files. How do I pass the livereload server on to the child process? Or is it possible to query, from the child process, for the livereload server that is currently running, so that I don't create it again and get an "already in use" error for the port?
The same applies to a Node http server: can I detect whether the server is already running and use it instead of creating a new one?
"Is it possible to query for the livereload server?" It is, and it can be implemented in more than one way:
Use stdout/stdin to communicate with the child process. Basically, you can send messages from one process to the other and reply to them; see the child_process documentation for details.
Use http.request to check if the port is in use.
You can use a file: the process with the server keeps the file open in the write mode - the content of the file stores the port on which the server runs (if needed).
You can use sockets for inter-process communication, as well.
Basically, none of the above gives a 100% guarantee, so you have to try/catch for errors anyway: the server may die just after your check but before you do something with it.
"How to pass on the livereload server to the child process": if you mean sharing an object between different processes, that is out of the question; if you mean changing the ownership of the object, I am some 99.99% sure it is not possible either.
What is the problem with having just one process responsible for running the server? And why not use, let's say, forever to take care of running and restarting the server if needed?
I'm using Node.js to run a socket server (using socket.io). When a client connects, I open and run a module which does a bunch of stuff. Even though I am careful to try and catch as much as possible, when this module throws an error it obviously takes down the entire socket server with it.
Is there a way I can separate the two, so that if the connected client's module script fails, it doesn't necessarily take down the entire server?
I'm assuming this is what child process is for, but the documentation doesn't mention starting other node instances.
I'd obviously need to kill the process if the client disconnected too.
I'm assuming these modules you're talking about are JS code. If so, you might want to try the vm module. This lets you run code in a separate context, and also gives you the ability to do a try / catch around execution of the specific code.
You can run node as a separate process using spawn, then watch its stderr/stdout/exit events to track progress. Then kill can be used to end the process if the client disconnects. You're going to have to map clients to spawned processes, though, so that a client's disconnect event triggers the close of the right process.
Finally the uncaughtException event can be used as a "catch-all" for any missed exceptions, making it so that the server doesn't get completely killed (signals are a bit of an exception of course).
As the other poster noted, you could leverage the 'vm' module, but as you might be able to tell from the rest of the response, doing so adds significant complexity.
Also, from the 'vm' doc:
Note that running untrusted code is a tricky business requiring great care. To prevent accidental global variable leakage, vm.runInNewContext is quite useful, but safely running untrusted code requires a separate process.
While I'm sure you could run a new Node.js instance in a child process, the best practice here is to understand where your application can and will fail, and then program defensively to handle all possible error conditions.
If some part of your code can "take down the entire ... server", then you really need to understand why this occurred and solve that problem, rather than rely on another process to shield you from the work required to design and build a production-quality service.