I'm trying to execute a child_process synchronously in node.js (Yes, I know this is bad, I have a good reason) and retrieve any output on stdout, but I can't quite figure out how...
I found this SO post, node.js execute system command synchronously, which describes how to use a library (node-ffi) to execute the command, and this works great, but the only thing I'm able to get back is the process exit code. Any output the command produces goes directly to stdout -- how do I capture it?
> run('whoami')
username
0
In other words, username is echoed to stdout, and the return value of run is 0.
I'd much rather figure out how to read stdout.
So I have a solution working, but don't exactly like it... Just posting here for reference:
I'm using the node-ffi library referenced in the other SO post. I have a function that:
takes in a given command
appends >> run-sync-output
executes it
reads run-sync-output synchronously and stores the result
deletes this tmp file
returns result
There's an obvious issue: if the user doesn't have write access to the current directory, it will fail. Plus, it's just wasted effort. :-/
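For reference, a rough sketch of those steps, assuming a synchronous run(cmd) helper like the node-ffi one from the linked answer (the runAndCapture name and the temp-file handling are just illustrative):

var fs = require('fs');
// run(cmd) is assumed to execute the command synchronously (e.g. via node-ffi)
// and return only its exit code.
function runAndCapture(cmd) {
  var tmpFile = 'run-sync-output';
  run(cmd + ' >> ' + tmpFile);                    // redirect stdout to a temp file
  var output = fs.readFileSync(tmpFile, 'utf8');  // read the captured output synchronously
  fs.unlinkSync(tmpFile);                         // delete the temp file
  return output;
}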
I have built a node.js module that solves this exact problem. Check it out :)
exec-plan
Update
The above module solves your original problem, because it allows for the synchronous chaining of child processes. Each link in the chain gets the stdout from the previous process in the chain.
I had a similar problem and I ended up writing a node extension for this. You can check out the git repository. It's open source and free and all that good stuff!
https://github.com/aponxi/npm-execxi
ExecXI is a node extension written in C++ that executes shell commands
one by one, printing each command's output to the console in real time.
Chained and unchained modes are available, meaning you can choose to
stop the script after a command fails (chained), or continue as if
nothing had happened!
Usage instructions are in the ReadMe file. Feel free to make pull requests or submit issues!
However it doesn't return the stdout yet... Well, I just released it today. Maybe we can build on it.
Anyway, I thought it was worth mentioning. I also posted this to a similar question: node.js execute system command synchronously
Since Node version v0.11.12, there is a child_process.execSync function for this.
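A minimal example (the encoding option makes it return a string instead of a Buffer; trim() just strips the trailing newline):

var execSync = require('child_process').execSync;
// Returns the command's stdout; throws if the command exits with a non-zero status.
var username = execSync('whoami', { encoding: 'utf8' }).trim();
console.log(username);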
Other than writing your code a little differently, there's actually no reason to do anything synchronously.
What don't you like about this? (docs)
var exec = require('child_process').exec;
// The callback's second argument is the command's stdout
// (here the output of whoami, i.e. the username).
exec('whoami', function (error, username) {
  console.log('stdout: %s', username);
  continueWithYourCode();
});
Related
We have a Python script that needs to trigger opening the Microsoft Store. We believe the easiest way to do that is to use the ms-windows-store:// protocol.
We're currently doing that like this
import subprocess
# Use cmd's built-in 'start' command (hence shell=True) to open the Store URI.
ret = subprocess.call(["start", "ms-windows-store://pdp/?ProductId=9WZDNCRFHVJL"], shell=True)
Is that the recommended way to do this? I'm not sure if using start is correct here, or if there's something better?
Use os.startfile("ms-windows-store://pdp/?ProductId=9WZDNCRFHVJL"). This calls WINAPI ShellExecuteW directly. If you use subprocess, you have the expense of starting a child process. Plus CMD's start command will first search PATH to find a file that it can execute. Presuming nothing is found (and nothing likely will be, given this name), it hands the request off to ShellExecuteExW to let the OS shell handle it.
I want to create a NodeJS source file in a Jupyter notebook that uses the IJavascript kernel, so that I can debug my code quickly. Once it's working, I can use the "Download As..." feature of Jupyter to save the notebook as a NodeJS script file.
I'd like the ability to selectively include/ignore code in the notebook source so that it does not execute when I run the generated NodeJS script file.
I have solved this problem for doing a similar thing for Python Jupyter notebooks because I can determine if the code is running in an interactive session (IPython [REPL]). I accomplished this by using this function in Python:
def is_interactive():
    import __main__ as main
    return not hasattr(main, '__file__')
(Thanks to Tell if Python is in interactive mode)
Is there a way to do a similar thing for NodeJS?
I don't know if this is the correct way, but I couldn't find anything else.
Basically, if you do
try {
  // __dirname is only defined when running from a script/module file;
  // in the REPL referencing it throws a ReferenceError.
  const repl = __dirname;
} catch (err) {
  // code here runs only in the REPL
}
it feels a little hacky but works ¯\_(ツ)_/¯
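Wrapped in a helper, the same check looks like this (isREPL is just an illustrative name, and this assumes CommonJS, where __dirname exists in script scope):

function isREPL() {
  try {
    __dirname;      // defined when running from a script/module file
    return false;   // running from a file
  } catch (err) {
    return true;    // ReferenceError: running in the REPL
  }
}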
This may not help the OP in all cases, but could help others googling for this question. Sometimes it's enough to know whether the script is running interactively or not (the REPL, or any program run directly from a shell).
In that case, you can check for whether standard output is a TTY:
process.stdout.isTTY
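For example:

// isTTY is true when stdout is attached to a terminal (the REPL, or a program
// run directly from a shell), and falsy when output is piped or redirected.
if (process.stdout.isTTY) {
  console.log('running interactively');
} else {
  console.log('output is piped or redirected');
}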
The fastest and most reliable route would be to query the process arguments. From the NodeJS executable alone, there are two ways to launch the REPL. Either you run something like this, without any script following the call to node:
node --experimental-modules ...
Or you force node into the REPL using interactive mode.
node -i ...
The option-ending parameter (--), added in v6.11.0, will never append arguments to the process.argv array unless Node is executing in script mode (via a FILE, -p, or -e). Any arguments meant for NodeJS itself are filtered into the accompanying process.execArgv variable, so the only thing left in the process.argv array should be process.execPath. Under these circumstances, we can reduce the check to the solution below.
const isREPL = process.execArgv.includes("-i") || process.argv.length === 1;
console.log(isREPL ? "You're in the REPL" : "You're running a script m8");
This isn't the most robust method, since a user can still instantiate a REPL from an initiator script that in turn runs your code. To handle that case, I'm pretty sure you could throw an artificial error and crawl its stack trace looking for a REPL entry, although I haven't had the time to implement and verify that solution.
This gist shows a code snippet that dumps an object into a CSV file.
File writing is done using the csv-write-stream module, and the function returns a promise.
This code works flawlessly in all the Mocha tests that I have made.
When the code is invoked by the main nodejs app (a server-side REPL-style application that interacts with the user via raw process.stdin and console.log), the CSV file is created, but it's always empty and no error/warning seems to be thrown.
I have debugged the REPL code extensively with node-debug and the Chrome dev tools: I am sure the event handlers of the WriteStream are working properly: no 'error', all 'data' seems to be handled, 'close' runs, and the promise resolves as expected.
Nevertheless, in the latter case the file is always 0 bytes. I have checked several Q&A's and cannot find anything as specific as this.
My questions are:
Could I be missing some errors? How can I be sure I'm tracking everything related to the file write?
In which direction could I intensify my investigation? What other setup could help me isolate the problem?
Since the problem may be due to the presence of process.stdin in the equation, what is a way to create a simple, lightweight interaction with the user without having to write a webapp?
I am working on Windows 7. Node 6.9.4, npm 3.5.3, csv-write-stream 2.0.0.
I managed to fix this issue in two ways, either by:
resolving the promise upon the 'finish' event of the FileWriteStream rather than on the 'end' event of the CSVWriteStream (see the sketch after this list)
removing the process.exit() I was using at the end of my operations with process.stdin (this implies that this tutorial page may be in need of some corrections)
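A minimal sketch of the first fix, assuming the writer from csv-write-stream is piped into a plain fs write stream (the dumpToCsv name and surrounding structure are illustrative, not taken from the gist):

var fs = require('fs');
var csvWriter = require('csv-write-stream');
function dumpToCsv(rows, path) {
  return new Promise(function (resolve, reject) {
    var writer = csvWriter();
    var file = fs.createWriteStream(path);
    // Resolve only when the *file* stream has flushed everything to disk,
    // not when the CSV stream emits 'end'.
    file.on('finish', resolve);
    file.on('error', reject);
    writer.pipe(file);
    rows.forEach(function (row) { writer.write(row); });
    writer.end();
  });
}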
I've recently run into a question like this:
If you call execlp inside a function and there is more code below it, under what circumstances will that code be executed?
For example:
execlp("ps","ps","-u","username",(char*) NULL);
some more code below --> what does its execution depend on?
Does it depend on the exit status of the program executed by execlp? Or will it be executed anyway because execlp does its stuff independently?
Thanks in advance
The only way that would happen is if execlp itself failed. Once the new program has been exec'd, it does not matter if that program succeeds or fails--it's already too late to go back to the original program instructions.
I have a node.js application which connects to a server every day.
On this server, a new version of the app may be available. If so, the installed app downloads it, checks that the download is complete, and then stops itself by calling a shell script, which replaces the old app with the new one and starts it.
I'm struggling to start the update script.
I know I can start it with the child_process.execFile function, which I do:
var execF = require('child_process').execFile;
// Directory of the currently running script (everything up to the last '/').
var PATH = process.argv[1].substr(0, process.argv[1].lastIndexOf('/') + 1),
    filename = 'newapp.js';
execF(PATH + 'up.sh', [PATH + filename], function () { console.log('done'); });
up.sh, for now is just:
cat $1 > /home/pi/test
I get 'done' printed in the console, but test isn't created.
I know that execFile creates a subprocess; is that what prevents the script from doing this?
If I can get this to start, I know I only need some cp commands in the script to have my app auto-update.
EDIT:
Started as usual (calling the script from the console), it works well. Is there a reason for the script not to execute when called from node.js?
I'd suggest that you consider using a module that can do this for you automatically rather than duplicating the effort. Or, at least use their technique as inspiration for your own requirements.
One example is: https://github.com/edwardhotchkiss/always
It's simple to use:
Usage: always <app.js>
=> always app.js
Then, anytime your code changes, the app is killed, and restarted.
As you can see in the source, it uses the Monitor class to watch a specified file, and then uses spawn to kick it off (and of course kill to end the process when a change has happened).
Unfortunately, the [always] output is currently hardcoded into the code, but I'm sure it would be a simple change/pull request to make it optional/configurable. If the author doesn't accept your change, you could just modify a local copy of the code (as it's quite simple overall).
Make sure when you spawn/exec the process you are executing the shell that will be processing the script and not the script itself.
Should be something like
execF("/usr/bin/sh", [PATH + 'up.sh', PATH + filename]);