I have a Node.js program that has three parts.
The first part executes and returns the IP and User.
The second part calls the following shell script:
#!/bin/bash
IP=$1
User=$2
ssh -i /Users/cer.pem ubuntu@$IP "cd /home/$User && ls -lth" >> /Users/outcome.txt
Upon successful creation of the outcome.txt, I have to continue doing other stuff in the third part.
I did find two relevant items, Run shell script with node.js (childProcess) and Node.js Shell Script And Arguments, but these aren't exactly solving my problem, as they don't address handling the synchronous nature of the shell script.
More info about how to really work with child_process would help. 1) How do I pass the IP and User to the second Node module that wraps the shell script? 2) How do I extract the directory listing from the output of the shell script?
Can anyone please help me here?
So if your goal is to list the contents of a folder on a remote system via ssh, be aware that this can be done several ways. Using child_process and a shell script is OK, but you could also use node-control or mscdex/ssh2 (and probably numerous others).
But in any case, when the remote work is being done, your node code will continue to execute asynchronously. Even if your script is synchronous, you have to write your control flow logic in your node.js code asynchronously.
Start with some basic nested callbacks.
function getIpAndUser(callback) {
    // get them, then call:
    callback(null, ip, user);
}

function listDirectory(ip, user, callback) {
    // do your child_process.exec here
    // eventually call:
    callback(null, output);
}

function thirdPart() {
}
// combine them together correctly:
getIpAndUser(function (error, ip, user) {
    if (error) {
        console.error(error);
        return;
    }
    listDirectory(ip, user, function (error, output) {
        if (error) {
            console.error(error);
            return;
        }
        thirdPart();
    });
});
Once you grok that, you can rewrite the control flow using something like async.js or a promises library if you so choose.
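For example, here is a minimal sketch of the same flow wrapped in native Promises (listDirectoryAsync is a hypothetical wrapper around the listDirectory callback function above, not something from your code):
// Wrap the callback-style function in a Promise so it can be chained.
function listDirectoryAsync(ip, user) {
    return new Promise(function (resolve, reject) {
        listDirectory(ip, user, function (error, output) {
            if (error) {
                return reject(error);
            }
            resolve(output);
        });
    });
}

// With getIpAndUser wrapped the same way, the three parts chain cleanly:
// getIpAndUserAsync()
//     .then(function (result) { return listDirectoryAsync(result.ip, result.user); })
//     .then(function (output) { thirdPart(); })
//     .catch(console.error);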
To address your further questions in the comments:
1) child_process.exec('list_dir.sh ' + ip + ' ' + user, callback)
Note you should eventually escape those arguments properly and probably switch to child_process.execFile, but start with that.
2) The same way the example in the child_process documentation does: read it from the stdout argument passed to the exec callback.
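To make that concrete, here is a minimal sketch of listDirectory using child_process.execFile (the script name ./list_dir.sh is an assumption, and the script would need to print the listing to stdout rather than only redirecting it to outcome.txt):
var execFile = require('child_process').execFile;

function listDirectory(ip, user, callback) {
    // execFile passes the arguments as an array, so no shell escaping is needed.
    execFile('./list_dir.sh', [ip, user], function (error, stdout, stderr) {
        if (error) {
            return callback(error);
        }
        // If the script prints the listing to stdout, it is available right here.
        callback(null, stdout);
    });
}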
I have made a Node.js script which checks for new entries in a MySQL database and uses socket.io to send data to the client's web browser. The script is meant to check for new entries approximately every 2 seconds. I am using Forever to keep the script running as this is hosted on a VPS.
I believe what's happening is that the for loop is looping infinitely (more on why I think that's the issue below). There are no error messages in the Forever-generated log file, and the script is "running" even when it has started to hang up. Specifically, when it hangs, the script stops accepting browser requests on port 8888 and doesn't serve the client-side socket.io JS files. I've done some troubleshooting and identified a few key components that may be causing this issue, but at the end of the day, I'm not sure why it's happening and can't seem to find a workaround.
Here is the relevant part of the code:
http.listen(8888, function () {
    console.log("Listening on 8888");
});

function checkEntry() {
    pool.getConnection(function (err, connection) {
        connection.query("SELECT * FROM `data_alert` WHERE processtime > " + (Math.floor(new Date() / 1000) - 172800) + " AND pushed IS NULL", function (err, rows) {
            connection.release();
            if (!err) {
                if (Object.keys(rows).length > 0) {
                    var x;
                    for (x = 0; x < Object.keys(rows).length; x++) {
                        connection.query("UPDATE `data_alert` SET pushed = 1 WHERE id = " + rows[x]['id'], function () {
                            connection.release();
                            io.emit('refresh feed', 'refresh');
                        });
                    }
                }
            }
        });
    });
    setTimeout(function () { checkEntry(); var d = new Date(); console.log(d.getTime()); }, 1000);
}
checkEntry();
Just a few interesting things I've discovered while troubleshooting...
This only happens when I run the script on Forever. It works completely fine if I run it from a shell and just leave my terminal open.
It starts to happen after 5-30 minutes of running the script; it does not immediately hang up on the first execution of the checkEntry function.
I originally tried this with setInterval instead of setTimeout, the issue has remained exactly the same.
If I remove the setInterval/setTimeout function and run the checkEntry function only once, it does not hang up.
If I take out the javascript for loop in the checkEntry function, the hang ups stop (but obviously, that for loop controls necessary functionality so I have to at least find another way of using it).
I've also tried using a for-in loop for the rows object and the performance is exactly the same.
Any ideas would be immensely helpful at this point. I started working with Node.js just recently so there may be a glaringly obvious reason that I'm missing here.
Thank you.
So I just wanted to come back to this and address what the issue was. It took me quite some time to figure out, and it can only be explained by my own inexperience. There is a section of my script where my code contained the following:
app.get("/", (request, response) => {
// Some code to log things to the console here.
});
The issue was that I was not sending a response. The new code looks as follows and has resolved my hang-up issues:
app.get("/", (request, response) => {
// Some code to log things to the console here.
response.send("OK");
});
The issue had nothing to do with the part of the code I presented in the initial question.
I have been following a series of tutorials on Node.js. To demonstrate child processes using exec, I have been given the code below in a file named exec.js. When I go to the command line, I type
node exec.js
and then nothing happens. Why would this be?
var exec = require("child_process").exec;
exec("open http://www.linkedin.com");
Your code works for me.
To diagnose and fix this, I would try:
In your terminal (not using Node), verify that open http://www.linkedin.com works at all. You should see your default web browser open to the appropriate URL. If that doesn't happen, then Node can't magically fix it. Most likely, you'll need to set the default browser.
Wrap the URL in single quotes just for good measure. This protects you in some cases where the URL contains certain reserved characters.
exec("open 'http://www.linkedin.com'");
Add a callback so you can see the command's output and verify it completed successfully.
exec("open 'http://www.linkedin.com'", function (err, stdout, stderr) {
console.log('err:', err);
console.log('stdout:', stdout);
console.log('stderr:', stderr);
});
The ideal solution is to use opn, a high-quality module that already exists for this exact purpose.
For Windows, use the start command instead of open (which is for Mac) as below:
exec("start http://www.linkedin.com");
I am developing a small web app where I want to give my users the ability to press a button and execute a script on the server. I want the output of the script to be displayed on the client.
I am just trying to understand the best way to implement functionality like that. Are there any libraries/modules that let you do something like that directly?
Possible ways
child_process
Take a look at node's child_process library. Link
I've used the .exec function before, as that runs a script to completion and captures the output. That sounds like what you are looking for; just replace their example command with yours.
var exec = require('child_process').exec,
    child;

child = exec('cat *.js bad_file | wc -l',
    function (error, stdout, stderr) {
        console.log('stdout: ' + stdout);
        console.log('stderr: ' + stderr);
        if (error !== null) {
            console.log('exec error: ' + error);
        }
    });
Wrap this call in a callback, and you'll get a response to the callback, and then to the client, when the script finishes. Be aware of the scripts you are running via an HTTP request. Open strings like "run any command" would be obvious security problems, and long-running scripts would be a problem for HTTP timeouts. You could implement a WebSockets-based solution and pipe results as they come (take a look at .spawn if you want to kick off a process and keep it running, and maybe interact with it later).
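As a rough sketch of that idea (Express, the route name, and the script paths below are my own assumptions, not part of your setup), an endpoint that only runs whitelisted scripts could look like this:
var express = require('express');
var execFile = require('child_process').execFile;

var app = express();

// Only allow scripts that are explicitly listed; never run raw client input.
var scripts = {
    'disk-usage': '/opt/scripts/disk_usage.sh',
    'uptime': '/opt/scripts/uptime.sh'
};

app.post('/run/:name', function (req, res) {
    var scriptPath = scripts[req.params.name];
    if (!scriptPath) {
        return res.status(404).send('Unknown script');
    }
    execFile(scriptPath, function (error, stdout, stderr) {
        if (error) {
            return res.status(500).send(stderr || String(error));
        }
        res.type('text/plain').send(stdout);
    });
});

app.listen(3000);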
I am not very familiar with Node.js, but I need some guidance with my task. Any help would be appreciated.
I have a Node.js file which runs from the command line.
I run it as filename arguments, and it performs some operation based on whatever arguments I have passed.
Now, I have an HTML page with different options for selecting different operations. Based on the selection, I want to pass my parameters to a file; that can be any local Node.js file which calls my other Node.js file internally. Is that possible? I am not sure what my approach should be.
I always have to run a different command from the terminal to execute each task, so my goal is to reduce that overhead: I want to select options from the UI and perform the operations through the Node.js file.
I was bored so I decided to try to answer this even though I'm not totally sure it's what you're asking. If you mean you just need to run a node script from a node web app and you normally run that script from the terminal, just require your script and run it programmatically.
Let's pretend this script you run looks like this:
// myscript.js
var task = process.argv[2];

if (!task) {
    console.log('Please provide a task.');
    return;
}

switch (task.toLowerCase()) {
    case 'task1':
        console.log('Performed Task 1');
        break;
    case 'task2':
        console.log('Performed Task 2');
        break;
    default:
        console.log('Unrecognized task.');
        break;
}
With that you'd normally do something like:
$ node myscript task1
Instead you could modify the script to look like this:
// Define our task logic as functions attached to exports.
// This allows our script to be required by other node apps.
exports.task1 = function () {
    console.log('Performed Task 1');
};

exports.task2 = function () {
    console.log('Performed Task 2');
};

// If process.argv has more than 2 items then we know
// this is running from the terminal and the third item
// is the task we want to run :)
if (process.argv.length > 2) {
    var task = process.argv[2];
    if (!task) {
        console.error('Please provide a task.');
        return;
    }
    // Check the 3rd command line argument. If it matches a
    // task name, invoke the related task function.
    if (exports.hasOwnProperty(task)) {
        exports[task]();
    } else {
        console.error('Unrecognized task.');
    }
}
Now you can run it from the terminal the same way:
$ node myscript task1
Or you can require it from an application, including a web application:
// app.js
var taskScript = require('./myscript.js');
taskScript.task1();
taskScript.task2();
Just remember that if a user invokes your task script from your web app via a button or something, the script will be running on the web server and not on the user's local machine. That should be obvious, but I thought I'd remind you anyway :)
EDIT
I already did the video so I'm not going to redo it, but I just discovered module.parent. The parent property is only populated if your script was loaded from another script via require. This is a better way to test if your script is being run directly from the terminal or not. The way I did it might have problems if you pass an argument in when you start your app.js file, such as --debug. It would try to run a task called "--debug" and then print out "Unrecognized task." to the console when you start your app.
I suggest changing this:
if (process.argv.length > 2) {
To this:
if (!module.parent) {
Reference: Can I know, in node.js, if my script is being run directly or being loaded by another script?
I am creating an Express.js library that uses the official Node.js driver for its MongoDB operations.
I'm currently in the process of writing unit tests and I want to simulate failures to access the database in order to ensure:
The library acknowledges failure cases (handles the error)
Makes the right error callbacks and fires the proper events.
I want the tests to run cross-platform, preferably without having to shut down or start the database with special parameters.
Looking at the reference for commands, the sleep command seems to do almost exactly what I want, but the waiting time (in seconds) is pretty long, plus it is flagged as for internal use only and you need to start the database with a special parameter for it to work. The forceerror command looks like another good one, but again, it's listed as for internal use only and the description is vague to say the least.
I am wondering if there is any recommended (preferably not overly hackish) way of doing this.
This requires superuser privileges, because the Node process executing this script sends signals to a MongoDB process it did not spawn, but it is the best I have found so far to simulate unresponsiveness:
var MongoDB = require('mongodb');

MongoDB.MongoClient.connect("mongodb://localhost:27017/SomeDB", {'server': {'socketOptions': {'connectTimeoutMS': 50, 'socketTimeoutMS': 50}}}, function(Err, DB) {
    if (Err)
    {
        console.log(Err);
    }
    else
    {
        DB.command({'serverStatus': 1}, function(Err, Result) {
            if (Err)
            {
                console.log(Err);
            }
            else
            {
                // Suspend the MongoDB process to simulate an unresponsive database.
                process.kill(Result.pid, 'SIGSTOP');
                // Put testing logic to test unresponsiveness here.
                // Resume the MongoDB process once the tests are done.
                process.kill(Result.pid, 'SIGCONT');
                DB.close();
            }
        });
    }
});
Edit:
If your testing logic crashes on Linux, you can resume the MongoDB process manually on the shell by executing:
kill -CONT PID
Where PID is the process id of the MongoDB process.