I am trying to run ansible as a spawned process from NodeJS.
I have tried everything I can find on the internet and on StackOverflow to prevent ansible from doing Strict Host Key Checking when logging in via SSH, however ansible is just ignoring all the settings.
I have set the environment variable
export ANSIBLE_HOST_KEY_CHECKING=False
I have added to `~/.ssh/config`:
Host *
    StrictHostKeyChecking no
I have added ANSIBLE_HOST_KEY_CHECKING=False to my nodejs .env file.
As an example of the command I am running, here is some of my code.
function runPlaybook(playbook, ip) {
  return new Promise((resolve, reject) => {
    // Inventory flag and its value are separate array elements; spawn does not use a shell.
    let ansible = spawn('ansible-playbook', ['-i', `${ip},`, playbook]);
    ansible.stderr.on('data', function (data) {
      console.log('stderr: ' + data.toString());
    });
    ansible.stdout.on('data', function (data) {
      let stdoutData = data.toString();
      console.log('stdout: ', stdoutData);
      if (stdoutData.includes(`ok: [${ip}]`)) {
        console.log('Clearing Interval');
        clearInterval(interval); // `interval` is defined elsewhere in the module
      }
      if (stdoutData.startsWith(ip)) {
        const re =
          /^[0-9.]* *: *ok=([0-9])* *changed=([0-9])* *unreachable=([0-9])* *failed=([0-9])* *skipped=([0-9])* *rescued=([0-9])* *ignored=([0-9])*/gm;
        var m;
        while ((m = re.exec(stdoutData))) {
          const result = {
            ok: parseInt(m[1]),
            changed: parseInt(m[2]),
            unreachable: parseInt(m[3]),
            failed: parseInt(m[4]),
            skipped: parseInt(m[5]),
            rescued: parseInt(m[6]),
            ignored: parseInt(m[7]),
          };
          if (result.unreachable + result.failed == 0) {
            return resolve(result);
          } else {
            return reject(result);
          }
        }
      }
    });
  });
}
Just can't think of anything else to try.
I have fixed the problem by using the correct key file. Embarrassing, I know, but the error I was getting from SSH was very similar, if not identical, to the one you get when the host key has changed. In this case, it had: I was deleting VMs on my virtual server and recreating them, and since I must be the only person working at the moment, the same IP address kept being reissued to new VMs. At the same time, due to a refactor, I had also changed the key I was pointing to.
As an upside, feel free to use the regex in my example code to parse the Ansible recap into a results object.
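For reference, here is that regex applied to a standalone recap line (the host and counts below are invented for illustration):

```javascript
// A hypothetical PLAY RECAP line as printed by ansible-playbook:
const line =
  '192.168.1.10               : ok=5    changed=2    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0';

const re =
  /^[0-9.]* *: *ok=([0-9])* *changed=([0-9])* *unreachable=([0-9])* *failed=([0-9])* *skipped=([0-9])* *rescued=([0-9])* *ignored=([0-9])*/gm;

const m = re.exec(line);
const result = {
  ok: parseInt(m[1], 10),
  changed: parseInt(m[2], 10),
  unreachable: parseInt(m[3], 10),
  failed: parseInt(m[4], 10),
  skipped: parseInt(m[5], 10),
  rescued: parseInt(m[6], 10),
  ignored: parseInt(m[7], 10),
};

console.log(result); // { ok: 5, changed: 2, unreachable: 0, failed: 0, skipped: 1, rescued: 0, ignored: 0 }
```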
Related
I've built a custom resource using Node. The resource code looks okay, but once compiled and placed into Concourse, the "check" in the resource is failing.
Concourse is not giving any useful information except "Unexpected end of JSON".
I'd like to replicate exactly how Concourse calls the build, but I don't know how to find out what it calls.
My assumption was /opt/resource/check, which has #!/usr/bin/env node, so just calling this should be sufficient, but I do not get the same behavior.
What I can determine is that it's hanging on the code that fetches the params passed via stdin, shown below:
export async function retrieveRequestFromStdin<T extends any>(): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    let inputRaw = "";
    process.stdin.on("data", (chunk) => {
      process.stdout.write(chunk);
      inputRaw += chunk;
    });
    process.stdout.write(inputRaw);
    process.stdin.on("end", async () => {
      try {
        const json = JSON.parse(inputRaw) as T;
        if (!json.source.server_url.endsWith("/")) {
          // Forgive input errors and append missing slash if the user did not honor the docs.
          json.source.server_url = `${json.source.server_url}/`;
        }
        resolve(json);
      } catch (e) {
        reject(e);
      }
    });
  });
}
This is the check code:
(async () => {
  try {
    const request: ICheckRequest = await retrieveRequestFromStdin<ICheckRequest>();
    // Removed unnecessary items
  } catch (e) {
    stderr.write(e);
    process.exit(1);
  }
})();
How do I call a NodeJS script the same way as Concourse, so I can find out exactly what the problem is?
Note: I'm compiling JavaScript from TypeScript.
For a resource check, Concourse will run /opt/resource/check, passing the source information from the resource configuration on stdin. For example, if you configured the pipeline with:
resources:
- name: resource-name
  type: your-new-node-resource-type
  source:
    server_url: https://someurl.com
the first time check runs, your script would receive this on stdin:
{"source": {"server_url": "https://someurl.com"}}
Your script is then expected to return the "current" version of the resource on stdout. In the example below, I've named the key version-example, but you can name that anything you want. You can also add other keys too if you want. This allows you flexibility in uniquely identifying a version.
[{"version-example": "46"}]
Subsequent calls from Concourse to your check script will also include the latest version it knows about, so continuing the example, the next call will pass this to your script:
{"source": {"server_url": "https://someurl.com"},
"version": {"version-example": "46"}}
Your check script should now return (on stdout) an array of any new versions that it finds in order:
[{"version-example": "47"},
{"version-example": "48"},
{"version-example": "49"}]
For more details, you can check out the official docs, which should also be useful when implementing the in and out scripts.
Taking a quick look at your code, it seems that it's writing stdin to stdout twice, which results in the Unexpected end of JSON message, e.g.:
{"source": {"server_url": "https://someurl.com"}}
{"source": {"server_url": "https://someurl.com"}}
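To replicate the call locally, you can pipe the payload in yourself, e.g. `echo '{"source": {"server_url": "https://someurl.com"}}' | /opt/resource/check`. Below is a sketch of the stdin handling with the extra stdout writes removed, in plain JavaScript for brevity; the JSON handling is pulled into its own function so it can be exercised directly with a sample payload:

```javascript
// Parse a raw request string, forgiving a missing trailing slash on server_url.
function parseRequest(raw) {
  const json = JSON.parse(raw);
  if (!json.source.server_url.endsWith('/')) {
    json.source.server_url = `${json.source.server_url}/`;
  }
  return json;
}

// Buffer stdin and parse it once the stream ends — no writes to stdout here.
function retrieveRequestFromStdin() {
  return new Promise((resolve, reject) => {
    let inputRaw = '';
    process.stdin.on('data', (chunk) => {
      inputRaw += chunk;
    });
    process.stdin.on('end', () => {
      try {
        resolve(parseRequest(inputRaw));
      } catch (e) {
        reject(e);
      }
    });
  });
}

// Simulating the payload Concourse would pass on stdin:
const req = parseRequest('{"source": {"server_url": "https://someurl.com"}}');
console.log(req.source.server_url); // https://someurl.com/
```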
So I've set up a Node.js backend that is used to move physical items in our warehouse. The database hosting our software is Oracle, and our older version of this web application is written in PHP, which works fine but has some weird glitches and is slow as all hell.
The node.js backend works fine for moving single items, but once I try moving a box (which will then move anything from 20-100 items), the entire backend stops at the .commit() part.
Anyone have any clue as to why this happens, and what I can do to remedy it? Suggestions for troubleshooting would be most welcome as well!
Code:
function move(barcode, location) {
  var p = new Promise(function(resolve, reject) {
    console.log("Started");
    exports.findOwner(barcode).then(function(data) {
      console.log("Got data");
      // console.log(barcode);
      var type = data[0];
      var info = data[1];
      var sql;
      sql = "update pitems set location = '" + location + "' where barcode = '" + barcode + "' and status = 0"; // status = 0 is goods in store.
      ora.norway.getConnection(function(e, conn) {
        if (e) {
          reject({"status": 0, "msg": "Failed to get connection", "error": e});
        } else {
          console.log("Got connection");
          conn.execute(sql, [], {}, function(err, results) {
            console.log("Executed");
            if (err) {
              conn.release();
              reject({"status": 0, "msg": "Failed to execute sql" + sql, "error": err});
            } else {
              console.log("Execute was successful"); // This is the last message logged to the console.
              conn.commit(function(e) {
                conn.release(function(err) {
                  console.log("Failed to release");
                });
                if (e) {
                  console.log("Failed to commit!");
                  reject({"status": 0, "msg": "Failed to commit sql" + sql, "error": e});
                } else {
                  console.log("derp6");
                  resolve({"status": 1, "msg": "Relocated " + results.rowsAffected + " items."});
                }
              });
            }
          });
        }
      });
    });
  });
  return p;
}
Please be aware that your code is open to SQL injection vulnerabilities. Even more so since you posted it online. ;)
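To illustrate the point, here's what the concatenated statement becomes when the barcode contains a quote (the values are invented for illustration):

```javascript
const location = 'A1';
const barcode = "x' or 1=1 --"; // hypothetical malicious input

const sql =
  "update pitems set location = '" + location +
  "' where barcode = '" + barcode + "' and status = 0";

console.log(sql);
// update pitems set location = 'A1' where barcode = 'x' or 1=1 --' and status = 0
// The predicate now matches every row, and the rest of the statement is commented out.
```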
I recommend updating your statement to something like this:
update pitems
set location = :location
where barcode = :barcode
and status = 0
Then update your conn.execute as follows:
conn.execute(
  sql,
  {
    location: location,
    barcode: barcode
  },
  {},
  function(err, results) {...}
);
Oracle automatically escapes bind variables. But there's another advantage in that you'll avoid hard parses when the values of the bind variables change.
Also, I'm happy to explore the issue you're encountering more with commit. But it would really help if you could provide a reproducible test case that I could run on my end.
I think this is an issue at the database level; an update on multiple items without providing an ID may not be allowed.
You should do two things:
1) For debugging purposes, add console.log(JSON.stringify(error)) wherever you expect an error. Then you'll see the error that your database reports back.
2) At the line that says
conn.release(function(err) {
  console.log("Failed to release");
})
Check if err is defined:
conn.release(function(err) {
  if (err) {
    console.log("Failed to release");
  } else {
    console.log("conn released");
  }
})
This sounds similar to an issue that I'm having. Node.js hangs while updating an Oracle DB using the oracledb library. It looks like when there are 167 updates to make, it works fine; the program hangs when I have 168 updates. The structure of the program goes like this:
With 168 records from the local sqlite db, for each record returned in the sqlite callback: 1) get an Oracle connection; 2) do two updates to two tables (one update to each table, with autoCommit on the latter execute). All the first updates complete, but none can start the second update; they just hang there. With 167 records, they run to completion.
The strange thing observed is that none of the 168 could get started on the second update (they all finished the first), so none had a chance to go forward to commit. It looks like they are all queued up in some way.
I have a node.js script that does some database queries for me and works fine. The script is starting to get a bit longer so I thought I might start to break it up and thought moving the database connection code out to another file made sense.
Below is the code that I have moved into another file and then included with a require statement.
The issue I'm having is with the 'exports' commands at the bottom of the script. It appears the function 'dbHandleDisconnectUsers()' exports fine; however, the variable 'dbConnectionUsers' doesn't.
The script errors refer to missing methods of the object 'dbConnectionUsers' (I hope that's the correct terminology) and give me the impression I'm not really passing a complete object. Note: I would include the exact errors, but I'm not in front of the machine.
var mysql = require('/usr/lib/node_modules/mysql');

// Users Database Configuration
var dbConnectionUsers;
var dbConfigurationUsers = ({
  host     : 'xxxxx',
  user     : 'xxxxx',
  password : 'xxxxx',
  database : 'xxxxxx',
  timezone : 'Asia/Singapore'
});

// Users Database Connection & Re-Connection
function dbHandleDisconnectUsers() {
  dbConnectionUsers = mysql.createConnection(dbConfigurationUsers);
  dbConnectionUsers.connect(function(err) {
    if (err) {
      console.log('Users Error Connecting to Database:', err);
    } else {
      dbConnectionUsers.query("SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;");
      dbConnectionUsers.query("SET SESSION sql_mode = 'ANSI';");
      dbConnectionUsers.query("SET NAMES UTF8;");
      dbConnectionUsers.query("SET time_zone='Asia/Singapore';");
    }
  });
  dbConnectionUsers.on('error', function(err) {
    console.log('Users Database Protocol Connection Lost: ', err);
    if (err.code === 'PROTOCOL_CONNECTION_LOST') {
      dbHandleDisconnectUsers();
    } else {
      throw err;
    }
  });
}
dbHandleDisconnectUsers();

exports.dbHandleDisconnectUsers() = dbHandleDisconnectUsers();
exports.dbConnectionUsers = dbConnectionUsers;
In the core script I have this require statement:
var database = require('database-connect.js');
And I refer to the function/variable as
database.dbHandleDisconnectUsers()
database.dbConnectionUsers
Ignoring the syntax error that everybody else has pointed out in exports.dbHandleDisconnectUsers() = dbHandleDisconnectUsers(), I will point out that dbConnectionUsers is uninitialized.
JavaScript is a pass-by-copy-of-reference language, therefore these lines:
var dbConnectionUsers;
exports.dbConnectionUsers = dbConnectionUsers;
are essentially identical to
exports.dbConnectionUsers = undefined;
Even though you set dbConnectionUsers later, you are not affecting exports.dbConnectionUsers because it holds a copy of the original dbConnectionUsers reference.
It's similar, in primitive data types, to:
var x = 5;
var y = x;
x = 1;
console.log(x); // 1
console.log(y); // 5
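The same thing plays out with exports; here is a standalone sketch that simulates the module with a plain object so the whole thing runs in one file:

```javascript
// Simulating `module.exports` without a separate file:
const moduleExports = {};

let dbConnectionUsers;                                // undefined at "export" time
moduleExports.dbConnectionUsers = dbConnectionUsers;  // copies the current value: undefined

dbConnectionUsers = { connected: true };              // later reassignment inside the "module"

console.log(moduleExports.dbConnectionUsers); // undefined — the export never sees the new object
```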
For details on how require and module.exports work, I will refer you to a recent answer I posted on the same topic:
Behavior of require in node.js
It's odd that your function is working but your other variable isn't exporting. This shouldn't be the case.
When you export functions, you generally don't want to export them as evaluated functions (i.e. aFunction()). The only time you might is if you want to export whatever that function returns, or if you want to export an instance of a constructor function as part of your module.
The other thing, which is really odd and is mentioned in a comment above, is that you are trying to assign a value to exports.dbHandleDisconnectUsers(), which should be undefined and throw an error.
So, in other words: your code should not look like exports.whatever() = whatever().
Instead you should export both functions and other properties like this:
exports.dbHandleDisconnectUsers = dbHandleDisconnectUsers; // no evaluation ()
exports.dbConnectionUsers = dbConnectionUsers;
I don't know if this is the only thing wrong here, but this is definitely one thing that might be causing an execution error or two :)
Also, taking into consideration what Brandon has pointed out as well, you are initially exporting something undefined. But in your script, you are overwriting the reference anyway.
What you should do instead is make a new object reference, which is persistent and has a property in it that you can update, i.e.:
var dbConnection = {users: null};
exports.dbConnection = dbConnection;
Then when you run your function:
function dbHandleDisconnectUsers() {
  dbConnection.users = mysql.createConnection(dbConfigurationUsers);
  dbConnection.users.connect(function(err) {
    if (err) {
      console.log('Users Error Connecting to Database:', err);
    } else {
      dbConnection.users.query("SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;");
      dbConnection.users.query("SET SESSION sql_mode = 'ANSI';");
      dbConnection.users.query("SET NAMES UTF8;");
      dbConnection.users.query("SET time_zone='Asia/Singapore';");
    }
  });
  dbConnection.users.on('error', function(err) {
    console.log('Users Database Protocol Connection Lost: ', err);
    if (err.code === 'PROTOCOL_CONNECTION_LOST') {
      dbHandleDisconnectUsers();
    } else {
      throw err;
    }
  });
}
This way, the object reference of dbConnection is never overwritten.
You will then refer to your users db connection in your module as:
database.dbConnection.users
Your function should still work as you were intending on using it before with:
database.dbHandleDisconnectUsers();
I've installed NodeJS on a Raspberry Pi. Is there a way to check from Node.js whether the Pi is connected to the internet?
A quick and dirty way is to check if Node can resolve www.google.com:
require('dns').resolve('www.google.com', function(err) {
  if (err) {
    console.log("No connection");
  } else {
    console.log("Connected");
  }
});
This isn't entirely foolproof, since your RaspPi can be connected to the Internet yet unable to resolve www.google.com for some reason, and you might also want to check err.type to distinguish between 'unable to resolve' and 'cannot connect to a nameserver, so the connection might be down'.
While robertklep's solution works, it is far from being the best choice for this. It takes about 3 minutes for dns.resolve to timeout and give an error if you don't have an internet connection, while dns.lookup responds almost instantly with the error ENOTFOUND.
So I made this function:
function checkInternet(cb) {
  require('dns').lookup('google.com', function(err) {
    if (err && err.code == "ENOTFOUND") {
      cb(false);
    } else {
      cb(true);
    }
  })
}

// example usage:
checkInternet(function(isConnected) {
  if (isConnected) {
    // connected to the internet
  } else {
    // not connected to the internet
  }
});
This is by far the fastest way of checking for internet connectivity and it avoids all errors that are not related to internet connectivity.
I had to build something similar in a NodeJS app some time ago. The way I did it was to first use the networkInterfaces() function in the os module and then check whether one or more interfaces have a non-internal IP.
If that was true, then I used exec() to start ping with a well-defined server (I like Google's DNS servers). By checking the return value of exec(), I know whether ping was successful or not. I adjusted the number of pings based on the interface type. Forking a process introduces some overhead, but since this test is not performed too frequently in my app, I can afford it. Also, by using ping and IP addresses, you don't depend on DNS being configured. Here is an example:
var exec = require('child_process').exec, child;

child = exec('ping -c 1 128.39.36.96', function(error, stdout, stderr) {
  if (error !== null)
    console.log("Not available")
  else
    console.log("Available")
});
It's not as foolproof as possible, but it gets the job done:
var dns = require('dns');

dns.lookupService('8.8.8.8', 53, function(err, hostname, service) {
  console.log(hostname, service);
  // google-public-dns-a.google.com domain
});
Just use a simple if (err) and treat the response adequately. :)
P.S.: Please don't bother telling me 8.8.8.8 is not a name to be resolved; it's just a lookup for a highly available DNS server from Google. The intention is to check connectivity, not name resolution.
Here is a one liner: (Node 10.6+)
let isConnected = !!await require('dns').promises.resolve('google.com').catch(()=>{});
Since I was concerned with DNS cache in other solutions here, I tried an actual connectivity test using http2.
I think this is the best way to test the internet connection as it doesn't send much data and also doesn't rely on DNS resolving alone and it is quite fast.
Note that the http2 module was added in Node v8.4.0.
const http2 = require('http2');

function isConnected() {
  return new Promise((resolve) => {
    const client = http2.connect('https://www.google.com');
    client.on('connect', () => {
      resolve(true);
      client.destroy();
    });
    client.on('error', () => {
      resolve(false);
      client.destroy();
    });
  });
}

isConnected().then(console.log);
Edit: I made this into a package if anyone is interested.
As of 2019 you can use DNS promises lookup.
NOTE This API is experimental.
const dns = require('dns').promises;

exports.checkInternet = function checkInternet() {
  return dns.lookup('google.com')
    .then(() => true)
    .catch(() => false);
};
I found a great and simple npm tool for detecting an internet connection that looks more reliable.
First you need to install it:
npm i check-internet-connected
Then you can call it as follows:
const checkInternetConnected = require('check-internet-connected');

const config = {
  timeout: 5000,        // timeout connecting to each server (A and AAAA), each try (default 5000)
  retries: 5,           // number of retries to do before failing (default 5)
  domain: 'google.com'  // the domain to check the DNS record of
};

checkInternetConnected(config)
  .then(() => {
    console.log("Internet available");
  })
  .catch((error) => {
    console.log("No internet", error);
  });
The internet-available package is another very simple way to check whether an internet connection is available:

var internetAvailable = require("internet-available");

internetAvailable().then(function() {
  console.log("Internet available");
}).catch(function() {
  console.log("No internet");
});

For more, see internet-available: https://www.npmjs.com/package/internet-available
It's a very simple function that does not import anything; it uses the built-in fetch (global since Node 18) and only runs when called. It does not monitor loss of connection, and unlike some answers that rely on a DNS cache, the result is accurate because nothing is cached.

const connected = fetch("https://google.com", {
  method: "GET",
  cache: "no-cache",
  headers: { "Content-Type": "application/json" },
  referrerPolicy: "no-referrer",
}).then(() => true)
  .catch(() => false);

Call it using await (make sure you're inside an async function, or you'll get a promise):

console.log(await connected);
I have a node app with mongo server on an amazon ec2 instance. It works great, but I just added a new API call and every time I call it, the server freezes and I cannot access/ssh into it for several hours. While this is happening, my server goes down which makes the app that relies on it unusable and my users angry...
This code works perfectly on my localhost, but as soon as I run it on my server it freezes. My thoughts are that it may be crashing mongo? I have no idea why this would happen...
If anyone has any ideas what could be going wrong, please let me know.
Node is using Express. The send_error function performs a res.send({some error}). db.CommentModel returns mongoose.model('comment', Comment);
in app.js
app.get('/puzzle/comment/:id', auth.restrict, puzzle.getComments);
in the file which defines getComments
exports.getComments = function(req, res)
{
  var userID = _u.stripNonAlphaNum(req.params.id);
  var CommentModel = db.CommentModel;
  CommentModel.find({user: userID}, function(e, comments) {
    if (e)
    {
      err.send_error(err.DB_ERROR, res);
    }
    else if (!comments)
    {
      err.send_error(err.DB_ERROR, res);
    }
    else if (comments.length == 0)
    {
      res.send([]);
    }
    else
    {
      var commentIDs = [];
      for (var i = 0; i < comments.length; i++)
      {
        commentIDs.push({_id: comments[i].puzzle});
      }
      var TargetModel = pApp.findPuzzleModel(_u.stripNonAlphaNum(req.apiKey));
      TargetModel.find({removed: false, $or: commentIDs}, function(e, puzzles) {
        if (e)
        {
          err.send_error(err.DB_ERROR, res);
        }
        else if (!puzzles)
        {
          err.send_error(err.DB_ERROR, res);
        }
        else
        {
          res.send(puzzles);
        }
      });
    }
  });
}
It sounds like your query is causing something on your server (potentially mongo) to consume a very large amount of CPU - as this is commonly what causes the issue you have seen with SSH access.
You should try reading over the logs of your mongo instance and seeing if there are any long running queries.
MongoDB provides an internal profiler for examining long-running commands. Try setting the profiling level to 1, running the command, and examining the logfile output.
More details on the profiler are available at http://www.mongodb.org/display/DOCS/Database+Profiler
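For example, from the mongo shell you can enable the profiler and then inspect the slowest recent operations; a sketch (the 100 ms threshold is an arbitrary choice for illustration):

```javascript
// Run in the mongo shell against the relevant database:
db.setProfilingLevel(1, 100)                        // profile operations slower than 100 ms
db.system.profile.find().sort({ ts: -1 }).limit(5)  // show the five most recent profiled ops
```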