I'm working with elasticsearch-js (NodeJS) and everything works just fine as long as ElasticSearch is running. However, I'd like to know that my connection is alive before trying to invoke one of the client's methods. I'm doing things in a somewhat synchronous fashion, but only for the purpose of performance testing (e.g., check that I have an empty index to work in, ingest some data, query the data). Looking at a snippet like this:
var elasticClient = new elasticsearch.Client({
    host: ((options.host || 'localhost') + ':' + (options.port || '9200'))
});
// Note, I already have promise handling implemented, omitting it for brevity though
var promise = elasticClient.indices.delete({index: "_all"});
/// ...
Is there some option I can send in on the client config to make it fail fast, or some test I can perform on the client to make sure it's up before invoking delete?
Update: 2015-05-22
I'm not sure if this is correct, but perhaps attempting to get client stats is reasonable?
var getStats = elasticClient.nodes.stats();
getStats.then(function (o) {
    console.log(o);
})
.catch(function (e) {
    console.log(e);
    throw e;
});
Via node-debug, I am seeing the promise rejected when ElasticSearch is down / inaccessible with: "Error: No Living connections". When it does connect, o in my then handler seems to have details about connection state. Would this approach be correct or is there a preferred way to check connection viability?
Getting stats is a fairly heavy call just to ensure your client is connected. You should use ping instead; see the second example at https://github.com/elastic/elasticsearch-js#examples
We are using ping too, right after instantiating the elasticsearch-js client connection on startup.
// example from above link
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
    host: 'localhost:9200',
    log: 'trace'
});
client.ping({
    // ping usually has a 3000ms timeout
    requestTimeout: Infinity,
    // undocumented params are appended to the query string
    hello: "elasticsearch!"
}, function (error) {
    if (error) {
        console.trace('elasticsearch cluster is down!');
    } else {
        console.log('All is well');
    }
});
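Since the question already uses promises: elasticsearch-js methods, including ping, also return a promise when you omit the callback, so a promise-flavored check along these lines should work too (a sketch using the elasticClient from the question):
elasticClient.ping({ requestTimeout: 3000 })
    .then(function () {
        console.log('elasticsearch cluster is reachable');
    })
    .catch(function (e) {
        // e.g. "No Living connections" when the cluster is down or unreachable
        console.error('elasticsearch cluster is down!', e.message);
    });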
Related
I've been building a multi-tenant app where I create the database connections on the fly as soon as I resolve the tenant database connection string from the request that has just hit the server.
It's working as expected, but the connections keep adding up and never get disconnected.
From what I've been reading, it seems like mongoose.connect manages the connections but mongoose.createConnection doesn't; I'm not sure if my understanding is correct here.
I thought about creating my own connection pool with a map in memory and using the connection from the map if it already exists there, but I'm not sure if this is a good approach.
Does anyone know if there is an npm connection pool package already built for this issue? Or any implementation ideas?
I also thought about closing each connection manually when the request lifecycle ends, but it would hurt performance if I had to connect and disconnect from Mongo on every request instead of using a connection pool.
Here is the part of the code where I create the connection; nothing special here, because I'm always creating a new connection.
// ... Resolve connection string from request
let tentantConn;
try {
    // One connection per tenant
    tentantConn = await mongoose.createConnection(
        decrypt(tenant.dbUrl),
        {
            useNewUrlParser: true,
            useUnifiedTopology: true
        });
} catch (e) {
    req.log.info({ message: `Unauthorized - Error connecting to tenant database: ${currentHostname}`, error: e.message });
    return reply.status(401).send({ message: `Unauthorized - Error connecting to tenant database: ${currentHostname}`, error: e.message });
}
// ...
The connection pool is implemented on the driver level:
https://github.com/mongodb/node-mongodb-native/blob/main/src/cmap/connection_pool.ts
By default it opens 5 connections per server. You can change the pool size, but you cannot disable it.
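The pool size can be tuned through the options you already pass to createConnection. Treat this as a sketch: depending on your mongoose/driver version the option is named poolSize (older 3.x driver) or maxPoolSize (4.x+ driver), so check which one your setup expects:
// Sketch: capping the driver pool for each tenant connection
tentantConn = await mongoose.createConnection(decrypt(tenant.dbUrl), {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    maxPoolSize: 5 // or poolSize with older driver versions
});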
Now, the terminology is a bit confusing, as a single MongoDB server / cluster can have multiple databases. They share the same connection string, so it's the same 5 connections from the pool regardless of the number of databases.
Assuming your tenants have individual clusters and connect to different MongoDB servers, in order to close these connections you need to explicitly call
await mongoose.connection.close()
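Note that mongoose.connection refers to the default connection; for a connection created with createConnection (as in the question), close that specific connection object instead:
// Close the per-tenant connection created with createConnection
await tentantConn.close();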
It took me a few days to get back to this issue, but I was able to tweak my code, and the connection count on MongoDB Atlas now seems stable. I'm not thrilled about using a global variable to fix this, but it solves my issue for now.
async function switchTenantConnection(aConnStr, aDbName, aAsyncOpenCallback) {
    // Loose equality covers both null and undefined, so the first call creates the connection
    const hasConn = global.connectionPoolTest != null;
    if (!hasConn) {
        const tentantConn = await getTenantConnectionFromEncryptStr(aConnStr);
        if (aAsyncOpenCallback) {
            tentantConn.once('open', aAsyncOpenCallback);
        }
        // Drop the cached connection so the next request creates a fresh one
        tentantConn.once('disconnected', async function () {
            global.connectionPoolTest = null;
        });
        tentantConn.once('error', async function () {
            global.connectionPoolTest = null;
        });
        global.connectionPoolTest = { dbName: aDbName, connection: tentantConn, createdAt: new Date() };
        return tentantConn;
    }
    // Re-use the cached connection, switching to the requested tenant database
    return global.connectionPoolTest.connection.useDb(aDbName);
}
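For completeness, a hypothetical usage sketch of the function above; the tenant fields and songSchema are assumptions, only meant to show how the cached connection would be consumed per request:
// Hypothetical per-request usage of the cached tenant connection
const conn = await switchTenantConnection(tenant.dbUrl, tenant.dbName, null);
const Song = conn.model('Song', songSchema); // songSchema is an assumed schema
const songs = await Song.find({});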
I keep getting the following error response from node when trying to run a read call to rally:
Error: getaddrinfo ENOTFOUND rally1.rallydev.com rally1.rallydev.com:443
I am using the Rally Node SDK, and node v7. I am on a local machine. It is successfully reaching and logging the 'releaseoid' before the 'try'.
I feel like I am not specifying http (which I was doing before; now I have completely commented out the server, letting the SDK default it), but it keeps giving back that error. I could not find (or possibly did not understand) other general Node guidance that addresses this situation. I am also not clear where port 443 is coming from, as I am not specifying it. Is the SDK adding it?
If I specify the server address without http:
server: 'rally1.rallydev.com',
I still get an error, but this time:
Error: Invalid URI "rally1.rallydev.com/slm/webservice/v2.0null
I am new to Node and not sure if I am having a problem with Node or the Rally Node SDK.
Code below.
var rally = require('rally');
var rallyApi = rally({
    apiKey: 'xx',
    apiVersion: 'v2.0',
    //server: 'rally1.rallydev.com',
    requestOptions: {
        headers: {
            'X-RallyIntegrationName' : 'Gather release information after webhook',
            'X-RallyIntegrationVendor' : 'XX',
            'X-RallyIntegrationVersion' : '0.9'
        }
    }
});
// exports.getReleaseDetails = function(releaseoid, result) {
//     console.log('get release details being successfully called');
// }
module.exports = {
    getReleaseDetails: async (releaseoid) => {
        console.log(releaseoid);
        try {
            let res = await rallyApi.get({
                ref: 'release/' + releaseoid,
                fetch: [
                    'Name',
                    'Notes',
                    'Release Date'
                ]
                //requestOptions: {}
            });
            res = await res;
            console.log(res);
        } catch (e) {
            console.error('something went wrong');
            console.log(e);
        }
    }
}
That mostly looks right. I haven't tried to use async/await with the node toolkit yet; it would be interesting to see if that works. It should, since get and all the other methods return promises in addition to handling standard node callback syntax.
But anyway, I think the issue you're having is a missing leading / on your ref.
rallyApi.get({
    ref: '/release/' + releaseOid
});
Give that a shot?
As for the network errors, is it possible that you're behind a proxy on your network? You're right though, https://rally1.rallydev.com is the default server so you shouldn't have to specify it. FYI, 443 is just the default port for https traffic.
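If a proxy turns out to be the issue, the toolkit's requestOptions are passed through to the underlying request library, so a sketch like this (the proxy URL is an assumption you would replace) may help:
var rallyApi = rally({
    apiKey: 'xx',
    apiVersion: 'v2.0',
    requestOptions: {
        proxy: 'http://your-proxy-host:8080', // assumption: your network's proxy URL
        headers: {
            'X-RallyIntegrationName': 'Gather release information after webhook'
        }
    }
});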
I am writing a program to work with RabbitMQ via AMQP on Heroku.
The relevant part of my program has this code:
console.log( 'APP START' );
//Connect to db and start
global.controllers.db.opendb(dbsettings, function (error, db) {
    if (!error) {
        global.db = db;
        console.log( 'DB: connection to database established.' );
        var con = amqp.createConnection( { url: global.queue.producers.host } );
        con.on( 'ready', function() {
            console.log( 'mq: producers connection ready.' );
        });
    }
});
As I understood from the documentation, I should get only one message upon successfully connecting to the queue service.
Is there any particular reason, then, why my output shows the line mq: producers connection ready. many times?
The amqp-node library automatically reconnects either when the connection is lost or when an error occurs in your code. I can't see anything wrong with your code above, but if any exceptions are thrown in your rabbit-related code (also in other places, such as connecting and subscribing to queues) amqp-node will try to reestablish your connection - and keep getting the same exception and keep retrying.
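To see why the reconnects happen, attach an error handler, and use once('ready', ...) if you only want the log line for the first successful connection. Reconnect behavior can also be tuned through node-amqp's implementation options; treat the option names below as a sketch and check the version of amqp you're using:
var con = amqp.createConnection(
    { url: global.queue.producers.host },
    { reconnect: true } // implementation options; set to false to disable automatic reconnects
);
con.once('ready', function () {
    console.log('mq: producers connection ready.'); // fires only once
});
con.on('error', function (err) {
    console.error('mq: connection error', err); // usually reveals why it keeps reconnecting
});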
I have a Node.js application with a frontend app and a backend app; the backend manages the list and "pushes" an update to the frontend app, and that call triggers a list update so that all clients receive the correct list data.
The problem is on the backend side: when I press the button, I perform an AJAX call, and that AJAX call runs the following code (some operations trimmed out):
Lists.findOne({_id: active_settings.active_id}, function(error, lists_result) {
    var song_list = new Array();
    for (i = 0; i < lists_result.songs.length; i++) {
        song_list.push(lists_result.songs[i].ref);
    }
    Song.find({
        '_id': {$in: song_list}
    }, function(error, songs) {
        // DO STUFF WITH THE SONGS
        // UPDATE SETTINGS (code trimmed)
        active_settings.save(function(error, updated_settings) {
            list = {
                settings: updated_settings,
            };
            var io = require('socket.io-client');
            var socket = io.connect(config.app_url);
            socket.on('connect', function () {
                socket.emit('update_list', {key: config.socket_key});
            });
            response.json({
                status: true,
                list: list
            });
            response.end();
        });
    });
});
However, response.end never seems to work: the call keeps hanging. Furthermore, the list doesn't always get refreshed, so there is an issue with the socket.emit code as well. And the socket connection stays open, I assume, because the response isn't ended?
I have never done this on the server side before, so any help would be much appreciated. (active_settings etc. do exist.)
I see some issues that might or might not be causing your problems:
list isn't properly scoped, since you don't prefix it with var; essentially, you're creating a global variable which might get overwritten when there are multiple requests being handled;
response.json() calls .end() itself; it doesn't hurt to call response.end() again yourself, but not necessary;
since you're not closing the socket(.io) connection anywhere, it will probably always stay open;
it sounds more appropriate to not set up a new socket.io connection for each request, but to do it just once at app startup and re-use that (see the sketch below);
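A sketch of that last point, with hypothetical module and function names: create the socket.io-client connection once when the app starts and re-use it from your request handlers instead of connecting per request:
// notifier.js - hypothetical module, loaded once at app startup
var io = require('socket.io-client');
var config = require('./config'); // assumption: the same config used in the question
var socket = io.connect(config.app_url);

module.exports = {
    notifyListUpdate: function () {
        socket.emit('update_list', { key: config.socket_key });
    }
};
Your route handler then just calls notifyListUpdate() and finishes with response.json(...).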
I want to implement session handling over websockets via node.js and socket.io, without necessarily using cookies and while avoiding express.js, because there will also be clients not running in a browser environment. Has somebody done this already or got some experience with a proof of concept?
Before a socket.io connection is established, there is a handshake mechanism; by default, all properly formed incoming requests successfully shake hands. However, there is a method to inspect socket data during the handshake and return true or false, which accepts or denies the incoming connection request. Here is an example from the socket.io docs:
Because the handshakeData is stored after the authorization, you can actually add or remove data from this object.
var io = require('socket.io').listen(80);
io.configure(function () {
    io.set('authorization', function (handshakeData, callback) {
        // findDatabyIP is an async example function
        findDatabyIP(handshakeData.address.address, function (err, data) {
            if (err) return callback(err);
            if (data.authorized) {
                handshakeData.foo = 'bar';
                for (var prop in data) handshakeData[prop] = data[prop];
                callback(null, true);
            } else {
                callback(null, false);
            }
        });
    });
});
The first argument of the callback function is an error; you can provide a string here, which will automatically refuse the client if it is not set to null. The second argument is a boolean: whether you want to accept the incoming request or not.
This should be helpful, https://github.com/LearnBoost/socket.io/wiki/Authorizing
You could keep track of all session variables and uniquely identify users using a combination of the following, which are available in handshakeData:
{
headers: req.headers // <Object> the headers of the request
, time: (new Date) +'' // <String> date time of the connection
, address: socket.address() // <Object> remoteAddress and remotePort object
, xdomain: !!headers.origin // <Boolean> was it a cross domain request?
, secure: socket.secure // <Boolean> https connection
, issued: +date // <Number> EPOCH of when the handshake was created
, url: request.url // <String> the entrance path of the request
, query: data.query // <Object> the result of url.parse().query or a empty object
}
This example may help as well, just have your non-browser clients supply the information in a different way:
SocketIO + MySQL Authentication
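For non-browser clients, one common approach with this (pre-1.0) socket.io API is to have the client pass a token in the connection query string and check it inside the authorization handler via handshakeData.query; a sketch, where validateToken is an assumed helper:
// Server side, inside the authorization handler shown above
io.set('authorization', function (handshakeData, callback) {
    var token = handshakeData.query.token; // supplied by the client below
    validateToken(token, function (err, ok) { // validateToken is an assumption
        callback(err, !!ok);
    });
});

// Non-browser client: supply the token in the query string
var clientIo = require('socket.io-client');
var socket = clientIo.connect('http://localhost:80', { query: 'token=abc123' });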