My Restify server is dependent on a database connection which is established through an asynchronous function and a callback. I'm hosting it on Azure, where the server turns off after a period of inactivity, but when it wakes up, it restarts Node.js.
This is causing an error where a request wakes up the server, which crashes because the DB connection hasn't been established yet. What's the best way to handle this?
I found a solution that seems to work, although I didn't initially understand why:
You register your use() middleware immediately, but defer the listen() call until after the DB connection is established. Since the server doesn't accept connections until listen() is called, no request handler can run before the database is ready. Here's an example:
var restify = require('restify');
var sql = require('mssql'); // assuming the mssql driver, which provides sql.Connection

var server = restify.createServer({
    name: 'Example'
});

// Middleware can be registered right away; nothing runs until listen() is called.
server.use(restify.bodyParser());
server.use(restify.queryParser());

function initializeServer() {
    server.listen(80);
    console.log("The server is now active.");
}

// Only start accepting requests once the DB connection is up.
var database = new sql.Connection(function (err) {
    if (err) {
        console.log(err);
    } else {
        initializeServer();
    }
});
I have a browser that connects to the server using socket.io, with the transport restricted to websocket only.
I validate every socket connecting to my server using some simple logic, and this works fine.
Update 1
The problem occurs when the Internet connection fluctuates: the browser creates many websocket connections. (The issue is similar to https://github.com/socketio/socket.io/issues/430.)
Since the application requires one connection per browser, all but one of those sockets are invalidated by the validation code. But when I disconnect a single invalidated socket, all the sockets get disconnected.
Simplified Code
const soredis = require('socket.io-redis');
const io = require('socket.io')(3000);
io.adapter(soredis({ host: 'localhost', port: 6379 })); // note: host must be a string

io.sockets.on('connection', function (socket) {
    // give the client 15s to register; otherwise disconnect it
    setTimeout(function () {
        getSocket(socket.id, (err, res) => { // get data from redis
            if (err) { // if not found in redis
                socket.disconnect();
                return;
            }
        });
    }, 15000);

    socket.on('register', function (data, cb) {
        if (!data.key) {
            return cb("Error");
        }
        saveSocket(socket.id); // save to redis
        return cb();
    });
});
Any idea why this happens, and how can I resolve it?
I'm using the MEAN.JS framework (MongoDB, ExpressJS, AngularJS, and NodeJS) to build an app.
Using Socket.IO, I keep a flag in the MongoDB User schema indicating whether the user is connected or not.
// Connect
io.on('connection', function (socket) {
    connectToChat(true);

    // Disconnect
    socket.on('disconnect', function () {
        connectToChat(false);
    });

    function connectToChat(isConnect) {
        var user = socket.request.user;
        var numberOfSocketClients = Object.keys(io.sockets.adapter.rooms[user.id] || {}).length;
        // update on connect, or on the last disconnect for this user
        if (isConnect || numberOfSocketClients === 0) {
            User.findOne({ _id: mongoose.Types.ObjectId(user.id) })
                .exec(function (err, doc) {
                    if (!err && doc) {
                        doc.isConnected = isConnect;
                        doc.save(function (err) {
                            if (err) console.error(err);
                        });
                    }
                });
        }
    }
});
This works well in all cases except when the server is stopped... When the server restarts, all users should be marked as not connected by default, but some users remain marked as connected in the MongoDB User schema.
Okay, I understand that stopping the server is a rare case because I'm using Forever... But is there a good methodology for executing code in ExpressJS only when the server is (re)started? I tried putting the code in my server.js file, but every session executes that code, and that's not what I want.
Thank you very much!
Listen to the listening event of the underlying Node http server. It is fired on boot, once the server is ready to accept connections. Note that it is the server returned by app.listen() that emits the event, not the Express app itself:
var server = app.listen(3000);

server.on('listening', function () {
    // server ready to accept connections here
});
Where app is your express application.
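You could hang the question's one-time cleanup off that event, e.g. resetting every isConnected flag on boot. A minimal sketch, assuming the Mongoose User model from the question:
server.on('listening', function () {
    // mark everyone as disconnected after a (re)start
    User.update({ isConnected: true }, { isConnected: false }, { multi: true },
        function (err) {
            if (err) console.error(err);
        });
});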
I'm trying to build an application that has two components. There's a public-facing component and an administrative component. Each component will be hosted on a different server, but the two will access the same database. I need to set up the administrative component to be able to send a message to the public-facing component to query the database and send the information to all the public clients.
What I can't figure out is how to set up a connection between the two components. I'm using the standard HTTP server setup provided by Socket.io.
In each server:
var app = require('http').createServer(handler)
  , io = require('socket.io').listen(app)
  , fs = require('fs');

app.listen(80);

function handler(req, res) {
    fs.readFile(__dirname + '/index.html',
        function (err, data) {
            if (err) {
                res.writeHead(500);
                return res.end('Error loading index.html');
            }
            res.writeHead(200);
            res.end(data);
        });
}

io.sockets.on('connection', function (socket) {
    socket.emit('news', { hello: 'world' });
    socket.on('my other event', function (data) {
        console.log(data);
    });
});
And on each client:
<script src="/socket.io/socket.io.js"></script>
<script>
    var socket = io.connect('http://localhost');
    socket.on('news', function (data) {
        console.log(data);
        socket.emit('my other event', { my: 'data' });
    });
</script>
I've looked at this question but couldn't really follow the answers, and I think my situation is somewhat different. I just need one of the servers to be able to send a message to the other, while still sending/receiving messages to/from its own set of clients.
I'm brand new to Node (and thus, Socket), so some explanation would be incredibly helpful.
The easiest thing I could find is simply to create a client connection between the servers using socket.io-client. In my setup, the admin server connects to the client server:
var client = require("socket.io-client");
var socket = client.connect("other_server_hostname");
Actions on the admin side can then send messages to the admin server, and the admin server can use this client connection to forward information to the client server.
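The forwarding step itself is then just an emit over that client connection. A minimal sketch, with a made-up payload that matches the check shown below:
socket.emit('adminMessage', { someIdentifyingData: "data", message: "refresh clients" });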
On the client server, I created an 'adminMessage' handler and check some other information to verify where the message came from, like so:
io.sockets.on('connection', function (socket) {
    socket.on('adminMessage', function (data) {
        if (data.someIdentifyingData == "data") {
            // DO STUFF
        }
    });
});
I had the same problem, but instead of using socket.io-client I decided on a simpler approach (at least for me) using redis pub/sub; the result is pretty simple. My main problem with socket.io-client is that you need to know the server hosts around you and connect to each one to send messages.
You can take a look at my solution here: https://github.com/alissonperez/scalable-socket-io-server
With this solution you can have as many processes/servers as you want (e.g. behind an auto-scaling solution); you just use redis as a way to forward your messages between servers.
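The core of it is a plain redis pub/sub channel, roughly like this (a sketch using the node redis client; the channel name and message shape are placeholders):
var redis = require('redis');
var pub = redis.createClient(6379, 'localhost');
var sub = redis.createClient(6379, 'localhost');

// every server subscribes to the same channel...
sub.subscribe('broadcast');
sub.on('message', function (channel, message) {
    var data = JSON.parse(message);
    // ...and relays whatever arrives to its own connected clients
    io.sockets.emit(data.event, data.payload);
});

// any server (e.g. the admin one) publishes to reach them all
pub.publish('broadcast', JSON.stringify({ event: 'news', payload: { hello: 'world' } }));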
I have an http server created using:
var server = http.createServer()
I want to shut down the server. Presumably I'd do this by calling:
server.close()
However, this only prevents the server from accepting new connections; it does not close any that are still open. server.close() takes a callback, and that callback doesn't get executed until all open connections have actually disconnected. Is there a way to force-close everything?
The root of the problem for me is that I have Mocha tests that start an http server in their setup (beforeEach()) and shut it down in their teardown (afterEach()). Since just calling server.close() won't fully shut things down, the subsequent http.createServer() often results in an EADDRINUSE error. Waiting for close() to finish isn't an option either, since open connections might take a really long time to time out.
I need some way to force-close connections. I'm able to do this client-side by forcing all of my test connections to close, but I'd rather do it server-side, i.e. just tell the http server to hard-close all of its sockets.
You need to:
1. subscribe to the server's connection event and add opened sockets to an array
2. keep track of the open sockets by subscribing to their close event and removing closed ones from your array
3. call destroy() on all of the remaining open sockets when you need to terminate the server
You also have the option of running the server in a child process and exiting that process when you need to, as sketched below.
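A minimal sketch of the child-process variant, assuming the server lives in a hypothetical server.js:
var fork = require('child_process').fork;

// run the server in its own process
var child = fork(__dirname + '/server.js');

// later, when you want to terminate it, kill the whole process;
// the OS reclaims all of its sockets immediately
child.kill('SIGTERM');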
For reference, for others who stumble across this question: the https://github.com/isaacs/server-destroy library provides an easy way to destroy() a server (using the approach described by Ege).
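Usage is roughly this (a sketch; see the library's README for the exact API):
var enableDestroy = require('server-destroy');

var server = http.createServer(handler);
server.listen(80);
enableDestroy(server); // patches the server with a destroy() method

// later, to tear down the server and every open socket at once:
server.destroy();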
I usually use something similar to this:
var express = require('express');
var server = express();

/* a dummy route */
server.get('/', function (req, res) {
    res.send('Hello World!');
});

/* handle SIGTERM and SIGINT (ctrl-c) nicely */
process.once('SIGTERM', end);
process.once('SIGINT', end);

var listener = server.listen(8000, function (err) {
    if (err) throw err;

    var host = listener.address().address;
    var port = listener.address().port;
    console.log('Server listening at http://%s:%s', host, port);
});

var lastSocketKey = 0;
var socketMap = {};
listener.on('connection', function (socket) {
    /* generate a new, unique socket-key */
    var socketKey = ++lastSocketKey;
    /* add socket when it is connected */
    socketMap[socketKey] = socket;
    socket.on('close', function () {
        /* remove socket when it is closed */
        delete socketMap[socketKey];
    });
});

function end() {
    /* loop through all sockets and destroy them */
    Object.keys(socketMap).forEach(function (socketKey) {
        socketMap[socketKey].destroy();
    });

    /* after all the sockets are destroyed, we may close the server! */
    listener.close(function (err) {
        if (err) throw err;
        console.log('Server stopped');
        /* exit gracefully */
        process.exit(0);
    });
}
It's as Ege Özcan says: simply collect the sockets on the connection event, and when closing the server, destroy them.
I've rewritten the original answers using modern JS:
const http = require("http");

const server1 = http.createServer(/*....*/);
const server1Sockets = new Set();

// track every open socket
server1.on("connection", socket => {
    server1Sockets.add(socket);
    socket.on("close", () => {
        server1Sockets.delete(socket);
    });
});

// destroy whatever is still open when you want to force the server down
function destroySockets(sockets) {
    for (const socket of sockets.values()) {
        socket.destroy();
    }
}

destroySockets(server1Sockets);
My approach comes from this one, and it basically does what @Ege Özcan said.
The only addition is that I set up a route to switch off my server, because node wasn't getting the signals from my terminal ('SIGTERM' and 'SIGINT').
Well, node was getting the signals when I ran node whatever.js directly, but when that task was delegated to a script (like the 'start' script in package.json --> npm start) it failed to be switched off by Ctrl+C, so this approach worked for me.
Please note I am under Cygwin, where killing a server before this meant closing the terminal and reopening it.
Also note that I am using express for the routing.
var http = require('http');
var express = require('express');
var app = express();

app.get('/', function (req, res) {
    res.send('I am alive but if you want to kill me just go to /exit');
});

app.get('/exit', killserver);

var server = http.createServer(app).listen(3000, function () {
    console.log('Express server listening on port 3000');
    /*console.log(process);*/
});

// Maintain a hash of all connected sockets
var sockets = {}, nextSocketId = 0;
server.on('connection', function (socket) {
    // Add a newly connected socket
    var socketId = nextSocketId++;
    sockets[socketId] = socket;
    console.log('socket', socketId, 'opened');

    // Remove the socket when it closes
    socket.on('close', function () {
        console.log('socket', socketId, 'closed');
        delete sockets[socketId];
    });

    // Extend socket lifetime for demo purposes
    socket.setTimeout(4000);
});

// close the server and destroy all the open sockets
function killserver() {
    console.log("U killed me but I'll take my revenge soon!!");

    // Close the server
    server.close(function () { console.log('Server closed!'); });

    // Destroy all open sockets
    for (var socketId in sockets) {
        console.log('socket', socketId, 'destroyed');
        sockets[socketId].destroy();
    }
}
As of Node.js v18.2.0, there is now a built-in closeAllConnections() method on the http server.
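With it, the manual socket bookkeeping above is no longer necessary. A minimal sketch on a recent Node:
const http = require('node:http');

const server = http.createServer((req, res) => res.end('ok'));
server.listen(8000);

function shutdown() {
    server.close(() => console.log('Server closed'));
    // forcibly destroy every connection that is still open (Node >= 18.2.0)
    server.closeAllConnections();
}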
I'm writing a Node.js web server that uses a Postgres database. I used to connect on each new request like this:
app.get('/', function (req, res) {
    pg.connect(pgconnstring, function (err, client) {
        // ...
    });
});
But after a few requests, I noticed 'out of memory' errors on Heroku when trying to connect. My database has only 10 rows, so I don't see how this could be happening. All of my database access is of this form:
client.query('SELECT * FROM table', function (err, result) {
    if (err) {
        res.send(500, 'database error');
        return;
    }
    res.set('Content-Type', 'application/json');
    res.send(JSON.stringify({ data: result.rows.map(makeJSON) }));
});
Assuming that the memory error was due to having several persistent connections to the database, I switched to a style I saw in several node-postgres examples of connecting only once at the top of the file:
var client = new pg.Client(pgconnstring);
client.connect();

app.get('/', function (req, res) {
    // ...
});
But now my requests hang (indefinitely?) when I try to execute a query after the connection is disrupted. (I simulated it by killing a Postgres server and bringing it back up.)
So how do I do one of these?
1. Properly pool Postgres connections so that I can 'reconnect' every time without running out of memory.
2. Have the global client automatically reconnect after a network failure.
I'm assuming you're using the latest version of node-postgres, in which connection pooling has been greatly improved. You must now check the connection back into the pool, or you'll leak connections:
app.get('/', function (req, res) {
    pg.connect(pgconnstring, function (err, client, done) {
        // do some stuff
        done(); // return the client to the pool when you're finished
    });
});
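If I recall the legacy API correctly, passing an error to done() (e.g. done(err)) tells the pool to destroy that client instead of reusing it, which is what you want when a query fails.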
As for error handling on a global connection (#2, but I'd use the pool):
client.on('error', function (e) {
    client.connect(); // would check the error, etc. in a production app
});
The "missing" docs for all this is on the GitHub wiki.