I'm writing a Node.js web server that uses a Postgres database. I used to connect on each new request like this:
app.get('/', function (req, res) {
  pg.connect(pgconnstring, function (err, client) {
    // ...
  });
});
But after a few requests, I noticed 'out of memory' errors on Heroku when trying to connect. My database has only 10 rows, so I don't see how this could be happening. All of my database access is of this form:
client.query('SELECT * FROM table', function (err, result) {
  if (err) {
    res.send(500, 'database error');
    return;
  }
  res.set('Content-Type', 'application/json');
  res.send(JSON.stringify({ data: result.rows.map(makeJSON) }));
});
Assuming that the memory error was due to having several persistent connections to the database, I switched to a style I saw in several node-postgres examples of connecting only once at the top of the file:
var client = new pg.Client(pgconnstring);
client.connect();

app.get('/', function (req, res) {
  // ...
});
But now my requests hang (indefinitely?) when I try to execute a query after the connection is disrupted. (I simulated it by killing a Postgres server and bringing it back up.)
So how do I do one of these?
1. Properly pool Postgres connections so that I can 'reconnect' every time without running out of memory.
2. Have the global client automatically reconnect after a network failure.
I'm assuming you're using the latest version of node-postgres, in which the connection pooling has been greatly improved. You must now check the connection back into the pool, or you'll bleed the connections:
app.get('/', function (req, res) {
  pg.connect(pgconnstring, function (err, client, done) {
    // do some stuff
    done();
  });
});
As for error handling on a global connection (#2, but I'd use the pool):
client.on('error', function (e) {
  client.connect(); // would check the error, etc. in a production app
});
The "missing" docs for all this is on the GitHub wiki.
I don't know how to establish a connection to MongoDB for my Node.js server in AWS Lambda, using Serverless on AWS. I've noted my question in the handler function below.
The code looks something like this:
import express from "express";
import mongoose from "mongoose";
import dotenv from "dotenv";
import cookieParser from "cookie-parser";
import serverless from "serverless-http";
const PORT = 1234;
dotenv.config();
mongoose.connect(
  process.env.MONGO_URL,
  () => {
    console.log("connected to db");
  },
  (err) => {
    console.log({
      error: `Error connecting to db : ${err}`,
    });
  }
);
const app = express();
app.use(cookieParser());
app.use(express.json());
// this part has various routes
app.use("/api/auth", authRoutes);
app.use((err, req, res, next) => {
  const status = err.status || 500;
  const message = err.message || "Something went wrong";
  return res.status(status).json({
    success: false,
    status,
    message,
  });
});
app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});

export const handler = () => {
  // how to connect to mongodb here?
  return serverless(app);
};
Here handler is the AWS Lambda handler function. For each HTTP request I'm reading/writing data from/to my DB in some way. After checking the CloudWatch logs, it was clear that requests sent to the server time out because the connection to MongoDB hasn't been established. So how exactly do I use mongoose.connect here?
I tried doing this:
export const handler = () => {
  mongoose.connect(
    process.env.MONGO_URL,
    () => {
      console.log("connected to db");
    }
  );
  return serverless(app);
};
But it didn't work, possibly because it's asynchronous. So I'm not sure how to make this connection here.
EDIT:
One thing I realised was that the database server's network access list contained only my IP, because that's how I set it up initially. So I changed it to allow access from anywhere for now and made the following minor changes:
const connect_to_db = () => {
  mongoose
    .connect(process.env.MONGO_URL)
    .then(() => {
      console.log("Connected to DB");
    })
    .catch((err) => {
      throw err;
    });
};

app.listen(PORT, () => {
  connect_to_db();
  console.log(`Server listening on port ${PORT}`);
});
Now I can see "Connected to DB" in the logs, but the requests sent still time out after 15 seconds (the timeout limit currently set).
What am I doing wrong?
So I did some more digging and asked around the community. A few things made me understand what I was doing wrong:
It appeared I wasn't connecting the db and my app's response handling together. My app was handling the request fine, and my db was connecting fine, but there was nothing tying them together. It's supposed to be simple:
Request comes in > App waits until db connection has been established > App handles request > App returns response.
Second, calling app.listen was another problem in my code. Calling listen keeps the process open for incoming requests, and it is ultimately killed by Lambda on timeout.
In a serverless environment, you don't start a process that listens for requests. Instead, the listening is done by AWS API Gateway (which I've used to have my Lambda handle HTTP requests), and it knows to send request information to the Lambda handler for processing and returning a response. The handler function is designed to be run on each request and return a response.
So I tried adding await mongoose.connect(process.env.MONGO_URL); to all my methods before doing any operation on the database, and it started sending responses as expected. This was getting repetitive, so I created a simple middleware, which helped me avoid a lot of repetitive code.
app.use(async (req, res, next) => {
  try {
    await mongoose.connect(process.env.MONGO_URL);
    console.log("CONNECTED TO DB SUCCESSFULLY");
    next();
  } catch (err) {
    next(err);
  }
});
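One further refinement worth mentioning, purely as an assumption about warm Lambda invocations and not part of the original fix: you can cache the connection so repeat requests on a warm container skip the connect round trip, for example by checking mongoose.connection.readyState:

// a minimal sketch: reuse the mongoose connection on warm invocations
let connectPromise = null;

app.use(async (req, res, next) => {
  try {
    // readyState 1 means the connection is already open
    if (mongoose.connection.readyState !== 1) {
      connectPromise = connectPromise || mongoose.connect(process.env.MONGO_URL);
      await connectPromise;
    }
    next();
  } catch (err) {
    connectPromise = null; // allow a retry on the next request
    next(err);
  }
});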
Another small but important change: I was assigning the Lambda handler incorrectly.
Instead of this:
export const handler = () => {
  return serverless(app);
};
I did this:
export const handler = serverless(app);
That's it, I suppose; these changes fixed my Express server on Lambda. If anything I've said is wrong in any way, just let me know.
I have set up a Primus WebSocket service as below.
var http = require('http');
var server = http.createServer();
var Primus = require('primus');
var primus = new Primus(server, {
  transformer: 'websockets',
  pathname: 'ws'
});

primus.on('connection', function connection(spark) {
  console.log("client has connected");
  spark.write("Herro Client, I am Server");
  spark.on('data', function (data) {
    console.log('PRINTED FROM SERVER:', data);
    spark.write('receive ' + data);
  });
  spark.on('error', function (data) {
    console.log('PRINTED FROM SERVER:', data);
    spark.write('receive ' + data);
  });
});

server.listen(5431);
console.log("Server has started listening");
It works fine. In the above code, I use spark.write to send response messages to users. Now I want to convert it to be used in a middleware.
The code becomes as below:
primus.use('name', function (req, res, next) {
  doStuff();
});
In the doStuff() method, how can I get the spark instance to send a message back to clients?
The readme is slightly vague about this, but middleware only deals with the HTTP request.
Primus has two ways of extending the functionality. We have plugins but also support middleware. And there is an important difference between these. The middleware layers allows you to modify the incoming requests before they are passed in to the transformers. Plugins allow you to modify and interact with the sparks. The middleware layer is only run for the requests that are handled by Primus.
To achieve what you want, you'll have to create a plugin. It's not much more complicated than middleware.
primus.plugin('herro', {
  server: function (primus, options) {
    primus.on('connection', function (spark) {
      spark.write('Herro Client, I am Server');
    });
  },
  client: function (primus, options) {}
});
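If the goal is to answer individual messages rather than just greet on connect, the same server hook hands you the spark, so you can attach the data listener there as well. This is a sketch extending the example above, not taken from the original answer:

primus.plugin('echo', {
  server: function (primus, options) {
    primus.on('connection', function (spark) {
      spark.on('data', function (data) {
        // reply to the client that sent the message
        spark.write('receive ' + data);
      });
    });
  },
  client: function (primus, options) {}
});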
For more info, see the Plugins section of the readme.
My Restify server is dependent on a database connection which is established through an asynchronous function and a callback. I'm hosting it on Azure, where the server turns off after a period of inactivity, but when it wakes up, it restarts Node.js.
This is causing an error where a request wakes up the server, which crashes because the DB connection hasn't been established yet. What's the best way to handle this?
I found a solution that seems to work although I don't understand why:
You start by immediately calling any use functions in Restify and then later calling the listen function after the DB is connected. Here's an example:
var server = restify.createServer({
  name: 'Example',
});
server.use(restify.bodyParser());
server.use(restify.queryParser());

function initializeServer() {
  server.listen(80);
  console.log("The server is now active.");
}

var database = new sql.Connection(function (err) {
  if (err) {
    console.log(err);
  } else {
    initializeServer();
  }
});
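An equivalent shape, if your database client exposes a promise-based connect (connectToDatabase below is a hypothetical helper, not part of the original answer), is to chain listen off the connection promise:

connectToDatabase() // hypothetical promise-returning connect helper
  .then(function () {
    server.listen(80);
    console.log("The server is now active.");
  })
  .catch(function (err) {
    console.log(err);
  });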
I have an HTTP server, and every time it gets a POST request, it is supposed to insert the data into MongoDB. This server is supposed to be constantly running and accepting thousands of requests in any given second.
How can I maximize the efficiency and speed of my code? I feel like my current code is not fast enough and, furthermore, wastes CPU power by creating a new db connection every time it receives a request.
My current layout
var server = http.createServer(function (req, res) {
  req.on('data', function (chunk) {
    // Receive my data
  });
  req.on('end', function () {
    // JSON parse my data
    var db = new Db('test', new Server("111.111.11.111", 27017, { auto_reconnect: false }), { safe: true });
    db.open(function (err, db) {
      // Insert data into DB
      db.close();
    });
  });
}); // End Http server

server.listen(8999);
I have tried replacing db.open with MongoClient.connect, but that considerably slows down processing and I don't know why. In this case, the older version of the MongoDB native driver for Node.js seems to work faster.
You'll want to shift to an approach where you open a large pool of connections during startup that are shared by your HTTP request handlers. To tweak the MongoDB connection pool size to suit whatever scalability needs you have, pass an options parameter to your MongoClient.connect call.
var options = {
  server: {
    // The number of pooled connections available to your app.
    poolSize: 100
  }
};

mongodb.MongoClient.connect('mongodb://111.111.11.111/test', options,
  function (err, db) {
    var server = http.createServer(function (req, res) {
      // Your req.on calls go here, directly using db rather than creating
      // new instances. Don't close db either.
    });
    server.listen(8999);
  }
);
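For completeness, here's a sketch of what the request handler could look like while reusing that shared db handle. The 'events' collection name and the JSON body handling are assumptions for illustration, not part of the original answer:

var server = http.createServer(function (req, res) {
  var body = '';
  req.on('data', function (chunk) {
    body += chunk; // accumulate the request body
  });
  req.on('end', function () {
    var doc = JSON.parse(body);
    // 'events' is a placeholder collection name
    db.collection('events').insert(doc, { safe: true }, function (err, result) {
      if (err) {
        res.writeHead(500);
        return res.end('insert failed');
      }
      res.writeHead(200);
      res.end('ok');
    });
  });
});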
Not sure if this would be better, but you can encapsulate the server inside the db.open callback, thereby persisting the connection:
var db = new Db('test', new Server("111.111.11.111", 27017, { auto_reconnect: false }), { safe: true });
db.open(function (err, db) {
  var server = http.createServer(function (req, res) {
    // now do stuff through the constantly open connection
  });
  server.listen(8999);
  // note: don't call db.close() here, or the shared connection is closed
  // before any request can use it
});
I tried to use connect-domain for error handling. In most cases it works fine, but it fails with a redis callback. How can I fix this?
Here's my app:
var http = require('http');
var express = require('express');
var connectDomain = require('connect-domain');
var redis = require("redis").createClient();

var app = express();
app.use(connectDomain());

app.get('/', function (req, res) {
  throw new Error("Handler OK");
});

app.get('/error', function (req, res) {
  redis.get("akey", function (err, reply) {
    throw new Error("Handler error");
    res.end("ok");
  });
});

app.use(function (err, req, res, next) {
  res.end(err.message);
});

http.createServer(app).listen(8989, function () {
  console.log("Express server started");
});
I'm using Node.js 0.8.16, and all modules are the latest versions.
Not sure if the domain should be catching that or not - but you can capture redis errors by setting up an error handler, like this:
// handle redis connection temporarily going down without app crashing
redisClient.on("error", function(err) {
console.error("Error connecting to redis", err);
});
While the connection is broken, your handler will keep getting called as redis tries to reconnect. If the reconnect is eventually successful, everything will come back online on its own.
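Beyond the client-level error handler, a pattern that avoids throwing inside the redis callback in the first place (a sketch, not from the original answer) is to check the callback's err argument and forward it to your existing Express error middleware:

app.get('/error', function (req, res, next) {
  redis.get("akey", function (err, reply) {
    // hand the error to the error-handling middleware instead of throwing
    if (err) {
      return next(err);
    }
    res.end("ok");
  });
});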
You can also try https://www.npmjs.org/package/wait-for-redis. It ensures that clients can wait for the server to be up in cases where clients start early.