Let me preface this by saying that this is an area of Nuxt I have literally zero experience in, and there seem to be very few resources about it online; even Nuxt's docs are lacking. So feel free to correct any incorrect assumptions I make in this post.
I'm running a WebSocket listener in a module hook and a regular Express API server as a server middleware. On their own they seem to work without issue, but what I can't seem to get right is accessing the WebSocket listener from one of the API endpoints, so that when the API gets hit I can notify the clients subscribed to the WebSocket. Let me show you.
Here's my websocket listener hook module:
// #/modules/ws.js
const WebSocket = require('ws')

const wss = new WebSocket.Server({ noServer: true })

wss.on('connection', ws => {
  console.log('client connected', wss.clients.size)

  ws.on('message', data => {
    console.log('received:', data)
    data = JSON.parse(data)
    if (data.type === 'connection') {
      ws.clientId = data.id
      return ws.send(JSON.stringify({ type: 'connection', status: 'success', id: ws.clientId }))
    }
  })
})

export default function () {
  this.nuxt.hook('listen', server => {
    server.on('upgrade', (request, socket, head) => {
      wss.handleUpgrade(request, socket, head, ws => {
        wss.emit('connection', ws)
      })
    })
  })
}
And then here's my api server:
// #/server-middleware/index.js
const express = require('express')

const app = express()

app.post('/test-route', (req, res) => {
  console.log('connection received')
  // here I'd now like to access wss to notify subscribed clients
  res.json({ status: 'success' })
})

module.exports = app
And then wiring it all up in nuxt.config.js
...
modules: ['#/modules/ws'],
serverMiddleware: ['#/server-middleware/index'],
...
Firstly, I'm worried that I'm going about this all the wrong way and that I should somehow be sticking the API server into modules/ws.js, so that everything is accessible right there. But I have no idea how to take over Express in a module such that it still works, so if you have advice here, I'd super appreciate it.
Short of that, I'd just like to access the wss object that was instantiated in modules/ws.js from server-middleware/index.js, because then theoretically I should still be able to push messages to subscribed clients, right? But how do I do that? I tried straight require-ing the exported default function from ws.js, but that just returns the function, not the wss instance. I also tried looking for a this.nuxt object in the middleware, same as in the hook module, but that just returns undefined.
So now I'm out of things to try, and with how foreign all of this is to me, I can't even conceive of anything else that might be remotely related. Any advice about any of this will be greatly appreciated.
Nuxt.js version 2.15.7, FWIW
One possible way is to move the WebSocket server into a separate file that can be imported wherever it's needed later.
Example:
// #/modules/ws/wss.js
const WebSocket = require('ws')

const wss = new WebSocket.Server({ noServer: true })

wss.on('connection', ws => {
  ...
})

module.exports = {
  handleUpgrade (request, socket, head) {
    wss.handleUpgrade(request, socket, head, ws => {
      wss.emit('connection', ws)
    })
  },
  getActiveClients () {
    return [...wss.clients].filter(
      client => client.readyState === WebSocket.OPEN
    )
  }
}
// #/modules/ws/index.js
const wss = require('./wss')

export default function () {
  this.nuxt.hook('listen', server => {
    server.on('upgrade', wss.handleUpgrade)
  })
}
Then use it in your server middleware:
// #/server-middleware/index.js
const express = require('express')
const wss = require('../modules/ws/wss')

const app = express()

app.get('/broadcast', (req, res) => {
  for (const client of wss.getActiveClients()) {
    client.send('broadcast!')
  }
  res.json({ status: 'success' })
})

module.exports = app
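For completeness, here's a rough sketch of what a browser-side client could look like (this isn't from the original setup; it assumes the Nuxt dev server runs on localhost:3000 and follows the { type: 'connection' } handshake from the question's module):

// e.g. in a Nuxt plugin or page — a sketch only
const socket = new WebSocket('ws://localhost:3000')

socket.addEventListener('open', () => {
  // mirrors the { type: 'connection' } handshake from the question's 'message' handler
  socket.send(JSON.stringify({ type: 'connection', id: 'some-client-id' }))
})

socket.addEventListener('message', event => {
  console.log('from server:', event.data)
})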
Related
I've just started to explore Redis. I want to cache some data using Redis. I set up the Redis connection in the server.ts file and export it from there, import it in my controller function, and try to use set and get, but this error comes up for both get and set.
TypeError: Cannot read properties of undefined (reading 'get')
// server.ts ---> Redis connection part
export const client = redis.createClient({
  url: "redis://127.0.0.1:6379",
});

client.connect();

client.on("error", (err) => console.log("Redis Client Error", err));

const app: Application = express();
// controller
import { client } from "../server";

const allProjects = async (req: Request, res: Response): Promise<void> => {
  const cachedProjects = await client.get("projects");
  if (cachedProjects) {
    res.status(200).json(JSON.parse(cachedProjects));
  }
  const projects = await Projects.find({});
  if (!projects) {
    res.status(400).send("No projects found");
    throw new Error("No projects found");
  }
  await client.set("projects", JSON.stringify(projects));
  res.status(200).json(projects);
};
My Redis server is running and I can set/get using the Redis CLI. I've made a mistake somewhere but can't find it.
I am using Node.js, Express.js and TypeScript.
This error is most likely because client is undefined. This suggests that your import from server.js isn't doing what you think it is. This could be because server.js is special from a Node.js point of view as it's the default file that loads when you run npm start. Might be better to put that code in its own file.
To test this, try doing a .get and .set in server.js after your connection is established and see if that works. If so, you've proved you can talk to Redis. The rest is debugging.
Also, you might want to refer to the example code on the Node Redis Github repo. I've added it here for your convenience:
import { createClient } from 'redis';
const client = createClient();
client.on('error', (err) => console.log('Redis Client Error', err));
await client.connect();
await client.set('key', 'value');
const value = await client.get('key');
Note that Alexey is right that you need to await the establishment of a connection. You should also add the error handler before doing so. That way if establishing a connection fails, you'll know about it.
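Putting that advice together, a dedicated client module is one way to do it. A minimal sketch (the file name redisClient.js is just illustrative):

// redisClient.js — keep the Redis client in its own module and export it
import { createClient } from 'redis';

export const client = createClient({ url: 'redis://127.0.0.1:6379' });

client.on('error', (err) => console.log('Redis Client Error', err));

// Top-level await (ESM): the module only finishes loading once the connection
// is established, so importers can call client.get/set straight away.
await client.connect();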
Wait until the client has connected to the server and then export it:
// server.ts ---> Redis connection part
const client = redis.createClient({
  url: "redis://127.0.0.1:6379",
});

await client.connect();

client.on("error", (err) => console.log("Redis Client Error", err));

const app: Application = express();

export { client };
I found the solution. Actually, I had written the Redis connection code in the wrong place. It should come after all the routes. Then it works fine.
app.use("/api/v1/uploads", imageUpload);
app.use("/api/v1/forgetPassword", forgetPassword);
app.use("/api/v1/resetPassword", resetPassword);
app.use("/uploads", express.static(path.join(dirname, "/uploads")));
client.connect();
client.on("connected", ()=> console.log("Redis connected"))
client.on("error", (err) => console.log("Redis Client Error", err));
app.use(notFound);
var server = app.listen(process.env.PORT || 3001, () =>
console.log(`Listening on port ${process.env.PORT}`)
);
I don't know how to establish a connection to MongoDB for my Node.js server running on AWS Lambda, deployed with the Serverless framework. I've marked my question in the handler function below.
The code looks something like this:
import express from "express";
import mongoose from "mongoose";
import dotenv from "dotenv";
import cookieParser from "cookie-parser";
import serverless from "serverless-http";
const PORT = 1234;
dotenv.config();
mongoose.connect(
  process.env.MONGO_URL,
  () => {
    console.log("connected to db");
  },
  (err) => {
    console.log({
      error: `Error connecting to db : ${err}`,
    });
  }
);
const app = express();
app.use(cookieParser());
app.use(express.json());
// this part has various routes
app.use("/api/auth", authRoutes);
app.use((err, req, res, next) => {
  const status = err.status || 500;
  const message = err.message || "Something went wrong";
  return res.status(status).json({
    success: false,
    status,
    message,
  });
});

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});
export const handler = () => {
  // how to connect to mongodb here?
  return serverless(app);
};
Here, handler is the AWS Lambda handler function. For each HTTP request I'm reading/writing data from/to my DB in some way. After checking the CloudWatch logs, it was clear that requests sent to the server time out because the connection to MongoDB hasn't been established. So how exactly do I use mongoose.connect here?
I tried doing this:
export const handler = () => {
  mongoose.connect(
    process.env.MONGO_URL,
    () => {
      console.log("connected to db");
    }
  );
  return serverless(app);
};
But it didn't work, possibly because it's asynchronous. So I'm not sure how to make the connection there.
EDIT:
One thing that I realised was that the database server's network access list had only my IP because that's how I set it up initially.
So I changed it to anywhere for now and made the following minor changes:
const connect_to_db = () => {
  mongoose
    .connect(process.env.MONGO_URL)
    .then(() => {
      console.log("Connected to DB");
    })
    .catch((err) => {
      throw err;
    });
};

app.listen(PORT, () => {
  connect_to_db();
  console.log(`Server listening on port ${PORT}`);
});
Now I can see "Connected to DB" in the logs, but the requests still time out after 15 seconds (the timeout limit currently set).
What am I doing wrong?
So I did some more digging and asked around the community. A few things made me understand what I was doing wrong:
First, I wasn't tying the DB connection and my app's request handling together. My app was handling requests fine, and my DB was connecting fine, but there was nothing tying them together. It's supposed to be simple:
Request comes in > app waits until the DB connection has been established > app handles the request > app returns a response.
Second, calling app.listen was another problem in my code. Calling listen keeps the process open for incoming requests, and it ultimately gets killed by Lambda on timeout.
In a serverless environment you don't start a process that listens for requests; instead, the listening is done by AWS API Gateway (which I use to have my Lambda handle HTTP requests), and it knows to send the request information to the Lambda handler for processing and returning a response. The handler function is designed to run on each request and return a response.
So I tried adding await mongoose.connect(process.env.MONGO_URL); to all my methods before doing any operation on the database, and it started sending responses as expected. This was getting repetitive, so I created a simple middleware, which helped me avoid a lot of repetitive code.
app.use(async (req, res, next) => {
  try {
    await mongoose.connect(process.env.MONGO_URL);
    console.log("CONNECTED TO DB SUCCESSFULLY");
    next();
  } catch (err) {
    next(err);
  }
});
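As a small refinement of that middleware (not part of the original fix), a common pattern on Lambda is to cache the connection promise so warm invocations don't redo the handshake. A sketch, assuming the same app and MONGO_URL as above:

// Hypothetical helper: connect once per container, reuse on warm invocations
let connPromise = null;

const connectOnce = () => {
  if (!connPromise) {
    connPromise = mongoose.connect(process.env.MONGO_URL);
  }
  return connPromise;
};

app.use(async (req, res, next) => {
  try {
    await connectOnce();
    next();
  } catch (err) {
    connPromise = null; // allow a retry on the next request
    next(err);
  }
});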
Another important but small change: I was assigning the Lambda handler incorrectly.
Instead of this:
export const handler = () => {
  return serverless(app);
};
I did this:
export const handler = serverless(app);
That's it I suppose, these changes fixed my express server on Lambda. If anything I've said is wrong in any way just let me know.
When I generate a webserver with express-generator, I get this folder structure:
bin/www
views/...
app.js
package.json
...
bin/www loads app.js like this:
var app = require('../app');
// ...
var server = http.createServer(app);
server.listen(port);
app.js creates the app like this:
var express = require('express')
var mongoose = require('mongoose')

mongoose.connect(process.env.DATABASE_URL).then(
  () => {
    debug('Database is connected')
  },
  err => {
    debug('An error has occurred with the database connection')
    process.exit(1)
  }
)

var app = express()

// Middlewares
app.use(/* some middleware 1 */)
app.use(/* some middleware 2 */)
app.use(/* some middleware 3 */)
app.use(/* some middleware ... */)

// Routes
app.get('/', function(req, res, next) {
  res.json({'message': 'Welcome to my website'})
})

app.get('/users', function(req, res, next) {
  Users.find({}).exec(function(err, users) {
    if (err) {
      res.json({'message': 'An error occurred'})
      return
    }
    res.json({'users': users})
  })
})

// ... other routes ...

module.exports = app
OK, this is the webserver boilerplate from express-generator. But if I want to start my app the right way, I must call process.send('ready') when my app is ready ("ready" meaning that all services are ready to use: database, Redis, scheduler...). Calling process.send('ready') when your app is ready is a best practice that lets other systems know the webserver app is ready; the signal can be consumed by a process manager or another supervising system.
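For illustration, a minimal sketch of that signal (the two-second timer stands in for real service start-up; with PM2 this pairs with wait_ready: true):

// A sketch: start listening and signal readiness only once services are up
const http = require('http');
const express = require('express');

const app = express();
app.get('/', (req, res) => res.send('OK'));

// Stand-in for a real service (database, Redis, ...) becoming ready
const dbReady = new Promise(resolve => setTimeout(resolve, 2000));

dbReady.then(() => {
  http.createServer(app).listen(3000, () => {
    // process.send only exists when the process was forked by a manager such as PM2
    if (process.send) process.send('ready');
    console.log('Server listening and ready');
  });
});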
The problem is that in bin/www, the app is started (server.listen() is called) without any guarantee that the database connection has been established. In other words, without any guarantee that the webserver app is ready to handle traffic.
I have read that starting the server in bin/www is a best practice.
The above example is not complete; consider an app with multiple services that must be started before accepting requests (for example: Redis, a job scheduler, a database connection, an FTP connection to another server...).
I have already checked some popular and advanced Node.js app boilerplates:
https://github.com/sahat/hackathon-starter
https://github.com/kriasoft/nodejs-api-starter
https://github.com/madhums/node-express-mongoose
https://github.com/icebob/vue-express-mongo-boilerplate
https://github.com/talyssonoc/node-api-boilerplate
But none of them take care of the ready state of the app before calling server.listen(port), which is what makes the webserver start listening for incoming requests. That surprises me a lot and I don't understand why.
Here is a code example of a webserver app with multiple services that we must wait for before accepting incoming requests:
bin/www:
var app = require('../app');
// ...
var server = http.createServer(app);
server.listen(port);
app.js:
var express = require('express')
var mongoose = require('mongoose')

// **************
// Service 1 : database
mongoose.connect(process.env.DATABASE_URL).then(
  () => {
    debug('Database is connected')
  },
  err => {
    debug('An error has occurred with the database connection')
    process.exit(1)
  }
)
// **************

// **************
// Service 2
// Simulate a service that takes 10 seconds to initialize
var myWeatherService = {}
setTimeout(function() {
  myWeatherService.getWeatherForTown = function(town, callback) {
    var weather = 'sun'
    callback(null, weather)
  }
}, 10*1000)
// **************

// **************
// Other services...
// **************
var app = express()

// Middlewares
app.use(/* some middleware 1 */)
app.use(/* some middleware 2 */)
app.use(/* some middleware 3 */)
app.use(/* some middleware ... */)

// Routes
app.get('/', function(req, res, next) {
  res.json({'message': 'Welcome to my website'})
})

app.get('/users', function(req, res, next) {
  Users.find({}).exec(function(err, users) {
    if (err) {
      res.json({'message': 'An error occurred'})
      return
    }
    res.json({'users': users})
  })
})

app.get('/getParisWeather', function(req, res, next) {
  myWeatherService.getWeatherForTown('Paris', function(err, weather) {
    if (err) {
      res.json({'message': 'An error occurred'})
      return
    }
    res.json({'town': 'Paris', 'weather': weather})
  })
})

// ... other routes ...

module.exports = app
If I start my app and then call localhost:port/getParisWeather before myWeatherService is initialized, I will get an error.
I have already thought about a solution: move each service declaration into bin/www and leave in app.js only the code that concerns the declaration of the Express app:
bin/www:
var app = require('../app');
var http = require('http');
var mongoose = require('mongoose')
var server = null;

Promise.resolve()
  .then(function () {
    return new Promise(function (resolve, reject) {
      // start service 1
      console.log('Service 1 is ready')
      resolve()
    })
  })
  .then(function () {
    return new Promise(function (resolve, reject) {
      // start service 2
      console.log('Service 2 is ready')
      resolve()
    })
  })
  .then(function () {
    return new Promise(function (resolve, reject) {
      // start other services...
      console.log('Other services are ready')
      resolve()
    })
  })
  .then(function () {
    return new Promise(function (resolve, reject) {
      server = http.createServer(app);
      server.listen(port, function () {
        console.log('Server started listening')
        resolve()
      });
    })
  })
  .catch(function (err) {
    console.error(err)
    process.exit(1)
  })
app.js:
var express = require('express')

var app = express()

// Middlewares
app.use(/* some middleware 1 */)
app.use(/* some middleware 2 */)
app.use(/* some middleware 3 */)
app.use(/* some middleware ... */)

// Routes
app.get('/', function(req, res, next) {
  res.json({'message': 'Welcome to my website'})
})

app.get('/users', function(req, res, next) {
  Users.find({}).exec(function(err, users) {
    if (err) {
      res.json({'message': 'An error occurred'})
      return
    }
    res.json({'users': users})
  })
})

app.get('/getParisWeather', function(req, res, next) {
  myWeatherService.getWeatherForTown('Paris', function(err, weather) {
    if (err) {
      res.json({'message': 'An error occurred'})
      return
    }
    res.json({'town': 'Paris', 'weather': weather})
  })
})

// ... other routes ...

module.exports = app
But I know that putting logic in bin/www is not a good practice; it should only contain the server start lines...
So, my question is: how should a webserver app be started in order to respect best practices?
I know that I can put everything in a single file and start the webserver at the end of that file; that is not my question. What I'm asking is how to do it the right way, following best practices.
The answer really depends on what your ecosystem is like. If you know all of the services that your app will use, then you might try checking them in an Express middleware function that runs before the routing code. You can use a set of promises to keep track of service readiness and a boolean to tell whether all services are ready. If all services are ready, the middleware function can call next(); if not, you might return an HTML page that tells the user the site is undergoing maintenance or isn't ready and that they should try back later. I can see you encapsulating all those promises in a middleware module that manages whether or not the services are ready, so as not to clutter your app.js or bin/www files.
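A minimal sketch of that gating middleware (the fake timers stand in for real service start-up; names are illustrative):

const express = require('express');

const app = express();

// Stand-ins for real service start-up (database, Redis, ...): each resolves when ready
const fakeService = (name, ms) =>
  new Promise(resolve => setTimeout(() => {
    console.log(`${name} is ready`);
    resolve();
  }, ms));

let servicesReady = false;
Promise.all([fakeService('database', 2000), fakeService('redis', 500)])
  .then(() => { servicesReady = true; });

// Gate every request until all services are up
app.use((req, res, next) => {
  if (servicesReady) return next();
  res.status(503).send('Service is starting up, please try again shortly');
});

app.get('/', (req, res) => res.json({ message: 'Welcome to my website' }));

app.listen(3000);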
Update:
If you want to prevent the server from listening until the services are ready, then you'll need to set up your own infrastructure in the same process or use something like supervisord to manage the processes. For example, you can set up a "startup" process that checks for the services. Once the services are ready, your startup process can fork and start the Node server, or create a child process that runs the server. You don't need to have any of the service-checking logic in your Node app; the assumption is that if it was started by the other process, then the services are already up and running. You can introduce a high-level process management system like supervisord, or keep it all in Node.js and use the child_process module. This approach helps keep the "startup" code separate from the "run/app" code.
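A rough sketch of such a startup process using child_process (the MongoDB port check is only one illustration of "checking for the services"):

// start.js — only launch the real server once a dependency answers on its port
const net = require('net');
const { spawn } = require('child_process');

const waitForPort = (port, host) => new Promise((resolve, reject) => {
  const socket = net.connect(port, host);
  socket.once('connect', () => { socket.end(); resolve(); });
  socket.once('error', reject);
});

waitForPort(27017, '127.0.0.1')   // e.g. MongoDB
  .then(() => {
    // Dependencies are reachable: hand off to the app, which can assume they are ready
    spawn('node', ['./bin/www'], { stdio: 'inherit' });
  })
  .catch(err => {
    console.error('Dependency not reachable:', err.message);
    process.exit(1);
  });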
Consider a simple express api which returns 'OK' on port 3000.
const express = require('express');

const app = express();

app.get('/', (req, res) => {
  res.send('OK');
});

app.listen(3000, () => {
  console.log("App listening on port 3000");
});
This becomes ready to accept connections as soon as the app is fired up. Now let's sleep for 5 seconds to fake the database getting ready, and fire a 'ready' event manually afterwards. We start listening for connections when the event is caught.
const express = require('express');

const sleep = time => new Promise(resolve => setTimeout(resolve, time));

const app = express();

app.get('/', (req, res) => {
  res.send('OK');
});

sleep(5000)
  .then(() => {
    process.emit("ready");
  });

process.on("ready", () => {
  app.listen(3000, () => {
    console.log("App listening on port 3000");
  });
});
Test it by going to http://localhost:3000. You'll see 'OK' only after 5 seconds.
Here, you don't have to emit the 'ready' event on the process object itself; a custom EventEmitter object will do the job as well. The process object inherits from EventEmitter and is available globally, so it's just a convenient way of listening for app-wide events.
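For instance, a version using a dedicated emitter might look like this (a sketch; the appEvents name is illustrative):

const express = require('express');
const EventEmitter = require('events');

const appEvents = new EventEmitter();   // app-wide event bus instead of `process`
const app = express();

app.get('/', (req, res) => {
  res.send('OK');
});

appEvents.on('ready', () => {
  app.listen(3000, () => {
    console.log('App listening on port 3000');
  });
});

// Pretend the database takes 5 seconds to come up
setTimeout(() => appEvents.emit('ready'), 5000);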
I am trying to build a two-way Socket.IO server/client connection. The server will remain behind one IP/domain and the client will be behind a different IP. The point is to notify me when the server goes offline, in case of a power outage or server failure. The issue I am having is that I am trying to secure the socket so that not just anyone can connect to it. Socket.IO has a server.origins function that returns the origin of the socket trying to connect. Their API documentation explains it like this:
io.origins((origin, callback) => {
  if (origin !== 'https://foo.example.com') {
    return callback('origin not allowed', false);
  }
  callback(null, true);
});
The issue I am having is that whenever I connect to the Socket.IO server with socket.io-client, the origin is always '*'.
Under potential drawbacks in their API docs it says:
"in some situations, when it is not possible to determine origin it may have value of *"
How do I get Socket.IO to see the IP that the socket connection request is coming from?
Once the connection is established I can use the socket information and see the IP where the socket lives, but by then the connection has already been made. I am trying to stop rogue connections.
# Server
const express = require('express');
const app = express();
const chalk = require('chalk')
const server = require('http').createServer(app);
const io = require('socket.io')(server);
const cors = require('cors');

const port = 4424;

app.use(cors());

io.origins((origin, callback) => {
  console.log(origin);
  if (origin !== '*') {
    return callback('origin not allowed', false);
  }
  callback(null, true);
});

io.on('connection', (socket) => {
  console.log('Client connected...');
  socket.on('join', (data) => {
    console.log(data);
    socket.emit('messages', 'Hello from server');
  });
})

server.listen(port, () => console.log(chalk.blue(`Express started on port ${port}!`)));
Client:
# Client
const io = require('socket.io-client');

const socket = io('https://"MY DOMAIN THAT THE SERVER IS BEHIND"', { reconnect: true })

socket.on('connect', (data) => {
  console.log("Connection successful");
  socket.emit('join', 'Hello World from client');
});

socket.on('connect_error', (error) => {
  console.log("Connection error");
});

socket.on('disconnect', (timeout) => {
  console.log("Connection disconnected");
})

socket.on('messages', (data) => {
  console.log(data);
});
I have the server behind an NGINX server using SSL, and I connected to it with the client from a different IP; the connection goes through and gets created, but the origin is always "*".
Actually, I found out you can use middleware with Socket.IO via the io.use() function. I just wrote a simple middleware that checks the incoming socket IP against a list of approved IPs.
io.use((socket, next) => {
  const ip = socket.handshake.headers['x-forwarded-for']
  if (firewall(ip)) {
    return next();
  }
  // reject the connection instead of leaving it hanging
  return next(new Error('origin not allowed'));
})
And firewall is a function that checks whether the IP is in the array of approved IPs.
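A minimal version of such a check might look like this (the address list is obviously illustrative, and x-forwarded-for only reflects the real client if your NGINX proxy sets it):

const approvedIps = ['203.0.113.7', '198.51.100.23'];   // illustrative addresses

function firewall (ip) {
  // x-forwarded-for can be a comma-separated chain when several proxies are involved
  const clientIp = (ip || '').split(',')[0].trim();
  return approvedIps.includes(clientIp);
}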
I have a Node.js server that I am sending a web socket upgrade request to. The Authorization header of this request contains login information, which I need to compare against a database entry. I'm unsure how I can stop the web socket connection from opening until after my database query callback is executed.
The following is a simplification of what I am currently doing:
var Express = require('express')
var app = Express()

server = app.listen(app.get("port"), function () {})

server.on("upgrade", function (request, socket) {
  // Query database
  // On success set "authenticated" flag on request (later accessed through socket.upgradeReq)
  // On failure abort connection
})
This works, but there is a brief period of time where the socket is open but I haven't verified the Authorization header, so it would be possible for a malicious user to send/receive data. I'm mitigating this risk in my implementation through the use of an "authenticated" flag, but it seems like there must be a better way.
I tried the following things, but they seemed to intercept all requests except the upgrade ones:
Attempt #1:
app.use(function (request, response, next) {
  // Query database, only call next if authenticated
  next()
})
Attempt #2:
app.all("*", function (request, response, next) {
//Query database, only call next if authenticated
next()
})
Possibly worth noting:
I do have an HTTP server as well; it uses the same port and accepts POST requests for registration and login.
Thank you for any assistance; please let me know if additional information is needed.
I'm not sure if this is correct HTTP protocol communication but it seems to be working in my case:
server.on('upgrade', function (req, socket, head) {
  var validationResult = validateCookie(req.headers.cookie);
  if (validationResult) {
    //...
  } else {
    socket.write('HTTP/1.1 401 Web Socket Protocol Handshake\r\n' +
      'Upgrade: WebSocket\r\n' +
      'Connection: Upgrade\r\n' +
      '\r\n');
    // a raw net.Socket has no close(); end/destroy it instead
    socket.end();
    socket.destroy();
    return;
  }
  //...
});
verifyClient is implemented for this purpose!
const WebSocketServer = require('ws').Server
const jwt = require('jsonwebtoken')   // needed for jwt.verify below

const ws = new WebSocketServer({
  server,   // attach to your existing HTTP server (or pass a port / noServer instead)
  verifyClient: (info, cb) => {
    const token = info.req.headers.token
    if (!token) {
      cb(false, 401, 'Unauthorized')
    } else {
      jwt.verify(token, 'secret-key', (err, decoded) => {
        if (err) {
          cb(false, 401, 'Unauthorized')
        } else {
          info.req.user = decoded
          cb(true)
        }
      })
    }
  }
})
src:
Websocket authentication in Node.js using JWT and WS
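On the client side, the token then needs to travel in a header the server can read during the handshake; with the ws client something like this should work (the URL and token value are placeholders):

// Node.js client: pass the JWT in the custom header that verifyClient reads via info.req.headers.token
const WebSocket = require('ws');

const socket = new WebSocket('ws://localhost:3000', {
  headers: { token: 'YOUR_JWT_HERE' }
});

socket.on('open', () => console.log('authenticated and connected'));
socket.on('unexpected-response', (req, res) => console.log('rejected:', res.statusCode));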