I am developing an application that relies completely on Socket.io. As we all know, Node.js by default runs on only one core. Now I would like to scale it across multiple cores. I am finding it difficult to make Socket.io work with PM2 cluster mode. Any sample code would help.
I am using Artillery to test. When the app runs on a single core I get response times, but when it runs in cluster mode the responses come back as NaN.
The PM2 docs say:
Be sure your application is stateless, meaning that no local data is stored in the process, for example sessions/websocket connections, session-memory and related. Use Redis, Mongo or other databases to share states between processes.
Socket.io is not stateless.
Kubernetes implementations get around the stateful issue by routing based on source IP to a specific instance. This is still not 100% reliable, since some sources may present more than one IP address. I know this is not PM2, but it gives you an idea of the complexity.
NESTJS SERVER
I use Socket.io server 2.4.1, so I get the compatible Redis adapter, which is socket.io-redis 5.4.0.
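For reference, the matching installs (assuming npm and the versions above) would be:
npm i socket.io@2.4.1 socket.io-redis@5.4.0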
I need to extend Nest's adapter class IoAdapter; that class only works for normal WS connections, not our PM2 cluster.
import { IoAdapter } from '@nestjs/platform-socket.io';
import * as redisIOAdapter from 'socket.io-redis';
import { config } from './config';

export class RedisIoAdapter extends IoAdapter {
  createIOServer(port: number, options?: any): any {
    const server = super.createIOServer(port, options);
    // Attach the Redis adapter so events are relayed between all PM2 workers
    const redisAdapter = redisIOAdapter({
      host: config.server.redisUrl,
      port: config.server.redisPort,
    });
    server.adapter(redisAdapter);
    return server;
  }
}
That is the actual NestJS implementation. Now I need to tell Nest that I am using that implementation, so I go to main.ts:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { config } from './config';
import { RedisIoAdapter } from './socket-io.adapter';
import { EventEmitter } from 'events';

async function bootstrap() {
  EventEmitter.defaultMaxListeners = 15;
  const app = await NestFactory.create(AppModule);
  app.enableCors();
  // Swap the default in-memory adapter for the Redis-backed one
  app.useWebSocketAdapter(new RedisIoAdapter(app));
  await app.listen(config.server.port);
}
bootstrap();
I have a lot of events in this app, so I had to raise the max listener count. Now, for every gateway you have, you need a different connection strategy: instead of starting with polling, you need to go to websocket directly.
...
@WebSocketGateway({ transports: ['websocket'] })
export class AppGateway implements OnGatewayConnection, OnGatewayDisconnect {
...
or if you are using namespaces
...
@WebSocketGateway({ transports: ['websocket'], namespace: 'user' })
export class UsersGateway {
...
The last step is to install the Redis database on your AWS instance (that is a topic of its own), and also install PM2:
nest build
npm i -g pm2
pm2 start dist/main.js -i 4
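Equivalently, if you prefer a PM2 ecosystem file, you can describe the cluster there and run pm2 start ecosystem.config.js. A minimal sketch (the app name is arbitrary):

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'api',            // arbitrary process name
      script: 'dist/main.js', // the built Nest entry point
      instances: 4,           // or 'max' to use every core
      exec_mode: 'cluster',   // run through Node's cluster module
    },
  ],
};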
CLIENT
const config: SocketIoConfig = {
  url: environment.server.admin_url, // e.g. http://localhost:3000
  options: {
    transports: ['websocket'], // match the server: skip polling
  },
};
You can now test your websocket server using Firecamp.
Try using this lib:
https://github.com/luoyjx/socket.io-redis-stateless
It makes Socket.io stateless through Redis.
You need to set up Redis with your Node server. Here is how I managed to get cluster mode to work with Socket.io.
First install Redis. If you are using Ubuntu, follow this link: https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-redis-on-ubuntu-18-04
Then:
npm i socket.io-redis
Now place Redis in your Node server:
const redisAdapter = require('socket.io-redis')
// force the websocket transport and attach the Redis adapter so all
// workers share the same pub/sub channel
global.io = require('socket.io')(server, { transports: [ 'websocket' ]})
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }))
That was all I had to do to get PM2 cluster mode to work with Socket.io on my server.
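For completeness, starting that server in cluster mode would then look something like this (assuming your entry file is server.js):
pm2 start server.js -i max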
Related
I have a small personal project: a Flutter application with a Node.js Express backend and a PostgreSQL database.
At the moment my database is hosted locally on my PC, but since I have an Ubuntu server I would like to move the database onto it.
So I created a Docker container with my PostgreSQL database in it.
However, I'm a bit stuck now: I don't know how to create a database instance on my remote server and make it communicate with my application...
Here is my ormconfig.ts file; I suppose this is what I have to change...
import { join } from "path";
import { ConnectionOptions } from "typeorm";
import { PostEntity } from "./database/entity/post.entity";
import { UserEntity } from "./database/entity/user.entity";

const connectionOptions: ConnectionOptions = {
  type: "postgres",
  host: "localhost",
  port: 5432,
  username: "postgres",
  password: "pg",
  database: "test",
  entities: [UserEntity, PostEntity],
  synchronize: true,
  dropSchema: false,
  migrationsRun: true,
  logging: false,
  logger: "debug",
  migrations: [join(__dirname, "src/migration/**/*.ts")],
};

export = connectionOptions;
Thanks a lot!
Unsure of your network setup with your Ubuntu server, but realistically it should be something like:
import { join } from "path";
import { ConnectionOptions } from "typeorm";
import { PostEntity } from "./database/entity/post.entity";
import { UserEntity } from "./database/entity/user.entity";

const connectionOptions: ConnectionOptions = {
  type: "postgres",
  host: UBUNTU_SERVER_ADDRESS, // your Ubuntu server's address
  port: POSTGRES_DOCKER_PORT,  // the host port mapped to the Postgres container
  username: "postgres",
  password: "pg",
  database: "test",
  entities: [UserEntity, PostEntity],
  synchronize: true,
  dropSchema: false,
  migrationsRun: true,
  logging: false,
  logger: "debug",
  migrations: [join(__dirname, "src/migration/**/*.ts")],
};

export = connectionOptions;
You'll need to make sure that the Postgres Docker instance has opened ports to connect to. E.g.:
docker run -d -p 5432:5432 ...other-args postgres:latest
Make sure your Ubuntu server has correctly configured firewall and network settings to allow remote access on port 5432.
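On Ubuntu with ufw, for example, opening the port might look like this (adapt to whatever firewall you actually use):
sudo ufw allow 5432/tcp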
I'm working on a project that's using multiple Nest repos, around 4.
Every repo needs to implement logging to log things like:
Server lifecycle events
Uncaught errors
HTTP requests/responses
Ideally, I'd like to package everything up into a module which I can publish to my company's NPM organization and just consume directly in each of my projects.
That way, it would take very minimal code to get logging set up in each project.
One of the things I'd like to log in my server lifecycle event is the server's url.
I know you can get this via app.getUrl() in the bootstrapping phase, but it would be great to have access to the app instance in a module's lifecycle hooks, like so:
@Module({})
export class LoggingModule implements NestModule {
  onApplicationBootstrap() {
    // hypothetical: there is no `app` in scope here - that is the question
    console.log(`Server started on ${app.getUrl()}`)
  }

  beforeApplicationShutdown() {
    console.log('shutting down')
  }

  onApplicationShutdown() {
    console.log('successfully shut down')
  }

  configure(consumer: MiddlewareConsumer) {
    consumer.apply(LoggingMiddleware).forRoutes('*')
  }
}
Is this possible?
There's no way (besides hacky ones, maybe) to access the app itself inside modules.
As you can see here, app.getUrl() uses the underlying HTTP server. Thus I guess you can retrieve the same data using the provider HttpAdapterHost.
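For example, a sketch of that idea (hypothetical LoggingService; note the underlying server only has an address once app.listen() has been called):

import { Injectable, OnApplicationBootstrap } from '@nestjs/common';
import { HttpAdapterHost } from '@nestjs/core';

@Injectable()
export class LoggingService implements OnApplicationBootstrap {
  constructor(private readonly adapterHost: HttpAdapterHost) {}

  onApplicationBootstrap() {
    // the raw Node http.Server behind Nest's HTTP adapter
    const httpServer = this.adapterHost.httpAdapter.getHttpServer();
    const address = httpServer.address(); // null until listen() has completed
    if (address && typeof address !== 'string') {
      console.log(`Server bound to port ${address.port}`);
    }
  }
}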
I thought I'd chime in and offer one of the hacky solutions. Only use this if there is absolutely no other way or your deadline is coming in an hour.
Create a class that can hold the application instance:
import { INestApplication } from '@nestjs/common';

export class AppHost {
  app: INestApplication;
}
And a module to host it:
@Module({
  providers: [AppHost],
  exports: [AppHost],
})
export class AppHostModule {}
In your bootstrap() function, retrieve the AppHost instance and assign the app itself:
// after NestFactory.create() ...
app.select(AppHostModule).get(AppHost).app = app;
Now the actual application will be available wherever you inject AppHost.
Be aware, though, that the app will not be available inside AppHost before the whole application bootstraps (i.e. in onModuleInit or onApplicationBootstrap hooks, or in provider factories), but it should be available in the shutdown hooks.
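To illustrate, a consumer might look like this (hypothetical service and import path; remember app.getUrl() returns a Promise):

import { Injectable, BeforeApplicationShutdown } from '@nestjs/common';
import { AppHost } from './app-host';

@Injectable()
export class LifecycleLogger implements BeforeApplicationShutdown {
  // AppHost.app is only populated once bootstrap() has run
  constructor(private readonly appHost: AppHost) {}

  async beforeApplicationShutdown() {
    console.log(`Shutting down ${await this.appHost.app.getUrl()}`);
  }
}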
I'm not sure that is so hacky... I'm using this approach to prevent the server from starting when there are pending migrations.
// AppModule.ts
export class AppModule implements NestModule {
  app: INestApplication;

  async configure(consumer: MiddlewareConsumer) {
    if (await this.hasPendingMigrations()) {
      // give the logger a moment to flush, then abort startup
      setTimeout(() => {
        this.logger.error("There are pending migrations!");
        process.exitCode = 1;
        this.app.close();
      }, 1000);
    }
    //...
  }

  public setApp(app: INestApplication) {
    this.app = app;
  }
  //...
}
// main.ts
const app = await NestFactory.create(AppModule, {
  logger: config.cfgServer.logger,
});
app.get(AppModule).setApp(app);
I am looking for the best WS solution for an IoT project, and I am currently testing my options with WebSockets. I have tried two NPM libraries so far, 'ws' and 'websockets'; they worked great, and both the Node.js and ReactJS implementations were simple. I am now trying socket.io. Reading the documentation, I struggle to create even a simple working example, copying the code directly from the documentation. Since the test code is so simple, I am really confused, especially after the positive experience with the two previous packages. I am pretty sure I am doing something wrong, but I am unable to spot my mistake. I am really thankful to anyone helping to spot what I am not doing right.
Node.js server instance listening on port 8000 (based on the example here: https://socket.io/docs/v4/server-initialization/):
const io = require("socket.io")();

io.on("connection", socket => {
  console.log('blip')
});

io.listen(8000);
React client trying to connect to the NodeJS server from port 2000:
import React from "react";
import { io } from "socket.io-client";

class Io extends React.Component {
  state = {
    wsConnected: false
  }

  componentDidMount() {
    const socket = io('http://localhost:8000');
    socket.on("connect", () => {
      console.log('connect', socket.id);
      this.setState({ wsConnected: true }); // reflect the connection in the UI
    });
  }

  render() {
    const { wsConnected } = this.state;
    return (
      <div>{wsConnected ? 'on' : 'off'}</div>
    )
  }
}

export default Io
It seems you have a CORS problem in polling transport mode, so you can run the standalone Socket.io server like this when you are using polling:
const io = require("socket.io")({
  cors: {
    origin: '*',
  }
});

io.on("connection", socket => {
  console.log(`${socket.handshake.headers.origin} connected`);
});

io.listen(8000);
But it is better to use the websocket transport if you need a persistent HTTP connection; when you are using websocket transport mode, there is no CORS policy. The websocket transport does not support HTTP headers such as cookies, but it is the better choice when there are energy-consumption concerns on the client side. You can also force the Socket.io server to support only certain transport modes; see the Socket.io documentation.
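On the client side, forcing the websocket transport is a one-line change (a sketch mirroring the server above):

import { io } from "socket.io-client";

// skip HTTP long-polling entirely, so no CORS preflight is involved
const socket = io("http://localhost:8000", { transports: ["websocket"] });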
This is a little hard to articulate, so I hope my title isn't too terrible.
I have a frontend/backend React/Node.js (REST API) web app that I want to add Redis support to, for storing and retrieving app-global settings and per-user settings (like language preference, last login, etc.; simple stuff). So I was considering adding a /settings branch to my backend REST API to push/pull this information from a Redis instance.
This is where my Node.js inexperience shows. I'm looking at using the ioredis client, and it seems too easy. If I have a couple of helpers (more than one .js file that will call upon Redis), is constructing the client as a const in each safe to do? Or is reusing a single instance the recommended approach?
Here's a sample of what I'm thinking of doing. Imagine I had three helper modules that require access to the Redis client. Should I declare the client as a const in each, or centralize it in a single helper module and get the client from that? Is there a disadvantage to either?
const config = require('config.json');
const redis_url = config.redis_url;

// redis setup
const Redis = require('ioredis');
const redis = new Redis(redis_url);

module.exports = {
  test
}

async function test(id) {
  redis.get(id, function (err, result) {
    if (err) {
      console.error(err);
      throw(err);
    } else {
      return result;
    }
  });
}
Thank you.
If no redis conflicts...
If the different "helper" modules you are referring to have no conflicts when interacting with Redis, such as overwriting or using the same Redis keys, then I can't see any reason not to use the same Redis instance (as outlined by garlicman) and export it to the different modules in which it is used.
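A minimal sketch of that shared-instance approach (hypothetical redis-client.js; Node's module cache makes the connection a de facto singleton):

// redis-client.js
const Redis = require('ioredis');
const config = require('config.json');

// every require('./redis-client') returns this same connection
module.exports = new Redis(config.redis_url);

// elsewhere, e.g. in a helper module:
// const redis = require('./redis-client');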
Otherwise use separate redis databases...
If you do require separate Redis database connections, Redis ships with 16 databases, so you can specify which one to connect to when creating a new instance - see below:
const redis = new Redis({ // set up config for connection to Redis
  port: 6379,        // Redis port
  host: '127.0.0.1', // Redis host (must be a string)
  family: 4,         // 4 (IPv4) or 6 (IPv6)
  db: 10,            // Redis database to connect to
});
Normally what I would do (in Java, say) is implement an explicit class with singleton access to hold the connection and any connection-error/reconnect handling. All modules in Node.js are already singletons, I believe, but I will probably go with a client class to hold it along with my own access-related methods. Something like:
const config = require('config.json');
const Redis = require("ioredis");

// named RedisClient so it does not shadow the ioredis Redis class
class RedisClient {
  constructor() {
    this.client = new Redis(config.redis_url);
  }

  get(key) {
    return this.client.get(key);
  }

  set(key, value, ttl) {
    let rp;
    if (ttl === 0) {
      rp = this.client.set(key, value);
    }
    else {
      // arrow function keeps `this` bound to the RedisClient instance
      rp = this.client.set(key, value)
        .then((res) => {
          this.client.expire(key, ttl);
        });
    }
    return rp;
  }
}

module.exports = new RedisClient();
I'll probably include a data_init() method to check and preload an initial key/value structure on first connect.
So I'm like 99% sure I'm just screwing up something dumb here.
I'm trying to set up catbox to cache objects to Redis. I have Redis up and running and I can hit it with RDM (a Sequel Pro-like utility for Redis), but Hapi is not cooperating.
I register the redis catbox cache like so:
const server = new Hapi.Server({
  cache: [
    {
      name: 'redisCache',
      engine: require('catbox-redis'),
      host: 'redis',
      partition: 'cache',
      password: 'devpassword'
    }
  ]
});
I am doing this in server.js. After this block of code I go on to register some more plugins and start the server. I also export the server at the end of the file:
module.exports = server;
Then in my routes file, I am attempting to set up a test route like so:
{
  method: 'GET',
  path: '/cacheSet/{key}/{value}',
  config: { auth: false },
  handler: function(req, res) {
    const testCache = server.cache({
      cache: 'redisCache',
      expireIn: 1000
    });
    testCache.set(req.params.key, req.params.value, 1000, function(e) {
      console.log(e);
      res(Boom.create(e.http_code, e.message));
    })
    res(req.params.key + " " + req.params.value);
  }
},
Note: My routes are in an external file, and are imported into server.js where I register them.
If I comment out all the cache stuff on this route, the route runs fine and returns my params.
When I ran this with the cache code, at first I got "server not defined". So I then added:
const server = require('./../server.js');
to import the server.
Now when I run this, I get "server.cache is not a function" and a 500 error.
I don't understand what I'm doing wrong. My guess is that I'm importing server, but perhaps it's the object without all the configs set, so it's unable to use the .cache method. However, this seems wrong, because .cache should always be a default method backed by the default memory cache, so even if my cache registration isn't active yet, server.cache should theoretically still be a method.
I know it has to be something basic I'm messing up, but what?
I was correct; I was doing something dumb. It had to do with how I was exporting my server. I modified my structure to pull the initial server creation out into its own file and make it more modular. Now I am exporting JUST the server, like so:
'use strict';
const Hapi = require('hapi');

const server = new Hapi.Server({
  cache: [
    {
      name: 'redisCache',
      engine: require('catbox-redis'),
      host: 'redis',
      partition: 'cache',
      password: 'devpassword'
    }
  ]
});

module.exports = server;
I then import that into my main server file (now index.js, previously server.js) and everything runs fine. I can also import it into any other file (in this case my routes file) and access the server's methods.
Redis is happily storing keys and Hapi is happily not giving me errors.
Leaving this here in case anyone else runs into a dumb mistake like this.