AWS CDK ApplicationLoadBalancedFargateService with Node.js and WebSockets constantly stops after each health check

I used CDK with the ApplicationLoadBalancedFargateService construct to deploy an ECS Fargate service behind a load balancer. It's a WebSocket API written in Node.js with Socket.io.
It works fine locally, but in AWS the task is constantly being killed by the health check, with this error:
service CoreStack-CoreServiceA69F11F4-9ciFOoAL5oj9 (port 5000) is unhealthy in target-group CoreS-CoreS-FMLY84WTNYVN due to (reason Health checks failed with these codes: [404]).
The code of the stack:
import { aws_ecs_patterns, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Repository } from 'aws-cdk-lib/aws-ecr';
import { ContainerImage } from 'aws-cdk-lib/aws-ecs';
import { HostedZone } from 'aws-cdk-lib/aws-route53';
import { ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import ec2 = require('aws-cdk-lib/aws-ec2');
import ecs = require('aws-cdk-lib/aws-ecs');

const DOMAIN_NAME = 'newordergame.com';
const SUBDOMAIN = 'core';
const coreServiceDomain = SUBDOMAIN + '.' + DOMAIN_NAME;

export class CoreStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const vpc = new ec2.Vpc(this, 'CoreServiceVpc', { maxAzs: 2 });
    const cluster = new ecs.Cluster(this, 'Cluster', { vpc });

    const repository = Repository.fromRepositoryName(
      this,
      'CoreServiceRepository',
      'core-service'
    );
    const image = ContainerImage.fromEcrRepository(repository, 'latest');

    const zone = HostedZone.fromLookup(this, 'NewOrderGameZone', {
      domainName: DOMAIN_NAME
    });

    // Instantiate Fargate Service with just cluster and image
    new aws_ecs_patterns.ApplicationLoadBalancedFargateService(
      this,
      'CoreService',
      {
        cluster,
        taskImageOptions: {
          image: image,
          containerName: 'core',
          containerPort: 5000,
          enableLogging: true
        },
        protocol: ApplicationProtocol.HTTPS,
        domainName: coreServiceDomain,
        domainZone: zone
      }
    );
  }
}
The Dockerfile:
FROM node:16-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY package.json ./
RUN npm install
COPY webpack.config.js tsconfig.json ./
COPY src ./src
RUN npm run webpack:build
EXPOSE 5000
CMD [ "node", "build/main.js" ]
The application itself also listens on port 5000.
The task is killed and restarted every 3 minutes.
Does anyone have a clue about how to fix it?
Thank you!

Your health check URL is returning a 404 status code. You need to configure the health check with a path that returns a 200 status code.

The ELB health check speaks only HTTP or HTTPS; it cannot check a WebSocket endpoint, including Socket.io.
So the best workaround I found is to have the service answer plain HTTP requests with status 200 alongside the WebSocket handler.
With this approach, the health check actually verifies the status of the service.

Related

Invariant Violation: fetch is not found globally and no fetcher passed, to fix pass a fetch for your environment

I'm trying to run a NodeJS app with an AWS Lambda handler. My package.json is very simple:
...
"dependencies": {
  "aws-appsync": "^4.1.7",
  "aws-sdk": "^2.1202.0",
  "graphql-tag": "^2.12.6"
}
When I try to run anything I get:
Invariant Violation:
fetch is not found globally and no fetcher passed, to fix pass a fetch for
your environment like https://www.npmjs.com/package/node-fetch.
For example:
import fetch from 'node-fetch';
import { createHttpLink } from 'apollo-link-http';
const link = createHttpLink({ uri: '/graphql', fetch: fetch });
at new InvariantError (/Users/jamesdaniels/Code/node_modules/ts-invariant/lib/invariant.js:16:28)
at Object.exports.checkFetcher (/Users/jamesdaniels/Code/node_modules/apollo-link-http-common/lib/index.js:65:15)
at Object.createHttpLink (/Users/jamesdaniels/Code/node_modules/apollo-link-http/lib/bundle.umd.js:47:30)
at Object.exports.createAppSyncLink (/Users/jamesdaniels/Code/node_modules/aws-appsync/lib/client.js:144:201)
at new AWSAppSyncClient (/Users/jamesdaniels/Code/node_modules/aws-appsync/lib/client.js:214:72)
at Object.<anonymous> (/Users/jamesdaniels/Code/utils/appsync.js:16:23)
The error appears to be with the aws-appsync package. The error only occurs when I introduce that to my app:
const AWS = require("aws-sdk") // Works fine
const AUTH_TYPE = require("aws-appsync").AUTH_TYPE;
const AWSAppSyncClient = require("aws-appsync").default;
// GraphQL client config
const appSyncClientConfig = {
  url: "https://xxxxxxxxxxxxxxxxx.appsync-api.eu-west-2.amazonaws.com/graphql",
  region: "eu-west-2",
  auth: {
    type: AUTH_TYPE.AWS_IAM,
    credentials: AWS.config.credentials,
  },
  disableOffline: true,
};

// Initialise the AppSync client
const appSyncClient = new AWSAppSyncClient(appSyncClientConfig);
An error is thrown from dependent modules aws-appsync > apollo-link-http > apollo-link-http-common.
Turns out the error message was telling me exactly what I needed to know: aws-appsync isn't designed for a back-end environment. It's intended for front-end environments, where fetch is available globally. On the back end we need to create our own global fetch variable so it is available to the node packages we've installed.
The solution is to install node-fetch and then follow the instructions in its "Providing global access" section:
Providing global access
To use fetch() without importing it, you can patch the global object in node:
// fetch-polyfill.js
import fetch, {
  Blob,
  blobFrom,
  blobFromSync,
  File,
  fileFrom,
  fileFromSync,
  FormData,
  Headers,
  Request,
  Response,
} from 'node-fetch'

if (!globalThis.fetch) {
  globalThis.fetch = fetch
  globalThis.Headers = Headers
  globalThis.Request = Request
  globalThis.Response = Response
}
// index.js
import './fetch-polyfill'
// ...

Connect Postgresql database remote server ubuntu server (Docker-Compose) and Node.js express backend

I have a small personal project: a Flutter application with a Node.js Express backend and a PostgreSQL database.
At the moment the database is hosted locally on my PC, but since I have an Ubuntu server I would like to host the database there.
So I created a Docker container with my PostgreSQL database in it.
However, I'm a bit stuck now: I don't know how to create a database instance on my remote server and make it communicate with my application...
Here is my ormconfig.ts file; I suppose I have to change something here...
import { join } from "path";
import { ConnectionOptions } from "typeorm";
import { PostEntity } from "./database/entity/post.entity";
import { UserEntity } from "./database/entity/user.entity";

const connectionOptions: ConnectionOptions = {
  type: "postgres",
  host: "localhost",
  port: 5432,
  username: "postgres",
  password: "pg",
  database: "test",
  entities: [UserEntity, PostEntity],
  synchronize: true,
  dropSchema: false,
  migrationsRun: true,
  logging: false,
  logger: "debug",
  migrations: [join(__dirname, "src/migration/**/*.ts")],
};

export = connectionOptions;
Thanks a lot !
Unsure of your network setup with your Ubuntu server, but realistically it should be something like:
import { join } from "path";
import { ConnectionOptions } from "typeorm";
import { PostEntity } from "./database/entity/post.entity";
import { UserEntity } from "./database/entity/user.entity";

const connectionOptions: ConnectionOptions = {
  type: "postgres",
  host: UBUNTU_SERVER_ADDRESS,
  port: POSTGRES_DOCKER_PORT,
  username: "postgres",
  password: "pg",
  database: "test",
  entities: [UserEntity, PostEntity],
  synchronize: true,
  dropSchema: false,
  migrationsRun: true,
  logging: false,
  logger: "debug",
  migrations: [join(__dirname, "src/migration/**/*.ts")],
};

export = connectionOptions;
You'll need to make sure the Postgres Docker container publishes its port so it can be reached from outside. E.g.:
docker run -d -p 5432:5432 ...other-args postgres:latest
Make sure your Ubuntu server has correctly configured firewall and network settings to allow remote access on port 5432.

Express won't serve acme-challenge static file without extension

I'm struggling to validate the Let's Encrypt challenge on my server. I have set my server up properly and it serves files as expected, except when they don't have extensions (which is the case for the SSL challenge files).
app.use("/.well-known", express.static(path.join(__dirname, ".well-known"), { dotfiles: "allow" }));
For instance:
.well-known/acme-challenge/test.txt works as expected
.well-known/acme-challenge/test won't work (resource not found)
What am I doing wrong? Thanks
I want to document my solution to this issue, using Docker to automate the process of using certbot to get/set certificates. If you are not using Docker, just skip to server.js, but make sure you understand the shell script and how it calls the certonly command.
First I recommend you set up your docker compose file like the following:
docker-compose.yml
version: "3.9"
services:
  main_website:
    image: yourimage:latest
    build:
      context: ./
      target: dev
    ports:
      - "443:443"   # Expose SSL port for HTTPS
      - "80:8080"   # Public main website (Vue)
      - "8000:8000" # Vue dev
      - "8001:8001" # Vue UI
    volumes:
      - "./web:/usr/src/app"
      - "./server:/usr/src/server"
      - "./server/crontab:/etc/crontabs/root"
      - "./certbot:/var/www/certbot/:ro"
    entrypoint: ["/bin/sh", "-c"] # -c runs sh in string mode so npm install and the start commands can be chained
    command: # Note: when vue runs (from npm run serve), its package.json command ends with &, which backgrounds it and lets the other server run too
      - "npm install && npm run build && npm run serve && npm run ui && cd /usr/src/server && npm install && npm run start:dev"
    env_file:
      - db_dev.env
      - stripe_dev.env
    environment:
      NODE_ENV: "dev"
    depends_on:
      - db_dev
      - db_live
      - stripe
      - certbot
    container_name: website_and_api_trading_tools_software
    healthcheck:
      test: "curl --fail http://localhost:8080/ || exit 1"
      interval: 20s
      timeout: 10s
      retries: 3
      start_period: 30s
  db_dev:
    image: mysql:latest
    command: --group_concat_max_len=1844674407370954752
    volumes:
      - ./db_dev_data:/var/lib/mysql
    restart: always
    container_name: db_dev_trading_tools_software
    env_file:
      - db_dev.env
  db_live:
    image: mysql:latest
    command: --group_concat_max_len=1844674407370954752
    volumes:
      - ./db_live_data:/var/lib/mysql
    restart: always
    container_name: db_live_trading_tools_software
    env_file:
      - db_live.env
  phpmyadmin:
    depends_on:
      - db_live
      - db_dev
    container_name: phpmyadmin_trading_tools_software
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_ARBITRARY: 1
      UPLOAD_LIMIT: 25000000
    restart: always
    ports:
      - "8081:80"
  stripe:
    container_name: stripe_trading_tools_software
    image: stripe/stripe-cli:latest
    restart: always
    entrypoint: ["/bin/sh", "-c"]
    env_file:
      - stripe_dev.env
    command:
      - "stripe listen --api-key ${STRIPE_TEST_API_KEY} --forward-to main_website:8080/webhook --format JSON"
  certbot:
    container_name: certbot_tradingtools_software
    image: certbot/certbot
    volumes:
      - "./certbot/www/:/var/www/certbot/:rw"
      - "./certbot/conf/:/etc/letsencrypt/:rw"
Things to note for this file: I created a volume on the main_website container. This gives it access to the files that certbot's webroot check will need, and the certificates will later be stored in the 'conf' directory here.
- "./certbot:/var/www/certbot/:ro"
The volumes of note for the certbot container are listed below. The /var/www/certbot path is where the webroot files go (certbot creates the files .well-known/acme-challenge/randomfilejiberishhere), and the /etc/letsencrypt path is where the actual certificates are placed.
volumes:
  - "./certbot/www/:/var/www/certbot/:rw"
  - "./certbot/conf/:/etc/letsencrypt/:rw"
Note that I add :rw at the end of the volume path to enable read/write, and :ro on the main_website container's volume for read-only access, which is good practice to follow. This step is optional.
The next thing to note is that you will want a healthcheck on main_website to be sure it's up and running before certbot starts running webroot checks against it.
healthcheck:
  test: "curl --fail http://localhost:8080/ || exit 1"
  interval: 20s
  timeout: 10s
  retries: 3
  start_period: 30s
This will attempt to curl the website from within the container every 20 seconds until it gets a valid response. When you bring the stack up, just add the --wait flag: docker compose up --wait (this waits for the health check to pass before moving on to the next step).
Note that with this setup the certbot container and main_website essentially have access to the same volume.
Next, I recommend creating a shell script like the following.
certbot.sh
#!/usr/bin/env bash
##JA - To use this file, pass the domain name(s) you are attempting to renew or create as arguments.
# https://unix.stackexchange.com/questions/84381/how-to-compare-two-dates-in-a-shell (refer to the string comparison technique for dates)
date +"%Y%m%d" > ./certbot/last-cert-run.txt
docker compose run --rm certbot certonly --webroot --webroot-path /var/www/certbot/ -d $1 -d $2 --non-interactive --agree-tos -m youremailhere@gmail.com --dry-run -v --expand
if [ $? -eq 0 ]
then
    echo "Dry run successful, attempting to create real certificate and/or automatically renew by doing so..."
    ##JA - The rate limit is 5 certificates created per week. For this reason, this command should only run if it has been one week since the last run.
    ##JA - Read (if it exists) the date from the last-successful-cert-run.txt file. If it has been at least a week since then, then we are allowed to run.
    ##JA - The date command is different on Linux vs Mac, so we need to identify the system before using it.
    unameOut="$(uname -s)"
    case "${unameOut}" in
        Linux*)  machine=Linux;;
        Darwin*) machine=Mac;;
        CYGWIN*) machine=Cygwin;;
        MINGW*)  machine=MinGw;;
        *)       machine="UNKNOWN:${unameOut}"
    esac
    echo "machine=$machine"
    current_date=$(date +"%Y%m%d")
    last_date=$(cat ./certbot/last-successful-cert-run.txt)
    if [ "$machine" = "Mac" ];then
        one_week_from_last_date=$(date -jf "%Y%m%d" -v +7d "$last_date" +"%Y%m%d")
    else
        one_week_from_last_date=$(date +"%Y%m%d" -d "$last_date +7 days")
    fi
    echo "last_date=$last_date"
    echo "current_date=$current_date"
    echo "one_week_from_last_date=$one_week_from_last_date"
    ##JA - This will renew every 7 days, well within the rate limit.
    if [ $current_date -ge $one_week_from_last_date ];then
        echo "Time to renew certificate..."
        docker compose run --rm certbot certonly --webroot --webroot-path /var/www/certbot/ -d $1 -d $2 --non-interactive --agree-tos -m youremailhere@gmail.com -v --expand
        date +"%Y%m%d" > ./certbot/last-successful-cert-run.txt
        exit 0
    else
        echo "Not time to renew certificate yet"
        exit 0
    fi
else
    echo "Certbot dry run failed."
    exit 1
fi
To use this script, call it like this: certbot.sh domain1.com anotherdomain.com. If you want more domains, keep adding them, but you will have to modify the script to add an extra -d flag for each domain in both the --dry-run version of the certbot command and the non --dry-run version.
Certbot only allows 5 failed validation attempts before you are blocked for an hour.
For this reason you should ALWAYS attempt a --dry-run test first to be sure everything is set up, and ONLY if that passes attempt a real one.
Certbot also allows only 5 certificates per week to be issued!
For this reason, I store the date of the last successful run in a text file and only run the actual certificate-creation command when necessary. Note that I had to compute the date 7 days ahead differently depending on whether the script runs on a Mac or a Linux system, since the date command differs.
Finally, I am NOT using certbot renew, because certonly essentially does the same thing here.
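As an aside, the Mac/Linux `date` branching is the only non-portable part of the script; the same 7-day check can be sketched portably in Node (the function name and YYYYMMDD format here are my own, not part of the script above):

```javascript
// Returns true when at least 7 days separate two YYYYMMDD date strings,
// mirroring the shell script's rate-limit guard without any `date` flags.
function weekElapsed(lastRunYYYYMMDD, nowYYYYMMDD) {
  const toDate = (s) =>
    new Date(`${s.slice(0, 4)}-${s.slice(4, 6)}-${s.slice(6, 8)}T00:00:00Z`);
  const msPerDay = 24 * 60 * 60 * 1000;
  return (toDate(nowYYYYMMDD) - toDate(lastRunYYYYMMDD)) / msPerDay >= 7;
}

console.log(weekElapsed('20230101', '20230108')); // true (exactly 7 days)
console.log(weekElapsed('20230101', '20230107')); // false (only 6 days)
```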
Server.js
const express = require("express");
const cors = require("cors");
const logger = require('./logger'); // require the logger folder index.js file, which loads DEV or PROD versions of logging.
const stripe = require('stripe');
const http = require('http');
const https = require('https');
const fs = require('fs');

const certBotPath = '/var/www/certbot/';
const credentialsPath = certBotPath + 'conf/live/';
logger.info(`credentialsPath=${credentialsPath}`);

//#JA - Array to hold all the known credentials found.
var credentials_array = [];
try {
  fs.readdirSync(credentialsPath).map(domain_directory => {
    // logger.info("Reading directories for SSL credentials...");
    // logger.info(domain_directory);
    if (domain_directory != "README") { //#JA - We ignore the README file
      const privateKey = fs.readFileSync(credentialsPath + domain_directory + '/privkey.pem', 'utf8');
      const certificate = fs.readFileSync(credentialsPath + domain_directory + '/cert.pem', 'utf8');
      const ca = fs.readFileSync(credentialsPath + domain_directory + '/chain.pem', 'utf8');
      const credentials = {
        key: privateKey,
        cert: certificate,
        ca: ca,
        domain: domain_directory
      };
      logger.info(`credential object added.. ${domain_directory}`);
      credentials_array.push(credentials);
    }
  });
} catch {
  const credentials = {}; //#JA - Just give a wrong credentials object so it works.
  credentials_array.push(credentials);
}
logger.info("Finished finding credentials...");

const vueBuildPath = '/usr/src/app/build/'; //#JA - Path to the vue build files
const app = express();

//#JA - We need a manual check for https because SOME http routes we do NOT want to forward to https, like /.well-known/acme-challenge/:fileid for certbot
var httpsCheck = function (req, res, next) {
  if (req.connection.encrypted) {
    logger.info(req.url);
    logger.info("SSL Detected, continue to next middleware...!");
    next(); // Then continue on as normal.
  } else {
    logger.info("SSL NOT DETECTED!!!");
    logger.info(req.url);
    if (!req.url.includes("/.well-known/acme-challenge/")) {
      logger.info("Forcing permanent redirect to port 443 secure...");
      res.writeHead(301, { "Location": "https://" + req.headers['host'] + req.url });
      res.end();
    } else {
      next(); //#JA - Let it pass as normal in this case since it's the challenges.
    }
  }
};
app.use(httpsCheck);

app.use(express.static(vueBuildPath));
app.use(express.static(certBotPath));

var corsOptions = {
  origin: "http://localhost:8080"
};
app.use(cors(corsOptions));

// parse requests of content-type - application/json (#JA - I added the extra ability to get the rawBody so it plays nice with certain webhooks from Stripe)
app.use(express.json({
  limit: '5mb',
  verify: (req, res, buf) => {
    req.rawBody = buf.toString();
  }
}));

// parse requests of content-type - application/x-www-form-urlencoded
app.use(express.urlencoded({ extended: true }));

// certbot SSL checking webroot directory
// app.use('/.well-known', express.static(certBotPath));
app.get('/.well-known/acme-challenge/:fileid', function (req, res) {
  res.sendFile(certBotPath + "www/.well-known/acme-challenge/" + req.params.fileid);
});

// Sequelize database setup
const db = require("./models");
if (process.env.NODE_ENV == "dev") {
  db.sequelize.sync({ force: true })
    .then(() => {
      logger.info("Dropped and re-synced db.");
    })
    .catch((err) => {
      logger.error("Failed to sync db: " + err.message);
    });
} else {
  db.sequelize.sync({ alter: true })
    .then(() => {
      logger.info("Altered & synced db.");
    })
    .catch((err) => {
      logger.error("Failed to sync db: " + err.message);
    });
}

// Routes
//#JA - Main vue application route at /
app.get(["/", "/optimizer"], (req, res) => {
  // res.json({ message: "Welcome to tradingtools.software applications." });
  res.sendFile(vueBuildPath + "index.html");
});

// This is your Stripe CLI webhook secret for testing your endpoint locally.
const endpointSecret = process.env.WEBHOOK_SIGNING_SECRET;
// logger.info(`Stripe endpointSecret=${endpointSecret}`);

// Stripe webhook
app.post('/webhook', (request, response) => {
  logger.info("Stripe webhook detected!");
  const sig = request.headers['stripe-signature'];
  let event;
  try {
    event = stripe.webhooks.constructEvent(request.rawBody, sig, endpointSecret); //#JA - Had to modify this to take the rawBody since this is what was needed.
  } catch (err) {
    response.status(400).send(`Webhook Error: ${err.message}`);
    return;
  }
  logger.info(`event_type=${event.type}`);
  // Handle the event
  switch (event.type) {
    case 'payment_intent.succeeded':
      const paymentIntent = event.data.object;
      // Then define and call a function to handle the event payment_intent.succeeded
      // logger.info(JSON.stringify(paymentIntent));
      // logger.info("payment_intent.succeeded!");
      break;
    // ... handle other event types
    default:
      console.log(`Unhandled event type ${event.type}`);
  }
  // Return a 200 response to acknowledge receipt of the event
  response.send();
});

require("./routes/user.routes")(app); //#JA - Includes all the API routes for User

const httpServer = http.createServer(app);
httpServer.listen(8080, () => {
  logger.info(`HTTP Server running on port 8080 via 80 docker-compose, partial redirect active, using app for routes but will forward all 80 port traffic to 443 except for .well-known route`);
});

//#JA - TODO - Make this use the first credentials it finds and add any more as contexts.
logger.info(`using credentials for: ${credentials_array[0].domain} as default`);
const httpsServer = https.createServer(credentials_array[0], app);
if (credentials_array.length > 1) {
  for (let i = 0; i < credentials_array.length; i++) {
    logger.info(`Adding httpsServer context for ${credentials_array[i].domain}`);
    httpsServer.addContext(credentials_array[i].domain, credentials_array[i]); //#JA - Domain is stored in the credentials object for convenience.
  }
}
httpsServer.listen(443, () => {
  logger.info(`'HTTPS Server running on port 443!'`);
});
Lastly, while most of the code I pasted is not directly relevant, I will point out the sections that make this work.
const http = require('http');
const https = require('https');
You must include both of these, since you will need to run BOTH servers. The webroot check will NOT work over the SSL version of the directory! For this reason I had to do some workarounds so that the server never has to shut down.
const certBotPath = '/var/www/certbot/';
const credentialsPath = certBotPath + 'conf/live/';
These are the paths to the actual certificates and the certbot webroot check, if you are using the same volume configuration from docker-compose that I did.
var credentials_array = [];
try {
  fs.readdirSync(credentialsPath).map(domain_directory => {
    // logger.info("Reading directories for SSL credentials...");
    // logger.info(domain_directory);
    if (domain_directory != "README") { //#JA - We ignore the README file
      const privateKey = fs.readFileSync(credentialsPath + domain_directory + '/privkey.pem', 'utf8');
      const certificate = fs.readFileSync(credentialsPath + domain_directory + '/cert.pem', 'utf8');
      const ca = fs.readFileSync(credentialsPath + domain_directory + '/chain.pem', 'utf8');
      const credentials = {
        key: privateKey,
        cert: certificate,
        ca: ca,
        domain: domain_directory
      };
      logger.info(`credential object added.. ${domain_directory}`);
      credentials_array.push(credentials);
    }
  });
} catch {
  const credentials = {}; //#JA - Just give a wrong credentials object so it works.
  credentials_array.push(credentials);
}
This code attempts to read the credentials path to see whether any certificates already exist, ignoring the README file that will always be present. Any domain certificates found are added to the array, since one https server instance can serve multiple certificates via .addContext on the https object, as you will see later.
On success, certbot creates one subdirectory per domain under conf/live/, so you can understand what my code is doing. (I blurred out my domain in the original screenshot for security reasons.)
var httpsCheck = function (req, res, next) {
  if (req.connection.encrypted) {
    logger.info(req.url);
    logger.info("SSL Detected, continue to next middleware...!");
    next(); // Then continue on as normal.
  } else {
    logger.info("SSL NOT DETECTED!!!");
    logger.info(req.url);
    if (!req.url.includes("/.well-known/acme-challenge/")) {
      logger.info("Forcing permanent redirect to port 443 secure...");
      res.writeHead(301, { "Location": "https://" + req.headers['host'] + req.url });
      res.end();
    } else {
      next(); //#JA - Let it pass as normal in this case since it's the challenges.
    }
  }
};
app.use(httpsCheck);
The code above is middleware that intercepts every request to the website. If it detects SSL, it lets the request continue to the next middleware via next(). If the request is NOT encrypted, it inspects the URL: a request for /.well-known/acme-challenge/ is allowed through with next(); anything else is redirected to the SSL version (https) of that path, so you retain the usual redirection behaviour.
app.get('/.well-known/acme-challenge/:fileid', function (req, res) {
  res.sendFile(certBotPath + "www/.well-known/acme-challenge/" + req.params.fileid);
});
This route runs whenever the webroot path /.well-known/acme-challenge/ is used, and it sends the file certbot is looking for using the known file paths inside the main_website container. This is essential for certbot to pass all of its checks!
const httpServer = http.createServer(app);
httpServer.listen(8080, () => {
  logger.info(`HTTP Server running on port 8080 via 80 docker-compose, partial redirect active, using app for routes but will forward all 80 port traffic to 443 except for .well-known route`);
});
This creates the HTTP server for port-80 access and passes it app, so it still uses the routing we specified earlier. In my case I'm listening on port 8080 because my docker-compose file maps 8080 to 80.
const httpsServer = https.createServer(credentials_array[0], app);
if (credentials_array.length > 1) {
  for (let i = 0; i < credentials_array.length; i++) {
    logger.info(`Adding httpsServer context for ${credentials_array[i].domain}`);
    httpsServer.addContext(credentials_array[i].domain, credentials_array[i]); //#JA - Domain is stored in the credentials object for convenience.
  }
}
httpsServer.listen(443, () => {
  logger.info(`'HTTPS Server running on port 443!'`);
});
Finally, you create the httpsServer. I default it to the first credentials object found and pass it app so it can serve all the other routes as normal. If other credentials are found, they are added with the .addContext function as needed.
Then you simply listen on port 443, as usual for an SSL server.
In conclusion, this is a fully working setup with the official certbot Docker image and SSL redirection for everything except what the webroot check needs. The supplied shell script can easily be run with the domains you want, and put on a cron job to periodically check whether new certificates are needed and place them in the same directory every time.
I hope this helps someone else trying to do this, as it was a lot of work to figure out.

how to connect angular app on public ip and a node app on local ip?

I have an Angular 4 app running on 151.233.x.y:8080 and a Node app running on 192.168.t.z:3000. I want to make them communicate via an HTTP service. The base URL in my service is http://192.168.t.z, and my Angular app is started with ng serve --port 8080 --host 192.168.87.19 --public 151.233.58.231, but I cannot connect to my Node app successfully. What's the problem?
const _isDev = window.location.port.indexOf('8080') > -1;
const protocol = window.location.protocol;
const host = window.location.host;
const apiURI = _isDev ? 'http://192.168.t.z:3001/' : '';

export const CNF = {
  appName: 'Demo',
  BASE_URI: protocol + "//" + host + '/',
  BASE_API: apiURI,
};
Use CNF.BASE_API when you make GET/POST requests over HTTP.
Note that 192.168.x.x is a private address: this works only on your own network, and clients outside it cannot reach the Node app.
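For illustration, a hypothetical helper that builds request URLs from BASE_API (the CNF object is redefined here with a placeholder address so the snippet stands alone):

```javascript
// Placeholder config mirroring the shape of the CNF object above.
const CNF = { BASE_API: 'http://192.168.1.10:3001/' };

// Join the base and a path without producing double slashes.
function apiUrl(path) {
  return CNF.BASE_API.replace(/\/$/, '') + '/' + path.replace(/^\//, '');
}

console.log(apiUrl('/api/users')); // http://192.168.1.10:3001/api/users
console.log(apiUrl('api/users'));  // http://192.168.1.10:3001/api/users
```

Centralising the base URL this way means switching dev/prod endpoints only touches the config object, not every request site.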

How to use PM2 Cluster with Socket IO?

I am developing an application that relies completely on Socket.io. As we all know, Node.js by default runs on only one core. Now I would like to scale it across multiple cores. I am finding it difficult to make Socket.io work with PM2 cluster mode. Any sample code would help.
I am using Artillery to test. When the app runs on a single core I get the response, while when it runs in a cluster the response is NaN.
(Screenshot: Artillery results when run without cluster.)
The PM2 docs say:
Be sure your application is stateless, meaning that no local data is
stored in the process, for example sessions/websocket connections,
session-memory and related. Use Redis, Mongo or other databases to
share states between processes.
Socket.io is not stateless.
Kubernetes deployments get around the stateful issue by routing based on source IP to a specific instance. This is still not 100% reliable, since some sources may present more than one IP address. I know this is not PM2, but it gives you an idea of the complexity.
NESTjs SERVER
I use Socket.io server 2.4.1, so I pull in the compatible Redis adapter, socket.io-redis 5.4.0.
I need to extend Nest's adapter class IoAdapter; that class only works for normal ws connections, not our PM2 cluster.
import { IoAdapter } from '@nestjs/platform-socket.io';
import * as redisIOAdapter from 'socket.io-redis';
import { config } from './config';

export class RedisIoAdapter extends IoAdapter {
  createIOServer(port: number, options?: any): any {
    const server = super.createIOServer(port, options);
    const redisAdapter = redisIOAdapter({
      host: config.server.redisUrl,
      port: config.server.redisPort,
    });
    server.adapter(redisAdapter);
    return server;
  }
}
That is the NestJS implementation.
Now I need to tell Nest I'm using that implementation, so in main.ts:
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { config } from './config';
import { RedisIoAdapter } from './socket-io.adapter';
import { EventEmitter } from 'events';

async function bootstrap() {
  EventEmitter.defaultMaxListeners = 15;
  const app = await NestFactory.create(AppModule);
  app.enableCors();
  app.useWebSocketAdapter(new RedisIoAdapter(app));
  await app.listen(config.server.port);
}
bootstrap();
I have a lot of events in this app, so I had to raise the max listener count.
Now, for every gateway you have, you need a different connection strategy: instead of starting with polling, go to websocket directly.
...
@WebSocketGateway({ transports: ['websocket'] })
export class AppGateway implements OnGatewayConnection, OnGatewayDisconnect {
...
or, if you are using namespaces:
...
@WebSocketGateway({ transports: ['websocket'], namespace: 'user' })
export class UsersGateway {
...
The last step is to install the Redis database on your AWS instance (that is a separate task in itself), and also install PM2:
nest build
npm i -g pm2
pm2 start dist/main.js -i 4
CLIENT
const config: SocketIoConfig = {
  url: environment.server.admin_url, // e.g. http://localhost:3000
  options: {
    transports: ['websocket'],
  },
};
You can now test your websocket server using FireCamp
Try using this lib:
https://github.com/luoyjx/socket.io-redis-stateless
It makes Socket.io stateless through Redis.
You need to set up Redis with your Node server. Here is how I managed to get cluster mode to work with Socket.io.
First install Redis. If you are using Ubuntu, follow this link: https://www.digitalocean.com/community/tutorials/how-to-install-and-secure-redis-on-ubuntu-18-04
Then:
npm i socket.io-redis
Now place Redis in your Node server
const redisAdapter = require('socket.io-redis')
global.io = require('socket.io')(server, { transports: [ 'websocket' ]})
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }))
That was all I had to do to get PM2 cluster mode to work with socket.io in my server.
