Express won't serve acme-challenge static file without extension - node.js

I struggle to validate the Let's Encrypt challenge on my server. I have properly set up my server and it serves files as expected, except when they don't have extensions (which is the case for the SSL challenges).
app.use("/.well-known", express.static(path.join(__dirname, ".well-known"), { dotfiles: "allow" }) );
For instance:
.well-known/acme-challenge/test.txt works as expected
.well-known/acme-challenge/test won't work (resource not found)
What am I doing wrong? Thanks

I want to document my solution to this issue, using docker to automate the process of obtaining and renewing certificates with certbot. If you are not using docker, just skip to server.js, but make sure you understand the shell script and how it calls the certonly command.
First I recommend you set up your docker compose file like the following:
docker-compose.yml
version: "3.9"
services:
  main_website:
    image: yourimage:latest
    build:
      context: ./
      target: dev
    ports:
      - "443:443"   # Expose SSL port for HTTPS
      - "80:8080"   # Public main website (Vue)
      - "8000:8000" # Vue dev
      - "8001:8001" # Vue UI
    volumes:
      - "./web:/usr/src/app"
      - "./server:/usr/src/server"
      - "./server/crontab:/etc/crontabs/root"
      - "./certbot:/var/www/certbot/:ro"
    entrypoint: ["/bin/sh","-c"] # -c runs sh in string mode so I can run npm install and start:dev in one command
    command: # Note: when vue runs (from npm run serve), its package.json command ends with an &, which puts it in the background and lets the other server run too
      - "npm install && npm run build && npm run serve && npm run ui && cd /usr/src/server && npm install && npm run start:dev"
    env_file:
      - db_dev.env
      - stripe_dev.env
    environment:
      NODE_ENV: "dev"
    depends_on:
      - db_dev
      - db_live
      - stripe
      - certbot
    container_name: website_and_api_trading_tools_software
    healthcheck:
      test: "curl --fail http://localhost:8080/ || exit 1"
      interval: 20s
      timeout: 10s
      retries: 3
      start_period: 30s
  db_dev:
    image: mysql:latest
    command: --group_concat_max_len=1844674407370954752
    volumes:
      - ./db_dev_data:/var/lib/mysql
    restart: always
    container_name: db_dev_trading_tools_software
    env_file:
      - db_dev.env
  db_live:
    image: mysql:latest
    command: --group_concat_max_len=1844674407370954752
    volumes:
      - ./db_live_data:/var/lib/mysql
    restart: always
    container_name: db_live_trading_tools_software
    env_file:
      - db_live.env
  phpmyadmin:
    depends_on:
      - db_live
      - db_dev
    container_name: phpmyadmin_trading_tools_software
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_ARBITRARY: 1
      UPLOAD_LIMIT: 25000000
    restart: always
    ports:
      - "8081:80"
  stripe:
    container_name: stripe_trading_tools_software
    image: stripe/stripe-cli:latest
    restart: always
    entrypoint: ["/bin/sh","-c"]
    env_file:
      - stripe_dev.env
    command:
      - "stripe listen --api-key ${STRIPE_TEST_API_KEY} --forward-to main_website:8080/webhook --format JSON"
  certbot:
    container_name: certbot_tradingtools_software
    image: certbot/certbot
    volumes:
      - "./certbot/www/:/var/www/certbot/:rw"
      - "./certbot/conf/:/etc/letsencrypt/:rw"
The first thing to note about this file is the volume I created on the main_website container. It gives the website access to the files that the certbot webroot check will need, and the certificates will later be stored in the 'conf' directory that lives here:
- "./certbot:/var/www/certbot/:ro"
The volumes of note on the certbot container are listed below. The /var/www/certbot path is where the webroot files go (certbot creates files like .well-known/acme-challenge/randomgibberishhere there), and the /etc/letsencrypt path is where the actual certificates are placed.
volumes:
  - "./certbot/www/:/var/www/certbot/:rw"
  - "./certbot/conf/:/etc/letsencrypt/:rw"
Note that I add :rw at the end of the volume path to enable read/write, and :ro on the main_website container to make it read-only, which is good practice but optional.
The next thing to note is that you will want a healthcheck on main_website, so you can be sure it's up and running before certbot starts issuing webroot checks against it.
healthcheck:
  test: "curl --fail http://localhost:8080/ || exit 1"
  interval: 20s
  timeout: 10s
  retries: 3
  start_period: 30s
Every 20 seconds this will attempt to curl the website from within the container until it gets a valid response. When you bring the stack up, just add the --wait flag, like this: docker compose up --wait (this waits for the health check to pass before moving on to the next step).
Note that with this setup the certbot container and main_website essentially have access to the same volume.
Next, I recommend creating a shell script like the following.
certbot.sh
#!/usr/bin/env bash
##JA - To use this file, pass the domain names you are attempting to renew or create as arguments.
#https://unix.stackexchange.com/questions/84381/how-to-compare-two-dates-in-a-shell (refer to string comparison technique for dates)
date +"%Y%m%d" > ./certbot/last-cert-run.txt

docker compose run --rm certbot certonly --webroot --webroot-path /var/www/certbot/ -d $1 -d $2 --non-interactive --agree-tos -m youremailhere@gmail.com --dry-run -v --expand
if [ $? -eq 0 ]
then
    echo "Dry run successful, attempting to create real certificate and/or automatically renew by doing so..."
    ##JA - The rate limit is 5 certificates created per week. For this reason, this command should only run if it has been one week since the last run.
    ##JA - Read (if it exists) the date from the last-successful-cert-run.txt file. If it has been at least a week since then, then we are allowed to run.
    ##JA - The date command is different on Linux vs Mac, so we need to identify the system before using it.
    unameOut="$(uname -s)"
    case "${unameOut}" in
        Linux*)   machine=Linux;;
        Darwin*)  machine=Mac;;
        CYGWIN*)  machine=Cygwin;;
        MINGW*)   machine=MinGw;;
        *)        machine="UNKNOWN:${unameOut}"
    esac
    echo "machine=$machine"

    current_date=$(date +"%Y%m%d")
    # Fall back to an old date on the very first run, when the file does not exist yet.
    last_date=$(cat ./certbot/last-successful-cert-run.txt 2>/dev/null || echo "19700101")
    if [ "$machine" = "Mac" ];then
        one_week_from_last_date=$(date -jf "%Y%m%d" -v +7d "$last_date" +"%Y%m%d")
    else
        one_week_from_last_date=$(date +"%Y%m%d" -d "$last_date +7 days")
    fi
    echo "last_date=$last_date"
    echo "current_date=$current_date"
    echo "one_week_from_last_date=$one_week_from_last_date"

    ##JA - This will renew every 7 days, well within the rate limit.
    if [ $current_date -ge $one_week_from_last_date ];then
        echo "Time to renew certificate..."
        docker compose run --rm certbot certonly --webroot --webroot-path /var/www/certbot/ -d $1 -d $2 --non-interactive --agree-tos -m youremailhere@gmail.com -v --expand
        date +"%Y%m%d" > ./certbot/last-successful-cert-run.txt
        exit 0
    else
        echo "Not time to renew certificate yet"
        exit 0
    fi
else
    echo "Certbot dry run failed."
    exit 1
fi
To use this script you call it like this: certbot.sh domain1.com anotherdomain.com. If you want to add more domains, keep adding them, but you will have to modify the script to add an extra -d flag for each domain to both the --dry-run version of the certbot command and the non --dry-run version.
Certbot only allows 5 failed validation attempts before you are blocked for an hour.
For this reason you should ALWAYS attempt a --dry-run test first to be sure everything is set up, and ONLY if that passes attempt a real one.
Let's Encrypt also issues at most 5 duplicate certificates per week!
For this reason, I store the date of the last successful run in a text file and only run the actual certificate-creation command when necessary. Note that I had to calculate the date 7 days ahead differently depending on whether the shell script runs on a Mac or a Linux system, since the date command differs.
Finally, I am NOT using certbot renew, because certonly essentially does the same thing here.
Server.js
const express = require("express");
const cors = require("cors");
const logger = require('./logger');//require the logger folder index.js file which will load DEV or PROD versions of logging.
const stripe = require('stripe');
const http = require('http');
const https = require('https');
const fs = require('fs');

const certBotPath = '/var/www/certbot/';
const credentialsPath = certBotPath + 'conf/live/';
logger.info(`credentialsPath=${credentialsPath}`);

//#JA - Array to hold all the known credentials found.
var credentials_array = [];
try {
  fs.readdirSync(credentialsPath).map(domain_directory => {
    // logger.info("Reading directories for SSL credentials...");
    // logger.info(domain_directory);
    if (domain_directory != "README") { //#JA - We ignore the README file
      const privateKey = fs.readFileSync(credentialsPath + domain_directory + '/privkey.pem', 'utf8');
      const certificate = fs.readFileSync(credentialsPath + domain_directory + '/cert.pem', 'utf8');
      const ca = fs.readFileSync(credentialsPath + domain_directory + '/chain.pem', 'utf8');
      const credentials = {
        key: privateKey,
        cert: certificate,
        ca: ca,
        domain: domain_directory
      };
      logger.info(`credential object added.. ${domain_directory}`);
      credentials_array.push(credentials);
    }
  });
} catch {
  const credentials = {};//#JA - Just give an empty credentials object so the server can still start.
  credentials_array.push(credentials);
}
logger.info("Finished finding credentials...");

const vueBuildPath = '/usr/src/app/build/';//#JA - Path to the vue build files
const app = express();

//#JA - We need a manual check for https because SOME http routes we do NOT want to forward to https, like /.well-known/acme-challenge/:fileid for certbot
var httpsCheck = function (req, res, next) {
  if (req.connection.encrypted) {
    logger.info(req.url);
    logger.info("SSL Detected, continue to next middleware...!");
    next();//Then continue on as normal.
  } else {
    logger.info("SSL NOT DETECTED!!!");
    logger.info(req.url);
    if (!req.url.includes("/.well-known/acme-challenge/")) {
      logger.info("Forcing permanent redirect to port 443 secure...");
      res.writeHead(301, { "Location": "https://" + req.headers['host'] + req.url });
      res.end();
    } else {
      next();//#JA - Let it pass as normal in this case since it's the challenges.
    }
  }
};
app.use(httpsCheck);

app.use(express.static(vueBuildPath));
app.use(express.static(certBotPath));

var corsOptions = {
  origin: "http://localhost:8080"
};
app.use(cors(corsOptions));

// parse requests of content-type - application/json (#JA - I added the extra ability to get the rawBody so it plays nice with certain webhooks from Stripe)
app.use(express.json({
  limit: '5mb',
  verify: (req, res, buf) => {
    req.rawBody = buf.toString();
  }
}));

// parse requests of content-type - application/x-www-form-urlencoded
app.use(express.urlencoded({ extended: true }));

// certbot SSL checking webroot directory
// app.use('/.well-known', express.static(certBotPath));
app.get('/.well-known/acme-challenge/:fileid', function (req, res) {
  res.sendFile(certBotPath + "www/.well-known/acme-challenge/" + req.params.fileid);
});

// Sequelize database setup
const db = require("./models");
if (process.env.NODE_ENV == "dev") {
  db.sequelize.sync({ force: true })
    .then(() => {
      logger.info("Dropped and re-synced db.");
    })
    .catch((err) => {
      logger.error("Failed to sync db: " + err.message);
    });
} else {
  db.sequelize.sync({ alter: true })
    .then(() => {
      logger.info("Altered & synced db.");
    })
    .catch((err) => {
      logger.error("Failed to sync db: " + err.message);
    });
}

// Routes
// #JA - Main vue application route at /
app.get(["/", "/optimizer"], (req, res) => {
  //res.json({ message: "Welcome to tradingtools.software applications." });
  res.sendFile(vueBuildPath + "index.html");
});

// This is your Stripe CLI webhook secret for testing your endpoint locally.
const endpointSecret = process.env.WEBHOOK_SIGNING_SECRET;
//logger.info(`Stripe endpointSecret=${endpointSecret}`);

// Stripe webhook
app.post('/webhook', (request, response) => {
  logger.info("Stripe WebHook detected!");
  const sig = request.headers['stripe-signature'];
  let event;
  try {
    event = stripe.webhooks.constructEvent(request.rawBody, sig, endpointSecret); //#JA - Had to modify this to take the rawBody since this is what was needed.
  } catch (err) {
    response.status(400).send(`Webhook Error: ${err.message}`);
    return;
  }
  logger.info(`event_type=${event.type}`);
  // Handle the event
  switch (event.type) {
    case 'payment_intent.succeeded':
      const paymentIntent = event.data.object;
      // Then define and call a function to handle the event payment_intent.succeeded
      //logger.info(JSON.stringify(paymentIntent));
      //logger.info("Payment_intent.succeeded!");
      break;
    // ... handle other event types
    default:
      console.log(`Unhandled event type ${event.type}`);
  }
  // Return a 200 response to acknowledge receipt of the event
  response.send();
});

require("./routes/user.routes")(app); //#JA - Includes all the API routes for User

const httpServer = http.createServer(app);
httpServer.listen(8080, () => {
  logger.info(`HTTP server running on port 8080 (published as port 80 by docker-compose); partial redirect active: all port 80 traffic is forwarded to 443 except the .well-known route`);
});

//#JA - TODO - Make this use the first credentials it finds and add any more as contexts.
logger.info(`using credentials for: ${credentials_array[0].domain} as default`);
const httpsServer = https.createServer(credentials_array[0], app);
if (credentials_array.length > 1) {
  for (let i = 0; i < credentials_array.length; i++) {
    logger.info(`Adding httpsServer context for ${credentials_array[i].domain}`);
    httpsServer.addContext(credentials_array[i].domain, credentials_array[i]);//#JA - Domain is stored in the credentials object for convenience.
  }
}
httpsServer.listen(443, () => {
  logger.info(`HTTPS Server running on port 443!`);
});
Lastly, while most of the code I pasted is specific to my app, I will point out the sections that are relevant to making this work.
const http = require('http')
const https = require('https');
You must include both of these, since you will need to run BOTH servers; the webroot check will NOT work against the SSL version of the directory. For this reason I had to do some workarounds so that the server never has to shut down.
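Boiled down, the pattern is one Express app attached to two servers. A minimal sketch of just that skeleton (example.com stands in for your real domain):

const express = require('express');
const http = require('http');
const https = require('https');
const fs = require('fs');

const app = express();
app.get('/', (req, res) => res.send('ok'));

// Plain HTTP: must stay up so the ACME webroot check can reach the server.
http.createServer(app).listen(8080);

// HTTPS: the same app, served with the files certbot wrote to the shared volume.
const credentials = {
  key: fs.readFileSync('/var/www/certbot/conf/live/example.com/privkey.pem', 'utf8'),
  cert: fs.readFileSync('/var/www/certbot/conf/live/example.com/cert.pem', 'utf8'),
  ca: fs.readFileSync('/var/www/certbot/conf/live/example.com/chain.pem', 'utf8')
};
https.createServer(credentials, app).listen(443);

In the real server those hard-coded paths are built from two constants: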
const certBotPath = '/var/www/certbot/';
const credentialsPath = certBotPath+'conf/live/';
These are the paths to the actual certificates and the certbot webroot check, assuming you use the same volume mappings from docker-compose that I did.
var credentials_array = [];
try {
  fs.readdirSync(credentialsPath).map(domain_directory => {
    // logger.info("Reading directories for SSL credentials...");
    // logger.info(domain_directory);
    if (domain_directory != "README") { //#JA - We ignore the README file
      const privateKey = fs.readFileSync(credentialsPath + domain_directory + '/privkey.pem', 'utf8');
      const certificate = fs.readFileSync(credentialsPath + domain_directory + '/cert.pem', 'utf8');
      const ca = fs.readFileSync(credentialsPath + domain_directory + '/chain.pem', 'utf8');
      const credentials = {
        key: privateKey,
        cert: certificate,
        ca: ca,
        domain: domain_directory
      };
      logger.info(`credential object added.. ${domain_directory}`);
      credentials_array.push(credentials);
    }
  });
} catch {
  const credentials = {};//#JA - Just give an empty credentials object so the server can still start.
  credentials_array.push(credentials);
}
This code attempts to read the credentials path to see whether any certificates have already been created, ignoring the README file that is always present. Any domain certificates found are pushed into the array, since one https server instance can serve multiple certificates by calling .addContext on the https object, as you will see later.
On a successful run, certbot creates one directory per domain under conf/live/, each containing the privkey.pem, cert.pem and chain.pem files the code above reads (I originally showed a screenshot of this here, with my domain blurred out for security reasons).
var httpsCheck = function (req, res, next) {
  if (req.connection.encrypted) {
    logger.info(req.url);
    logger.info("SSL Detected, continue to next middleware...!");
    next();//Then continue on as normal.
  } else {
    logger.info("SSL NOT DETECTED!!!");
    logger.info(req.url);
    if (!req.url.includes("/.well-known/acme-challenge/")) {
      logger.info("Forcing permanent redirect to port 443 secure...");
      res.writeHead(301, { "Location": "https://" + req.headers['host'] + req.url });
      res.end();
    } else {
      next();//#JA - Let it pass as normal in this case since it's the challenges.
    }
  }
};
app.use(httpsCheck);
The code above is middleware that intercepts every request to the website. If it detects SSL, it lets the request continue to the next middleware via next(). If it detects the connection is NOT encrypted, it reads the URL: a request for /.well-known/acme-challenge/ is allowed through with next(), while anything else is redirected to the SSL (https) version of that path, so you retain the usual redirection behaviour.
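Since the Node process terminates TLS itself here, Express exposes the same information as req.secure; a shorter equivalent of the middleware above, as a sketch:

app.use(function (req, res, next) {
  // req.secure is true when the request arrived over the TLS listener
  if (req.secure || req.url.startsWith('/.well-known/acme-challenge/')) {
    return next();
  }
  res.redirect(301, 'https://' + req.headers.host + req.url);
});

Either way, the ACME exception is the important part; the next snippet is what actually serves the challenge files: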
app.get('/.well-known/acme-challenge/:fileid', function (req, res) {
  res.sendFile(certBotPath + "www/.well-known/acme-challenge/" + req.params.fileid);
});
This route runs whenever the webroot path /.well-known/acme-challenge/ is used, and it sends the requested file using the known file paths inside the main_website container. This is essential for certbot to pass all of its checks!
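One caveat worth adding: req.params.fileid is attacker-controlled, and an encoded ../ could try to escape the challenge directory. A hedged hardening of the same route, using path.basename to strip any directory components:

const path = require('path');

app.get('/.well-known/acme-challenge/:fileid', function (req, res) {
  // basename() throws away any path components, so a decoded "../" cannot escape
  const safeName = path.basename(req.params.fileid);
  res.sendFile(path.join(certBotPath, 'www/.well-known/acme-challenge', safeName));
});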
const httpServer = http.createServer(app);
httpServer.listen(8080, () => {
  logger.info(`HTTP server running on port 8080 (published as port 80 by docker-compose); partial redirect active: all port 80 traffic is forwarded to 443 except the .well-known route`);
});
This creates the http server for port 80 access and passes it app, so it still works with the routing we specified earlier. In my case I listen on port 8080 because my docker-compose file maps host port 80 to container port 8080.
const httpsServer = https.createServer(credentials_array[0], app);
if (credentials_array.length > 1) {
  for (let i = 0; i < credentials_array.length; i++) {
    logger.info(`Adding httpsServer context for ${credentials_array[i].domain}`);
    httpsServer.addContext(credentials_array[i].domain, credentials_array[i]);//#JA - Domain is stored in the credentials object for convenience.
  }
}
httpsServer.listen(443, () => {
  logger.info(`HTTPS Server running on port 443!`);
});
Finally, you create the httpsServer. I default it to the first credentials object found and pass it app so it can serve all the other routes as normal. If other credentials were found, they are added as needed with .addContext.
Then you simply listen on port 443 as usual for an SSL server.
In conclusion, this is a fully working setup with the official certbot docker image and SSL redirection for everything except what the webroot check needs. The supplied shell script can easily be run with the domains you want, and put on a cronjob to periodically check whether new certificates are needed and place them in the same directory every time.
I hope this helps someone else trying to do this as this was a lot of work to figure out.

Related

HAProxy locks up simple express server in 5 minutes?

I have a really strange one I just cannot work out.
I have been building node/express apps for years now and usually run a dev server just at home for quick debugging/testing. I frontend it with a haproxy instance to make it "production like" and to perform the ssl part.
In any case, just recently ALL servers (different projects) started misbehaving and stopped responding to requests almost exactly 5 minutes after being started. That is, ALL of the 3 or 4 I sometimes run on this machine; yet the exact same instance of haproxy is front-ending the production version of the code, and that has no issues, it's still rock solid. And, infuriatingly, I wrote a really basic express server example: if it's front-ended by the same haproxy it also locks up, but if I switch ports, it runs fine forever as expected!
So in summary:
1x haproxy instance frontending a bunch of prod/dev instances with the same rule sets, all with ssl.
2x production instances working fine
4x dev instances (and a simple test program), ALL locking up after around 5 min when behind haproxy
and if I run the simple test program on a different port so it's local network only, it works perfectly.
I do also have uptime robot liveness checks hitting the haproxy as well to monitor the instances.
So this example:
const express = require('express')
const request = require('request');
const app = express()
const port = 1234

var counter = 0;
var received = 0;

process.on('warning', e => console.warn(e.stack));
const started = new Date();

if (process.pid) {
  console.log('Starting as pid ' + process.pid);
}

app.get('/', (req, res) => {
  res.send('Hello World!').end();
})

app.get('/livenessCheck', (req, res) => {
  res.send('ok').end();
})

app.use((req, res, next) => {
  console.log('unknown', { host: req.headers.host, url: req.url });
  res.send('ok').end();
})

const server = app.listen(port, () => {
  console.log(`Example app listening on port ${port}`)
})

app.keepAliveTimeout = (5 * 1000) + 1000;
app.headersTimeout = (6 * 1000) + 2000;

setInterval(() => {
  server.getConnections(function (error, count) {
    console.log('connections', count);
  });
  //console.log('tick', new Date())
}, 500);

setInterval(() => {
  console.log('request', new Date())
  request('http://localhost:' + port, function (error, response, body) {
    if (error) {
      const ended = new Date();
      console.error('request error:', ended, error); // Print the error if one occurred
      counter = counter - 1;
      if (counter < 0) {
        console.error('started ', started);
        const diff = Math.floor((ended - started) / 1000)
        const min = Math.floor(diff / 60);
        console.error('elapsed ', min, 'min ', diff - min * 60, 'sec');
        exit;
      }
      return;
    }
    received = received + 1;
    console.log('request ', received, 'statusCode:', new Date(), response && response.statusCode); // Print the response status code if a response was received
    //console.log('body:', body);
  });
}, 1000);
works perfectly and runs forever on a non-haproxy port, but only runs for approximately 5 minutes on a port behind haproxy; it usually gets to around 277 request responses each time before hanging up and timing out.
The "exit()" function is just a forced crash for testing.
I've tried adjusting some timeouts on haproxy, but to no avail. And each change has no impact on the production instances, which just keep working fine.
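One detail worth checking in the test program above: keepAliveTimeout and headersTimeout are properties of Node's http.Server, not of the Express app, so the two assignments after app.listen() do nothing as written. Presumably they were meant to be set on the server object:

const server = app.listen(port, () => {
  console.log(`Example app listening on port ${port}`)
})
// These are http.Server properties; set on the app object they are silently ignored.
server.keepAliveTimeout = (5 * 1000) + 1000;
server.headersTimeout = (6 * 1000) + 2000;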
I'm running these dev versions on a Mac Pro 2013 with the latest OS, and have tried various versions of node.
Any thoughts what it could be or how to debug further?
Oh, and they all serve web sockets as well as http requests.
Here is one example of a haproxy config that I am trying (relevant sections):
global
    log 127.0.0.1 local2
    ...
    nbproc 1
    daemon

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          4s
    timeout server          5s
    timeout http-keep-alive 4s
    timeout check           4s
    timeout tunnel          1h
    maxconn                 3000

frontend wwws
    bind *:443 ssl crt /etc/haproxy/certs/ no-sslv3
    option http-server-close
    option forwardfor
    reqadd X-Forwarded-Proto:\ https
    reqadd X-Forwarded-Port:\ 443
    http-request set-header X-Client-IP %[src]

    # set HTTP Strict Transport Security (HSTS) header
    rspadd Strict-Transport-Security:\ max-age=15768000

    acl host_working hdr_beg(host) -i working.
    use_backend Working if host_working
    default_backend BrokenOnMac

backend Working
    balance roundrobin
    server working_1 1.2.3.4:8456 check

backend BrokenOnMac
    balance roundrobin
    server broken_1 2.3.4.5:8456 check
So if you go to https://working.blahblah.blah it works forever, but the backend for https://broken.blahblah.blah locks up and stops responding after 5 minutes (including direct curl requests bypassing haproxy).
BUT if I run the EXACT same code on a different port, it responds forever to any direct curl request.
The "production" servers that are working are on various OSes like Centos. On my Mac Pro, I run the tests. The test code works on the Mac on a port NOT front-ended by haproxy. The same test code hangs up after 5 minutes on the Mac when it has haproxy in front.
So the precise configuration that fails is:
Mac Pro + any node express app + frontended by haproxy.
If I change anything, like run the code on Centos or make sure there is no haproxy, then the code works perfectly.
So given that it only stopped working recently, is it perhaps the latest patch for macOS Monterey (12.6) somehow interfering with the app socket when it gets a certain condition from haproxy? That seems highly unlikely, but it's the most logical explanation I can come up with.

AWS CDK ApplicationLoadBalancedFargateService with Node.js and WebSockets constantly stops after each health check

I used CDK with the ApplicationLoadBalancedFargateService construct to deploy the ECS Fargate service with Load Balancing. It's the WebSocket API written on Node.js with Socket.io.
It works fine locally, but in AWS it's constantly being killed by the health check, with this error:
service CoreStack-CoreServiceA69F11F4-9ciFOoAL5oj9 (port 5000) is unhealthy in target-group CoreS-CoreS-FMLY84WTNYVN due to (reason Health checks failed with these codes: [404]).
The code of the stack:
import { aws_ecs_patterns, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import { Repository } from 'aws-cdk-lib/aws-ecr';
import { ContainerImage } from 'aws-cdk-lib/aws-ecs';
import { HostedZone } from 'aws-cdk-lib/aws-route53';
import { ApplicationProtocol } from 'aws-cdk-lib/aws-elasticloadbalancingv2';
import ec2 = require('aws-cdk-lib/aws-ec2');
import ecs = require('aws-cdk-lib/aws-ecs');

const DOMAIN_NAME = 'newordergame.com';
const SUBDOMAIN = 'core';
const coreServiceDomain = SUBDOMAIN + '.' + DOMAIN_NAME;

export class CoreStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    const vpc = new ec2.Vpc(this, 'CoreServiceVpc', { maxAzs: 2 });
    const cluster = new ecs.Cluster(this, 'Cluster', { vpc });
    const repository = Repository.fromRepositoryName(
      this,
      'CoreServiceRepository',
      'core-service'
    );
    const image = ContainerImage.fromEcrRepository(repository, 'latest');
    const zone = HostedZone.fromLookup(this, 'NewOrderGameZone', {
      domainName: DOMAIN_NAME
    });

    // Instantiate Fargate Service with just cluster and image
    new aws_ecs_patterns.ApplicationLoadBalancedFargateService(
      this,
      'CoreService',
      {
        cluster,
        taskImageOptions: {
          image: image,
          containerName: 'core',
          containerPort: 5000,
          enableLogging: true
        },
        protocol: ApplicationProtocol.HTTPS,
        domainName: coreServiceDomain,
        domainZone: zone
      }
    );
  }
}
The Dockerfile:
FROM node:16-alpine
RUN mkdir -p /usr/app
WORKDIR /usr/app
COPY package.json ./
RUN npm install
COPY webpack.config.js tsconfig.json ./
COPY src ./src
RUN npm run webpack:build
EXPOSE 5000
CMD [ "node", "build/main.js" ]
The application also listens on port 5000.
It crashes every 3 mins.
Does anyone have a clue about how to fix it?
Thank you!
Your health check URL is returning a 404 status code. You need to configure the health check with a path that returns a 200 status code.
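On the CDK side, the pattern exposes the target group it creates, so this is a small change inside the constructor shown above; a sketch (the /health path is an assumed endpoint, see the matching route in the next answer):

const service = new aws_ecs_patterns.ApplicationLoadBalancedFargateService(
  this,
  'CoreService',
  { /* ...same props as in the stack above... */ }
);

// Point the ALB health check at a route the container answers with 200.
service.targetGroup.configureHealthCheck({
  path: '/health',
  healthyHttpCodes: '200'
});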
The ELB health check only works over HTTP or HTTPS; it does not work with WebSocket traffic, including Socket.io.
So the best workaround I found is to make the service listen for plain HTTP alongside the WebSocket, and answer the health check path with status 200.
With this approach, the health check actually checks the status of the service.
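A minimal sketch of that workaround, assuming Express with Socket.io attached to the same HTTP server and /health as the configured health check path:

const express = require('express');
const http = require('http');

const app = express();
const server = http.createServer(app);
const io = require('socket.io')(server);

// Plain HTTP route for the ALB health check; WebSocket traffic is unaffected.
app.get('/health', (req, res) => res.sendStatus(200));

io.on('connection', (socket) => {
  /* game logic */
});

server.listen(5000);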

Node clusters with pm2 queues every other request

I'm using pm2 to manage concurrent requests to my API; so far so good, I've managed to make it work.
My api has only one route. Each request takes between 1-2 min to completely resolve and send back the response. As soon as I make my first request, I can see in the pm2 logs that the request has been accepted, but if I make a second request to the same route it gets queued and only gets processed after the first is completed.
Only in case I make a third request to the same route while the second request is in queue, another worker is called and accepts the third request, the second stays in queue until the first gets resolved.
I hope I made myself clear
the first request is accepted promptly by a worker, the second request gets queued, the third request is also promptly accepted by another worker, the fourth gets queued, the fifth accepted, the sixth queued, and so on.
I have 24 available workers.
here is my very simple server:
const express = require('express');
const runner = require('./Rrunner2');
const moment = require('moment');
const bodyParser = require('body-parser');

const app = express();
app.use(express.json({ limit: '50mb', extended: true }));
app.use(bodyParser.json());

app.post('/optimize', (req, res) => {
  try {
    const req_start = moment().format('DD/MM/YYYY h:mm a');
    console.log('Request received at ' + req_start);
    console.log(`Worker process ID - ${process.pid} has accepted the request.`);

    const data = req.body;
    const opt_start = moment().format('DD/MM/YYYY h:mm a');
    console.log('Optimization started at ' + opt_start);

    let result = runner.optimizer(data);

    const opt_end = moment().format('DD/MM/YYYY h:mm a');
    console.log('Optimization ended at ' + opt_end);

    const res_send = moment().format('DD/MM/YYYY h:mm a');
    console.log('Response sent at ' + res_send);
    return res.send(result);
  } catch (err) {
    console.error(err);
    return res.status(500).send('Server error.');
  }
});

const PORT = 3000;
app.listen(PORT, () => console.log(`Server listening on port ${PORT}.`));
my PM2 ecosystem file is:
module.exports = {
  apps: [{
    name: "Asset Optimizer",
    script: "server.js",
    watch: true,
    ignore_watch: ["optimizer", "outputData", "Rplots.pdf", "nodeSender.js", "errorLog"],
    instances: "max",
    autorestart: true,
    max_memory_restart: "1G",
    exec_mode: "cluster",
    watch_options: {
      "followSymlinks": false
    }
  }]
}
I start the server using pm2 start ecosystem.config.js.
Everything works just fine, but this queueing issue is driving me crazy. I've tried many, many dirty approaches, including splitting routes and splitting servers, with no success whatsoever.
Even if you don't know the answer to this, please give me some ideas on how to overcome this problem. Thank you very much.
UPDATE
Okay, I've managed to make this work with the native cluster module by setting:
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;
cluster.schedulingPolicy = cluster.SCHED_RR;
But once I try to make pm2 start the server, it no longer works.
Is it possible to make pm2 accept the round-robin approach?
P.S.: I'm using Windows, and found in the node docs that it is the only platform where this is not set up by default.
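For reference, a minimal sketch of what the native cluster approach mentioned in the update looks like in full, with the scheduling policy forced to round-robin before the workers are forked:

const cluster = require('cluster');
const numCPUs = require('os').cpus().length;

cluster.schedulingPolicy = cluster.SCHED_RR; // must be set before fork(); not the default on Windows

if (cluster.isMaster) {
  for (let i = 0; i < numCPUs; i++) cluster.fork();
} else {
  const express = require('express');
  const app = express();
  app.post('/optimize', (req, res) => res.send('result')); // long-running work goes here
  app.listen(3000, () => console.log(`Worker ${process.pid} listening`));
}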
The only viable solution to this issue was implementing nginx as a reverse proxy and load balancer.
I used nginx 1.18.0, and this is the configuration file that made it work.
If anyone comes by this issue, nginx + pm2 is the way to go.
Happy to clarify further if anyone faces this; it gave me a lot of work.
worker_processes 5;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    upstream expressapi {
        least_conn;
        server localhost:3000;
        server localhost:3001;
        server localhost:3002;
        server localhost:3003;
        server localhost:3004;
    }

    sendfile on;
    keepalive_timeout 800;
    fastcgi_read_timeout 800;
    proxy_read_timeout 800;

    server {
        listen 8000;
        server_name optimizer;

        location / {
            proxy_pass http://expressapi/;
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
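For the upstream block above to have something to answer on ports 3000-3004, each pm2 instance has to listen on its own port. One way to wire that up, assuming pm2's increment_var option and a server that reads process.env.PORT:

// ecosystem.config.js - a sketch; each instance gets PORT=3000..3004
module.exports = {
  apps: [{
    name: "Asset Optimizer",
    script: "server.js",
    exec_mode: "fork",        // separate processes; nginx does the balancing
    instances: 5,
    increment_var: "PORT",    // pm2 increments PORT for every instance
    env: { PORT: 3000 }
  }]
}

server.js would then use const PORT = process.env.PORT || 3000; instead of the hard-coded constant.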
As said above, the problem isn't with PM2; pm2 just processes whichever requests get sent to it. The problem is the layer above that distributes the requests, i.e. nginx.

"Dockerized" Node.js Application Not Responding to Host

I am trying to "Dockerize" a rather large, complicated Node.js application, which is roughly structured like so:
Nebula [root directory of the project; runs the Node.js application]
|__CosmosD3 [directory that contains all HTML and JS files]
|__Nebula-Pipeline [directory that contains most Python files that do all the heavy computations]
Our standard installation process for this project is rather complicated due to all the interconnected pieces that all must work together flawlessly to enable our front end JS files to communicate with our back end Python scripts. For Linux/Unix systems, this is the rough process (with commands run from within the Nebula directory):
Install Java. (Just about any version should do; we have a single Java file that needs to be run for a particular interaction in the front end)
Install Python 2.7 and pip
Run pip install --upgrade pip to ensure the latest pip is being used
Install Node.js, npm, and zmq using sudo apt-get install libzmq-dev npm nodejs-legacy
Run npm install to install our Node.js package dependencies
Run pip install -e ./Nebula-Pipeline to install all Python dependencies
Pray that everything has gone smoothly and run the application with npm start. The default port is 8081, so the application should be accessible through localhost:8081.
A very similar project to this one has already been "Dockerized." By tweaking the Dockerfile for the other project to more closely correspond to the steps above, this is the Dockerfile I've created:
FROM ubuntu:16.04
RUN mkdir /www
WORKDIR /www
RUN apt-get update
RUN apt-get install -y libzmq-dev npm nodejs-legacy python python-pip
RUN apt-get update && \
    apt-get install -y openjdk-8-jdk && \
    apt-get install -y ant && \
    apt-get clean;
# Fix certificate issues, found as of
# https://bugs.launchpad.net/ubuntu/+source/ca-certificates-java/+bug/983302
RUN apt-get update && \
    apt-get install ca-certificates-java && \
    apt-get clean && \
    update-ca-certificates -f;
# Setup JAVA_HOME, this is useful for docker commandline
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/
RUN export JAVA_HOME
##COPY ./lib/ /opt/lib
RUN pip install --upgrade pip
COPY ./Nebula /www
RUN ls -a /www
RUN npm install
RUN pip install -e /www/Nebula-Pipeline
EXPOSE 8081
CMD ["npm", "start"]
When I build this Dockerfile with docker build -t nebulaserver ., the image appears to be created successfully. When I run the image with docker run -p 8081:8081 nebulaserver, I get the following printout, which seems to indicate that everything is actually working properly. (Note that the "PORT: 8081" printout confirms the port the Node.js app is using.)
Michelles-MacBook-Pro-8$ docker run -p 8081:8081 nebulaserver
> Nebula#0.0.1 start /www
> node app.js
PORT:
8081
However, when I subsequently try to connect to localhost:8081, I don't get any kind of response. Additionally, I expect to see additional printouts from my Node.js server when it receives a request for one of the HTML pages. I don't see any of these printouts either. It's as if the port forwarding is not working properly. Based on everything I've read, I should be doing things correctly, but I've never used Docker before, so perhaps I'm missing something?
Anyone have any idea what I might be doing wrong?
EDIT:
Here's my app.js file, in case that helps figure out what's going on...
/* Import packages required in package.json */
/* Add these packages from the ../node_modules path */
var express = require('express');//A lightweight nodejs web framework
var path = require('path');//Ability to join filepaths to filenames.
var favicon = require('serve-favicon');//Set preferred icon in browser URL bar. Unused?
var logger = require('morgan');//HTTP request logger. Unused?
var cookieParser = require('cookie-parser');//Stores cookies in req.cookies
var bodyParser = require('body-parser');//Middleware parser for incoming request bodies

/* REST API routes */
var routes = require('./routes/index');//Points to /routes/index.js. Currently, index.js points to CosmosD3/CosmosD3.html

/* Connect to the databases */
//var mongo = require('mongodb');
//var monk = require('monk');
//var db = monk('localhost:27017/nodetest');
//var datasets = monk('localhost:27017/datasets');

/* The HTTP request handler */
var app = express();//Creates app from express class. (Baseline framework for an app. No web functionality).
var debug = require('debug')('Nebula:server');//Require the debug module. Pass it scoping 'Nebula:server'
var http = require('http').Server(app);//Create an http server on top of app.

/* The Socket.io WebSocket module */
var io = require('socket.io')(http);//Create an io/websocket on top of http object.

/* Our custom Nebula module handles the WebSocket synchronization */
var nebula = require('./nebula')(io);//Creates nebula layer on top of io.

/* Set the port we want to run on */
var port = process.env.PORT || 8080;
//app.set('port', port);
var host = '0.0.0.0';
app.listen(port, host);
console.log(`Running on http://${host}:${port}`);

/* view engine setup, currently not used */
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

// uncomment after placing your favicon in /public
//app.use(favicon(path.join(__dirname, 'public', 'favicon.ico')));
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));
app.use(cookieParser());

/* Expose everything in public/ through our web server */
app.use(express.static(path.join(__dirname, 'public')));
app.use("/cosmos", express.static(path.join(__dirname, 'CosmosD3')));

// Make our db accessible to our router
//app.use(function(req, res, next){
//  req.db = db;
//  req.datasets = datasets;
//  next();
//});

/* Initiate the REST API */
app.use('/', routes);

// catch 404 and forward to error handler
app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

// error handlers

// development error handler
// will print stacktrace
if (app.get('env') === 'development') {
  app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', {
      message: err.message,
      error: err
    });
  });
}

// production error handler
// no stacktraces leaked to user
app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.render('error', {
    message: err.message,
    error: {}
  });
});

/**
 * Event listener for HTTP server "error" event.
 */
function onError(error) {
  if (error.syscall !== 'listen') {
    throw error;
  }
  var bind = (typeof port === 'string' ? 'Pipe ' + port : 'Port ' + port);
  // handle specific listen errors with friendly messages
  switch (error.code) {
    case 'EACCES':
      console.error(bind + ' requires elevated privileges');
      process.exit(1);
      break;
    case 'EADDRINUSE':
      console.error(bind + ' is already in use');
      process.exit(1);
      break;
    default:
      throw error;
  }
}

/**
 * Event listener for HTTP server "listening" event.
 */
function onListening() {
  var addr = http.address();
  var bind = (typeof addr === 'string' ? 'pipe ' + addr : 'port ' + addr.port);
  debug('Listening on ' + bind);
}

/**
 * Listen on provided port, on all network interfaces.
 */
http.listen(port);
http.on('error', onError);
http.on('listening', onListening);
I have had this problem before, and (as mentioned by @davidmaze in his comment) I had to change the host my app was listening on. In my case, I was using the Express framework, which (despite the docs implying otherwise) was listening only on localhost (127.0.0.1), and I needed it to listen on 0.0.0.0. See lines 58-66 (the 2nd code block in the "Create the Node.js app" section of the Dockerizing a Node.js web app example) for where they explicitly tell Express to listen on 0.0.0.0.
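In code, the fix that answer describes is a one-line change to where the app listens; a sketch:

var express = require('express');
var app = express();
app.get('/', function (req, res) { res.send('up'); });

// 0.0.0.0 binds all interfaces, so Docker's -p port forwarding can reach the app;
// a server bound only to 127.0.0.1 inside the container is unreachable from the host.
app.listen(8081, '0.0.0.0');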
I finally figured it out!! After a ton of digging around, I discovered that (with Docker running inside a VM, as with Docker Toolbox / docker-machine) the default IP for Docker applications is http://192.168.99.100, as opposed to http://localhost. I got the answer from here:
https://forums.docker.com/t/docker-running-host-but-not-accessible/44082/13
On a Mac, I had to put the ports in two spots:
In the Dockerfile I needed to put:
EXPOSE 5000 5010 5011
before the CMD line (second to last line); it exposes 3 ports.
In docker-compose.yml (I have two dockers) I needed to put, under the app service:
ports:
  - "5000:5000"
  - "5010:5010"
  - "5011:5011"

Node.js quick file server (static files over HTTP)

Is there a ready-to-use Node.js tool (installed with npm) that would help me expose folder content as a file server over HTTP?
Example, if I have
D:\Folder\file.zip
D:\Folder\file2.html
D:\Folder\folder\file-in-folder.jpg
Then, after starting node node-file-server.js in D:\Folder\,
I could access the files via
http://hostname/file.zip
http://hostname/file2.html
http://hostname/folder/file-in-folder.jpg
A related question, Why is my node static file server dropping requests?, references some mystical "standard node.js static file server".
If there's no such tool, what framework should I use?
Related:
Basic static file server in NodeJS
A good "ready-to-use tool" option could be http-server:
npm install http-server -g
To use it:
cd D:\Folder
http-server
Or, like this:
http-server D:\Folder
Check it out: https://github.com/nodeapps/http-server
If you do not want to use a ready-made tool, you can use the code below, as demonstrated by me at https://developer.mozilla.org/en-US/docs/Node_server_without_framework:
var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function (request, response) {
    console.log('request starting...');

    var filePath = '.' + request.url;
    if (filePath == './')
        filePath = './index.html';

    var extname = path.extname(filePath);
    var contentType = 'text/html';
    switch (extname) {
        case '.js':
            contentType = 'text/javascript';
            break;
        case '.css':
            contentType = 'text/css';
            break;
        case '.json':
            contentType = 'application/json';
            break;
        case '.png':
            contentType = 'image/png';
            break;
        case '.jpg':
            contentType = 'image/jpg';
            break;
        case '.wav':
            contentType = 'audio/wav';
            break;
    }

    fs.readFile(filePath, function(error, content) {
        if (error) {
            if (error.code == 'ENOENT') {
                // file not found: serve the 404 page with a 404 status
                fs.readFile('./404.html', function(error, content) {
                    response.writeHead(404, { 'Content-Type': 'text/html' });
                    response.end(content, 'utf-8');
                });
            }
            else {
                response.writeHead(500);
                response.end('Sorry, check with the site admin for error: ' + error.code + ' ..\n');
            }
        }
        else {
            response.writeHead(200, { 'Content-Type': contentType });
            response.end(content, 'utf-8');
        }
    });

}).listen(8125);
console.log('Server running at http://127.0.0.1:8125/');
UPDATE
If you need to access your server from an external domain/file, you need to overcome CORS in your node.js file by writing the below, as I mentioned in a previous answer here:
// Website you wish to allow to connect
response.setHeader('Access-Control-Allow-Origin', '*');
// Request methods you wish to allow
response.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE');
// Request headers you wish to allow
response.setHeader('Access-Control-Allow-Headers', 'X-Requested-With,content-type');
// Set to true if you need the website to include cookies in the requests sent
// to the API (e.g. in case you use sessions)
response.setHeader('Access-Control-Allow-Credentials', true);
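These headers belong inside the request handler, before the response is written, and if the browser sends an OPTIONS preflight it should be answered directly; a minimal sketch of the placement:

var http = require('http');

http.createServer(function (request, response) {
  response.setHeader('Access-Control-Allow-Origin', '*');
  response.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE');
  response.setHeader('Access-Control-Allow-Headers', 'X-Requested-With,content-type');

  if (request.method === 'OPTIONS') { // answer preflight requests immediately
    response.writeHead(204);
    return response.end();
  }

  // ...serve the file exactly as in the handler shown earlier...
  response.end('ok');
}).listen(8125);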
UPDATE
As Adrian mentioned in the comments, he wrote an ES6 version with a full explanation here; I am just re-posting his code below, in case it disappears from the original site for any reason:
const http = require('http');
const url = require('url');
const fs = require('fs');
const path = require('path');
const port = process.argv[2] || 9000;

http.createServer(function (req, res) {
  console.log(`${req.method} ${req.url}`);

  // parse URL
  const parsedUrl = url.parse(req.url);
  // extract URL path
  let pathname = `.${parsedUrl.pathname}`;
  // based on the URL path, extract the file extension. e.g. .js, .doc, ...
  const ext = path.parse(pathname).ext;
  // maps file extension to MIME type
  const map = {
    '.ico': 'image/x-icon',
    '.html': 'text/html',
    '.js': 'text/javascript',
    '.json': 'application/json',
    '.css': 'text/css',
    '.png': 'image/png',
    '.jpg': 'image/jpeg',
    '.wav': 'audio/wav',
    '.mp3': 'audio/mpeg',
    '.svg': 'image/svg+xml',
    '.pdf': 'application/pdf',
    '.doc': 'application/msword'
  };

  fs.exists(pathname, function (exist) {
    if (!exist) {
      // if the file is not found, return 404
      res.statusCode = 404;
      res.end(`File ${pathname} not found!`);
      return;
    }

    // if is a directory, search for an index file matching the extension
    if (fs.statSync(pathname).isDirectory()) pathname += '/index' + ext;

    // read file from file system
    fs.readFile(pathname, function(err, data){
      if (err) {
        res.statusCode = 500;
        res.end(`Error getting the file: ${err}.`);
      } else {
        // if the file is found, set Content-type and send data
        res.setHeader('Content-type', map[ext] || 'text/plain');
        res.end(data);
      }
    });
  });

}).listen(parseInt(port));

console.log(`Server listening on port ${port}`);
For people wanting a server runnable from within NodeJS script:
You can use expressjs/serve-static which replaces connect.static (which is no longer available as of connect 3):
myapp.js:
var http = require('http');
var finalhandler = require('finalhandler');
var serveStatic = require('serve-static');

var serve = serveStatic("./");

var server = http.createServer(function(req, res) {
  var done = finalhandler(req, res);
  serve(req, res, done);
});

server.listen(8000);
and then from command line:
$ npm install finalhandler serve-static
$ node myapp.js
I know it's not Node, but I've used Python's SimpleHTTPServer:
python -m SimpleHTTPServer [port]
(On Python 3 the equivalent is python3 -m http.server [port].) It works well and comes with Python.
From npm@5.2.0, npm started installing a new binary alongside the usual npm, called npx. So now, one-liners to create a static http server from the current directory:
npx serve
or
npx http-server
connect could be what you're looking for.
Installed easily with:
npm install connect
Then the most basic static file server could be written as:
var connect = require('connect'),
directory = '/path/to/Folder';
connect()
.use(connect.static(directory))
.listen(80);
console.log('Listening on port 80.');
One-line™ Proofs instead of promises
The first is http-server, hs:
npm i -g http-server // install
hs C:\repos // run with one line?? FTW!!
The second is serve, by ZEIT.co:
npm i -g serve // install
serve C:\repos // run with one line?? FTW!!
The following are the available options, if that helps you decide:
C:\Users\Qwerty>http-server --help
usage: http-server [path] [options]
options:
-p Port to use [8080]
-a Address to use [0.0.0.0]
-d Show directory listings [true]
-i Display autoIndex [true]
-g --gzip Serve gzip files when possible [false]
-e --ext Default file extension if none supplied [none]
-s --silent Suppress log messages from output
--cors[=headers] Enable CORS via the "Access-Control-Allow-Origin" header
Optionally provide CORS headers list separated by commas
-o [path] Open browser window after starting the server
-c Cache time (max-age) in seconds [3600], e.g. -c10 for 10 seconds.
To disable caching, use -c-1.
-U --utc Use UTC time format in log messages.
-P --proxy Fallback proxy if the request cannot be resolved. e.g.: http://someurl.com
-S --ssl Enable https.
-C --cert Path to ssl cert file (default: cert.pem).
-K --key Path to ssl key file (default: key.pem).
-r --robots Respond to /robots.txt [User-agent: *\nDisallow: /]
-h --help Print this list and exit.
C:\Users\Qwerty>serve --help
Usage: serve.js [options] [command]
Commands:
help Display help
Options:
-a, --auth Serve behind basic auth
-c, --cache Time in milliseconds for caching files in the browser
-n, --clipless Don't copy address to clipboard (disabled by default)
-C, --cors Setup * CORS headers to allow requests from any origin (disabled by default)
-h, --help Output usage information
-i, --ignore Files and directories to ignore
-o, --open Open local address in browser (disabled by default)
-p, --port Port to listen on (defaults to 5000)
-S, --silent Don't log anything to the console
-s, --single Serve single page applications (sets `-c` to 1 day)
-t, --treeless Don't display statics tree (disabled by default)
-u, --unzipped Disable GZIP compression
-v, --version Output the version number
If you need to watch for changes, see hostr, credit Henry Tseng's answer
Install express using npm: https://expressjs.com/en/starter/installing.html
Create a file named server.js at the same level of your index.html with this content:
var express = require('express');
var server = express();
server.use(express.static(__dirname));
server.listen(8080);
This will load your index.html file. If you wish to specify the html file to load, use this syntax:
server.use('/', express.static(__dirname + '/myfile.html'));
If you wish to put it in a different location, set the path on the third line:
server.use('/', express.static(__dirname + '/public'));
CD to the folder containing your file and run node from the console with this command:
node server.js
Browse to localhost:8080
#DEMO/PROTO SERVER ONLY
If that's all you need, try this:
const fs = require('fs'),
      http = require('http'),
      arg = process.argv.slice(2),
      rootdir = arg[0] || process.cwd(),
      port = process.env.PORT || 9000,
      hostname = process.env.HOST || '127.0.0.1';

//tested on node=v10.19.0
http.createServer(function (req, res) {
  try {
    // change 'path///to/////dir' -> 'path/to/dir'
    const req_url = decodeURIComponent(req.url).replace(/\/+/g, '/');
    const stats = fs.statSync(rootdir + req_url);
    if (stats.isFile()) {
      const buffer = fs.createReadStream(rootdir + req_url);
      buffer.on('open', () => buffer.pipe(res));
      return;
    }
    if (stats.isDirectory()) {
      // Get the list of files and folders in the requested directory
      const lsof = fs.readdirSync(rootdir + req_url, { encoding: 'utf8', withFileTypes: false });
      // make an html page with the list of files and send it to the browser
      res.writeHead(200, { 'Content-Type': 'text/html; charset=utf-8' });
      res.end(html_page(`http://${hostname}:${port}`, req_url, lsof));
      return;
    }
  } catch (err) {
    res.writeHead(404);
    res.end(String(err));
    return;
  }
}).listen(port, hostname, () => console.log(`Server running at http://${hostname}:${port}`));

function html_page(host, req_url, lsof) { // function declaration: can be called before it is defined
  // Add links to the root directory and the parent directory, unless already at the root
  // (the anchor tags were stripped by HTML escaping in the original post; reconstructed here)
  const list = req_url == '/' ? [] : [
    `<a href="${host}/">/</a>`,
    `<a href="${host}${req_url}/..">..</a>`
  ];
  const templete = (host, req_url, file) => { // function expression: cannot be called before it is defined
    return `<a href="${host}${req_url}/${file}">${file}</a>`;
  };
  // Add links for all the files and folders in the requested directory
  lsof.forEach(file => {
    list.push(templete(host, req_url, file));
  });
  return `
<!DOCTYPE html>
<html lang="en">
<head>
  <meta http-equiv="content-type" content="text/html" charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>Directory of ${req_url}</title>
</head>
<body>
  <h2>Directory of ${req_url}</h2>
  ${list.join('<br/>\n')}
</body>
</html>`
}
In plain node.js:
const http = require('http')
const fs = require('fs')
const path = require('path')

process.on('uncaughtException', err => console.error('uncaughtException', err))
process.on('unhandledRejection', err => console.error('unhandledRejection', err))

const publicFolder = process.argv.length > 2 ? process.argv[2] : '.'
const port = process.argv.length > 3 ? process.argv[3] : 8080

const mediaTypes = {
  zip: 'application/zip',
  jpg: 'image/jpeg',
  html: 'text/html',
  /* add more media types */
}

const server = http.createServer(function(request, response) {
  console.log(request.method + ' ' + request.url)

  const filepath = path.join(publicFolder, request.url)
  fs.readFile(filepath, function(err, data) {
    if (err) {
      response.statusCode = 404
      return response.end('File not found or you made an invalid request.')
    }

    let mediaType = 'text/html'
    const ext = path.extname(filepath)
    if (ext.length > 0 && mediaTypes.hasOwnProperty(ext.slice(1))) {
      mediaType = mediaTypes[ext.slice(1)]
    }

    response.setHeader('Content-Type', mediaType)
    response.end(data)
  })
})

server.on('clientError', function onClientError(err, socket) {
  console.log('clientError', err)
  socket.end('HTTP/1.1 400 Bad Request\r\n\r\n')
})

server.listen(port, '127.0.0.1', function() {
  console.log('👨‍🔧 Development server is online.')
})
This is a simple node.js server that only serves requested files in a certain directory.
Usage:
node server.js folder port
folder may be absolute or relative depending on the server.js location. The default value is ., the directory in which you run the node server.js command.
port is 8080 by default but you can specify any port available in your OS.
In your case, I would do:
cd D:\Folder
node server.js
You can browse the files under D:\Folder from a browser by typing http://127.0.0.1:8080/somefolder/somefile.html
There is another static web server that is quite nice: browser-sync.
It can be downloaded using node package manager:
npm install -g browser-sync
After installation, navigate to the project folder in the cmd prompt and just run the following:
browser-sync start --server --port 3001 --files="./*"
It will start serving all the files in the current folder to the browser.
More can be found out from BrowserSync
Thanks.
Here is my one-file, lightweight, no-dependency node.js static file web-server pet project, which I believe is a quick and rich tool whose use is as easy as issuing this command on your Linux/Unix/macOS terminal (or termux on Android) when node.js (or nodejs-legacy on Debian/Ubuntu) is installed:
curl pad.js.org | node
(different commands exist for Windows users on the documentation)
It supports different things that I believe can be found useful,
Hierarchical directory index creation/serving
With sort capability on the different criteria
Upload from the browser by [multi-file] drag-and-drop, file/text-only copy-paste, and system clipboard screenshot paste on Chrome, Firefox and other browsers, possibly with some limitations (which can be turned off by the command line options it provides)
Folder/note-creation/upload button
Serving correct MIMEs for well known file types (with possibility for disabling that)
Possibility of installation as a npm package and local tool or, one-linear installation as a permanent service with Docker
HTTP 206 file serving (multipart file transfer) for faster transfers
Uploads from terminal and browser console (in fact it was originally intended to be a file-system proxy for JS console of browsers on other pages/domains)
CORS download/uploads (which also can be turned off)
Easy HTTPS integration
Lightweight command line options for achieving better secure serving with it:
With my patch on node.js 8, you can have access to the options without first installation: curl pad.js.org | node - -h
Or first install it as a system-global npm package by [sudo] npm install -g pad.js and then use its installed version to have access to its options: pad -h
Or use the provided Docker image which uses relatively secure options by default. [sudo] docker run --restart=always -v /files:/files --name pad.js -d -p 9090:9090 quay.io/ebraminio/pad.js
The features described above are mostly documented on the main page of the tool, http://pad.js.org, which by a nice trick I used is also the place the tool source itself is served from!
The tool source is on GitHub which welcomes your feedback, feature requests and ⭐s!
You can use the NPM serve package for this; if you don't need the NodeJS stuff, it is a quick and easy-to-use tool:
1 - Install the package on your PC:
npm install -g serve
2 - Serve your static folder with serve <path> :
d:> serve d:\StaticSite
It will show you on which port your static folder is being served; just navigate to the host, like:
http://localhost:3000
I haven't had much luck with any of the answers on this page; however, the following seemed to do the trick.
Add a server.js file with the following content:
const express = require('express')
const path = require('path')
const port = process.env.PORT || 3000
const app = express()

// serve static assets normally
app.use(express.static(__dirname + '/dist'))

// handle every other route with index.html, which will contain
// a script tag to your application's JavaScript file(s).
app.get('*', function (request, response) {
  response.sendFile(path.resolve(__dirname, 'dist', 'index.html'))
})

app.listen(port)
console.log("server started on port " + port)
Also make sure that you require express. Run yarn add express --save or npm install express --save depending on your setup (I can recommend yarn, it's pretty fast).
You may change dist to whatever folder you are serving your content from. For my simple project, I wasn't serving from any folder, so I simply removed the dist filename.
Then you may run node server.js. As I had to upload my project to a Heroku server, I needed to add the following to my package.json file:
"scripts": {
  "start": "node server.js"
}
Below worked for me:
Create a file app.js with below contents:
// app.js
var fs = require('fs'),
    http = require('http');

http.createServer(function (req, res) {
  fs.readFile(__dirname + req.url, function (err, data) {
    if (err) {
      res.writeHead(404);
      res.end(JSON.stringify(err));
      return;
    }
    res.writeHead(200);
    res.end(data);
  });
}).listen(8080);
Create a file index.html with below contents:
Hi
Start a command line:
cmd
Run below in cmd:
node app.js
Goto below URL, in chrome:
http://localhost:8080/index.html
That's all. Hope that helps.
Source: https://nodejs.org/en/knowledge/HTTP/servers/how-to-serve-static-files/
If you use the Express framework, this functionality comes ready to go.
To set up a simple file-serving app, just do this:
mkdir yourapp
cd yourapp
npm install express
node_modules/express/bin/express
Searching the NPM registry at https://npmjs.org/search?q=server, I found static-server https://github.com/maelstrom/static-server:
Ever needed to send a colleague a file, but can't be bothered emailing
the 100MB beast? Wanted to run a simple example JavaScript
application, but had problems with running it through the file:///
protocol? Wanted to share your media directory at a LAN without
setting up Samba, or FTP, or anything else requiring you to edit
configuration files? Then this file server will make your life that
little bit easier.
To install the simple static stuff server, use npm:
npm install -g static-server
Then to serve a file or a directory, simply run
$ serve path/to/stuff
Serving path/to/stuff on port 8001
That could even list folder content.
Unfortunately, it couldn't serve files :)
You can try serve-me
Using it is so easy:
ServeMe = require('serve-me')();
ServeMe.start(3000);
That's all.
PS: The folder served by default is "public".
Here's another simple web server.
https://www.npmjs.com/package/hostr
Install
npm install -g hostr
Change the working directory
cd myprojectfolder/
And start
hostr
For a healthy increase in performance when using node to serve static resources, I recommend using Buffet. It works similarly to a web application accelerator (also known as a caching HTTP reverse proxy), but it just loads the chosen directory into memory.
Buffet takes a fully-buffered approach -- all files are fully loaded into memory when your app boots, so you will never feel the burn of the filesystem. In practice, this is immensely efficient. So much so that putting Varnish in front of your app might even make it slower!
We use it on the codePile site and found an increase from ~700 requests/sec to >4k requests/sec on a page that downloads 25 resources under a 1k concurrent-user connection load.
Example:
var server = require('http').createServer();
var buffet = require('buffet')({ root: './file' });

server.on('request', function (req, res) {
  buffet(req, res, function () {
    buffet.notFound(req, res);
  });
});

server.listen(3000, function () {
  console.log('test server running on port 3000');
});
Take a look at that link.
You only need to install the express module of node js.
var express = require('express');
var app = express();
app.use('/Folder', express.static(__dirname + '/Folder'));
You can access your file like http://hostname/Folder/file.zip
First install the node-static server via npm install node-static -g (-g installs it globally on your system), then navigate to the directory where your files are located and start the server with static. It listens on port 8080; navigate to the browser and type localhost:8080/yourhtmlfilename.
A simple Static-Server using connect
var connect = require('connect'),
    directory = __dirname,
    port = 3000;

connect()
  .use(connect.logger('dev'))
  .use(connect.static(directory))
  .listen(port);

console.log('Listening on port ' + port);
See also Using node.js as a simple web server
It isn't on NPM, yet, but I built a simple static server on Express that also allows you to accept form submissions and email them through a transactional email service (Sendgrid for now, Mandrill coming).
https://github.com/jdr0dn3y/nodejs-StatServe
For the benefit of searchers: I liked Jakub g's answer, but wanted a little error handling. Obviously it's best to handle errors properly, but this should help prevent a site stopping if an error occurs. Code below:
var http = require('http');
var express = require('express');

process.on('uncaughtException', function(err) {
  console.log(err);
});

var server = express();
server.use(express.static(__dirname));

var port = 10001;
server.listen(port, function() {
  console.log('listening on port ' + port);
  //var err = new Error("This error won't break the application...");
  //throw err;
});
For dev work you can use (express 4)
https://github.com/appsmatics/simple-httpserver.git
I use Houston at work and for personal projects, it works well for me.
https://github.com/alejandro/Houston
const http = require('http');
const fs = require('fs');
const url = require('url');
const path = require('path');

let mimeTypes = {
  '.html': 'text/html',
  '.css': 'text/css',
  '.js': 'text/javascript',
  '.jpg': 'image/jpeg',
  '.png': 'image/png',
  '.ico': 'image/x-icon',
  '.svg': 'image/svg+xml',
  '.eot': 'application/vnd.ms-fontobject',
  '.ttf': 'application/font-sfnt'
};

http.createServer(function (request, response) {
  let pathName = url.parse(request.url).path;
  if (pathName === '/') {
    pathName = '/index.html';
  }
  pathName = pathName.substring(1, pathName.length);

  let extName = path.extname(pathName);
  let staticFiles = `${__dirname}/template/${pathName}`;

  if (extName == '.jpg' || extName == '.png' || extName == '.ico' || extName == '.eot' || extName == '.ttf' || extName == '.svg') {
    // serve binary assets directly
    let file = fs.readFileSync(staticFiles);
    response.writeHead(200, { 'Content-Type': mimeTypes[extName] });
    response.write(file, 'binary');
    response.end();
  } else {
    fs.readFile(staticFiles, 'utf8', function (err, data) {
      if (!err) {
        response.writeHead(200, { 'Content-Type': mimeTypes[extName] });
        response.end(data);
      } else {
        response.writeHead(404, { 'Content-Type': 'text/html;charset=utf8' });
        response.write(`<strong>${staticFiles}</strong> file was not found.`);
        response.end();
      }
    });
  }
}).listen(8081);
If you are interested in an ultra-light HTTP server without any prerequisites,
you should have a look at: mongoose
You also asked why requests are being dropped - I'm not sure of the specific reason in your case, but overall you are better off serving static content with dedicated middleware (nginx, S3, CDN), because Node is really not optimized for this networking pattern. See further explanation here (bullet 13):
http://goldbergyoni.com/checklist-best-practice-of-node-js-in-production/
