Node.js app running on localhost:3000 is not accessible on Linux server

I want to deploy my web app on a cloud server running CentOS 7. I have a static IP address like "34.80.XXX.XX". If I run my web app on port 80, I can see the page when I enter "34.80.XXX.XX:80".
But if I run it on port 3000 and enter "34.80.XXX.XX:3000", it does not work.
I have tried to stop my firewall:
# systemctl stop firewalld
And used the command below to check that no other program is running on port 3000:
# netstat -tnlp
Here is the code in app.js:
const Koa = require('koa')
const json = require('koa-json')
const app = new Koa()
const PORT = 3000
// make JSON Prettier middleware
app.use(json())
// Simple middleware example
app.use(async ctx => {
  ctx.body = {msg: 'Hello World'}
})
app.listen(PORT)
console.log(`server run at http://localhost:${PORT}`)
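(As an aside: Koa's app.listen simply forwards its arguments to Node's http.Server#listen, so the bind host can be spelled out explicitly if you want to rule that out. A minimal variation, shown only as a sketch, since on most dual-stack systems the default :::3000 listener already accepts IPv4 connections:)
// explicitly bind to all IPv4 interfaces instead of the dual-stack default
app.listen(PORT, '0.0.0.0', () => {
  console.log(`server run at http://0.0.0.0:${PORT}`)
})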
I use pm2 to run it in the background:
[root@instance-1 01-hello]# pm2 start app.js -n hello
[PM2] Starting /home/xing/01-hello/app.js in fork_mode (1 instance)
[PM2] Done.
┌─────┬──────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ hello │ default │ N/A │ fork │ 3835 │ 0s │ 0 │ online │ 0% │ 12.6mb │ root │ disabled │
└─────┴──────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Then I run netstat -tnlp again to check that app.js is listening on port 3000:
[root@instance-1 01-hello]# netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 15958/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1184/master
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 2421/mongod
tcp6 0 0 :::22 :::* LISTEN 15958/sshd
tcp6 0 0 :::3000 :::* LISTEN 3835/node /home/xin
tcp6 0 0 ::1:25 :::* LISTEN 1184/master
I have been stuck on this point for a long time. Is there any good solution?

Related

How can I print Cypress base Url in the `Git bash CLI console`

How can I print the Cypress baseUrl in the Git Bash CLI console while running the Cypress test? Could someone please advise?
When running headless, the browser logs don't show in the terminal, but those from the Cypress node process (aka cy.task()) do show up.
I assume the baseUrl is changing during the test; here is an example that does that and logs the current value using a task.
The configured value is http://localhost:3000, and the test changes it to http://localhost:3001.
cypress.config.js
const { defineConfig } = require('cypress')
module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      console.log('Printing baseUrl - during setup', config.baseUrl)
      on('task', {
        logBaseUrl(baseUrl) {
          console.log('Printing baseUrl - from task', baseUrl)
          return null
        }
      })
    },
    baseUrl: 'http://localhost:3000'
  },
})
test
it('logs baseUrl to the terminal in run mode', () => {
  console.log('Printing baseUrl - directly from test', Cypress.config('baseUrl')) // no show
  Cypress.config('baseUrl', 'http://localhost:3001')
  cy.task('logBaseUrl', Cypress.config('baseUrl'))
})
terminal
Printing baseUrl - during setup http://localhost:3000
====================================================================================================
Running: log-baseurl.cy.js (1 of 1)
Printing baseUrl - from task http://localhost:3001
√ logs baseUrl to the terminal in run mode (73ms)
1 passing (115ms)
(Results)
┌────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Tests: 1 │
│ Passing: 1 │
│ Failing: 0 │
│ Pending: 0 │
│ Skipped: 0 │
│ Screenshots: 0 │
│ Video: true │
│ Duration: 0 seconds │
│ Spec Ran: log-baseurl.cy.js │
└────────────────────────────────────────────────────────────────────────────────────────────────┘
====================================================================================================
(Run Finished)
Spec Tests Passing Failing Pending Skipped
┌────────────────────────────────────────────────────────────────────────────────────────────────┐
│ √ log-baseurl.cy.js 108ms 1 1 - - - │
└────────────────────────────────────────────────────────────────────────────────────────────────┘
√ All specs passed! 108ms 1 1 - - -
Done in 18.49s.
You can also use a plain console.log(Cypress.config().baseUrl).
That does not require Git Bash, only Node.js installed on your Windows machine.
Or you can read Cypress.config('baseUrl') directly in your tests.

Meteor Verifying Deployment - Connection refused

I am trying to deploy a Meteor application, but the Verifying Deployment step fails with the following error message -
------------------------------------STDERR------------------------------------
: (7) Failed to connect to 172.17.0.2 port 3000: Connection refused
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (7) Failed to connect to 172.17.0.2 port 3000: Connection refused
=> Logs:
=> Setting node version
NODE_VERSION=14.17.4
v14.17.4 is already installed.
Now using node v14.17.4 (npm v6.14.14)
default -> 14.17.4 (-> v14.17.4 *)
=> Starting meteor app on port 3000
=> Redeploying previous version of the app
When I run sudo netstat -tulpn | grep LISTEN on the server, it shows this:
tcp 0 0 10.0.3.1:53 0.0.0.0:* LISTEN 609/dnsmasq
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 406/systemd-resolve
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 745/sshd: /usr/sbin
tcp6 0 0 :::22 :::* LISTEN 745/sshd: /usr/sbin
When I run sudo docker ps I receive the following output -
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e51b1b4bf3a3 mup-appName:latest "/bin/sh -c 'exec $M…" About an hour ago Restarting (1) 49 seconds ago appName
68b723183f3d mongo:3.4.1 "/entrypoint.sh mong…" 9 days ago Restarting (100) 9 seconds ago mongodb
In my firewall I have also opened port 3000.
If I check whether Docker is running, it seems like there is no container actually running!
Also, in my mup.js file I am using http and not https.
module.exports = {
  servers: {
    one: {
      host: 'xx.xx.xxx.xxx',
      username: 'ubuntu',
      pem: '/home/runner/.ssh/id_rsa'
    }
  },
  meteor: {
    name: 'appName',
    path: '../../',
    docker: {
      image: 'zodern/meteor:latest',
    },
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true
    },
    env: {
      PORT: 3000,
      ROOT_URL: 'http://dev-api.appName.com/',
      NODE_ENV: 'production',
      MAIL_URL: 'smtp://xxxx:xxx/eLPCB3nw3jubkq:@email-smtp.eu-north-1.amazonaws.com:587',
      MONGO_URL: 'mongodb+srv://xxx:xx@xxx.iiitd.mongodb.net/Development?retryWrites=true&w=majority'
    },
    deployCheckWaitTime: 15
  },
  proxy: {
    domains: 'dev.xxx.com',
    ssl: {
      letsEncryptEmail: 'info@xxx.com'
    }
  }
}
Any idea what might cause this issue?
I don't know why, but in the MUP docs the correct image name is zodern/meteor:root
If your app is slow to start, increase the deployCheckWaitTime. In my complex apps I put 600, just to ensure the app is up.
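For reference, a rough sketch of how those two suggestions might look inside the meteor section of the mup.js above (every other value stays as in the question):
meteor: {
  // ...
  docker: {
    image: 'zodern/meteor:root',   // image name used in the MUP docs
  },
  // give a slow-starting app more time before the deployment verification runs
  deployCheckWaitTime: 600,
  // ...
},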

Requests are not distributed across their worker processes

I was just experimenting with worker processes, hence I tried this:
const http = require("http");
const cluster = require("cluster");
const CPUs = require("os").cpus();
const numCPUs = CPUs.length;

if (cluster.isMaster) {
  console.log("This is the master process: ", process.pid);
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on("exit", (worker) => {
    console.log(`worker process ${process.pid} has died`);
    console.log(`Only ${Object.keys(cluster.workers).length} remaining...`);
  });
} else {
  http
    .createServer((req, res) => {
      res.end(`process: ${process.pid}`);
      if (req.url === "/kill") {
        process.exit();
      }
      console.log(`serving from ${process.pid}`);
    })
    .listen(3000);
}
I used loadtest to check whether requests are distributed across the worker processes, but I got the same process.pid every time:
This is the master process: 6984
serving from 13108
serving from 13108
serving from 13108
serving from 13108
serving from 13108
...
Even when I kill one of them, I get the same process.pid:
worker process 6984 has died
Only 3 remaining...
serving from 5636
worker process 6984 has died
Only 2 remaining...
worker process 6984 has died
Only 1 remaining...
How am I getting the same process.pid when I have killed that worker? And why are my requests not distributed across the worker processes?
I even used pm2 to test cluster mode:
$ pm2 start app.js -i 3
[PM2] Starting app.js in cluster_mode (3 instances)
[PM2] Done.
┌────┬────────────────────┬──────────┬──────┬───────────┬──────────┬──────────┐
│ id │ name │ mode │ ↺ │ status │ cpu │ memory │
├────┼────────────────────┼──────────┼──────┼───────────┼──────────┼──────────┤
│ 0 │ app │ cluster │ 0 │ online │ 0% │ 31.9mb │
│ 1 │ app │ cluster │ 0 │ online │ 0% │ 31.8mb │
│ 2 │ app │ cluster │ 0 │ online │ 0% │ 31.8mb │
└────┴────────────────────┴──────────┴──────┴───────────┴──────────┴──────────┘
For loadtest -n 50000 http://localhost:3000 I checked pm2 monit:
$ pm2 monit
┌─ Process List ───────────────────────────────────────────────────┐┌── app Logs ────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│[ 0] app Mem: 43 MB CPU: 34 % online ││ │
│[ 1] app Mem: 28 MB CPU: 0 % online ││ │
│[ 2] app Mem: 27 MB CPU: 0 % online ││ │
└──────────────────────────────────────────────────────────────────┘└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
┌─ Custom Metrics ─────────────────────────────────────────────────┐┌─ Metadata ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Heap Size 20.81 MiB ││ App Name app │
│ Heap Usage 45.62 % ││ Namespace default │
│ Used Heap Size 9.49 MiB ││ Version N/A │
│ Active requests 0 ││ Restarts 0 │
└──────────────────────────────────────────────────────────────────┘└──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
But surprisingly, instances 1 and 2 never received any requests, and the app log panel stayed empty.
Update 1
I still couldn't figure out a solution. If any further information is needed, please ask. This is the first time I have faced this issue, which is probably why I couldn't pin down where exactly the problem occurs.
Update 2
After getting some answers I tried testing again with a simple node server, once using pm2 without any config and once using the config suggested in @Naor Tedgi's answer.
Now the server is not running at all.
Probably it is OS related; I am on Ubuntu 20.04.
You don't have cluster mode enabled. If you want to use pm2 as a load balancer, you need to add
exec_mode: "cluster"
Add this config file and name it config.js:
module.exports = {
  apps : [{
    script : "app.js",
    instances : "max",
    exec_mode : "cluster"
  }]
}
and run pm2 start config.js.
Then the CPU usage will be divided equally.
Tested on:
os macOS Catalina 10.15.7
node v14.15.4
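To see the distribution with that config, a minimal app.js that just reports its pid is enough (a sketch assuming app.js can be swapped out; each pm2 cluster instance is a separate process, so the pid tells you which one answered):
const http = require("http");

http.createServer((req, res) => {
  // under pm2 cluster mode every instance has its own pid
  res.end(`process: ${process.pid}`);
}).listen(3000);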
Not sure why, but it seems that for whatever reason cluster doesn't behave on your machine the way it should.
Instead of using node.js for balancing, you can go with nginx. On the nginx side it's fairly easy if one of the available strategies is enough for you: http://nginx.org/en/docs/http/load_balancing.html
Then you need to make sure that your node processes are assigned different ports. In pm2 you can use https://pm2.keymetrics.io/docs/usage/environment/ to either manually increment the port based on the instance id or delegate it fully to pm2.
Needless to say, you'll have to send your requests to nginx in this case.
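As an illustration of the port-per-instance route, a minimal sketch using pm2's NODE_APP_INSTANCE environment variable (essentially what the next question's code does):
const http = require('http');

// pm2 gives each instance a distinct NODE_APP_INSTANCE value: 0, 1, 2, ...
const instance = Number(process.env.NODE_APP_INSTANCE) || 0;
const port = 3000 + instance; // 3000, 3001, 3002, ...: one port per instance

http.createServer((req, res) => {
  res.end(`served by ${process.pid} on port ${port}`);
}).listen(port, () => console.log(`instance ${instance} listening on ${port}`));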

NodeJS - express server, pm2 cluster and nginx balancing - multiple threads

I'm trying to set up an Express server to listen on multiple threads. I've used pm2 to set up the application on ports 3000, 3001, 3002, 3003, but requests are still waiting for each other...
express application index.js:
const express = require('express')
const axios = require('axios')
const app = express()

app.get('/', async (req, res) => {
  console.log('-----> GOT REQUEST -> ' + (new Date()).getTime());
  let resp = await axios.get("here some correct http get");
  res.send("Hello world!")
  console.log(' Response time: ' + (new Date()).getTime());
})

let instance = +process.env.NODE_APP_INSTANCE || 0;
let port = (+process.env.PORT || 3000) + instance;
app.listen(port, () => console.log('Example app listening on port ' + port))
So every instance is on a different port. Now it's time for nginx:
upstream test_upstream {
  least_conn;
  server 127.0.0.1:3000;
  server 127.0.0.1:3001;
  server 127.0.0.1:3002;
  server 127.0.0.1:3003;
}

server {
  listen 8000;

  location / {
    proxy_hide_header Access-Control-Allow-Origin;
    add_header Access-Control-Allow-Origin * always;
    proxy_hide_header Access-Control-Allow-Methods;
    add_header Access-Control-Allow-Methods "GET,POST,DELETE,PUT,OPTIONS" always;
    proxy_hide_header Access-Control-Allow-Headers;
    add_header Access-Control-Allow-Headers "Authorization, X-Requested-With, Content-Type" always;
    proxy_hide_header Access-Control-Allow-Credentials;
    add_header Access-Control-Allow-Credentials "true" always;
    if ($request_method = OPTIONS ) { # Allow CORS
      add_header Access-Control-Allow-Origin *;
      add_header Access-Control-Allow-Methods "GET,POST,DELETE,PUT,OPTIONS";
      add_header Access-Control-Allow-Headers "Authorization, X-Requested-With, Content-Type";
      add_header Access-Control-Allow-Credentials "true" always;
      add_header Content-Length 0;
      add_header Content-Type text/plain;
      add_header Allow GET,POST,DELETE,PUT,OPTIONS;
      return 200;
    }
    proxy_pass http://test_upstream;
  }
}
So far so good. My environment:
node v10.3.0
8 CPU cores, but I'm using only 4 instances
OK, I started the application:
┌──────────┬────┬─────────┬───────┬────────┬─────────┬────────┬─────┬───────────┬───────────────┬──────────┐
│ App name │ id │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ user │ watching │
├──────────┼────┼─────────┼───────┼────────┼─────────┼────────┼─────┼───────────┼───────────────┼──────────┤
│ index │ 0 │ cluster │ 57069 │ online │ 6 │ 17m │ 0% │ 39.7 MB │ administrator │ disabled │
│ index │ 1 │ cluster │ 57074 │ online │ 6 │ 17m │ 0% │ 39.0 MB │ administrator │ disabled │
│ index │ 2 │ cluster │ 57091 │ online │ 6 │ 17m │ 0% │ 37.5 MB │ administrator │ disabled │
│ index │ 3 │ cluster │ 57097 │ online │ 6 │ 17m │ 0% │ 38.8 MB │ administrator │ disabled │
└──────────┴────┴─────────┴───────┴────────┴─────────┴────────┴─────┴───────────┴───────────────┴──────────┘
Now it's time to invoke it. I want to send multiple requests at the same time:
async sendRequest() {
  const startTime = performance.now();
  console.log("Sending request");
  const els = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
  const promises = els.map(() => axios.get("http://my_server_with_nginx:8000"));
  let results = await Promise.all(promises);
  console.log(results);
  const stopTime = performance.now();
  console.log("Time of request: " + (stopTime - startTime));
}
And finally, the node app log:
0|index | -----> GOT REQUEST -> 1527796135425
0|index | Response time: 1527796135572
1|index | -----> GOT REQUEST -> 1527796135595
1|index | Response time: 1527796135741
2|index | -----> GOT REQUEST -> 1527796135766
2|index | Response time: 1527796136354
3|index | -----> GOT REQUEST -> 1527796136381
3|index | Response time: 1527796136522
0|index | -----> GOT REQUEST -> 1527796136547
0|index | Response time: 1527796136678
1|index | -----> GOT REQUEST -> 1527796136702
1|index | Response time: 1527796136844
2|index | -----> GOT REQUEST -> 1527796136868
2|index | Response time: 1527796137026
3|index | -----> GOT REQUEST -> 1527796137098
3|index | Response time: 1527796137238
0|index | -----> GOT REQUEST -> 1527796137263
0|index | Response time: 1527796137395
1|index | -----> GOT REQUEST -> 1527796137419
1|index | Response time: 1527796137560
As we can see, it's correctly balancing requests across the nodes, but somewhere it stalls. How can I force it to run in parallel?
It turns out that everything works just fine.
The problem was in the browser. When the browser sends identical HTTP GET requests, it queues them. To change that, I had to change the invocation:
const els = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
const promises = els.map(() => axios.get("http://my_server_with_nginx:8000"));
to this:
const els = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
const promises = els.map((el) => axios.get(`http://my_server_with_nginx:8000?number=${el}`));
Are you sure that the requests really land on all cores? To test this you will need to use a synchronous function; if you use async methods, the requests will still be completed asynchronously even on one thread.
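To illustrate that last point, here is a sketch of a handler doing synchronous, CPU-bound work (the burn() helper is hypothetical, not part of the question's code); with this kind of load, the spread across workers is much easier to observe than with async handlers:
const express = require('express');
const app = express();

// synchronous busy-wait: blocks this worker's event loop for roughly `ms` milliseconds
function burn(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) { /* spin */ }
}

app.get('/', (req, res) => {
  burn(100); // CPU-bound work keeps this worker busy, so concurrent requests go to the others
  res.send(`served by ${process.pid}`);
});

const instance = +process.env.NODE_APP_INSTANCE || 0;
app.listen((+process.env.PORT || 3000) + instance);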

Why doesn't EXIM use the OpenDKIM service?

I tried to configure EXIM + OpenDKIM on CentOS 7
(everything the latest version from the repositories).
I used this guide to configure the system: https://www.rosehosting.com/blog/how-to-install-and-configure-dkim-with-opendkim-and-exim-on-a-centos-7-vps/ , but I didn't use the default selector, I tried to use a unique one.
The outgoing mail has no DKIM signature. I use this config in EXIM:
remote_smtp:
  driver = smtp
  DKIM_DOMAIN = $sender_address_domain
  DKIM_SELECTOR = 20170915exim
  DKIM_PRIVATE_KEY = ${if exists{/etc/opendkim/keys/$sender_address_domain/20170915exim}{/etc/opendkim/keys/$sender_address_domain/20170915exim}{0}}
  DKIM_CANON = relaxed
  DKIM_STRICT = 0
with this layout in /etc/opendkim:
.
├── keys
│ └── valami.com
│ ├── 20170915exim
│ └── 20170915exim.txt
├── KeyTable
├── SigningTable
└── TrustedHosts
But when I send a mail (with mail, via telnet, or any other way), EXIM doesn't use OpenDKIM. Of course opendkim is listening on its port:
tcp 0 0 127.0.0.1:8891 0.0.0.0:* LISTEN 6663/opendkim
When I send a mail from localhost to the outside:
2017-09-15 15:53:20 1dsr3M-0005fK-Ul <= root@valami.com H=localhost [127.0.0.1] P=smtp S=341
2017-09-15 15:53:21 1dsr3M-0005fK-Ul => xxx@gmail.com R=dnslookup T=remote_smtp H=gmail-smtp-in.l.google.com [74.125.133.26] X=TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128 CV=yes K C="250 2.0.0 OK o1si854413wrg.487 - gsmtp"
2017-09-15 15:53:21 1dsr3M-0005fK-Ul Completed
Why doesn't the Exim daemon call the OpenDKIM interface?
Thanks for your help!
I solved it!
I had to add the dkim_sign_headers option to the configuration file (note that the option names are now lowercase):
remote_smtp:
  driver = smtp
  dkim_domain = $sender_address_domain
  dkim_selector = 20170915exim
  dkim_private_key = ${if exists{/etc/opendkim/keys/$dkim_domain/$dkim_selector}{/etc/opendkim/keys/$dkim_domain/$dkim_selector}{0}}
  dkim_canon = relaxed
  dkim_strict = 0
  dkim_sign_headers = subject:to:from
