passenger not spawning app process - node.js

I compiled nginx with Passenger support, but after I start nginx (with Passenger), my Node.js app is not started.
Here are a few more details of my configuration.
nginx config file:
server {
    listen 80;
    server_name example.com www.example.com;

    location / {
        root /var/www/nodejs;
        index index.html index.htm index.php;
    }

    location ~ ^/letsplay(/.*|$) {
        alias /var/www/nodejs/letsplay/public$1;
        passenger_base_uri /letsplay;
        passenger_app_root /var/www/nodejs/letsplay;
        passenger_document_root /var/www/nodejs/letsplay/public;
        passenger_enabled on;
        passenger_startup_file restserver.js;
    }
}
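For reference, passenger_startup_file restserver.js is the entry point Passenger loads. A minimal sketch of what such a startup file could look like (my assumption of its shape, using only the built-in http module; my understanding is that Passenger intercepts the listen() call when it spawns the app, so the port here mostly matters when running the file standalone):

var http = require('http');

// Plain HTTP server; Passenger attaches it to its own socket when spawning the app.
var server = http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('letsplay REST server is up\n');
});

server.listen(3000);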
passenger-status
Version : 4.0.45
Date : 2014-06-30 18:19:47 +0000
Instance: 19879
----------- General information -----------
Max pool size : 6
Processes : 0
Requests in top-level queue : 0
----------- Application groups -----------
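Note that Passenger normally spawns application processes on demand, so "Processes : 0" right after startup is not by itself proof of failure; the real test is whether a request to the base URI spawns one (hypothetical hostname):

curl -i http://example.com/letsplay/
passenger-status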
This is the nginx error log:
2014/06/30 18:33:42 [notice] 20046#0: using the "epoll" event method
[ 2014-06-30 18:33:42.0225 20047/7fd9f3cec780 agents/Base.cpp:1599 ]: Random seed: 1404153222
[ 2014-06-30 18:33:42.0226 20047/7fd9f3cec780 agents/Watchdog/Main.cpp:698 ]: Starting Watchdog...
[ 2014-06-30 18:33:42.0231 20047/7fd9f3cec780 agents/Watchdog/Main.cpp:538 ]: Options: { 'analytics_log_user' => 'nobody', 'default_group' => 'nobody', 'default_python' => 'python', 'default_ruby' => 'ruby', 'default_user' => 'nobody', 'log_level' => '2', 'max_pool_size' => '6', 'passenger_root' => '/home/danny/programms/passenger', 'passenger_version' => '4.0.45', 'pool_idle_time' => '300', 'temp_dir' => '/tmp', 'union_station_gateway_address' => 'gateway.unionstationapp.com', 'union_station_gateway_port' => '443', 'user_switching' => 'true', 'web_server_passenger_version' => '4.0.45', 'web_server_pid' => '20046', 'web_server_type' => 'nginx', 'web_server_worker_gid' => '996', 'web_server_worker_uid' => '997' }
[ 2014-06-30 18:33:42.0280 20050/7fd6340c0780 agents/Base.cpp:1599 ]: Random seed: 1404153222
[ 2014-06-30 18:33:42.0281 20050/7fd6340c0780 agents/HelperAgent/Main.cpp:642 ]: Starting PassengerHelperAgent...
[ 2014-06-30 18:33:42.0297 20050/7fd6340c0780 agents/HelperAgent/Main.cpp:649 ]: PassengerHelperAgent online, listening at unix:/tmp/passenger.1.0.20046/generation-0/request
[ 2014-06-30 18:33:42.0367 20058/7f856ed59880 agents/Base.cpp:1599 ]: Random seed: 1404153222
[ 2014-06-30 18:33:42.0369 20058/7f856ed59880 agents/LoggingAgent/Main.cpp:333 ]: Starting PassengerLoggingAgent...
[ 2014-06-30 18:33:42.0377 20058/7f856ed59880 agents/LoggingAgent/Main.cpp:321 ]: PassengerLoggingAgent online, listening at unix:/tmp/passenger.1.0.20046/generation-0/logging
[ 2014-06-30 18:33:42.0379 20047/7fd9f3cec780 agents/Watchdog/Main.cpp:728 ]: All Phusion Passenger agents started!
2014/06/30 18:33:42 [notice] 20046#0: nginx/1.6.0
2014/06/30 18:33:42 [notice] 20046#0: built by gcc 4.8.2 20131212 (Red Hat 4.8.2-7) (GCC)
2014/06/30 18:33:42 [notice] 20046#0: OS: Linux 3.14.5-x86_64-linode42
2014/06/30 18:33:42 [notice] 20046#0: getrlimit(RLIMIT_NOFILE): 1024:4096
2014/06/30 18:33:42 [notice] 20065#0: start worker processes
2014/06/30 18:33:42 [notice] 20065#0: start worker process 20066
[ 2014-06-30 18:33:42.6525 20000/7f1a8e1ea880 agents/LoggingAgent/Main.cpp:344 ]: Logging agent exiting with code 0.
[ 2014-06-30 18:33:42.6562 19992/7fd7b6257780 agents/HelperAgent/Main.cpp:605 ]: It's now 5 seconds after all clients have disconnected. Proceeding with graceful exit.
[ 2014-06-30 18:33:42.6563 19992/7fd7b6257780 agents/HelperAgent/Main.cpp:506 ]: Shutting down helper agent...
[ 2014-06-30 18:33:42.6566 19992/7fd7b6257780 agents/HelperAgent/Main.cpp:513 ]: Destroying application pool...
[ 2014-06-30 18:33:42.6745 20045/7f8b0fc0e780 agents/Watchdog/Main.cpp:388 ]: All Phusion Passenger agent processes have exited. Forcing all subprocesses to shut down.
[ 2014-06-30 18:33:42.6745 20045/7f8b0fc0e780 agents/Watchdog/Main.cpp:390 ]: Sending SIGTERM
[ 2014-06-30 18:33:43.6748 20045/7f8b0fc0e780 agents/Watchdog/Main.cpp:395 ]: Sending SIGKILL
[ 2014-06-30 18:33:45.0296 20050/7fd6340ad700 Pool2/Pool.h:827 ]: Analytics collection time...
[ 2014-06-30 18:33:45.0300 20050/7fd6340ad700 Pool2/Pool.h:930 ]: Analytics collection done; next analytics collection in 4.970 sec

Related

Can't connect to mongo with nodeJs on raspberry Pi

With this Node.js code:
var MongoClient = require('mongodb').MongoClient;
...
MongoClient.connect('mongodb://127.0.0.1:27017', { useUnifiedTopology: true }, (error, db) => {
    if (error) {
        obs.error(this.throwExceptionError(error));
    } else {
        this.dbConnect = db.db(dbb);
        obs.next(true);
    }
});
I can connect to MongoDB on Windows and it works well.
When I try to execute this code on my Raspberry Pi, it doesn't work.
I installed MongoDB and launched:
service mongodb start
When I run mongo in a console, it works and I can see my databases and my collections.
The MongoDB log shows:
Sun Jun 28 13:43:51.566 [initandlisten]
Sun Jun 28 13:43:51.566 [initandlisten] db version v2.4.14
Sun Jun 28 13:43:51.566 [initandlisten] git version: nogitversion
Sun Jun 28 13:43:51.566 [initandlisten] build info: Linux bm-wb-03 3.19.0-trunk-armmp #1 SMP Debian 3.19.1-1~exp1+plugwash1 (2015-03-28) armv7l BOOST_LIB_VERSION=1_58
Sun Jun 28 13:43:51.566 [initandlisten] allocator: system
Sun Jun 28 13:43:51.566 [initandlisten] options: { bind_ip: "127.0.0.1", config: "/etc/mongodb.conf", dbpath: "/var/lib/mongodb", journal: "true", logappend: "true", logpath: "/var/log/mongodb/mongodb.log", port: 27017 }
Sun Jun 28 13:43:51.578 [initandlisten] journal dir=/var/lib/mongodb/journal
Sun Jun 28 13:43:51.578 [initandlisten] recover : no journal files present, no recovery needed
Sun Jun 28 13:43:51.606 [websvr] admin web console waiting for connections on port 28017
Sun Jun 28 13:43:51.606 [initandlisten] waiting for connections on port 27017
And the command "netstat -tulpn"
pi@raspberrypi:~ $ sudo netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:27017 0.0.0.0:* LISTEN 3407/mongod
tcp 0 0 0.0.0.0:5900 0.0.0.0:* LISTEN 486/vncserver-x11-c
tcp 0 0 127.0.0.1:28017 0.0.0.0:* LISTEN 3407/mongod
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 495/sshd
So I think the server is running...
I don't understand why I can't connect to MongoDB on my Raspberry Pi.
Any help would be appreciated.
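To see the actual driver error on the Pi rather than just "it didn't work", a standalone connectivity check along these lines (a sketch using the same mongodb npm driver) prints whatever the connection raises:

var MongoClient = require('mongodb').MongoClient;

MongoClient.connect('mongodb://127.0.0.1:27017', { useUnifiedTopology: true }, function (error, client) {
    if (error) {
        // e.g. a server selection timeout, a wire-protocol mismatch with the old 2.4 server, or an auth error
        console.error('Connection failed:', error);
        return;
    }
    console.log('Connected successfully');
    client.close();
});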

sequelize and react won't connect postgresql without an error

I have a React app with an Express server. It's not working on one of my PCs (the others are fine, with exactly the same versions and setup). The problem is that even though I can connect to my PostgreSQL DB both from the terminal and from the pgAdmin interface, neither the Express server nor Sequelize can find/connect to it, and they don't produce any error. sequelize db:migrate finishes without an error or an additional migration/table-created message. This leads to infinite loading in the app, but all pending requests eventually fail.
Here is my sequelize config file:
{
  "development": {
    "username": "postgres",
    "password": "postgres",
    "database": "db",
    "host": "localhost",
    "dialect": "postgres",
    "port": 5432
  },
  "test": {
    "username": "postgres",
    "password": "postgres",
    "database": "db",
    "host": "127.0.0.1",
    "dialect": "postgres",
    "port": 5432
  },
  "production": {
    "username": "postgres",
    "password": "postgres",
    "database": "db",
    "host": "127.0.0.1",
    "dialect": "postgres",
    "port": 5432
  }
}
And the React config in .env and the Express server:
.env
DB_PASSWORD=postgres
DB_PORT=5432
DB_DATABASE=db
DB_HOST=localhost
DB_USER=postgres
server.js
const connectionString = `postgresql://${process.env.DB_USER}:${
  process.env.DB_PASSWORD
}@${process.env.DB_HOST}:${process.env.DB_PORT}/${process.env.DB_DATABASE}`;
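When everything hangs silently like this, a standalone check with sequelize.authenticate() (a sketch reusing the credentials from the config above) usually either surfaces the underlying error or confirms that the connection itself is fine:

const { Sequelize } = require('sequelize');

const sequelize = new Sequelize('db', 'postgres', 'postgres', {
  host: 'localhost',
  port: 5432,
  dialect: 'postgres',
});

sequelize
  .authenticate()
  .then(() => console.log('Connection OK'))
  .catch((err) => console.error('Unable to connect:', err))
  .finally(() => sequelize.close());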
sudo systemctl status postgresql.service returns this:
postgresql.service - PostgreSQL database server
Loaded: loaded (/usr/lib/systemd/system/postgresql.service; disabled; vendor preset: disabled)
Active: active (running) since Thu 2020-06-04 20:47:20 +03; 9min ago
Process: 38637 ExecStartPre=/usr/bin/postgresql-check-db-dir ${PGROOT}/data (code=exited, status=0/SUCCESS)
Main PID: 38640 (postgres)
Tasks: 14 (limit: 19096)
Memory: 123.1M
CGroup: /system.slice/postgresql.service
├─38640 /usr/bin/postgres -D /var/lib/postgres/data
├─38644 postgres: checkpointer
├─38645 postgres: background writer
├─38646 postgres: walwriter
├─38647 postgres: autovacuum launcher
├─38648 postgres: stats collector
├─38649 postgres: logical replication launcher
├─38662 postgres: postgres db ::1(42040) idle
├─38834 postgres: postgres db1 ::1(42242) idle
├─38899 postgres: postgres db2 ::1(42352) idle
├─38912 postgres: postgres db3 ::1(42378) idle
├─38949 postgres: postgres db4 ::1(42450) idle
├─38960 postgres: postgres db5 ::1(42470) idle
└─38971 postgres: postgres db6 ::1(42492) idle
Jun 04 20:47:20 archPC postgres[38640]: 2020-06-04 20:47:20.535 +03 [38640] LOG: starting PostgreSQL 12.3 on x8>
Jun 04 20:47:20 archPC postgres[38640]: 2020-06-04 20:47:20.536 +03 [38640] LOG: listening on IPv6 address "::1>
Jun 04 20:47:20 archPC postgres[38640]: 2020-06-04 20:47:20.536 +03 [38640] LOG: listening on IPv4 address "127>
Jun 04 20:47:20 archPC postgres[38640]: 2020-06-04 20:47:20.541 +03 [38640] LOG: listening on Unix socket "/run>
Jun 04 20:47:20 archPC postgres[38643]: 2020-06-04 20:47:20.565 +03 [38643] LOG: database system was shut down >
Jun 04 20:47:20 archPC postgres[38640]: 2020-06-04 20:47:20.573 +03 [38640] LOG: database system is ready to ac>
Jun 04 20:47:20 archPC systemd[1]: Started PostgreSQL database server.
I've been trying to fix this issue and finally found a solution.
If you are using Node v14, as I was, you need to update the pg module from npm, and that's all.
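In practice that means bumping the pg dependency, e.g. (assuming pg is a direct dependency of the server):

npm install pg@latest --save

Older pg releases reportedly hang silently under Node 14, which matches the no-error behaviour described above.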

Spark job can not acquire resource from mesos cluster

I am using Spark Job Server (SJS) to create contexts and submit jobs.
My cluster includes 4 servers:
master1: 10.197.0.3
master2: 10.197.0.4
master3: 10.197.0.5
master4: 10.197.0.6
Only master1 has a public IP.
First of all I set up ZooKeeper on master1, master2 and master3, with ZooKeeper IDs 1 to 3.
I intend to use master1, master2 and master3 as the masters of the cluster.
That means I set quorum=2 for the 3 masters.
The ZK connect string is zk://master1:2181,master2:2181,master3:2181/mesos
On each server I also start mesos-slave, so I have 4 slaves and 3 masters.
All of the slaves are connected.
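For reference, that layout corresponds to starting each master with the shared ZooKeeper URL and the quorum size, and pointing every agent at the same URL (a sketch with assumed paths; the flags may also be set through the distribution's /etc config files):

mesos-master --zk=zk://master1:2181,master2:2181,master3:2181/mesos --quorum=2 --work_dir=/var/lib/mesos --hostname=master1

mesos-slave --master=zk://master1:2181,master2:2181,master3:2181/mesos --work_dir=/var/lib/mesos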
But the funny thing is that when I create a job to run, it cannot acquire any resources.
From the logs I can see that it keeps DECLINING the offers. These logs are from the master:
I0523 15:01:00.116981 32513 master.cpp:3641] Processing DECLINE call for offers: [ dc18c89f-d802-404b-9221-71f0f15b096f-O4264 ] for framework dc18c89f-d802-404b-9221-71f0f15b096f-0001 (sql_context-1) at scheduler-f5196abd-f420-48c6-b2fe-0306595601d4@10.197.0.3:28765
I0523 15:01:00.117086 32513 master.cpp:3641] Processing DECLINE call for offers: [ dc18c89f-d802-404b-9221-71f0f15b096f-O4265 ] for framework dc18c89f-d802-404b-9221-71f0f15b096f-0001 (sql_context-1) at scheduler-f5196abd-f420-48c6-b2fe-0306595601d4@10.197.0.3:28765
I0523 15:01:01.460502 32508 replica.cpp:673] Replica in VOTING status received a broadcasted recover request from (914)@127.0.0.1:5050
I0523 15:01:02.117753 32510 master.cpp:5324] Sending 1 offers to framework dc18c89f-d802-404b-9221-71f0f15b096f-0000 (sql_context) at scheduler-9b4637cf-4b27-4629-9a73-6019443ed30b@10.197.0.3:28765
I0523 15:01:02.118099 32510 master.cpp:5324] Sending 1 offers to framework dc18c89f-d802-404b-9221-71f0f15b096f-0001 (sql_context-1) at scheduler-f5196abd-f420-48c6-b2fe-0306595601d4@10.197.0.3:28765
I0523 15:01:02.119299 32508 master.cpp:3641] Processing DECLINE call for offers: [ dc18c89f-d802-404b-9221-71f0f15b096f-O4266 ] for framework dc18c89f-d802-404b-9221-71f0f15b096f-0000 (sql_context) at scheduler-9b4637cf-4b27-4629-9a73-6019443ed30b@10.197.0.3:28765
I0523 15:01:02.119858 32515 master.cpp:3641] Processing DECLINE call for offers: [ dc18c89f-d802-404b-9221-71f0f15b096f-O4267 ] for framework dc18c89f-d802-404b-9221-71f0f15b096f-0001 (sql_context-1) at scheduler-f5196abd-f420-48c6-b2fe-0306595601d4@10.197.0.3:28765
I0523 15:01:02.900946 32509 http.cpp:312] HTTP GET for /master/state from 10.197.0.3:35778 with User-Agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36' with X-Forwarded-For='113.161.38.181'
I0523 15:01:03.118147 32514 master.cpp:5324] Sending 1 offers to framework dc18c89f-d802-404b-9221-71f0f15b096f-0001 (sql_context-1) at scheduler-f5196abd-f420-48c6-b2fe-0306595601d4@10.197.0.3:28765
On one of my slaves I checked:
W0523 14:53:15.487599 32681 status_update_manager.cpp:475] Resending status update TASK_FAILED (UUID: 3c3a022c-2032-4da1-bbab-c367d46e07de) for task driver-20160523111535-0003 of framework a9871c4b-ab0c-4ddc-8d96-c52faf0e66f7-0019
W0523 14:53:15.487773 32681 status_update_manager.cpp:475] Resending status update TASK_FAILED (UUID: cfb494b3-6484-4394-bd94-80abf2e11ee8) for task driver-20160523112724-0001 of framework a9871c4b-ab0c-4ddc-8d96-c52faf0e66f7-0020
I0523 14:53:15.487820 32680 slave.cpp:3400] Forwarding the update TASK_FAILED (UUID: 3c3a022c-2032-4da1-bbab-c367d46e07de) for task driver-20160523111535-0003 of framework a9871c4b-ab0c-4ddc-8d96-c52faf0e66f7-0019 to master@10.197.0.3:5050
I0523 14:53:15.488008 32680 slave.cpp:3400] Forwarding the update TASK_FAILED (UUID: cfb494b3-6484-4394-bd94-80abf2e11ee8) for task driver-20160523112724-0001 of framework a9871c4b-ab0c-4ddc-8d96-c52faf0e66f7-0020 to master@10.197.0.3:5050
I0523 15:02:24.120436 32680 http.cpp:190] HTTP GET for /slave(1)/state from 113.161.38.181:63097 with User-Agent='Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'
W0523 15:02:24.165690 32685 slave.cpp:4979] Failed to get resource statistics for executor 'driver-20160523111535-0003' of framework a9871c4b-ab0c-4ddc-8d96-c52faf0e66f7-0019: Container 'cac7667c-3309-4380-9f95-07d9b888e44e' not found
W0523 15:02:24.165771 32685 slave.cpp:4979] Failed to get resource statistics for executor 'driver-20160523112724-0001' of framework a9871c4b-ab0c-4ddc-8d96-c52faf0e66f7-0020: Container '9c661311-bf7f-4ea6-9348-ce8c7f6cfbcb' not found
From the SJS logs:
[2016-05-23 15:04:10,305] DEBUG oarseMesosSchedulerBackend [] [] - Declining offer: dc18c89f-d802-404b-9221-71f0f15b096f-O4565 with attributes: Map() mem: 63403.0 cpu: 8
[2016-05-23 15:04:10,305] DEBUG oarseMesosSchedulerBackend [] [] - Declining offer: dc18c89f-d802-404b-9221-71f0f15b096f-O4566 with attributes: Map() mem: 47244.0 cpu: 8
[2016-05-23 15:04:10,305] DEBUG oarseMesosSchedulerBackend [] [] - Declining offer: dc18c89f-d802-404b-9221-71f0f15b096f-O4567 with attributes: Map() mem: 47244.0 cpu: 8
[2016-05-23 15:04:10,366] WARN cheduler.TaskSchedulerImpl [] [akka://JobServer/user/context-supervisor/sql_context] - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
[2016-05-23 15:04:10,505] DEBUG cheduler.TaskSchedulerImpl [] [akka://JobServer/user/context-supervisor/sql_context] - parentName: , name: TaskSet_0, runningTasks: 0
[2016-05-23 15:04:11,306] DEBUG oarseMesosSchedulerBackend [] [] - Declining offer: dc18c89f-d802-404b-9221-71f0f15b096f-O4568 with attributes: Map() mem: 47244.0 cpu: 8
[2016-05-23 15:04:11,306] DEBUG oarseMesosSchedulerBackend [] [] - Declining offer: dc18c89f-d802-404b-9221-71f0f15b096f-O4569 with attributes: Map() mem: 63403.0 cpu: 8
[2016-05-23 15:04:11,505] DEBUG cheduler.TaskSchedulerImpl [] [akka://JobServer/user/context-supervisor/sql_context] - parentName: , name: TaskSet_0, runningTasks: 0
[2016-05-23 15:04:12,308] DEBUG oarseMesosSchedulerBackend [] [] - Declining offer: dc18c89f-d802-404b-9221-71f0f15b096f-O4570 with attributes: Map() mem: 47244.0 cpu: 8
[2016-05-23 15:04:12,505] DEBUG cheduler.TaskSchedulerImpl [] [akka://JobServer/user/context-supervisor/sql_context] - parentName: , name: TaskSet_0, runningTasks: 0
In the master2 logs:
May 23 08:19:44 ants-vps mesos-master[1866]: E0523 08:19:44.273349 1902 process.cpp:1958] Failed to shutdown socket with fd 28: Transport endpoint is not connected
May 23 08:19:54 ants-vps mesos-master[1866]: I0523 08:19:54.274245 1899 replica.cpp:673] Replica in VOTING status received a broadcasted recover request from (1257)@127.0.0.1:5050
May 23 08:19:54 ants-vps mesos-master[1866]: E0523 08:19:54.274533 1902 process.cpp:1958] Failed to shutdown socket with fd 28: Transport endpoint is not connected
May 23 08:20:04 ants-vps mesos-master[1866]: I0523 08:20:04.275291 1897 replica.cpp:673] Replica in VOTING status received a broadcasted recover request from (1260)@127.0.0.1:5050
May 23 08:20:04 ants-vps mesos-master[1866]: E0523 08:20:04.275512 1902 process.cpp:1958] Failed to shutdown socket with fd 28: Transport endpoint is not connected
From master3:
May 23 08:21:05 ants-vps mesos-master[22023]: I0523 08:21:05.994082 22042 recover.cpp:193] Received a recover response from a replica in EMPTY status
May 23 08:21:15 ants-vps mesos-master[22023]: I0523 08:21:15.994051 22043 recover.cpp:109] Unable to finish the recover protocol in 10secs, retrying
May 23 08:21:15 ants-vps mesos-master[22023]: I0523 08:21:15.994529 22036 replica.cpp:673] Replica in EMPTY status received a broadcasted recover request from (1282)@127.0.0.1:5050
How can I find the reason for these issues and fix them?

mongodb : SyncSourceFeedbackThread] SEVERE: Invalid access at address: 0xa8

Last night our MongoDB replica set member r1 crashed, and then monit restarted it.
This is the process status:
mongod 1390 1 0 Aug15 ? 00:39:29 /usr/bin/mongod -f /etc/mongod.conf
root 1967 1 8 Aug15 ? 18:53:29 /usr/bin/mongos -f /etc/mongod_route.conf
mongod 2127 1 0 Aug15 ? 00:56:07 /usr/bin/mongod -f /etc/mongod_r2.conf
mongod 2514 1 0 Aug15 ? 01:07:33 /usr/bin/mongod -f /etc/mongod_config.conf
mongod 2552 1 0 Aug15 ? 00:49:41 /usr/bin/mongod -f /etc/mongod_arbiter.conf
root 7722 21913 0 03:04 ? 00:00:00 [mongod_r1] <defunct>
mongod 7733 1 0 03:04 ? 00:05:06 /usr/bin/mongod -f /etc/mongod_r1.conf
root 13964 12745 0 11:52 pts/0 00:00:00 grep --color mongo
This is the MongoDB log:
2015-08-25T03:03:53.425+0800 [conn140823] authenticate db: local { authenticate: 1, nonce: "xxx", user: "__system", key: "xxx" }
2015-08-25T03:03:53.430+0800 [conn140823] replset couldn't find a slave with id 0, not tracking 53d71c612ea2da0bd4c469e6
2015-08-25T03:03:53.432+0800 [SyncSourceFeedbackThread] SEVERE: Invalid access at address: 0xa8
2015-08-25T03:03:53.565+0800 [SyncSourceFeedbackThread] SEVERE: Got signal: 11 (Segmentation fault).
Backtrace:0x11bd301 0x11bc6de 0x11bc7cf 0x33d060f710 0xeacaf6 0xeb19e8 0x1145332 0x1201c99 0x33d06079d1 0x36818e88fd
/usr/bin/mongod(_ZN5mongo15printStackTraceERSo+0x21) [0x11bd301]
/usr/bin/mongod() [0x11bc6de]
/usr/bin/mongod() [0x11bc7cf]
/lib64/libpthread.so.0() [0x33d060f710]
/usr/bin/mongod(_ZN5mongo18SyncSourceFeedback13replHandshakeEv+0xb86) [0xeacaf6]
/usr/bin/mongod(_ZN5mongo18SyncSourceFeedback3runEv+0x9b8) [0xeb19e8]
/usr/bin/mongod(_ZN5mongo13BackgroundJob7jobBodyEv+0xd2) [0x1145332]
/usr/bin/mongod() [0x1201c99]
/lib64/libpthread.so.0() [0x33d06079d1]
/lib64/libc.so.6(clone+0x6d) [0x36818e88fd]
2015-08-25T03:04:17.172+0800 ***** SERVER RESTARTED *****
2015-08-25T03:04:17.178+0800 [initandlisten] MongoDB starting : pid=7733 port=1201 dbpath=/data/mongo_r1 64-bit host=x2
2015-08-25T03:04:17.178+0800 [initandlisten] db version v2.6.0
2015-08-25T03:04:17.178+0800 [initandlisten] git version: 1c1c76aeca21c5983dc178920f5052c298db616c
2015-08-25T03:04:17.178+0800 [initandlisten] build info: Linux build14.nj1.10gen.cc 2.6.32-431.3.1.el6.x86_64 #1 SMP Fri Jan 3 21:39:27 UTC 2014 x86_64 BOOST_LIB_VERSION=1_49
2015-08-25T03:04:17.178+0800 [initandlisten] allocator: tcmalloc
2015-08-25T03:04:17.178+0800 [initandlisten] options: { config: "/etc/mongod_r1.conf", net: { bindIp: "10.165.46.132", port: 1201 }, processManagement: { fork: true, pidFilePath: "/var/run/mongodb/mongod_r1.pid" },
What happened?

Node.js cluster get master PID

I used the following cluster code to fork multiple processes for my Node app.
var cluster = require('cluster');

if (cluster.isMaster) {
    // Fork one worker per CPU core.
    require('os').cpus().forEach(function () {
        cluster.fork();
    });

    // Replace any worker that dies.
    cluster.on('exit', function (worker, code, signal) {
        cluster.fork();
    });
} else if (cluster.isWorker) {
    // `app` (the Express app) and `logger` are defined elsewhere in the file.
    logger.log.info('Worker server started on port %d (ID: %d, PID: %d)', app.get('port'), cluster.worker.id, cluster.worker.process.pid);
}
The output is:
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 1, PID: 606)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 2, PID: 607)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 5, PID: 610)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 3, PID: 608)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 4, PID: 609)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 6, PID: 611)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 8, PID: 613)
Thu Sep 05 2013 20:30:03 GMT-0700 (PDT) - info: Worker server started on port 3000 (ID: 7, PID: 612)
There are 8 worker processes, but when I checked the processes using pgrep, I saw 9:
$ pgrep -l node
613 node
612 node
611 node
610 node
609 node
608 node
607 node
606 node
605 node
So the one extra process must be the master process. How do I print out the master process PID?
Thanks.
I posted another question related to this one; I think it might be useful to look at as well:
Node.js cluster master process reboot after got kill & pgrep?
You can get the master process pid with process.pid inside the if (cluster.isMaster) block. IP and port are properties of your app, so those would be the same.
You can get the master (parent) pid from a worker with process.ppid.
This lets you send the master a signal, which is useful for reloads without downtime.
For instance: process.kill(process.ppid, 'SIGHUP');
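Putting the two answers together, a small sketch (console.log in place of the original logger; process.ppid requires a reasonably recent Node version):

var cluster = require('cluster');

if (cluster.isMaster) {
    // The master's own PID.
    console.log('Master PID: %d', process.pid);
    require('os').cpus().forEach(function () {
        cluster.fork();
    });
} else if (cluster.isWorker) {
    // Workers can reach the master's PID via process.ppid.
    console.log('Worker %d (PID %d), master PID %d', cluster.worker.id, process.pid, process.ppid);
    // e.g. ask the master to reload without downtime:
    // process.kill(process.ppid, 'SIGHUP');
}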
