I've installed and (hopefully) configured Monit by creating a new task in /etc/monit.d (on CentOS 6.5).
My task file is called test:
check host test with address 127.0.0.1
start program = "/usr/local/bin/node /var/node/test/index.js" as uid node and gid node
stop program = "/usr/bin/pkill -f 'node /var/node/test/index.js'"
if failed port 7000 protocol HTTP
request /
with timeout 10 seconds
then restart
When I run:
service monit restart
the following appears in my Monit log:
[CEST Jul 4 09:50:43] info : monit daemon with pid [21946] killed
[CEST Jul 4 09:50:43] info : 'nsxxxxxx.ip-xxx-xxx-xxx.eu' Monit stopped
[CEST Jul 4 09:50:47] info : 'nsxxxxxx.ip-xxx-xxx-xxx.eu' Monit started
[CEST Jul 4 09:50:47] error : 'test' failed, cannot open a connection to INET[127.0.0.1:7000] via TCP
[CEST Jul 4 09:50:47] info : 'test' trying to restart
[CEST Jul 4 09:50:47] info : 'test' stop: /usr/bin/pkill
[CEST Jul 4 09:50:47] info : 'test' start: /usr/local/bin/node
I don't understand why the script does not work. If I run it from the command line with:
su node # user created for node scripts
node /var/node/test/index.js
everything works correctly...
I've followed this tutorial.
How can I fix this problem? Thanks
The same was not working for me either. What I did was create a start/stop script and pass that script to the start program and stop program parameters in Monit.
You can find a sample start/stop script here.
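In case the link goes stale, here is a minimal sketch of such a wrapper script. The script name matches the /etc/init.d/my-node-app referenced in the Monit config below, but the node binary path, the app entry point, the log and pid file locations, and the node user are all assumptions to adapt:
#!/bin/sh
# /etc/init.d/my-node-app - minimal start/stop wrapper (sketch; adapt the paths below)
NODE=/usr/local/bin/node
APP=/var/node/my-node-app/index.js   # assumed app entry point
LOG=/var/log/my-node-app.log
PIDFILE=/var/run/my-node-app.pid
RUNAS=node                           # user the app should run as

case "$1" in
  start)
    echo "Starting my-node-app"
    touch "$LOG" && chown "$RUNAS" "$LOG"
    # run the app as $RUNAS in the background and record its pid for 'stop'
    su -s /bin/sh -c "exec $NODE $APP >> $LOG 2>&1 & echo \$!" "$RUNAS" > "$PIDFILE"
    ;;
  stop)
    echo "Stopping my-node-app"
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  restart)
    "$0" stop
    "$0" start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac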
Below are my Monit settings for the node.js app:
check host my-node-app with address 127.0.0.1
start program = "/etc/init.d/my-node-app start"
stop program = "/etc/init.d/my-node-app stop"
if failed port 3002 protocol HTTP
request /
with timeout 5 seconds
then restart
if 5 restarts within 5 cycles then timeout
I deployed my app to an AWS EC2 instance, so I need a way of always having this Node app running no matter what: if the server restarts or the app crashes, it always needs to restart the app.
The app needs to be running at all times, so I use npm start &. Is that enough to restart my app?
I've tried to use systemd, but I got an error while starting the service I created:
sudo systemctl start nameX.service
Job for nameX.service failed because of unavailable resources or
another system error. See "systemctl status ...service" and
"journalctl -xeu ....service" for details.
sudo systemctl status nameX.service
nameX.service: Scheduled restart job, restart ...
systemd[1]: Stopped My Node Server.
Failed to load environment file ...
systemd[1]: ...service: Failed to run 'start' task: No ...
systemd[1]: ...service: Failed with result 'resources'.
systemd[1]: Failed to start My Node Server.
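The "Failed to load environment file" line followed by "Failed with result 'resources'" typically means the unit references an EnvironmentFile that does not exist. For comparison, here is a minimal sketch of a unit file for a Node app; the service name, every path, and the user are placeholder assumptions, and the leading '-' on EnvironmentFile tells systemd not to fail if that file is missing:
# /etc/systemd/system/nameX.service - minimal sketch; paths and user are assumptions
[Unit]
Description=My Node Server
After=network.target

[Service]
# the leading '-' means: do not fail if this file is missing
EnvironmentFile=-/etc/default/nameX
WorkingDirectory=/home/ubuntu/app
ExecStart=/usr/bin/node /home/ubuntu/app/index.js
Restart=always
RestartSec=5
User=ubuntu

[Install]
WantedBy=multi-user.target
After editing, reload systemd and enable the unit so it also starts on boot:
sudo systemctl daemon-reload
sudo systemctl enable --now nameX.service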
So after installing MongoDB on my Ubuntu machine, I tried to run "mongo", but it said:
MongoDB shell version v4.4.1
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect#src/mongo/shell/mongo.js:374:17
#(connect):2:6
exception: connect failed
exiting with code 1
So I enabled the mongod service and started it, then ran the command:
sudo systemctl status mongod
And it said:
● mongod.service - MongoDB Database Server
Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2020-09-17 00:23:08 +06; 8min ago
Docs: https://docs.mongodb.org/manual
Process: 45414 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=exited, sta>
Main PID: 45414 (code=exited, status=1/FAILURE)
Sep 17 00:23:08 john systemd[1]: Started MongoDB Database Server.
Sep 17 00:23:08 john mongod[45414]: about to fork child process, waiting until server is>
Sep 17 00:23:08 john mongod[45427]: forked process: 45428
Sep 17 00:23:08 john mongod[45414]: ERROR: child process failed, exited with error numbe>
Sep 17 00:23:08 john mongod[45414]: To see additional information in this output, start >
Sep 17 00:23:08 john systemd[1]: mongod.service: Main process exited, code=exited, statu>
Sep 17 00:23:08 john systemd[1]: mongod.service: Failed with result 'exit-code'.
And I can't run the MongoDB shell. What should I do?
I came across this issue yesterday and I was able to resolve it by:
removing the mongod.lock file, and
running mongod again with the config file and the --fork option.
Remove the .lock file:
sudo rm /usr/local/var/mongodb/mongod.lock
Run:
mongod --config /usr/local/etc/mongod.conf --fork
Then use the mongo command again.
mongod needs to be running before you can run mongo without that error.
P.S. Here is the answer for others who stumble upon the original question from the title.
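Since the original question is about Ubuntu rather than a Homebrew-style install, the default paths there are usually different. A hedged translation of the same steps (verify storage.dbPath in /etc/mongod.conf before deleting anything):
# Ubuntu package defaults; confirm storage.dbPath in /etc/mongod.conf first
sudo rm /var/lib/mongodb/mongod.lock
sudo systemctl restart mongod
mongo    # should now connect to 127.0.0.1:27017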
I also got that same error. I think this may have happened because of some update on our PC (like a .NET Framework update or something).
Then I uninstalled and reinstalled MongoDB, and it is working again.
You have to go to /etc and modify mongod.conf, because:
"By default, MongoDB launches with bindIp set to 127.0.0.1", which binds to the localhost network interface. This means that mongod can only accept connections from clients that are running on the same machine.
Then you can sudo nano mongod.conf and change 127.0.0.1 to 0.0.0.0.
You must then restart mongod.
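For reference, the relevant part of /etc/mongod.conf looks roughly like this. Note that 0.0.0.0 accepts connections from any interface, so only do this if the machine is firewalled or access control is enabled; for the original local-connection error, the default 127.0.0.1 is actually sufficient:
# /etc/mongod.conf (excerpt)
net:
  port: 27017
  bindIp: 0.0.0.0   # was 127.0.0.1; listen on all interfaces
Then restart the service:
sudo systemctl restart mongod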
Create a folder data in the root of the C: drive.
Create another folder db inside the data folder.
Now run mongod in a cmd window at this path:
C:\Program Files\MongoDB\Server\5.0\bin>mongod
Don't close this command prompt.
Open another cmd window at the same path and run the mongo command:
C:\Program Files\MongoDB\Server\5.0\bin>mongo
Now it will connect.
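Put together, the sequence looks like this (the Server\5.0 path matches the version above; adjust it to your installed version, and the explicit --dbpath is only needed if your data folder is somewhere other than the default C:\data\db):
:: first cmd window: start the server and leave it running
md C:\data\db
cd "C:\Program Files\MongoDB\Server\5.0\bin"
mongod --dbpath C:\data\db

:: second cmd window: connect with the shell
cd "C:\Program Files\MongoDB\Server\5.0\bin"
mongo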
Jetty on our Linux server is not installed as a service, since we have multiple Jetty servers on different ports. We use the commands ./jetty.sh stop and ./jetty.sh start to stop and start Jetty.
However, when I add sudo to the command, the server never stops/starts successfully. When I run sudo ./jetty.sh stop, it shows:
Stopping Jetty: start-stop-daemon: warning: failed to kill 18772: No such process
1 pids were not killed
No process in pidfile '/var/run/jetty.pid' found running; none killed.
and the server was not stopped.
When I run sudo ./jetty.sh start, it shows
Starting Jetty: FAILED Tue Apr 23 23:07:15 CST 2019
How could this happen? From my understanding, using sudo gives you more power and privileges to run commands. If a command executes successfully without sudo, it should never fail with sudo, since sudo only grants additional privileges.
As a regular user, jetty.sh uses paths under $HOME; as root, it uses system paths.
The error you got ...
Stopping Jetty: start-stop-daemon: warning: failed to kill 18772: No such process
1 pids were not killed
No process in pidfile '/var/run/jetty.pid' found running; none killed.
... means that there was a bad pid file sitting around for a process that no longer exists.
Short answer: the processing is different if you are root (a service) vs. a user (just an application).
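A hedged way to recover, based on the pid file path shown in the error above (double-check which pid file your jetty.sh actually writes before deleting anything):
# remove the stale pid file left over from a process that no longer exists
sudo rm /var/run/jetty.pid
# then stick to one way of managing it: either always with sudo (service-style)
# or always as the regular user, so the same pid file and paths are used
sudo ./jetty.sh start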
I have a node.js server that I run using forever and I want to restart this server every hour using crontab.
In crontab I have written the following command to restart the server:
0 * * * * forever restart --minUptime 31536000000 --spinSleepTime 2000 /home/ubuntu/gt02/gt2.js
When I run the same command manually in the terminal, it restarts the server successfully, but the server is not getting restarted automatically by crontab.
Below is a small snippet from the cron.log file that gets printed every hour:
Jun 22 17:00:01 ip-172-31-16-234 CRON[16722]: (root) CMD (forever restart --minUptime 31536000000 --spinSleepTime 2000 /home/ubuntu/gt02/gt2.js)
Jun 22 17:00:01 ip-172-31-16-234 CRON[16721]: (CRON) info (No MTA installed, discarding output)
Can anyone tell me what I am doing wrong here and how to properly restart the server using crontab?
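One frequent difference between an interactive shell and cron is the environment, in particular PATH, so forever (and node) may simply not be found when cron runs the job. A sketch of the entry with an absolute path and the output kept in a log file; /usr/local/bin/forever and the log location are assumptions (check with which forever), and the job should run under the same user that originally started the forever process:
# in that user's crontab (crontab -e)
0 * * * * /usr/local/bin/forever restart --minUptime 31536000000 --spinSleepTime 2000 /home/ubuntu/gt02/gt2.js >> /home/ubuntu/cron-forever.log 2>&1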
I have a Chef infrastructure with chef-server/chef-client. I want to restart Jetty on all machines using knife ssh.
There is very strange behavior: when Jetty starts, it receives a kill signal and stops. This happens only when I'm using knife ssh.
2015-06-25 17:37:29.171:INFO:oejs.ServerConnector:main: Started ServerConnector#673b21af{HTTP/1.1}{0.0.0.0:8080}
2015-06-25 17:37:29.171:INFO:oejs.Server:main: Started #17901ms
2015-06-25 17:37:31.302:INFO:oejs.ServerConnector:Thread-1: Stopped ServerConnector#673b21af{HTTP/1.1}{0.0.0.0:8080}
2015-06-25 17:37:31.303:INFO:/:Thread-1: Destroying Spring FrameworkServlet 'spring'
INFO : org.springframework.web.context.support.XmlWebApplicationContext - Closing WebApplicationContext for namespace 'spring-servlet': startup date [Thu Jun 25 17:37:29 CEST 2015]; parent: Root WebApplicationContext
2015-06-25 17:37:31.307:INFO:/:Thread-1: Closing Spring root WebApplicationContext
INFO : org.springframework.web.context.support.XmlWebApplicationContext - Closing Root WebApplicationContext: startup date [Thu Jun 25 17:37:20 CEST 2015]; root of context hierarchy
INFO : org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean - Closing JPA EntityManagerFactory for persistence unit 'default'
INFO : org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler - Shutting down ExecutorService 'taskScheduler'
INFO : org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - Shutting down ExecutorService
2015-06-25 17:37:31.509:INFO:oejsh.ContextHandler:Thread-1: Stopped o.e.j.w.WebAppContext#675e8fe2{/,file:/tmp/jetty-0.0.0.0-8080-root.war-_-any-6087241756199243276.dir/webapp/,UNAVAILABLE}{/opt/idm/root.war}
The command used to restart Jetty is:
knife ssh -x root "name:*" "sh /opt/jetty/jetty-current/bin/jetty.sh start"
As I said above, if I execute the command over ssh manually on each machine (without using knife), Jetty starts and works fine. What else does knife ssh do besides opening an ssh session on each machine and running that command?
I've tried to fix this in different ways, including appending & to the command and creating another shell script that executes the command, but without any success.
Here is a paste2 with jetty.sh
Something kills Jetty when I start it using knife. Any idea what?
Edit: I tried to put jetty.sh into /etc/init.d/jetty and start it as a service with service jetty start, but the result is the same.
I've found a workaround which I used to solve the problem.
The thing is that knife ssh, once it finishes execution, kills every spawned process. Maybe there is a bug here.
I've created a cookbook and, inside it, a recipe that runs service jetty restart. Then, using knife ssh, I only execute this recipe via chef-client, as in the sketch below.
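A minimal sketch of what that can look like; the cookbook and recipe names are hypothetical, and it assumes Jetty is registered as an init service called jetty (as in the edit above):
# my-cookbook/recipes/restart_jetty.rb  (hypothetical names)
service 'jetty' do
  action :restart
end
Then trigger just that recipe on every node with a one-off override run list:
knife ssh -x root "name:*" "chef-client -o 'recipe[my-cookbook::restart_jetty]'"
Because the restart goes through the init system inside the chef-client run, the new Jetty process is not a child of the ssh session and survives when knife disconnects.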