Restart node server using crontab - node.js

I have a node.js server that I run using forever and I want to restart this server every hour using crontab.
In crontab I have written the following command to restart the server:
0 * * * * forever restart --minUptime 31536000000 --spinSleepTime 2000 /home/ubuntu/gt02/gt2.js
When I run the same command manually in the terminal, it restarts the server successfully, but the server is not restarted automatically by crontab.
Below is a small snippet from the cron.log file that gets printed every hour:
Jun 22 17:00:01 ip-172-31-16-234 CRON[16722]: (root) CMD (forever restart --minUptime 31536000000 --spinSleepTime 2000 /home/ubuntu/gt02/gt2.js)
Jun 22 17:00:01 ip-172-31-16-234 CRON[16721]: (CRON) info (No MTA installed, discarding output)
Can anyone tell me what I am doing wrong here and how to properly restart the server using crontab?
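One detail the log does reveal is that cron is discarding the command's output ("No MTA installed, discarding output"), so any error from forever is lost; cron also runs with a minimal PATH, so commands that work in an interactive shell often fail under cron. A debugging sketch, assuming forever is installed at /usr/local/bin/forever (check the real location with "which forever"; the path and log file here are assumptions), is to use the absolute path and capture the output:
0 * * * * /usr/local/bin/forever restart --minUptime 31536000000 --spinSleepTime 2000 /home/ubuntu/gt02/gt2.js >> /var/log/forever-restart.log 2>&1
Whatever forever prints (including "command not found" or "no process found" errors) will then show up in /var/log/forever-restart.log instead of being thrown away.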

Related

sudo ./jetty Stop or Start Failure

The Jetty on our Linux server is not installed as a service, as we have multiple Jetty servers on different ports. We use the commands ./jetty.sh stop and ./jetty.sh start to stop and start Jetty.
However, when I add sudo to the command, the server never stops or starts successfully. When I run sudo ./jetty.sh stop, it shows
Stopping Jetty: start-stop-daemon: warning: failed to kill 18772: No such process
1 pids were not killed
No process in pidfile '/var/run/jetty.pid' found running; none killed.
and the server was not stopped.
When I run sudo ./jetty.sh start, it shows
Starting Jetty: FAILED Tue Apr 23 23:07:15 CST 2019
How could this happen? From my understanding, using sudo gives you more power and privilege to run commands. If you can successfully execute a command without sudo, then it should never fail with sudo, since sudo only grants superuser privileges.
As a user, jetty.sh uses paths under $HOME.
As root, it uses system paths (such as /var/run).
The error you got ..
Stopping Jetty: start-stop-daemon: warning: failed to kill 18772: No such process
1 pids were not killed
No process in pidfile '/var/run/jetty.pid' found running; none killed.
... means that there was a stale pid file sitting around for a process that no longer exists.
Short answer: the processing is different if you are root (a service) vs. a user (just an application).
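If the only problem is the stale pid file, a minimal cleanup sketch (the path is taken from the error message above) is to remove it and start again:
sudo rm /var/run/jetty.pid
sudo ./jetty.sh start
Bear in mind that if Jetty was originally started without sudo, its real pid file may live somewhere else entirely, so stopping that instance may require running ./jetty.sh stop as the same user that started it.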

cron selinux security context issue

My system is Fedora 23.
I am trying to run a cron job from /etc/crontab that is blocked by SELinux.
* * * * sun,mon,tue,wed,thu,fri,sat root DISPLAY=:0 eog $HOME/Pictures/somepic.jpg
Context for the crontab:
-rw-r--r--. 1 root root unconfined_u:object_r:etc_t:s0 2664 Jan 23 18:12 /etc/crontab
If I run SELinux in permissive mode, the job runs every time.
Here's the journal entry for crond in enforcing mode:
-- Logs begin at Wed 2016-01-20 10:40:21 PST. --
Jan 23 18:25:01 localhost.localdomain CROND[20342]: (root) CMDOUT (/bin/sh: root: command not found)
Jan 23 18:25:25 localhost.localdomain crond[938]: (CRON) INFO (Shutting down)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 12% if used.)
Jan 23 18:25:25 localhost.localdomain crond[18645]: ((null)) Unauthorized SELinux context=system_u:system_r:system_cronjob_t:s0-s0:c0.c1023 file_context=unconfined_u:object_r:etc_t:s0 (/etc/crontab)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (root) FAILED (loading cron table)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (root) Unauthorized SELinux context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 file_context=unconfined_u:object_r:user_cron_spool_t:s0 (/var/spool/cron/root)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (root) FAILED (loading cron table)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (CRON) INFO (running with inotify support)
Jan 23 18:25:25 localhost.localdomain crond[18645]: (CRON) INFO (#reboot jobs will be run at computer's startup.)
SELinux boolean settings:
cron_can_relabel --> off
cron_system_cronjob_use_shares --> off
cron_userdomain_transition --> on
fcron_crond --> off
I'm having the same issue. I had a crontab that worked great for a long time, made an edit with crontab -e, and it stopped working. I tried as both root and a normal user. After some searching around, this turns out to be a currently known bug:
https://bugzilla.redhat.com/show_bug.cgi?id=1263328
I tried the workaround listed in comment #19. It's working fine.
Create a file mycron.cil with the content:
(allow unconfined_t user_cron_spool_t (file (entrypoint)))
Then run:
semodule -i mycron.cil
Then restart cron:
systemctl restart crond.service
Comment 21 tells how to remove the workaround when a fix is issued.
Remove by running:
semodule -r mycron
and I assume restart cron again.
You need to change the SELinux type of the cron files under /var/spool/cron.
Try this to relabel the files that triggered the 'Unauthorized SELinux context' messages:
# semanage fcontext -a -t user_cron_spool_t "/var/spool/cron(/.*)?"
# restorecon -R -vv /var/spool/cron
I was trying to shut down my RHEL 7.3 server automatically at 11:00 pm daily through a cron job in /etc/crontab.
I faced a similar issue: SELinux did not allow the job to run.
However, I solved it by creating a new crontab file in /etc/cron.d/, and that cron job executed successfully and shut down the system at the defined time.
I got the solution from the RHEL page below, section 24.1.2 "Scheduling a Cron Job":
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-automating_system_tasks
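For reference, files in /etc/cron.d/ use the same format as /etc/crontab, i.e. they include a user field. A minimal sketch of such a file for the nightly shutdown described above (the file name is an assumption):
# /etc/cron.d/nightly-shutdown
0 23 * * * root /sbin/shutdown -h now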

Inconsistent systemd startup of freeswitch

I have two problems running FreeSWITCH from systemd:
EDIT 2 - I have moved the slow start-up question here (Freeswitch pauses on check_ip at boot on centos 7.1); although the two may be related, it's probably better as a standalone question.
EDIT - I have noticed something else. Look at these next lines, captured from the terminal output when running it from there. The gap is 4 minutes, but it has been around 10 minutes before. I noticed it because I was trying to find out why port 8021 was taking several minutes to accept the fs_cli connection. Why does this happen? It has never happened to me before, and I've installed loads of FS boxes. It does the same thing on both 1.7 and today's 1.6.
2015-10-23 12:57:35.280984 [DEBUG] switch_scheduler.c:249 Added task 1 heartbeat (core) to run at 1445601455
2015-10-23 12:57:35.281046 [DEBUG] switch_scheduler.c:249 Added task 2 check_ip (core) to run at 1445601455
2015-10-23 13:01:31.100892 [NOTICE] switch_core.c:1386 Created ip list rfc6598.auto default (deny)
I sometimes get double processes started. Here is my status output after such an occurrence:
# systemctl status freeswitch -l
freeswitch.service - freeswitch
Loaded: loaded (/etc/systemd/system/multi-user.target.wants/freeswitch.service)
Active: activating (start) since Fri 2015-10-23 01:31:53 BST; 18s ago
Main PID: 2571 (code=exited, status=0/SUCCESS); : 2742 (freeswitch)
CGroup: /system.slice/freeswitch.service
├─usr/bin/freeswitch -ncwait -core -db /dev/shm -log /usr/local/freeswitch/log -conf /usr/local/freeswitch/conf -run /usr/local/freeswitch/run
└─usr/bin/freeswitch -ncwait -core -db /dev/shm -log /usr/local/freeswitch/log -conf /usr/local/freeswitch/conf -run /usr/local/freeswitch/run
Oct 23 01:31:53 fswitch-1 systemd[1]: Starting freeswitch...
Oct 23 01:31:53 fswitch-1 freeswitch[2742]: 2743 Backgrounding.
and there are two processes running.
The PID file is sometimes not written fast enough for systemd to pick it up, but no matter how fast I run the command, the file is always there by the time I look:
Oct 23 02:00:26 arribacom-sbc-1 systemd[1]: PID file
/usr/local/freeswitch/run/freeswitch.pid not readable (yet?) after
start.
Now, in (2) everything seems to work ok, and I can shut down the freeswitch process using
systemctl stop freeswitch
without any issues, but in (1) it just doesn't seem to do anything.
I'm wondering if the two are related, and that freeswitch is reporting back to systemd that the program is running before it actually is. Then systemd is either starting up another process or (sometimes) not.
Can anyone offer any pointers? I have tried to mail the freeswitch users list but despite being registered I simply cannot get any emails to appear on the list (but that's another problem).
* Update *
If I remove the -ncwait, the double process start seems to happen less often, but I still get the can't-read-PID warning, so I'm still sure there's an issue present, possibly around timing(?).
I'm on CentOS 7.1, and my FreeSWITCH version is
FreeSWITCH Version 1.7.0+git~20151021T165609Z~9fee9bc613~64bit (git
9fee9bc 2015-10-21 16:56:09Z 64bit)
and here's my freeswitch.service file (some things have been commented out until I understand what they are doing and any side effects they may have):
[Unit]
Description=freeswitch
After=syslog.target network.target
#
[Service]
Type=forking
PIDFile=/usr/local/freeswitch/run/freeswitch.pid
PermissionsStartOnly=true
ExecStart=/usr/bin/freeswitch -nc -core -db /dev/shm -log /usr/local/freeswitch/log -conf /u
ExecReload=/usr/bin/kill -HUP $MAINPID
#ExecStop=/usr/bin/freeswitch -stop
TimeoutSec=120s
#
WorkingDirectory=/usr/bin
User=freeswitch
Group=freeswitch
LimitCORE=infinity
LimitNOFILE=999999
LimitNPROC=60000
LimitSTACK=245760
LimitRTPRIO=infinity
LimitRTTIME=7000000
#IOSchedulingClass=realtime
#IOSchedulingPriority=2
#CPUSchedulingPolicy=rr
#CPUSchedulingPriority=89
#UMask=0007
#
[Install]
WantedBy=multi-user.target
In the current master branch, take the two files from the debian/ directory:
freeswitch-systemd.freeswitch.service -- should go as /lib/systemd/system/freeswitch.service
freeswitch-systemd.freeswitch.tmpfile -- should go as /usr/lib/tmpfiles.d/freeswitch.conf
You probably need to adapt the paths, or build FreeSWITCH to use standard Debian paths.
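A sketch of installing them, assuming you are in the root of the FreeSWITCH source tree (the destinations follow the answer above; adjust if your build uses different paths):
cp debian/freeswitch-systemd.freeswitch.service /lib/systemd/system/freeswitch.service
cp debian/freeswitch-systemd.freeswitch.tmpfile /usr/lib/tmpfiles.d/freeswitch.conf
systemctl daemon-reload
systemctl enable freeswitch
systemctl start freeswitch
The tmpfiles.d fragment is picked up at the next boot, or immediately by running systemd-tmpfiles --create.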

Postgresql 9.3 on Centos 7 with custom PGDATA

I am trying to set up a PostgreSQL 9.3 server on CentOS 7 (installed via yum) inside a custom directory, which in my case is an encrypted partition (/custom_container/database) that is mounted on startup. For some reason PostgreSQL does not behave as the manual says it should and errors out on service startup.
Note: It does not want to accept the PGDATA environment variable which I set, and when running
su - postgres -c '/usr/pgsql-9.3/bin/initdb'
(given that the PGDATA directory is owned by postgres:postgres) the cluster gets initialized inside the default directory /var/lib/pgsql/9.3/data/
The only way to change that is using
su - postgres -c '/usr/pgsql-9.3/bin/initdb --pgdata=$PGDATA'
This initializes the cluster inside the custom directory I am using. It is something I could not figure out, as the docs say the PGDATA variable is used by default.
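One possible explanation, offered here as an assumption rather than something verified on this system: su - starts a login shell and resets the environment, so a PGDATA exported in the calling root shell never reaches initdb, and the value falls back to whatever the postgres user's own profile defines. A sketch that sets the variable explicitly inside the command sidesteps that:
su - postgres -c 'PGDATA=/custom_container/database /usr/pgsql-9.3/bin/initdb'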
Problem: When running
service postgresql-9.3 start
I get an error with the log
postgresql-9.3.service - PostgreSQL 9.3 database server
Loaded: loaded (/usr/lib/systemd/system/postgresql-9.3.service; disabled)
Active: failed (Result: exit-code) since Mon 2014-11-10 15:24:15 CET; 1s ago
Process: 2785 ExecStartPre=/usr/pgsql-9.3/bin/postgresql93-check-db-dir ${PGDATA} (code=exited, status=1/FAILURE)
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: Starting PostgreSQL 9.3 database server...
Nov 10 15:24:15 CentOS-70-64-minimal postgresql93-check-db-dir[2785]: "/var/lib/pgsql/9.3/data/" is missing or empty.
Nov 10 15:24:15 CentOS-70-64-minimal postgresql93-check-db-dir[2785]: Use "/usr/pgsql-9.3/bin/postgresql93-setup initdb" to initialize t...ster.
Nov 10 15:24:15 CentOS-70-64-minimal postgresql93-check-db-dir[2785]: See %{_pkgdocdir}/README.rpm-dist for more information.
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: postgresql-9.3.service: control process exited, code=exited status=1
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: Failed to start PostgreSQL 9.3 database server.
Nov 10 15:24:15 CentOS-70-64-minimal systemd[1]: Unit postgresql-9.3.service entered failed state.
This means that PostgreSQL, even though the cluster is initialized in the new $PGDATA directory (/custom_container/database), still looks for the cluster in /var/lib/pgsql/9.3/data/.
Did anyone experience this Postgresql behavior before? Could it be that I forgot certain configuration options or that the problem comes from Postgresql installation?
Thank you in advance!
It appears the real problem was setting the environment variables, which I got working in the following thread:
Centos 7 environment variables for Postgres service
The issue is the PGDATA variable set inside the custom /etc/systemd/system/postgresql-9.3.service, which should be created from the contents of /usr/lib/systemd/system/postgresql-9.3.service (which uses the default PGDATA).
You need to create a custom postgresql.service file in /etc/systemd/system/ which overrides the default PGDATA environment variable. Your custom service file can .include the default postgresql service file, so you only need to add what you want to change. That way, upgrades can still modify or improve the default service file while your change is preserved.
This is how I just did it in Centos 7:
cat <<END >/etc/systemd/system/postgresql.service
.include /lib/systemd/system/postgresql.service
[Service]
# Set PGDATA to your desired data directory
Environment=PGDATA=/mnt/postgres/data
END
systemctl daemon-reload
systemctl restart postgresql.service
Verify:
ps -ax | grep [p]ostgres
Update:
Rather than manually creating the file and adding the .include line, you can also use the systemd built-in way:
systemctl edit postgresql.service
This will open your default editor and save your changes to /etc/systemd/system/postgresql.service.d/override.conf
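If you go the systemctl edit route, the drop-in only needs the setting you want to override; a sketch using the question's /custom_container/database path:
# /etc/systemd/system/postgresql.service.d/override.conf
[Service]
Environment=PGDATA=/custom_container/database
Save it and restart the service; systemctl edit reloads the unit files for you.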
Try this:
## Login with postgres user
su - postgres
export PGDATA=/your_path/data
pg_ctl -D $PGDATA start &
I think the most "CentOS 7 way" to do it is to copy the service file:
sudo cp /usr/lib/systemd/system/postgresql-9.6.service /etc/systemd/system/postgresql-9.6.service
Then edit the file /etc/systemd/system/postgresql-9.6.service:
# Location of database directory
Environment=PGDATA=/mnt/volume/var/lib/pgsql/9.6/data/
Then start it with sudo systemctl start postgresql-9.6 and verify:
# sudo ps -ax | grep postmaster
32100 ? Ss 0:00 /usr/pgsql-9.6/bin/postmaster -D /mnt/volume/var/lib/pgsql/9.6/data/
Try editing the file /etc/init.d/postgresql-9.3:
PGDATA=/your/custom/path

Monit does not start Node script

I've installed and (hopefully) configured Monit by creating a new task in /etc/monit.d (on CentOS 6.5).
My task file is called test:
check host test with address 127.0.0.1
start program = "/usr/local/bin/node /var/node/test/index.js" as uid node and gid node
stop program = "/usr/bin/pkill -f 'node /var/node/test/index.js'"
if failed port 7000 protocol HTTP
request /
with timeout 10 seconds
then restart
When I run:
service monit restart
In my monit logs appears:
[CEST Jul 4 09:50:43] info : monit daemon with pid [21946] killed
[CEST Jul 4 09:50:43] info : 'nsxxxxxx.ip-xxx-xxx-xxx.eu' Monit stopped
[CEST Jul 4 09:50:47] info : 'nsxxxxxx.ip-xxx-xxx-xxx.eu' Monit started
[CEST Jul 4 09:50:47] error : 'test' failed, cannot open a connection to INET[127.0.0.1:7000] via TCP
[CEST Jul 4 09:50:47] info : 'test' trying to restart
[CEST Jul 4 09:50:47] info : 'test' stop: /usr/bin/pkill
[CEST Jul 4 09:50:47] info : 'test' start: /usr/local/bin/node
I don't understand why the script does not work; if I run it from the command line with:
su node # user created for node scripts
node /var/node/test/index.js
everything works correctly...
I've followed this tutorial.
How can I fix this problem? Thanks
The same was not working for me either. What I did was make a start/stop script and pass that script to the start program and stop program parameters in Monit.
You can find a sample start/stop script here; a minimal sketch is also shown below.
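A stripped-down sketch of such an init script, reusing the node user and paths from the question (the script name, pid file, and log file are assumptions):
#!/bin/sh
# /etc/init.d/my-node-app - minimal start/stop wrapper for the Node app
NODE=/usr/local/bin/node
APP=/var/node/test/index.js
PIDFILE=/var/run/my-node-app.pid
LOGFILE=/var/node/test/app.log

case "$1" in
  start)
    # Run the app in the background as the "node" user and record its PID
    su -s /bin/sh -c "$NODE $APP >> $LOGFILE 2>&1 & echo \$!" node > "$PIDFILE"
    ;;
  stop)
    # Kill the recorded PID and clean up the pid file
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    ;;
  *)
    echo "Usage: $0 {start|stop}"
    exit 1
    ;;
esac
Make it executable with chmod +x and point Monit's start and stop programs at it, as in the configuration below.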
Below is my Monit configuration for the Node.js app:
check host my-node-app with address 127.0.0.1
start program = "/etc/init.d/my-node-app start"
stop program = "/etc/init.d/my-node-app stop"
if failed port 3002 protocol HTTP
request /
with timeout 5 seconds
then restart
if 5 restarts within 5 cycles then timeout
