How to use systemctl to restart a service - linux

I created a system service which can be started/stopped/restarted by running the service command. But when I try to do the same with systemctl, I always get the error below. I don't know what's wrong with my script file. Why can it be managed by service but not by systemctl?
# systemctl restart cooltoo_storage
Job for cooltoo_storage.service failed because the control process exited with error code. See "systemctl status cooltoo_storage.service" and "journalctl -xe" for details.
Below is the output of "systemctl status cooltoo_storage.service"
# systemctl status cooltoo_storage.service
● cooltoo_storage.service - LSB: cooltoo storage provider
Loaded: loaded (/etc/rc.d/init.d/cooltoo_storage)
Active: failed (Result: exit-code) since Tue 2016-05-03 17:17:12 CST; 1min 29s ago
Docs: man:systemd-sysv-generator(8)
Process: 32268 ExecStart=/etc/rc.d/init.d/cooltoo_storage start (code=exited, status=203/EXEC)
May 03 17:17:12 Cool-Too systemd[1]: Starting LSB: cooltoo storage provider...
May 03 17:17:12 Cool-Too systemd[1]: cooltoo_storage.service: control process exited, code=exited status=203
May 03 17:17:12 Cool-Too systemd[1]: Failed to start LSB: cooltoo storage provider.
May 03 17:17:12 Cool-Too systemd[1]: Unit cooltoo_storage.service entered failed state.
May 03 17:17:12 Cool-Too systemd[1]: cooltoo_storage.service failed.
Below is my script file /etc/init.d/cooltoo_storage:
### BEGIN INIT INFO
# Provides: cooltoo storage provider
# Required-Start: $local_fs $remote_fs $network $time $named
# Should-Start: $time sendmail
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: cooltoo storage provider
# Description: cooltoo storage provider
# chkconfig: 2345 90 90
### END INIT INFO
start() {
nginx -c /data/nginx/nginx.config
}
stop() {
nginx -c /data/nginx/nginx.config -s stop
}
restart() {
nginx -c /data/nginx/nginx.config -s reload
}
status() {
echo ''
}
action="$#"
if [[ "$MODE" == "auto" && -n "$init_script" ]] || [[ "$MODE" == "service" ]]; then
action="$1"
shift
fi
case "$action" in
start)
start ; exit $?;;
stop)
stop ; exit $?;;
restart)
restart ; exit $?;;
status)
status ; exit $?;;
*)
echo "Usage: $0 {start|stop|restart|force-reload|status|run}"; exit 1;
esac

Related

RHEL 8 systemctl startup script fails

I have an application startup script that works correctly under RHEL 7, using both the "service" and "systemctl" commands. However, when I use the same startup script and service config file on RHEL 8, the "service" command will stop and start the application, but the "systemctl" command will not start it.
The contents of the alm.service file are:
[Unit]
Description=Application Lifecycle Management (ALM)
Documentation=https://software.microfocus.com/en-us/solutions/software-development-lifecycle
After=opt-repository.mount
[Service]
Type=forking
PIDFile=/var/opt/HP/ALM/runtime/MFALM.pid
User=alm
ExecStart=/etc/init.d/alm start
ExecStop=/etc/init.d/alm stop
[Install]
WantedBy=multi-user.target
The portion of the /etc/init.d/alm script that starts the application is:
#!/bin/bash
#
# chkconfig: 35 20 80
# Description: QC Application LifeCycle Management
#
export LANG="en_US.utf8"
source /etc/profile
start () {
echo -n "Starting HP ALM: "
#Log as alm if account is not the good one
testfile=/opt/repository/base_install/.test_$RANDOM_`date|sed 's/ //g'`
testcmd="touch $testfile"
rmcmd="rm $testfile"
if [ `id -u` -eq 0 ]; then
testcmd="su alm -c \"touch $testfile\""
rmcmd="su alm -c \"rm $testfile\""
fi
echo
eval "$testcmd" 2>/dev/null
if [ ! $? -eq 0 ]; then
echo $'\n'"$0 ERROR: UNABLE to access REPOSITORY (touch $testfile) on read-write mode !"
echo $'\n'"!!! ABORTING ALM restart !!!"
echo $'\n'"But this could also be caused by this `uname -n` client needing reboot due to NFS hang."
else
eval "$rmcmd"
cd /var/opt/HP/ALM/wrapper
id
ls -l HPALM
# ./HPALM start
/var/opt/HP/ALM/wrapper/HPALM start
echo "HPALM returned Status: " $?
ls -l /var/opt/HP/ALM/runtime/MFALM.pid
fi
}
The HPALM script forks the daemon process and creates the /var/opt/HP/ALM/runtime/MFALM.pid file when the process is started.
When I do service alm start the application starts correctly.
However, when I do systemctl start alm I get the following error in journalctl -xe
Oct 03 21:18:29 12ee04f4556c44c alm[1311785]: Starting QC ALM:
Oct 03 21:18:29 12ee04f4556c44c alm[1311785]: user ID uid=(alm) gid=(team_alm_L4_users)
Oct 03 21:18:29 12ee04f4556c44c alm[1311785]: calling ./HPALM start
Oct 03 21:18:29 12ee04f4556c44c alm[1311829]: uid=(alm) gid=(team_alm_L4_users)
Oct 03 21:18:29 12ee04f4556c44c alm[1311830]: -rwxr-xr-x. 1 alm team_alm_L4_users 54983 Oct 3 21:06 HPALM
Oct 03 21:18:29 12ee04f4556c44c alm[1311831]: /etc/init.d/alm: line 36: /var/opt/HP/ALM/wrapper/HPALM: Permission denied
Oct 03 21:18:29 12ee04f4556c44c alm[1311785]: HPALM returned Status: 126
Oct 03 21:18:29 12ee04f4556c44c alm[1311832]: ls: cannot access '/var/opt/HP/ALM/runtime/MFALM.pid': No such file or directory
The RHEL 8 release version is Red Hat Enterprise Linux release 8.6 (Ootpa)
These same files and scripts work correctly on RHEL 7.
Does anyone know what has changed with RHEL 8 that would cause this to fail with systemctl but allow it to work with service?
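One difference worth ruling out between the two releases (an assumption to verify, not a confirmed diagnosis) is an SELinux denial: "Permission denied" on a file whose mode bits look correct is the classic symptom, and a process spawned by systemd runs in a different SELinux context than one started from an interactive shell. A sketch of the check:
ausearch -m avc -ts recent               # recent SELinux denials, if any
ls -Z /var/opt/HP/ALM/wrapper/HPALM      # the file's current SELinux context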

Why is my Postgres database working for a while and then not able to "start server" once restarted?

Recently, I've started playing around with an old Raspberry Pi 3 B+, and I thought it would be good practice to host a Postgres database on my local network and use it for whatever I want to work through. I understand that running Postgres on a Raspberry Pi with 1GB of memory is not ideal and can take a toll on the SD card, but I've updated the postgresql.conf file and pointed the data directory path at a 1TB SSD. Additionally, I've installed zram and log2ram to try to curb some of the overhead on the SD card.
Overview of tech I'm working with:
Raspberry Pi 3 B+
Postgres 12
Ubuntu server 20.04 (no gui, only working from terminal)
1TB SSD
Yesterday, I was writing to the Postgres db from a python notebook without any issue, but once I restarted the Raspberry Pi, I was unable to reach the db from DataGrip and would receive the following error from my terminal in Ubuntu:
psql: error: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I checked the status of the Postgres server, and that seemed to be alright:
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Thu 2021-01-28 13:34:41 UTC; 20min ago
Process: 1895 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
Main PID: 1895 (code=exited, status=0/SUCCESS)
Jan 28 13:34:41 ubuntu systemd[1]: Starting PostgreSQL RDBMS...
Jan 28 13:34:41 ubuntu systemd[1]: Finished PostgreSQL RDBMS.
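Note that this status is less conclusive than it looks: on Debian/Ubuntu packaging, postgresql.service is an umbrella unit whose ExecStart is /bin/true (visible above), so it reports success regardless of whether the actual cluster came up. Two commands that reflect the real cluster state (the unit name assumes the version-12 cluster is called main):
systemctl status postgresql@12-main.service
pg_lsclusters    # lists each cluster and whether it is up or down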
This is what is provided in the postgresql-12-main.log:
2021-01-28 13:17:23.344 UTC [1889] LOG: starting PostgreSQL 12.5 (Ubuntu 12.5-0ubuntu0.20.04.1) on aarch64-unknown-linux-gnu, compiled by gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0, 64-bit
2021-01-28 13:17:23.362 UTC [1889] LOG: listening on IPv4 address "0.0.0.0", port 5432
2021-01-28 13:17:23.362 UTC [1889] LOG: listening on IPv6 address "::", port 5432
2021-01-28 13:17:23.365 UTC [1889] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2021-01-28 13:17:23.664 UTC [1899] LOG: database system was shut down at 2021-01-28 01:43:38 UTC
2021-01-28 13:17:24.619 UTC [1899] LOG: could not link file "pg_wal/xlogtemp.1899" to "pg_wal/000000010000000000000002": Operation not permitted
2021-01-28 13:17:24.670 UTC [1899] FATAL: could not open file "pg_wal/000000010000000000000002": No such file or directory
2021-01-28 13:17:24.685 UTC [1889] LOG: startup process (PID 1899) exited with exit code 1
2021-01-28 13:17:24.686 UTC [1889] LOG: aborting startup due to startup process failure
2021-01-28 13:17:24.708 UTC [1889] LOG: database system is shut down
pg_ctl: could not start server
Examine the log output.
Please let me know if you have any questions or if you would like for me to include any additional information. I appreciate ahead of time any pointers you may have.
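One line in that log deserves emphasis: "could not link file ... Operation not permitted" comes from the link() system call, and filesystems such as FAT/exFAT, common factory defaults on external drives, do not support the hard links PostgreSQL uses when recycling WAL segments. A quick check of what actually backs the data directory (the path here is a placeholder; substitute your data_directory setting):
findmnt -T /path/to/your/data_directory    # shows the filesystem type backing the path
df -T /path/to/your/data_directory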
This is what the /etc/init.d/postgresql file looks like:
#!/bin/sh
set -e
### BEGIN INIT INFO
# Provides: postgresql
# Required-Start: $local_fs $remote_fs $network $time
# Required-Stop: $local_fs $remote_fs $network $time
# Should-Start: $syslog
# Should-Stop: $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: PostgreSQL RDBMS server
### END INIT INFO
# Setting environment variables for the postmaster here does not work; please
# set them in /etc/postgresql/<version>/<cluster>/environment instead.
[ -r /usr/share/postgresql-common/init.d-functions ] || exit 0
. /usr/share/postgresql-common/init.d-functions
# versions can be specified explicitly
if [ -n "$2" ]; then
versions="$2 $3 $4 $5 $6 $7 $8 $9"
else
get_versions
fi
case "$1" in
start|stop|restart|reload)
if [ "$1" = "start" ]; then
create_socket_directory
fi
if [ -z "`pg_lsclusters -h`" ]; then
log_warning_msg 'No PostgreSQL clusters exist; see "man pg_createcluster"'
exit 0
fi
for v in $versions; do
$1 $v || EXIT=$?
done
exit ${EXIT:-0}
;;
status)
LS=`pg_lsclusters -h`
# no clusters -> unknown status
[ -n "$LS" ] || exit 4
echo "$LS" | awk 'BEGIN {rc=0} {if (match($4, "down")) rc=3; printf ("%s/%s (port %s): %s\n", $1, $2, $3, $4)}; END {exit rc}'
;;
force-reload)
for v in $versions; do
reload $v
done
;;
*)
echo "Usage: $0 {start|stop|restart|reload|force-reload|status} [version ..]"
exit 1
;;
esac
exit 0
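For what it's worth, this wrapper only dispatches start/stop/restart/reload to every configured cluster through the helpers in init.d-functions; on Debian/Ubuntu packaging the direct per-cluster equivalent is pg_ctlcluster, which can be easier when troubleshooting a single cluster (the version and cluster name here assume the default 12/main):
pg_ctlcluster 12 main start     # start only the version-12 cluster named "main"
pg_ctlcluster 12 main status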
config file (partly):
#------------------------------------------------------------------------------
# FILE LOCATIONS
#------------------------------------------------------------------------------
# The default values of these variables are driven from the -D command-line
# option or PGDATA environment variable, represented here as ConfigDir.
#data_directory = 'ConfigDir' # use data in another directory
# (change requires restart)
#hba_file = 'ConfigDir/pg_hba.conf' # host-based authentication file
# (change requires restart)
#ident_file = 'ConfigDir/pg_ident.conf' # ident configuration file
# (change requires restart)
# If external_pid_file is not explicitly set, no extra PID file is written.
#external_pid_file = '' # write an extra PID file
# (change requires restart)
/etc/init.d/postgresql (partly):
NOTE: this is from a non-standard installation. YMMV
# Data directory
#PGDATA="/data/db/postgres"
#PGDATA="/data/db/postgres/pgdata"
#PGDATA="/data/db/postgres-12/pgdata"
PGDATA="/data/db/postgres-11/pgdata"
(when upgrading, I tend to keep the commented-out older setting for reference)
Note: the config file is not edited; every path refers to the ConfigDir (by default).
Additionally, for Postgres on a Pi, I set:
random_page_cost = 1.1
shared_buffers = 128MB
#work_mem = 4MB # keep the low default
effective_cache_size = 3GB # This is for a RaspberryPi-4
# for a Pi-3, I'd use ~700M
Okay, I think I've figured it out. Might be overkill but it works:
First thing I did was format and mount my 1TB SSD. Here is a good video walkthrough for formatting to ext4 and mounting. The one difference from the video is that I've updated the fstab file to check my SSD during bootup, i.e. "0 2" at the end of the SSD mount options instead of "0 0".
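For reference, the resulting fstab entry would look something like this (a sketch; the UUID and mount point are placeholders for illustration):
UUID=xxxx-xxxx  /mnt/ssd  ext4  defaults  0  2
The last field is the fsck pass number: 2 has the drive checked at boot after the root filesystem, whereas 0 skips checking entirely.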
Secondly, I installed Postgres. Here is a good walkthrough for that. The directions provided in that blog were more than I needed, but a good walkthrough nonetheless. I simply installed Postgres with:
sudo apt install postgresql postgresql-contrib
Third, I followed this walkthrough until the end of step two, but before beginning step two, I added a symbolic link at /var/lib/postgresql/12/main pointing to /YOUR/MOUNT/POSITION/postgresql/12/main by executing (note that ln -s takes the target first and the link name second):
ln -s /YOUR/MOUNT/POSITION/postgresql/12/main /var/lib/postgresql/12/main
Lastly, before restarting the postgres server, I used this website to help me better configure my server. Enter your specs and it should give you some useful configuration settings.
If I remember anything I've left out I'll try and come back and edit this post. Otherwise comment if anything doesn't make sense or is unclear.

Fail to start a process as a service? (code=exited, status=203/EXEC)

I need to start the Cassandra process as a service. To do that, I placed the following script inside the /etc/init.d directory, as per the doc:
#!/bin/bash
# chkconfig: 2345 99 01
# description: Cassandra
. /etc/rc.d/init.d/functions
CASSANDRA_HOME=/home/cassandra/binaries/cassandra
CASSANDRA_BIN=$CASSANDRA_HOME/bin/cassandra
CASSANDRA_NODETOOL=$CASSANDRA_HOME/bin/nodetool
CASSANDRA_LOG=$CASSANDRA_HOME/logs/cassandra.log
CASSANDRA_PID=$CASSANDRA_HOME/cassandra.pid
CASSANDRA_LOCK=$CASSANDRA_HOME/cassandra.lock
PROGRAM="cassandra"
if [ ! -f $CASSANDRA_BIN ]; then
echo "File not found: $CASSANDRA_BIN"
exit 1
fi
RETVAL=0
start() {
if [ -f $CASSANDRA_PID ] && checkpid `cat $CASSANDRA_PID`; then
echo "Cassandra is already running."
exit 0
fi
echo -n $"Starting $PROGRAM: "
$CASSANDRA_BIN -p $CASSANDRA_PID -R >> $CASSANDRA_LOG 2>&1
sleep 60
RETVAL=$?
if [ $RETVAL -eq 0 ]; then
touch $CASSANDRA_LOCK
echo_success
else
echo_failure
fi
echo
return $RETVAL
}
stop() {
if [ ! -f $CASSANDRA_PID ]; then
echo "Cassandra is already stopped."
exit 0
fi
echo -n $"Stopping $PROGRAM: "
# $CASSANDRA_NODETOOL -h 127.0.0.1 decommission
if kill `cat $CASSANDRA_PID`; then
RETVAL=0
rm -f $CASSANDRA_LOCK
echo_success
else
RETVAL=1
echo_failure
fi
echo
[ $RETVAL = 0 ]
}
status_fn() {
if [ -f $CASSANDRA_PID ] && checkpid `cat $CASSANDRA_PID`; then
echo "Cassandra is running."
exit 0
else
echo "Cassandra is stopped."
exit 1
fi
}
case "$1" in
start)
start
;;
stop)
stop
;;
status)
status_fn
;;
restart)
stop
start
;;
*)
echo $"Usage: $PROGRAM {start|stop|restart|status}"
RETVAL=3
esac
exit $RETVAL
However, when I execute service cassandra start, it ends up with the following error.
[root@casstestnode1 init.d]# service cassandra start
Starting cassandra (via systemctl): Job for cassandra.service failed because the control process exited with error code. See "systemctl status cassandra.service" and "journalctl -xe" for details.
[FAILED]
[root@casstestnode1 init.d]#
and
[root@casstestnode1 init.d]# systemctl status cassandra.service
● cassandra.service - SYSV: Cassandra
Loaded: loaded (/etc/rc.d/init.d/cassandra)
Active: failed (Result: exit-code) since Sun 2019-11-10 22:23:07 IST; 41s ago
Docs: man:systemd-sysv-generator(8)
Process: 2624 ExecStart=/etc/rc.d/init.d/cassandra start (code=exited, status=203/EXEC)
Nov 10 22:23:07 casstestnode1 systemd[1]: Starting SYSV: Cassandra...
Nov 10 22:23:07 casstestnode1 systemd[1]: cassandra.service: control process exited, code=exited status=203
Nov 10 22:23:07 casstestnode1 systemd[1]: Failed to start SYSV: Cassandra.
Nov 10 22:23:07 casstestnode1 systemd[1]: Unit cassandra.service entered failed state.
Nov 10 22:23:07 casstestnode1 systemd[1]: cassandra.service failed.
[root@casstestnode1 init.d]#
I rebooted the VM and retried, and the error is still the same. Any Linux experts, please help.
Additionally, here is the output of journalctl -xe:
[root@casstestnode1 init.d]# journalctl -xe
-- Unit session-3153.scope has begun starting up.
Nov 11 23:55:01 casstestnode1 systemd[1]: Started Session 3154 of user cassandra.
-- Subject: Unit session-3154.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-3154.scope has finished starting up.
--
-- The start-up result is done.
Nov 11 23:55:01 casstestnode1 systemd[1]: Starting Session 3154 of user cassandra.
-- Subject: Unit session-3154.scope has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-3154.scope has begun starting up.
Nov 11 23:55:01 casstestnode1 CROND[24584]: (cassandra) CMD (. /home/cassandra/test/cr.sh)
Nov 11 23:55:01 casstestnode1 CROND[24585]: (cassandra) CMD (date >> /home/cassandra/test/ll.txt)
Nov 11 23:55:01 casstestnode1 postfix/pickup[23967]: A5210F1C27: uid=995 from=<cassandra>
Nov 11 23:55:01 casstestnode1 postfix/cleanup[24090]: A5210F1C27: message-id=<20191111182501.A5210F1C27@casstestnode1.localdomain>
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A5210F1C27: from=<cassandra@casstestnode1.localdomain>, size=837, nrcpt=1 (queue activ
Nov 11 23:55:01 casstestnode1 postfix/pickup[23967]: A8301F1C29: uid=995 from=<cassandra>
Nov 11 23:55:01 casstestnode1 postfix/cleanup[24315]: A8301F1C29: message-id=<20191111182501.A8301F1C29@casstestnode1.localdomain>
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A8301F1C29: from=<cassandra@casstestnode1.localdomain>, size=845, nrcpt=1 (queue activ
Nov 11 23:55:01 casstestnode1 postfix/local[24021]: A5210F1C27: to=<cassandra@casstestnode1.localdomain>, orig_to=<cassandra>, relay=loc
Nov 11 23:55:01 casstestnode1 postfix/cleanup[24090]: A987AF1C2A: message-id=<20191111182501.A987AF1C2A@casstestnode1.localdomain>
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A987AF1C2A: from=<>, size=2878, nrcpt=1 (queue active)
Nov 11 23:55:01 casstestnode1 postfix/bounce[24164]: A5210F1C27: sender non-delivery notification: A987AF1C2A
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A5210F1C27: removed
Nov 11 23:55:01 casstestnode1 postfix/local[23698]: A8301F1C29: to=<cassandra@casstestnode1.localdomain>, orig_to=<cassandra>, relay=loc
Nov 11 23:55:01 casstestnode1 postfix/cleanup[24315]: AA5AFF1C27: message-id=<20191111182501.AA5AFF1C27@casstestnode1.localdomain>
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: AA5AFF1C27: from=<>, size=2886, nrcpt=1 (queue active)
Nov 11 23:55:01 casstestnode1 postfix/bounce[24164]: A8301F1C29: sender non-delivery notification: AA5AFF1C27
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A8301F1C29: removed
Nov 11 23:55:01 casstestnode1 postfix/local[24021]: A987AF1C2A: to=<cassandra@casstestnode1.localdomain>, relay=local, delay=0.01, delay
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A987AF1C2A: removed
Nov 11 23:55:01 casstestnode1 postfix/local[23698]: AA5AFF1C27: to=<cassandra@casstestnode1.localdomain>, relay=local, delay=0.01, delay
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: AA5AFF1C27: removed
lines 1923-1959/1959 (END)
[root@casstestnode1 init.d]#
When I manually run the script:
[root@casstestnode1 init.d]# ./cassandra start
Starting cassandra: [ OK ]
[root@casstestnode1 init.d]# ./cassandra status
Cassandra is running.
[root@casstestnode1 init.d]# service cassandra status
Cassandra is running.
[root@casstestnode1 init.d]# ./cassandra stop
Stopping cassandra: [ OK ]
[root@casstestnode1 init.d]# ./cassandra status
Cassandra is stopped.
[root@casstestnode1 init.d]# service cassandra status
Cassandra is stopped.
[root@casstestnode1 init.d]#
As per the systemd.exec man page:
203 EXIT_EXEC: The actual process execution failed (specifically, the execve(2) system call). Most likely this is caused by a missing or non-accessible executable file.
So the root cause is one of the following:
the exact path is different, not /etc/rc.d/init.d/cassandra - you can figure it out using find /etc -name cassandra
the file is not executable - less likely, as you can run it manually.
Firstly, it is possible you are having an acute form of copypasteitis:
the init script was obtained from one location ("the internet")
the Cassandra files are located in a different path than the init script assumes
Your error claims:
Nov 10 22:23:07 casstestnode1 systemd[1]: cassandra.service: control process exited, code=exited status=203
This 203 usually means some executable was not found upon an exec*() system call (i.e. exec was called with a non-existent executable path).
Secondly, you are on a systemd-based distribution, so there is no need to use a System V (old, obsolete) init script. Let's see how to get you both on board and up to date:
find somebody else's supposedly working systemd unit file for cassandra (e.g. this one)
open the systemd documentation on the side: the official Red Hat systemd doc
put the unit file in the right location and edit it; a sketch is shown below
after you've edited it, run systemctl daemon-reload
now try the systemd way to launch Cassandra: systemctl start cassandra
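For illustration, here is a minimal sketch of such a unit file, assembled from the paths in the init script above; the user/group names and the Restart policy are assumptions to adjust for your installation, and the file would go at /etc/systemd/system/cassandra.service:
[Unit]
Description=Apache Cassandra
After=network.target
[Service]
Type=forking
User=cassandra
Group=cassandra
# cassandra daemonizes by default; -p writes the PID file systemd will track
ExecStart=/home/cassandra/binaries/cassandra/bin/cassandra -p /home/cassandra/binaries/cassandra/cassandra.pid
PIDFile=/home/cassandra/binaries/cassandra/cassandra.pid
Restart=on-failure
[Install]
WantedBy=multi-user.target
After placing it, run systemctl daemon-reload and systemctl start cassandra as described above.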
I found an answer from this link. The cause was a blank line at the beginning of my cassandra script, before #!/bin/bash. Once I removed it, I executed the two commands below and the issue was fixed.
chkconfig --add cassandra
systemctl daemon-reload
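A quick way to spot that kind of problem is to inspect the first bytes of the script, since the kernel only honors the #! interpreter line when it starts at the very first byte of the file (a diagnostic sketch):
head -n 2 /etc/rc.d/init.d/cassandra | cat -A
# cat -A makes stray blank lines, trailing carriage returns and a UTF-8
# BOM visible; #!/bin/bash must be the first line with nothing before it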

Debian init.d script fails to sleep

This problem occurs on a Pogoplug E02 running Debian jessie.
At startup the network interface takes several seconds to come online. A short delay is required after the "networking" script completes to ensure that ensuing network operations occur properly.
I wrote the following script and inserted it using update-rc.d. The script was inserted correctly and executes at boot time in the proper sequence: after networking and before the network-dependent scripts, which were modified to depend on netdelay.
cat /etc/init.d/netdelay
#! /bin/sh
### BEGIN INIT INFO
# Provides: netdelay
# Required-Start: networking
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Delay 5s after eth0 up for Pogoplug
# Description:
### END INIT INFO
PATH=/sbin:/usr/sbin:/bin:/usr/bin
. /lib/init/vars.sh
. /lib/lsb/init-functions
log_action_msg "Pausing for eth0 to come online"
/bin/sleep 5
log_action_msg "Continuing"
exit 0
When the script executes at startup there is no delay. I've used both sleep and /bin/sleep in the script, but neither effects the desired delay. The boot log showing this is attached below.
Thu Jan 1 00:00:25 1970: Configuring network interfaces...done.
Thu Jan 1 00:00:25 1970: INIT: Entering runlevel: 2
Thu Jan 1 00:00:25 1970: Using makefile-style concurrent boot in runlevel 2.
Thu Jan 1 00:00:26 1970: Starting SASL Authentication Daemon: saslauthd.
Thu Jan 1 00:00:29 1970: Pausing for eth0 to come online.
Thu Jan 1 00:00:30 1970: Continuing.
Thu Jan 1 00:00:33 1970: ntpdate updating system time.
Wed Feb 1 05:33:40 2017: Starting enhanced syslogd: rsyslogd.
(The Pogoplug has no hardware clock and has no idea what time it is until ntpdate has run.)
Can someone see where the problem might be?
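One way to verify whether the sleep actually runs, rather than trusting the boot log (whose timestamps can be skewed by buffered console output and the missing hardware clock), is to make the script log its own timestamps; a sketch:
log_action_msg "Pausing for eth0 to come online ($(date +%T))"
/bin/sleep 5
log_action_msg "Continuing ($(date +%T))"
If the two messages carry the same second, the sleep is genuinely being skipped; if they are 5 seconds apart, the delay works and the problem lies in when eth0 actually comes up relative to this script.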

cygnus service not starting as a service

I have installed Cygnus using RPMs on my CentOS 7.0, but I can't start it as a service:
[centos@cygnus-mongo ~]$ sudo service cygnus start
Starting cygnus (via systemctl): Job for cygnus.service failed. See 'systemctl status cygnus.service' and 'journalctl -xn' for details.
[FAILED]
Here is the error log:
[centos@cygnus-mongo ~]$ sudo systemctl status cygnus.service
cygnus.service - SYSV: cygnus
Loaded: loaded (/etc/rc.d/init.d/cygnus)
Active: failed (Result: exit-code) since Tue 2016-02-23 07:09:48 UTC; 18s ago
Process: 1184 ExecStart=/etc/rc.d/init.d/cygnus start (code=exited, status=1/FAILURE)
Feb 23 07:09:46 cygnus-mongo.novalocal systemd[1]: Starting SYSV: cygnus...
Feb 23 07:09:46 cygnus-mongo.novalocal su[1189]: (to cygnus) root on none
Feb 23 07:09:46 cygnus-mongo.novalocal cygnus[1184]: Starting Cygnus mongo... bash: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Feb 23 07:09:46 cygnus-mongo.novalocal cygnus[1184]: bash: /var/log/cygnus//var/log/cygnus/cygnus.log: No such file or directory
Feb 23 07:09:48 cygnus-mongo.novalocal cygnus[1184]: cat: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Feb 23 07:09:48 cygnus-mongo.novalocal cygnus[1184]: [FAILED]
Feb 23 07:09:48 cygnus-mongo.novalocal cygnus[1184]: rm: cannot remove ‘/var/run/cygnus/cygnus_mongo.pid’: No such file or directory
Feb 23 07:09:48 cygnus-mongo.novalocal systemd[1]: cygnus.service: control process exited, code=exited status=1
Feb 23 07:09:48 cygnus-mongo.novalocal systemd[1]: Failed to start SYSV: cygnus.
Feb 23 07:09:48 cygnus-mongo.novalocal systemd[1]: Unit cygnus.service entered failed state.
[centos@cygnus-mongo ~]$ sudo journalctl -xn
-- Logs begin at Tue 2016-02-23 07:08:59 UTC, end at Tue 2016-02-23 07:10:57 UTC. --
Feb 23 07:10:33 cygnus-mongo.novalocal systemd[1]: Dependency failed for /mnt.
-- Subject: Unit mnt.mount has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mnt.mount has failed.
--
-- The result is dependency.
Feb 23 07:10:33 cygnus-mongo.novalocal systemd[1]: Dependency failed for File System Check on /dev/vdb.
-- Subject: Unit systemd-fsck@dev-vdb.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-fsck@dev-vdb.service has failed.
--
-- The result is dependency.
Feb 23 07:10:33 cygnus-mongo.novalocal systemd[1]: Startup finished in 1.659s (kernel) + 2.841s (initrd) + 1min 31.190s (userspace) = 1min 35.691s.
-- Subject: System start-up is now complete
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- All system services necessary queued for starting at boot have been
-- successfully started. Note that this does not mean that the machine is
-- now idle as services might still be busy with completing start-up.
--
-- Kernel start-up required 1659184 microseconds.
--
-- Initial RAM disk start-up required 2841741 microseconds.
--
-- Userspace start-up required 91190356 microseconds.
Feb 23 07:10:47 cygnus-mongo.novalocal dhclient[1068]: DHCPREQUEST on eth0 to 192.168.111.71 port 67 (xid=0x6acae4e0)
Feb 23 07:10:48 cygnus-mongo.novalocal dhclient[1068]: DHCPACK from 192.168.111.71 (xid=0x6acae4e0)
Feb 23 07:10:50 cygnus-mongo.novalocal dhclient[1068]: bound to 192.168.111.128 -- renewal in 44 seconds.
Feb 23 07:10:57 cygnus-mongo.novalocal sudo[1255]: centos : TTY=pts/0 ; PWD=/home/centos ; USER=root ; COMMAND=/bin/journalctl -xn
Here is the service file that I did not change:
[centos@cygnus-mongo ~]$ cat /etc/rc.d/init.d/cygnus
#!/bin/bash
# Copyright 2014 Telefonica Investigación y Desarrollo, S.A.U
#
# This file is part of fiware-cygnus (FI-WARE project).
#
# fiware-cygnus is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# fiware-cygnus is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
#
# You should have received a copy of the GNU Affero General Public License along with fiware-cygnus. If not, see
# http://www.gnu.org/licenses/.
#
# For those usages not covered by the GNU Affero General Public License please contact with iot_support at tid dot es
#
# cygnus Start/Stop cygnus
#
# chkconfig: 2345 99 60
# description: cygnus
# Load some fancy functions for init.d
. /etc/rc.d/init.d/functions
PARAM=$1
CYGNUS_INSTANCE=${2}
COMPONENT_NAME=cygnus
PREFIX=/usr
CYGNUS_DIR=${PREFIX}/cygnus
FLUME_EXECUTABLE=${CYGNUS_DIR}/bin/cygnus-flume-ng
CYGNUS_USER=cygnus
cygnus_start()
{
local result=0
local cygnus_instance=${1}
if [[ ! -x ${FLUME_EXECUTABLE} ]]; then
printf "%s\n" "Fail - ${FLUME_EXECUTABLE} not exists or is not executable."
exit 1
fi
if [[ $(ls -l ${CYGNUS_DIR}/conf/cygnus_instance_${cygnus_instance}*.conf 2> /dev/null | wc -l) -eq 0 ]]; then
if [[ ${cygnus_instance} == "" ]]; then
printf "%s\n" "There aren't any instance of Cygnus configured. Refer to file /usr/cygnus/conf/README.md for further information."
else
printf "%s\n" "There aren't any instance of Cygnus configured with the name ${cygnus_instance}. Refer to file /usr/cygnus/conf/README.md for further information."
fi
return 1
fi
for instance in $(ls ${CYGNUS_DIR}/conf/cygnus_instance_${cygnus_instance}*.conf)
do
local NAME
NAME=${instance%.conf}
NAME=${NAME#*cygnus_instance_}
. ${instance}
CYGNUS_PID_FILE="/var/run/cygnus/cygnus_${NAME}.pid"
printf "%s" "Starting Cygnus ${NAME}... "
status -p ${CYGNUS_PID_FILE} ${FLUME_EXECUTABLE} &> /dev/null
if [[ ${?} -eq 0 ]]; then
printf "%s\n" " Already running, skipping $(success)"
continue
fi
CYGNUS_COMMAND="${FLUME_EXECUTABLE} agent -p ${ADMIN_PORT} --conf ${CONFIG_FOLDER} -f ${CONFIG_FILE} -n ${AGENT_NAME} -Dflume.log.file=${LOGFILE_NAME} &>> /var/log/cygnus/${LOGFILE_NAME} & echo \$! > ${CYGNUS_PID_FILE}"
su ${CYGNUS_USER} -c "${CYGNUS_COMMAND}"
sleep 2 # wait some time to know if flume is still alive
PID=$(cat ${CYGNUS_PID_FILE})
FLUME_PID=$(ps -ef | grep -v "grep" | grep "${PID:-not_found}")
if [[ -z ${FLUME_PID} ]]; then
printf "%s\n" "$(failure)"
result=$((${result}+1))
rm ${CYGNUS_PID_FILE}
else
chown ${CYGNUS_USER}:${CYGNUS_USER} ${CYGNUS_PID_FILE}
printf "%s\n" "$(success)"
fi
done
return ${result}
}
cygnus_stop()
{
local result=0
local cygnus_instance=${1}
if [[ $(ls -l /var/run/cygnus/cygnus_${cygnus_instance}*.pid 2> /dev/null | wc -l) -eq 0 ]]; then
printf "%s\n" "There aren't any instance of Cygnus ${cygnus_instance} running $(success)"
return 0
fi
for run_instance in $(ls /var/run/cygnus/cygnus_${cygnus_instance}*.pid)
do
local NAME
NAME=${run_instance%.pid}
NAME=${NAME#*cygnus_}
printf "%-50s" "Stopping Cygnus ${NAME}..."
PID=$(cat ${run_instance})
kill -HUP ${PID} &> /dev/null
sleep 2
FLUME_PID=$(ps -ef | grep -v "grep" | grep "${PID:-not_found}")
if [[ -z ${FLUME_PID} ]]; then
rm -f ${run_instance}
printf "%s\n" "$(success)"
else
printf "%s\n" "$(failure)"
result=$((${result}+1))
rm -f ${run_instance}
fi
done
return ${result}
}
cygnus_status()
{
local result=0
local cygnus_instance=${1}
if [[ $(ls -l /var/run/cygnus/cygnus_${cygnus_instance}*.pid 2> /dev/null | wc -l) -eq 0 ]]; then
printf "%s\n" "There aren't any instance of Cygnus ${cygnus_instance} running"
exit 1
fi
for run_instance in $(ls /var/run/cygnus/cygnus_${cygnus_instance}*.pid)
do
local NAME
NAME=${run_instance%.pid}
NAME=${NAME#*cygnus_}
printf "%s\n" "Cygnus ${NAME} status..."
status -p ${run_instance} ${FLUME_EXECUTABLE}
result=$((${result}+${?}))
done
return ${result}
}
case ${PARAM} in
'start')
cygnus_start ${CYGNUS_INSTANCE}
;;
'stop')
cygnus_stop ${CYGNUS_INSTANCE}
;;
'restart')
cygnus_stop ${CYGNUS_INSTANCE}
cygnus_start ${CYGNUS_INSTANCE}
;;
'status')
cygnus_status ${CYGNUS_INSTANCE}
;;
esac
my configuration is the following:
file cygnus_instance_mongo.conf:
# Who to run cygnus as. Note that you may need to use root if you want
# to run cygnus in a privileged port (<1024)
CYGNUS_USER=cygnus
# Where is the config folder
CONFIG_FOLDER=/usr/cygnus/conf
# Which is the config file
CONFIG_FILE=/usr/cygnus/conf/agent_mongo.conf
# Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters
# naming conventions, e.g. it appears in .sources.http-source.channels=...
AGENT_NAME=cygnusagent
# Name of the logfile located at /var/log/cygnus. It is important to put the extension '.log' in order for the log rotation to work properly
LOGFILE_NAME=/var/log/cygnus/cygnus.log
# Administration port. Must be unique per instance
ADMIN_PORT=8081
# Polling interval (seconds) for the configuration reloading
POLLING_INTERVAL=30
file agent_mongo.conf:
cygnusagent.sources = http-source
cygnusagent.sinks = mongo-sink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupingInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
cygnusagent.sinks.mongo-sink.db_prefix = kura_
# prefix for the MongoDB collections
cygnusagent.sinks.mongo-sink.collection_prefix = kura_
# true if collection names are based on a hash, false for human readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
#=============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
Any idea what I have missed?
UPDATE after frb's answer
I changed the log file path and I got a new error:
[centos@cygnus-mongo ~]$ sudo journalctl -xn
-- Logs begin at Thu 2016-03-03 08:21:08 UTC, end at Thu 2016-03-03 08:22:07 UTC. --
Mar 03 08:21:49 cygnus-mongo.novalocal su[1211]: pam_unix(su:session): session opened for user cygnus by (uid=0)
Mar 03 08:21:49 cygnus-mongo.novalocal cygnus[1206]: Starting Cygnus mongo... bash: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Mar 03 08:21:49 cygnus-mongo.novalocal su[1211]: pam_unix(su:session): session closed for user cygnus
Mar 03 08:21:51 cygnus-mongo.novalocal cygnus[1206]: cat: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Mar 03 08:21:51 cygnus-mongo.novalocal cygnus[1206]: [FAILED]
Mar 03 08:21:51 cygnus-mongo.novalocal cygnus[1206]: rm: cannot remove ‘/var/run/cygnus/cygnus_mongo.pid’: No such file or directory
Mar 03 08:21:51 cygnus-mongo.novalocal systemd[1]: cygnus.service: control process exited, code=exited status=1
Mar 03 08:21:51 cygnus-mongo.novalocal systemd[1]: Failed to start SYSV: cygnus.
-- Subject: Unit cygnus.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit cygnus.service has failed.
--
-- The result is failed.
Mar 03 08:21:51 cygnus-mongo.novalocal systemd[1]: Unit cygnus.service entered failed state.
Mar 03 08:22:07 cygnus-mongo.novalocal sudo[1277]: centos : TTY=pts/0 ; PWD=/home/centos ; USER=root ; COMMAND=/bin/journalctl -xn
Everything in the configuration is OK except for this line in cygnus_instance_mongo.conf:
LOGFILE_NAME=/var/log/cygnus/cygnus.log
It must be:
LOGFILE_NAME=cygnus.log
I.e. the name of the log file within /var/log/cygnus.
The error was reported in this line of the service logs:
bash: /var/log/cygnus//var/log/cygnus/cygnus.log: No such file or directory
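The remaining error, bash: /var/run/cygnus/cygnus_mongo.pid: No such file or directory, suggests the /var/run/cygnus directory itself is missing. On a systemd distribution /var/run is a tmpfs recreated on every boot, so one way to have the directory created automatically is a tmpfiles.d entry (a sketch, assuming the cygnus user and group should own it):
# /etc/tmpfiles.d/cygnus.conf
# type  path         mode  user    group   age
d       /run/cygnus  0755  cygnus  cygnus  -
systemd-tmpfiles --create cygnus.conf applies it immediately; it also runs automatically at every boot.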
