Failing to start a process as a service (code=exited, status=203/EXEC) - linux

I need to start the Cassandra process as a service. To do that, I placed the following script inside the /etc/init.d directory, as per the documentation.
#!/bin/bash
# chkconfig: 2345 99 01
# description: Cassandra
. /etc/rc.d/init.d/functions
CASSANDRA_HOME=/home/cassandra/binaries/cassandra
CASSANDRA_BIN=$CASSANDRA_HOME/bin/cassandra
CASSANDRA_NODETOOL=$CASSANDRA_HOME/bin/nodetool
CASSANDRA_LOG=$CASSANDRA_HOME/logs/cassandra.log
CASSANDRA_PID=$CASSANDRA_HOME/cassandra.pid
CASSANDRA_LOCK=$CASSANDRA_HOME/cassandra.lock
PROGRAM="cassandra"
if [ ! -f $CASSANDRA_BIN ]; then
    echo "File not found: $CASSANDRA_BIN"
    exit 1
fi
RETVAL=0
start() {
    if [ -f $CASSANDRA_PID ] && checkpid `cat $CASSANDRA_PID`; then
        echo "Cassandra is already running."
        exit 0
    fi
    echo -n $"Starting $PROGRAM: "
    $CASSANDRA_BIN -p $CASSANDRA_PID -R >> $CASSANDRA_LOG 2>&1
    sleep 60
    RETVAL=$?
    if [ $RETVAL -eq 0 ]; then
        touch $CASSANDRA_LOCK
        echo_success
    else
        echo_failure
    fi
    echo
    return $RETVAL
}
stop() {
    if [ ! -f $CASSANDRA_PID ]; then
        echo "Cassandra is already stopped."
        exit 0
    fi
    echo -n $"Stopping $PROGRAM: "
    # $CASSANDRA_NODETOOL -h 127.0.0.1 decommission
    if kill `cat $CASSANDRA_PID`; then
        RETVAL=0
        rm -f $CASSANDRA_LOCK
        echo_success
    else
        RETVAL=1
        echo_failure
    fi
    echo
    [ $RETVAL = 0 ]
}
status_fn() {
    if [ -f $CASSANDRA_PID ] && checkpid `cat $CASSANDRA_PID`; then
        echo "Cassandra is running."
        exit 0
    else
        echo "Cassandra is stopped."
        exit 1
    fi
}
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    status)
        status_fn
        ;;
    restart)
        stop
        start
        ;;
    *)
        echo $"Usage: $PROGRAM {start|stop|restart|status}"
        RETVAL=3
esac
exit $RETVAL
However, when I execute service cassandra start, it fails with the following error.
[root@casstestnode1 init.d]# service cassandra start
Starting cassandra (via systemctl): Job for cassandra.service failed because the control process exited with error code. See "systemctl status cassandra.service" and "journalctl -xe" for details.
[FAILED]
[root@casstestnode1 init.d]#
and
[root@casstestnode1 init.d]# systemctl status cassandra.service
● cassandra.service - SYSV: Cassandra
Loaded: loaded (/etc/rc.d/init.d/cassandra)
Active: failed (Result: exit-code) since Sun 2019-11-10 22:23:07 IST; 41s ago
Docs: man:systemd-sysv-generator(8)
Process: 2624 ExecStart=/etc/rc.d/init.d/cassandra start (code=exited, status=203/EXEC)
Nov 10 22:23:07 casstestnode1 systemd[1]: Starting SYSV: Cassandra...
Nov 10 22:23:07 casstestnode1 systemd[1]: cassandra.service: control process exited, code=exited status=203
Nov 10 22:23:07 casstestnode1 systemd[1]: Failed to start SYSV: Cassandra.
Nov 10 22:23:07 casstestnode1 systemd[1]: Unit cassandra.service entered failed state.
Nov 10 22:23:07 casstestnode1 systemd[1]: cassandra.service failed.
[root@casstestnode1 init.d]#
I rebooted the VM and retried, but the error is still the same. Any Linux experts, please help.
Additionally, here is the output of journalctl -xe:
[root@casstestnode1 init.d]# journalctl -xe
-- Unit session-3153.scope has begun starting up.
Nov 11 23:55:01 casstestnode1 systemd[1]: Started Session 3154 of user cassandra.
-- Subject: Unit session-3154.scope has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-3154.scope has finished starting up.
--
-- The start-up result is done.
Nov 11 23:55:01 casstestnode1 systemd[1]: Starting Session 3154 of user cassandra.
-- Subject: Unit session-3154.scope has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit session-3154.scope has begun starting up.
Nov 11 23:55:01 casstestnode1 CROND[24584]: (cassandra) CMD (. /home/cassandra/test/cr.sh)
Nov 11 23:55:01 casstestnode1 CROND[24585]: (cassandra) CMD (date >> /home/cassandra/test/ll.txt)
Nov 11 23:55:01 casstestnode1 postfix/pickup[23967]: A5210F1C27: uid=995 from=<cassandra>
Nov 11 23:55:01 casstestnode1 postfix/cleanup[24090]: A5210F1C27: message-id=<20191111182501.A5210F1C27@casstestnode1.localdomain>
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A5210F1C27: from=<cassandra@casstestnode1.localdomain>, size=837, nrcpt=1 (queue activ
Nov 11 23:55:01 casstestnode1 postfix/pickup[23967]: A8301F1C29: uid=995 from=<cassandra>
Nov 11 23:55:01 casstestnode1 postfix/cleanup[24315]: A8301F1C29: message-id=<20191111182501.A8301F1C29@casstestnode1.localdomain>
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A8301F1C29: from=<cassandra@casstestnode1.localdomain>, size=845, nrcpt=1 (queue activ
Nov 11 23:55:01 casstestnode1 postfix/local[24021]: A5210F1C27: to=<cassandra@casstestnode1.localdomain>, orig_to=<cassandra>, relay=loc
Nov 11 23:55:01 casstestnode1 postfix/cleanup[24090]: A987AF1C2A: message-id=<20191111182501.A987AF1C2A@casstestnode1.localdomain>
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A987AF1C2A: from=<>, size=2878, nrcpt=1 (queue active)
Nov 11 23:55:01 casstestnode1 postfix/bounce[24164]: A5210F1C27: sender non-delivery notification: A987AF1C2A
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A5210F1C27: removed
Nov 11 23:55:01 casstestnode1 postfix/local[23698]: A8301F1C29: to=<cassandra@casstestnode1.localdomain>, orig_to=<cassandra>, relay=loc
Nov 11 23:55:01 casstestnode1 postfix/cleanup[24315]: AA5AFF1C27: message-id=<20191111182501.AA5AFF1C27@casstestnode1.localdomain>
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: AA5AFF1C27: from=<>, size=2886, nrcpt=1 (queue active)
Nov 11 23:55:01 casstestnode1 postfix/bounce[24164]: A8301F1C29: sender non-delivery notification: AA5AFF1C27
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A8301F1C29: removed
Nov 11 23:55:01 casstestnode1 postfix/local[24021]: A987AF1C2A: to=<cassandra@casstestnode1.localdomain>, relay=local, delay=0.01, delay
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: A987AF1C2A: removed
Nov 11 23:55:01 casstestnode1 postfix/local[23698]: AA5AFF1C27: to=<cassandra@casstestnode1.localdomain>, relay=local, delay=0.01, delay
Nov 11 23:55:01 casstestnode1 postfix/qmgr[2035]: AA5AFF1C27: removed
lines 1923-1959/1959 (END)
[root@casstestnode1 init.d]#
[root@casstestnode1 init.d]#
When I manually run the script, it works:
[root@casstestnode1 init.d]#
[root@casstestnode1 init.d]# ./cassandra start
Starting cassandra: [ OK ]
[root@casstestnode1 init.d]# ./cassandra status
Cassandra is running.
[root@casstestnode1 init.d]#
[root@casstestnode1 init.d]#
[root@casstestnode1 init.d]# service cassandra status
Cassandra is running.
[root@casstestnode1 init.d]# ./cassandra stop
Stopping cassandra: [ OK ]
[root@casstestnode1 init.d]#
[root@casstestnode1 init.d]# ./cassandra status
Cassandra is stopped.
[root@casstestnode1 init.d]# service cassandra status
Cassandra is stopped.
[root@casstestnode1 init.d]#

As per the systemd.exec man page:
203 EXIT_EXEC - The actual process execution failed (specifically, the execve(2) system call). Most likely this is caused by a missing or non-accessible executable file.
So the root cause is one of the following:
the exact path is different, i.e. the script is not really at /etc/rc.d/init.d/cassandra - you can figure that out using find /etc -name cassandra
the file is not executable - less likely, since you can run it manually.
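A minimal sketch of how one might verify both hypotheses (the path is the one assumed by the question):
# confirm where the init script actually lives
find /etc -name cassandra
# confirm the file is a regular, executable script
ls -l /etc/rc.d/init.d/cassandra
file /etc/rc.d/init.d/cassandra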

Firstly, it is possible you are having an acute form of copypasteitis:
init script was obtained from one location ("the internet")
the cassandra files are located in a different path than the init script assumes
your error claims:
Nov 10 22:23:07 casstestnode1 systemd[1]: cassandra.service: control process exited, code=exited status=203
this 203 usually means some executable was not found by the exec*() system call
(i.e. exec was called with a non-existent executable path)
Secondly, you are on a systemd-based distribution, so there is no need to use a System V (old, obsolete) style init script.
Let's see how to get you both on-board and up-to-date:
find yourself somebody else's supposedly working systemd unit file for cassandra (e.g. this one)
open the systemd documentation alongside it: the official Red Hat systemd doc
put the unit file in the right location and edit it
after you've edited it, run systemctl daemon-reload
now try the systemd way of launching cassandra: systemctl start cassandra
So, once you have figured out why the init script throws the error, please switch to the systemd approach above; a rough example unit is sketched below.
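For reference, a minimal sketch of what such a cassandra.service unit could look like, assuming the binary and PID file paths from the question (adjust User, Group and paths to your installation):
[Unit]
Description=Apache Cassandra
After=network.target

[Service]
Type=forking
User=cassandra
Group=cassandra
PIDFile=/home/cassandra/binaries/cassandra/cassandra.pid
ExecStart=/home/cassandra/binaries/cassandra/bin/cassandra -p /home/cassandra/binaries/cassandra/cassandra.pid
Restart=on-failure

[Install]
WantedBy=multi-user.target
Save it as /etc/systemd/system/cassandra.service, then run systemctl daemon-reload followed by systemctl start cassandra.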

I found the answer at this link. The cause was a blank line at the beginning of my cassandra script, before #!/bin/bash. Once I removed it and then executed the two commands below, the issue was fixed.
chkconfig --add cassandra
systemctl daemon-reload
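A quick way to confirm the shebang really is the very first line of the script (the path is the one from the question):
# this must print exactly '#!/bin/bash'; an empty or different first line makes the kernel's exec fail
head -n 1 /etc/rc.d/init.d/cassandra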

Related

cron.hourly script is only run manually

The file I put in /etc/cron.hourly does not seem to work.
It's called cron_hourly_homepage, so it does not seem to be a filename issue. Permissions seem alright as well: -rwxr-xr-x
run-parts --test /etc/cron.hourly sees this file, and run-parts --report /etc/cron.hourly can run it as well.
I would like to get it fixed, so no crontab recommendations are necessary :D
● cron.service - Regular background program processing daemon
Loaded: loaded (/lib/systemd/system/cron.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2021-12-27 20:20:40 GMT; 1 day 23h ago
Docs: man:cron(8)
Main PID: 401 (cron)
Tasks: 1 (limit: 4915)
CPU: 9.213s
CGroup: /system.slice/cron.service
└─401 /usr/sbin/cron -f
Dec 29 17:19:01 raspberrypi CRON[9255]: (CRON) info (No MTA installed, discarding output)
Dec 29 17:19:01 raspberrypi CRON[9255]: pam_unix(cron:session): session closed for user root
Dec 29 18:17:01 raspberrypi CRON[9427]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
Dec 29 18:17:01 raspberrypi CRON[9428]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
It is common to forget that cron runs scripts without a login shell and without your usual environment context (only a minimal PATH and none of your interactive environment).
Please see this answer.
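A minimal sketch of the kind of defensive header that usually fixes this; the program name and log path below are hypothetical, the point is the explicit shebang, the explicit PATH and the absolute paths:
#!/bin/sh
# cron provides only a minimal environment, so set PATH explicitly
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# call programs by absolute path and capture errors somewhere visible
/usr/local/bin/update-homepage >> /var/tmp/update-homepage.log 2>&1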

How can I access the return code of a service in ExecStop or ExecStopPost?

Is it possible to capture the exit code from a process run by, e.g., ExecStart in a systemd service unit? I want to check if the process exited okay or if an error occurred.
After reading the systemd.service and systemd.exec docs, I thought $SERVICE_RESULT, $EXIT_CODE and/or $EXIT_STATUS might help, but to no avail.
Given this test unit:
[Unit]
Description=Testing Run Order
[Service]
Type=simple
Environment=HELLO=WORLD
ExecStartPre=/bin/echo [StartPre] $MAINPID $SERVICE_RESULT $EXIT_CODE $EXIT_STATUS
ExecStartPost=/bin/echo [StartPost] $MAINPID $SERVICE_RESULT $EXIT_CODE $EXIT_STATUS
ExecStart=/bin/sleep 2
ExecStop=/bin/env
ExecStop=/bin/echo [Stop] $MAINPID $SERVICE_RESULT $EXIT_CODE $EXIT_STATUS
ExecStopPost=/bin/echo [StopPost] $MAINPID $SERVICE_RESULT $EXIT_CODE $EXIT_STATUS
I get the following output:
Aug 30 08:37:26 localhost.localdomain systemd[1]: Starting Testing Run Order...
Aug 30 08:37:26 localhost.localdomain echo[3458]: [StartPre]
Aug 30 08:37:26 localhost.localdomain systemd[1]: Started Testing service metrics.
Aug 30 08:37:26 localhost.localdomain echo[3460]: [StartPost] 3459
Aug 30 08:37:27 localhost.localdomain env[3465]: LANG=en_US.UTF-8
Aug 30 08:37:27 localhost.localdomain env[3465]: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
Aug 30 08:37:27 localhost.localdomain env[3465]: HELLO=WORLD
Aug 30 08:37:27 localhost.localdomain echo[3467]: [Stop]
Aug 30 08:37:27 localhost.localdomain echo[3469]: [StopPost]
So nothing. I'm currently stuck with systemd 219 (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) on CentOS 7.
$EXIT_CODE and $EXIT_STATUS were only added in systemd v232. If you’re stuck with an older version, you’ll have to work around it, perhaps with something like this (untested):
ExecStart=/bin/sh -c '/* normal command goes here */; echo $? > /tmp/my-service.exit'
ExecStopPost=/bin/sh -c 'EXIT_STATUS=$(cat /tmp/my-service.exit); /* rest of command goes here */'
A side note: when you write something like
ExecStop=/bin/echo [Stop] $MAINPID $SERVICE_RESULT $EXIT_CODE $EXIT_STATUS
in your test unit, those variables are substituted by systemd. To check the environment of the actual process, use a shell (as above) or a command like env (as you did for one of the ExecStop lines).
You can actually read the ExecMainStatus property with the command systemctl show your_service --property=ExecMainStatus
For example, if you want to trigger a command depending on the result (graceful stop or hard crash) you could use something like this:
ExecStopPost=/bin/bash -c "if [ $(systemctl show your_service --property=ExecMainStatus) == 'ExecMainStatus=143' ]; then exit; else /bin/bash -c '/usr/local/bin/your_custom_program.sh'; fi"
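If the inline one-liner gets hard to read, the same idea can be moved into a small helper script; this is only a sketch, with the unit name, the 143 status (what the answer above treats as a graceful stop) and the handler path all taken from that example:
#!/bin/bash
# hypothetical /usr/local/bin/on-service-stop.sh, to be called from ExecStopPost=
status=$(systemctl show your_service --property=ExecMainStatus)  # prints e.g. 'ExecMainStatus=143'
if [ "$status" = "ExecMainStatus=143" ]; then
    # 143 is what the answer above uses for a graceful stop (exit after SIGTERM), so nothing to do
    exit 0
fi
# any other status is treated as a crash, so run the custom handler
/usr/local/bin/your_custom_program.sh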

How to use systemctl to restart a service

I created a system service which can be started/stopped/restarted by running the service command. But when I try to do it with systemctl I always get the error below. I don't know what is wrong with my script file. Why can it be managed by service but not by systemctl?
# systemctl restart cooltoo_storage
Job for cooltoo_storage.service failed because the control process exited with error code. See "systemctl status cooltoo_storage.service" and "journalctl -xe" for details.
Below is the output of "systemctl status cooltoo_storage.service"
# systemctl status cooltoo_storage.service
● cooltoo_storage.service - LSB: cooltoo storage provider
Loaded: loaded (/etc/rc.d/init.d/cooltoo_storage)
Active: failed (Result: exit-code) since Tue 2016-05-03 17:17:12 CST; 1min 29s ago
Docs: man:systemd-sysv-generator(8)
Process: 32268 ExecStart=/etc/rc.d/init.d/cooltoo_storage start (code=exited, status=203/EXEC)
May 03 17:17:12 Cool-Too systemd[1]: Starting LSB: cooltoo storage provider...
May 03 17:17:12 Cool-Too systemd[1]: cooltoo_storage.service: control process exited, code=exited status=203
May 03 17:17:12 Cool-Too systemd[1]: Failed to start LSB: cooltoo storage provider.
May 03 17:17:12 Cool-Too systemd[1]: Unit cooltoo_storage.service entered failed state.
May 03 17:17:12 Cool-Too systemd[1]: cooltoo_storage.service failed.
Below is my script file /etc/init.d/cooltoo_storage,
### BEGIN INIT INFO
# Provides: cooltoo storage provider
# Required-Start: $local_fs $remote_fs $network $time $named
# Should-Start: $time sendmail
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: cooltoo storage provider
# Description: cooltoo storage provider
# chkconfig: 2345 90 90
### END INIT INFO
start() {
    nginx -c /data/nginx/nginx.config
}
stop() {
    nginx -c /data/nginx/nginx.config -s stop
}
restart() {
    nginx -c /data/nginx/nginx.config -s reload
}
status() {
    echo ''
}
action="$#"
if [[ "$MODE" == "auto" && -n "$init_script" ]] || [[ "$MODE" == "service" ]]; then
    action="$1"
    shift
fi
case "$action" in
    start)
        start ; exit $?;;
    stop)
        stop ; exit $?;;
    restart)
        restart ; exit $?;;
    status)
        status ; exit $?;;
    *)
        echo "Usage: $0 {start|stop|restart|force-reload|status|run}"; exit 1;
esac

Cronjob script trouble launching application

I started using cronjobs a while ago, but since yesterday I've been running into a problem I can't figure out.
@reboot me /etc/application/start-script.sh
I have Raspbian Jessie (minimal) installed on a Raspberry Pi Zero. One of the users has an @reboot cronjob (shown above). When I check "sudo /etc/init.d/cron status", I can see the cronjob is picked up after a reboot and executed. The only thing is that any output is dropped (the "No MTA installed" message) - should I care about that?
#!/bin/bash
# My start script
logfile=/home/me/logfile.log
echo "Starting program..." >> $logfile
application
echo "Program started!" >> $logfile
As you can see, it should create a log file, and it does so after a reboot when the script is called from the cronjob. The script works perfectly fine when you manually execute it: it writes the output to the logfile AND starts the program.
The problem is that the program is not launched when the .sh script is called as a cronjob.
Why is only the application not started when the script is executed?
"sudo /etc/init.d/cron status" output
Mar 17 22:14:45 pizza-pi systemd[1]: Starting Regular background program processing daemon...
Mar 17 22:14:45 pizza-pi systemd[1]: Started Regular background program processing daemon.
Mar 17 22:14:45 pizza-pi cron[292]: (CRON) INFO (pidfile fd = 3)
Mar 17 22:14:45 pizza-pi cron[292]: (CRON) INFO (Running @reboot jobs)
Mar 17 22:14:45 pizza-pi CRON[296]: pam_unix(cron:session): session opened for user me by (uid=0)
Mar 17 22:14:45 pizza-pi CRON[318]: (me) CMD (etc/application/start-script.sh)
Mar 17 22:14:45 pizza-pi CRON[296]: (CRON) info (No MTA installed, discarding output)
Mar 17 22:14:45 pizza-pi pam_unix(cron:session): session closed for user me
Edit the /etc/rc.local file and add the line /etc/init.d/cron start to it; be sure to put it before the exit 0 line.
Follow this link https://rahulmahale.wordpress.com/2014/09/03/solved-running-cron-job-at-reboot-on-raspberry-pi-in-debianwheezy-and-raspbian/
Hope this answer is useful for you. A sketch of the resulting file is below.
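A sketch of what /etc/rc.local could then look like, assuming the rest of the file is the stock Raspbian one:
#!/bin/sh -e
# rc.local - executed at the end of every multi-user runlevel
# re-run the cron init script so that @reboot jobs are picked up late in boot
/etc/init.d/cron start
exit 0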

cygnus service not starting as a service

I have installed Cygnus using RPMs on my CentOS 7.0, but I can't start it as a service:
[centos@cygnus-mongo ~]$ sudo service cygnus start
Starting cygnus (via systemctl): Job for cygnus.service failed. See 'systemctl status cygnus.service' and 'journalctl -xn' for details.
[FAILED]
Here is the error log:
[centos@cygnus-mongo ~]$ sudo systemctl status cygnus.service
cygnus.service - SYSV: cygnus
Loaded: loaded (/etc/rc.d/init.d/cygnus)
Active: failed (Result: exit-code) since Tue 2016-02-23 07:09:48 UTC; 18s ago
Process: 1184 ExecStart=/etc/rc.d/init.d/cygnus start (code=exited, status=1/FAILURE)
Feb 23 07:09:46 cygnus-mongo.novalocal systemd[1]: Starting SYSV: cygnus...
Feb 23 07:09:46 cygnus-mongo.novalocal su[1189]: (to cygnus) root on none
Feb 23 07:09:46 cygnus-mongo.novalocal cygnus[1184]: Starting Cygnus mongo... bash: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Feb 23 07:09:46 cygnus-mongo.novalocal cygnus[1184]: bash: /var/log/cygnus//var/log/cygnus/cygnus.log: No such file or directory
Feb 23 07:09:48 cygnus-mongo.novalocal cygnus[1184]: cat: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Feb 23 07:09:48 cygnus-mongo.novalocal cygnus[1184]: [FAILED]
Feb 23 07:09:48 cygnus-mongo.novalocal cygnus[1184]: rm: cannot remove ‘/var/run/cygnus/cygnus_mongo.pid’: No such file or directory
Feb 23 07:09:48 cygnus-mongo.novalocal systemd[1]: cygnus.service: control process exited, code=exited status=1
Feb 23 07:09:48 cygnus-mongo.novalocal systemd[1]: Failed to start SYSV: cygnus.
Feb 23 07:09:48 cygnus-mongo.novalocal systemd[1]: Unit cygnus.service entered failed state.
[centos@cygnus-mongo ~]$ sudo journalctl -xn
-- Logs begin at Tue 2016-02-23 07:08:59 UTC, end at Tue 2016-02-23 07:10:57 UTC. --
Feb 23 07:10:33 cygnus-mongo.novalocal systemd[1]: Dependency failed for /mnt.
-- Subject: Unit mnt.mount has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mnt.mount has failed.
--
-- The result is dependency.
Feb 23 07:10:33 cygnus-mongo.novalocal systemd[1]: Dependency failed for File System Check on /dev/vdb.
-- Subject: Unit systemd-fsck@dev-vdb.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-fsck@dev-vdb.service has failed.
--
-- The result is dependency.
Feb 23 07:10:33 cygnus-mongo.novalocal systemd[1]: Startup finished in 1.659s (kernel) + 2.841s (initrd) + 1min 31.190s (userspace) = 1min 35.691s.
-- Subject: System start-up is now complete
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- All system services necessary queued for starting at boot have been
-- successfully started. Note that this does not mean that the machine is
-- now idle as services might still be busy with completing start-up.
--
-- Kernel start-up required 1659184 microseconds.
--
-- Initial RAM disk start-up required 2841741 microseconds.
--
-- Userspace start-up required 91190356 microseconds.
Feb 23 07:10:47 cygnus-mongo.novalocal dhclient[1068]: DHCPREQUEST on eth0 to 192.168.111.71 port 67 (xid=0x6acae4e0)
Feb 23 07:10:48 cygnus-mongo.novalocal dhclient[1068]: DHCPACK from 192.168.111.71 (xid=0x6acae4e0)
Feb 23 07:10:50 cygnus-mongo.novalocal dhclient[1068]: bound to 192.168.111.128 -- renewal in 44 seconds.
Feb 23 07:10:57 cygnus-mongo.novalocal sudo[1255]: centos : TTY=pts/0 ; PWD=/home/centos ; USER=root ; COMMAND=/bin/journalctl -xn
Here is the service file that I did not change:
[centos@cygnus-mongo ~]$ cat /etc/rc.d/init.d/cygnus
#!/bin/bash
# Copyright 2014 Telefonica Investigación y Desarrollo, S.A.U
#
# This file is part of fiware-cygnus (FI-WARE project).
#
# fiware-cygnus is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# fiware-cygnus is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
#
# You should have received a copy of the GNU Affero General Public License along with fiware-cygnus. If not, see
# http://www.gnu.org/licenses/.
#
# For those usages not covered by the GNU Affero General Public License please contact with iot_support at tid dot es
#
# cygnus Start/Stop cygnus
#
# chkconfig: 2345 99 60
# description: cygnus
# Load some fancy functions for init.d
. /etc/rc.d/init.d/functions
PARAM=$1
CYGNUS_INSTANCE=${2}
COMPONENT_NAME=cygnus
PREFIX=/usr
CYGNUS_DIR=${PREFIX}/cygnus
FLUME_EXECUTABLE=${CYGNUS_DIR}/bin/cygnus-flume-ng
CYGNUS_USER=cygnus
cygnus_start()
{
    local result=0
    local cygnus_instance=${1}
    if [[ ! -x ${FLUME_EXECUTABLE} ]]; then
        printf "%s\n" "Fail - ${FLUME_EXECUTABLE} not exists or is not executable."
        exit 1
    fi
    if [[ $(ls -l ${CYGNUS_DIR}/conf/cygnus_instance_${cygnus_instance}*.conf 2> /dev/null | wc -l) -eq 0 ]]; then
        if [[ ${cygnus_instance} == "" ]]; then
            printf "%s\n" "There aren't any instance of Cygnus configured. Refer to file /usr/cygnus/conf/README.md for further information."
        else
            printf "%s\n" "There aren't any instance of Cygnus configured with the name ${cygnus_instance}. Refer to file /usr/cygnus/conf/README.md for further information."
        fi
        return 1
    fi
    for instance in $(ls ${CYGNUS_DIR}/conf/cygnus_instance_${cygnus_instance}*.conf)
    do
        local NAME
        NAME=${instance%.conf}
        NAME=${NAME#*cygnus_instance_}
        . ${instance}
        CYGNUS_PID_FILE="/var/run/cygnus/cygnus_${NAME}.pid"
        printf "%s" "Starting Cygnus ${NAME}... "
        status -p ${CYGNUS_PID_FILE} ${FLUME_EXECUTABLE} &> /dev/null
        if [[ ${?} -eq 0 ]]; then
            printf "%s\n" " Already running, skipping $(success)"
            continue
        fi
        CYGNUS_COMMAND="${FLUME_EXECUTABLE} agent -p ${ADMIN_PORT} --conf ${CONFIG_FOLDER} -f ${CONFIG_FILE} -n ${AGENT_NAME} -Dflume.log.file=${LOGFILE_NAME} &>> /var/log/cygnus/${LOGFILE_NAME} & echo \$! > ${CYGNUS_PID_FILE}"
        su ${CYGNUS_USER} -c "${CYGNUS_COMMAND}"
        sleep 2 # wait some time to know if flume is still alive
        PID=$(cat ${CYGNUS_PID_FILE})
        FLUME_PID=$(ps -ef | grep -v "grep" | grep "${PID:-not_found}")
        if [[ -z ${FLUME_PID} ]]; then
            printf "%s\n" "$(failure)"
            result=$((${result}+1))
            rm ${CYGNUS_PID_FILE}
        else
            chown ${CYGNUS_USER}:${CYGNUS_USER} ${CYGNUS_PID_FILE}
            printf "%s\n" "$(success)"
        fi
    done
    return ${result}
}
cygnus_stop()
{
    local result=0
    local cygnus_instance=${1}
    if [[ $(ls -l /var/run/cygnus/cygnus_${cygnus_instance}*.pid 2> /dev/null | wc -l) -eq 0 ]]; then
        printf "%s\n" "There aren't any instance of Cygnus ${cygnus_instance} running $(success)"
        return 0
    fi
    for run_instance in $(ls /var/run/cygnus/cygnus_${cygnus_instance}*.pid)
    do
        local NAME
        NAME=${run_instance%.pid}
        NAME=${NAME#*cygnus_}
        printf "%-50s" "Stopping Cygnus ${NAME}..."
        PID=$(cat ${run_instance})
        kill -HUP ${PID} &> /dev/null
        sleep 2
        FLUME_PID=$(ps -ef | grep -v "grep" | grep "${PID:-not_found}")
        if [[ -z ${FLUME_PID} ]]; then
            rm -f ${run_instance}
            printf "%s\n" "$(success)"
        else
            printf "%s\n" "$(failure)"
            result=$((${result}+1))
            rm -f ${run_instance}
        fi
    done
    return ${result}
}
cygnus_status()
{
    local result=0
    local cygnus_instance=${1}
    if [[ $(ls -l /var/run/cygnus/cygnus_${cygnus_instance}*.pid 2> /dev/null | wc -l) -eq 0 ]]; then
        printf "%s\n" "There aren't any instance of Cygnus ${cygnus_instance} running"
        exit 1
    fi
    for run_instance in $(ls /var/run/cygnus/cygnus_${cygnus_instance}*.pid)
    do
        local NAME
        NAME=${run_instance%.pid}
        NAME=${NAME#*cygnus_}
        printf "%s\n" "Cygnus ${NAME} status..."
        status -p ${run_instance} ${FLUME_EXECUTABLE}
        result=$((${result}+${?}))
    done
    return ${result}
}
case ${PARAM} in
    'start')
        cygnus_start ${CYGNUS_INSTANCE}
        ;;
    'stop')
        cygnus_stop ${CYGNUS_INSTANCE}
        ;;
    'restart')
        cygnus_stop ${CYGNUS_INSTANCE}
        cygnus_start ${CYGNUS_INSTANCE}
        ;;
    'status')
        cygnus_status ${CYGNUS_INSTANCE}
        ;;
esac
My configuration is the following.
File cygnus_instance_mongo.conf:
# Who to run cygnus as. Note that you may need to use root if you want
# to run cygnus in a privileged port (<1024)
CYGNUS_USER=cygnus
# Where is the config folder
CONFIG_FOLDER=/usr/cygnus/conf
# Which is the config file
CONFIG_FILE=/usr/cygnus/conf/agent_mongo.conf
# Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters
# naming conventions, e.g. it appears in .sources.http-source.channels=...
AGENT_NAME=cygnusagent
# Name of the logfile located at /var/log/cygnus. It is important to put the extension '.log' in order to the log rotation works properly
LOGFILE_NAME=/var/log/cygnus/cygnus.log
# Administration port. Must be unique per instance
ADMIN_PORT=8081
# Polling interval (seconds) for the configuration reloading
POLLING_INTERVAL=30
File agent_mongo.conf:
cygnusagent.sources = http-source
cygnusagent.sinks = mongo-sink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupinInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
cygnusagent.sinks.mongo-sink.db_prefix = kura_
# prefix for the MongoDB collections
cygnusagent.sinks.mongo-sink.collection_prefix = kura_
# true if collection names are based on a hash, false for human readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
#=============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
Any idea what I have missed?
UPDATE after frb's answer
I changed the log file path and I got a new error:
[centos@cygnus-mongo ~]$ sudo journalctl -xn
-- Logs begin at Thu 2016-03-03 08:21:08 UTC, end at Thu 2016-03-03 08:22:07 UTC. --
Mar 03 08:21:49 cygnus-mongo.novalocal su[1211]: pam_unix(su:session): session opened for user cygnus by (uid=0)
Mar 03 08:21:49 cygnus-mongo.novalocal cygnus[1206]: Starting Cygnus mongo... bash: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Mar 03 08:21:49 cygnus-mongo.novalocal su[1211]: pam_unix(su:session): session closed for user cygnus
Mar 03 08:21:51 cygnus-mongo.novalocal cygnus[1206]: cat: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Mar 03 08:21:51 cygnus-mongo.novalocal cygnus[1206]: [FAILED]
Mar 03 08:21:51 cygnus-mongo.novalocal cygnus[1206]: rm: cannot remove ‘/var/run/cygnus/cygnus_mongo.pid’: No such file or directory
Mar 03 08:21:51 cygnus-mongo.novalocal systemd[1]: cygnus.service: control process exited, code=exited status=1
Mar 03 08:21:51 cygnus-mongo.novalocal systemd[1]: Failed to start SYSV: cygnus.
-- Subject: Unit cygnus.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit cygnus.service has failed.
--
-- The result is failed.
Mar 03 08:21:51 cygnus-mongo.novalocal systemd[1]: Unit cygnus.service entered failed state.
Mar 03 08:22:07 cygnus-mongo.novalocal sudo[1277]: centos : TTY=pts/0 ; PWD=/home/centos ; USER=root ; COMMAND=/bin/journalctl -xn
Everything in the configuration is OK except for this line in cygnus_instance_mongo.conf:
LOGFILE_NAME=/var/log/cygnus/cygnus.log
It must be:
LOGFILE_NAME=cygnus.log
I.e. it must be just the name of the log file within /var/log/cygnus: the init script itself prepends /var/log/cygnus/ when it redirects output (&>> /var/log/cygnus/${LOGFILE_NAME}), so a full path ends up doubled.
The error was reported in this line of the service logs:
bash: /var/log/cygnus//var/log/cygnus/cygnus.log: No such file or directory
