RHEL 8 systemctl startup script fails

I have an application startup script that works correctly under RHEL 7, using both the "service" and "systemctl" commands. However, when I use the same startup script and service config file on RHEL 8, the "service" command will stop and start the application, but the "systemctl" command will not start it.
The contents of the alm.service file are:
[Unit]
Description=Application Lifecycle Management (ALM)
Documentation=https://software.microfocus.com/en-us/solutions/software-development-lifecycle
After=opt-repository.mount
[Service]
Type=forking
PIDFile=/var/opt/HP/ALM/runtime/MFALM.pid
User=alm
ExecStart=/etc/init.d/alm start
ExecStop=/etc/init.d/alm stop
[Install]
WantedBy=multi-user.target
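For reference, whenever a unit file like this is edited, systemd has to re-read it before the change takes effect, and the unit-scoped journal is usually easier to read than journalctl -xe (these are standard systemd commands, nothing specific to this service):
systemctl daemon-reload      # re-read alm.service after editing it
systemctl start alm
systemctl status alm         # unit state plus the most recent log lines
journalctl -u alm -e         # only this unit's journal entries, jumping to the end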
The portion of the /etc/init.d/alm script that starts the application is:
#!/bin/bash
#
# chkconfig: 35 20 80
# Description: QC Application LifeCycle Management
#
export LANG="en_US.utf8"
source /etc/profile

start () {
    echo -n "Starting HP ALM: "
    #Log as alm if account is not the good one
    testfile=/opt/repository/base_install/.test_$RANDOM_`date|sed 's/ //g'`
    testcmd="touch $testfile"
    rmcmd="rm $testfile"
    if [ `id -u` -eq 0 ]; then
        testcmd="su alm -c \"touch $testfile\""
        rmcmd="su alm -c \"rm $testfile\""
    fi
    echo
    eval "$testcmd" 2>/dev/null
    if [ ! $? -eq 0 ]; then
        echo $'\n'"$0 ERROR: UNABLE to access REPOSITORY (touch $testfile) on read-write mode !"
        echo $'\n'"!!! ABORTING ALM restart !!!"
        echo $'\n'"But this could also be caused by this `uname -n` client needing reboot due to NFS hang."
    else
        eval "$rmcmd"
        cd /var/opt/HP/ALM/wrapper
        id
        ls -l HPALM
        # ./HPALM start
        /var/opt/HP/ALM/wrapper/HPALM start
        echo "HPALM returned Status: " $?
        ls -l /var/opt/HP/ALM/runtime/MFALM.pid
    fi
}
The HPALM script forks the daemon process and creates the /var/opt/HP/ALM/runtime/MFALM.pid file when the process is started.
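For context, Type=forking together with PIDFile= means systemd expects the ExecStart command to return only after the real daemon has detached, and it then reads the daemon's main PID from that file. A generic sketch of that contract (not the actual HPALM internals; the daemon command below is a hypothetical stand-in) looks like:
start_daemon_sketch () {
    /path/to/real-daemon &                         # hypothetical: the process that keeps running
    echo $! > /var/opt/HP/ALM/runtime/MFALM.pid    # the file named by PIDFile= in alm.service
    # when the start script (ExecStart) exits successfully, systemd considers the
    # service started and tracks the PID recorded above as the main process
}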
When I do service alm start the application starts correctly.
However, when I do systemctl start alm I get the following error in journalctl -xe
Oct 03 21:18:29 12ee04f4556c44c alm[1311785]: Starting QC ALM:
Oct 03 21:18:29 12ee04f4556c44c alm[1311785]: user ID uid=(alm) gid=(team_alm_L4_users)
Oct 03 21:18:29 12ee04f4556c44c alm[1311785]: calling ./HPALM start
Oct 03 21:18:29 12ee04f4556c44c alm[1311829]: uid=(alm) gid=(team_alm_L4_users)
Oct 03 21:18:29 12ee04f4556c44c alm[1311830]: -rwxr-xr-x. 1 alm team_alm_L4_users 54983 Oct 3 21:06 HPALM
Oct 03 21:18:29 12ee04f4556c44c alm[1311831]: /etc/init.d/alm: line 36: /var/opt/HP/ALM/wrapper/HPALM: Permission denied
Oct 03 21:18:29 12ee04f4556c44c alm[1311785]: HPALM returned Status: 126
Oct 03 21:18:29 12ee04f4556c44c alm[1311832]: ls: cannot access '/var/opt/HP/ALM/runtime/MFALM.pid': No such file or directory
The RHEL 8 release version is Red Hat Enterprise Linux release 8.6 (Ootpa)
These same files and scripts work correctly on RHEL 7.
Does anyone know what has changed with RHEL 8 that would cause this to fail with systemctl but allow it to work with service?

Related

Cronjob script trouble launching application

I started using cron jobs a while ago, but yesterday I ran into a problem I can't figure out.
@reboot me /etc/application/start-script.sh
I have Raspbian Jessie (minimal) installed on a Raspberry Pi Zero. One of the users has a cron job with the @reboot directive. When I check "sudo /etc/init.d/cron status", I can see the cron job is picked up after a reboot and executed. The only catch is that any output is dropped (the "No MTA installed" message), which I don't really care about.
#!/bin/bash
# My start script
logfile=/home/me/logfile.log
echo "Starting program..." >> $logfile
application
echo "Program started!" >> $logfile
As you can see, it should create a log file, and it does so after a reboot when the script is called as a cron job. The script also works perfectly fine when executed manually: it writes the output to the logfile AND starts the program.
The problem is: the program is not launched when the .sh script is called as a cron job.
Why is only the application not started when the script is executed?
"sudo /etc/init.d/cron status" output
Mar 17 22:14:45 pizza-pi systemd[1]: Starting Regular background program processing daemon...
Mar 17 22:14:45 pizza-pi systemd[1]: Started Regular background program processing daemon.
Mar 17 22:14:45 pizza-pi cron[292]: (CRON) INFO (pidfile fd = 3)
Mar 17 22:14:45 pizza-pi cron[292]: (CRON) INFO (Running @reboot jobs)
Mar 17 22:14:45 pizza-pi CRON[296]: pam_unix(cron:session): session opened for user me by (uid=0)
Mar 17 22:14:45 pizza-pi CRON[318]: (me) CMD (etc/application/start-script.sh)
Mar 17 22:14:45 pizza-pi CRON[296]: (CRON) info (No MTA installed, discarding output)
Mar 17 22:14:45 pizza-pi pam_unix(cron:session): session closed for user me
Edit the /etc/rc.local file and add the line /etc/init.d/cron start to it; be sure it comes before exit 0.
Follow this link: https://rahulmahale.wordpress.com/2014/09/03/solved-running-cron-job-at-reboot-on-raspberry-pi-in-debianwheezy-and-raspbian/
Hope this answer is useful for you.
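For illustration, /etc/rc.local would then end with something like this (a sketch of that suggestion; the comment stands in for whatever the file already contains):
#!/bin/sh -e
# ... existing rc.local contents ...
/etc/init.d/cron start
exit 0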

cygnus service not starting as a service

I have installed cygnus using RPMs on my CentOS 7.0, but I can't start it as a service:
[centos@cygnus-mongo ~]$ sudo service cygnus start
Starting cygnus (via systemctl): Job for cygnus.service failed. See 'systemctl status cygnus.service' and 'journalctl -xn' for details.
[FAILED]
Here is the errors log:
[centos@cygnus-mongo ~]$ sudo systemctl status cygnus.service
cygnus.service - SYSV: cygnus
Loaded: loaded (/etc/rc.d/init.d/cygnus)
Active: failed (Result: exit-code) since Tue 2016-02-23 07:09:48 UTC; 18s ago
Process: 1184 ExecStart=/etc/rc.d/init.d/cygnus start (code=exited, status=1/FAILURE)
Feb 23 07:09:46 cygnus-mongo.novalocal systemd[1]: Starting SYSV: cygnus...
Feb 23 07:09:46 cygnus-mongo.novalocal su[1189]: (to cygnus) root on none
Feb 23 07:09:46 cygnus-mongo.novalocal cygnus[1184]: Starting Cygnus mongo... bash: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Feb 23 07:09:46 cygnus-mongo.novalocal cygnus[1184]: bash: /var/log/cygnus//var/log/cygnus/cygnus.log: No such file or directory
Feb 23 07:09:48 cygnus-mongo.novalocal cygnus[1184]: cat: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Feb 23 07:09:48 cygnus-mongo.novalocal cygnus[1184]: [FAILED]
Feb 23 07:09:48 cygnus-mongo.novalocal cygnus[1184]: rm: cannot remove ‘/var/run/cygnus/cygnus_mongo.pid’: No such file or directory
Feb 23 07:09:48 cygnus-mongo.novalocal systemd[1]: cygnus.service: control process exited, code=exited status=1
Feb 23 07:09:48 cygnus-mongo.novalocal systemd[1]: Failed to start SYSV: cygnus.
Feb 23 07:09:48 cygnus-mongo.novalocal systemd[1]: Unit cygnus.service entered failed state.
[centos@cygnus-mongo ~]$ sudo journalctl -xn
-- Logs begin at Tue 2016-02-23 07:08:59 UTC, end at Tue 2016-02-23 07:10:57 UTC. --
Feb 23 07:10:33 cygnus-mongo.novalocal systemd[1]: Dependency failed for /mnt.
-- Subject: Unit mnt.mount has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit mnt.mount has failed.
--
-- The result is dependency.
Feb 23 07:10:33 cygnus-mongo.novalocal systemd[1]: Dependency failed for File System Check on /dev/vdb.
-- Subject: Unit systemd-fsck@dev-vdb.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-fsck@dev-vdb.service has failed.
--
-- The result is dependency.
Feb 23 07:10:33 cygnus-mongo.novalocal systemd[1]: Startup finished in 1.659s (kernel) + 2.841s (initrd) + 1min 31.190s (userspace) = 1min 35.691s.
-- Subject: System start-up is now complete
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- All system services necessary queued for starting at boot have been
-- successfully started. Note that this does not mean that the machine is
-- now idle as services might still be busy with completing start-up.
--
-- Kernel start-up required 1659184 microseconds.
--
-- Initial RAM disk start-up required 2841741 microseconds.
--
-- Userspace start-up required 91190356 microseconds.
Feb 23 07:10:47 cygnus-mongo.novalocal dhclient[1068]: DHCPREQUEST on eth0 to 192.168.111.71 port 67 (xid=0x6acae4e0)
Feb 23 07:10:48 cygnus-mongo.novalocal dhclient[1068]: DHCPACK from 192.168.111.71 (xid=0x6acae4e0)
Feb 23 07:10:50 cygnus-mongo.novalocal dhclient[1068]: bound to 192.168.111.128 -- renewal in 44 seconds.
Feb 23 07:10:57 cygnus-mongo.novalocal sudo[1255]: centos : TTY=pts/0 ; PWD=/home/centos ; USER=root ; COMMAND=/bin/journalctl -xn
Here is the service file that I did not change:
[centos@cygnus-mongo ~]$ cat /etc/rc.d/init.d/cygnus
#!/bin/bash
# Copyright 2014 Telefonica Investigación y Desarrollo, S.A.U
#
# This file is part of fiware-cygnus (FI-WARE project).
#
# fiware-cygnus is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General
# Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any
# later version.
# fiware-cygnus is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied
# warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more
# details.
#
# You should have received a copy of the GNU Affero General Public License along with fiware-cygnus. If not, see
# http://www.gnu.org/licenses/.
#
# For those usages not covered by the GNU Affero General Public License please contact with iot_support at tid dot es
#
# cygnus Start/Stop cygnus
#
# chkconfig: 2345 99 60
# description: cygnus
# Load some fancy functions for init.d
. /etc/rc.d/init.d/functions
PARAM=$1
CYGNUS_INSTANCE=${2}
COMPONENT_NAME=cygnus
PREFIX=/usr
CYGNUS_DIR=${PREFIX}/cygnus
FLUME_EXECUTABLE=${CYGNUS_DIR}/bin/cygnus-flume-ng
CYGNUS_USER=cygnus
cygnus_start()
{
    local result=0
    local cygnus_instance=${1}

    if [[ ! -x ${FLUME_EXECUTABLE} ]]; then
        printf "%s\n" "Fail - ${FLUME_EXECUTABLE} not exists or is not executable."
        exit 1
    fi

    if [[ $(ls -l ${CYGNUS_DIR}/conf/cygnus_instance_${cygnus_instance}*.conf 2> /dev/null | wc -l) -eq 0 ]]; then
        if [[ ${cygnus_instance} == "" ]]; then
            printf "%s\n" "There aren't any instance of Cygnus configured. Refer to file /usr/cygnus/conf/README.md for further information."
        else
            printf "%s\n" "There aren't any instance of Cygnus configured with the name ${cygnus_instance}. Refer to file /usr/cygnus/conf/README.md for further information."
        fi
        return 1
    fi

    for instance in $(ls ${CYGNUS_DIR}/conf/cygnus_instance_${cygnus_instance}*.conf)
    do
        local NAME
        NAME=${instance%.conf}
        NAME=${NAME#*cygnus_instance_}

        . ${instance}

        CYGNUS_PID_FILE="/var/run/cygnus/cygnus_${NAME}.pid"

        printf "%s" "Starting Cygnus ${NAME}... "

        status -p ${CYGNUS_PID_FILE} ${FLUME_EXECUTABLE} &> /dev/null
        if [[ ${?} -eq 0 ]]; then
            printf "%s\n" " Already running, skipping $(success)"
            continue
        fi

        CYGNUS_COMMAND="${FLUME_EXECUTABLE} agent -p ${ADMIN_PORT} --conf ${CONFIG_FOLDER} -f ${CONFIG_FILE} -n ${AGENT_NAME} -Dflume.log.file=${LOGFILE_NAME} &>> /var/log/cygnus/${LOGFILE_NAME} & echo \$! > ${CYGNUS_PID_FILE}"
        su ${CYGNUS_USER} -c "${CYGNUS_COMMAND}"
        sleep 2 # wait some time to know if flume is still alive
        PID=$(cat ${CYGNUS_PID_FILE})
        FLUME_PID=$(ps -ef | grep -v "grep" | grep "${PID:-not_found}")
        if [[ -z ${FLUME_PID} ]]; then
            printf "%s\n" "$(failure)"
            result=$((${result}+1))
            rm ${CYGNUS_PID_FILE}
        else
            chown ${CYGNUS_USER}:${CYGNUS_USER} ${CYGNUS_PID_FILE}
            printf "%s\n" "$(success)"
        fi
    done
    return ${result}
}
cygnus_stop()
{
    local result=0
    local cygnus_instance=${1}

    if [[ $(ls -l /var/run/cygnus/cygnus_${cygnus_instance}*.pid 2> /dev/null | wc -l) -eq 0 ]]; then
        printf "%s\n" "There aren't any instance of Cygnus ${cygnus_instance} running $(success)"
        return 0
    fi

    for run_instance in $(ls /var/run/cygnus/cygnus_${cygnus_instance}*.pid)
    do
        local NAME
        NAME=${run_instance%.pid}
        NAME=${NAME#*cygnus_}

        printf "%-50s" "Stopping Cygnus ${NAME}..."
        PID=$(cat ${run_instance})
        kill -HUP ${PID} &> /dev/null
        sleep 2
        FLUME_PID=$(ps -ef | grep -v "grep" | grep "${PID:-not_found}")
        if [[ -z ${FLUME_PID} ]]; then
            rm -f ${run_instance}
            printf "%s\n" "$(success)"
        else
            printf "%s\n" "$(failure)"
            result=$((${result}+1))
            rm -f ${run_instance}
        fi
    done
    return ${result}
}
cygnus_status()
{
    local result=0
    local cygnus_instance=${1}

    if [[ $(ls -l /var/run/cygnus/cygnus_${cygnus_instance}*.pid 2> /dev/null | wc -l) -eq 0 ]]; then
        printf "%s\n" "There aren't any instance of Cygnus ${cygnus_instance} running"
        exit 1
    fi

    for run_instance in $(ls /var/run/cygnus/cygnus_${cygnus_instance}*.pid)
    do
        local NAME
        NAME=${run_instance%.pid}
        NAME=${NAME#*cygnus_}

        printf "%s\n" "Cygnus ${NAME} status..."
        status -p ${run_instance} ${FLUME_EXECUTABLE}
        result=$((${result}+${?}))
    done
    return ${result}
}
case ${PARAM} in
    'start')
        cygnus_start ${CYGNUS_INSTANCE}
        ;;
    'stop')
        cygnus_stop ${CYGNUS_INSTANCE}
        ;;
    'restart')
        cygnus_stop ${CYGNUS_INSTANCE}
        cygnus_start ${CYGNUS_INSTANCE}
        ;;
    'status')
        cygnus_status ${CYGNUS_INSTANCE}
        ;;
esac
my configuration is the following:
file cygnus_instance_mongo.conf :
# Who to run cygnus as. Note that you may need to use root if you want
# to run cygnus in a privileged port (<1024)
CYGNUS_USER=cygnus
# Where is the config folder
CONFIG_FOLDER=/usr/cygnus/conf
# Which is the config file
CONFIG_FILE=/usr/cygnus/conf/agent_mongo.conf
# Name of the agent. The name of the agent is not trivial, since it is the base for the Flume parameters
# naming conventions, e.g. it appears in .sources.http-source.channels=...
AGENT_NAME=cygnusagent
# Name of the logfile located at /var/log/cygnus. It is important to put the extension '.log' in order to the log rotation works properly
LOGFILE_NAME=/var/log/cygnus/cygnus.log
# Administration port. Must be unique per instance
ADMIN_PORT=8081
# Polling interval (seconds) for the configuration reloading
POLLING_INTERVAL=30
file agent_mongo.conf
cygnusagent.sources = http-source
cygnusagent.sinks = mongo-sink
cygnusagent.channels = mongo-channel
#=============================================
# source configuration
# channel name where to write the notification events
cygnusagent.sources.http-source.channels = mongo-channel
# source class, must not be changed
cygnusagent.sources.http-source.type = org.apache.flume.source.http.HTTPSource
# listening port the Flume source will use for receiving incoming notifications
cygnusagent.sources.http-source.port = 5050
# Flume handler that will parse the notifications, must not be changed
cygnusagent.sources.http-source.handler = com.telefonica.iot.cygnus.handlers.OrionRestHandler
# URL target
cygnusagent.sources.http-source.handler.notification_target = /notify
# Default service (service semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service = def_serv
# Default service path (service path semantic depends on the persistence sink)
cygnusagent.sources.http-source.handler.default_service_path = def_servpath
# Number of channel re-injection retries before a Flume event is definitely discarded (-1 means infinite retries)
cygnusagent.sources.http-source.handler.events_ttl = 10
# Source interceptors, do not change
cygnusagent.sources.http-source.interceptors = ts gi
# TimestampInterceptor, do not change
cygnusagent.sources.http-source.interceptors.ts.type = timestamp
# GroupinInterceptor, do not change
cygnusagent.sources.http-source.interceptors.gi.type = com.telefonica.iot.cygnus.interceptors.GroupingInterceptor$Builder
# Grouping rules for the GroupingInterceptor, put the right absolute path to the file if necessary
# See the doc/design/interceptors document for more details
cygnusagent.sources.http-source.interceptors.gi.grouping_rules_conf_file = /usr/cygnus/conf/grouping_rules.conf
# ============================================
# OrionMongoSink configuration
# sink class, must not be changed
cygnusagent.sinks.mongo-sink.type = com.telefonica.iot.cygnus.sinks.OrionMongoSink
# channel name from where to read notification events
cygnusagent.sinks.mongo-sink.channel = mongo-channel
# FQDN/IP:port where the MongoDB server runs (standalone case) or comma-separated list of FQDN/IP:port pairs where the MongoDB replica set members run
cygnusagent.sinks.mongo-sink.mongo_hosts = 127.0.0.1:27017
# a valid user in the MongoDB server (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_username =
# password for the user above (or empty if authentication is not enabled in MongoDB)
cygnusagent.sinks.mongo-sink.mongo_password =
# prefix for the MongoDB databases
cygnusagent.sinks.mongo-sink.db_prefix = kura_
# prefix for the MongoDB collections
cygnusagent.sinks.mongo-sink.collection_prefix = kura_
# true if collection names are based on a hash, false for human readable collections
cygnusagent.sinks.mongo-sink.should_hash = false
#=============================================
# mongo-channel configuration
# channel type (must not be changed)
cygnusagent.channels.mongo-channel.type = memory
# capacity of the channel
cygnusagent.channels.mongo-channel.capacity = 1000
# amount of bytes that can be sent per transaction
cygnusagent.channels.mongo-channel.transactionCapacity = 100
Any idea what I have missed?
UPDATE after frb answer
I changed the log file path and I got a new error:
[centos@cygnus-mongo ~]$ sudo journalctl -xn
-- Logs begin at Thu 2016-03-03 08:21:08 UTC, end at Thu 2016-03-03 08:22:07 UTC. --
Mar 03 08:21:49 cygnus-mongo.novalocal su[1211]: pam_unix(su:session): session opened for user cygnus by (uid=0)
Mar 03 08:21:49 cygnus-mongo.novalocal cygnus[1206]: Starting Cygnus mongo... bash: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Mar 03 08:21:49 cygnus-mongo.novalocal su[1211]: pam_unix(su:session): session closed for user cygnus
Mar 03 08:21:51 cygnus-mongo.novalocal cygnus[1206]: cat: /var/run/cygnus/cygnus_mongo.pid: No such file or directory
Mar 03 08:21:51 cygnus-mongo.novalocal cygnus[1206]: [FAILED]
Mar 03 08:21:51 cygnus-mongo.novalocal cygnus[1206]: rm: cannot remove ‘/var/run/cygnus/cygnus_mongo.pid’: No such file or directory
Mar 03 08:21:51 cygnus-mongo.novalocal systemd[1]: cygnus.service: control process exited, code=exited status=1
Mar 03 08:21:51 cygnus-mongo.novalocal systemd[1]: Failed to start SYSV: cygnus.
-- Subject: Unit cygnus.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit cygnus.service has failed.
--
-- The result is failed.
Mar 03 08:21:51 cygnus-mongo.novalocal systemd[1]: Unit cygnus.service entered failed state.
Mar 03 08:22:07 cygnus-mongo.novalocal sudo[1277]: centos : TTY=pts/0 ; PWD=/home/centos ; USER=root ; COMMAND=/bin/journalctl -xn
Everything in the configuration is OK except for this line in cygnus_instance_mongo.conf:
LOGFILE_NAME=/var/log/cygnus/cygnus.log
It must be:
LOGFILE_NAME=cygnus.log
I.e. the name of the log file within /var/log/cygnus.
The error was reported in this line of the service logs:
bash: /var/log/cygnus//var/log/cygnus/cygnus.log: No such file or directory
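With that change, the relevant lines in cygnus_instance_mongo.conf read as follows (only the value changes; the directory /var/log/cygnus is implied):
# Name of the logfile located at /var/log/cygnus (file name only, not a full path)
LOGFILE_NAME=cygnus.log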

Get the PID of a process started with sudo via ssh

I need the process id of a process (here sleep 20) started remotely via SSH and sudo.
date is inserted to illustrate the duration of the SSH connection. Without a connection there is also no process on my remote machine, of course.
$ date; ssh pc1 "sleep 20 & echo \$!"; date # works
Mi 20. Jan 16:18:29 CET 2016
11540
Mi 20. Jan 16:18:50 CET 2016
$ date; ssh pc1 "echo password | sudo -S sleep 20"; date # works
Mi 20. Jan 16:20:44 CET 2016
[sudo] password for lab: Mi 20. Jan 16:21:04 CET 2016
$ date; ssh pc1 "echo password | sudo -S sleep 20 & echo \$!"; date # does not
Mi 20. Jan 16:21:55 CET 2016
11916
Mi 20. Jan 16:21:56 CET 2016
On a second machine the last, complete command works fine:
$ date; ssh pc2 "echo password | sudo -S sleep 20 & echo \$!"; date
Mi 20. Jan 16:23:40 CET 2016
6035
[sudo] password for lab: Mi 20. Jan 16:24:01 CET 2016
Any suggestion why there is this different behaviour of the two machines?
Info: I know the risk of clear passwords but it's a shared account in an isolated test network.
Something like this?
$ remote_pid=$(ssh mauro@planck 'sleep 20 > /dev/null 2>&1 & echo $!')
$ echo $remote_pid
13878
or...
$ remote_pid=$(ssh mauro@planck 'echo secret | sudo -S sleep 20 > /tmp/log 2>&1 & echo $!')
It looks like a timing issue with how the processes are tied to the SSH session. With a few additional milliseconds of delay, the connection (and the process) stays established the whole time.
$ date; ssh pc1 "echo password | sudo -S sleep 20 & echo \$! && sleep 0.01"; date
Do 21. Jan 14:50:39 CET 2016
[sudo] password for lab: 6841
Do 21. Jan 14:51:00 CET 2016
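Putting the pieces together, the PID can be captured into a variable the same way as in the earlier examples (a sketch reusing the question's hostname and password handling; note that for a backgrounded pipeline $! is the PID of its last command, here sudo, not of sleep itself):
$ remote_pid=$(ssh pc1 "echo password | sudo -S sleep 20 > /dev/null 2>&1 & echo \$! && sleep 0.01")
$ echo $remote_pid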

2 questions, cron job not running restart, kill not gracefully restarting

I put the following cron job in my root crontab under /var/spool/cron:
*/5 * * * * service php-fpm-5.5.11 restart
I see it called in the cron logs every 5 minutes, so I know it is being called, but it is not restarting php-fpm.
Question 1:
Is there a different way to restart services when calling them in cron?
What would be the correct way to call this restart?
Another question, and the root of the problem: I have another call that runs every night that sometimes kills my website altogether because php-fpm does not restart correctly:
/bin/kill -SIGUSR1 `cat /opt/pifpm/php-5.5.11/var/run/php-fpm.pid 2>/dev/null` 2>/dev/null || true
I get:
[12-Jul-2015 00:52:29] ERROR: An another FPM instance seems to already listen on /opt/pifpm/fpmsockets/5.5.11.sock
[12-Jul-2015 00:52:29] ERROR: FPM initialization failed
Question 2
Is there a better way to call the kill statement? For instance:
[ ! -f /opt/pifpm/php-5.5.11/var/run/php-fpm.pid ] || kill -USR2 `cat /opt/pifpm/php-5.5.11/var/run/php-fpm.pid`
This is an nginx and centos setup.
Here is a portion of the cron log:
Jul 15 12:15:01 insp CROND[7325]: (root) CMD (service php-fpm-5.5.11 restart)
Jul 15 12:15:01 insp CROND[7326]: (root) CMD (/usr/local/cpanel/scripts/recoverymgmt >/dev/null 2>&1)
Jul 15 12:15:01 insp CROND[7327]: (root) CMD (/usr/local/cpanel/bin/dcpumon >/dev/null 2>&1)
Jul 15 12:15:01 insp CROND[7332]: (root) CMD (/usr/local/cpanel/scripts/autorepair recoverymgmt >/dev/null 2>&1)
Jul 15 12:15:01 insp CROND[7333]: (root) CMD (/usr/local/cpanel/bin/dbindex >/dev/null 2>&1)
Jul 15 12:16:53 insp /usr/bin/crontab[7530]: (root) BEGIN EDIT (root)
Jul 15 12:16:57 insp /usr/bin/crontab[7530]: (root) END EDIT (root)
Jul 15 12:20:01 insp CROND[7842]: (root) CMD (/usr/lib64/sa/sa1 1 1)
Jul 15 12:20:01 insp CROND[7845]: (root) CMD (/usr/local/cpanel/bin/dcpumon >/dev/null 2>&1)
Jul 15 12:20:01 insp CROND[7846]: (root) CMD (service php-fpm-5.5.11 restart)
Jul 15 12:20:01 insp CROND[7847]: (root) CMD (/usr/local/maldetect/maldet --mkpubpaths >> /dev/null 2>&1)
Answer to question number 1:
/etc/rc.d/init.d/php-fpm-5.5.11 restart is the correct command to use in cron.
The /etc/rc.d/init.d directory contains most of the services, including httpd.
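For example, the crontab entry from the question would then become (same schedule, just with the init script's full path):
*/5 * * * * /etc/rc.d/init.d/php-fpm-5.5.11 restart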

Can't find documents with MediaWiki 1.21 with the Lucene-search extension

We're running MediaWiki 1.21 on Ubuntu 12.04.3 with the Lucene-search extension 2.1.3 (from its build.properties file).
I followed the instructions for a Single Host Setup (using ant to build the jar), and Setting Up Suggestions for the Search Box. Things seemed to be working just fine. However, new documents aren't being matched by the type-ahead search feature. Looking at the filesystem, I see that there are various items in the application's indexes directory:
$ cd /usr/local/search/lucene-search-2/indexes
$ ls -l
total 24
drwxr-xr-x 10 root root 4096 Aug 20 2013 import
drwxr-xr-x 7 root root 4096 Apr 14 06:42 index
drwxr-xr-x 2 root root 4096 Apr 14 06:41 search
drwxr-xr-x 9 root root 4096 Aug 20 2013 snapshot
drwxr-xr-x 2 root root 4096 Aug 20 2013 status
drwxr-xr-x 8 root root 4096 Aug 20 2013 update
We have a daily cron job that runs the Lucene-search build command, which dumps the wiki database as xml, and then modifies files in the import and snapshot folders. I noticed that the job reads from the search folder, which contains symbolic links to the update folder:
$ ls -l search/
total 24
lrwxrwxrwx 1 root root 70 Feb 12 21:39 wikidb -> /usr/local/search/lucene-search-2/indexes/update/wikidb/20140212064727
lrwxrwxrwx 1 root root 73 Feb 12 21:39 wikidb.hl -> /usr/local/search/lucene-search-2/indexes/update/wikidb.hl/20140212064727
lrwxrwxrwx 1 root root 76 Apr 14 06:41 wikidb.links -> /usr/local/search/lucene-search-2/indexes/update/wikidb.links/20140414064150
lrwxrwxrwx 1 root root 77 Feb 12 21:39 wikidb.prefix -> /usr/local/search/lucene-search-2/indexes/update/wikidb.prefix/20140212064728
lrwxrwxrwx 1 root root 78 Feb 12 21:39 wikidb.related -> /usr/local/search/lucene-search-2/indexes/update/wikidb.related/20140212064713
lrwxrwxrwx 1 root root 76 Feb 12 21:39 wikidb.spell -> /usr/local/search/lucene-search-2/indexes/update/wikidb.spell/20140212064740
Only the wikidb.links entry is current. The others are a couple of months old, which makes me think I missed something in how our daily cron task is set up. Here's the job:
#!/bin/sh
log=/var/log/lucene-search-2-cron.log
(
echo "Building wiki lucene-search indexes ..."
cd /usr/local/search/lucene-search-2
./build
echo "Stopping the lsearchd service..."
service lsearchd stop
# ok, so stopping the service apparently doesn't mean that the processes are gone, whack them manually
# See tip on using the "[x]yz" character class option so you don't need the additional "grep -v xyz":
# http://stackoverflow.com/questions/3510673/find-and-kill-a-process-in-one-line-using-bash-and-regex
echo "Killing any lucene-search processes that didn't terminate..."
kill -9 $(ps -ef | grep '[l]search' | awk '{print $2}')
echo "Starting the lsearchd service..."
service lsearchd start
) > $log 2>&1
And here's the service script /etc/init.d/lsearchd:
#!/bin/sh -e
### BEGIN INIT INFO
# Provides: lsearchd
# Required-Start: $syslog
# Required-Stop: $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 1
# Short-Description: Start the Lucene Search daemon
# Description: Provide a Lucene Search backend for MediaWiki. Copied by John Ericson from: http://ubuntuforums.org/showthread.php?t=1476445
### END INIT INFO
# Set to install directory of lucene-search. For example: /usr/local/lucene-search-2.1.3
LUCENE_SEARCH_DIR="/usr/local/search/lucene-search-2"
# Set username for daemon to run as. Can also use syntax "username:groupname" to also specify group for daemon to run as. For example: me:me
RUN_AS_USER="lsearchd"
OPTIONS="-configfile $LUCENE_SEARCH_DIR/lsearch.conf"
test -x $LUCENE_SEARCH_DIR/lsearchd || exit 0
test -n "$RUN_AS_USER" && CHUID_ARG="--chuid $RUN_AS_USER" || CHUID_ARG=""
if [ -f "/etc/default/lsearchd" ] ; then
. /etc/default/lsearchd
fi
. /lib/lsb/init-functions
case "$1" in
start)
cd $LUCENE_SEARCH_DIR
log_begin_msg "Starting Lucene Search Daemon..."
start-stop-daemon --start --quiet --oknodo --chdir $LUCENE_SEARCH_DIR --background $CHUID_ARG --exec $LUCENE_SEARCH_DIR/lsearchd -- $OPT
IONS
log_end_msg $?
;;
stop)
log_begin_msg "Stopping Lucene Search Daemon..."
start-stop-daemon --stop --quiet --oknodo --retry 2 --chdir $LUCENE_SEARCH_DIR $CHUID_ARG --exec $LUCENE_SEARCH_DIR/lsearchd
log_end_msg $?
;;
restart)
$0 stop
sleep 1
$0 start
;;
reload|force-reload)
log_begin_msg "Reloading Lucene Search Daemon..."
start-stop-daemon --stop -signal 1 --chdir $LUCENE_SEARCH_DIR $CHUID_ARG --exec $LUCENE_SEARCH_DIR/lsearchd
log_end_msg $?
;;
status)
status_of_proc $LUCENE_SEARCH_DIR/lsearchd lsearchd && exit 0 || exit $?
;;
*)
log_success_msg "Usage: /etc/init.d/lsearchd {start|stop|restart|reload|force-reload|status}"
exit 1
esac
exit 0
Update #1:
I deleted the update directory and ran the build command manually from the console as root. As expected, it only generated the update/wikidb.links entry, none of the other folders exist. I reviewed my earlier setup notes, and don't see anything different, so how did those folders get created, and how do they get maintained?
Update #2:
I retraced my steps from the initial install, and couldn't see anything I missed. So on a chance, I stopped the service and ran lsearchd from the console, and it created the missing directories! So I terminated the process and tried things again: deleted the indexes folder and ran the cron script from the console as root. I confirmed that when run this way, lsearchd DID NOT create the missing directories. And of course, now I remember that I had run lsearchd from the console when initially setting things up, verifying that it was getting client queries for the wiki's Search input field. And these are the indexes it had been using for the lookups, which explains why new documents are not included.
Here is what the command looks like when run as a service:
$ ps -ef | grep [l]search
lsearchd 10192 1 0 14:02 ? 00:00:00 /bin/bash /usr/local/search/lucene-search-2/lsearchd -configfile /usr/local/search/lucene-search-2/lsearch.conf
lsearchd 10198 10192 0 14:02 ? 00:00:01 java -Djava.rmi.server.codebase=file:///usr/local/search/lucene-search-2/LuceneSearch.jar -Djava.rmi.server.hostname=AMWikiBugz -jar /usr/local/search/lucene-search-2/LuceneSearch.jar -configfile /usr/local/search/lucene-search-2/lsearch.conf
So the remaining question is:
Why does lsearchd NOT create the directories when run as a service?
This was a permissions issue. d'oh!
The cron job and service init scripts all execute as root, however the service process is instantiated as the lsearchd user. Once I changed ownership of /usr/local/search/lucene-search-2/indexes/ and all subdirectories to be owned by lsearchd:lsearchd, the lsearchd process was able to create the missing directories when run via the service under cron.
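For reference, the fix amounted to something along these lines (a sketch; the path is the indexes directory shown earlier):
chown -R lsearchd:lsearchd /usr/local/search/lucene-search-2/indexes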
It would have helped if something along the way had logged an error message to syslog indicating that it couldn't write to the target folder.
