Running Graylog collector as root - any other options? - graylog2

It seems the only way to gather nginx, apache and system logs through the graylog collector is to run it as root.
Best practice holds that running services as root is generally ill-advised.
Is there a way to collect said logs apart from running the service as root, or is that the general way to go?

I know this thread is almost 20 days old, but still:
I am running the graylog-collector as a custom user, using an init script with the following content:
do_start () {
    log_daemon_msg "Starting system $NAME Daemon"
    if [ ! -e $PIDDIR ] ; then
        mkdir $PIDDIR
        chown ${DAEMON_USER}:${DAEMON_USER} $PIDDIR
    fi
    start-stop-daemon --background --start \
        --user $DAEMON_USER \
        --chuid $DAEMON_USER \
        --make-pidfile \
        --pidfile $PIDFILE \
        --startas /bin/bash -- -c "exec $DAEMON $DAEMON_OPT >> /var/log/graylog-collector/console.log 2>&1" || return 2
    sleep 2
    log_end_msg $?
}
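If the script is saved as, say, /etc/init.d/graylog-collector (the name is only an example), it can be made executable and registered with Debian's standard sysvinit tooling:
sudo chmod +x /etc/init.d/graylog-collector
sudo update-rc.d graylog-collector defaults
sudo service graylog-collector start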
It might be interesting to know that I did a custom install, as there are no packages built for Debian 6.
Hope this helps.

In CentOS 6, many of the items I want to track via Graylog have permissions 600 and are owned by root:root. So, short of changing ownership/permissions on all of the /var/log files on all of my servers, there isn't a good way for graylog-collector to access these files without running as root.
I'm new to Graylog, but I gather that graylog-collector just sends information (it does not listen on any ports). That lowers the risk somewhat. Running Tomcat, Apache, or some other listening daemon as root carries higher risk.
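If selectively granting read access is acceptable, POSIX ACLs are one way to do it without changing ownership or the existing mode bits. A sketch, assuming the collector runs as a dedicated graylog-collector user (adjust the name) and the filesystem supports ACLs:
# let the collector user read an existing log file
sudo setfacl -m u:graylog-collector:r /var/log/nginx/access.log
# let it traverse the directory and give newly created files a default read ACL
sudo setfacl -m u:graylog-collector:rx /var/log/nginx
sudo setfacl -d -m u:graylog-collector:r /var/log/nginx
The default ACL on the directory matters because rotated logs are recreated and would otherwise come back unreadable to the collector.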

Related

how to properly configure jenkins swarm as a service to get proper screenshots?

I am having trouble finding out what the problem is in my node setup (CentOS + GNOME + swarm as a service): it connects and runs GUI tests properly, but returns "broken" screenshots (entirely white, or showing "Something went wrong").
In our CI environment, we build and test a GUI application (RED - Robot Editor) using the Eclipse tool RCPTT, which clicks on GUI elements to validate functionality.
Tests are executed on CentOS 7 nodes with metacity + GNOME + vncserver. Whenever something in the GUI is wrong (a GUI element is not found, or validation does not match the test criteria), a report is created together with a screenshot, so the tester can see what has changed in the tested app.
When the node is configured manually (from the Jenkins Nodes configuration page), or the swarm script is executed by the user on the node (via ssh), screenshots are fine.
When swarm runs as a service (the node is connected, systemctl status is green, and it runs as the same user as the manual setup), everything is OK except that the screenshots are off (the screen resolution is fine, but the whole screen is white or shows the error "Oh no! Something has gone wrong.").
I do not see any errors in the logs from RCPTT or xvnc, or in the job console.
What can be a root cause of broken screenshots?
env setup:
service definition
[Unit]
Description=Swarm client to create Jenkins slave
After=network.target
After=display-manager.service
[Service]
ExecStart=<path>/swarm_client.sh start
ExecStop=<path>/swarm_client.sh stop
Type=forking
PIDFile=<path>/slave.pid
User=root
Group=root
[Install]
WantedBy=graphical.target
swarm_client.sh
function startclient {
    PUBIP=`public ip string`
    # launch the swarm client in the background, logging to slave.log, and record its PID
    java \
        -jar ${SWARM_HOME}/swarm-client-3.3.jar \
        -executors 1 \
        -deleteExistingClients \
        -disableClientsUniqueId \
        -fsroot ${CLIENT_HOME} \
        -labels "linux" \
        -master <jenkins> \
        -name node-Swarm-${PUBIP} > ${CLIENT_HOME}/slave.log 2>&1 &
    PID=$!
    RETCODE=$?
    echo $PID > ${CLIENT_HOME}/slave.pid
    exit $RETCODE
}
function stopclient {
    if [ -f ${CLIENT_HOME}/slave.pid ]; then
        PID=`head -n1 ${CLIENT_HOME}/slave.pid`
        kill $PID
        rm -f ${CLIENT_HOME}/slave.pid
    fi
}

SWARM_HOME=<path>/jenkins/swarm
CLIENT_HOME=<path>/jenkins

case "$1" in
    start)
        startclient
        ;;
    stop)
        stopclient
        ;;
    *)
        echo "Usage: systemctl {start|stop} swarm_client.service" || true
        exit 1
esac
xvnc logs:
Fri Jul 7 11:05:40 2017
vncext: VNC extension running!
vncext: Listening for VNC connections on all interface(s), port 5942
vncext: created VNC server for screen 0
gnome-session-is-accelerated: llvmpipe detected.
OK, after a rubber-duck session and some googling, it seems that when setting up a service that depends on user environment properties/settings (the swarm client is essentially a reverse remote shell), the service should import at least the environment properties from the user's shell.
In my case, since swarm_client.sh worked fine from ssh but not as a service, it needed the user's ssh/bash environment properties:
# export the user's environment to a file
env > user.env
Then reference that file in the service definition, under the [Service] section:
EnvironmentFile=<path>/user.env
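Put together, the [Service] section of the unit from the question then looks roughly like this (paths are placeholders, as above):
[Service]
EnvironmentFile=<path>/user.env
ExecStart=<path>/swarm_client.sh start
ExecStop=<path>/swarm_client.sh stop
Type=forking
PIDFile=<path>/slave.pid
User=root
Group=root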
I have not investigated what exactly was missing, but this is good enough for my case.
Hope this helps someone with the same problems with swarm as a service under CentOS/RHEL.

How to enable full standard output logging for bash scripts run by an Azure Linux (Ubuntu) DSC box

We are deploying Elasticsearch on Linux Azure VMs using DSC for Linux.
At the moment it is a huge challenge to debug the DSC configuration, due to the long turnaround (several minutes) for a DSC build/run and the close-to-useless logs generated by the OMI service in dsc.log.
The file contains lots of "noise" and very limited useful output from the commands. For example, if a script configuration step fails, it just states:
"A general error occurred, not covered by a more specific error code.. The related ResourceId is [nxScript]/somename/"
On the other hand, according to OMI logging and debugging:
The logging level for the omiserver.log log file cannot be changed from the default in this version of the Operations Manager Agents for UNIX and Linux.
What is the best way to log all standard output from DSC run shell scripts?
So far, the only way we managed to get full logs from the OMI server was by changing the daemon startup command line in the /etc/init.d/omid file from something like:
$CREATE_LINKS && start-stop-daemon --start --quiet --pidfile $PIDFILE --name "omid" --startas $OMI_BIN -- --configfile=/etc/opt/omi/conf/omiserver.conf -d
to
$CREATE_LINKS && start-stop-daemon --start --quiet --pidfile $PIDFILE --name "omid" --startas /bin/bash --background -- -c "exec $OMI_BIN --configfile=/etc/opt/omi/conf/omiserver.conf > /var/log/omiserver.log 2>&1"
Please note: this workaround is only good enough as a debugging aid and should never be propagated anywhere close to production. omiserver.log will contain all executed scripts as well as their stdout and stderr output.
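Another option that does not touch the OMI daemon is to capture output inside the scripts that the nxScript resources run. A minimal sketch, assuming the resource executes the script with bash; the log directory and file name below are just examples:
#!/bin/bash
# mirror everything this script writes (stdout and stderr) into a log file,
# so the output survives even when the dsc.log entry is unhelpful
mkdir -p /var/log/dsc-scripts
exec > >(tee -a /var/log/dsc-scripts/configure-elasticsearch.log) 2>&1

echo "starting elasticsearch configuration step"
# ... actual provisioning commands go here ...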

What's the difference? sudo restart

What's the difference between:
sudo /etc/init.d/apache2 restart
and
sudo service apache2 restart
I tried the first one, and it didn't apply my changes, while
sudo service apache2 restart
did pick up my changes.
Here is what is actually happening when you run sudo /etc/init.d/apache2 restart:
if ! $APACHE2CTL configtest > /dev/null 2>&1; then
    $APACHE2CTL configtest || true
    log_end_msg 1
    exit 1
fi
if check_htcacheclean ; then
    log_daemon_msg "Restarting web server" "htcacheclean"
    stop_htcacheclean
    log_progress_msg apache2
else
    log_daemon_msg "Restarting web server" "apache2"
fi
PID=$(pidof_apache) || true
if ! apache_wait_stop; then
    log_end_msg 1 || true
fi
if $APACHE2CTL start; then
    if check_htcacheclean ; then
        start_htcacheclean || log_end_msg 1
    fi
    log_end_msg 0
else
    log_end_msg 1
fi
As you can see, a config test is run first; if it succeeds, the server is stopped and then started.
I find it hard to believe that running this command did not apply your changes if they were properly saved and valid. I only use this command and have never had that issue.
/usr/bin/service is described as:
# A convenient wrapper for the /etc/init.d init scripts.
And it does the following:
SERVICEDIR="/etc/init.d"

# Otherwise, use the traditional sysvinit
if [ -x "${SERVICEDIR}/${SERVICE}" ]; then
    exec env -i LANG="$LANG" PATH="$PATH" TERM="$TERM" "$SERVICEDIR/$SERVICE" ${ACTION} ${OPTIONS}
else
    echo "${SERVICE}: unrecognized service" >&2
    exit 1
fi
So the commands are basically identical: sudo service apache2 restart is just a wrapper for sudo /etc/init.d/apache2 restart.
You may also use sudo /etc/init.d/apache2 reload, which reloads the config without restarting the server. This only applies to configuration changes; it will not load modules you have enabled, and for that you need to restart Apache.
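In practice, the usual sequence after editing the configuration looks something like this (Debian-style commands, matching the snippets above):
sudo apache2ctl configtest     # check the syntax first
sudo service apache2 reload    # apply config changes without a full restart
sudo service apache2 restart   # needed after enabling/disabling modules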
Edit: The init script code above is from a Debian system.
In general, whether these two commands are identical or not depends on your Linux distribution.
The first one requires the existence of a traditional SysV-style init script. Until a few years ago that was practically the only way to control services and the service script was just a simple wrapper that basically added the init script path.
These days many Linux distributions have switched to alternative service management systems such as upstart or systemd. Therefore, the service wrapper may also make use of these systems, providing a certain degree of compatibility.
Bottom line: depending on the specifics of your Linux distribution, /etc/init.d/apache2 may not exist at all, it may itself just call service, or it may do nothing useful. My Mageia Linux system, for example, launches Apache using a systemd service file and has no SysV init script for it at all.
You are generally better off just using service <service> <command>.
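If you are unsure how your distribution manages Apache, a quick check (assuming systemctl is available on systemd-based systems):
systemctl status apache2     # does systemd know about the service?
ls -l /etc/init.d/apache2    # is there still a SysV init script?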

How do I run GAE as a background-service

I found an init.d script template, filled in the blanks, and tried to invoke GAE with something like:
start-stop-daemon -S --background python \
    /opt/google_appengine/dev_appserver.py --host=0.0.0.0 \
    --admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www
This doesn't work... but if I run it from the command line it works fine, though it blocks the terminal...
How do I invoke this command at startup using init.d and switch to the user "gae", similar to how Apache runs as www-data?
I also (briefly) tried to use start-stop-daemon to control Google App Engine (without any luck), so I ended up using /etc/rc.local to launch the daemon.
Add the following to /etc/rc.local (before any exit command):
sudo -i -u gae python /opt/google_appengine/dev_appserver.py --host=0.0.0.0 \
--storage_path /var/cache/appengine/gae \
--admin_host=0.0.0.0 --php_executable_path=/usr/bin/php-cgi /var/www > /dev/null 2> /dev/null &
Note, I included a storage_path in the launch options. Make sure you do the following:
sudo mkdir -p /var/cache/appengine/gae
sudo chown gae: /var/cache/appengine/gae
To restart the process (after an update), I just kill python and manually execute rc.local:
sudo killall -9 python
sudo /etc/rc.local
I have finally figured out how and why start-stop-daemon was not working... it all boiled down to some simple syntax errors and a (still?) misunderstanding on my part:
https://unix.stackexchange.com/questions/154692/start-stop-daemon-wont-start-my-python-script-as-service
In brief, when I use the following init.d script and register it, GAE starts and stops as expected:
#!/bin/sh
### BEGIN INIT INFO
# Provides: Google App Engine daemon management
# Required-Start:
# Required-Stop:
# Default-Start:
# Default-Stop:
# Short-Description: Google App Engine initscript
# Description: Start/stop appengine web server
### END INIT INFO
# Author: Alex Barylski <alex.barylski@gmail.com>
. /lib/lsb/init-functions
#
# Initialize variables
#
name=appengine
desc="Google App Engine"
bind=0.0.0.0
docroot=/var/www
phpexec=/usr/bin/php-cgi
pidfile=/var/run/$name.pid
args="--host=$bind --admin_host=$bind --php_executable_path=$phpexec"
prog='/usr/bin/python /opt/google_appengine/dev_appserver.py'
#
# TODO: Figure out how to switch user (ie: --chuid www-data)
#
case "${1}" in
start)
echo "Starting...$desc"
start-stop-daemon --start --make-pidfile --background --oknodo \
--pidfile $pidfile \
--name $name \
--exec $prog \
-- $args $docroot
;;
stop)
echo "Stopping...$desc"
start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $prog
;;
restart)
${0} stop
sleep 1
${0} start
;;
*)
echo "Usage: ${0} {start|stop|restart}"
exit 1
;;
esac
exit 0
I cannot figure out how to start the service as www-data, and I am sure I could make this script more robust, but for development purposes it is sufficient and runs as a daemon (see the sketch below for the user-switching part).
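For the user-switching TODO, start-stop-daemon's --chuid option (the same flag used in the Graylog collector init script earlier on this page) should cover it; a sketch, assuming a www-data user that can read the docroot and write the storage path:
start-stop-daemon --start --make-pidfile --background --oknodo \
    --chuid www-data:www-data \
    --pidfile $pidfile \
    --name $name \
    --exec $prog \
    -- $args $docroot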
Hope this helps someone in the future,
Alex

Automatically starting Celery from within Django app

I am getting a Django 1.6 setup running on a Linux (Debian Wheezy) server on Google Compute Engine. I've got Celery 3.1 running in the background to help with some processes. When I start a new instance (using a snapshot I've created), I always need to start Celery. I am looking for a way to start Celery automatically when the server boots. This is particularly helpful if the server decides to restart, as they seem to do now and then. To achieve this, I've edited the rc.local file:
$ sudo nano /etc/rc.local
It used to contain the following:
exit 0
[ -x /sbin/initctl ] && initctl emit --no-wait google-rc-local-has-run || true
I've edited the file such that it now reads:
cd /home/user/gce_app celery -A myapp.tasks --concurrency=1 --loglevel=info worker > output.log 2> errors.log &
exit 0
[ -x /sbin/initctl ] && initctl emit --no-wait google-rc-local-has-run || true
The directory:
/home/user/gce_app
is where my Django project resides and the directory I need to be in to start Celery. However, after restarting the instance, when I type in:
$ celery status
Error: No nodes replied within time constraint.
Opening the errors.log file, I see:
/etc/rc.local: 14: /etc/rc.local: celery: not found
Surely the cd at the start of that line should address this? Is there a way (within the Django project itself) to start Celery when the project starts, to make the setup more platform-independent and immune to inevitable OS updates?
I think you're missing a semicolon between your 'cd' and celery invocations. Also, I suspect rc.local may not be searching your path, so you may need to give an absolute path to celery. e.g.
cd /home/user/gce_app; /usr/bin/celery ...
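Applied to the command from the question, the rc.local entry might end up looking something like this (the /usr/bin/celery path is an assumption; check it with which celery, and use the virtualenv's celery binary if that's where it is installed):
cd /home/user/gce_app && /usr/bin/celery -A myapp.tasks worker --concurrency=1 --loglevel=info > output.log 2> errors.log &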
Alternatively, you might look at using a startup script from the GCE metadata to avoid needing to modify rc.local.
Since you seem to be using upstart, this might help you:
description "runs celery"
start on runlevel [2345]
stop on runlevel [!2345]
console log
env VENV='/srv/myvirtualenv'
env PROJECT='/srv/run/mydjangoproject'
exec su -s /bin/sh -c 'exec "$0" "$@"' www-data -- /usr/bin/env PATH=$VENV:$PATH $VENV/python $PROJECT/manage.py celeryd
respawn
respawn limit 10 5
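Assuming this job definition is saved as, say, /etc/init/celeryd.conf (the file name is just an example), Upstart will start it on boot via the runlevel stanza, and it can be controlled with:
sudo start celeryd
sudo status celeryd
sudo stop celeryd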
