Minecraft server script to start on reboot - linux

I am using a script I found on a minecraft wiki to automatically start my minecraft server after I reboot.
console log
exec start-stop-daemon --stop "stop" --start --chdir /minecraft --chuid minecraft \
--exec /usr/bin/java -- -Xms1536m -Xmx2048M -jar minecraft_server.jar nogui 2>&1
start on runlevel [2345]
stop on runlevel [^2345]
respawn
respawn limit 20 5
Every time I try to start the script I get this error.
root@bcserv:/# tail /var/log/upstart/minecraft-server.log
start-stop-daemon: only one command can be specified
Try 'start-stop-daemon --help' for more information.
Does anyone know what I am doing wrong in my exec syntax? I am running Ubuntu Linux 13.10.
I tried removing the --stop "stop"; now I am getting this:
root@bcserv:/home/chris# tail /var/log/upstart/minecraft-server.log
/usr/bin/java already running.
And the server does not appear to start.
root@bcserv:/home/chris# ps -aux |grep mine
root 4564 0.0 0.0 9452 904 pts/2 S+ 17:21 0:00 grep --color=auto mine
Any other suggestions? It doesn't seem to be picking up my minecraft options.

Remove ' --stop "stop" ' from your exec.
You can only have one command, --stop or --start, and it looks as if you just need the latter.
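For reference, a minimal sketch of the corrected job file (untested; note that Upstart spells a negated runlevel list as [!2345]):
console log
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 20 5
exec start-stop-daemon --start --chdir /minecraft --chuid minecraft \
    --exec /usr/bin/java -- -Xms1536m -Xmx2048M -jar minecraft_server.jar nogui 2>&1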

Related

Scripts launched from crontab don't work as if I run as self

I have a script on my home theater PC running Ubuntu 22. The purpose of this script is to check:
When was mouse/keyboard activity last seen?
Is there active audio playing (music or video)?
It then determines whether the HTPC is idle or not and relays this information back to my home automation server.
I have a "helper" script that fires this up in a tmux session (mainly so I can attach and debug since I have it dump output to the terminal) at boot.
If I run this script as the main user (htpc), then it works great. When it launches from the crontab, PulseAudio states that there is no PulseAudio daemon running. Checking ps shows that both run as htpc.
Output of running directly from a terminal (I'm SSH'd in):
htpc@htpc:~/Documents$ ps aux | grep -i idle
htpc 8397 0.0 0.0 11308 3736 ? Ss 00:03 0:00 tmux new-session -d -s idle-script exec /home/htpc/Documents/report_idle_to_hass.sh
htpc 8398 0.0 0.0 10100 4012 pts/2 Ss+ 00:03 0:00 /bin/bash /home/htpc/Documents/report_idle_to_hass.sh
htpc 8455 0.0 0.0 9208 2180 pts/3 S+ 00:03 0:00 grep --color=auto -i idle
Output of running from a crontab job:
htpc@htpc:~/Documents$ ps aux | grep -i idle
htpc 6720 0.0 0.0 11304 3604 ? Ss 23:57 0:00 tmux new-session -d -s idle-script exec /home/htpc/Documents/report_idle_to_hass.sh
htpc 6721 0.0 0.0 10100 3988 pts/2 Ss+ 23:57 0:00 /bin/bash /home/htpc/Documents/report_idle_to_hass.sh
htpc 6748 0.0 0.0 9208 2416 pts/3 S+ 23:57 0:00 grep --color=auto -i idle
Visually, I don't see why one works and another doesn't.
tmux output of a manually started (me via ssh) instance:
Counter overflowed: No
[][]
Counter overflowed: No
[][]
Note that the [][] is simply a response from my home automation server. It's an empty response because the state did not change. It will populate with previous-state information if there was a change.
tmux output of a crontab-started instance:
Counter overflowed: Yes
[][]
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
Counter overflowed: Yes
[][]
No PulseAudio daemon running, or not running as session daemon.
No PulseAudio daemon running, or not running as session daemon.
Crontab entry (running crontab -e):
@reboot exec /home/htpc/Documents/start_idle_report.sh
I'll give the script producing the error first. The only part that doesn't work is the pacmd command; it does successfully grab the window name and the keyboard/mouse last-input time. Only pacmd has issues.
check_idle.sh:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export DISPLAY=:0.0
# Set the timeout value to consider device idle
time_to_be_idle_ms=60000
audio_check="`exec pacmd list-sink-inputs | grep \"state: RUNNING\"`"
idle_time="`exec xprintidle`"
current_app="`exec xdotool getwindowfocus getwindowname`"
if [ -z "$audio_check" ]
then
    if [ "$idle_time" -gt "$time_to_be_idle_ms" ]
    then
        if [[ "$current_app" == *"Frigate"* ]]
        then
            # No audio, idle time, but cameras are in focus
            echo "No"
        else
            # No audio, idle time, and cameras aren't in focus
            echo "Yes"
        fi
    else
        # No audio playing, but idle time not met
        echo "No"
    fi
else
    # Playing audio
    echo "No"
fi
start_idle_report.sh:
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
# Kick off the report script
if [[ "`ps aux | grep -i report_idle_to_hass | wc -l`" -eq "1" ]];
then
tmux new-session -d -s "idle-script" "exec /home/htpc/Documents/report_idle_to_hass.sh"
fi
report_idle_to_hass.sh (some info redacted):
#!/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
submit_state() {
    # This function sends the desired output to my server (body redacted)
    :   # placeholder so the redacted function remains valid bash
}
submit_state "No"
last_state="No"
counter=0
while [ true ]
do
    idle_check="`/home/htpc/Documents/check_idle.sh`"
    if [[ "$idle_check" != "$last_state" ]]
    then
        echo "States changed: $idle_check"
        submit_state "$idle_check"
        last_state="$idle_check"
        counter=0
    else
        counter=$((counter+1))
    fi
    if [ "$counter" -gt 10 ]
    then
        echo "Counter overflowed: $last_state"
        submit_state "$last_state"
        counter=0
    fi
    sleep 2
done
I looked up how pulse figures out what to connect to: https://www.freedesktop.org/wiki/Software/PulseAudio/FAQ/
It lists several environment variables. When I look with printenv, there are no display related or pulse related variables set.
I do not have a ~/.pulse folder to contain the config, and the default pulse file in /etc/pulse is empty (nothing but comments in there).
How is it finding a pulse server when I run it but not in crontab?
I thought maybe it was a path issue (even though I define the paths in the script)
htpc@htpc:~/Documents$ which pacmd
/usr/bin/pacmd
I have tried it both with 'exec' removed and with it added. I added 'exec' after finding another question on Stack Overflow suggesting it, because cron fires off the script with 'sh -c' instead, and I figured this might be the problem, but it did not fix it.
I expected the behavior to be identical to me sshing in and simply typing the tmux command myself, but for some reason when started via a crontab job it does not work.
Feels a lot like the commands are being executed as a different user, or some variable issue.
If your issue is that it is running as cron and not as you, then you need to place your cron entries in a user-specific crontab under /var/spool/cron/crontabs (Ubuntu MATE 20.04), not in the main crontab.
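Another avenue worth checking (an assumption drawn from the PulseAudio FAQ linked in the question, not verified on this setup): cron starts jobs without the login-session environment, and pacmd finds the per-user daemon through its socket under $XDG_RUNTIME_DIR/pulse. Exporting that variable in the helper script, as sketched below, may let the crontab-started instance reach the daemon. The UID 1000 is hypothetical; verify with id -u htpc.
#!/bin/bash
# Hypothetical variant of start_idle_report.sh for use from cron:
# point PulseAudio clients at the htpc user's session daemon.
export XDG_RUNTIME_DIR="/run/user/1000"   # assumed UID, check `id -u htpc`
if [[ "`ps aux | grep -i report_idle_to_hass | wc -l`" -eq "1" ]]; then
    tmux new-session -d -s "idle-script" "exec /home/htpc/Documents/report_idle_to_hass.sh"
fi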

How to create a cron job in server for starting celery app?

The server where I have hosted my app usually restarts due to maintenance, and when it does, a function that I keep open in the background stops and I have to turn it back on manually.
Here are the commands that I run over ssh:
ssh -p19199 -i <my ssh key file name> <my username>@server.net
source /home/myapp/virtualenv/app/3.8/bin/activate
cd /home/myapp/app
celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log --detach
I need to start this celery app whenever there is no 'celery' process in the output of ps axuww. If it is already running, it will show:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
myapp 8792 0.1 0.2 1435172 82252 ? Sl Jun27 1:27 /home/myapp/virtualenv/app/3.8/bin/python3 -m celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log
myapp 8898 0.0 0.2 1115340 92420 ? S Jun27 0:32 /home/myapp/virtualenv/app/3.8/bin/python3 -m celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log
myapp 8899 0.0 0.2 1098900 76028 ? S Jun27 0:00 /home/myapp/virtualenv/app/3.8/bin/python3 -m celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log
myapp 8900 0.0 0.2 1098904 76028 ? S Jun27 0:00 /home/myapp/virtualenv/app/3.8/bin/python3 -m celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log
myapp 8901 0.0 0.2 1098908 76040 ? S Jun27 0:00 /home/myapp/virtualenv/app/3.8/bin/python3 -m celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log
myapp 28216 0.0 0.0 10060 2928 pts/1 Ss 15:57 0:00 -bash
myapp 28345 0.0 0.0 49964 3444 pts/1 R+ 15:57 0:00 ps axuww
I need the cron job to check every 15 minutes.
You have everything figured out already; just put it inside an IF block in a shell (Bash) script and save it wherever you feel like (e.g., $HOME). After the script, we'll set up cron.
I'm gonna call this script "start_celery.sh":
#!/usr/bin/env bash
set -ue
#
# Start celery only if it is not running yet
#
if [[ -z "$(ps auxww | grep '[c]elery')" ]]; then
    source /home/myapp/virtualenv/app/3.8/bin/activate
    cd /home/myapp/app
    celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log --detach
fi
Remember to make it executable!
$ chmod +x start_celery.sh
A few notes:
the output from ps is being filtered by grep "[c]elery" so that the grep process itself doesn't show up in the results, as it would if we simply did grep "celery" (reference: https://www.baeldung.com/linux/grep-exclude-ps-results).
the -z test fires the IF block only when that filtered output is empty, i.e. when no celery worker is running. (A plain grep "celery" | grep -v "grep" would also work with this string test; the bracket trick just avoids the extra pipe stage.)
setting set -ue in the script aborts the whole script as soon as something goes wrong (a precaution I always use, and I leave it here as a shared hint; even though we are not using unset variables (-u), I like to keep it). Feel free to remove it. (reference: https://www.gnu.org/software/bash/manual/html_node/The-Set-Builtin.html)
Now that we have the script -- ~/start_celery.sh in our home directory -- we just have to include it in our (user's) crontab jobs. To do that, we can use some help from some websites of our (awesome) geek community:
The first one is https://crontab.guru/ . If nothing else, it puts the crontab syntax in very simple, visual, straightforward words. They also offer a paid service in case you want to monitor your jobs.
In the same spirit of online monitoring (maybe of interest to you; I leave it here for the record and because it's free), you may find https://cron-job.org/ interesting.
Whatever resources we use to figure out the crontab line we need, "every 15 minutes" is: */15 * * * *
To set that up, we open our (user's) cronjob table (crontab) and set our job accordingly:
Open crontab jobs:
$ crontab -e
Set the (celery) job (inside crontab):
*/15 * * * * /home/myapp/start_celery.sh
This should work, but I didn't test the whole thing. Let me know how that goes.
I think what you need is a cron job running your script at startup, as long as you don't experience any function crashes.
Start by opening a terminal window and running the command below. Don't forget to fill in <my ssh key file name> and <my username>. This will generate the startup.sh file for you and save it in $HOME.
echo -e "ssh -p19199 -i <my ssh key file name> <my username>@server.net\n\nsource /home/myapp/virtualenv/app/3.8/bin/activate\n\ncd /home/myapp/app\n\ncelery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log --detach" >> $HOME/startup.sh
Make it executable using the following command:
sudo chmod +x $HOME/startup.sh
Now create the cron job to run the script when you restart your server. In the terminal create a new cron job.
crontab -e
This will open a new editor screen for you to write the cron job. Press the letter i on your keyboard to start the insert mode and type the following
@reboot sh $HOME/startup.sh
Exit the editor and save your changes by pressing the Esc key, then typing :wq and hitting Enter.
Restart your server and the cron job should run the script for you.
If you want to manage the tasks that should run when a system reboots, shuts down, and the like, you should not make use of cron. These tasks are managed by the Power Management Utilities (pm-utils). Have a look at man pm-action and this question for a somewhat detailed explanation.
Obviously, as a user it is not always possible to configure these hooks for pm-utils, and alternative options have to be considered. Having a cron job running every x minutes is an obvious solution that would allow the user to automatically start a job.
The command you would run in cron would look something like:
pgrep -f celery >/dev/null 2>&1 || /path/to/celery_startup
Where /path/to/celery_startup is a simple bash script that contains the commands to be executed (the ones mentioned in the OP).
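For completeness, a sketch of what /path/to/celery_startup could contain, assuming the commands from the question (the ssh step is dropped because the cron job already runs on the server itself):
#!/bin/bash
# Hypothetical celery_startup: activate the virtualenv and start the worker.
source /home/myapp/virtualenv/app/3.8/bin/activate
cd /home/myapp/app || exit 1
celery -A app.mycelery worker --concurrency=4 --loglevel=INFO -f celery.log --detach
The full crontab line would then be:
*/15 * * * * pgrep -f celery >/dev/null 2>&1 || /path/to/celery_startup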

How to suppress stdout and stderr messages when using pkill

I'm trying to kill some processes on Ubuntu 18.04, for which I am using the pkill command, but I am not able to suppress the Killed message for some reason.
Here are the processes that are running.
# ps -a
PID TTY TIME CMD
2346 pts/0 00:00:00 gunicorn
2353 pts/0 00:00:00 sh
2360 pts/0 00:00:00 gunicorn
2363 pts/0 00:00:00 gunicorn
2366 pts/0 00:00:00 ps
My attempts to kill the process and suppress the output:
# 1st attempt
# pkill -9 gunicorn 2>&1 /dev/null
pkill: only one pattern can be provided
Try `pkill --help' for more information.
# 2nd attempt (this killed the process, but I got the output `Killed` and had to press `enter` to get back to the command line)
# pkill -9 gunicorn > /dev/null
root@my-ubuntu:/# Killed
# 3rd attempt (behavior similar to previous attempt)
# pkill -9 gunicorn 2> /dev/null
root@my-ubuntu:/# Killed
root@my-ubuntu:/#
What is it that I am missing?
I think you want this syntax:
pkill -9 gunicorn &>/dev/null
the &> is a somewhat newer addition in Bash ( think 4.0 ??) that is a shorthand way of redirecting both stdout and stderr.
Also, are you running pkill from the same terminal session that gunicorn was started on? I don't think pkill prints a message like "Killed" which makes me wonder if that is coming from some other process....
You might be able to suppress it by running set +m in the terminal (to disable job monitoring). To reenable, run set -m.
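Going back to the &> shorthand: if the shell predates Bash 4, it isn't available, and the portable equivalent in POSIX sh is:
# Same effect as `pkill -9 gunicorn &>/dev/null` in older shells
pkill -9 gunicorn > /dev/null 2>&1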
I found the only way to prevent output from pkill was to use the advice here:
https://www.cyberciti.biz/faq/how-to-redirect-output-and-errors-to-devnull/
command 1>&- 2>&-
This closes stdout/stderr for the command.
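Applied to the question's command, that would look like the line below (with the general caution that closing the descriptors can itself make some programs complain about failed writes):
pkill -9 gunicorn 1>&- 2>&-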

Upstart: Error when using command substitution in post-start script stanza during startup sequence

I'm seeing an issue in upstart where using command substitution inside a post-start script stanza causes an error (syslog reports "terminated with status 1"), but only during the initial system startup.
I've tried using just about every startup event hook under the sun. local-filesystems and net-device-up worked without error about 1/100 tries, so it looks like a race condition. It works just fine on manual start/stop. The command substitutions I've seen trigger the error are a simple cat or date, and I've tried using both the $() way and the backtick way. I've also tried using sleep in pre-start to beat the race condition but that did nothing.
I'm running Ubuntu 11.10 on VMWare with a Win7 host. Spent too many hours troubleshooting this already... Anyone got any ideas?
Here is my .conf file for reference:
start on runlevel [2345]
stop on runlevel [016]
env NODE_ENV=production
env MYAPP_PIDFILE=/var/run/myapp.pid
respawn
exec start-stop-daemon --start --make-pidfile --pidfile $MYAPP_PIDFILE --chuid node-svc --exec /usr/local/n/versions/0.6.14/bin/node /opt/myapp/live/app.js >> /var/log/myapp/audit.node.log 2>&1
post-start script
    MYAPP_PID=`cat $MYAPP_PIDFILE`
    echo "[`date -u +%Y-%m-%dT%T.%3NZ`] + Started $UPSTART_JOB [$MYAPP_PID]: PROCESS=$PROCESS UPSTART_EVENTS=$UPSTART_EVENTS" >> /var/log/myapp/audit.upstart.log
end script
post-stop script
    MYAPP_PID=`cat $MYAPP_PIDFILE`
    echo "[`date -u +%Y-%m-%dT%T.%3NZ`] - Stopped $UPSTART_JOB [$MYAPP_PID]: PROCESS=$PROCESS UPSTART_STOP_EVENTS=$UPSTART_STOP_EVENTS EXIT_SIGNAL=$EXIT_SIGNAL EXIT_STATUS=$EXIT_STATUS" >> /var/log/myapp/audit.upstart.log
end script
The most likely scenario I can think of is that $MYAPP_PIDFILE has not been created yet.
Because you have not specified an 'expect' stanza, the post-start is run as soon as the main process has forked and execed. So, as you suspected, there is probably a race between start-stop-daemon running node and writing that pidfile and /bin/sh forking, execing, and forking again to exec cat $MYAPP_PIDFILE.
The right way to do this is to rewrite your post-start as such:
post-start script
    for i in 1 2 3 4 5 ; do
        if [ -f $MYAPP_PIDFILE ] ; then
            echo ...
            exit 0
        fi
        sleep 1
    done
    echo "timed out waiting for pidfile"
    exit 1
end script
It's worth noting that in Upstart 1.4 (first included in Ubuntu 12.04), Upstart added logging ability, so there's no need to redirect output into a special log file. All console output defaults to /var/log/upstart/$UPSTART_JOB.log (which is rotated by logrotate). So those echos could just be bare echos.
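Under Upstart 1.4+ the post-start stanza could therefore shrink to something like this (a sketch, untested; output is captured to /var/log/upstart/$UPSTART_JOB.log automatically):
post-start script
    # No redirection needed: Upstart logs console output itself.
    echo "[`date -u +%Y-%m-%dT%T.%3NZ`] + Started $UPSTART_JOB [`cat $MYAPP_PIDFILE`]"
end script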

ubuntu: start (upstart) second instance of mongodb

The standard upstart script that comes with mongodb works fine:
# Ubuntu upstart file at /etc/init/mongodb.conf
limit nofile 20000 20000
kill timeout 300 # wait 300s between SIGTERM and SIGKILL.
pre-start script
    mkdir -p /var/lib/mongodb/
    mkdir -p /var/log/mongodb/
end script
start on runlevel [2345]
stop on runlevel [06]
script
    ENABLE_MONGODB="yes"
    if [ -f /etc/default/mongodb ]; then . /etc/default/mongodb; fi
    if [ "x$ENABLE_MONGODB" = "xyes" ]; then exec start-stop-daemon --start --quiet --chuid mongodb --exec /usr/bin/mongod -- --config /etc/mongodb.conf; fi
end script
If I want to run a second instance of mongod, I thought I could just copy both /etc/mongodb.conf -> /etc/mongodb2.conf and /etc/init/mongodb.conf -> /etc/init/mongodb2.conf, change the standard port in the first conf file, and then adjust the script above to start with the newly created /etc/mongodb2.conf.
I can then just say start mongodb2 and the service starts ... but it is killed right after starting. What do I change to get both processes up and running?
# Ubuntu upstart file at /etc/init/mongodb2.conf
limit nofile 20000 20000
kill timeout 300 # wait 300s between SIGTERM and SIGKILL.
pre-start script
    mkdir -p /var/lib/mongodb2/
    mkdir -p /var/log/mongodb2/
end script
start on runlevel [2345]
stop on runlevel [06]
script
    ENABLE_MONGODB="yes"
    if [ -f /etc/default/mongodb ]; then . /etc/default/mongodb; fi
    if [ "x$ENABLE_MONGODB" = "xyes" ]; then exec start-stop-daemon --start --quiet --chuid mongodb --exec /usr/bin/mongod -- --config /etc/mongodb2.conf; fi
end script
I couldn't get the "standard" upstart script to work (as described above), so I changed it like this:
# Ubuntu upstart file at /etc/init/mongodb.conf
limit nofile 20000 20000
kill timeout 300 # wait 300s between SIGTERM and SIGKILL.
pre-start script
    mkdir -p /var/lib/mongodb/
    mkdir -p /var/log/mongodb/
end script
start on runlevel [2345]
stop on runlevel [06]
script
    exec sudo -u mongodb /usr/bin/mongod --config /etc/mongodb.conf
end script
And if you want to run other instances of mongodb, just copy the *.conf files and make the changes to /etc/mongodb2.conf and /etc/init/mongodb2.conf:
# Ubuntu upstart file at /etc/init/mongodb2.conf
limit nofile 20000 20000
kill timeout 300 # wait 300s between SIGTERM and SIGKILL.
pre-start script
    mkdir -p /var/lib/mongodb2/
    mkdir -p /var/log/mongodb2/
end script
start on runlevel [2345]
stop on runlevel [06]
script
    exec sudo -u mongodb /usr/bin/mongod --config /etc/mongodb2.conf
end script
I think the only thing that is not working is restart mongodb; you have to stop and then start it again.
I know there's already an accepted solution, but I think this one is more elegant.
The other way is to use start-stop-daemon's pid-file creation. For example, I have 2 mongos running on the same server with 2 different upstart scripts, and the two magic lines are:
exec start-stop-daemon --make-pidfile --pidfile /var/run/mongodb-router.pid --start --startas /data/bin/mongos --chuid mongo -- --logappend --logpath /mnt/log/mongos.log --configdb mongo2-config01,mongo2-config02,mongo2-config03
exec start-stop-daemon --make-pidfile --pidfile /var/run/mongodb-routerrt.pid --start --startas /data/bin/mongos --chuid mongo -- --logappend --logpath /mnt/log/mongos-rt.log --configdb mongort-config01,mongort-config02,mongort-config03 --port 27027
Note that one has '--pidfile /var/run/mongodb-router.pid' and the other has '--pidfile /var/run/mongodb-routerrt.pid' and a different port.
Yeah, I ran into this same issue today. The reason is that the default script uses start-stop-daemon to start mongo, which is specifically designed to ensure that only one instance of a process is running. You already figured out that one way to fix this is to not use start-stop-daemon and to start the binary yourself. That's the way I do it too, but I'd be curious to hear if there's a better way.
This is how I do it:
2 instances of mongodb, with start-stop-daemon, on the same server.
These are my start-stop-daemon configs:
exec start-stop-daemon --make-pidfile --pidfile /var/lib/mongodb/db1.pid --start --quiet --chuid mongodb --name mongod1 --exec /usr/bin/mongod -- --config etc/mongodb1.conf
exec start-stop-daemon --make-pidfile --pidfile /var/lib/mongodb/db2.pid --start --quiet --chuid mongodb --name mongod2 --exec /usr/bin/mongod -- --config etc/mongodb2.conf
Pay attention to the --name option; that did the trick for me.
The two daemons cannot listen on the same TCP port, so you have to change the --port parameter of mongod2 to listen on a different port.
The two daemons cannot share the same data directory, so you have to change the data directory (mongod's --dbpath) of mongod2.
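As a sketch, the second instance's config file then only needs to differ in its port and paths. The exact values below are assumptions (the key=value names are the old 2.x-era mongod config style used in this question):
# /etc/mongodb2.conf (hypothetical)
port = 27018
dbpath = /var/lib/mongodb2
logpath = /var/log/mongodb2/mongodb.log
logappend = true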
I find the upstart config below works for me:
# Ubuntu upstart file at /etc/init/mongodb.conf
description "manage mongodb instance"
start on runlevel [2345]
stop on runlevel [06]
limit nofile 20000 20000
kill timeout 300 # wait 300s between SIGTERM and SIGKILL.
env MONGODB_USER=mongodb
env MONGODB_DATA=/var/lib/mongodb/
env MONGODB_LOG=/var/log/mongodb/
env MONGODB_PID=/var/run/mongodb.pid
pre-start script
    if [ ! -d $MONGODB_DATA ]; then
        mkdir -p $MONGODB_DATA
        chown $MONGODB_USER:$MONGODB_USER $MONGODB_DATA
    fi
    if [ ! -d $MONGODB_LOG ]; then
        mkdir -p $MONGODB_LOG
        chown $MONGODB_USER:$MONGODB_USER $MONGODB_LOG
    fi
end script
exec start-stop-daemon --start --pidfile $MONGODB_PID --chuid $MONGODB_USER:$MONGODB_USER --exec /usr/bin/mongod -- --config /etc/mongodb/mongodb.conf
pre-stop exec start-stop-daemon --signal QUIT --stop --quiet --pidfile $MONGODB_PID --chuid $MONGODB_USER:$MONGODB_USER --exec /usr/bin/mongod -- --config /etc/mongodb/mongodb.conf
