"[: : integer expression expected" in systemctl status of my service that calls a bash script on Ubuntu 18.04

I do not have much experience with bash scripts, but I got the idea from the internet.
My bash script uses xprintidle to shut down the computer after it has been idle for some time.
I can run the script in a terminal without any problem.
But when /etc/systemd/system/poweroff.service calls the script, systemctl status shows this error:
Jul 30 16:43:40 godo systemd[1]: Started autopoweroff.
Jul 30 16:43:42 godo bash[3107]: couldn't open display
Jul 30 16:43:42 godo bash[3107]: /usr/local/bin/poweroff.sh: line 5: [: : integer expression expected
Jul 30 16:43:42 godo bash[3107]: end
Here is the script:
#!/bin/bash
sleep 2
myidle=$(xprintidle)
myidletime=10000
while [ "$myidle" -le "$myidletime" ]; do
    echo "$myidle"
    sleep 1
    myidle=$(xprintidle)
done
#sudo shutdown -P now
#shutdown -P 5
echo "end"
And here is the service:
[Unit]
Description=autopoweroff
[Service]
ExecStart=/bin/bash /usr/local/bin/poweroff.sh
[Install]
WantedBy=multi-user.target
I hope you can help me and that I am not wasting your time with these beginner questions.
Thanks

When xprintidle cannot open a display it prints "couldn't open display" and produces no number, so the script then tries to compare an empty value as an integer with -le.
Since xprintidle returns exit code 1 when it cannot open a display, you can put
set -e
at the start of your script to make it exit as soon as an error occurs.

xprintidle - utility printing user's idle time in X
When your script runs in the systemd context, there is no X server, so xprintidle fails and writes couldn't open display to stderr.
Your statement myidle=$(xprintidle) therefore leaves myidle empty.
At this point you have to decide what you want to do when the X environment is not available.
One possibility is to give myidle a default value of 0:
typeset -i myidle # Tells Bash it is an int and default to 0 if not assigned a value
myidle=$(xprintidle 2>/dev/null) || true # no error state generated
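To see how that default plays out without an X server, you can substitute a plain failing command for xprintidle; this is only a demonstration:

```shell
#!/bin/bash
typeset -i myidle           # integer variable; an empty assignment evaluates to 0
myidle=$(exit 1) || true    # stand-in for xprintidle failing: no output, exit code 1
echo "$myidle"              # prints 0, so a later -le comparison stays valid
```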
I think you need another way to get the idle value of the currently running X session.
Here it is:
#!/bin/dash
sleep 2
# get the X DISPLAY of the first logged-in user with an X session
DISPLAY="$(
    w --short --no-header \
        | awk '{ if (match($3, ":")) { print $3; exit } }'
)"
export DISPLAY
myidletime=10000
while myidle=$(xprintidle 2>/dev/null) && [ "$myidle" -le "$myidletime" ]; do
    echo "$myidle"
    sleep 1
done
#sudo shutdown -P now
#shutdown -P 5
echo "end"
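You can check the w/awk display detection against canned w --short --no-header output; the two sessions below are invented for the test:

```shell
#!/bin/sh
# two fake sessions: a local X login (FROM field ":0") and an SSH login
printf '%s\n%s\n' \
    'damiano  tty7   :0            4:12m xfce4-session' \
    'root     pts/0  192.168.1.5   0.00s w' \
| awk '{ if (match($3, ":")) { print $3; exit } }'
# prints :0
```

The awk filter prints the first third field containing a colon, which for a local X login is the display name.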

Related

Graceful shutdown on Archlinux

Bringing this question straight from here.
I am running archlinux and I have a VM running often on it along with the system. Most of the time actually.
My goal is to produce the following behaviour:
A shutdown / poweroff / reboot / halt signal is sent to the system
No action other than trying to shut down the virtual machines gracefully
If the VMs are shut down gracefully within X seconds, proceed with shutting down the host system too.
If not, execute a different command
Just give me a good idea of what to work on, because I don't even know where to begin. I guess there is a call to the kernel that can be looked at.
Let me know.
My current code
At the moment I am using these scripts to gracefully shut down my KVM virtual machines, and it works! But only as long as my user launches a shutdown or a reboot from his shell; any other case wouldn't work.
These aliases:
alias sudocheck="/bin/bash /home/damiano/.script/sudocheck"
alias sudo="sudocheck "
Are triggering this function:
#!/bin/bash
# This script checks what is being passed to sudo.
# If the command passed is poweroff or reboot, it
# launches a custom script instead, which also looks
# for currently running virtual machines and shuts them down.
sudocheck() {
    if [ "$1" = "poweroff" ] || [ "$1" = "reboot" ]; then
        eval "sudo /home/damiano/.script/graceful $@"
    else
        eval "sudo $@"
    fi
}
sudocheck "$@"
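As an aside, $# expands to the number of arguments while "$@" expands to the arguments themselves, so for forwarding a command you want "$@"; a quick check:

```shell
#!/bin/bash
# demo is a throwaway function showing the two expansions side by side
demo() { printf 'count=%s args=%s\n' "$#" "$*"; }
demo poweroff now
# prints: count=2 args=poweroff now
```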
That launches this script if needed:
#!/bin/bash
i=0
e=0
## if virsh finds VMs running
virsh -c qemu:///system list | awk '{ print $3 }' | \
if grep running > /dev/null ; then
    virsh -c qemu:///system list --all | grep running | awk '{ print "-c qemu:///system shutdown " $2 }' | \
    ## shuts them down gracefully
    xargs -L1 virsh
    ## wait 30 seconds for them to go down
    until (( i >= 30 || e == 1 )) ; do
        ## check every second for their status
        virsh -c qemu:///system list --all | awk '{ print $3 }' | \
        if grep -E '(running|shutdown)' > /dev/null ; then
            ## keep waiting if still running
            if (( i <= 30 )) ; then
                sleep 1 && let i++ && echo $i
            else
                e=1 && notify-send 'Shutdown has been canceled' 'Please check the status of your virtual machines: seems like even though a stop signal has been sent, some are still running.' --urgency=critical
            fi
        else
            ## if no machine is running anymore, the original power command can be executed
            e=1 && eval "$@"
        fi
    done
fi
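The 30-second waiting logic above could also be factored into a reusable helper; this is only a sketch, and wait_until is a made-up name, not an existing command:

```shell
#!/bin/bash
# wait_until CMD TIMEOUT: retry CMD once per second until it succeeds
# or TIMEOUT seconds have passed; returns 0 on success, 1 on timeout
wait_until() {
    local cmd=$1 timeout=$2 i=0
    while (( i < timeout )); do
        if eval "$cmd"; then
            return 0
        fi
        sleep 1
        i=$((i + 1))
    done
    return 1
}

# e.g. wait_until 'all_vms_are_down' 30   # all_vms_are_down is a placeholder check
```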
Systemd Unit
I also made the following draft, to manage the execution of my VM:
bootvm@.service
[Unit]
Description=This service manages the execution of the %i virtual machine
Documentation=https://libvirt.org/manpages/virsh.html
[Service]
ExecStartPre=virsh -c qemu:///system
ExecStart=virsh start %i
ExecStop=virsh -c qemu:///system
ExecStop=virsh shutdown %i
TimeoutStopSec=30
KillMode=none
[Install]
WantedBy=multi-user.target
But how can I tell the system not to shut down the desktop environment, to stay as it is UNTIL the VM has been successfully shut down?
Because if the system can't shut down the VM, I want to do it while still in my DE. I don't want the computer to begin stopping all the services and then hang until it just forces the shutdown.

switch desktops after 5 minutes idle (xprintidle): crontab or daemon?

On my Raspberry Pi (running Raspbian) I would like the current desktop switched to desktop #0 after 5 minutes of system idle (no mouse or keyboard action), through wmctrl -s 0 and xprintidle for idle-time checking.
Please keep in mind I'm no expert...
I tried 2 different ways, neither of them working, and I was wondering which of them is the best way to get the job done:
bash script and crontab
I wrote a simple script which checks whether xprintidle is greater than a previously set $IDLE_TIME, then switches desktops (saved in /usr/local/bin/switchDesktop0OnIdle):
#!/bin/bash
# 5 minutes in ms
IDLE_TIME=$((5*60*1000))
# Sequence to execute when timeout triggers.
trigger_cmd() {
    wmctrl -s 0
}
sleep_time=$IDLE_TIME
triggered=false
while sleep $(((sleep_time+999)/1000)); do
    idle=$(xprintidle)
    if [ "$idle" -ge "$IDLE_TIME" ]; then
        if ! $triggered; then
            trigger_cmd
            triggered=true
            sleep_time=$IDLE_TIME
        fi
    else
        triggered=false
        # Give 100 ms buffer to avoid frantic loops shortly before triggers.
        sleep_time=$((IDLE_TIME-idle+100))
    fi
done
The script itself works.
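The sleep $(((sleep_time+999)/1000)) in the loop converts milliseconds to whole seconds, rounding up so the script never sleeps too short; the arithmetic in isolation (the helper name is made up):

```shell
#!/bin/bash
ms_to_s() { echo $(( ($1 + 999) / 1000 )); }   # ceiling division: ms -> s
ms_to_s 300000   # 300 (5 minutes)
ms_to_s 1500     # 2   (rounded up)
ms_to_s 1        # 1
```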
Then I added it to crontab (crontab -e) to have it run every 6 minutes:
*/6 * * * * sudo /usr/local/bin/switchDesktop0OnIdle
I am not sure whether sudo is necessary or not.
Anyway, it doesn't work: googling around I understood that crontab runs in its own environment with its own variables. Even though I don't remember how to access this environment (oops), I do remember that I get these 2 errors running the script in it (which works correctly in a "normal" shell):
could not open display (is it important?)
bla bla -ge error, unary operator expected or similar: basically xprintidle doesn't work in this environment and gives back an empty value
What am I missing?
infinite-while bash script running as daemon
As a second method, I tried to set up a script with an infinite while loop checking whether xprintidle is greater than 5 minutes; in that case the desktop is switched (less elegant?). Also saved in /usr/local/bin/switchDesktop0OnIdle:
#!/bin/bash
triggered=false
while :
do
    if [ "$(xprintidle)" -ge 300000 ]; then
        if [ "$triggered" = false ]; then
            wmctrl -s 0
            triggered=true
        fi
    else
        triggered=false
    fi
    sleep 1   # avoid a busy loop
done
Again, the script itself works.
I tried to create a daemon in /etc/init.d/switchDesktop0OnIdle (really not an expert here; I modified an existing one):
#! /bin/sh
# /etc/init.d/switchDesktop0OnIdle
### BEGIN INIT INFO
# Provides: switchDesktop0OnIdle
# Required-Start: $all
# Required-Stop: $all
# Should-Start:
# Should-Stop:
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description:
# Description:
### END INIT INFO
DAEMON=/usr/local/bin/switchDesktop0OnIdle
NAME=switchDesktop0OnIdle
test -x $DAEMON || exit 0
case "$1" in
    start)
        echo -n "Starting daemon: "
        start-stop-daemon --start --exec $DAEMON
        echo "switchDesktop0OnIdle."
        ;;
    stop)
        echo -n "Shutting down daemon:"
        start-stop-daemon --stop --oknodo --retry 30 --exec $DAEMON
        echo "switchDesktop0OnIdle."
        ;;
    restart)
        echo -n "Restarting daemon: "
        start-stop-daemon --stop --oknodo --retry 30 --exec $DAEMON
        start-stop-daemon --start --exec $DAEMON
        echo "switchDesktop0OnIdle."
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac
exit 0
I set it up
sudo update-rc.d switchDesktop0OnIdle defaults
and
sudo service switchDesktop0OnIdle start
(necessary?)
...and nothing happens...
Also, I can't find the process with ps -ef | grep switchDesktop0OnIdle, yet it seems to be running according to sudo service switchDesktop0OnIdle status.
can anyone please help?
thank you
Giuseppe
As you suspected, the issue is that when you run your scripts from init or from cron, they are not running within the GUI environment you want them to control. In principle, a Linux system can have multiple X environments running. When you are using one, there are environment variables that direct the executables you are using to the environment you are in.
There are two parts to the solution: your scripts have to know which environment they are acting on, and they have to have authorization to interact with that environment.
You almost certainly are using a DISPLAY value of ":0", so export DISPLAY=:0 at the beginning of your script will handle the first part of the problem. (It might be ":0.0", which is effectively equivalent).
Authorization is a bit more complex. X can be set up to do authorization in different ways, but the most common is to have a file .Xauthority in your home directory which contains a token that is checked by the X server. If you install a script in your own crontab, it will run under your own user id (you probably shouldn't use sudo), so it will read the right .Xauthority file. If you run from the root crontab, or from an init script, it will run as the root user, so it will have access to everything but will still need to know where to take the token from. I think that adding export XAUTHORITY=/home/joe/.Xauthority to the script will work. (Assuming your user id is joe.)

Cron script to restart memcached not working

I have a script in cron to check memcached and restart it if it's not working. For some reason it's not functioning.
Script, with permissions:
-rwxr-xr-x 1 root root 151 Aug 28 22:43 check_memcached.sh
Crontab entry:
*/5 * * * * /home/mysite/www/check_memcached.sh 1> /dev/null 2> /dev/null
Script contents:
#!/bin/sh
ps -eaf | grep 11211 | grep memcached
if [ $? -ne 0 ]; then
service memcached restart
else
echo "eq 0 - memcache running - do nothing"
fi
It works fine if I run it from the command line but last night memcached crashed and it was not restarted from cron. I can see cron is running it every 5 minutes.
What am I doing wrong?
Do I need to use the following instead of service memcached restart?
/etc/init.d/memcached restart
I have another script that checks to make sure my lighttpd instance is running and it works fine. It works a little differently to verify it's running but is using the init.d call to restart things.
Edit - Resolution: Using /etc/init.d/memcached restart solved this problem.
What usually causes crontab problems is command paths. In the command line, the paths to commands are already there, but in cron they're often not. If this is your issue, you can solve it by adding the following line into the top of your crontab:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
This will give cron explicit paths to look through to find the commands your script runs.
Also, check the shebang in your script: if it uses any bash-specific features, it needs to be:
#!/bin/bash
I suspect the problem is with the grep 11211: the meaning of the number is not clear, and that grep may not be matching the desired process.
I think you need to log the actions of this script; then you can see what's actually happening.
#!/bin/bash
exec >> /tmp/cronjob.log 2>&1
set -xv
cat2 () { tee -a /dev/stderr; }
ps -ef | cat2 | grep 11211 | grep memcached
if [ $? -ne 0 ]; then
    service memcached restart
else
    echo "eq 0 - memcache running - do nothing"
fi
exit 0
The set -xv output is captured to a log file in /tmp. The cat2 will copy the stdin to the log file, so you can see what grep is acting upon.
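The exec >> /tmp/cronjob.log 2>&1 line is what captures everything; a minimal demonstration of that redirection (using mktemp instead of a fixed path):

```shell
#!/bin/bash
log=$(mktemp)
# from here on, all stdout and stderr of this shell goes into the log file
exec >> "$log" 2>&1
echo "to stdout"
echo "to stderr" 1>&2
# both lines end up in $log
```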
Save the code below as check_memcached.sh:
#!/bin/bash
MEMCACHED_STATUS=$(systemctl is-active memcached.service)
if [[ ${MEMCACHED_STATUS} == 'active' ]]; then
    echo "Service running... so exiting"
    exit 1
else
    service memcached restart
fi
And you can schedule it as cron.

echo message not coming on terminal with systemd

I have systemd service, say xyzWarmup.service.
Here is the service file
[Unit]
Description=Xyz agent.
After=fooAfter.service
Before=fooBefore1.service
Before=fooBefore2.service
[Service]
# During boot the xyz.sh script reads input from /dev/console. If the user
# hits <ESC>, it will skip waiting for xyz and abc to startup.
Type=oneshot
StandardInput=tty
StandardOutput=tty
ExecStart=/usr/bin/xyz.sh start
RemainAfterExit=True
ExecStop=/usr/bin/xyz.sh stop
[Install]
WantedBy=multi-user.target
Following is the part of xyz.sh.
#! /bin/bash
#
### BEGIN INIT INFO
# Required-Stop: Post
### END INIT INFO
XYZ=/usr/bin/Xyz
prog="Xyz"
lockfile=/var/lock/subsys/$prog
msg="Completing initialization"
start() {
    # Run wfw in background
    ulimit -c 0
    # wfw has a default timeout of 10 minutes - just pick a large value
    wfw -t 3600 xyz abc >/dev/null 2>&1 &
    PID=$!
    # Display the message here after spawning wfw so Esc can work
    echo -n $"$msg (press ESC to skip): "
    while [ 1 ]; do
        read -s -r -d "" -N 1 -t 0.2 CHAR || true
        if [ "$CHAR" = $'\x1B' ]; then
            kill -9 $PID 2>/dev/null
            # fall through to wait for process to exit
        fi
        STATE="`ps -p $PID -o state=`"
        if [ "$STATE" = "" ]; then
            # has exited
            wait $PID 2>/dev/null
            if [ $? -eq 0 ]; then
                echo "[ OK ]"
                echo
                exit 0
            else
                echo "[ FAILED ]"
                echo "This is failure"
                exit 1
            fi
        fi
    done
}
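The ESC polling in the loop depends on read -t returning non-zero when no byte arrives within the timeout; a minimal check, redirecting from /dev/null to simulate absent input:

```shell
#!/bin/bash
# with stdin at EOF (or no input within the timeout), read fails
if read -r -N 1 -t 0.2 CHAR < /dev/null; then
    echo "got a byte"
else
    echo "no input"   # this branch runs: read returned non-zero
fi
```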
When this script runs during boot I see the following message coming from the script
Completing initialization (press ESC to skip):
Updated:
This is the additional output which I see after the previous line
[[ OK ] Started Xyz agent.\n'
If you look carefully, there are 2 opening square brackets ('['); from this it looks like systemd is overwriting the log messages. The first "[" comes from the init script's "[ OK ]". Can somebody explain this better?
I don't see "[ OK ]" or "[ FAILED ]" on my screen.
When I was using this script as an init script in Fedora 14, I used to see these messages. Since I shifted to systemd, I have started seeing this issue.
systemd version is : systemd-201-2.fc18.9.i686 and systemd.default_standard_output=tty
Kindly help.
It looks to me that your issue here is that the script is never getting attached to the TTY. The output is showing up because you have that hard-coded to go to /dev/console in your script. With StandardInput=tty, systemd waits until the tty is available, and it's probably already in use. Your script is just sitting there not connected to input in the infinite loop. You could try StandardInput=tty-force, and I bet that will work, although I'm not sure what else that might break.
Personally, I think I might go back and rethink the approach entirely. It sounds like you want the boot to entirely block on this service, but let you skip by hitting escape. Maybe there's a better way?

is .bashrc getting run twice when entering a new bash instance?

I want to display the number of nested sub-shells in my bash prompt.
I often type ":sh" during a vim editing session in order to do something, then exit back to the editor. Sometimes I attempt to exit back to the editor out of habit, forgetting that I am not in any editing session and my terminal closes!
To avoid this, I added a bit of code to my .bashrc that would keep a count of the number of nested sub-shells and display it in the prompt.
Here is the code:
echo "1: SHLVL=$SHLVL"
if [[ -z $SHPID ]] ; then
echo "2: SHLVL=$SHLVL"
SHPID=$$
let "SHLVL = ${SHLVL:0} + 1"
fi
echo "3: SHLVL=$SHLVL"
(For those who may wonder, the test "-z $SHPID" ensures that $SHLVL won't get incremented again if I run ". .bashrc" again in the same shell, perhaps to test something.)
But the output looks like this:
lsiden@morpheus ~ (morpheus) (2) $ bash
1: SHLVL=3
2: SHLVL=3
3: SHLVL=4
lsiden@morpheus ~ (morpheus) (4) $ ps
PID TTY TIME CMD
10421 pts/2 00:00:00 bash
11363 pts/2 00:00:00 bash
11388 pts/2 00:00:00 ps
As you can see, there are now two instances of bash on the stack, but the variable $SHLVL has been incremented twice. The output shows that before this snippet of code even executes in my .bashrc, SHLVL has already been incremented by 1!
Is it possible for .bashrc to get run twice somehow without seeing the output of the echo commands?
SHLVL is incremented automatically whenever you fire up a shell:
~$ echo $SHLVL
1
~$ bash -c 'echo $SHLVL'
2
and then you're incrementing it again in the .bashrc.
