shut down ("destroy") libvirt VM when it reboots - linux

For a while, I was using virt-install to install an OS on libvirt VMs. I learned that the OS has an autoinstaller feature that requires the use of a second CD-ROM (to feed information about the desired configuration to the installer), but I found that virt-install unfortunately ignores all but one --cdrom argument. The alternative that I came up with is to output the VM configuration virt-install would use with just one CD-ROM to a file using the --print-xml argument, edit that file to add the second CD-ROM, and then use virsh create <xml config file>.
When I was using virt-install before, the VM rebooted itself at the end of installation and virt-install would notice and shut down ("destroy") the VM instead of allowing it to reboot, leaving me with a nice clean installed disk image. However, now when the VM reboots after completing installation, it actually boots up again instead of shutting down cleanly, so I can't programmatically tell when the installation has completed. After the reboot it looks like the same qemu-system-x86_64 process is being used, so I also can't use it to tell when the installation has completed.
How can I force libvirt to shut down ("destroy") the VM instead of rebooting the way virt-install did? Alternatively, is there some other indicator I can use to tell that a VM reboot has occurred?

Although there doesn't seem to be a way to automatically destroy a libvirt VM on reboot through a special incantation of virsh create or by changing options in the domain XML file, I stumbled across the very useful virsh event command:
$ virsh help event
NAME
event - (null)
SYNOPSIS
event [<domain>] [<event>] [--all] [--loop] [--timeout <number>] [--list]
DESCRIPTION
List event types, or wait for domain events to occur
OPTIONS
[--domain] <string> filter by domain name, id, or uuid
[--event] <string> which event type to wait for
--all wait for all events instead of just one type
--loop loop until timeout or interrupt, rather than one-shot
--timeout <number> timeout seconds
--list list valid event types
The command blocks until an event of the specified type occurs for the specified domain. This allowed me to achieve my goal of emulating the behavior in virt-install by doing:
$ virsh event domain1 --event reboot
event 'reboot' for domain domain1
events received: 1
$ virsh destroy domain1
And it even gives me a built-in timeout mechanism!
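The whole wait-and-destroy step can be scripted. A minimal sketch (the domain name domain1 and the one-hour timeout are placeholders to adapt):

```shell
# Block until the guest emits a reboot event (or an hour passes),
# then destroy it, leaving the freshly installed disk image behind.
virsh event domain1 --event reboot --timeout 3600 \
  && virsh destroy domain1
```

If the timeout expires before the reboot event arrives, virsh event exits non-zero and the destroy is skipped, so a failed install doesn't get torn down silently.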

Related

How do I use systemd to replace cron jobs meant to run every five minutes?

We have an embedded target environment (separate from our host build environment) in which systemd is running but not cron.
We also have a script which, under most systems, I would simply create a cron entry to run it every five minutes.
Now I know how to create a service under systemd but this script is a one-shot that exits after it's done its work. What I'd like to do is have it run immediately on boot (after the syslog.target, of course) then every five minutes thereafter.
After reading up on systemd timers, I created the following service file at /lib/systemd/system/xyzzy.service:
[Unit]
Description=XYZZY
After=syslog.target
[Service]
Type=simple
ExecStart=/usr/bin/xyzzy.dash
and equivalent /lib/systemd/system/xyzzy.timer:
[Unit]
Description=XYZZY scheduler
[Timer]
OnBootSec=0min
OnUnitActiveSec=5min
[Install]
WantedBy=multi-user.target
Unfortunately, when booting the target, the timer does not appear to start, since the output of systemctl list-timers --all does not include it. Starting the timer unit manually works, but this is something that should run automatically without user intervention.
I would have thought the WantedBy would ensure the timer unit was installed and running and would therefore start the service periodically. However, I've noticed that the multi-user.target.wants directory does not actually have a symbolic link for the timer.
How is this done in systemd?
The timer is not active until you actually enable it:
systemctl enable xyzzy.timer
If you want to see how it works before rebooting, you can also start it:
systemctl start xyzzy.timer
For a separate target environment where you can't easily run arbitrary commands at boot time (but presumably do control the file system content), you can simply create, in your development area, the same symbolic links that the enable command would create.
For example (assuming SYSROOT identifies the root directory of the target file system):
ln -s /lib/systemd/system/xyzzy.timer \
    ${SYSROOT}/lib/systemd/system/multi-user.target.wants/xyzzy.timer
Note that the link target is the path as seen from the target's own root (not prefixed with ${SYSROOT}), so it resolves correctly once the file system is running on the device.
This will effectively put the timer unit into an enabled state for the multi-user.target, so systemd will start it with that target.
Also, normally your custom files would be stored in /etc/systemd/system/. The equivalent lib directory is intended to host systemd files installed by packages or the OS.
If it's important that your job runs precisely every five minutes, check the timer's accuracy: systemd's monotonic timers can slip over time.
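One knob worth knowing here: systemd coalesces timer wakeups within AccuracySec=, which defaults to one minute for timers. If that slack matters, it can be lowered in the timer unit; a sketch of the same timer with the setting added:

```ini
[Timer]
OnBootSec=0min
OnUnitActiveSec=5min
# By default systemd may delay firing by up to 1 minute to batch wakeups;
# tighten this if the 5-minute cadence needs to be precise.
AccuracySec=1s
```

The trade-off is more wakeups and slightly higher power use, which may matter on an embedded target.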

Start and Stop script of ubuntu 12.04

I have a script (twoRules.sh) which adds rules to the OVS plugin bridge.
The rules get deleted when someone runs service neutron-plugin-openvswitch-agent restart or reboots the system. Where should I put my script so that after a restart of neutron-plugin-openvswitch-agent the twoRules.sh script is executed and the rules remain in place?
I tried putting it in the /etc/init.d/neutron-plugin-openvswitch-agent file, as other people suggested, but that file is only used by /etc/init.d/neutron-plugin-openvswitch-agent restart, not by service neutron-plugin-openvswitch-agent restart.
You have to convert the script to a SysV-style init script. There are many documents explaining how:
http://www.debian-administration.org/article/28/Making_scripts_run_at_boot_time_with_Debian
http://www.cyberciti.biz/tips/how-to-controlling-access-to-linux-services.html
https://wiki.debian.org/Daemon
This way you can configure the script to be executed after certain services start or stop, or when the runlevel changes.

stopping a linux aws instance from the linux command line

Is there a way to stop an AWS EC2 instance from the VM itself?
If I start an EC2 Linux-based instance, is there a way for me to stop that instance by giving some Linux command like "shutdown now"?
It works with
shutdown -h now
or
shutdown -h +10
If you don't use the "-h" (halt) parameter, the instance will remain in the running state.
Yes, with a couple of caveats.
If you are using an instance store backed instance, your only option will be to terminate. Without EBS volumes, the instance cannot exist in a stopped state.
There is also a flag on the instance that controls how an instance-initiated shutdown is handled. It can be set to stop or terminate; if you want to stop your instance, make sure this flag is configured accordingly.
Other than that, you use the normal Linux shutdown commands, e.g. shutdown now.
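That flag can be inspected and changed with the AWS CLI; a sketch, where the instance ID i-0123456789abcdef0 is a placeholder:

```shell
# Check the current instance-initiated shutdown behavior (stop or terminate).
aws ec2 describe-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --attribute instanceInitiatedShutdownBehavior

# Make "shutdown -h now" inside the guest stop (rather than terminate) the instance.
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --instance-initiated-shutdown-behavior stop
```

This assumes the CLI is configured with credentials that may modify the instance.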
Try these commands.
1. sudo poweroff
2. sudo shutdown -h now

Effect of redirects on reboot command

I'm running Linux on a MIPS-based system (specifically OpenWrt on a router).
When I run reboot (as supplied by BusyBox), i.e. just reboot on its own, the system reboots, but some of the services (webserver, dhcp/dns, dsl stuff) don't start up.
However, when I reboot via the web interface, all the services start normally. I looked at the code and saw that the web interface runs reboot > /dev/null 2>&1. Running this command also reboots and starts up the services properly.
My question is how does redirecting stdout and stderr to /dev/null affect the startup of services upon the next boot?
Also, I'm wondering, would reboot contain architecture specific code?
No, redirecting stdout/stderr cannot affect the boot process (and where would that state be saved, anyway?). Something else must be causing this.
Does "shutdown -r now" work?

Detect pending linux shutdown

Since I install pending updates for my Ubuntu server as soon as possible, I have to restart my Linux server quite often. I'm running a webapp on that server and would like to warn my users about the pending restart. Right now I do this manually: I add an announcement before the restart, give them some time to finish their work, restart, and remove the announcement.
I hope shutdown -r +60 writes a file with all the information about the restart, which I could check on every access. Is there such a file? I'd prefer a file in a virtual file system like /proc for performance reasons...
I'm running Ubuntu 10.04.2 LTS
If you are using systemd, the following command shows the scheduled shutdown info.
cat /run/systemd/shutdown/scheduled
Example of output:
USEC=1636410600000000
WARN_WALL=1
MODE=reboot
As remarked in a comment by @Björn, USEC is the timestamp in microseconds.
You can convert it to a human-friendly format by dropping the last six digits and using date, like this:
$ date -d @1636410600
Mon Nov 8 23:30:00 CET 2021
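The conversion is easy to script. A sketch, assuming GNU date and using the USEC value from the example above (-u prints UTC so the output doesn't depend on the local timezone):

```shell
# Convert a systemd USEC timestamp (microseconds since the epoch)
# into a human-readable UTC date by dropping the microsecond part.
usec=1636410600000000   # value read from /run/systemd/shutdown/scheduled
date -ud "@$((usec / 1000000))"
```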
The easiest solution I can envisage means writing a script to wrap the shutdown command, and in that script create a file that your web application can check for.
As far as I know, shutdown doesn't write a file to the underlying file system, although it does trigger broadcast messages warning of the shutdown, which I suppose you could write a program to intercept... but the above solution seems the easiest.
Script example:
shutdown.bsh
#!/bin/sh
touch /somefolder/somefile
shutdown -r "$1"
then check for 'somefile' in your web app.
You'd need to add a startup script that erases 'somefile'; otherwise it would still be there when the system comes up, and the web app would always be telling your users it was about to shut down.
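On a release of that vintage, the simplest place for that cleanup is /etc/rc.local. A sketch, where /somefolder/somefile is the same placeholder flag path used above:

```shell
# Add to /etc/rc.local (before its final "exit 0"): clear any stale
# shutdown flag left over from the previous boot. -f makes this a
# no-op when the file doesn't exist.
rm -f /somefolder/somefile
```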
You can simply check for running shutdown process:
if ps -C shutdown > /dev/null; then
echo "Shutdown is pending"
else
echo "Shutdown is not scheduled"
fi
For newer Linux distributions you might need to do:
busctl get-property org.freedesktop.login1 /org/freedesktop/login1 org.freedesktop.login1.Manager ScheduledShutdown
The way shutdown works has changed.
Tried on:
- Debian Stretch 9.6
- Ubuntu 18.04.1 LTS
References
Check if shutdown schedule is active and when it is
The shutdown program on a modern systemd-based Linux system
You could write a daemon that does the announcement when it catches the SIGINT / SIGQUIT signal.
