Automatically suspend VMs on shutdown of the Proxmox host

I'm looking for a way to suspend my VMs when the Proxmox host restarts. With Hyper-V it's possible to define an action for each VM, such as suspend or restart, to be taken when the host reboots. Proxmox by default shuts the VMs down together with the host. I couldn't find any config option for this, only one to let Proxmox automatically start a VM after boot.
I found this article: http://8086.support/content/13/75/en/how-do-i-configure-kvm-to-suspend_restore-virtual-machines-when-the-host-is-rebooted.html It seems to be exactly what I need, but the file /etc/sysconfig/libvirt-guests doesn't exist. That file is part of the libvirt-client package, which is not installed and is therefore not part of Proxmox. So I'm not sure it's a good idea to use Proxmox together with another management solution, which libvirt appears to be. According to this entry, it isn't even possible.
Isn't there a native way in Proxmox to suspend a VM on host shutdown?

Have you tried posting on the Proxmox forums? They're the experts on their own product, so I'd recommend it.
Even if there's no easy built-in way to configure that, it's still possible. Proxmox is Debian under the hood, so you can write a script that does whatever you want on shutdown/reboot.
The built-in pvesh tool lets you interact with your PVE server from the command line and do tons of different things (including suspend and start). It talks to the PVE RESTful API. Info on pvesh is here and the full API docs are here.
Once you've written a script that suspends or restarts your VMs, you can have systemd launch it at the appropriate time, e.g. with a unit like the one sketched below.
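A minimal sketch of both pieces (untested; the script path, unit name, and jq dependency are my own choices, and it assumes your PVE node name matches the hostname):

    #!/bin/bash
    # /usr/local/bin/suspend-all-vms.sh (hypothetical path)
    # Suspend every running QEMU VM on this node via the PVE API.
    NODE=$(hostname)    # assumes the PVE node name matches the hostname
    for VMID in $(pvesh get /nodes/"$NODE"/qemu --output-format json \
                  | jq -r '.[] | select(.status == "running") | .vmid'); do
        pvesh create /nodes/"$NODE"/qemu/"$VMID"/status/suspend
    done

You can then hook it in with a oneshot unit whose ExecStop runs at shutdown; systemd stops units in the reverse of their start order, so ordering it After=pvedaemon.service keeps the API available while the script runs:

    cat > /etc/systemd/system/suspend-vms.service <<'EOF'
    [Unit]
    Description=Suspend all VMs at shutdown
    After=pvedaemon.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/true
    ExecStop=/usr/local/bin/suspend-all-vms.sh

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload && systemctl enable --now suspend-vms.service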

Related

Is it valid to share unix socket files across (docker) containers in general?

I'm having the (seemingly common) issue that I'm dockerizing applications that used to run on one machine, and these applications now need to run in different containers (because that's the Docker paradigm and how things should be done). Currently I'm having issues with postfix and dovecot: people have found this painful enough that there are tons of images running both dovecot and postfix in one container, and I'm doing my best to do this right, but the lack of examples using the inet protocol (over TCP) is just too painful to continue with. Never mind the bad logging and the things that simply don't work. I digress.
The question
Is it correct to have shared docker volumes that have socket files shared across different containers, and expect them to communicate correctly? Are there limitations that I have to be aware of?
Bonus: Out of curiosity, can this be extended to virtual machines?
EDIT: I would really appreciate it if you shared the source of the information you provide.
A Unix socket can't cross VM or physical-host boundaries. If you're thinking about ever deploying this setup in a multi-host setup like Kubernetes, Docker Swarm, or even just having containers running on multiple hosts, you'll need to use some TCP-based setup instead. (Sharing files in these environments is tricky; sharing a Unix socket actually won't work.)
If you're using Docker Desktop, also remember that it runs a hidden Linux virtual machine, even on native Linux. That may limit your options. There are other setups that more directly use a VM; my day-to-day Docker turns out to be Minikube, for example, which runs a single-node Kubernetes cluster with a Docker daemon in a VM.
I'd expect sharing a Unix socket to work only if the two containers are on the same physical system, and inside the same VM if appropriate, and with the same storage mounted into both (not necessarily in the same place). I'd expect putting the socket on a named Docker volume mounted into both containers to work. I'd probably expect a bind-mounted host directory to work only on a native Linux system not running Docker Desktop.
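As a minimal sketch of the named-volume variant (the image names and socket path are placeholders):

    # Mount the same named volume into both containers; a socket the
    # server creates inside it is then visible to the client.
    docker volume create app-sockets
    docker run -d --name server -v app-sockets:/var/run/app my-server-image
    docker run -d --name client -v app-sockets:/var/run/app my-client-image
    # Both processes address the same path, e.g. /var/run/app/app.sock.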

CoreOS alternative to /usr/lib/systemd/system-shutdown/

I recently stumbled across the fact that on shutdown/reboot any script in /usr/lib/systemd/system-shutdown will get executed before the shutdown starts.
Paraphrasing - https://www.freedesktop.org/software/systemd/man/systemd-halt.service.html
With the /usr filesystem being read-only on CoreOS, I cannot put any of my shutdown scripts in /usr/lib/systemd/system-shutdown. I'm hoping someone more knowledgeable about CoreOS and systemd knows an alternative directory path on CoreOS nodes that would give me the same result, or a configuration I can adjust to point the directory at /etc/systemd/system-shutdown or somewhere else.
Optionally, any pointers on creating a custom service that does the same thing as systemd-shutdown would be appreciated.
My use case is that I have a few scripts I want to execute when a node shuts down: for example, removing the node from the monitoring system, unscheduling the node in Kubernetes, and draining any running pods while allowing in-flight transactions to finish.
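On the custom-service route, a minimal sketch (everything here is illustrative: the unit name, the /opt/bin path, and the drain script are placeholders) could look like this; /etc/systemd/system is writable on CoreOS, so the unit can live there:

    cat > /etc/systemd/system/node-drain.service <<'EOF'
    [Unit]
    Description=Drain this node before shutdown
    After=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/bin/true
    # Stop jobs run in reverse start order, so this fires at shutdown
    # while the network is still up:
    ExecStop=/opt/bin/node-shutdown.sh

    [Install]
    WantedBy=multi-user.target
    EOF
    systemctl daemon-reload && systemctl enable node-drain.service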

Do Linux containers go to sleep?

I keep randomly receiving "connection refused" errors while trying to SSH into Linux containers.
To check whether a rogue computer was impersonating the IP of the container, I ran arping on the interface of the machine that was trying to reach the container and inside the container, but there are no duplicate IPs.
I double-checked the SSH configuration of the container and of the "hypervisor" host to make sure it wasn't a timeout or something similar.
The current workaround I found is a cron job that keeps SSHing into the container; with that in place I stopped receiving the "connection refused" errors.
So I started to wonder: does anybody know whether containers go to sleep when they're inactive?
Thank you very much in advance for your kind answers!
Best regards!
No, not at all.
One thing you must realise is that processes running inside a container are just like any other processes running on the host. The kernel does not see any difference between them, apart from the fact that they run in different namespaces and under cgroups that limit their resource usage.
If you are using Docker, you can easily find the PID of any container's main process by running the following command:
docker inspect --format '{{.State.Pid}}' container_name
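You can verify this yourself: the container's main process appears in the host's process table like any other (container_name is a placeholder):

    PID=$(docker inspect --format '{{.State.Pid}}' container_name)
    ps -o pid,stat,etime,cmd -p "$PID"
    # STAT is the ordinary scheduler state (e.g. S, interruptible sleep);
    # there is no "suspended" container state unless you explicitly run
    # "docker pause", which uses the freezer cgroup.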

How can I tell Puppet to stop a service on shutdown without keeping it running?

Context:
On a linux (RedHat family) system, I have an init-script-based service that is started/stopped manually most of the time, and in general is only run in response to specific, uncommon situations. The init scripts are thin wrappers around code that I do not have control over.
If the service is killed without running the stop action on its init script, it is aborted uncleanly and leaves the system in a broken state that requires manual intervention to fix.
When the systems running the service shut down, they kill it uncleanly. I can register the service with chkconfig such that it gets shut down via the init script when the host shuts down; this solves the problem on a per-host basis.
Question:
I would like to automate the configuration of this service to stop-at-shutdown via Puppet.
How can I tell Puppet to register a service with chkconfig such that the service will be stopped via the init script when the system shuts down, but will not otherwise be managed by Puppet?
What I've Tried:
I made a hokey exec statement in Puppet that calls chkconfig directly, but that feels inelegant (and will probably break in some way I haven't thought of).
I played around with the noop flag to the service type in Puppet, but it didn't seem to have the desired effect.
Puppet does not have any built-in support for configuring which runlevels a service runs in, nor any built-in, generalized support for chkconfig. Ordinarily it is a service-installation responsibility to register the service with chkconfig; services that are installed from the system RPMs are registered that way.
Furthermore, chkconfig recognizes structured comments at the top of initscripts to determine which runlevels the service will run in by default, according to LSB convention. A proper initscript need only be registered with chkconfig to have the default runlevels set -- in particular, for it to be set to be stopped in runlevels 0 and 6, which is what you're after.
If you're rolling your own initscripts and deploying them manually or directly via Puppet (as opposed to packaging them up and installing them via Yum) then your best bet is probably to build a defined type that manages the initscript and its registration. You do not need and probably do not want a Service resource for it, but a File resource to put the proper file in place and an Exec resource to handle registration sounds about right.
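To make that concrete, here is roughly what such a defined type would put in place and run on the node (the service name and priorities are placeholders; the header shown is the Red Hat style that the comment-parsing refers to). First, the excerpt from the top of /etc/init.d/mysvc:

    #!/bin/sh
    # chkconfig: 2345 80 20
    # description: manually-run service that must stop cleanly at shutdown
    # (the "chkconfig:" line gives default runlevels, start priority, stop priority)

Registering it creates the S links for runlevels 2-5 and K (kill) links in the others, including 0 and 6, which is exactly the clean stop at halt/reboot you're after:

    chkconfig --add mysvc
    chkconfig --list mysvc    # verify: "off" in runlevels 0 and 6 means K links exist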

How to organize the containers in Docker?

I am developing new content, so I am building a server.
On my server the base system is CentOS 7. I installed Docker, pulled the CentOS image, and set up a "web server container" running Django with uWSGI and nginx.
Now I want to bring up another service, a database with PostgreSQL. What is the best way to do it?
Install PostgreSQL in my existing container (alongside the web server), or
build a new container just for the database?
I'd like to know the advantages and weak points of each approach.
It's idiomatic to use two separate containers. It's also simpler: if you have two or more processes in a container, you need a parent process to monitor them (typically people use a process manager such as supervisord). With only one process, you don't need to do this.
By monitoring, I mainly mean making sure that all processes are shut down correctly when the container receives a SIGTERM signal. If you don't handle this properly, you will end up with zombie processes. You won't need to worry about it if you have only a single process or use a process manager.
Further, as Greg points out, having separate containers allows you to orchestrate and schedule the containers separately, so you can do update/change/scale/restart each container without affecting the other one.
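As a concrete sketch of the two-container route (the image, network, and volume names are placeholders; on a user-defined network, Docker's embedded DNS lets containers reach each other by name):

    docker network create appnet
    # Official postgres image; it requires POSTGRES_PASSWORD to start.
    docker run -d --name db --network appnet \
        -e POSTGRES_PASSWORD=secret \
        -v pgdata:/var/lib/postgresql/data postgres
    # Hypothetical Django image; it reaches the database at host name "db".
    docker run -d --name web --network appnet \
        -e DATABASE_HOST=db my-django-image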
If you want to keep the data in the database after a restart, the database shouldn't be in a container but on the host. I will assume you want the db in a container as well.
Setting up a second container is more work. You have to find a way for the containers to learn each other's addresses. The address changes each time you start a container, so you need some scripts on the host: the host must find out the IP addresses and inform the containers.
The containers might then update their /etc/hosts files with the address of the other container. This is a nice solution when you want to emulate different servers and perform resilience tests. You will need quite some bash knowledge before you get this running well.
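A sketch of the host-side glue being described ("db" and "web" are placeholder container names):

    # Ask Docker for the db container's current address...
    DB_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' db)
    # ...and push it into the web container's hosts file:
    docker exec web sh -c "echo '$DB_IP db' >> /etc/hosts"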
In almost all other situations, choose a single container. Installing everything in one container is easier to set up and to develop against afterwards. Docker is just the environment in which you do your real work; tooling should help you with that work, not consume all your time and effort.
