Should I shut down LXC containers before shutting down the host? - linux

I am running an Ubuntu Precise host with some LXC containers in it.
Do I have to shut down the containers before shutting down the host?
Or is the host shutdown propagated to the containers automatically?
I know that the LXC autostart feature would shut the containers down, but I do not want to use autostart.

The containers themselves usually run directly on the host system's filesystem and don't have a filesystem of their own. From a filesystem point of view, this means you can just kill LXC without risking any filesystem corruption.
But if you have services running inside LXC, such as MySQL or anything else that needs a clean shutdown for its own data store, then it's important that these processes are stopped cleanly. Otherwise you risk corrupting those services' data stores.
If you use the script in /etc/init.d to start your LXC containers, they should get the shutdown signal automatically when you shut down the host system, because init calls that script's stop action. If you started them manually, for example with lxc-start on the CLI, and you want to be sure they get shut down cleanly, it's better to do it yourself before shutting down the host system.
Hope that helps.
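For reference, a minimal sketch of doing that manual clean shutdown for all running containers, assuming the lxc-ls --active and lxc-stop commands from LXC 1.0 or later; on the LXC 0.8-era packages shipped with Precise the clean-shutdown command was lxc-shutdown instead:

  # Ask every running container to shut down cleanly, waiting up to 30 s each.
  for c in $(lxc-ls --active); do
      lxc-stop -n "$c" -t 30     # on LXC 0.8-era tools: lxc-shutdown -n "$c"
  done

Run this before the host shutdown (or hook it into your own init script) so the services inside each container get their normal stop sequence.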

Related

How the gcloud CLI works internally to stop instances

Team, I need to know how the gcloud compute instances stop [instance name] command works internally.
Does it do a graceful or a non-graceful shutdown?
Also, is there any way to do a non-graceful (forced) VM shutdown via the CLI?
That command can be considered a graceful shutdown if the VM responds appropriately. If it doesn't within a given time frame, it will be forced to shut down. I don't think there is a way to do a non-graceful, forced shutdown without attempting an ACPI signal first.
Stopping a VM causes Compute Engine to send the ACPI shutdown signal to the VM. Modern guest operating systems (OS) are configured to perform a clean shutdown before powering off in response to the power off signal. Compute Engine waits a short time for the guest OS to finish shutting down and then transitions the VM to the TERMINATED state.
https://cloud.google.com/compute/docs/instances/stop-start-instance
https://cloud.google.com/sdk/gcloud/reference/compute/instances/stop
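For completeness, the stop call itself looks like this (instance and zone names are placeholders); gcloud compute instances reset is the closest documented "hard" operation, but it reboots the VM rather than leaving it stopped:

  # Graceful stop: Compute Engine sends the ACPI power-off signal, then forces
  # termination if the guest has not shut down within the grace period.
  gcloud compute instances stop my-instance --zone=us-central1-a

  # Hard reset (no ACPI shutdown; the VM comes straight back up) - shown for contrast.
  gcloud compute instances reset my-instance --zone=us-central1-a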

What's the difference between docker run --device and docker run --volume?

If everything is "just" a file in linux, how do files/nodes in /dev differ from other files such that docker must handle them differently?
What does docker do differently for device files? I expect it to be a shorthand for a more verbose bind command?
In fact, after just doing a regular bind mount for a device file such as --volume /dev/spidev0.0:/dev/spidev0.0 , the user get's a "permission denied" within the docker container when trying to access the device. When binding via --device /dev/spidev0.0:/dev/spidev0.0, it works as expected.
The Docker run reference page has a link to Linux kernel documentation on the cgroup device whitelist controller. In several ways, a process running as root in a container is a little bit more limited than the same process running as root on the host: without special additional permissions (capabilities), you can't reboot the host, mount filesystems, create virtual NICs, or any of a variety of other system-administration tasks. The device system is separate from the capability system, but it's in the same spirit.
The other way to think about this is as a security feature. A container shouldn't usually be able to access the host's filesystem or other processes, even if it's running as root. But if the container process can mknod kmem c 1 2 and access kernel memory, or mknod sda b 8 0 guessing that the host's hard drive looks like a SCSI disk, it could in theory escape these limitations by directly accessing low-level resources. The cgroup device limit protects against this.
Since Docker is intended as an isolation system where containers are restricted environments that can't access host resources, it can be inconvenient at best to run tasks that need physical devices or host files. If Docker's isolation features don't make sense for your use case, the process might run better directly on the host, without involving Docker.
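To make the distinction concrete, here is a sketch assuming a SPI device node registered under character major 153 (check the real major:minor with ls -l /dev/spidev0.0 on your host); --device both bind-mounts the node and adds a device-cgroup allow rule, which a plain --volume does not:

  # Bind mount only: the node appears in the container, but the device cgroup
  # still denies access, so opening it fails with "permission denied".
  docker run --rm --volume /dev/spidev0.0:/dev/spidev0.0 debian head -c1 /dev/spidev0.0

  # --device: bind mount plus a whitelist entry in the device cgroup, so the open succeeds.
  docker run --rm --device /dev/spidev0.0:/dev/spidev0.0 debian head -c1 /dev/spidev0.0

  # Roughly the same effect spelled out by hand (153 is the assumed spidev major).
  docker run --rm \
      --volume /dev/spidev0.0:/dev/spidev0.0 \
      --device-cgroup-rule='c 153:* rmw' \
      debian head -c1 /dev/spidev0.0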

Is it safe to restart the D-Bus daemon after a system update?

Is it safe to restart the D-Bus daemon after an update on a production system?
Services and daemons use dynamically linked libraries, and I use needrestart to determine which services should be restarted.
Updates to e.g. libc6 cause almost all daemons to need a restart. I do not allow myself to restart D-Bus (which provides the communication interface across the system). Is there any potential risk of data loss, deadlocks, ...?
My environment is based on Debian (Wheezy, Jessie) with services providing SMTP, HTTP, DNS, DHCP, SMB, FTP, etc., but no X applications.
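Not an answer to the safety question, but for reference, needrestart can be run in list-only or interactive mode so that nothing (dbus included) is restarted until you decide to do it yourself; a minimal sketch, assuming the -r mode switch documented by the tool:

  # Only list the services that link against updated libraries; restart nothing.
  needrestart -r l

  # Interactive mode: confirm restarts one by one and simply deselect dbus.
  needrestart -r i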

Starting of a remote device manager

I am running a waveform that has devices on more than one computer. The domain manager and a device manager start up on one GPP(1). A device manager starts up on the other GPP(2). The domain manager and the device managers are started when the GPPs boot up.
I could give the operator a procedure that says to start GPP(2) first, then GPP(1), but this approach is not preferable. I would like the ability to start the Device Manager on GPP(2) from GPP(1) after the domain manager has been started.
Does REDHAWK have a standard way of starting remote Device Managers?
The DeviceManager is designed to wait for the DomainManager on startup. So, the standard procedure is to add a script to /etc/init.d so that the remote DeviceManager will start up whenever the remote machine running it boots.
To clarify, let's elaborate using your example. Machine 1 will run the DomainManager and DeviceManager(1)/GPP(1). This machine could be at 192.168.1.1. Machine 2 will run DeviceManager(2)/GPP(2). This machine could be at 192.168.1.2.
The DomainManager will start up whenever machine 1 boots. It will wait happily for DeviceManagers to register with it.
Machine 2's /etc/omniORB.cfg file is set to point to 192.168.1.1 (see the sketch after the two cases below). When it boots, the DeviceManager will attempt to register with the DomainManager. One of two things will happen:
The DomainManager at 192.168.1.1 is already up and running. In this case, DeviceManager(2) registers successfully and is ready to run applications.
The DomainManager at 192.168.1.1 is not yet running. In this case, DeviceManager(2) will hang out and wait for the DomainManager to come up.
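For reference, a sketch of the two pieces on machine 2, assuming a stock REDHAWK install; the node name and DCD path are hypothetical, and the InitRef lines are the usual way of pointing the remote DeviceManager at machine 1's naming and event services (verify the ports against your install):

  # /etc/omniORB.cfg on machine 2: resolve CORBA references via machine 1.
  InitRef = NameService=corbaname::192.168.1.1
  InitRef = EventService=corbaloc::192.168.1.1:11169/omniEvents

  # Started by an /etc/init.d script (or by hand); the path is relative to
  # $SDRROOT/dev, and nodeBooter waits for the DomainManager to appear.
  nodeBooter -d /nodes/DevMgr_GPP2/DeviceManager.dcd.xml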

Does a Windows Azure virtual machine log out when disconnecting?

I use a Windows Azure virtual machine. I have a few programs running, and in the future they will have to run non-stop.
If I disconnect from my virtual machine, will the virtual machine log out/shut down?
I have essential programs running, and if they stop running, it will be a huge problem for me.
No, your virtual machine doesn't shut down when you disconnect from RDP/SSH. However, there's always the possibility that your machine will restart, even if infrequently (e.g. failed hardware, host OS maintenance, etc.). So your essential programs should be set up to auto-start at boot time, without requiring you to RDP/SSH in to start them up again.
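As one hedged example of that auto-start setup on a Windows guest (task name and program path are placeholders), Task Scheduler can launch the program at boot, before anyone connects over RDP:

  REM Register a boot-time task that runs the program as SYSTEM, no login required.
  schtasks /Create /TN "EssentialApp" /TR "C:\apps\essential.exe" /SC ONSTART /RU SYSTEM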