Does a Windows Azure virtual machine log out when disconnecting?

I use a Windows Azure virtual machine. I have a few programs running, and in the future they will have to run non-stop.
If I disconnect from my virtual machine, will the virtual machine log out/shut down?
I have essential programs running, and if they stop running, it will be a huge problem for me.

No, your virtual machines don't shut down when you disconnect from RDP/SSH. However, there's always the possibility that your machine will restart, even if infrequently (e.g. failed hardware, Host OS maintenance, etc.). So your essential programs should be set up to auto-start at boot time, without requiring you to RDP/SSH in to start them up again.
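To make that last point concrete, here is a minimal watchdog sketch for a Linux guest, assuming it is run from cron (for example an @reboot entry plus a periodic one); the process name and start command are hypothetical placeholders, not taken from the question.

    #!/usr/bin/env python3
    """Minimal auto-restart watchdog sketch for a Linux guest.

    Assumption: this script is invoked by cron (e.g. @reboot and every few
    minutes), so the essential program comes back after an unplanned reboot
    without anyone needing to RDP/SSH in. Names below are placeholders.
    """
    import subprocess

    PROCESS_NAME = "my-essential-app"                  # hypothetical process name
    START_COMMAND = ["/opt/my-essential-app/run.sh"]   # hypothetical start command

    def is_running(name):
        # pgrep exits with 0 when at least one matching process exists
        return subprocess.run(["pgrep", "-f", name],
                              stdout=subprocess.DEVNULL).returncode == 0

    if not is_running(PROCESS_NAME):
        # detach so the watchdog can exit while the program keeps running
        subprocess.Popen(START_COMMAND,
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL,
                         start_new_session=True)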

Related

Linux Kernel Config Scopes within VM or Hypervisor

In production we're going to deploy a redis server and need to set overcommit_memory=1 and disable Transparent Huge Pages in the kernel.
The issue is that currently we only have one giant server, and it is to be shared by many other apps. We only want those kernel configs on the redis server. I wonder if we can achieve this by spinning up a dedicated VM for redis. Doing so in docker certainly doesn't make sense. My questions are:
Will those Kernel configs take actual effect in the redis VM even if the host OS doesn't have the same configs? I doubt it since the hardware resource is allocated by the host machine in the end.
Will the kernel config in the redis VM affect other VMs that run other apps? I think it won't, just want to confirm.
To achieve the goal, what kind of VM or hypervisor should we use?
If there's no way to do it in VM, is having a separate server (hardware) for redis the only way to go?
If you're running a real kernel on a virtual machine, the VM should be able to correctly handle overcommitted memory.
The host server will grant a fixed chunk of memory to the VM. The VM should manage that memory as it sees fit, including overcommitting its own address space.
This will not affect other applications running on the host (apart from the fact that it has less memory available). If it does, there is a problem with your hypervisor.
This should work with any hypervisor. KVM is a good place to start.
Note that I have not actually tried this -- experiment results are welcome!
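If you do try it, a quick way to confirm the two settings actually took effect inside the redis VM is to read them back from the guest's own kernel interfaces. A minimal sketch, assuming a stock Linux guest where the standard /proc and /sys paths exist (the paths are the usual Linux locations, not something stated in the question):

    #!/usr/bin/env python3
    """Read back the two kernel settings from inside the redis VM."""
    from pathlib import Path

    overcommit = Path("/proc/sys/vm/overcommit_memory").read_text().strip()
    thp = Path("/sys/kernel/mm/transparent_hugepage/enabled").read_text().strip()

    print("vm.overcommit_memory =", overcommit, "(want 1)")
    print("transparent_hugepage =", thp, "(want [never] selected)")

    if overcommit == "1" and "[never]" in thp:
        print("OK: settings are active in this guest")
    else:
        print("settings not (yet) applied in this guest")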

Oracle Linux, prevent network adapter from sleeping when running in VirtualBox on macOS

I have installed Oracle Linux 7 with the current version of VirtualBox, running on macOS Sierra on a MacBook. It therefore has a battery but is plugged in at all times.
For networking I use 2 adapters, one NAT for internet and one host-only adapter for SSH etc.
For some time now I have been wondering why I keep getting a broken SSH pipe. Trial and error showed me that the VM goes to sleep (black screen), which breaks the network adapter; as soon as I wake it up again by typing into the VM itself, it shows the name of the adapter and simply "Reset adapter".
I can then restart the network adapter via /etc/init.d/network restart and it will work again.
Any ideas how I can change that? My Linux skills are very limited and I am not even sure what Oracle Linux is based on. Most tips I find online do not work, and having no GUI also makes it difficult to just hop into power settings or something similar.
This worked for me, on a Windows host machine.
Configure your network adapter to
1) Allow the network adapter to wake the computer,
2) Allow a magic packet to wake the computer,
3) Allow IPV6
http://www.worldstart.com/dropped-internet-connection-in-sleep-mode/
Now, when I sleep my computer, and then wake it up, I get networking on both the host and guest, not just host.
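The answer above is for a Windows host. If you just want to automate the guest-side workaround from the question until the sleep issue itself is solved, a small cron-driven sketch inside the Oracle Linux guest could look like this; the gateway address is the VirtualBox host-only default and is an assumption, not something from the question:

    #!/usr/bin/env python3
    """Automate the asker's workaround: restart networking when the link drops."""
    import subprocess

    # VirtualBox host-only default gateway; adjust to your own host-only network
    HOST_ONLY_GATEWAY = "192.168.56.1"

    def link_is_up(target):
        # one ping with a two-second timeout
        return subprocess.run(["ping", "-c", "1", "-W", "2", target],
                              stdout=subprocess.DEVNULL).returncode == 0

    if not link_is_up(HOST_ONLY_GATEWAY):
        # same command the asker runs by hand after waking the VM
        subprocess.run(["/etc/init.d/network", "restart"])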

Does vmware-tools restart have to be done in ESX as well as guest?

When restarting the vmware-tools service on the Linux Guest is it necessary to also restart the vmware-tools service on the ESX? I ask because I have 2 other guests running on this ESX/blade.
I'm trying to resolve an obscure issue with SNMP traps not indicating guest health and have to schedule all work accordingly since I manage hundreds of Linux guests on ESX hosts.
No, and there is no vmware-tools service on the ESX host. Since you are talking about SNMP, the corresponding host-side service would be hostd. Unless your change does not solve the issue, there is no need to restart hostd.
Restarting the Management agents on an ESXi or ESX host (1003490)
https://kb.vmware.com/kb/1003490

Starting a remote Device Manager

I am running a waveform that has devices on more than one computer. The domain manager and a device manager start up on one GPP(1). A device manager starts up on the other GPP(2). The domain manager and the device managers are started when the GPPs are booted up.
I can have a procedure for the operator that states to start GPP(2) up first, then GPP(1). But this approach is not preferable. I would like the ability to start the Device Manager on GPP(2) from GPP(1) after the domain manager has been started.
Does REDHAWK have a standard way of starting remote Device Managers?
The DeviceManager is designed to wait for the DomainManager on startup. So, the standard procedure is to add a script to /etc/init.d so that the remote DeviceManager will start up whenever the remote machine running it boots.
To clarify, let's elaborate using your example. Machine 1 will run the DomainManager and DeviceManager(1)/GPP(1). This machine could be at 192.168.1.1. Machine 2 will run DeviceManager(2)/GPP(2). This machine could be at 192.168.1.2.
The DomainManager will start up whenever machine 1 boots. It will wait happily for DeviceManagers to register with it.
Machine 2's /etc/omniORB.cfg file is set to point to 192.168.1.1. When it boots, the DeviceManager will attempt to register with the DomainManager. One of two things will happen:
The DomainManager at 192.168.1.1 is already up and running. In this case, DeviceManager(2) registers successfully and is ready to run applications.
The DomainManager at 192.168.1.1 is not yet running. In this case, DeviceManager(2) will hang out and wait for the DomainManager to come up.
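As a small illustration of that registration step, here is a pre-flight sketch for machine 2 that checks where its omniORB.cfg points and whether the name service there is reachable yet. It assumes the usual omniORB conventions, an InitRef line in /etc/omniORB.cfg and omniNames listening on its default port 2809; neither detail comes from the question itself.

    #!/usr/bin/env python3
    """Pre-flight check for the remote DeviceManager node (machine 2)."""
    import re
    import socket
    from pathlib import Path

    NAME_SERVICE_PORT = 2809   # omniNames default port (assumed)

    text = Path("/etc/omniORB.cfg").read_text()
    match = re.search(r"InitRef\s*=\s*NameService=corbaname::([\w.\-]+)", text)
    if not match:
        raise SystemExit("no NameService InitRef found in /etc/omniORB.cfg")

    host = match.group(1)
    print("omniORB.cfg points at", host)

    try:
        with socket.create_connection((host, NAME_SERVICE_PORT), timeout=3):
            print("name service reachable; the DeviceManager can register now")
    except OSError:
        print("name service not reachable yet; the DeviceManager will wait for it")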

Should I shut down LXC containers before shutting down the host?

I am running an Ubuntu Precise host with some LXC containers in it.
Should I, or do I have to, shut down the containers before shutting down the host?
Or is a host shutdown propagated to the containers automatically?
I know that if I use the LXC autostart feature it will shut down the containers, but I do not want to use autostart.
The containers themselves usually run directly on the host system's filesystem and don't have a filesystem of their own. This means that, from a filesystem point of view, you can just kill LXC without risking any filesystem corruption.
But if you have services running inside LXC, like for example MySQL, or other services that require a clean shutdown for their own data stores, then it's important that these processes get stopped cleanly. Otherwise you risk corruption in the data stores of those services.
If you use the script in /etc/init.d to start your LXCs, they should get the signal to shut down automatically once you shut down your host system, because init will call the /etc/init.d script with stop. If you started them manually, for example via lxc-start on the CLI, and you want to be sure that they get shut down cleanly, it's better to do it manually before shutting down the host system.
hope that helps.
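If you do start containers by hand and want a single hook to stop them cleanly before a host shutdown, a minimal sketch could look like the following. It assumes a reasonably recent LXC userspace where lxc-ls --running lists active containers and lxc-stop -n NAME requests a clean shutdown by default (on older releases the graceful command was lxc-shutdown), so treat the exact commands as assumptions to check against your LXC version.

    #!/usr/bin/env python3
    """Stop manually started LXC containers cleanly before a host shutdown."""
    import subprocess

    def running_containers():
        out = subprocess.run(["lxc-ls", "--running"],
                             capture_output=True, text=True, check=True).stdout
        return out.split()

    for name in running_containers():
        print("stopping", name, "...")
        # clean shutdown request; lxc-stop only hard-kills after its own timeout
        subprocess.run(["lxc-stop", "-n", name])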

Resources