I am writing a packer, but it does not work with programs that use TLS (Thread Local Storage). How does the Windows loader deal with it?
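For context, a quick way to see whether a target even carries a TLS directory is to dump IMAGE_DIRECTORY_ENTRY_TLS, for example with the third-party pefile module (the file name below is just a placeholder):

import pefile  # third-party: pip install pefile

pe = pefile.PE("target.exe")  # placeholder name
tls = getattr(pe, "DIRECTORY_ENTRY_TLS", None)
if tls is None:
    print("no TLS directory")
else:
    d = tls.struct
    # The loader uses these fields to set up per-thread storage and to
    # run any TLS callbacks before the program's entry point executes.
    print("StartAddressOfRawData:", hex(d.StartAddressOfRawData))
    print("EndAddressOfRawData:  ", hex(d.EndAddressOfRawData))
    print("AddressOfIndex:       ", hex(d.AddressOfIndex))
    print("AddressOfCallBacks:   ", hex(d.AddressOfCallBacks))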
I have some old VMs from Azure that were disconnected, Windows and Linux (Ubuntu, I believe). I am trying the Windows one first, and I am having problems attaching the virtual disk:
DiskPart has encountered an error: The requested operation could not be completed due to a virtual disk system limitation. Virtual hard disk files must be uncompressed and unencrypted and must not be sparse.
See the System Event Log for more information.
There is no compression on the file.
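For reference, a small script along these lines (a sketch; the path is a placeholder, and it must run on the Windows machine holding the file) reports which of the three conditions from the error message applies to the VHD:

import ctypes
from ctypes import wintypes

FILE_ATTRIBUTE_SPARSE_FILE = 0x0200
FILE_ATTRIBUTE_COMPRESSED  = 0x0800
FILE_ATTRIBUTE_ENCRYPTED   = 0x4000
INVALID_FILE_ATTRIBUTES    = 0xFFFFFFFF

GetFileAttributesW = ctypes.windll.kernel32.GetFileAttributesW
GetFileAttributesW.restype = wintypes.DWORD

attrs = GetFileAttributesW(r"D:\disks\old-azure-vm.vhd")  # placeholder path
if attrs == INVALID_FILE_ATTRIBUTES:
    raise ctypes.WinError()

print("compressed:", bool(attrs & FILE_ATTRIBUTE_COMPRESSED))
print("encrypted: ", bool(attrs & FILE_ATTRIBUTE_ENCRYPTED))
print("sparse:    ", bool(attrs & FILE_ATTRIBUTE_SPARSE_FILE))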
I am using a QEMU VM as my main OS.
But I still have a Lubuntu machine that serves as the host, even though it is technically unnecessary.
I also need to sign into the host machine first, and it runs services such as a window manager, networking, etc., which only consume RAM even though they are useless to me because of PCIe passthrough.
So why not run the VM bare metal? Because I like that I don't have to care about the native hardware problems that would otherwise occur.
So I was wondering whether there is some kind of pure "QEMU OS" that serves as a compatibility layer between the VM and the hardware.
Maybe a bare Linux kernel that starts an X server and QEMU, but nothing more...
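To make the idea concrete, this is roughly what I picture the whole userspace doing, as a hypothetical sketch where every path, PCI address, and QEMU argument is just a placeholder:

#!/usr/bin/env python3
# Hypothetical minimal "init": the only userspace jobs are an X server and QEMU.
import subprocess, sys

def main():
    # Bare X server on display :0 (assumes Xorg is present in the image).
    subprocess.Popen(["/usr/bin/Xorg", ":0", "-nolisten", "tcp"])
    # The guest, with VFIO passthrough of a GPU at a made-up PCI address.
    qemu = subprocess.Popen(
        [
            "/usr/bin/qemu-system-x86_64",
            "-enable-kvm",
            "-cpu", "host",
            "-m", "16G",
            "-device", "vfio-pci,host=01:00.0",
            "-drive", "file=/vm/main.qcow2,if=virtio",
        ],
        env={"DISPLAY": ":0"},
    )
    # When the guest powers off, the machine has nothing left to do.
    sys.exit(qemu.wait())

if __name__ == "__main__":
    main()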
Thanks in advance.
In production we're going to deploy a Redis server and need to set vm.overcommit_memory=1 and disable Transparent Huge Pages (THP) in the kernel.
The issue is that currently we only have one giant server, and it has to be shared by many other apps. We only want those kernel configs on the Redis server. I wonder if we can achieve this by spinning up a dedicated VM for Redis. Doing so in Docker certainly doesn't make sense. My questions are:
Will those kernel configs take actual effect in the Redis VM even if the host OS doesn't have the same configs? I doubt it, since the hardware resources are allocated by the host machine in the end.
Will the kernel configs in the Redis VM affect other VMs that run other apps? I think they won't; I just want to confirm.
To achieve the goal, what kind of VM or hypervisor should we use?
If there's no way to do it in a VM, is having a separate (hardware) server for Redis the only way to go?
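For reference, these are the two settings in question, as they would be applied inside the dedicated Redis VM (a sketch; run as root, and note the THP path can differ between distributions):

# Apply the two kernel settings inside the Redis VM.
def write(path, value):
    with open(path, "w") as f:
        f.write(value)

# Equivalent to: sysctl vm.overcommit_memory=1
write("/proc/sys/vm/overcommit_memory", "1")

# Equivalent to: echo never > /sys/kernel/mm/transparent_hugepage/enabled
write("/sys/kernel/mm/transparent_hugepage/enabled", "never")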
If you're running a real kernel on a virtual machine, the VM should be able to correctly handle overcommitted memory.
The host server will grant a fixed chunk of memory to the VM. The VM should manage that memory as it sees fit, including overcommitting its own address space.
This will not affect other applications running on the host (apart from the fact that it has less memory available). If it does, there is a problem with your hypervisor.
This should work with any Hypervisor. KVM is a good place to start.
Note that I have not actually tried this -- experiment results are welcome!
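As a quick sanity check (equally untested here), the values can be read back inside the guest and on the host; since each kernel keeps its own copy, the two can legitimately differ:

# Run once inside the Redis VM and once on the host to confirm the
# guest kernel's settings are independent of the host's.
def read(path):
    with open(path) as f:
        return f.read().strip()

print("vm.overcommit_memory =", read("/proc/sys/vm/overcommit_memory"))
print("transparent_hugepage =", read("/sys/kernel/mm/transparent_hugepage/enabled"))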
I know that VMware vSphere VMs can be encrypted using a KMS server, but can the actual drive that vSphere is hosted on be encrypted? With Microsoft, the hypervisor host can be encrypted if BitLocker is enabled.
Not explicitly. You can, however, use Secure Boot to ensure that only signed code is run: https://blogs.vmware.com/vsphere/2017/05/secure-boot-esxi-6-5-hypervisor-assurance.html
Based on Kyle Rudy's VMware link, the following is good to note:
https://blogs.vmware.com/vsphere/2017/05/secure-boot-esxi-6-5-hypervisor-assurance.html
TPM and TXT
The question always comes up in customer conversations of “Does this require TPM or TXT??”. The answer is no. They are mutually exclusive. Secure Boot for ESXi is purely a function of the UEFI firmware and the validation of cryptographically signed code. Period.
Note that TPM 1.2 and TPM 2.0 are two vastly different implementations. They are not backwards compatible. There is support, via 3rd parties like HyTrust, for TPM 1.2 in ESXi 6.5.
TPM 2.0 is not supported in 6.5.
Standard BIOS firmware vs UEFI firmware
Typically, switching your hosts from their standard (legacy) BIOS firmware to UEFI firmware in some operating systems will cause issues. With ESXi, you can switch with no modification to ESXi. If you have installed 6.5 using standard BIOS and you want to try out Secure Boot then in the host firmware you can switch and ESXi will come up.
I want to install CloudStack 4.2.0 on my 32-bit Ubuntu in VirtualBox. Is it possible?
And what are the advantages/disadvantages of this compared to a real machine?
Thanks.
I presume that you're talking about running the Apache CloudStack management server in a 32-bit virtual machine that runs in Virtual Box.
To do anything meaningful with CloudStack, you need at least one hypervisor to control. To avoid the need for hardware, many CloudStack developers use DevCloud. DevCloud comes with configuration scripts that make it easier for a beginner to set up the Apache CloudStack management server.
One issue is memory. If the O/S running VirtualBox is 32-bit, you'll only have 3 gigs of RAM for user processes. Of this, DevCloud will use about 2 gigs, so memory can be quite tight.
Another issue is networking. Make sure that there is a network path from the management server to the hypervisors it is meant to control and the storage that it will use for templates (aka secondary storage).
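A rough way to check that network path before wiring everything up is a plain TCP probe from the management server; the addresses and ports below are only placeholders for your hypervisor API and NFS secondary storage:

import socket

# Placeholder targets: adjust to your hypervisor and secondary storage.
targets = [
    ("192.168.56.10", 443),   # hypervisor management API
    ("192.168.56.20", 2049),  # NFS secondary storage
]

for host, port in targets:
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"OK   {host}:{port}")
    except OSError as exc:
        print(f"FAIL {host}:{port} ({exc})")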
Yes, you can deploy Apache CloudStack on a virtual machine; you can even deploy a whole CloudStack infrastructure on virtual machines, provided that you have enough RAM.
You can deploy the primary storage, secondary storage, MySQL server, and CloudStack management server on virtual machines; however, the host VMs that will provide the execution environment for your CloudStack instances need nested virtualization, which is not available in VirtualBox, so use VMware Workstation instead as your type-2 hypervisor (a quick check for this is sketched below).
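To confirm whether a host VM actually exposes the virtualization extensions, run something like this inside the VM that is supposed to act as a CloudStack host (a simple sketch, Linux guests only):

# If neither flag is present, nested virtualization is not exposed
# and KVM guests cannot run inside this VM.
with open("/proc/cpuinfo") as f:
    flags = set(f.read().split())

print("Intel VT-x (vmx):", "vmx" in flags)
print("AMD-V (svm):     ", "svm" in flags)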
good luck