As far as I understand, at the moment Docker for Mac requires that I decide upfront how much memory and how many CPU cores to statically allocate to the virtualized Linux it runs on.
So that means that even when Docker is idle, my other programs will run on (N-3) CPU cores and (M-3)GB of memory. Right?
This is very suboptimal!
On Linux this is ideal, because a container is just another process, so system memory is used and released as containers start and stop.
Is my mental model correct?
Will Docker for Mac or Windows one day allocate CPU and memory resources dynamically?
The primary issue here is that, for the moment, Docker can only run Linux containers on Linux. That means that on OS X or Windows, Docker runs inside a Linux VM, and its ability to allocate resources is limited by the facilities provided by the virtualization software in use.
Of course, Docker can run natively on Windows, as long as you want to run Windows containers, and in that situation it may more closely match the Linux "a container is just a process" model.
It is possible that this will change in the future, but that's how things stand right now.
So that means that even when Docker is idle, my other programs will run on (N-3) CPU cores and (M-3)GB of memory. Right?
I suspect that's true for memory. I believe that if the Docker VM is idle it isn't actually using much in the way of CPU resources (that is, you are not dedicating CPUs to the VM; rather, you are setting a maximum limit on how many resources the VM can consume).
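You can check what the Linux VM has actually been given from inside Docker itself with docker info; the figures below are purely illustrative:

$ docker info | grep -E 'CPUs|Total Memory'
CPUs: 4
Total Memory: 5.818GiB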
In production we're going to deploy a Redis server, and we need to set overcommit_memory=1 and disable Transparent Huge Pages in the kernel.
The issue is that currently we only have one giant server, and it is shared by many other apps. We only want those kernel configs for the Redis server, so I wonder if we can achieve this by spinning up a dedicated VM for Redis. Doing so in Docker certainly doesn't make sense. My questions are:
Will those kernel configs actually take effect in the Redis VM even if the host OS doesn't have the same configs? I doubt it, since the hardware resources are allocated by the host machine in the end.
Will the kernel config in the Redis VM affect other VMs that run other apps? I think it won't; I just want to confirm.
To achieve the goal, what kind of VM or hypervisor should we use?
If there's no way to do it in a VM, is having a separate physical server for Redis the only way to go?
If you're running a real kernel on a virtual machine, the VM should be able to correctly handle overcommitted memory.
The host server will grant a fixed chunk of memory to the VM. The VM should manage that memory as it sees fit, including overcommitting its own address space.
This will not affect other applications running on the host (apart from the fact that the host has less memory available). If it does, there is a problem with your hypervisor.
This should work with any hypervisor; KVM is a good place to start.
Note that I have not actually tried this -- experiment results are welcome!
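For reference, the settings in question would be applied inside the guest OS of the Redis VM, not on the host. A minimal sketch, assuming a standard Linux guest with the usual sysctl/sysfs locations:

# inside the Redis VM's guest OS
sysctl -w vm.overcommit_memory=1                            # allow memory overcommit
echo never > /sys/kernel/mm/transparent_hugepage/enabled    # disable Transparent Huge Pages
# to persist the sysctl setting across reboots:
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf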
I have set up my development environment inside a virtual machine running Ubuntu 14.04. My company doesn't allow me to run a Linux-flavoured OS directly, maybe for security reasons.
One thing I have observed is that in the VM it only runs one task at a time, whereas if I run it on my personal laptop it runs multiple tasks at a time.
Is there any way to configure poky, in the local.conf file for example or any other file, so that it runs multiple tasks at the same time? I have given more than 6 GB of RAM to the VM.
As it is running one task at a time, the build is taking a lot of time.
Thanks for your time
The bitbake task executor queries the number of CPUs dynamically, so it seems that you might have allocated only 1 CPU to your VM. You can check the CPUs visible to the VM with the command below:
lscpu
You might want to allocate more CPUs; VirtualBox lets you do that:
Stop the virtual machine.
Click Settings -> System -> Processor -> change the number of processors.
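If you prefer the command line, the same change can be made with VBoxManage (the VM name is a placeholder). Once the VM sees more CPUs, you can also set bitbake's parallelism explicitly in conf/local.conf; the value 4 below is illustrative:

VBoxManage modifyvm "ubuntu-build" --cpus 4   # run while the VM is stopped

# conf/local.conf
BB_NUMBER_THREADS = "4"    # bitbake tasks run in parallel
PARALLEL_MAKE = "-j 4"     # make jobs within each task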
I have a development environment running in a Vagrant VM (VirtualBox). Considering I have 11 GB of spare RAM, I thought I could run the VM completely in RAM.
Would anyone know of an approach to this, and would I gain much performance from it?
If you have that much memory available, most probably the image will already be cached by the host OS, so you don't need to worry about it.
I've tried putting the image files onto a ramdisk on my MacBook and didn't see any improvement in a 5-minute run (most of which was apt-get install stuff).
Traditionally, VirtualBox has opened disk image files as normal files, which results in them being cached by the host operating system like any other file. The main advantage of this is speed: when the guest OS writes to disk and the host OS cache uses delayed writing, the write operation can be reported as completed to the guest OS quickly while the host OS can perform the operation asynchronously. Also, when you start a VM a second time and have enough memory available for the OS to use for caching, large parts of the virtual disk may be in system memory, and the VM can access the data much faster. (c) 5.7. Host IO caching
Also, the benefits you'll see greatly depend on the process you run there; if it's dominated by CPU or network, tinkering with the storage won't help you a lot.
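If you still want to try it, one approach is to mount a tmpfs on the host and point VirtualBox's machine folder at it. A minimal sketch assuming a Linux host; the size and path are illustrative, and note that tmpfs contents are lost on reboot:

# create an 11 GB RAM-backed filesystem
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=11G tmpfs /mnt/ramdisk
# tell VirtualBox to keep new machines (and their disk images) there
VBoxManage setproperty machinefolder /mnt/ramdisk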
A friend asked for a command that can be used to find the real system
execution time for a program in Linux. I replied that the time
command is a great one.
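For reference, time reports three figures; the program name and numbers below are illustrative:

$ time ./my_program
real    0m2.345s    # wall-clock time
user    0m1.800s    # CPU time spent in user mode
sys     0m0.300s    # CPU time spent in the kernel on behalf of the program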
He then asked: is the execution time (via time) that a virtual machine reports for a program the same as the real system execution time of the program?
My instinct was to say it depends:
If there are enough resources on the machine, then the VM time returned would be the same as the real/wall clock time of the program execution.
If the system does NOT have sufficient resources, then the VM time will differ significantly from the real (system) execution time. Moreover, the VM is an application on top of the host OS, which has its own scheduler. This means that the VM needs to invoke system calls that are processed by the host OS, which in turn communicates with the hardware and then provides a real (system) execution time. Hence, the time returned can differ from the real time in this situation.
If the executed program is simple, then the VM time could be equal to real (system) time.
If the executed program is NOT simple, then the VM time could be much different.
Are my assumptions correct?
I now wonder: how could you find/calculate the real execution time of a program run on a virtual machine? Any ideas?
The complexity of the program doesn't matter.
The guest OS doesn't have visibility into the host. If you run 'time' on the guest, the 'sys' value returned describes only the guest system resources used.
Or to rephrase: in a typical virtual machine setup, you're going to allocate only a portion of the host CPU resources to the guest OS. But that's all the guest can see, and thus, that's all the 'time' command can report on. Since you allocated this when you launched the VM, the guest cannot exhaust all of the host resources.
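For example, on a host with 8 cores where the VM was given only 2, this is all the guest will ever see (the numbers are illustrative):

# inside the guest
$ nproc
2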
This is true of pretty much any VM: it would be a major security issue if the guest had knowledge of the hypervisor.
So yes, the sys time could absolutely differ for a VM versus real hardware, because the guest won't have the full resources. You could also see variability depending on whether you're dealing with hardware or software virtualization.
Some good reading here (sections 10.3 through 10.5):
https://www.virtualbox.org/manual/ch10.html#hwvirt
I am trying to measure code performance (basically the speed-up when using threads). So far I have been using Cygwin on Windows, or Linux on a separate machine. Now I have the ability to set up a new system, and I am not sure whether I should dual-boot (Windows and Ubuntu) or use a virtual machine.
My concern is whether I can measure reliable speed-up and possibly other things (performance monitors) in a Linux virtual machine, or if I have to boot Linux natively.
Does anybody have an opinion?
If your "threading" relies heavily on scheduling, I won't recommend you to use VM. VM is just a normal process from the host OS's point of view, so the guest kernel and its scheduler will be affected by scheduling by the host kernel.
If your "threading" is more like parallel computation, I think it's OK to use VM.
For me, it is much safer to boot directly into the system and avoid using a VM in your case. Even without a VM it is already hard to get the same results twice in multi-threading, because the system is also being used for OS tasks; running two operating systems at the same time, as with a VM, only increases the uncertainty in the results. For instance, running your tests 1000 times on a VM might lead to, let's say, 100 over-estimated timings, while it might be only 60 on a dedicated OS. It is your call whether that uncertainty is acceptable or not.
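One way to quantify that uncertainty is simply to repeat the measurement and look at the spread, on both the VM and bare metal. A minimal sketch, where ./bench is a placeholder for your threaded program:

# run the benchmark 100 times and record wall-clock seconds (GNU time writes to stderr)
for i in $(seq 1 100); do
    /usr/bin/time -f "%e" ./bench > /dev/null 2>> results.txt
done
sort -n results.txt | head -n 1   # fastest run
sort -n results.txt | tail -n 1   # slowest run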