I am trying to measure code performance (basically the speed-up when using threads). So far I have been using Cygwin on Windows, or Linux on a separate machine. Now I have the chance to set up a new system, and I am not sure whether I should dual-boot (Windows and Ubuntu) or use a virtual machine.
My concern is whether I can measure reliable speed-ups, and possibly other things (performance monitors), from a Linux virtual machine, or whether I have to boot natively into Linux.
Does anybody have an opinion?
If your "threading" relies heavily on scheduling, I won't recommend you to use VM. VM is just a normal process from the host OS's point of view, so the guest kernel and its scheduler will be affected by scheduling by the host kernel.
If your "threading" is more like parallel computation, I think it's OK to use VM.
For me, it is much safer to boot directly into the system and avoid using a VM in your case. Even without a VM it is already hard to get the same results twice in multi-threaded code, because the system is also busy with OS tasks; having two operating systems running at the same time, as with a VM, only increases the uncertainty of the results. For instance, running your tests 1000 times on a VM might give you, say, 100 over-estimated timings, while it might be only 60 on a dedicated OS. It is your call whether that uncertainty is acceptable or not.
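To make the timing side concrete, here is a minimal sketch (my own illustration, not from the answers above) of how you could time a threaded workload with a monotonic wall clock and repeat the run several times, so the run-to-run spread mentioned above becomes visible. The thread count, repeat count, and workload are placeholders.

```c
/* speedup_sketch.c - illustrative only: time a trivial threaded workload
 * several times and print each wall-clock duration, so the run-to-run
 * variance (native vs. VM) can be compared.
 * Build: gcc -O2 -pthread speedup_sketch.c -o speedup_sketch
 */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS 4   /* placeholder thread count */
#define REPEATS  10  /* placeholder number of timed runs */

static void *work(void *arg)
{
    volatile double x = 0.0;                 /* placeholder CPU-bound work */
    for (long i = 0; i < 50 * 1000 * 1000; i++)
        x += i * 0.5;
    (void)x;
    return NULL;
}

static double now_sec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);     /* wall clock, immune to clock adjustments */
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    for (int r = 0; r < REPEATS; r++) {
        pthread_t tid[NTHREADS];
        double t0 = now_sec();

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, work, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);

        printf("run %d: %.3f s\n", r, now_sec() - t0);
    }
    return 0;
}
```

Comparing the spread of these numbers on bare metal versus inside the VM is a quick way to judge whether the VM's extra noise matters for your measurements.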
As far as I understand, at the moment Docker for Mac requires me to decide up front how much memory and how many CPU cores to statically allocate to the virtualized Linux it runs on.
So that means that even when Docker is idle, my other programs will run on (N-3) CPU cores and (M-3)GB of memory. Right?
This is very suboptimal!
In Linux, it's ideal because a container is just another process, so it uses and releases system memory as containers start and stop.
Is my mental model correct?
Will Docker for Mac or Windows one day allocate CPU and memory resources dynamically?
The primary issue here is that, for the moment, Docker can only run Linux containers on Linux. That means that on OS X or Windows, Docker is running inside a Linux VM, and its ability to allocate resources is limited by the facilities provided by the virtualization software in use.
Of course, Docker can run natively on Windows, as long as you want to run Windows containers, and in that situation it may more closely match the Linux "a container is just a process" model.
It is possible that this will change in the future, but that's how things stand right now.
So that means that even when Docker is idle, my other programs will run on (N-3) CPU cores and (M-3)GB of memory. Right?
I suspect that's true for memory. I believe that if the Docker VM is idle it isn't actually using much in the way of CPU resources (that is, you are not dedicating CPUs to the VM; rather, you are setting an upper limit on how many resources the VM can consume).
A friend asked for a command that can be used to find the real system execution time for a program in Linux. I replied that the time command is a great one.
He then asked: is the execution time that time reports for a program running inside a virtual machine the same as the real system execution time of the program?
My instinct was to say it depends:
- If there are enough resources on the machine, then the VM time returned would be the same as the real/wall-clock time of the program execution.
- If the system does NOT have sufficient resources, then the VM time will differ considerably from the real (system) execution time. Moreover, the VM is an application on top of the OS, which has its own scheduler. This means that the VM has to issue system calls, which are processed by the OS, which in turn communicates with the hardware before a real (system) execution time can be reported. Hence, the time returned can differ from the real time in this situation.
- If the executed program is simple, then the VM time could be equal to the real (system) time.
- If the executed program is NOT simple, then the VM time could differ considerably.
Are my assumptions correct?
I now wonder: how could you find/calculate the real execution time of a program run on a virtual machine? Any ideas?
The complexity of the program doesn't matter.
The guest OS doesn't have visibility into the host. If you run 'time' on the guest, the 'sys' value returned describes only the guest's system resource usage.
Or to rephrase: in a typical virtual machine setup, you allocate only a portion of the host's CPU resources to the guest OS. That's all the guest can see, and thus that's all the 'time' command can report on. Since you allocated this when you launched the VM, the guest cannot exhaust all of the host's resources.
This is true of pretty much any VM: it would be a major security issue if the guest had knowledge of the hypervisor.
So yes, the sys time could absolutely differ between a VM and real hardware, because the guest won't have the full resources. You could also see variability depending on whether you're using hardware or software virtualization.
Some good reading here (sections 10.3 through 10.5):
https://www.virtualbox.org/manual/ch10.html#hwvirt
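If it helps to see exactly which numbers the guest is producing, here is a small sketch (my own illustration, not from the answer above) that reproduces the three values (real, user, sys) that the time command prints, from inside the process itself, using a placeholder workload. All three come from the guest kernel, which is exactly the point made above: inside a VM they describe guest resources only.

```c
/* what_time_reports.c - illustrative only: reproduce the three numbers
 * the `time` command prints (real, user, sys) from inside the process.
 * Inside a VM, all of these are measured by the guest kernel.
 * Build: gcc -O2 what_time_reports.c -o what_time_reports
 */
#include <stdio.h>
#include <sys/resource.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);       /* start of "real" (wall clock) */

    /* placeholder workload: some CPU work plus a sleep */
    volatile double x = 0.0;
    for (long i = 0; i < 100 * 1000 * 1000; i++)
        x += i;
    (void)x;
    sleep(1);                                  /* wall time with no CPU use */

    clock_gettime(CLOCK_MONOTONIC, &t1);

    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);               /* "user" and "sys" CPU time */

    double real = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;

    printf("real %.3f s, user %.3f s, sys %.3f s\n", real, user, sys);
    return 0;
}
```

Running this (or plain time) on the guest and on comparable bare metal, and comparing the gap between real and user+sys, is one practical way to get a feel for the overhead, given that the guest has no direct view of the host.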
What percentage of time does the CPU spend in user mode versus privileged mode for different programs/operations?
Different operations could be:
- running an application without I/O interaction.
- an application with I/O interaction, like copying a file to a USB drive.
I know for a fact that a network operating system spends most of its time in interrupt context. Does this hold true for a general-purpose OS like Ubuntu/Windows?
I'm not much of an OS expert, but I imagine it will depend a great deal on what background processes are running on the system. On any OS you might or might not be running some system (i.e. non-user) processes that are heavy resource users. Or you might have put some effort into stripping the system down so that very little CPU time is spent by the system on background maintenance.
If your question is how things compare for "clean" installations of these operating systems, then all I can tell you is that on my laptop running Ubuntu right now (running top from the command line to look at resource usage), only about 5-10% of CPU time is being used by non-user processes; in my case Xorg and compiz are the main ones. I don't really know how that compares to Windows, but I think most Linux users have a knee-jerk reaction that Windows is greedier for system resources than most Linux distros.
So, I guess the short answer is that I doubt there is a short answer to your question.
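If you want to measure the split yourself rather than eyeball top, the Linux kernel exposes cumulative per-mode tick counts in /proc/stat. Below is a rough sketch (my own illustration, Linux-only) that samples the aggregate cpu line twice and prints the share of non-idle time spent in user mode versus kernel/interrupt context over the interval.

```c
/* cpu_mode_split.c - illustrative only (Linux): sample /proc/stat twice and
 * report what fraction of non-idle CPU time went to user mode vs. kernel
 * mode (system + irq + softirq) over the interval.
 * Build: gcc -O2 cpu_mode_split.c -o cpu_mode_split
 */
#include <stdio.h>
#include <unistd.h>

/* Aggregate "cpu" line: user nice system idle iowait irq softirq steal */
static int read_cpu(unsigned long long v[8])
{
    FILE *f = fopen("/proc/stat", "r");
    if (!f)
        return -1;
    int n = fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
                   &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7]);
    fclose(f);
    return n == 8 ? 0 : -1;
}

int main(void)
{
    unsigned long long a[8], b[8];
    if (read_cpu(a) < 0)
        return 1;
    sleep(5);                  /* measurement interval; run your workload now */
    if (read_cpu(b) < 0)
        return 1;

    unsigned long long user   = (b[0] - a[0]) + (b[1] - a[1]);  /* user + nice */
    unsigned long long kernel = (b[2] - a[2]) + (b[5] - a[5]) + (b[6] - a[6]);
    unsigned long long busy   = user + kernel;

    if (busy == 0) {
        printf("CPU was essentially idle over the interval\n");
        return 0;
    }
    printf("user mode:   %.1f%%\n", 100.0 * user / busy);
    printf("kernel mode: %.1f%%\n", 100.0 * kernel / busy);
    return 0;
}
```

Running it once while the machine is idle and once while copying a large file to a USB drive should show the I/O-heavy case shifting a noticeably larger share into kernel mode.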
I have access to a machine to which I can ssh. How can I determine whether my OS is running in a fully-virtualized environment (where the VMM does binary translation), a para-virtualized one, or a non-virtualized one? I have some idea of how to go about it (some operations, like accessing a memory page or the disk, will take longer in a virtualized environment), but I don't know how to proceed.
It does depend on the VMM you are running on top of. If it's Xen or a Microsoft VM, I believe CPUID with an EAX value of 0x40000000 will give you a non-zero value in EAX. I'm not sure if that works on VMware, VirtualBox or KVM, but I expect it will work there too...
Measuring access times is unlikely to ALWAYS show you the truth, since in a non-VM system those can vary quite a lot as well, and there is no real reason you'd see a huge difference with an efficient implementation. And of course, you don't know whether your VM is running with a real hard-disk controller passed through via PCI, or whether your NFS-mounted disks are reached through a real network card passed through to the VM or through a virtual network card.
A good VMM shouldn't show you much difference as long as the application is behaving itself.
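As a concrete starting point for the CPUID suggestion above, here is a small sketch (my own illustration; x86, GCC/Clang) that checks the "hypervisor present" bit in leaf 1 and, if it is set, prints the vendor signature that common hypervisors (KVM, VMware, Xen, Hyper-V, VirtualBox) publish at leaf 0x40000000. Keep in mind that a hypervisor is free to hide this, so a positive result means "virtualized", but a negative one doesn't prove bare metal, and it doesn't by itself tell you full versus para-virtualization.

```c
/* detect_hv.c - illustrative only (x86, GCC/Clang): check the CPUID
 * "hypervisor present" bit and, if set, print the hypervisor vendor
 * signature from leaf 0x40000000.
 * Build: gcc -O2 detect_hv.c -o detect_hv
 */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Leaf 1: ECX bit 31 is defined as "running under a hypervisor". */
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID not supported\n");
        return 1;
    }
    if (!(ecx & (1u << 31))) {
        printf("no hypervisor bit (bare metal, or the hypervisor hides it)\n");
        return 0;
    }

    /* Leaf 0x40000000: EBX/ECX/EDX hold a 12-byte vendor signature,
     * e.g. "KVMKVMKVM", "VMwareVMware", "Microsoft Hv", "XenVMMXenVMM". */
    __cpuid(0x40000000, eax, ebx, ecx, edx);

    char sig[13];
    memcpy(sig + 0, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    sig[12] = '\0';

    printf("hypervisor signature: %s\n", sig);
    return 0;
}
```

On Linux you can often get the same yes/no answer without writing code by checking for the hypervisor flag in /proc/cpuinfo, which is set from the same CPUID bit.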
I'm trying to compare the performance of threaded programs (on Linux). Since the programs use different thread synchronization methods and different lock granularity, running the programs on a shared server or desktop would not be good, since the other tasks may interfere with the scheduling of my programs. I don't have dedicated hosts, so I thought that using qemu would be a good option.
What I want to know is:
- Are there any alternatives for this task?
- I suppose that there is no way to reproduce the scheduling done by the guest Linux system on qemu, if I need to? (Suppose my program runs unusually slow or fast -- I'd like to know whether I can run it again while keeping exactly the same scheduling for its threads.) Or is there a way?
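Not an answer about qemu itself, but one common way to reduce scheduling interference from other tasks without a dedicated host is to pin each benchmark thread to its own core. A rough Linux-only sketch (my own illustration, glibc-specific) using pthread_setaffinity_np:

```c
/* pin_threads.c - illustrative only (Linux/glibc): pin each benchmark
 * thread to its own CPU core so that other tasks interfere less with
 * its scheduling. The workload is a placeholder.
 * Build: gcc -O2 -pthread pin_threads.c -o pin_threads
 */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NTHREADS 4    /* placeholder: one per core you want to reserve */

static void *work(void *arg)
{
    long id = (long)arg;
    volatile double x = 0.0;
    for (long i = 0; i < 50 * 1000 * 1000; i++)   /* placeholder workload */
        x += i;
    (void)x;
    printf("thread %ld finished on CPU %d\n", id, sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    for (long i = 0; i < NTHREADS; i++) {
        pthread_create(&tid[i], NULL, work, (void *)i);

        /* Restrict thread i to CPU core i. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET((int)i, &set);
        pthread_setaffinity_np(tid[i], sizeof(set), &set);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```

Pinning does not make the thread interleaving reproducible; it only reduces outside interference, so some run-to-run variation from the scheduler itself will remain.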