Physical Cores vs Virtual Cores in Parallelism - multithreading

When it comes to virtualization, I have been deliberating on the relationship between physical cores and virtual cores, especially in how it affects applications that employ parallelism. For example, in a VM scenario, if there are fewer physical cores than virtual cores (assuming that's possible), what effect or limits does that place on the application's parallel processing? I'm asking because, in my environment, the physical architecture is not disclosed. Is there still much advantage to parallelizing if the application lives on a dual-core VM hosted on a single-core physical machine?

Is there still much advantage to parallelizing if the application lives on a dual core VM hosted on a single core physical machine?
Always.
OS-level parallel processing (e.g., Linux pipelines) will improve performance dramatically, irrespective of how many cores -- real or virtual -- you have.
Indeed, you have to create fairly contrived problems or really dumb solutions to not see performance improvements from simply breaking a big problem into lots of smaller problems along a pipeline.
Once you've got a pipelined solution, and it ties up 100% of your virtual resources, you have something you can measure.
Start trying different variations on logical and physical resources.
But only after you have an OS-level pipeline that uses up every available resource. Until then, you've got fundamental work to do just creating a pipeline solution.
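To make the idea concrete, here is a minimal sketch (not the poster's code) of splitting one big job into pipeline stages that run as separate OS processes connected by queues, the same shape as a shell pipeline; the stage functions and sizes are made up for illustration. Even on a single core, the stages can overlap I/O waits with computation.

# Minimal pipeline sketch: three stages as separate OS processes,
# analogous to `producer | transform | consumer` in a shell.
from multiprocessing import Process, Queue

SENTINEL = None  # marks end of the stream

def produce(out_q):
    for i in range(100_000):          # stand-in for reading input
        out_q.put(i)
    out_q.put(SENTINEL)

def transform(in_q, out_q):
    while (item := in_q.get()) is not SENTINEL:
        out_q.put(item * item)        # stand-in for the real per-item work
    out_q.put(SENTINEL)

def consume(in_q):
    total = 0
    while (item := in_q.get()) is not SENTINEL:
        total += item                 # stand-in for writing output
    print(total)

if __name__ == "__main__":
    q1, q2 = Queue(maxsize=1000), Queue(maxsize=1000)
    stages = [Process(target=produce, args=(q1,)),
              Process(target=transform, args=(q1, q2)),
              Process(target=consume, args=(q2,))]
    for p in stages:
        p.start()
    for p in stages:
        p.join()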

Since you included the F# tag, and you're interested in parallel performance, I'll assume that you're using F# asynchronous IO, so threads never block; they just swap between CPU-bound tasks.
In this case it's ideal to have the same number of threads as virtual cores (at least based on my experiments with F# on Ubuntu under VirtualBox hosted by Windows 7). Having more threads than that decreases performance slightly; having fewer decreases performance quite a bit.
Also, having more virtual cores than physical cores decreases performance a little. But if this is something you can't control, just make sure you have an active worker thread for each virtual core.
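The same sizing rule can be expressed in any runtime: keep one active worker per core the VM exposes. Below is a minimal sketch (Python rather than F#, purely for illustration; the per-chunk work function is hypothetical). It uses processes rather than threads because CPython threads will not run CPU-bound work in parallel.

# Rule of thumb from the answer above: one active worker per virtual core.
import os
from concurrent.futures import ProcessPoolExecutor

def cpu_bound_chunk(bounds):
    # Hypothetical stand-in for the real per-chunk computation.
    lo, hi = bounds
    return sum(x * x for x in range(lo, hi))

if __name__ == "__main__":
    n_workers = os.cpu_count() or 1           # cores the VM exposes to the guest OS
    step = 100_000
    chunks = [(i, i + step) for i in range(0, 2_000_000, step)]
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(cpu_bound_chunk, chunks))
    print(sum(results))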

Related

Why Linux distributes threads among NUMA nodes almost equally?

I'm running an application with multiple threads and it seems Linux is distributing the threads among NUMA nodes almost equally. Say my application spawns 4 threads and my machine has 4 sockets. I observe that each thread is assigned to a different NUMA node, distributing the threads among all nodes almost equally.
Is there any reason for this? Why not assign them all to one socket and then fill the next one?
The best binding for an application depends on what the application does. It is often a good idea to spread threads across NUMA nodes so as to maximize memory throughput, since all NUMA nodes can theoretically be used in this case (assuming the application is well written and NUMA-aware). If all threads are bound to the same NUMA node, then only the memory of that node can be accessed efficiently (access to the memory of other NUMA nodes is possible but slower, and pages will not automatically be mapped efficiently due to the first-touch policy, which is generally the default on most machines). When some threads communicate a lot, it is often better to put them on the same NUMA node so as not to pay latency overheads. In some cases, it can even be better to put them on the same core (but different hardware threads) so as to speed up synchronization operations like locks and atomics.
If you want the scheduling and the binding to be efficient, you need to provide more information to the OS or do it yourself. I strongly advise you to bind threads to specific cores. This is easy with HPC runtimes/tools like OpenMP (but a pain if your application uses low-level threads, unless you do not care about platform portability). As for NUMA, you can specify the policy using numactl. More information is provided in this answer.
In practice, HPC applications generally use manual binding to improve performance. OS schedulers are generally not very good at binding threads automatically and efficiently. A few years ago there were even bugs in the scheduler causing inefficient behaviour: see The Linux Scheduler: a Decade of Wasted Cores. To my knowledge, such problems are not uncommon in this field and are not restricted to Linux. Efficient NUMA-aware OS scheduling is far from easy.
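For illustration only, here is a minimal, Linux-only sketch of manual thread-to-core binding using os.sched_setaffinity (the workload function is hypothetical). In practice, as the answer says, OpenMP binding controls or numactl are the usual tools; this just shows the concept.

# Minimal Linux-only sketch of manual thread-to-core binding.
# os.sched_setaffinity(0, ...) pins the *calling* thread when given pid 0.
import os
import threading

def busy_loop():
    total = 0
    for i in range(10_000_000):   # hypothetical per-thread workload
        total += i
    return total

def worker(core_id, work):
    os.sched_setaffinity(0, {core_id})   # bind this thread to one core
    work()

if __name__ == "__main__":
    cores = sorted(os.sched_getaffinity(0))      # cores we are allowed to use
    threads = [threading.Thread(target=worker, args=(c, busy_loop))
               for c in cores[:4]]               # e.g. pin 4 threads to 4 cores
    for t in threads:
        t.start()
    for t in threads:
        t.join()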

Azure VM pricing - Is it better to have 80 single core machines or 10 8-core machines?

I am limited by a piece of software that utilizes a single core per instance of the program run. It will run off an SQL server work queue and deposit results to the server. So the more instances I have running the faster the overall project is done. I have played with Azure VMs a bit and can speed up the process in two ways.
1) I can run the app on a single core VM, clone that VM and run it on as many as I feel necessary to speed up the job sufficiently.
OR
2) I can run the app 8 times on an 8-core VM, ...again clone that VM and run it on as many as I feel necessary to speed up the job sufficiently.
I have noticed in testing that the speed-up is roughly the same for adding 8 single-core VMs as for adding 1 8-core VM. Assuming this is true, would it be better price-wise to have single-core machines?
The pricing is a bit of a mystery, whether it is real cpu usage time, or what. It is a bit easier using the 1 8-core approach as spinning up machines and taking them down takes time, but I guess that could be automated.
It does seem from some pricing pages that the multiple single core VM approach would cost less?
Side question: so could I do like some power shell scripts to just keep adding VMs of a certain image and running the app, and then start shutting them down once I get close to finishing? After generating the VMs would there be some way to kick off the app without having to remote in to each one and running it?
I would argue that all else being equal, and this code truly being CPU-bound and not benefitting from any memory sharing that running multiple processes on the same machine would provide, you should opt for the single core machines rather than multi-core machines.
Reasons:
Isolate fault domains
Scaling out rather than up is better to do when possible because it naturally isolates faults. If one of your small nodes crashes, that only affects one process. If a large node crashes, multiple processes go down.
Load balancing
Windows Azure, like any multi-tenant system, is a shared resource. This means you will likely be competing for CPU cycles with other workloads. Having small VMs gives you a better chance of having them distributed across physical servers in the datacenter that have the best resource situation at the time the machines are provisioned (you would want to make sure to stop and deallocate the VMs before starting them again to allow the Azure fabric placement algorithms to select the best hosts). If you used large VMs, it would be less likely to find a suitable host with optimal contention to accommodate many virtual cores.
Virtual processor scheduling
It's not widely understood how scheduling a virtual CPU is different from scheduling a physical one, but it is something worth reading up on. The main thing to remember is that hypervisors like VMware ESXi and Hyper-V (which runs Azure) schedule multiple virtual cores together rather than separately. So if you have an 8-core VM, the physical host must have 8 physical cores free simultaneously before it can allow the virtual CPU to run. The more virtual cores, the less likely the host will have sufficient physical cores free at any given time (even if 7 physical cores are free, the VM cannot run). This can result in a paradoxical effect of causing the VM to perform worse as more virtual CPU cores are added to it. http://www.perfdynamics.com/Classes/Materials/BradyVirtual.pdf
In short, a single vCPU machine is more likely to get a share of the physical processor than an 8 vCPU machine, all else equal.
And I agree that the pricing is basically the same, except for a little more storage cost to store many small VMs versus fewer large ones. But storage in Azure is far less expensive than the compute, so likely doesn't tip any economic scale.
Hope that helps.
Billing
According to Windows Azure Virtual Machines Pricing Details, Virtual Machines are charged by the minute (of wall clock time). Prices are listed as hourly rates (60 minutes) and are billed based on total number of minutes when the VMs run for a partial hour.
In July 2013, 1 Small VM (1 virtual core) costs $0.09/hr; 8 Small VMs (8 virtual cores) cost $0.72/hr; 1 Extra Large VM (8 virtual cores) costs $0.72/hr (same as 8 Small VMs).
VM Sizes and Performance
The VM sizes differ not only in number of cores and RAM, but also in network I/O performance, ranging from 100 Mbps for Small to 800 Mbps for Extra Large.
Extra Small VMs are rather limited in CPU and I/O power and are inadequate for workloads such as the one you described.
For single-threaded, I/O bound applications such as described in the question, an Extra Large VM could have an edge because of faster response times for each request.
It's also advisable to benchmark workloads running 2, 4 or more processes per core. For instance, 2 or 4 processes in a Small VM and 16, 32 or more processes in an Extra Large VM, to find the adequate balance between CPU and I/O loads (provided you don't use more RAM than is available).
Auto-scaling
Auto-scaling of Virtual Machines is built into Windows Azure directly. It can be based either on CPU load or on Windows Azure Queue length.
Another alternative is to use specialized tools or services to monitor load across the servers and run PowerShell scripts to add or remove virtual machines as needed.
Auto-run
You can use the Windows Scheduler to automatically run tasks when Windows starts.
The pricing is "Uptime of the machine in hours * rate of the VM size/hour * number of instances"
e.g. You have an 8-core VM (Extra Large) running for a month (30 days):
(30 * 24) * $0.72 * 1 = $518.40
For 8 single-core VMs it will be:
(30 * 24) * $0.09 * 8 = $518.40
So I doubt there will be any price difference. One advantage of using smaller machines and "scaling out" is that you have more granular control over scalability. An Extra Large machine will eat more idle dollars than 2-3 small machines.
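For completeness, the same arithmetic as a tiny script (the rates are the July 2013 figures quoted earlier in this thread, not current prices):

# Quick check of the billing arithmetic, using the July 2013 rates quoted above.
HOURS_PER_MONTH = 30 * 24

def monthly_cost(hourly_rate, instances):
    return HOURS_PER_MONTH * hourly_rate * instances

extra_large = monthly_cost(0.72, 1)   # one 8-core (Extra Large) VM
small_x8    = monthly_cost(0.09, 8)   # eight single-core (Small) VMs
print(extra_large, small_x8)          # 518.4 518.4 -> same raw compute cost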
Yes, you can definitely script this. Assuming they are IaaS machines, you could add the script to Windows startup; if on PaaS, you could use the "Startup Task".

Should I disable NUMA if my application is not NUMA-Aware

Running Windows Server 2008 R2 SP1. The application I'm running was not designed with NUMA in mind. Would it be better to disable NUMA on my dual-socket system? My guess is yes, but I wanted to confirm. My server is a Westmere dual-socket system.
If your application is not multithreaded, or is multithreaded but does not employ the threads to work simultaneously on the same problem (e.g. is not data parallel), then you can simply bind the program to one of the NUMA nodes. This can be done with various tools, e.g. with the "Set Affinity..." context menu command in Windows Task Manager. If your program is parallel, then you can still use half of the available processor cores and bind to one NUMA node.
Note that remote memory accesses on Westmere systems are not that expensive - the latency is 1.6x higher than local access and the bandwidth is almost the same as the local one - so if you do a lot of processing on each memory value, the impact would be minimal. On the other hand, disabling NUMA on such systems results in a fine-grained interleaving of both NUMA domains, which makes all applications perform equally badly, as roughly 50% of all memory accesses will be local and 50% will be remote.
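If you want the same effect as "Set Affinity..." without clicking through Task Manager, here is a minimal sketch using the third-party psutil package; the core numbers for node 0 are an assumption and need to be checked against your actual topology (e.g. with coreinfo on Windows or numactl --hardware on Linux).

# Minimal sketch: bind the current process to the cores of one NUMA node,
# the scripted equivalent of Task Manager's "Set Affinity..." command.
# Requires the third-party psutil package; the core list below is an
# assumption -- check your node-to-core mapping first.
import psutil

NODE0_CORES = [0, 1, 2, 3, 4, 5]           # hypothetical cores of NUMA node 0

p = psutil.Process()                        # the current process
p.cpu_affinity(NODE0_CORES)                 # restrict scheduling to node 0's cores
print("running on cores:", p.cpu_affinity())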
If I understand correctly, turning NUMA on cannot harm the performance.
If your application is not NUMA aware, accesses will be managed by the OS, so might be across NUMA nodes or might be on the same one - depending on what other pressures the OS has, how much memory / CPU you're using, etc. The OS will try to get your data fast.
If you have it turned off, the OS doesn't know enough to even try to keep each application's execution CPU close to its memory.

Multi-threading in a VMWare virtual machine environment

We scale our single-threaded app by running it in separate VMs - each instance is configured to work on a particular partition of the overall workload. An idea has been circulating that we could get better performance by adding threads to some parts of the app, though we would not be eliminating the current VM dependence.
Is architecting threading for an app that has been designed for a VM environment different than for an app designed for a non-VM environment? My primary concern is that, for every thread designed into the app, the actual number of threads that may be spun up per machine is a function of the number of VM instances running on the machine, and may actually lead to performance degradation.
Thanks in advance.
Edit: By VM above I mean a virtual machine as provided by VMware.
I think your concerns about "performance degradation" are warranted. If you are running multiple VMs on a machine and add multiple threads to the VMs, you are most likely going to be increasing the context switching only -- not getting more work out of a VM.
It depends a lot on the jobs you are running, of course. If they are IO-bound, then adding threads may give you better parallelization. However, if they are CPU/computation-bound, then you will most likely not see a win, and may even see a drop in performance.
Is architecting threading for an app that has been designed for a VM environment different than for an app designed for a non-VM environment?
Not IME, but then I don't tend to write CPU-intensive apps - I most often thread off to get stuff out of the GUI and to simplify design for multiple users/clients. I just design the apps as if I am on a native OS.
I don't know how the threads are mapped. I have an XP VM running now. The XP Task Manager shows 518 threads; the host (Vista 64) Task Manager shows only 11 threads for 'VMware Workstation VMX', though there are some 22 other threads for the NAT Service, VMnet DHCP, Tray Process, etc. I have 2 'processors' assigned to the VM to give any multithreading bugs more chance of showing up.

Testing performance of parallel programs on a single core machine

I would like to start playing with concurrency in the programs I write (mostly for fun), but I don't own a multi-core system and can't afford one any time soon. I run Linux. Is there a way, for example with a virtual machine, to compare the performance of a multi-threaded implementation of a program with a single-threaded version, without actually running it on hardware with multiple processors or cores?
That is, I would like to be able to implement parallel algorithms and be able to say that, yes, this multithreaded implementation is better-performing than the single-threaded.
Thanks
You cannot test multithreaded programs reliably on a single-core machine. Race conditions will show up very differently or even be totally hidden on a single-core machine. The performance will decrease, etc.
If you want to LEARN how to program with multiple threads, you can do so on a single-core machine for the first steps (i.e., how the API works, etc.). But you'll have to test on a multicore machine, and it's very likely that you will see faults on a multicore machine that you don't see on a single-core machine.
Virtual machines are, in my experience, no help with this. They introduce new bugs that didn't show up before, but they CAN'T simulate real concurrency with multiple cores.
Depending on what you're benchmarking you might be able to use an Amazon EC2 node. It's not free, but it's cheaper than buying a computer.
If you have only one core/CPU and your algorithm is CPU-intensive, you will probably see that the multi-threaded program is actually slower than the single-threaded one. But if the program uses I/O in one thread and the CPU in another, for example, then you can see the multi-threaded program run faster.
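A minimal sketch of that contrast (illustrative workloads, not a rigorous benchmark): overlapping an I/O wait with CPU work pays off even on one core, whereas two CPU-bound threads on one core would not.

# Threads help when part of the work is waiting on I/O, not when everything
# is CPU-bound; this holds even on a single core.
import threading
import time

def cpu_work():
    total = 0
    for i in range(5_000_000):
        total += i * i

def io_work():
    time.sleep(1.0)                       # stand-in for disk/network waiting

def run(parallel):
    start = time.perf_counter()
    if parallel:
        t = threading.Thread(target=io_work)
        t.start()
        cpu_work()                        # CPU runs while the other thread waits
        t.join()
    else:
        io_work()
        cpu_work()
    return time.perf_counter() - start

if __name__ == "__main__":
    print("sequential:", run(parallel=False))
    print("threaded:  ", run(parallel=True))   # faster even on one core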
To observe effects other than potentially improved locality, you'll need hardware or a simulator that actually models the communication/interaction that occurs when the program runs in parallel. There's no magic to be had.
