Azure: multiple VMs in a single VM

I am wondering whether it is possible to run multiple VMs within a single VM. Basically I am doing some research for an upcoming university subject. I want to run a mini blockchain-style environment, and I need multiple VMs to test it.
What do you think is my best approach? A single cloud service with multiple machines? Or am I required to purchase every single VM that I will need?

I suspect you can't. The virtualization software requires access to specific CPU features which are not, in turn, virtualized inside a VM it creates. Thus, you have three options:
rent as many VMs as you need;
build a test environment yourself using any PC equipped with an Intel CPU supporting VT-x, or an AMD CPU supporting AMD-V;
run a full x86 emulator (as many instances as needed) on a physical or virtual PC. One such implementation is the open-source project Bochs.
The last option would have very big performance issues, of course.
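If you want to check whether a given machine (physical or virtual) actually exposes those CPU features, a minimal sketch like the one below is enough, assuming a Linux guest with a standard /proc/cpuinfo; inside most cloud VMs neither flag shows up, which is why a nested hypervisor refuses to start guests there.

```python
def virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    """Report whether the CPU advertises Intel VT-x ("vmx") or AMD-V ("svm")."""
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"vt-x": "vmx" in flags, "amd-v": "svm" in flags}
    return {"vt-x": False, "amd-v": False}

if __name__ == "__main__":
    print(virtualization_flags())
```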

Related

Most lightweight way to emulate a distributed system on Linux

So I am taking this distributed systems class in which projects are done by simulating a distributed system using Android and multiple emulators. This approach is terrible for multiple reasons:
Android emulators are so resource-hungry that my poor laptop mostly crashes.
Poor networking support between emulators; you need to do port forwarding over TCP and what not.
So what is a way to emulate a distributed system on my Linux machine that consumes minimal resources, mostly RAM and CPU time?
Is Docker the answer to all of this? Maybe create multiple containers with a separate IP for each? Is that even possible?
My team maintains several production distributed systems, and we have to unit test them in such a way that we can catch protocol bugs.
We have a stub implementation of the clock and of the network that we inject into our classes. The network mimics the message-passing model used in many distributed-systems papers: pick a message at random and deliver it. This models network latencies and inconsistencies very well. We have other things built in: the ability to block/release or drop messages to/from sets of hosts, and a simple TCP model.
With this simple addition our unit tests are now what we call interaction tests. We can very quickly add however many servers we want, all running in a single process on a laptop.
Oh, and after doing this, you'll know why global variables and singletons are a Bad Thing.
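For readers who want to try this approach, here is a minimal sketch of such a network stub, following the model described above (pick one in-flight message at random, deliver it, and allow traffic to or from chosen hosts to be blocked or dropped); the class and method names are illustrative, not the poster's actual API:

```python
import random

class SimulatedNetwork:
    """Message-passing network stub for in-process interaction tests."""

    def __init__(self, seed=0):
        self.in_flight = []               # (src, dst, payload) tuples
        self.blocked = set()              # hosts whose traffic is held back
        self.rng = random.Random(seed)    # seeded so failing runs replay exactly

    def send(self, src, dst, payload):
        self.in_flight.append((src, dst, payload))

    def block(self, host):
        self.blocked.add(host)

    def release(self, host):
        self.blocked.discard(host)

    def drop(self, src, dst):
        self.in_flight = [m for m in self.in_flight if (m[0], m[1]) != (src, dst)]

    def step(self, hosts):
        """Deliver one randomly chosen deliverable message; return False if none."""
        deliverable = [m for m in self.in_flight
                       if m[0] not in self.blocked and m[1] not in self.blocked]
        if not deliverable:
            return False
        msg = self.rng.choice(deliverable)
        self.in_flight.remove(msg)
        src, dst, payload = msg
        hosts[dst].on_message(src, payload)   # hosts are plain in-process objects
        return True
```

Because delivery order comes from a seeded RNG, a failing interleaving can be replayed exactly, which is what makes these interaction tests effective at catching protocol bugs.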
You can run several Docker containers on one Linux machine. Each container gets its own IP address and can talk to the other containers on the same host. How many systems do you want to simulate?
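As a sketch of what that looks like in practice, assuming a local Docker daemon and the Docker SDK for Python (pip install docker); the network name, image, and container count here are just examples:

```python
import docker  # Docker SDK for Python

client = docker.from_env()

# One user-defined bridge network; every container attached to it gets its
# own IP address and can reach the others by container name.
client.networks.create("distsim", driver="bridge")

nodes = [
    client.containers.run("alpine", "sleep 3600", name=f"node{i}",
                          network="distsim", detach=True)
    for i in range(5)
]

for node in nodes:
    node.reload()  # refresh attributes so the assigned IP is visible
    ip = node.attrs["NetworkSettings"]["Networks"]["distsim"]["IPAddress"]
    print(node.name, ip)
```

Containers this small use only a few megabytes of RAM each, a far cry from running several Android emulators.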

LoadRunner testing over different domains

I have been requested to use LoadRunner to do load testing. While my LR servers are all physical servers, I need to test a system that is not only on VMs, but which I will have to access through a VDI, AND the system under test is in a completely different secure domain (different OUs). This makes me believe there are going to be a large disparity and skewed performance results from all of the tokens and authentication that will have to happen. How can I measure this gap, if at all?
Are you trying to assess performance for the application or the VDI terminal interface? This is an important consideration as to your path.
It does not matter to your load generators whether the target AUT (application under test) runs on a VM or on physical hardware. It does, however, affect your monitoring strategy: collect monitoring stats from the target AUT's hypervisor rather than from the VM guest OS.
You can have load generators and monitors behind a firewall. Take a look at the documentation for a path on this.

Azure changing hardware

I have a product which uses the CPU ID, network MAC address, and disk volume serial number for validation. Basically, when my product is first installed these values are recorded, and then when the app is loaded up, the current values are compared against the old ones.
Something very mysterious happened recently. Inside an Azure VM that had not been restarted in weeks, my app failed to load because some of these values were different. Unfortunately, the person who caught the error deleted the VM before it was brought to my attention.
My question is, when an Azure VM is running, what hardware resources may change? Is that even possible?
Thanks!
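For reference, the validation reads values of roughly this shape. This is a simplified, hypothetical sketch rather than the actual product code, and the volume-serial call is Windows-specific:

```python
import platform
import subprocess
import uuid

def fingerprint():
    """Collect the kind of hardware identifiers described above (illustrative only)."""
    mac = uuid.getnode()           # may fall back to a random value if no MAC is readable
    cpu = platform.processor()     # coarse CPU identification string
    try:
        # Windows-only: "vol C:" prints the volume serial number of drive C:.
        vol = subprocess.check_output(["cmd", "/c", "vol", "C:"], text=True)
    except OSError:
        vol = "unavailable"
    return {"mac": mac, "cpu": cpu, "volume": vol}
```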
Answering this requires a short rundown of how Azure works.
In each data centre there are thousands of individual machines. Each machine runs a hypervisor which allows a number of operating systems to share the same underlying hardware.
When you start a role, Azure looks for available resources - disk space, CPU, RAM, etc. - and boots up a copy of the appropriate OS VM in those available resources. I understand from your question that this is a VM role, so this VM is the one you uploaded or created.
As long as your VM is running, the underlying virtual resources provided by the hypervisor are not likely to change. (The caveat to this is that Windows Server 2012's hypervisor can move virtual machines around over the network even while they are running. Whether Azure takes advantage of this, I don't know.)
Now, Azure keeps charging you even when your role has stopped, because it considers your role "deployed". So in theory, those underlying resources still "belong" to your role.
This is not guaranteed. Azure could decide to boot up your VM on a different set of virtualized hardware for any number of reasons - hardware failure being at the top of the list, with insufficient capacity being second.
It is even possible (though unlikely) for your resources to be provided by different hardware nodes.
An additional point of consideration is that it is Azure policy that disaster recovery (or another major event) may include transferring your roles to run in a separate data centre entirely.
My point is that the underlying hardware is virtual and treating it otherwise is most unwise. Roles are at the mercy of the Azure Management Routines, and we can't predict in advance what decisions they may make.
So the answer to your question is that ALL of the underlying resources may change. And it is very, very possible.

Multi-threading in a VMware virtual machine environment

We scale our single-threaded app by running it in separate VMs - each instance is configured to work on a particular partition of the overall workload. An idea has been circulating that we can get better performance by adding threads to some parts of the app, though we would not be eliminating the current VM dependence.
Is architecting threading for an app that has been designed for a VM environment different than for an app designed for a non-VM environment? My primary concern is that for every thread designed into the app, the actual number of threads that may be spun up per physical machine is a function of the number of VM instances running on that machine, and this may actually lead to performance degradation.
Thanks in advance.
Edit: By VM above I mean a virtual machine as provided by VMware.
I think your concerns about "performance degradation" are warranted. If you are running multiple VMs on a machine and add multiple threads to the VMs, you are most likely only going to be increasing the context switching - not getting more work out of a VM.
It depends a lot on the jobs you are running, of course. If they are IO-bound, then adding threads may give you better parallelization. However, if they are CPU/computation-bound, then you will most likely not get a win and will most likely see a drop in performance.
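A quick way to see this on any given VM is to time the same number of jobs with different thread counts for a CPU-bound task versus an IO-bound one. The sketch below is a rough illustration rather than a benchmark of the poster's app (and in CPython the GIL adds its own serialization on top of any vCPU contention):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def cpu_bound(n=200_000):
    # Pure computation: threads compete for the few vCPUs the VM has
    # (and, in CPython, for the GIL), so adding threads rarely helps here.
    return sum(i * i for i in range(n))

def io_bound(delay=0.2):
    # Simulated IO wait: threads overlap the waiting, so wall time shrinks.
    time.sleep(delay)

def timed(func, workers, jobs=8):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(lambda _: func(), range(jobs)))
    return time.perf_counter() - start

if __name__ == "__main__":
    for workers in (1, 4):
        print(f"{workers} worker(s): "
              f"cpu={timed(cpu_bound, workers):.2f}s "
              f"io={timed(io_bound, workers):.2f}s")
```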
Is architecting threading for an app that has been designed for a vm environment different than for an app designed for a non-vm environment?
Not IME, but then I don't tend to write CPU-intensive apps - I most often thread off work to get stuff out of the GUI and to simplify the design for multiple users/clients. I just design the apps as if I were on a native OS.
I don't know how the threads are mapped. I have an XP VM running now. The XP Task Manager shows 518 threads; the host (Vista 64) Task Manager shows only 11 threads for 'VMware Workstation VMX', though there are some 22 other threads for the NAT Service, VMnet DHCP, the tray process, etc. I have 2 'processors' assigned to the VM to give any multithreading bugs more chance of showing up.

Deploying Projects on EC2 vs. Windows Azure

I've been working with Windows Azure and Amazon Web Services EC2 for a good many months now (almost getting into the years range) and I've seen something over and over that seems troubling.
When I deploy a .NET build into a Windows Azure web role (or service role), it usually takes 6-15 minutes to start up. In AWS EC2 it takes about the same to start up the image, and then a minute or two to deploy the app to IIS (pending, of course, its setup).
However, when I boot up an AWS instance with SUSE Linux & Mono to run .NET, I get one of these booted and code deployed to it in about 2-3 minutes (again, pending it is set up).
What is going on with Windows OS images that causes them to take so long to boot up in the cloud? I don't want FUD; I'm curious about the specific details of what goes on that causes this. Any specific technical information regarding this would be greatly appreciated! Thanks.
As announced at PDC, Azure will soon start to offer full IIS on Azure web roles. Somewhere in the keynote demo by Don Box, he showed that this allows you to use the standard "publish" options in Visual Studio to deploy to the cloud very quickly.
If I recall correctly, part of what happens when starting a new Azure role is configuring the network components, and I remember a speaker at a conference mentioning once that that was very time-consuming. This might explain why adding additional instances to an already running role is usually faster (but not always: I have seen this take much more than 15 minutes as well on occasion).
Edit: also see this PDC session.
I don't think the EC2 behavior is specific to the cloud. Just compare the boot times of Windows and Linux on a local system - in my experience, Linux just boots faster. Typically, this is because the number of services/daemons launched is smaller, as is the number of disk accesses that each of them needs to make during startup.
As for Azure launch times: it's difficult to tell, and not comparable to machine boots (IMO). Nobody knows what Azure does when launching an application. It might be that they need to assemble the VM image first, or that a lot of logging/reporting happens that slows down things.
Don't forget, there is a Fabric controller that needs to check for fault zones and deploy your VMs across multiple fault zones (to give you high availability, at least when there are more than two instances). I can't say for sure, but that logic itself might take some extra time. This might also explain why network setup could be a little complicated.
This would of course explain the difference (if any) between boot times in the cloud and boot times for Windows locally or on Amazon. Any difference between operating systems is completely dependent on the way the OS is built!
