NPIV support on RHEL Hypervisor (KVM) - linux

We are planning to set up a RHEL KVM hypervisor on a physical machine and then create multiple Linux VMs on that hypervisor.
In order to directly map/present storage LUNs to these VMs, we are planning to use KVM's NPIV technology.
I want to clarify a few things:
Is KVM's NPIV the same as the NPIV technology in Windows Server 2012 Hyper-V, where, once the NPIV environment is set up correctly, storage LUNs can be mapped directly to a VM using a virtual WWN?
Which guest VM OS types are supported for NPIV (RHEL, SUSE, etc.)?
Are there particular network (Fibre Channel) switch types that support KVM's NPIV technology?
It would be great if somebody knows the answers to these questions and can share them.
Thanks,
Nitin

1) NPIV is an ANSI standard, so yes, it is the same idea as what 2012 Hyper-V uses.
2) I'm looking for this info as well and will update if I get it. http://www.linuxtopia.org/online_books/rhel6/rhel_6_virtualization/rhel_6_virtualization_chap-Para-virtualized_Windows_Drivers_Guide-N_Port_ID_Virtualization_NPIV.html seems to indicate it's agnostic to the guest operating system, but I have a couple of contacts I'm asking for details.
3) Brocade and Cisco FC switches both support it.
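For what it's worth, on the KVM side the virtual WWN comes from a virtual HBA (vHBA) created through libvirt. Below is a minimal sketch using the libvirt Python bindings; the parent adapter name scsi_host5 is an assumption for illustration - find your NPIV-capable HBA with `virsh nodedev-list --cap vports`. If no WWNN/WWPN is given, libvirt generates the virtual WWNs for you.

```python
import libvirt

# XML for a virtual HBA (vHBA). The parent must be an NPIV-capable
# physical HBA; 'scsi_host5' is a placeholder for your environment.
# With no <wwnn>/<wwpn> elements, libvirt auto-generates virtual WWNs.
VHBA_XML = """
<device>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'/>
  </capability>
</device>
"""

conn = libvirt.open('qemu:///system')  # local KVM host
try:
    vhba = conn.nodeDeviceCreateXML(VHBA_XML, 0)
    print('Created vHBA:', vhba.name())
finally:
    conn.close()
```

LUNs zoned to the resulting virtual WWPN can then be attached to a guest as disks. Note that a vHBA created this way does not persist across host reboots unless you also define a libvirt storage pool on top of it.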


Are Linux VMs installed on a Windows host in VirtualBox "real" Linux?

I have VirtualBox on my Windows 7 machine, and recently installed a Red Hat Linux VM. I'm planning to learn Linux programming with some low-level stuff, such as kernel function calls and assembly.
My question is: is my Red Hat VM a "real" Linux environment for my purposes? I would guess that whatever I do in the VM is done by a "Linux simulator" in VirtualBox, and that under the hood the "Linux simulator" still does its job using functionality provided by the Windows host (e.g. Windows function calls). Is this true?
VirtualBox is not a "Linux simulator"; it is a "computer simulator". OS selections within such a simulator are for the purpose of deciding which virtual devices to make visible, not for running a different simulator "core".
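One quick way to convince yourself of this: ask the kernel inside the VM to identify itself. A tiny sketch:

```python
import os

# Run inside the Red Hat guest: this reports the guest's own Linux
# kernel, not anything provided by the Windows host. Your syscalls
# are served by a real Linux kernel running on virtualized hardware.
info = os.uname()
print(info.sysname, info.release, info.machine)
```

The output names a genuine Linux kernel version, which is exactly what your kernel-level and assembly experiments will be talking to.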
I think you should dual-boot Linux instead of using a VM, because not only does it save resources (it keeps the computer from slowing down), but it also gives you better functionality and hardware support.
Edit:
You can also use a live CD (or USB).

Installing CloudStack on a Virtual Machine

I want to install CloudStack 4.2.0 on my 32-bit Ubuntu in VirtualBox. Is it possible?
And what are the advantages/disadvantages of this compared to a real machine?
Thanks.
I presume that you're talking about running the Apache CloudStack management server in a 32-bit virtual machine that runs in Virtual Box.
To do anything meaningful with CloudStack, you need at least one hypervisor to control. To avoid the need for hardware, many CloudStack developers use DevCloud. DevCloud comes with configuration scripts that make it easier for a beginner to set up the Apache CloudStack management server.
One issue is memory. If the O/S running VirtualBox is 32-bit, you'll only have 3 gigs of RAM for user processes. Of this, DevCloud will use about 2 gigs, so memory can be quite tight.
Another issue is networking. Make sure that there is a network path from the management server to the hypervisors it is meant to control and the storage that it will use for templates (aka secondary storage).
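A quick way to sanity-check those paths from the management server is a plain TCP probe. A minimal sketch, where the hostnames and ports are placeholders for your own hypervisor and secondary storage:

```python
import socket

# Placeholder endpoints - substitute your hypervisor's and
# secondary-storage server's addresses and service ports.
ENDPOINTS = [
    ('hypervisor.example.com', 22),  # e.g. SSH to a KVM host
    ('nfs.example.com', 2049),       # NFS secondary storage
]

for host, port in ENDPOINTS:
    try:
        socket.create_connection((host, port), timeout=3).close()
        print(f'OK      {host}:{port}')
    except OSError as err:
        print(f'BLOCKED {host}:{port} ({err})')
```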
Yes, you can deploy Apache CloudStack on a virtual machine; you can even deploy a whole CloudStack infrastructure on virtual machines, provided that you have enough RAM.
You can deploy the primary storage, secondary storage, MySQL server, and CloudStack management server on virtual machines; however, the host VMs that will provide the execution environment for your CloudStack instances need nested virtualization, which is not available in VirtualBox, so use VMware Workstation instead as your type-2 hypervisor (a quick check is sketched below).
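Here is that quick check as a minimal sketch: run it inside the Linux VM that is meant to act as a CloudStack host.

```python
# Look for the Intel (vmx) or AMD (svm) virtualization flag in
# /proc/cpuinfo. If neither appears inside the VM, the outer
# hypervisor is not exposing hardware-assisted virtualization,
# and KVM guests will not run here.
flags = set()
with open('/proc/cpuinfo') as f:
    for line in f:
        if line.startswith('flags'):
            flags.update(line.split(':', 1)[1].split())

if {'vmx', 'svm'} & flags:
    print('Hardware virtualization is exposed to this VM.')
else:
    print('No vmx/svm flag: nested virtualization is unavailable here.')
```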
Good luck.

How to run a Hyper-V host on a Windows Server 2008 R2 VM on ESXi?

I have a server with ESXi. One of the virtual machines runs Windows Server 2008 R2, and I want to host Hyper-V on it.
First of all: is it possible? I keep being told that no, it is not, due to hardware requirements, namely hardware enabled virtualization capabilities.
Second of all: if it is possible, how can I configure my VM to do so?
Thanks in advance!
Running a VM inside a VM is usually not possible due to hardware limitations. The exception is a "pure emulation" implementation (like Bochs). But even when it is possible, it is not recommended due to severe performance degradation.
Hyper-V requires a CPU that supports hardware-assisted virtualization. Commercial hypervisors like ESXi do not emulate the CPU features that Hyper-V requires.
ESXi can expose those CPU features to guests now.
I am also looking for how to launch a VirtualBox hypervisor (with those features) inside ESXi with a 2k8 host.

Running VMware in VMware?

We have a physical machine that runs VMware and hosts a VM we use for SharePoint deployment testing. That machine is old and dying, and my employer's network czars are heavily pushing hosted VMs as a replacement for outdated physical servers. I was curious about whether it's possible to run VMware inside VMware, and if so, whether there are severe performance implications. We don't require extreme performance from this setup, since it's just used for SharePoint testing and the associated SQL Server is on a different box. My guess is that we can't just use the primary hosted VM for our testing because we'll want to roll back occasionally and otherwise have more control over it, and getting buy-in for that from the network folks is unlikely. Does anyone have any experience with this?
edit: I know this nesting certainly isn't the preferred option, but (1) we want the flexibility of being able to use VMware snapshots at will and (2) the network folks will not allow us to arbitrarily roll back to a previous point in time because of the potential for removing mandated security updates. My guess is that a local desktop machine running VMware Workstation might just be the way to go. The hosted option seems attractive if it will work though since it's less machine maintenance for me to deal with.
The technical limitation with running VMware inside VMware is that VMware, Virtual PC, etc. take advantage of the virtualization features present in modern CPUs.
If two or more hypervisors are both trying to control Ring 0, there will be problems. This is something I've encountered while trying to run both VMware and Virtual PC simultaneously on my desktop - one will error out or crash.
If your hypervisor can interact with the 'parent' hypervisor, then you'll be OK. Alternatively, if the child hypervisor doesn't try to use the CPU virtualization features, or entirely emulates the CPU (such as QEMU), then you should also be OK.
Basically, old-style hypervisors on old CPUs use full virtualization (slow), which is capable of nesting with a heavy, heavy performance hit. Modern hypervisors/CPUs use hardware-assisted virtualization (near-native performance), and you'd be hard-pressed to find a hypervisor that is designed for, or capable of, nested virtual machines.
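As an aside, a guest can usually tell that a hypervisor owns the hardware: on Linux, the CPUID 'hypervisor' bit shows up in /proc/cpuinfo. A small sketch:

```python
# The 'hypervisor' flag mirrors CPUID leaf 1, ECX bit 31, which
# hypervisors set so that guests know they are virtualized.
with open('/proc/cpuinfo') as f:
    virtualized = any(
        line.startswith('flags') and 'hypervisor' in line.split()
        for line in f
    )
print('Running under a hypervisor' if virtualized else 'Likely bare metal')
```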
Finally, I'd really advise against running dev/test VMs on the same physical server that is running production VMs. There's just too much that can go wrong, plus the security implications - you need to manage the dev/test environment, and it sounds like you shouldn't have access to the production environment. Likewise, you probably don't want the operations team messing about with your test environment.
UPDATE: ESXi 4 now supports virtualizing itself. See this article for more information
I've never run VMware in VMware, but I've run VirtualPC inside VirtualBox without problems, so there's no fundamental reason it shouldn't work I suppose...
It sounds to me more like you have a problem with the inflexibility of your "network czars" than any technical one. If you're a developer or QA you need a testing environment where you can fool around with outdated (and potentially insecure) versions of the OS and applications, without putting the rest of the company network at risk.
Ex-VMware employee here.
Firstly, when you say Nested VMware I will assume you mean Nested ESXi. (You could also mean Workstation, Fusion, or Player).
Nested ESXi environments are unsupported and should not be used for production. These scenarios are not tested in QA and not guaranteed to work. In short, if you experience any kind of problem, VMware will not help you with this Nested ESXi setup.
With that said, yes you can do it, and yes it does work. A lot of people use nested ESXi in their labs, but not in production. Previously, special configuration file edits were necessary for nested ESXi to work. I have seen environments with even 3-layer nested ESXi servers (an ESXi VM on an ESXi VM on a physical ESXi host). More recently there is the ESXi appliance, which makes this much easier.
Have a look here:
http://www.virtuallyghetto.com/2015/12/deploying-nested-esxi-is-even-easier-now-with-the-esxi-virtual-appliance.html
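For reference, the "special configuration file edits" mentioned above amounted to enabling virtual hardware-assisted virtualization for the VM; on reasonably recent ESXi this is the vhv.enable setting in the VM's .vmx file (older releases used vhv.allow in /etc/vmware/config). A hypothetical helper along those lines, assuming the VM is powered off while its .vmx is edited:

```python
def enable_nested_hv(vmx_path: str) -> None:
    """Append vhv.enable to a .vmx file if it isn't already set.
    The VM should be powered off while its .vmx is edited."""
    with open(vmx_path) as f:
        if any(line.strip().startswith('vhv.enable') for line in f):
            return  # already configured
    with open(vmx_path, 'a') as f:
        f.write('vhv.enable = "TRUE"\n')

# Example (the path is a placeholder for your datastore layout):
# enable_nested_hv('/vmfs/volumes/datastore1/nested/nested.vmx')
```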
I ran into this same problem. I work at a large company where our entire infrastructure is virtual, so if you need a server you get a VMware VM. I had a couple of Windows 2003 Server Standard Edition guest VMs with 6 GB of memory and 200 GB of disk space each, but I wanted to run Linux and a LAMP stack on them. So I tried to install VMware Workstation on one and got an error message saying it couldn't be installed within a VM. I also tried Microsoft Virtual PC and got a similar error message. I installed Sun's VirtualBox, and that installed fine, but I couldn't get the networking to work within the guest Ubuntu OS. My next step is to try QEMU, although performance might become an issue.
You ought to have a look at Mainframes - they are Virtualised from the word go:
Hardware - runs Hypervisor Type 1 - Level 1
on this you have zVM - Type 2 Hypervisor - Level 2
on this you have zOS - your main big operating system - Level 3
and/or
on this you have zLinux - Level 3
and/or
on this you have zVM for testing next version - Level 3
and/or
on this you have zOS for testing zVM plus zOS both at next version - Level 4
So going down to level 4 is pretty common
Mind you on a Mainframe you can have 1000's of VMs running at the same time - and most sites who start using zVM/CMS and zVM/Linux usually do.
I can see two solutions for this (three if you count a VM inside a VM which is just crazy).
New hardware, which should be robust enough to handle several VMs used specifically for testing (SharePoint, etc.). In this situation your team could be given more rights without affecting non-testing VMs.
The SharePoint test VMs are moved to the main VM pool, and those who need access are given the ability to checkout/deploy/rollback testing resources. This could be done directly through VMware tools or through an internal project that works against a VMware API.
This should be a joint decision between Network/Dev/Testing.
JFYI:
I tried installing and running a VMware ESXi server host (a child ESXi server) as a virtual machine (on a parent ESXi server), and it runs; however, you cannot run any VMs under the child ESXi server.
I am practicing VMware vSphere datacenter virtualization on a single physical machine. VMware Workstation is installed on a Windows 8 OS. In VMware Workstation, I have installed a Windows Server 2008 OS and VMware ESXi, and created a VMware datacenter lab. There are VMs running in the lab, which confirms that we can use VMware in VMware. But it depends on your needs and on which products are chosen.
You can install ESXi on VMware Workstation; it's useful for learning ESXi, so there is no reason not to run VMware in VMware.
Yes, you can run VMware inside VMware. Though it's not officially supported, you can deploy VMs in the child ESX. I checked for an advanced feature like passthrough of the HBA card, but it was not available in the child ESX, hence I could not present a LUN from the array.
So in production it's better not to use this.
But for training and practice it can be used.
You can do that.
You can install VMware ESXi inside a virtual machine on another VMware ESXi.
But the performance will be very bad.
It totally works, but you can't really do it for anything other than some kind of testing or educational purpose, because you won't get support, and from my limited experience it doesn't perform that well.
Yes, you can. VMware can even detect that it's running inside another VMware machine and warn you that VMception will cause worse performance (and it will, trust me). If you can, run the virtual machines on a physical machine instead, to get as much performance as possible.
"whether it's possible to run VMware inside VMware" What?
I can run Windows with Sharepoint in a VMWare machine that's hosted somewhere.
Or, I can run Windows with Sharepoint in a VMware machine that's actually inside a VMware machine that's hosted somewhere.
Why on earth would I add a level of nesting? Why not just go with Windows with Sharepoint hosted somewhere?
You can have any number of VMWares running on a single host. Lots of different versions doing lots of different things.
Nesting them doesn't make sense.

What is the current state of art in Linux virtualization technology? [closed]

What VM technologies exist for Linux, their pros and cons, and which is recommended for which application?
Since this kind of question can be asked about topics other than "VM technologies for Linux", and since the answer changes as the technology progresses, I suggest defining a template for this kind of page. Such pages would carry the tag 'stateoftheart' and be revisited each month, so that each month there would be an up-to-date list of technologies, up-to-date reviews, and up-to-date recommendations.
This is a job for ... Wikipedia!
Types of Virtualization
Platform Virtualization
Comparison of Virtual Machines
Now that the obvious stuff is out of the way...
Linux runs fine as a guest on every VM host I've used, so I'm going to assume that you're referring to Linux as the host operating system. I'm also going to assume x86 or amd64 hardware.
Platform virtualization breaks down into two major forms: Desktop virtualization and Server virtualization. Both types will allow you to load and run multiple OS instances as guests that virtualize their I/O through the host OS. Desktop virtualization concentrates on providing a highly interactive console experience for each of the guest VMs, while Server virtualization concentrates on maximizing computing performance, generally while sacrificing console services and more exotic devices (Sound cards, USB, etc.) Server virtualization implementations typically include either RDP or VNC for remote access to a virtual console.
On Linux, your choices for Desktop Virtualization include:
VMware Workstation -- it's commercial, somewhat expensive, mature, and provides the most hardware, device, and guest OS support of any solution.
VMware Player -- it's commercial (freeware) and only supports VMs that were created elsewhere. Available with Ubuntu.
Parallels Workstation -- it's commercial, somewhat expensive, and not up to par with VMware. Doesn't support 64-bit guests.
VirtualBox -- available in commercial (freeware) and community versions (GPL). Fedora's preferred solution.
On Linux, your choices for Server Virtualization include:
VMware Server -- it's commercial (freeware), mature, and provides the most hardware, device, and guest OS support of any solution. Available with Ubuntu.
Xen -- it's open source. A para-virtualization solution, it has only recently added hardware-virtualization, so Windows guest support depends upon specific CPU support.
Virtual Iron -- a commercialized version of Xen that adds native virtualization.
KVM -- it's open source. It depends upon QEMU for the last mile. Ubuntu's preferred solution.
Linux-VServer -- it's open source. It provides virtual jails based on the host OS kernel, so no Windows guests.
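Most of the server-virtualization options above lean on the CPU's hardware virtualization support. To check whether a given Linux host can use that hardware-assisted path for KVM, it's enough to look for the CPU flags and the /dev/kvm device node. A small sketch:

```python
import os

# KVM wants the vmx (Intel) or svm (AMD) CPU flag, and exposes
# /dev/kvm once the kvm_intel or kvm_amd kernel module is loaded.
with open('/proc/cpuinfo') as f:
    cpu_ok = any(
        line.startswith('flags') and {'vmx', 'svm'} & set(line.split())
        for line in f
    )

print('CPU virtualization extensions:', 'present' if cpu_ok else 'absent')
print('/dev/kvm:', 'available' if os.path.exists('/dev/kvm') else 'missing')
```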
For myself, I stick with VMware Workstation (7+ years) and VMware Server for my Linux-hosted virtualization needs. At work, it's VMware Workstation (on Windows), VMware Server (on Windows), and VMware ESX (on bare metal). I'll probably have another look at Xen, KVM, and VirtualBox at some point, but for right now compatibility between work and home is paramount.
2008 Oct
To be filled in in October to reflect the market status then.
2008 Sept
Products/services/technologies currently existing
VMware
Xen
VirtualBox
VServer
???
Comparisons
???
Recommendations for particular application areas
Home multi-boot replacement
Small business which has MS-Windows legacy applications
Datacenter of multinational corporation
???
W Craig Trader's answer is great, but just to add: there is also User-Mode Linux (UML), which has been around for a while - it has been in the mainline kernel tree since 2.6.0. Note that I haven't used it myself.
Ubuntu prefers KVM, and I believe Red Hat is moving to it over Xen now as well. Both KVM and Xen can be managed by libvirt, optionally through the Virtual Machine Manager GUI, and Virtual Machine Manager can manage remote instances through SSH connections.
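That remote path is scriptable as well: libvirt URIs can tunnel over SSH, which is the same transport the GUI uses. A minimal sketch with the libvirt Python bindings, where the hostname is a placeholder:

```python
import libvirt

# qemu+ssh:// tunnels the libvirt connection over SSH, the same
# transport virt-manager uses to reach remote hosts.
conn = libvirt.openReadOnly('qemu+ssh://root@kvmhost.example.com/system')
try:
    for dom in conn.listAllDomains():
        state = 'running' if dom.isActive() else 'shut off'
        print(f'{dom.name():30s} {state}')
finally:
    conn.close()
```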
In addition, a good comparison can be found here (pdf), with lots of performance tests. The short version is that Xen and Linux-VServer were generally the best on performance grounds.
