Azure - what is the difference between DSVM on CentOS vs Ubuntu

On Azure there are Virtual Machines pre-configured for Data Science activities. There are images for Windows and for Linux - CentOS and Ubuntu. My question is: are there any important differences between the CentOS image and the Ubuntu image? Apart from the OS itself, of course ;)
From what I can see in the specifications, they are mostly the same, but maybe there are some bits and pieces that have an important impact on choosing one of them.

The theory behind the DSVM is that it is a build with all of the tools and drivers you need (to run on Azure Nvidia instances) that will work regardless of OS. So the difference is purely an OS one, simply because some organisations have infrastructure geared towards Ubuntu, others towards RedHat/CentOS (and some even do Windows!)
The DSVM is an image concept that starts above the OS, so even the Windows editions will have basically the same toolset. If you look here there is a rundown of what the goals of the DSVM are.

Related

Does the OCF spec mean that Docker is no longer Linux centric?

When I first heard that Microsoft was working to run Docker containers, it didn't make sense to me.
For a while it seemed that Docker was Linux-centric, with its dependency on Linux Containers.
Now it seems Docker has switched from LXC to an implementation of the Open Containers Format (OCF) spec in runc.
My question is: does the OCF spec mean that Docker is no longer Linux-centric? (i.e., is that how this will work? Does that mean there exists the theoretical capability to do this on OSX as well?)
There are a few points of interest here.
1) Containers can only be supported natively on platforms that have support for OS virtualization. OSX (so far) does not have such a capability, so it cannot support containers natively. You have to use a VM.
2) A standardized container format does not mean that the same container will be able to run on different platforms. The container and the host necessarily have to run on the same kernel, so a particular container can only run on a compatible platform.
3) What the standardized container format specification does is enable richer container-ecosystem technology from varied sources, all able to interwork because of the standard container format. This technology still has to be implemented for each different host platform.
4) Docker's adoption of OCF does not necessarily mean that it will automatically start targeting platforms other than Linux. It just means that the container format it will use on Linux will be the OCF instead of its own proprietary format.
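As a concrete illustration of point 2), a hedged sketch (assumes Docker on a Linux host; the alpine image is just an example):

    # Containers share the host's kernel, so both commands print the same version
    uname -r                          # kernel version as seen on the host
    docker run --rm alpine uname -r   # the same kernel version, seen from inside a container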
+1 to Ziffusion. You might want to reword Item 1), but basically you are correct on all four points.
To answer the OP's question: I do not believe OCF "deprecates" Linux. On the contrary, I believe it better supports Linux and, at the same time, opens Docker functionality up to better support other OSes, too.
Specifically:
https://www.opencontainers.org/faq
In the past two years, there has been rapid growth in both interest in and usage of container-based solutions. Almost all major IT vendors and cloud providers have announced container-based solutions, and there has been a proliferation of start-ups founded in this area as well. While the proliferation of ideas in this space is welcome, the promise of containers as a source of application portability requires the establishment of certain standards around format and runtime. While the rapid growth of the Docker project has served to make the Docker image format a de facto standard for many purposes, there is widespread interest in a single, open container specification, which is:
a) not bound to higher level constructs such as a particular client or orchestration stack,
b) not tightly associated with any particular commercial vendor or project, and
c) portable across a wide variety of operating systems, hardware, CPU architectures, public clouds, etc.
The FAQ further states:
What are the values guiding the specification?
Composable. All tools for downloading, installing, and running containers should be well integrated, but independent and composable. Container formats and runtime should not be bound to clients, to higher level frameworks, etc.
Portable. The runtime standard should be usable across different hardware, operating systems, and cloud environments.
Open. The format and runtime should be well-specified and developed by a community. We want independent implementations of tools to be able to run the same container consistently. ...
The Open Container Initiative is working towards a container format and a runtime that can run on many platforms, although a lot of the concepts and requirements will be based on the Linux foundations they have been built from. An OCF container still specifies a platform, so don't expect to be able to execute a Windows container on a Linux host. But expect to be able to manage Linux and Windows and "Y" containers in the same manner and ecosystem.
Docker moved away from LXC a while ago, to using libcontainer, which is still Linux-centric. runC is the next runtime: it is already able to run current Docker containers on Linux, but aims to support the Open Container Format spec on many platforms.
The goal of runC is to make standard containers available everywhere
Linux, obviously, has been building up the OS features to support containers over the last 10 years. Microsoft has included a lot of the OS components in Windows 10 to run containers natively and has thrown its support behind Docker. So expect runC to be running on Windows soon.
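To make that concrete, here is a hedged sketch of driving runC directly on Linux today; the busybox image and bundle layout are illustrative, and flags vary across runC versions:

    # Build an OCI bundle from an existing Docker image
    mkdir -p mycontainer/rootfs
    docker export $(docker create busybox) | tar -C mycontainer/rootfs -xf -
    cd mycontainer
    runc spec            # writes a default config.json describing the container
    sudo runc run demo   # runs the container defined by this bundle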
BSD does support a lot of the functionality via its jails setup, but that never matured as much as the Linux space, so I believe additional OS support will be required for it, or for OSX, to be able to run an OCF container natively. That said, recent FreeBSD 11 does allow you to run Docker via its 64-bit Linux compatibility layer, so I'm guessing runC would be close to doing the same, at some possible performance cost.

Cygwin vs Linux Virtual Machine for Development?

< skippable part >
I work in IT (mostly desktop support and network administration) in a Windows environment, and I occasionally program.
A couple weeks ago, I decided I couldn't be as effective as I want to be without a Bash environment for my command prompt needs. This is especially true when I am using Ruby and git. I used Msysgit for a while, but I just didn't like how it wasn't extensible like Linux. So, I installed Cygwin and played around with that for a couple weeks.
As great as Cygwin is, it seems like it is meant to be a souped-up command prompt, and its compatibility with Linux is just a pleasant side effect. This especially became evident when I tried to upgrade Ruby to 1.9.3 (it worked, but it wasn't straightforward), install rvm (never worked), and install RMagick (may or may not work, but looks like a headache).
So, now I'm considering running Linux in a virtual machine. But I'm worried that might be another can of worms and I'll have wasted hours before I find that out. I like that Cygwin runs in Windows and I get to use my IDE, user folder, and more with it. But I don't like that support for it is not as thorough as for a major distro.
< /skippable part >
Does anyone here have insight on using Cygwin vs running a Linux virtual machine?
Any advice on setting up a Linux development environment in a virtual machine within Windows?
I have faced the same issue before, and the best solution in my experience is just two workstations :).
Apart from that, having Linux running in a virtual environment is way better.
First of all, you will have full Linux capabilities (except 3D acceleration, but you probably don't need that).
You will have the ability to create snapshots and revert back to them when things go wrong!
You can start multiple environments using templates, which is very convenient.
The only downside I can think of is the performance cost to the host machine.
If it's a normal workstation/PC, an IDE + one virtual machine + a browser with 100+ tabs just makes it slow.
1: Cygwin is good for quick hacks, and for being able to access host-OS resources (you can run IE, for example, from a bash script). For something tightly integrated and some real work, go to a VM. It will emulate everything and separate development from the real machine, and this may be a good thing in some cases... as a plus, it simulates a real server :)
2: In VirtualBox at least, you have shared folders: you can share a local folder and see it in the VM as a local folder (local or as a Windows share... it actually depends). Then you can use that "entry point" to symlink stuff into the VM, and do what you need with the real files located on the real (host) machine.
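For example, a hedged sketch (VM name, share name, and paths are placeholders; the guest needs Guest Additions installed):

    # On the Windows host: expose a local folder to the VM
    VBoxManage sharedfolder add "dev-vm" --name projects --hostpath "C:\Users\me\projects"

    # Inside the Linux guest: mount the share, then symlink stuff into place
    sudo mkdir -p /mnt/projects
    sudo mount -t vboxsf projects /mnt/projects
    ln -s /mnt/projects/myapp ~/myapp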
SSH into a Linux box. This is what everyone does. Why isn't this the answer?
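In its simplest form (hostname and user are placeholders):

    ssh dev@linuxbox.example.com                    # interactive shell on the Linux box
    ssh dev@linuxbox.example.com 'uname -a; uptime' # or run one-off commands remotely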
There is something I have heard of called Cooperative Linux. It runs Linux alongside the Windows kernel so you can use them at the same time. I've never used it, but here:
http://www.colinux.org/
What I think gets you the pros of both options now is using Docker: it gives you Cygwin's simplicity and a VM's functionality, with better performance.
Linux in a virtual machine will give you the experience you want more than Cygwin or any mock shell, as I like to call them.
Running VMs, though, requires a lot of RAM, depending on whether you want a desktop version of Linux or just a command-line version.
At work I have a PC with 8 GB of RAM, and I run 64-bit Ubuntu as my main OS, two Ubuntu servers (dev environments for two different projects), a Windows 7 VM, and a Windows XP VM.
I can run the two Ubuntu servers and one other VM at the same time; the key is more RAM if you want to be able to do VMs.
If you're going to be working with Ruby then get an Ubuntu virtual machine up and running :) (a Ruby setup sketch follows below). I've not tried Ruby, etc. on Windows, but I have heard that it is a pain to set up and configure. I use a Mac for all my Rails development, so I cannot comment on the Windows side of that.
As for virtual machine creation, I prefer VMware Workstation, however there are free alternatives such as Virtualbox and VMware Server.
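For what it's worth, a hedged sketch of the usual rvm route inside an Ubuntu VM (the install command is rvm's documented one-liner; 1.9.3 matches the version the question mentions):

    # Install rvm, then a Ruby, inside the Ubuntu guest
    curl -sSL https://get.rvm.io | bash -s stable
    source ~/.rvm/scripts/rvm
    rvm install 1.9.3
    rvm use 1.9.3 --default
    ruby -v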
I'm using a Linux VM within a Windows 7 environment, as this VM is as representative as possible of the final production environment. The whole setup is bound to the Eclipse IDE under Windows 7. So this is really great for full local testing, before committing or tagging the tested version to the production servers.
As you mentioned as well, this takes some time to get properly set up and fully configured. So if your need is only for little tricks or tasks, you may keep using Cygwin. For example, I faced significant issues configuring Perl and compiling MySQL within Cygwin. So it's OK for basic usage, but not for taking full advantage of a complete Linux environment.
Your choice strongly depends on the final purpose of the setup. A VM will handle whatever your need is. Its setup cost is higher, so that time investment has to be used often to pay off.

How to verify cross platform installation steps

I have to check the installation steps of my application on different production machines. I want to check how I can install my application on HP-UX. I have only Linux/Windows machines but don't have a real physical HP-UX machine. Is there any way I can check the installation steps on HP-UX? I am thinking of a virtual environment or some flavour that runs on Linux or Windows and gives the accessibility and functionality of HP-UX.
I am looking for something to cross-check platform installation steps.
The short answer is no. HP-UX is as different from Linux as Linux is from Windows (almost). There would be many differences in libraries, patches, installed utilities, build tools, etc.
A few examples:
HP-UX does not come pre-installed with the bash shell
HP-UX uses a proprietary software packager and installer called swinstall (analogous to RPM but completely different)
Partition layout is different
Many common utilities behave differently; "echo" is one of many examples (see the sketch after this list). This will affect things if your build process uses shell utilities
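A hedged sketch of two of those differences (illustrative paths and package names; exact behavior varies with shell and O/S patch level):

    # Packaging: HP-UX uses swinstall depots rather than rpm/dpkg
    swinstall -s /tmp/myapp.depot myapp     # install "myapp" from a depot file

    # Utility behavior: echo treats backslash escapes differently
    echo "line1\nline2"      # GNU/Linux: prints \n literally (needs -e); HP-UX sh: expands it
    printf 'line1\nline2\n'  # printf is the portable choice in install scripts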
Even if you can test the install, don't you need to test the product's operation on HP-UX?
Not saying it's impossible. If your application uses basic, nonspecific utilities for install, it might work. There is no way to know without a running installation. Unfortunately you need Itanium hardware and the O/S.
My recommendation would be to get your application working on Solaris and any other Unixes first. The more platforms you test on, the more portable your code will become on all of them. Then, put out some feelers and find someone with a system you can borrow time on.
Worst case, find an Itanium server like an rx2620 on eBay; it should not cost too much. Even better if the seller forgets to wipe the O/S :). You'll need a terminal and possibly a null-modem cable. 11.31 (11iv3) is the latest version of the O/S.

Running VMware in VMware?

We have a physical machine that runs VMware and hosts a VM we use for SharePoint deployment testing. That machine is old and dying, and my employer's network czars are heavily pushing hosted VMs as a replacement for outdated physical servers. I was curious about whether it's possible to run VMware inside VMware, and if so, whether there are severe performance implications. We don't require extreme performance from this setup, since it's just used for SharePoint testing and the associated SQL Server is on a different box. My guess is that we can't just use the primary hosted VM for our testing because we'll want to roll back occasionally and otherwise have more control over it, and getting buy-in for that from the network folks is unlikely. Does anyone have any experience with this?
edit: I know this nesting certainly isn't the preferred option, but (1) we want the flexibility of being able to use VMware snapshots at will and (2) the network folks will not allow us to arbitrarily roll back to a previous point in time because of the potential for removing mandated security updates. My guess is that a local desktop machine running VMware Workstation might just be the way to go. The hosted option seems attractive if it will work though since it's less machine maintenance for me to deal with.
The technical limitation with running VMware inside VMware is that VMware, Virtual PC, etc. take advantage of the virtualization features present in modern CPUs.
If two or more hypervisors are both trying to control Ring 0, there will be problems. This is something I've encountered while trying to run both VMware and Virtual PC simultaneously on my desktop - one will error out/crash.
If your hypervisor can interact with the 'parent' hypervisor, then you'll be OK. Alternatively, if the child hypervisor doesn't try to use the CPU virtualization features, or entirely emulates the CPU (such as QEMU), then you should also be OK.
Basically, old-style hypervisors on old CPUs use full virtualization (slow), which would be capable of nesting with a heavy, heavy performance hit. Modern hypervisors/CPUs use hardware-assisted virtualization (near-native performance), and you'd be hard pressed to find a hypervisor that is designed for, or capable of, nested virtual machines.
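A quick way to check which side of that divide a given machine (or VM) is on, as a Linux-only sketch:

    # Count CPU virtualization feature flags: vmx = Intel VT-x, svm = AMD-V
    egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 inside a guest means no hardware-assisted nesting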
Finally, I'd really advise against running dev/test VMs on the same physical server that is running production VMs. There's just too much that can go wrong, plus the security implications: you need to manage the dev/test environment, and it sounds like you shouldn't have access to the production environment. Likewise, you probably don't want the operations team messing about with your test environment.
UPDATE: ESXi 4 now supports virtualizing itself. See this article for more information
I've never run VMware in VMware, but I've run VirtualPC inside VirtualBox without problems, so there's no fundamental reason it shouldn't work I suppose...
It sounds to me more like you have a problem with the inflexibility of your "network czars" than any technical one. If you're a developer or QA you need a testing environment where you can fool around with outdated (and potentially insecure) versions of the OS and applications, without putting the rest of the company network at risk.
Ex-VMware employee here.
Firstly, when you say Nested VMware I will assume you mean Nested ESXi. (You could also mean Workstation, Fusion, or Player).
Nested ESXi environments are unsupported and should not be used for production. These scenarios are not tested in QA and not guaranteed to work. In short, if you experience any kind of problem, VMware will not help you with this Nested ESXi setup.
With that said, yes you can do it and yes it does work. A lot of people use nested ESXi in their labs, but not in production. Previously there were special configuration file edits that were necessary for nested ESXi to work. I have seen environments with even 3-layer nested ESXi servers (an ESXi VM on an ESXi VM on a physical ESXi host). More recently there is the ESXi appliance, which makes this much easier.
Have a look here:
http://www.virtuallyghetto.com/2015/12/deploying-nested-esxi-is-even-easier-now-with-the-esxi-virtual-appliance.html
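For reference, the kind of configuration file edit alluded to above looked roughly like this (a hedged sketch: the key is version-dependent, the datastore path is a placeholder, and the whole setup remains unsupported):

    # Expose hardware virtualization (VT-x/AMD-V) to the guest so ESXi can run nested
    echo 'vhv.enable = "TRUE"' >> /vmfs/volumes/datastore1/nested-esxi/nested-esxi.vmx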
I ran into this same problem. I work at a large company where our entire infrastructure is virtual, so if you need a server you get a VMware VM. I had a couple of Windows Server 2003 Standard Edition guest VMs with 6 GB of memory and 200 GB of disk space, but I wanted to run Linux and a LAMP stack on them. So I tried to install VMware Workstation on one and got an error message saying it couldn't be installed within a VM. I also tried Microsoft Virtual PC and got a similar error message. I installed Sun's VirtualBox and that installed fine, but I couldn't get the networking to work within the guest Ubuntu OS. My next step is to try QEMU, although performance might become an issue.
You ought to have a look at mainframes - they are virtualised from the word go:
Hardware - runs Hypervisor Type 1 - Level 1
on this you have zVM - Type 2 Hypervisor - Level 2
on this you have zOS - your main big operating system - Level 3
and/or
on this you have zLinux - Level 3
and/or
on this you have zVM for testing next version - Level 3
and/or
on this you have zOS for testing zVM plus zOS both at next version - Level 4
So going down to level 4 is pretty common
Mind you on a Mainframe you can have 1000's of VMs running at the same time - and most sites who start using zVM/CMS and zVM/Linux usually do.
I can see two solutions for this (three if you count a VM inside a VM, which is just crazy).
New hardware, which should be robust enough to handle several VMs used specifically for testing (SharePoint, etc.). In this situation your team could be given more rights without affecting non-testing VMs.
SharePoint test VMs are moved to the main VM pool, and those who need access are given the ability to checkout/deploy/rollback testing resources. This could be done directly through VMware tools or through an internal project that works against a VMware API.
This should be a joint decision between Network/Dev/Testing.
JFYI:
I tried installing and running a VMware ESXi server host (child ESXi server) as a virtual machine (on a parent ESXi server), and it runs; however, you cannot run any VMs under the child ESXi server.
I am practicing VMware vSphere datacenter virtualization on a single physical machine. VMware Workstation is installed on Windows 8; in Workstation I have installed Windows Server 2008 and VMware ESXi, and created a VMware datacenter lab. There are VMs running in the lab, which confirms that you can use VMware in VMware. But it depends on your needs and on the products chosen.
You can install ESXi on VMware Workstation; it's useful for learning ESXi, but otherwise there is no reason to run VMware in VMware.
Yes, you can run VMware inside VMware. Though it's not officially supported, you can deploy VMs in the child ESX. I checked for an advanced feature, passthrough of the HBA card, but it was not available in the child ESX, so I could not present a LUN from the array.
So in production it's better not to use this.
But for training and practice it can be used.
You can do that.
You can install VMware ESXi inside a virtual machine on another VMware ESXi host.
But the performance will be very bad.
Totally works... but you totally can't do it for anything other than some kind of testing or educational purpose, because you won't get support, and from my limited experience it doesn't perform that well.
Yes, you can. VMware can even detect that it's running inside another VMware machine and warn you that VMception will cause worse performance. Which it will, trust me; where possible, run the virtual machines directly on a physical machine to get as much performance as possible.
"whether it's possible to run VMware inside VMware" What?
I can run Windows with SharePoint in a VMware machine that's hosted somewhere.
Or, I can run Windows with SharePoint in a VMware machine that's actually inside a VMware machine that's hosted somewhere.
Why on earth would I add a level of nesting? Why not just go with Windows with SharePoint hosted somewhere?
You can have any number of VMware VMs running on a single host, lots of different versions doing lots of different things.
Nesting them doesn't make sense.

Is Ubuntu JeOS good for production purpose?

Actually, I want to use JeOS for our web server. Is it a good choice?
Thanks for piquing my interest. From the Ubuntu website:
Ubuntu Server Edition JeOS (pronounced "Juice") is an efficient variant of our server operating system, configured specifically for virtual appliances. Currently available as a CD-ROM ISO for download, JeOS is a specialised installation of Ubuntu Server Edition with a tuned kernel that only contains the base elements needed to run within a virtualized environment.
It looks promising to me - I run several full Ubuntu 8.04 VMs so I'll certainly check it out. Why not just try it?
Be aware that the kernel it installs is stripped down to only have the stuff required for virtual machines, so you might have problems accessing the network from a real machine. (Note, too, that the install-CD kernel isn't the same as the installed kernel.)
If you can bypass that (IIRC I booted from the CD, and downloaded the normal server kernel and it all worked fine), then you end up with an absolutely minimal Linux system, but backed by the full Ubuntu repositories, so it's an excellent base for a server.
Also note that minimal really means minimal - no cron by default, for example (see the sketch below for both fixes).
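A hedged sketch of both fixes mentioned above (package names are from the 8.04-era repositories and may differ on later releases):

    # Swap the virtual-only kernel for the standard server kernel
    sudo apt-get update
    sudo apt-get install linux-image-server
    sudo reboot

    # Add back the bits "minimal" leaves out, e.g. cron
    sudo apt-get install cron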
Is it a good choice?
If you plan to run JeOS in a virtual machine, then yes, this is a good choice.
