Is running a Linux container on a Windows AWS instance possible?

I'm trying to run a Linux (Ubuntu LTS) container inside Windows Server 2019. The catch is that the Windows OS itself runs as an AWS instance.
I've had problems trying to achieve this, and I've read conflicting opinions online about whether it is possible at all. Some say it will only work on a .metal instance, which is bare metal. So far I've been trying on a regular t3 instance, which has virtualization type HVM.
To sum up, my questions are:
Is running a Linux container on a Windows AWS instance possible?
If yes, how?
If not, will it be possible on a bare metal instance?
Please keep in mind that I need the container to run in a Windows environment due to multiple tasks the OS needs to perform (and I don't want multiple instances).

In order to use Docker Desktop on Windows, you need either Hyper-V or the Windows Subsystem for Linux enabled (which in turn requires Hyper-V). Both solutions require VT-x capabilities, but you're running inside a VM, which means those capabilities are not exposed to the guest.
This is called "nested virtualization", and it is not supported on common EC2 virtual machines. (source)
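A quick way to check this from inside the Windows guest is the built-in systeminfo command; the "Hyper-V Requirements" section at the end of its output reports whether the virtualization extensions Hyper-V needs are exposed to the VM:
systeminfo
On a regular t3 instance you would expect it to report that virtualization is not enabled in firmware, which is the nested virtualization limitation described above.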
You can certainly run Linux containers on a bare metal Windows instance (but why would you? It is far cheaper and simpler to create a Linux virtual machine on EC2 and have your Windows host communicate with it). Should that still be your goal, you can install Windows Server 2019 with Hyper-V. (tutorial)
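If you go the separate-Linux-VM route, note that a recent Docker CLI on the Windows instance can drive a remote Docker engine over SSH, so the containers themselves stay on the Linux box. A minimal sketch, assuming a Linux EC2 instance with Docker installed and SSH key access (the user and address below are placeholders):
docker context create linuxbox --docker "host=ssh://ubuntu@10.0.0.5"
docker --context linuxbox run --rm ubuntu:20.04 echo "hello from a Linux container"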
Another alternative for SMALL, SMALL things, which could work without nested virtualization (I haven't tried it), would be using WSL1. (more info)
WSL1 uses a compatibility layer that translates Linux system calls to Windows system calls, without actually virtualizing an operating system. Some folks have been able to install Docker 17.09 on WSL1, but this is a very adventurous path I would not recommend taking.

Related

In Docker Desktop for Windows 10 with WSL2, where do Docker containers live, and how can Linux containers run a Java app but not Windows nanoserver?

I have Windows 10 Enterprise and I have installed Docker Desktop, enabled the WSL2 backend, and downloaded and installed the Linux kernel update package.
I am learning Docker and I have some doubts about how Docker works behind the scenes.
I have drawn a basic architecture diagram of Docker on Windows with WSL2; is this correct?
Whenever we create a new Linux container, does it get created in the same lightweight utility VM provided by WSL2?
And if we create a Windows container, does it get created on the Windows OS?
Can these containers access both the Windows and Linux kernels when required? For example, when running a Java app in a Linux container, it requires the Windows kernel, right?
So, by default Docker runs Linux containers; when do we need Windows containers? I can containerize a Java application using openjdk:8, but I am not able to pull the windows nanoserver image when I run Linux containers; it works only when I switch to Windows containers. What is going on here? Does this mean the openjdk:8 image is a Linux image (I don't know how else to say it), and windows nanoserver a Windows image?
How can Linux containers run my Java application? It must need the Windows kernel, right?
If the Docker containers reside within the lightweight utility VM created by WSL2, can they access both the Linux kernel it ships with and the Windows kernel?
I have the default Linux container mode and I tried these two commands:
docker run --platform=linux -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
docker pull mcr.microsoft.com/windows/nanoserver:1903
The first one worked; for the second one I got the following error:
1903: Pulling from windows/nanoserver
no matching manifest for linux/amd64 in the manifest list entries
But when I switch to Windows containers, it works.
So what is the difference between my Java app on the openjdk:8 image and windows nanoserver?
Do these not require the Windows kernel to run?
How is the Java app running in Linux containers then?
Edit: I need more clarification on this. Copying the question from the comment section:
And one more thing: the containers do not access the Windows and Linux kernels simultaneously in WSL2, right? After all, they are just isolated spaces in an OS, so they must belong to either Windows or Linux? Please correct me if I am wrong. The Linux images are built in such a way that they contain everything needed to run my Java app, and since Java is a cross-platform language it can run on the Linux kernel; is that the concept?
About the architecture diagram that I have made here: the containers (isolated processes in an operating system with app files), in the case of Linux containers, all of them (multiple containers) run in the same WSL2 VM, right?
Firstly, good question.
I hope I can answer it as best I can.
So, by default docker runs Linux containers, when do we need windows containers?
You don't need Windows containers. You should always consider what your application needs. For instance, if you are working on a Java app, you would pull a Java image and not an entire host OS. The only time I ever pulled a Windows image was when I dockerized an ASP.NET application that could only run on Windows.
How Linux Containers can run my java application? It must need the windows kernel, right?
In the context of docker:
Docker for Windows allows you to simulate running Linux containers on Windows, but under the hood a Linux VM is created, so Linux containers are still running on Linux, and Windows containers are running on Windows.
If the Docker containers reside within the lightweight utility VM created by WSL2, can they access both the Linux kernel it ships with and the Windows kernel?
Containers use the underlying operating system's resources and drivers, so Windows containers can run only on Windows, and Linux containers can run only on Linux. As noted above, Docker for Windows lets you run Linux containers by creating a Linux VM under the hood, so Linux containers still ultimately run on Linux.
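A simple way to see which engine you are currently talking to is to ask the daemon for its OS type:
docker info --format "{{.OSType}}"
This prints linux while you are in Linux containers mode and windows after you switch to Windows containers.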
So what is the difference between my java app on openjdk:8 image and windows nanoserver?
The core difference between the openjdk image and windows nanoserver is the base image they use. openjdk uses a very bare Linux OS as its base, whereas nanoserver is an entire OS, which is Windows.
Do these not require windows kernel to run?
The openjdk image does not require Windows to run, as it is built on Linux; Docker for Windows will use WSL to run it. The nanoserver image will only run on Windows (as Windows images can only run on Windows).
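You can check this yourself by inspecting the platforms an image's manifest list advertises (on older Docker versions the manifest subcommand may require experimental CLI features to be enabled):
docker manifest inspect openjdk:8
docker manifest inspect mcr.microsoft.com/windows/nanoserver:1903
The openjdk:8 list contains "os": "linux" entries, while nanoserver lists only "os": "windows" entries, which is exactly why the pull fails with "no matching manifest for linux/amd64" in Linux containers mode.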
How is the java thing running on Linux containers then?
I understand this question to be "How does the openjdk image run on Linux and Windows?"
If so: because it uses a Linux OS as its base image, it can run on Linux by default. And because WSL2 exists, a VM is created that provides a Linux environment on Windows. That is why we can run both Windows images and Linux images with Docker for Windows.
I hope this helped. Here are some extra tips from the questions for you to consider.
Images will always perform best when the image is the same type as the host OS. This is because Docker utilises the host's resources, and performance is better when the host and container share the same OS.
Use images that best fit the purpose. Don't use an entire OS image just to run a Java app; use the Java image instead. This applies to a wide range of frameworks and languages.
Read this if you want more detail; it is the article this answer credits.
The diagram is not quite correct. Both the Windows kernel and the lightweight VM that hosts the WSL2 Linux kernel sit on top of the Hyper-V hypervisor; in other words, WSL2 leverages Hyper-V. (An alternative would be to use Hyper-V directly, but with WSL2 it is more seamless.) Docker Desktop uses docker-desktop as the main bootstrap WSL distribution and docker-desktop-data for storing image and container data. The 9p network protocol is used for seamless host-to-guest and guest-to-host file access:
https://wiki.qemu.org/File:9pfs_topology.png
This way, docker commands can be run both from Windows and from within a distro installed under WSL2, such as Ubuntu. In both cases, the containers run under Linux. A rationale for this architecture is that Linux Docker cannot simply be installed on a Hyper-V VM nor inside a WSL2 Linux distro.
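You can see these two special distributions (alongside any Linux distro you installed yourself) with the WSL tooling; the output looks roughly like this:
wsl --list --verbose
  NAME                   STATE           VERSION
* Ubuntu                 Running         2
  docker-desktop         Running         2
  docker-desktop-data    Running         2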
Hi,
In practice there are two broad types of hypervisors:
a) Hyper-V is a type-1 hypervisor (the software that controls the VMs) that runs directly on the machine, i.e. bare metal:
Hyper-V (type-1 hypervisor) <----> PC/hardware
With this first type, the hypervisor takes control of the hardware directly (it avoids going through a host OS, since control is taken over from the machine's BIOS).
That means it does not use a host OS, not that a host OS cannot exist.
b) VirtualBox (VB) is a type-2 hypervisor (heavier software). VB talks to the machine via the host operating system (host OS):
VB (type-2 hypervisor) <---> Host OS <----> PC/hardware
In this last case, control of the hardware (the PC/machine) is heavier, because it is exercised through an additional component, the host OS.
Also note that the VMs (regardless of whether the hypervisor is type 1 or type 2) each have their own OS, called the guest OS.
So in both cases (type 1 and type 2) the hypervisor works as the backend for the VMs (which are the frontends).
For more details, read this tutorial/article:
https://www.nakivo.com/blog/hyper-v-virtualbox-one-choose-infrastructure/
PS: one virtual machine (VM) can host many containers, as in the image, or see a Google Images search (the first few results) for similar diagrams.
Thanks.
Another explanation of the communication between a VM and its isolated containers can be found in this article.

Amazon EC2 Image with VNC enabled?

I want to fire up a Linux EC2 instance that has VNC installed by default. Something like Ubuntu, where I can fire it up and VNC right in to configure it.
I've looked at all of the AMIs that are available, and the closest thing I can come up with is:
SuSE Linux Enterprise Server 11 sp3 (HVM) - ami-xxxxxxx
SuSE Linux Enterprise Server 11 Service Pack 3 (HVM), EBS-backed. Nvidia driver installs automatically during startup for GPU instances.
I assume that this has graphic capabilities, but you know what they say about assumptions.
Does anybody know of an image that has this? I just want to fire up an instance that I can get right into, not through SSH.
The images you are looking at are designed for the largest (HVM) instances. The second one is intended for GPU instances. Probably not what you are looking for if you just want VNC.
Most AMIs don't have any desktop environment installed, as they are generally intended to run as headless servers. But you should be able to take a base Ubuntu instance, install a desktop environment and a VNC server, and then repackage it as an AMI for your own use.
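As a rough sketch of that approach (the package names are just one common choice: Ubuntu with the XFCE desktop and TightVNC):
sudo apt-get update
sudo apt-get install -y xfce4 xfce4-goodies tightvncserver
vncpasswd
vncserver :1 -geometry 1280x800
You would also need to open TCP port 5901 in the instance's security group (or, better, tunnel VNC over SSH) before creating your AMI from the configured instance.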

Any way to run Linux inside a virtual machine, inside my application?

I want to be able to distribute a Linux system running inside my application. The reason is that I need to add functionality which is most easily implemented inside a Linux container and distributed with the application.
Is there any way to run a VM inside a C/C++ application on Windows, OSX, Linux?
VirtualBox has an API for creating/running VMs. The program Vagrant uses this to give developers a simple cross-platform way to develop: you can run vagrant up from Windows, Linux or Mac, and it does the same thing.
You can also script adding forwarded ports to your VM, so your C++ program could say "VirtualBox, boot me this image", then just connect to a TCP port to talk to the "Linux program". But debugging problems will be hard.
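As a minimal sketch of the Vagrant route (the box name is just an example; any Linux box will do):
vagrant init ubuntu/focal64
vagrant up
vagrant ssh -c "uname -a"
Your application could shell out to commands like these, or call the VirtualBox API directly, and then talk to the guest over a forwarded port.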
But if your goal is to sell a Linux program to non-Linux desktop users, it's probably best for you and your sanity to bite the bullet and port it to Windows/Mac. (Or go cloud and sell it as a service.)
Two frameworks come to mind:
User Mode Linux runs the Linux kernel as an application. This gives you ultimate control over launching and managing the virtual machine from within a Linux application.
libvirt provides a toolkit for programmatically managing all manner of virtual machines.
These may both require a Linux host. For other host operating systems, it may be necessary to manage the virtual machine manually, or with ad hoc scripting.
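For the libvirt route, management is typically done through the virsh CLI (or the equivalent C API); a small sketch, where myvm.xml and myvm are placeholder names:
virsh -c qemu:///system define myvm.xml
virsh -c qemu:///system start myvm
virsh -c qemu:///system list --all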
QEMU can run a VM, and it can be compiled on Windows, Linux and OSX: http://wiki.qemu.org/Main_Page
QEMU is written in C, so in theory it could be embedded in (or driven from) a C/C++ program and used to run a Linux VM.
An example of QEMU running Puppy Linux: http://www.erikveen.dds.nl/qemupuppy/
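Launching the VM is itself just a command line your application could spawn; a minimal sketch (the disk image name is a placeholder):
qemu-system-x86_64 -m 1024 -hda linux-guest.img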

Cygwin vs Linux Virtual Machine for Development?

< skippable part >
I work in IT (mostly desktop support and network administration) in a Windows environment, and I occasionally program.
A couple of weeks ago, I decided I couldn't be as effective as I want to be without a Bash environment for my command-prompt needs. This is especially true when I am using Ruby and git. I used Msysgit for a while, but I just didn't like how it wasn't extensible like Linux. So I installed Cygwin and played around with that for a couple of weeks.
As great as Cygwin is, it seems like it is meant to be a souped-up command prompt, and its compatibility with Linux is just a pleasant side effect. This became especially evident when I tried to upgrade Ruby to 1.9.3 (it worked, but it wasn't straightforward), install rvm (never worked), and install RMagick (may or may not work, but looks like a headache).
So, now I'm considering running Linux in a virtual machine. But I'm worried that might be another can of worms and I'll have wasted hours before I find that out. I like that Cygwin runs in Windows and I get to use my IDE, user folder, and more with it. But I don't like that support for it is not as thorough as for a major distro.
< /skippable part >
Does anyone here have insight on using Cygwin vs running a Linux virtual machine?
Any advice on setting up a Linux development environment in a virtual machine within Windows?
I have faced common issues before, and the best solution in my experience is simply two workstations :).
Apart from that, having Linux running in a virtual environment is way better.
First of all, you will have full Linux capabilities (except 3D acceleration, but you probably don't need that).
You will be able to create snapshots and revert to them when things go wrong.
You can start multiple environments from templates, which is very convenient.
The only downside I can think of is the performance hit on the host machine.
If it's a normal workstation/PC, an IDE + one virtual machine + a browser with 100+ tabs just makes it slow.
1: Cygwin is good for quick hacks, and for being able to access host OS resources (you can run IE from a bash script, for example). For something tightly integrated and some "real" work, go to a VM. It will emulate everything and separate development from the real machine, and this may be a good thing in some cases... as a plus, it simulates a real server :)
2: In VirtualBox at least, you have shared folders: you can share a local folder and see it in the VM as a local folder (local or as a windows share; it actually depends). You can then use that "entry point" to symlink things into the VM and do what you need with the real files located on the real (host) machine; see the mount sketch below.
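For point 2, once the VirtualBox Guest Additions are installed in the guest, a shared folder can be mounted from inside the VM roughly like this (the share name and mount point are placeholders):
sudo mount -t vboxsf projects /home/dev/projects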
SSH into a linux box. This is what everyone does. Why isn't this the answer?
There is something I have heard of called Cooperative Linux. It runs Linux alongside the Windows kernel so you can use both at the same time. I've never used it, but here:
http://www.colinux.org/
What I think now gets you the pros of both options is using Docker: it gives you Cygwin's simplicity and a VM's functionality, with better performance.
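A tiny sketch of what that looks like for the Ruby/git workflow described above, run from PowerShell or a Unix-style shell (the image tag and paths are just examples):
docker run -it --rm -v "${PWD}:/work" -w /work ruby:2.7 bash
Inside that shell you get a full Linux userland (apt, gem, rvm and so on) while your source files stay on the Windows side.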
Linux in a virtual machine will give you the experience you want more than Cygwin or any mock shell, as I like to call them.
Running VMs does require a lot of RAM, though, depending on whether you want a desktop version of Linux or just the command line.
At work I have a PC with 8 GB of RAM and I run 64-bit Ubuntu as my main OS, two Ubuntu servers (dev environments for two different projects), a Windows 7 VM and a Windows XP VM.
I can run the two Ubuntu servers and one other VM at the same time; the key here is more RAM if you want to be able to run VMs.
If you're going to be working with Ruby then get an Ubuntu virtual machine up and running :) I've not tried Ruby, etc. on Windows, but I have heard that it is a pain to set up and configure. I use a Mac for all my Rails development, so I cannot comment on the Windows side of that.
As for virtual machine creation, I prefer VMware Workstation, but there are free alternatives such as VirtualBox and VMware Server.
I'm using a Linux VM within a Windows 7 environment, as this VM is as representative as possible of the final production environment. The whole setup is bound to the Eclipse IDE under Windows 7, so it is really great for full local testing before committing or tagging the tested version to the production servers.
As you mentioned, this takes some time to get properly set up and fully configured, so if you only need it for little tricks or tasks, you may keep using Cygwin. For example, I faced significant issues configuring Perl and compiling MySQL within Cygwin. So it's OK for basic usage, but not for taking full advantage of a full Linux environment.
Your choice strongly depends on the purpose of the final setup. A VM will handle whatever your need is; its setup cost is higher, so that time investment must be used often enough to pay off.

Running VMware in VMware?

We have a physical machine that runs VMware and hosts a VM we use for SharePoint deployment testing. That machine is old and dying, and my employer's network czars are heavily pushing hosted VMs as a replacement for outdated physical servers. I was curious about whether it's possible to run VMware inside VMware, and if so, whether there are severe performance implications. We don't require extreme performance from this setup, since it's just used for SharePoint testing and the associated SQL Server is on a different box. My guess is that we can't just use the primary hosted VM for our testing because we'll want to roll back occasionally and otherwise have more control over it, and getting buy-in for that from the network folks is unlikely. Does anyone have any experience with this?
edit: I know this nesting certainly isn't the preferred option, but (1) we want the flexibility of being able to use VMware snapshots at will and (2) the network folks will not allow us to arbitrarily roll back to a previous point in time because of the potential for removing mandated security updates. My guess is that a local desktop machine running VMware Workstation might just be the way to go. The hosted option seems attractive if it will work though since it's less machine maintenance for me to deal with.
The technical limitation with running VMware inside VMware is that VMware, Virtual PC, etc. take advantage of the virtualization features present in modern CPUs.
If two or more hypervisors are both trying to control Ring 0, there will be problems. This is something I've encountered while trying to run both VMware and Virtual PC simultaneously on my desktop: one will error out or crash.
If your hypervisor can interact with the 'parent' hypervisor, then you'll be OK. Alternatively if the child hypervisor doesn't try to use the CPU virtualization features, or entirely emulates the CPU (such as QEMU) then you should also be OK.
Basically, old-style hypervisors on old CPUs use full virtualization (slow), which would be capable of nesting with a heavy, heavy performance hit. Modern hypervisors/CPUs use hardware-assisted virtualization (near-native performance), and you'd be hard pressed to find a hypervisor that is designed for, or capable of, nested virtual machines.
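A quick way to tell, from inside a Linux guest, whether the hypervisor exposes those hardware virtualization features to the VM is to count the vmx/svm CPU flags (non-zero means they are visible):
egrep -c '(vmx|svm)' /proc/cpuinfo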
Finally, I'd really advise against running dev/test VMs on the same physical server that is running production VMs. There's just too much that can go wrong, plus the security implications: you need to manage the dev/test environment, and it sounds like you shouldn't have access to the production environment. Likewise, you probably don't want the operations team messing about with your test environment.
UPDATE: ESXi 4 now supports virtualizing itself. See this article for more information
I've never run VMware in VMware, but I've run VirtualPC inside VirtualBox without problems, so there's no fundamental reason it shouldn't work I suppose...
It sounds to me more like you have a problem with the inflexibility of your "network czars" than any technical one. If you're a developer or QA you need a testing environment where you can fool around with outdated (and potentially insecure) versions of the OS and applications, without putting the rest of the company network at risk.
Ex-VMware employee here.
Firstly, when you say Nested VMware I will assume you mean Nested ESXi. (You could also mean Workstation, Fusion, or Player).
Nested ESXi environments are unsupported and should not be used for production. These scenarios are not tested in QA and not guaranteed to work. In short, if you experience any kind of problem, VMware will not help you with this Nested ESXi setup.
With that said, yes you can do it and yes it does work. A lot of people use nested ESXi in their labs, but not in production. Previously, special configuration file edits were necessary for nested ESXi to work. I have seen environments with even 3 layers of nested ESXi servers (an ESXi VM on an ESXi VM on a physical ESXi host). More recently there is the ESXi appliance, which makes this much easier.
Have a look here:
http://www.virtuallyghetto.com/2015/12/deploying-nested-esxi-is-even-easier-now-with-the-esxi-virtual-appliance.html
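For what it's worth, the edit usually cited for recent ESXi/Workstation versions is a single line in the VM's .vmx file (check the article above for the specifics of your version):
vhv.enable = "TRUE"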
I ran into this same problem. I work at a large company where our entire infrastructure is virtual, so if you need a server you get a VMware VM. So I had a couple of Windows 2003 Server Standard Edition based Guest VM's that had 6GB of memory and 200 GB of disk space, but I wanted to run linux and a LAMP stack on them. So I tried to install VMware Workstation on one and I got an error message saying it couldn't be installed within a VM. I also tried Microsoft Virtual PC and got a similar error message. I installed Sun's VirtualBox and that installed fine, but I couldn't get the networking to work w/in the guest Ubuntu OS. My next step is to try QEMU although performance might become an issue.
You ought to have a look at Mainframes - they are Virtualised from the word go:
Hardware - runs Hypervisor Type 1 - Level 1
on this you have zVM - Type 2 Hypervisor - Level 2
on this you have zOS - your main big operating system - Level 3
and/or
on this you have zLinux - Level 3
and/or
on this you have zVM for testing next version - Level 3
and/or
on this you have zOS for testing zVM plus zOS both at next version - Level 4
So going down to level 4 is pretty common
Mind you, on a Mainframe you can have thousands of VMs running at the same time - and most sites that start using zVM/CMS and zVM/Linux usually do.
I can see two solutions for this (three if you count a VM inside a VM, which is just crazy).
New hardware, which should be robust enough to handle several VMs used specifically for testing (SharePoint, etc.). In this situation your team could be given more rights without affecting non-testing VMs.
SharePoint test VMs are moved to the main VM pool, and those who need access are given the ability to check out/deploy/roll back testing resources. This could be done directly through VMware tools or through an internal project that works against a VMware API.
This should be a joint decision between Network/Dev/Testing.
JFYI:
I tried installing and running a VMware ESXi server host (child ESXi server) as a virtual machine (on a parent ESXi server), and it runs; however, you cannot run any VMs under the child ESXi server.
I am practising VMware vSphere data center virtualization on a single physical machine. VMware Workstation is installed on Windows 8; in Workstation, I have installed Windows Server 2008 and VMware ESXi and created a VMware data center lab. There are VMs running in the lab, which confirms that we can use VMware in VMware. But it depends on your needs and on the products chosen.
You can install ESXi on VMware Workstation; it's useful for learning ESXi, but otherwise there is no reason to run VMware in VMware.
Yes, you can run VMware inside VMware. Though it's not officially supported, you can deploy VMs in the child ESX. I checked an advanced feature, passthrough of the HBA card, but it was not available in the child ESX, hence I could not present a LUN from the array.
So in production it's better not to use this.
But for training and practice it can be used.
You can do that.
You can install VMware ESXi inside a virtual machine of another VMware ESXi host.
But the performance will be very bad.
It totally works, but you can't really do it for anything other than some kind of testing or educational purpose, because you won't get support; and from my limited experience it doesn't perform that well.
Yes, you can. VMware can even detect that it's running inside another VMware machine and warn you that this VMception will cause worse performance, which it will, trust me. If you can, run the virtual machines on the version they work best with on a physical machine, to get as much performance as possible.
"whether it's possible to run VMware inside VMware" What?
I can run Windows with SharePoint in a VMware machine that's hosted somewhere.
Or, I can run Windows with SharePoint in a VMware machine that's actually inside a VMware machine that's hosted somewhere.
Why on earth would I add a level of nesting? Why not just go with Windows with Sharepoint hosted somewhere?
You can have any number of VMware instances running on a single host. Lots of different versions doing lots of different things.
Nesting them doesn't make sense.
