Running a server from inside a chroot in Ubuntu Linux

Due to the peculiar nature of the application, I'm thinking of running servers such as Apache, Tomcat from within a chroot environment.
Using schroot and debootstrap, I'm able to create a clone of my Ubuntu 10.04 (a minimal Ubuntu) inside the chroot directory. I've installed Tomcat and Apache inside the chroot. But how do I access these two servers?
Can I access them like a normal Apache/Tomcat installed on the parent server?
Can the parent OS access the Apache/Tomcat of the chroot OS?
First, which of these options are possible? Second, what caveats should I handle with each of them?
I want something like
Internet ---> [Main host Ubuntu 10.04 Apache ----> (chroot ubuntu Tomcat) ]

chrooting is one of the simplest forms of virtualization. If your application is security-sensitive, you might consider running a more full-featured solution, such as OpenVZ, Xen, KVM or VirtualBox, or commercial solutions such as VMware and a few others.
That being said, you should really view your chrooted OS as just another host on your network. When you use plain chroot, you can access it as localhost (127.0.0.1) on whatever port number you assign to it (the chrooted system effectively shares port assignments with the parent system), while other virtualization solutions let you assign a normal, separate IP to each virtual machine and run it much as you would run a separate physical box.
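For the "Internet -> host Apache -> chrooted Tomcat" layout in the question, the host's Apache can reverse-proxy to the chrooted Tomcat over loopback. A minimal sketch, assuming Tomcat listens on its default port 8080 inside the chroot and mod_proxy/mod_proxy_http are enabled:

    # Host-side Apache vhost (a sketch; the ServerName and port are assumptions).
    <VirtualHost *:80>
        ServerName example.com
        ProxyPass        / http://127.0.0.1:8080/
        ProxyPassReverse / http://127.0.0.1:8080/
    </VirtualHost>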
chrooting is a fairly "weak" security solution, as parent and child share a lot of resources almost without limitation (memory, CPU, process pool, disk space, privileges, sockets, etc.). The only limitation, in fact, is restricted filesystem access (chrooted applications can see only a portion of the whole filesystem), although that does provide some degree of isolation.
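As for starting the chrooted services from the parent system, schroot can run them in place; a sketch, assuming a chroot named "lucid" and Ubuntu 10.04's tomcat6 package:

    # Run Tomcat's init script inside the chroot, as root (chroot name and
    # package are assumptions).
    schroot -c lucid -u root -- /etc/init.d/tomcat6 start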

Related

Is running a Linux container on a Windows AWS instance possible?

I'm trying to run a Linux (Ubuntu LTS) container inside a Windows Server 2019 OS. The complication is that the Windows OS runs as an AWS instance.
I've had problems trying to achieve this, and I've read somewhat different opinions on the internet about whether or not it is possible. Some say it will be possible on a .metal instance, which is bare metal. Currently I've been trying to run it on a regular t3 instance, which has virtualization type HVM.
To sum up, my questions are:
Is running a Linux container on a Windows AWS instance possible?
If yes, how?
If not, will it be possible on a bare metal instance?
Please keep in mind that I need the container to run in a Windows environment due to multiple tasks the OS needs to accomplish (and I don't want multiple instances).
In order to use Docker Desktop on Windows, you need either Hyper-V or Windows Subsystem for Linux enabled (which in turn requires Hyper-V). Both solutions require VT-x capabilities, but you're running inside a VM, which makes that hard to achieve.
This is called "nested virtualization", and it is not supported on common EC2 virtual machines. (source)
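One quick way to confirm this from inside the instance (a sketch; the exact wording varies by Windows version):

    REM In an elevated Command Prompt on the Windows instance.
    REM On a non-metal EC2 instance, the "Hyper-V Requirements" section
    REM typically reports "Virtualization Enabled In Firmware: No".
    systeminfo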
You can certainly run Linux containers on a bare-metal Windows instance (but why would you? It is far cheaper and simpler to create a Linux virtual machine on EC2 and have it communicate with your Windows host). Should that still be your goal, you can install Windows Server 2019 with Hyper-V. (tutorial)
Another alternative for SMALL, SMALL things that could work without nested virtualization (I haven't tried it) would be using WSL1. (more info)
WSL1 uses a compatibility layer between Windows and Linux system calls, without actually virtualizing the operating system. Some folks have been able to install Docker 17.09 on WSL1, but this is a very adventurous path I would not recommend taking.

Linux Kernel Config Scopes within VM or Hypervisor

In production we're going to deploy a Redis server and need to set overcommit_memory=1 and disable Transparent Huge Pages in the kernel.
The issue is that currently we only have one giant server, and it is shared by many other apps. We want those kernel configs to apply only to the Redis server. I wonder if we can achieve that by spinning up a dedicated VM for Redis. Doing so in Docker certainly doesn't make sense. My questions are:
Will those kernel configs actually take effect in the Redis VM even if the host OS doesn't have the same configs? I doubt it, since the hardware resources are allocated by the host machine in the end.
Will the kernel config in the Redis VM affect other VMs that run other apps? I think it won't; I just want to confirm.
To achieve the goal, what kind of VM or hypervisor should we use?
If there's no way to do it in a VM, is having a separate (hardware) server for Redis the only way to go?
If you're running a real kernel on a virtual machine, the VM should be able to correctly handle overcommitted memory.
The host server will grant a fixed chunk of memory to the VM. The VM should manage that memory as it sees fit, including overcommitting its own address space.
This will not affect other applications running on the host (apart from the fact that it has less memory available). If it does, there is a problem with your hypervisor.
This should work with any hypervisor. KVM is a good place to start.
Note that I have not actually tried this -- experiment results are welcome!
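For reference, the guest-side settings themselves are the usual ones from the Redis documentation; a sketch to run as root inside the Redis VM (the paths are the common defaults, so verify them on your distribution):

    # Inside the Redis VM, not on the host.
    # Let the kernel overcommit memory (Redis needs this for fork/BGSAVE).
    sysctl -w vm.overcommit_memory=1     # persist via /etc/sysctl.conf
    # Disable Transparent Huge Pages for the running kernel.
    echo never > /sys/kernel/mm/transparent_hugepage/enabled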

Any way to run a Linux inside a virtual machine, inside my application?

I want to be able to distribute a Linux running inside my application. The reason is that I need to add software functionality which is most easily added inside a Linux container and distributed with the application.
Is there any way to run a VM inside a C/C++ application on Windows, OSX, Linux?
VirtualBox has an API for creating/running VMs. The program Vagrant uses this to give developers a simple cross-platform way to develop. You can run vagrant up from Windows, Linux or OS X, and it does the same thing.
You can also script adding ports to your VM, so your C++ program could say "VirtualBox, boot me this image", then just connect to a TCP port to talk to the "Linux program". But debugging problems will be hard.
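A sketch of what that scripting could look like from the host side (the VM name "linux-appliance" and the port number are assumptions):

    # Forward host port 5555 to the same port in the guest, then boot headless.
    VBoxManage modifyvm "linux-appliance" --natpf1 "app,tcp,,5555,,5555"
    VBoxManage startvm "linux-appliance" --type headless
    # ...the C++ program now talks to the guest via 127.0.0.1:5555...
    VBoxManage controlvm "linux-appliance" poweroff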
But if your goal is to sell a Linux program to non-Linux desktop people, it's probably best for you and your sanity to bite the bullet and port it to Windows/Mac. (Or go Cloud and sell it as a service.)
Two frameworks come to mind:
User Mode Linux runs the Linux kernel as an application. This gives you ultimate control over launching and managing the virtual machine from within a Linux application.
libvirt provides a toolkit for programmatically managing all manner of virtual machines.
These may both require a Linux host. For other host operating systems, it may be necessary to manage the virtual machine manually -- or with ad hoc scripting.
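libvirt's virsh front end gives a feel for what the API exposes; a sketch, assuming a libvirt-managed VM named "linux-guest" already exists:

    # Start a libvirt-managed VM and query its state (the VM name is an assumption).
    virsh -c qemu:///system start linux-guest
    virsh -c qemu:///system domstate linux-guest

The same operations are available programmatically through libvirt's C bindings, which is what virsh itself uses.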
QEMU can run a VM, and it can be compiled on Windows, Linux and OS X. http://wiki.qemu.org/Main_Page
QEMU is written in C, so in theory it could be embedded in a C/C++ program and used to run a Linux VM.
An example of QEMU running Puppy Linux: http://www.erikveen.dds.nl/qemupuppy/
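A sketch of booting such an image headless from the command line, with a host port forwarded into the guest (the image name and port are assumptions):

    # Boot a Linux disk image with no display; host port 5555 reaches guest port 5555.
    qemu-system-x86_64 -m 1024 -display none \
        -drive file=linux-guest.qcow2,format=qcow2 \
        -netdev user,id=n0,hostfwd=tcp:127.0.0.1:5555-:5555 \
        -device e1000,netdev=n0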

Is it safe to use a virtual server as a dev environment, symlinking to files on the host?

I used to use MAMP (or just a local Apache/PHP/MySQL stack) to work on web projects. I've since graduated to a live Ubuntu server which is much closer to the production environments for the sites I work on.
Now I'm trying to take this a step further to optimize my workflow. My goal is to have a Linux server running in VirtualBox that automounts a local folder share (from the host) and uses a symlink to gain access to the files (i.e. client:/var/www/dev is a symlink to host:/Users/charlie/dev/).
I don't want to keep my files stored on the virtual server if it can be avoided. I prefer having direct local access to the files, without waiting out buffering issues between the host and the client. For example, if I have several files that live on the client open in my IDE and I close my laptop, there's a hiccup as soon as I open it again: my IDE has open projects that reference folders and files on a network share that isn't available yet. In the few seconds it takes the virtual machine to wake up, OS X has already reported that the share can't be found and was disconnected, the IDE chokes up, etc.
So what am I asking? Well, is this safe / are there obvious pitfalls I'm not seeing / better ways to do this?
Edit: For anyone that stumbles upon this post, the final setup is a Linux virtual machine running in VirtualBox on a Mac with NFS and a symlink from my Apache web root to my mount.
I used NFS Manager (http://www.bresink.com/osx/NFSManager.html) to set up the NFS server on my host computer with user mapping to my primary account. This ensures that when my VM mounts the NFS share, it can do whatever it needs (reading, writing, modifying). Then I added this line to /etc/fstab on my VM to automount the share on boot: "123.456.89.1:/Users/charlie/nfs_share /mnt/nfs_share nfs" (where 123.456.89.1 is my host IP on the virtual NAT).
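NFS Manager is just a front end for the system's NFS server; the equivalent hand-written entry on the OS X host would look something like this (the path is from the post; the user name and network options are assumptions):

    # /etc/exports on the host (a sketch; NFS Manager generates something similar).
    # -mapall maps all client access to one local account for full read/write;
    # the network/mask should match the VM's NAT subnet (this mirrors the
    # placeholder address from the post).
    /Users/charlie/nfs_share -mapall=charlie -network 123.456.89.0 -mask 255.255.255.0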
The result is a killer development environment where I can use Finder, Aptana (or whatever your editor of choice is), Photoshop, etc. to work on files locally and simultaneously test them in my "real" Apache/Lighttpd/MySQL/PHP environment!
I am using the exact same setup for accessing my documents folder between my Ubuntu host and the Windows guest, and likewise on my iMac. The only issue when editing on the two platforms is the CR/LF line endings, but that will be no problem with your setup.

Access files on a Linux Mint VirtualBox guest

I have a Linux VirtualBox guest that I use for development. The stuff I'd like to share with the host operating system resides in /var/www. I tried setting up a Samba share, but I can't seem to see my VirtualBox guest on the network. Does anyone know how I'd go about doing this? I searched, but the only thing I've found is VirtualBox's shared folders, which isn't quite what I'm looking for.
By default, VirtualBox uses an internal NAT implementation for networking, which only allows the guest to access the network, not the other way around.
To access the guest from the host you have to use a different networking mode.
My preferred solution is host-only networking, because the guest appears as a proper networked machine to the host, without being exposed to the public network.
Bridged networking would also do, but you'd have to secure the guest as if it were a separate machine, and there may be networks where having two MAC addresses for a single physical PC is not advised or even allowed.
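Setting that up from the command line is short; a sketch, assuming the guest VM is named "mint-dev":

    # Create a host-only interface (vboxnet0) and attach the guest's NIC to it.
    VBoxManage hostonlyif create
    VBoxManage modifyvm "mint-dev" --nic1 hostonly --hostonlyadapter1 vboxnet0

By default the host-only network hands the guest an address in 192.168.56.0/24, which the host can then reach directly (e.g. for the Samba share).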
Why aren't shared folders what you are looking for, anyway?
