Is Ubuntu JeOS good for production purpose? - linux

Actually, I want to use JeOS for our web server. Is it a good choice?

Thanks for piquing my interest. From the Ubuntu website:
Ubuntu Server Edition JeOS (pronounced "Juice") is an efficient variant of our server operating system, configured specifically for virtual appliances. Currently available as a CD-ROM ISO for download, JeOS is a specialised installation of Ubuntu Server Edition with a tuned kernel that only contains the base elements needed to run within a virtualized environment.
It looks promising to me - I run several full Ubuntu 8.04 VMs so I'll certainly check it out. Why not just try it?

Be aware that the kernel it installs is stripped down to contain only what is required for virtual machines, so you might have problems accessing the network if you run it on a real machine. (Note also that the install-CD kernel isn't the same as the installed kernel.)
If you can bypass that (IIRC I booted from the CD, and downloaded the normal server kernel and it all worked fine), then you end up with an absolutely minimal Linux system, but backed by the full Ubuntu repositories, so it's an excellent base for a server.
Also note that minimal really means minimal - no cron by default, for example.
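If you need to do that kernel swap yourself, the rough shape of it is below; the package name is from memory for the 8.04-era archives, so check it before relying on it:

    # from the booted JeOS install, assuming working network access
    sudo apt-get update
    sudo apt-get install linux-image-server   # standard (non-virtual) server kernel, name may differ per release
    sudo reboot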

Is it a good choice?
If you plan to run JeOS in a virtual machine, then yes, this is a good choice.

Related

Cygwin vs Linux Virtual Machine for Development?

< skippable part >
I work in IT (mostly desktop support and network administration) in a Windows environment, and I occasionally program.
A couple weeks ago, I decided I couldn't be as effective as I want to be without a Bash environment for my command prompt needs. This is especially true when I am using Ruby and git. I used Msysgit for a while, but I just didn't like how it wasn't extensible like Linux. So, I installed Cygwin and played around with that for a couple weeks.
As great as Cygwin is, it seems like it is meant to be a souped-up command prompt, and its compatibility with Linux is just a pleasant side effect. This became especially evident when I tried to upgrade Ruby to 1.9.3 (it worked, but it wasn't straightforward), install rvm (never worked), and install RMagick (may or may not work, but looks like a headache).
So, now I'm considering running Linux in a virtual machine. But I'm worried that might be another can of worms and I'll have wasted hours before I find that out. I like that Cygwin runs in Windows and I get to use my IDE, user folder, and more with it. But I don't like that support for it is not as thorough as for a major distro.
< /skippable part >
Does anyone here have insight on using Cygwin vs running a Linux virtual machine?
Any advice on setting up a Linux development environment in a virtual machine within Windows?
I have faced the same issues before, and the best solution in my experience is simply two workstations :).
Apart from that having Linux running in a virtual environment is way better.
First of all, you will have full Linux capabilities (except 3d acceleration, but you probably don't need that).
You will have the capability of creating snapshots and revert back to them when things go wrong!
You can start multiple environments using templates, which is very convenient.
The only downfall I can think of is performance issues of the host machine.
If it's a normal workstation/PC, an IDE + one virtual machine + a browser with 100+ tabs just makes it slow.
1: Cygwin is good for quick hacks and for being able to access host-OS resources (you can run IE, for example, from a bash script). For something tightly integrated and some "real" work, go with a VM. It will emulate everything and separate development from the real machine, which may be a good thing in some cases... as a plus, it simulates a real server :)
2: In VirtualBox at least, you have shared folders: you can share a local folder and see it in the VM as a local folder (local or as a Windows share - it actually depends). Then you can use that "entry point" to symlink stuff into the VM and do what you need while the real files stay on the real (host) machine.
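As a concrete sketch of point 2 (the share name and paths are made up, and it assumes the Guest Additions are installed in the VM):

    # inside the Linux guest: mount the VirtualBox shared folder named "code"
    sudo mkdir -p /mnt/hostcode
    sudo mount -t vboxsf code /mnt/hostcode
    # symlink it to wherever the project expects to live
    ln -s /mnt/hostcode/myproject ~/myproject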
SSH into a linux box. This is what everyone does. Why isn't this the answer?
There is something I have heard of called Cooperative Linux. It runs Linux alongside the Windows kernel so you can use them at the same time. I've never used it, but here:
http://www.colinux.org/
What I think gets you the pros of both options now is Docker: it gives you Cygwin's simplicity and a VM's functionality, with better performance.
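For example, something along these lines gives you a throwaway Linux shell with a Windows project folder mounted inside it (the exact host-path syntax for -v depends on your Docker-on-Windows setup, so treat that part as an assumption):

    # start an interactive Ubuntu container and mount a host folder at /work
    docker run -it --rm -v /c/Users/me/projects:/work ubuntu bash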
Linux in a virtual machine will give you the experience you want more than cygwin or any mock shell as I like to call them.
Running VMs, though, requires a lot of RAM, depending on whether you want a desktop version of Linux or just a command-line version.
At work I have a PC with 8 GB of RAM and I run 64-bit Ubuntu as the main OS, two Ubuntu servers (dev environments for two different projects), a Windows 7 VM, and a Windows XP VM.
I can run the two Ubuntu servers and one other VM at the same time; the key here is more RAM if you want to be able to run VMs.
If you're going to be working with Ruby then get an Ubuntu virtual machine up and running :) I've not tried Ruby, etc. on Windows, but I have heard that it is a pain to set up and configure. I use a Mac for all my Rails development so I cannot comment on the Windows side for that.
As for virtual machine creation, I prefer VMware Workstation, however there are free alternatives such as Virtualbox and VMware Server.
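Once the Ubuntu VM is up, the rvm install that fought you under Cygwin is typically just a couple of commands; the URL and package list are as I remember them from the rvm docs, so double-check them first:

    # inside the Ubuntu VM
    sudo apt-get install curl git build-essential
    curl -sSL https://get.rvm.io | bash -s stable
    source ~/.rvm/scripts/rvm
    rvm install 1.9.3 && rvm use 1.9.3 --default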
I'm using a Linux VM within a Windows 7 environment, as the VM is as representative as possible of the final production environment. The whole setup is bound to the Eclipse IDE under Windows 7. So this is really great for full local testing before committing or tagging the tested version to the production servers.
As you mentioned, this takes some time to get properly set up and fully configured. So if you only need it for little tricks or tasks, you may as well keep using Cygwin. For example, I faced significant issues configuring Perl and compiling MySQL within Cygwin. So it's fine for basic usage, but not for taking full advantage of a Linux environment.
Your choice strongly depends on the purpose of the final setup. A VM will do the job whatever your need is, but its setup cost is higher, so that time investment needs to be amortized over frequent use to pay off.

building linux kernels

I just got the book Linux Kernel Development by Robert Love. It has lots of places where you are required to modify and build the kernel. So how should I go about it? Is it better to use a VM, or should I somehow get a proper test machine for it, since I don't want to goof up my system and data?
A VM has the advantage of offering snapshots. These allow you to save the state of the machine - if the kernel build doesn't work you simply restore the snapshot, and you are able to take as many snapshots as you have disk space to store them. You are also able to clone and re-deploy the VM image, so you have many identical systems to test on.
The same experiment on a physical machine would require far greater effort (ghosting/cloning the disk, re-installing the OS etc).
VirtualBox is free, cross-platform virtualisation software.
There are a lot of tutorials on the web about this topic, e.g. here:
http://www.digitalhermit.com/linux/Kernel-Build-HOWTO.html
http://www.kernel.org/pub/linux/kernel/people/gregkh/lkn/lkn_pdf/ch04.pdf
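For orientation, the build-test cycle those guides walk through boils down to something like this (the version number is just an example):

    tar xjf linux-2.6.30.tar.bz2 && cd linux-2.6.30
    cp /boot/config-$(uname -r) .config   # start from the running kernel's config
    make oldconfig                        # answer prompts for new options
    make -j4                              # build the kernel image and modules
    sudo make modules_install             # install modules under /lib/modules/<version>
    sudo make install                     # install the image and update the bootloader
    sudo reboot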
You could do either or both. An alternative somewhere in between is to setup a dual boot. This is a little riskier than a VM, but not too much.
coLinux
or run a Linux ISO image using QEMU on Windows
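A minimal QEMU invocation for that, assuming you've downloaded an Ubuntu ISO and want a scratch disk (the filenames are placeholders):

    qemu-img create -f qcow2 disk.img 10G
    qemu-system-x86_64 -m 1024 -hda disk.img -cdrom ubuntu.iso -boot d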

Is ubuntu 9.04 good choice for embedded linux application development?

I want to change the Linux distro on my development (host) machine which I use for embedded development.
I cross-compile applications for many different processors. I need to download various libraries to evaluate their functionality, performance, and stability on different devices, as well as on the PC.
So Is ubuntu 9.04 a good choice for me?
Thanks,
Sunny.
If you are using gcc or another source-based compiler that runs on Linux, then I would say yes, you want a Linux distro, and Ubuntu is currently the most popular/best. I would try to avoid distro-specific things; drive down the middle of the road and you should be able to use any distro equally well.
That will largely depend on your needs. For an embedded system, I'd go with any distribution that sports a very small footprint and supports the necessary hardware.
Depending on your hardware, Debian might work fine. You could create your image with debootstrap which allows for fairly small customized installs. It still includes apt and other things which might not be desirable, although that could be to your benefit if you need to push out updates.
If you did go with Debian, you could most likely do all your development on Ubuntu and then push to your embedded system.
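A debootstrap run for a minimal root filesystem looks roughly like this; the architecture, suite, target directory, and mirror are all placeholders to adapt, and a foreign-architecture bootstrap needs the two-stage dance shown in the comments:

    # first stage on the build host
    sudo debootstrap --foreign --arch=armel stable /srv/target-rootfs http://ftp.debian.org/debian/
    # second stage runs on (or emulating) the target:
    #   chroot /srv/target-rootfs /debootstrap/debootstrap --second-stage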
I use Ubuntu for my host system and a chrooted Gentoo install for building apps for an embedded target. I found Gentoo was a good choice as it is source-distributed and it's easy to select which version of a particular library is installed.
One thing that is good to know is that Ubuntu and its derivatives use dash, not bash, as /bin/sh. This confuses crosstools and can give you severe headaches.
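You can check what /bin/sh points to and switch it back to bash on Debian/Ubuntu like this:

    ls -l /bin/sh                   # usually a symlink to dash on Ubuntu
    sudo dpkg-reconfigure dash      # answer "No" to make /bin/sh point to bash instead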

Which linux server distro to use for tomcat?

ubuntu 9.04, fedora 11, redhat...
what are the differences from a web server/development standpoint?
None. They differ only in how they package things, but they're all essentially the same - same operating system, same software. Some people get quite emotional about this choice, but I've used several, and there's nothing to pick between them these days.
I like to choose linux distros based on whichever ones have the most help available online. I'd probably go with CentOS or Ubuntu for that reason.
Use whatever your hardware vendor is happy to support. If you're serious about running a production system, you will use a supported OS.
Having said that, most vendors don't officially support Centos, however it is sufficiently similar (i.e. almost identical) to Redhat Enterprise that they ignore the difference.
Your code might run anywhere, but your hardware vendor's tools probably won't. You'll want to use those.
For Tomcat there is no real difference, but as a server: Ubuntu is more cutting edge in terms of kernel and packages, and its package management is superior and easier. If you prefer Ubuntu, use the Server Edition; it is optimized for server use. CentOS is said to be solid, but I don't have much experience with it. If you are considering a virtual server, keep in mind that different distros have different levels of support for different virtualization technologies.
If you are new to Linux, then I definitely recommend Ubuntu. You can be up and running in no time with apt-get.
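On Ubuntu, for example, getting Tomcat running is roughly the following; the package name is the 9.04-era one as I recall it, so check what your release actually ships:

    sudo apt-get install tomcat6
    sudo /etc/init.d/tomcat6 start     # then browse to http://localhost:8080/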

Cross Compiling Linux Kernels and Debugging via VMware

I'm considering doing some Linux kernel and device driver development under a vmware VM for testing ( Ubuntu 9.04 as a guest under vmware server 2.0 ) while doing the compiles on the Ubuntu 8.04 host.
I don't want to take the performance hit of doing the compiles under the VM.
I know that the kernel obviously doesn't link to anything outside itself so there shouldn't be any problems in that regard, but
are there any special gotchas I need to watch out for when doing this?
Beyond still having a running computer when the kernel crashes, are there any other benefits to this setup?
Are there any guides to using this kind of setup?
Edit
I've seen numerous references to remote debugging in VMware via Workstation 6.0 using GDB on the host. Does anyone know if this works with any of the free versions of VMware, such as Server 2.0?
I'm not sure about the Ubuntu side of things. Given that you are not doing a real cross-compilation (i.e. x86 to ARM), I would consider using make-kpkg. This should produce an installable .deb archive with a kernel for your system. It works for me on Debian; it should work for you on Ubuntu as well.
more about make-kpkg:
http://www.debianhelp.co.uk/kernel2.6.htm
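The basic make-kpkg run, from the top of the configured kernel source tree (the revision string is arbitrary):

    make-kpkg clean
    fakeroot make-kpkg --initrd --revision=1.0.custom kernel_image
    sudo dpkg -i ../linux-image-*.deb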
I'm not aware of any gotchas. But basically it depends on what kind of kernel part you are working with. The more specialized the hardware or driver you need, the more likely a VM won't work for you.
Probably faster boots, and my favourite: the ability to take a screenshot (cut'n'paste) of the panic message.
Try browsing the VMware communities. This thread looks very promising, although it discusses the topic for MacOS:
http://communities.vmware.com/thread/185781
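For reference, the mechanism discussed there is VMware's built-in gdb stub: you add a line to the guest's .vmx file and attach gdb from the host. The option name and port below are the ones commonly reported for Workstation; I don't know whether Server 2.0 exposes the same stub, so verify before counting on it:

    # in the guest's .vmx file (VM powered off):
    #   debugStub.listen.guest32 = "TRUE"     # reportedly guest64 / port 8864 for 64-bit guests
    # then on the host:
    gdb ./vmlinux
    (gdb) target remote localhost:8832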
Compiling, editing, compiling is quite quick anyway; you don't recompile your whole kernel each time you modify the driver.
Before crashing outright, you can hit deadlocks, bad resource usage that leaves a module unremovable, memory leaks, etc.: all kinds of things that need a reboot even if your machine did not crash. So yes, this can be a good idea.
The gotchas can come in the form of the install step and module dependency generation, since you don't want to install your driver on the host, but on the target machine.
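One way around that is to build the module out of tree on the host and only load it in the guest; the paths, module name, and guest hostname here are made up:

    # on the host, against the guest's kernel headers/source tree
    make -C /path/to/guest-kernel-tree M=$PWD modules
    # copy the module into the VM and load it there
    scp mydriver.ko user@guest-vm:/tmp/
    ssh user@guest-vm 'sudo insmod /tmp/mydriver.ko && dmesg | tail'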

Resources