Deploying a Haskell web application to a low-spec server - haskell

I'm looking for a way to deploy a Haskell web application on a low-spec toy server. The server specs:
OS: debian stable (squeeze) i386
CPU: 1 GHz Pentium IV
RAM: 512 MB
Storage: 512 MB compact flash (mounted on /var), 4 GB USB compact flash (mounted on /)
The server runs fine, it doesn't see much traffic (it's mainly used by me, friends, and family members), and I can afford to run it from my living room because it's completely silent and draws very little power (around 10 W idle, 40 W peak).
Quite obviously, I would like to avoid installing the entire Haskell Platform and compiling on the server - I'd run out of disk space fairly quickly, and compilation is bound to take ages due to the slow storage. I can't just deploy binaries from my development machine though, because that one runs Debian testing amd64, so the binaries won't be compatible. My ideas so far:
1. install a VM with debian/i386 to build on
2. figure out a way to build i386 binaries on amd64
3. compile to C on the development machine, copy the C sources to the server, and finish the build there (installing gcc or clang on the server is probably acceptable)
4. other ideas?
Which one sounds the most promising? Are options 2 and 3 even possible?
Also, I'm a bit concerned about libraries: the application depends on a few system libraries such as libcairo. Installing them on the server is not a problem, but I wonder whether this would work (library versions, etc.), especially for option 2.

I haven't tried this with Haskell, but with similar requirements in the past I've found it simplest to just set up a VM with the same version of Debian as the target system. That means you don't need to worry about library versioning, etc.
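For what it's worth, a typical workflow under that approach might look roughly like this (a sketch only; it assumes a cabalized project, and the binary name "mysite" and the server address are made up):

    # Inside the i386 Debian squeeze VM, in the project directory:
    cabal update
    cabal install          # builds the dependencies and the package itself

    # See which shared libraries the resulting binary expects, so you know
    # what to apt-get install on the server (libgmp, libcairo, ...):
    ldd ~/.cabal/bin/mysite

    # Copy the finished binary over and run it on the server:
    scp ~/.cabal/bin/mysite user@server:/usr/local/bin/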

Related

Deploy 32- or 64-bit Linux as host machine for embedded development

Of course my target Linux is running on a 32-bit MCU. So are there any restrictions if my host environment is running on a 64-bit system?
Or should I just take the 32-bit host version and not care about the 64-bit version? I mean, the only thing I am doing is building applications for my embedded device. Or are there any speed advantages on a 64-bit host system regarding GCC compilation, Qt programming, etc.?
What is your personal point of view on this?
Technically, no real change happens.
But, from my personal experience, you may encounter problems if you have to copy the development environment to another computer and you don't want to do a complete setup (for example: a broken host, or having to maintain the code after several months, possibly not by the original author, when nobody remembers how the setup was done when development started). Using a 64-bit host limits this "portability". Even now that there are multiarch distributions like Debian, you can face additional problems if you need to quickly set up another machine.
This is why I ended up using virtual machines running 32-bit GNU/Linux: if you need to quickly set up a new computer, just copy a bunch of files, make sure you have VirtualBox or VMware installed on the new PC, and you're almost done.

Compiling for many Linux distributions

In short, I'm about to release an application written in OCaml, and I was planning to distribute it as source code.
The problem is that the OCaml development system is neither lightweight nor commonly installed, so I would also like to release binaries for the various operating systems.
Windows is not a problem, since I can compile it with Cygwin and distribute it with the required DLLs.
OS X is not a problem either, since I can compile and distribute it easily (no external dependencies, from what I've tried).
When it comes to Linux, though, problems arise, since I don't really know the best way to compile and distribute it. The program itself doesn't depend on anything (everything is statically linked), but how do I cover many distributions?
I have a virtualized Ubuntu Server 10 on an amd64 architecture, which I used to test the program under Linux, and everything works fine. Of course, if I try to move the binary to a 32-bit Ubuntu it stops working, and I haven't been able to try different distributions... Are there tricks for managing this kind of issue? (It seems to be a recurring one.)
for example:
can I compile both 32-bit and 64-bit binaries from the same machine?
will a binary compiled under Ubuntu also run on other distributions?
which "branches" should I consider when wanting to cover as many distros as possible?
You can generally produce 64 and 32 bit binaries on a 64 bit machine with relative ease - i.e. the distribution usually has the proper support in its compiler packages and you can actually test your build. Note that the OS needs to be 64 bit too, not just the CPU.
Static binaries generally run everywhere, provided adequate support from the kernel and CPU - keep an eye on your compiler options to ensure this. They are your best bet for compatibility. Shared libraries can be an issue. To deal with this, binaries linked with shared libraries can be bundled with those libraries and run with a loader script if necessary.
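For example (a sketch only; "myapp" and the lib/ directory are hypothetical names), you could run ldd on the build machine to find the needed .so files, copy them into a lib/ directory next to the binary, and ship a small wrapper such as:

    #!/bin/sh
    # run-myapp.sh - start the bundled binary with the bundled libraries.
    # Assumed layout: ./myapp plus ./lib/ holding the .so files ldd reported.
    HERE=$(dirname "$0")
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/myapp" "$@"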
You should at least target Debian/Ubuntu with dpkg packages, Red Hat/Fedora/Mandriva with RPM, and SUSE/openSUSE with RPM again (I mention these two RPM cases separately because you might need to produce separate packages for these "families" of distributions). You should also provide a .tar.bz2 or a .run installer for the rest.
You can have a look at the options provided by e.g. Oracle for Java and VirtualBox to see how they provide their software.
You could look at building it in the openSUSE Build Service. Although run by openSUSE, this will build packages for:
openSUSE
SUSE Linux Enterprise variants
Mandriva
Fedora
Red Hat Enterprise/CentOS
Debian
Ubuntu
The best solution is to release the source code under a free license. You can package it for a couple of distributions yourself (e.g. Debian, Fedora), then cooperate with other people on porting it to others. The maintainers will often do most of this work with only a few upstream changes required.
Yes, you can compile for both 32 and 64 bits from the same machine:
http://gcc.gnu.org/onlinedocs/gcc/i386-and-x86_002d64-Options.html
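For instance, with gcc on a 64-bit host (hello.c is just a placeholder, and the 32-bit/multilib development packages need to be installed, e.g. gcc-multilib on Debian/Ubuntu):

    # 64-bit build (the default on an amd64 host)
    gcc -m64 -o hello64 hello.c

    # 32-bit build on the same machine
    gcc -m32 -o hello32 hello.c

    # Verify the architectures of the results
    file hello64 hello32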
Most likely a binary built on Ubuntu will run on other distributions; the only thing you need to worry about is whether you are using shared libraries (especially if you use some GUI framework or things like that).
Not sure what you mean by branch, but if you are talking about distribution, I would use the most vanilla Ubuntu distribution...
I'd recommend you just package 32- and 64-bit binaries as .deb and RPM; that way you can hit most of the major distros (Debian, Fedora, openSUSE, Ubuntu).
Just give clear installation instructions regarding dependencies, command-line fu for other distros, etc., and you shouldn't have much of a problem just distributing a source tarball.

What could prevent a binary compiled on one platform from running on a different Linux distribution?

We have two different compilation machines: Red Hat AS4 and AS5. Our architects require us developers to compile our programs on both platforms each time before copying them to their respective production machines.
What could prevent us from compiling our application on one machine only (say the Red Hat AS4, for instance) and deploying that binary on all target platforms instead?
What is your point of view? Could you pinpoint specific issues you've encountered with doing so, and what issues might I face?
You could run into shared library incompatibilities by building on one system and running on another. It's not especially likely going between successive versions of Red Hat, but it is possible.
Another problem you could run into is if one system is 32 bits and the other 64 bits. In that case, the app compiled on the 64 bit machine wouldn't run on the 32 bit machine.
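A quick sanity check before deploying is to compare what was actually built with what the target can provide, for example (the binary name is illustrative):

    # On the build machine: what did we produce?
    file ./myapp        # shows 32-bit vs 64-bit, dynamically vs statically linked

    # On the target machine: are all shared libraries resolvable?
    ldd ./myapp         # any "not found" line means a missing or mismatched library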
Prevent? Nothing. An application that runs on EL4 should run on EL5 as well, barring things like external applications being different versions or libraries being obsoleted. However, Red Hat likes to do all sorts of tweaks to gcc that involve security and code optimization, and you'll miss out on any improvements in EL5 if you just copy the EL4-compiled binary.
Besides, everyone needs a break.
Whoa, compiling stuff and then having developers copy the binaries into production? That's a pretty wacky process you've got there.
Yes of course you can run the same binaries on RHEL 4 and 5 provided they are the same architecture and you have the dependencies installed.
I would strongly recommend that you build your binaries into an RPM; then you can have dependencies declared for it, which will ensure it can only be installed when its dependencies are satisfied (a rough sketch of the workflow follows below).
Moreover, it will enable your QA team** to install the same binary and carry out testing on a non-production system and the like, which they will definitely want to do before they let the software anywhere near the Ops team***, who will then deploy it, knowing that the relevant processes, including testing, have been carried out.
** You have one of those, right?
*** surely you have one of those!
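A minimal sketch of that workflow, assuming the rpmdevtools/rpmbuild tooling and a made-up package name (the contents of the spec file itself are omitted here):

    rpmdev-setuptree                            # creates ~/rpmbuild/{SPECS,SOURCES,...}
    cp myapp ~/rpmbuild/SOURCES/
    rpmbuild -bb ~/rpmbuild/SPECS/myapp.spec    # build the binary package
    rpm -qpR ~/rpmbuild/RPMS/x86_64/myapp-1.0-1.x86_64.rpm   # inspect its declared dependencies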

Building a Linux application through VMware

I used to develop using Visual Studio on Windows... (C++)
We recently migrated our app to Linux (Red Hat), and currently each employee builds the app in his own virtual machine using VMware. Our native OS is still Windows.
At first, it seemed that building with g++ was faster than using the VS compiler; however, after some time, it turned out to be pretty slow. Is it because we're using VMware?
Are there things we can do to accelerate the build process?
g++ is not a speed demon, but it performs well. Yes, a VM can have unsteady performance, especially on disk access. You can always try ccache to avoid recompiling the parts you don't need to.
Or ditch VMware (and Windows underneath it) and do it all on Linux, either with a dedicated build box or on your own machine. If you have to have a full-featured GUI for writing code, Qt Creator is quite up to the task (no, it's not tied to writing only Qt applications).
I never really noticed that g++ was slower than VS or the opposite, but there are ways to make g++ go a lot faster.
ccache, for instance. I tried it and it really speeds up compilation.
ccache is a compiler cache. It speeds up recompilation of C/C++ code by caching previous compilations and detecting when the same compilation is being done again.
If you're working on a multicore machine, you probably want to do multiprocess compilation; if you're using make, you can run make -jX, where X is your number of cores. Note that you'll have to enable multiple cores on your virtual machines. (A combined example is sketched at the end of this answer.)
Disable compiler optimizations.
That said, whatever you do, compilation in a virtual machine won't be as efficient as compilation on a real machine.
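For reference, a minimal way to combine the two suggestions above (ccache plus parallel make), assuming a gcc/make-based build:

    # Install ccache from your distro's repositories (e.g. yum install ccache)
    # and route compiler invocations through it:
    export CC="ccache gcc"
    export CXX="ccache g++"

    # Parallel build: X = number of cores available to the VM, here assumed to be 4
    make -j4

    # Check how well the cache is doing
    ccache -s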

Cross Compiling Linux Kernels and Debugging via VMware

I'm considering doing some Linux kernel and device driver development under a VMware VM for testing (Ubuntu 9.04 as a guest under VMware Server 2.0) while doing the compiles on the Ubuntu 8.04 host.
I don't want to take the performance hit of doing the compiles under the VM.
I know that the kernel obviously doesn't link to anything outside itself, so there shouldn't be any problems in that regard, but
are there any special gotchas I need to watch out for when doing this?
Beyond still having a running computer when the kernel crashes, are there any other benefits to this setup?
Are there any guides to using this kind of setup?
Edit
I've seen numerous references to remote debugging in VMware via Workstation 6.0 using GDB on the host. Does anyone know if this works with any of the free versions of VMware, such as Server 2.0?
I'm not sure about the Ubuntu side of things. Given that you are not doing a real cross-compilation (i.e. x86 -> ARM), I would consider using the make-kpkg package. This should produce an installable .deb archive with a kernel for your system. This works for me on Debian; it might work for you on Ubuntu.
More about make-kpkg:
http://www.debianhelp.co.uk/kernel2.6.htm
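A typical invocation looks roughly like this (the --append-to-version suffix and the resulting file names are just examples):

    # In the unpacked, configured kernel source tree:
    sudo apt-get install kernel-package fakeroot
    make-kpkg clean
    fakeroot make-kpkg --initrd --append-to-version=-custom kernel_image kernel_headers

    # The resulting .deb files land in the parent directory; install in the guest with:
    sudo dpkg -i ../linux-image-*-custom*.deb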
I'm not aware of any gotchas, but basically it depends on what kind of kernel part you are working with. The more specialized the hardware/driver you need, the more likely a VM won't work for you.
Probably faster boots, and my favorite: the possibility of taking a screenshot (cut'n'paste) of the panic message.
Try browsing the VMware communities. This thread looks very promising, although it discusses the topic for Mac OS:
http://communities.vmware.com/thread/185781
Compiling, editing, compiling is quite quick anyway; you don't recompile your whole kernel each time you modify the driver.
Before crashing, you can have deadlocks, bad resource usage that leads to an unremovable module, memory leaks, etc.: all kinds of things that need a reboot even if your machine did not crash, so yes, this can be a good idea.
The gotchas can come in the form of the install step and module dependency generation, since you don't want to install your driver on the host but on the target machine.
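For an out-of-tree driver, a common pattern is to build against the guest's kernel source on the host but do the install/depmod on the guest (module and path names here are placeholders):

    # On the host: build the module against the guest's kernel source tree
    make -C /path/to/guest-kernel-source M="$(pwd)" modules

    # Copy only the .ko into the guest and finish the install there:
    scp mydriver.ko user@guest:/tmp/
    ssh user@guest 'sudo cp /tmp/mydriver.ko /lib/modules/$(uname -r)/extra/ \
        && sudo depmod -a && sudo modprobe mydriver'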
