What are the Yesod system requirements? - haskell

I'm currently looking for a VPS to deploy a Yesod site on, and I was wondering: what are the system requirements for running Yesod? I will be using Nginx with Warp as the system configuration.

There are no hard-and-fast rules here, but I comfortably run about 5 Yesod-powered sites with Nginx and PostgreSQL on a micro EC2 instance (micro being the instance size, not a random adjective).
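Since you mention Nginx with Warp: the Nginx side is just a reverse proxy in front of the Warp listener. A minimal sketch, assuming the app listens on 127.0.0.1:3000 (the port, site name, and paths are illustrative, not anything Yesod mandates):
sudo tee /etc/nginx/sites-available/my-yesod-site >/dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF
sudo ln -s /etc/nginx/sites-available/my-yesod-site /etc/nginx/sites-enabled/
sudo nginx -s reload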

I had a VPS and ran into trouble with the glibc version, mainly because a lot of hosting companies are quite conservative and don't offer the latest and greatest versions of the common Linux distributions. GHC won't work with older versions of glibc, although I haven't found an exact definition anywhere of how old is too old.
So one system requirement is: a recent Linux that doesn't have an ancient version of glibc.
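One way to check before you commit to a host is to compare the glibc on the VPS against the versioned symbols your binary actually references. A rough sketch (my-yesod-app is a placeholder for your compiled binary):
ldd --version                                        # on the VPS: shows the installed glibc version
objdump -T ./my-yesod-app | grep -o 'GLIBC_[0-9.]*' | sort -u   # on the build box: glibc versions the binary needs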

I currently run one Yesod app on Debian Lenny on a VDS with a 500 MHz CPU and 196 MB of RAM. I do not compile the app on the VDS; instead I upload a compiled binary. It only needs a recent libgmp, so I put one (libgmp*.so) from my desktop into the same directory as the application and run
LD_LIBRARY_PATH=. ./my-yesod-app
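If you're unsure which libraries the binary is missing on the target, ldd will tell you. A sketch of that workflow (the library path is illustrative; your system will differ):
ldd ./my-yesod-app                   # any line ending in "not found" is a library you must ship
scp desktop:/usr/lib/libgmp.so.3 .   # copy it from the build machine next to the binary
LD_LIBRARY_PATH=. ./my-yesod-app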

Related

NodeJs on an Ubuntu docker container?

I am moving my Windows-hosted SPA app into a Linux container. I am somewhat familiar with Ubuntu, so I was going to use that.
The NodeJs page on Docker Hub shows containers for several Debian versions and Alpine.
But nothing for Ubuntu.
Is Ubuntu not recommended for use with NodeJs by the NodeJs Team?
Or is it just too much work to keep NodeJs images prepped for lots of Linux distros, so the Node team stopped at Debian and Alpine?
Or is there some other reason?
Ubuntu is too heavy to use as a base image for running a Node application as a server. Debian and Alpine are much more lightweight compared to Ubuntu.
On top of that, if you have some knowledge of Ubuntu, Debian and Alpine won't be a big change. At the end of the day Ubuntu is built on top of Debian, and they're all Linux distros, so you should be fine. Especially since you only need to do your configuration steps once and save them as part of the container image: every build then produces the same container with the right setup. That's the beauty of containers.
Ubuntu is just a really heavy base and is going to add a ton of packages to the container that are most likely unnecessary. If you're going to be building production-grade containers, Alpine is usually the go-to. It has a minimal set of libraries installed, reducing the overall size of the container, and should be closest to the bare minimum your application needs to run. I'd start there.
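To make that concrete, here is a minimal sketch of an Alpine-based image for a Node app; the file names (package.json, server.js) and tag are assumptions about your project, not anything the Node team prescribes:
cat > Dockerfile <<'EOF'
FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["node", "server.js"]
EOF
docker build -t my-spa .
docker images my-spa   # compare the size against an Ubuntu-based image if you're curious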

Distributing and Updating Software Applications in a Linux Environment

Currently I'm manually distributing and updating two applications across 50 computers running CentOS 6.5 and Ubuntu 14.04. Each time a new version of either application is available, I have to copy all the files and update every computer manually. It's very time-consuming and frustrating.
To avoid this manual process across 50 computers, I'd like to maintain a central server that contains the latest versions of the applications, so that whenever I need to install or update I can just type a command on the client PC, the way we install software in CentOS and Ubuntu:
in Ubuntu
sudo apt-get install vlc
and in CentOS
sudo yum install vlc
One of the programs is written in Java and the other in Python.
I googled it and couldn't find any good, useful source on how to do this.
If someone has already done this or knows how to achieve it, please help.
You need to create packages to make this happen.
Ubuntu uses the Debian package format, so you can use Debian's New Maintainer's Guide, which is the canonical tutorial on how to create a Debian package. It makes the assumption that you're going to upload the package to Debian, which in your case isn't true, but that just means you need to skip some sections of the document.
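To get a feel for the mechanics before working through the guide, here's a hand-rolled .deb in its most minimal form; myapp, the version, and the paths are placeholders:
mkdir -p myapp_1.0-1/DEBIAN myapp_1.0-1/usr/local/bin
cp myapp myapp_1.0-1/usr/local/bin/
cat > myapp_1.0-1/DEBIAN/control <<'EOF'
Package: myapp
Version: 1.0-1
Architecture: amd64
Maintainer: You <you@example.com>
Description: In-house application
EOF
dpkg-deb --build myapp_1.0-1   # produces myapp_1.0-1.deb
sudo dpkg -i myapp_1.0-1.deb   # on a client: installs or upgrades in place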
For RPM, there isn't such a document AFAIK, but there is the book 'Maximum RPM' (which unfortunately is somewhat outdated), and Fedora has augmented that with some guidelines and best practices on its wiki. Since RHEL is created by forking Fedora and stabilizing that, and since CentOS is based on RHEL, what goes for Fedora goes for CentOS, too.
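The RPM equivalent, again as a rough sketch rather than a polished spec (names and paths are placeholders, and details vary between RPM versions):
cat > myapp.spec <<'EOF'
Name: myapp
Version: 1.0
Release: 1
Summary: In-house application
License: Proprietary
Group: Applications/System
%define debug_package %{nil}
%description
In-house application.
%install
mkdir -p %{buildroot}/usr/local/bin
install -m 0755 %{_sourcedir}/myapp %{buildroot}/usr/local/bin/myapp
%files
/usr/local/bin/myapp
EOF
rpmbuild -bb --define "_sourcedir $(pwd)" myapp.spec
# on a client: sudo rpm -Uvh myapp-1.0-1.x86_64.rpm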
These methods create packages manually, which is always the best way and will result in the fewest problems afterwards. However, they take time. If you don't want to spend that time, there are also a few tools that will automate part or all of the packaging job for you. Personally, however, I'm not a fan of those and therefore wouldn't recommend them.
Finally, another option is to not create packages at all, but to use a configuration management system like Puppet to automate the deployment. It's even available in Ubuntu and EPEL.
edit: I notice you may actually be asking about creating a repository instead. That's a different thing. There are several tools to help you do that; at their core, all they do is run createrepo for RPM packages or dpkg-scanpackages for Debian packages. You can do that yourself, or invest time in a tool like reprepro or aptly or some such.
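The core of that is small enough to sketch; the server paths and URLs below are placeholders, and you'd serve the directories with any web server:
mkdir -p /var/www/repo/debian && cp *.deb /var/www/repo/debian/
(cd /var/www/repo/debian && dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz)
# clients add to sources.list: deb [trusted=yes] http://yourserver/repo/debian ./
mkdir -p /var/www/repo/centos && cp *.rpm /var/www/repo/centos/
createrepo /var/www/repo/centos
# clients point a .repo file in /etc/yum.repos.d/ at http://yourserver/repo/centos
After that, sudo apt-get install myapp and sudo yum install myapp work just like they do for vlc.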

Application and libraries "boxing"

I'm running Ubuntu, and found a library that I'd like to run. The problem is that this library is only compatible with RedHat and Suse.
I'm looking for a way to run a Python application that uses this library in some kind of "box" with RedHat/Suse libraries/structure, but one that runs faster than VirtualBox because it's CLI-only and, ideally, shares the host's kernel. It would start automatically, run the application, and close after that.
I think I have seen an application like this before, but I can't remember the name.
It is called a container; notable examples are LXC and Docker (the latter is built atop the former and is more user-friendly).
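With Docker the whole "box" can be a one-liner once an image exists; a sketch assuming the RedHat-only library can be installed into a CentOS image and your entry point is app.py:
docker run --rm -v "$(pwd)":/work -w /work centos:6 python app.py
# in practice you'd first bake the library into an image via a Dockerfile and run that image instead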

Free Linux Cluster Build for Small-Scale Research

I need to build a small cluster for my research. It's pretty humble and I'd like to build a cluster just with my other 3 laptops at home.
I'm writing in C++. My MPI code is ready; I can simulate it using Visual Studio 2010 and it works fine. Now I want to see the real thing.
I want to do it for free (I'm a student). I have Ubuntu installed, and I wonder:
whether I can build a cluster using Ubuntu (I couldn't find a clear answer to that on the net);
if not, whether there is a free Linux distro that I can use to build a cluster;
and whether I have to install Ubuntu, or whichever distro, on all the other laptops as well. Will a different Linux distribution (like openSUSE) work with the one on the host machine, or do all of them have to be the same distro?
Thank you all.
In principle, any Linux distro will work in the cluster, and also in principle, they can all be different distros. In practice, it'll be enormously easier with everything the same, and if you pick a distribution that already has a lot of your tools set up for you, it'll go much more quickly.
To get started, something like the Bootable Cluster CD should be fairly easy -- I've not used it myself yet, but I know some who have. It'll let you boot up a small cluster without overwriting anything on the host computer, which lets you get started very easily. Other distributions of software for clusters include Rocks and Oscar. A technical discussion on building a cluster can be found here.
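If you'd rather stay on the Ubuntu installs you already have, the bare-bones recipe is an MPI implementation, passwordless SSH between the laptops, and a hostfile. A sketch with OpenMPI (hostnames, slot counts, and file names are placeholders):
sudo apt-get install openmpi-bin libopenmpi-dev openssh-server   # on every laptop
mpic++ -O2 -o mysim mysim.cpp      # compile your existing MPI code with the wrapper compiler
cat > hostfile <<'EOF'
laptop1 slots=2
laptop2 slots=2
laptop3 slots=2
EOF
mpirun --hostfile hostfile -np 6 ./mysim   # the binary must exist at the same path on every node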
I also liked PelicanHPC when I used it a few years back. I was more successful getting it to work than with Rocks, but it is much less popular.
http://pareto.uab.es/mcreel/PelicanHPC/
Just getting a cluster up and running is actually not very difficult for the situation you're describing. Getting everything installed and configured just how you want it, though, can be quite challenging. Good luck!

LINUX: Upgrading a production machine

Our production machines are running Debian Etch. Now that Lenny has finally been released, the day will come when we need to upgrade these systems. How can I do this with minimal risk? Are there any premises, preparations, or fall-back scenarios, and do I need a plan B in case something goes wrong? Besides the binary packages handled by the Debian package system, there are a couple of compiled applications running on the machines.
Personally, I wouldn't upgrade the OS on an important server. OS upgrades always have the potential for subtle bugs, whether it's Windows, Linux, or anything else. Debian has gotten better than it used to be in this regard; dist-upgrade doesn't hose the machine nearly as often as it used to back in the day. But for production machines there is no point in risking it.
Set up new servers with a fresh OS and application deployment and swap them in as needs arise. There is no need to hurry to replace Etch companywide in one go. It will be supported with security updates for a while yet.
Having just gone through that transition for some dev boxes, I wanted to point out that you'll probably want to recompile any custom libraries that you link against. Lenny uses GCC 4.3, whereas Etch uses 4.1, and the output from one compiler isn't very compatible with the other. You may need to install the gcc-4.1 package to do things like compile custom kernel modules.
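For reference, the workaround is just a parallel compiler install; a sketch, assuming a Makefile that honors CC:
sudo apt-get install gcc-4.1
make CC=gcc-4.1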
If you're using 3rd party tools that have a plugin interface, you may have challenges there. I've been having troubles getting Matlab plugins (mex files) to work.
I'd suggest starting with a test system. After hammering it for a while and verifying that everything's working, switch it to be a production box.
Most people don't update production servers for exactly this reason - if it's working correctly, you wouldn't update unless you had a compelling reason.
Assuming you have a dev box built similarly to the production machine, you can simulate the update on the dev box.
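The rehearsal itself is just the standard Etch-to-Lenny procedure run against the clone; a sketch (take backups regardless):
sudo apt-get update && sudo apt-get upgrade   # bring Etch fully up to date first
sudo sed -i 's/etch/lenny/g' /etc/apt/sources.list   # repoint apt at Lenny
sudo apt-get update
sudo apt-get dist-upgrade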
