I need to build a small cluster for my research. It's a pretty humble setup: I'd like to build it out of the three other laptops I have at home.
I'm writing in C++. My MPI code is ready; I can test it under Visual Studio 2010 and it works fine. Now I want to see the real thing.
I want to do it for free (I'm a student). I have Ubuntu installed and I wonder:
whether I can build a cluster using Ubuntu. I couldn't find a clear answer to that on the net.
if not, whether there is a free Linux distro that I can use to build a cluster.
I also wonder whether I have to install Ubuntu (or whatever distro the host machine runs) on all the other laptops. Will a different Linux distribution (like openSUSE) work with the one on the host machine, or do they all have to be the same distro?
Thank you all.
In principle, any Linux distro will work in the cluster, and also in principle they can all be different distros. In practice, it'll be enormously easier with everything the same, and if you pick a distribution that already has a lot of your tools set up for you, it'll go much more quickly.
To get started, something like the Bootable Cluster CD should be fairly easy -- I've not used it myself yet, but I know some people who have. It'll let you boot up a small cluster without overwriting anything on the host computer, which lets you get started very easily. Other cluster software distributions include Rocks and OSCAR. A technical discussion on building a cluster can be found here.
I also liked PelicanHPC when I used it a few years back. I was more successful getting it to work than I was with Rocks, but it is much less popular.
http://pareto.uab.es/mcreel/PelicanHPC/
Just getting a cluster up and running is actually not very difficult in the situation you're describing. Getting everything installed and configured just how you want it, though, can be quite challenging. Good luck!
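If you do go the plain-Ubuntu route rather than one of the ready-made cluster distributions, the manual setup is roughly as follows. This is only a sketch: the package names are Ubuntu's Open MPI packages, and the hostnames, user name and slot counts are placeholders for your own machines.

    # On every laptop: install SSH and an MPI implementation (Open MPI here)
    sudo apt-get install openssh-server openmpi-bin libopenmpi-dev

    # On the head node: set up passwordless SSH to the other machines
    ssh-keygen
    ssh-copy-id user@laptop1
    ssh-copy-id user@laptop2
    ssh-copy-id user@laptop3

    # Describe the cluster in a hostfile
    cat > hosts <<EOF
    laptop1 slots=2
    laptop2 slots=2
    laptop3 slots=2
    EOF

    # Compile and launch; the binary must exist at the same path on every
    # node (copy it with scp, or share the directory over NFS)
    mpic++ -o my_program my_program.cpp
    mpirun -np 6 --hostfile hosts ./my_program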
There are many websites providing cloud coding environments, such as Cloud9 and repl.it. They must use server virtualisation technologies. For example, Cloud9's workspaces are powered by Docker Ubuntu containers; every workspace is a fully self-contained VM (see details).
I would like to know if there are other technologies for building such sandboxed environments. For example, RunKit seems to have a lightweight solution:
It runs a completely standard copy of Node.js on a virtual server created just for you. Every one of npm's 300,000+ packages are pre-installed, so try it out
Does anyone know how RunKit achieves this?
You can see more in "Tonic is now RunKit - A Part of Stripe!" (see discussion):
we attacked the problem of time traveling debugging not at the application level, but directly on the OS by using the bleeding edge virtualization tools of CRIU on top of Docker.
The details are in "Time Traveling in Node.js Notebooks"
we were able to take a different approach thanks to an ambitious open source project called CRIU (which stands for checkpoint and restore in user space).
The name says it all. CRIU aims to give you the same checkpointing capability for a process tree that virtual machines give you for an entire computer.
This is no small task: CRIU incorporates a lot of lessons learned from earlier attempts at similar functionality, and years of discussion and work with the Linux kernel team. The most common use case of CRIU is to allow migrating containers from one computer to another
The next step was to get CRIU working well with Docker
Part of that setup is being open-sourced, as mentioned in this Hacker News thread.
It uses linux containers, currently powered by Docker.
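To get a feel for what CRIU-on-Docker looks like in practice, you can play with Docker's experimental checkpoint/restore support, which calls out to CRIU under the hood. This is only a sketch of the general technique, not RunKit's actual setup, and the exact commands depend on your Docker version and on having CRIU installed and the daemon started with experimental features enabled:

    # A long-running container that prints a counter once a second
    docker run -d --name demo busybox sh -c \
        'i=0; while true; do i=$((i+1)); echo $i; sleep 1; done'

    # Freeze the whole process tree to disk (the container stops by default)
    docker checkpoint create demo cp1

    # Later: resume the container exactly where its counter left off
    docker start --checkpoint cp1 demo
    docker logs demo | tail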
I am writing some web services in Go on a Linux machine, so the Go executable needs to keep running.
What is the best way to do this?
Should I set up the Go executable as a service on the Linux machine?
Many thanks.
The short answer: use the system service manager if you want to keep things super-simple. CentOS currently uses Upstart, which is well documented and can handle most Go applications without too many problems. There are some good examples of Upstart + Go here and here.
The long answer: personal preference. Supervisord, Monit and Circus are good options as well, but bring differing levels of complexity. I personally like supervisord, since it has a fairly clear syntax and a good heap of options.
There's also a good run-down here: http://tech.cueup.com/blog/2013/03/08/running-daemons/
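To make that concrete, here's a rough sketch of what the two approaches might look like for a Go binary. The job name, user and paths are placeholders, and the exact stanzas/options available vary between Upstart and supervisord versions.

An Upstart job, e.g. /etc/init/mygoapp.conf:

    description "my Go web service"
    start on runlevel [2345]
    stop on runlevel [016]
    # restart the process if it dies
    respawn
    exec /usr/local/bin/mygoapp

You then control it with "sudo start mygoapp" and "sudo stop mygoapp".

Roughly the same thing under supervisord, e.g. in /etc/supervisord.conf:

    [program:mygoapp]
    command=/usr/local/bin/mygoapp
    user=goapp
    autorestart=true
    stdout_logfile=/var/log/mygoapp.log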
I want to sharpen my skills in terms of GNU/Linux and get a better understanding of how servers work, so I thought I'd set up an Apache web server with FTP, SSH, SVN etc. Since I use Adobe products every day in my line of work, installing a Linux distro straight onto my machine isn't an option. Yes, I could probably dual-boot Linux and Vista, but since I am a novice I don't want to risk something happening to my machine.
So I thought I'd start by installing a distro with a pretty steep learning curve and a lot of manual configuration, to maximize my exposure to command-line operations and such. The goal is to get it working and to have a safe setup.
So, before I write a wall of text:
I was curious what pros and cons there are, in terms of security, to a setup like this (i.e. running everything inside a virtual machine)?
Thank you!
None; there is no difference between running the *nix system in a VM and on physical hardware if you give it access to the same resources.
In the case of a VM, if you don't want it to have access to the host's hard drive, then don't attach the physical hard drive to it. The same goes for the network and any other resource.
I am running a bunch of virtual servers on my single server. I'm using OpenVZ but the basic pros and cons are the same.
Pros
I enjoy the fact that I get to experiment a lot. I can install stuff, screw things up royally, and then just wipe out the entire virtual server and start over. It beats re-installing the OS in real-life. I can also easily compare and contrast competing products this way. I'm also able to monitor the running of the system and understand how it works in a more intimate way.
Cons
Resource consumption, which is the reason I chose OpenVZ: it doesn't consume nearly as much as VirtualBox does.
Security wise, you need to take the same precautions as you do for a real system. The difference is that if your machine is compromised, you can just wipe it out easily.
We have a business application that basically runs on an OS-independent stack (Tomcat + Java + MySQL), but we have always run it on Red Hat or CentOS.
There is a customer insisting on running it on OpenSolaris, for his own reasons (an expensive everything-is-included support agreement with Sun).
How painful can such a migration be? We have a lot of configuration files and support scripts, such as:
apache
apache/tomcat connector
email interaction with postfix
customized service start/stop
a couple of cron jobs (backup, monitoring)
different users and permissions (java, mysql, email, backup...)
Our build process outputs a .tar.gz file with our business code plus some shell scripts that edit all the OS configuration files.
Any previous experience with this?
The biggest issues will be with the non-POSIX (non-standard) options you've used with the GNU tools on Linux that the standard Solaris commands don't support. You might decide that porting the relevant tools from the GNU set is simpler than modifying your scripts. If you've laced the code with absolute pathnames for commands (/usr/bin/ls) but decide to use the GNU versions instead, you've got to find a way of fixing those.
I'd be extremely cautious about replacing the OpenSolaris versions with the GNU versions; you don't know when you would break something that the system relies on. So you would put the GNU commands in a separate directory - probably not /usr/local, because that is for the machine owners to populate, not you as an application-monger - and arrange for that directory to be used in place of the system commands. (Note: on Solaris, /bin is a symlink to /usr/bin; I assume the same is true of OpenSolaris.)
AFAIK, Postfix is not standard on OpenSolaris, so you'd have to ensure that it gets installed, too.
All of this is doable - there's nothing insuperable. But a lot depends on your code base.
We run both, though we don't use OpenSolaris for web servers.
The good:
OpenSolaris comes with the GNU tools, so get your path right and you're OK.
Most things just build and run just fine.
The not so good:
Make sure that you've installed and are using bash. Otherwise all those bashisms you didn't realise you were using will bite you.
Make sure that you're not using hard coded paths to /usr/bin or /bin. These tools are not the GNU ones and therefore have different options. Use /usr/gnu as mentioned above.
You don't have the huge number of packages that you can install straight off as you do with yum or apt. Yes, you have a package manager, it's just not quite so well populated.
As a result you probably will be installing packages by hand. They should install, it's just a bit more work for your system admins.
Are you sure that OpenSolaris runs well on your hardware? It's worth a check. You might find that some of the hardware drivers aren't as well tested.
Otherwise we find OpenSolaris to be nice. It has a lot of good ideas.
Have you looked at Nexenta (http://www.nexenta.org/os)? It's the OpenSolaris kernel with an Ubuntu userland.
OpenSolaris already includes all the GNU utilities; just point your scripts at /usr/gnu/bin.
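For example, something along these lines near the top of your support scripts should do it (a sketch; adjust the path if the GNU userland lives elsewhere on the target box):

    # Prefer the GNU userland over the default Solaris tools
    PATH=/usr/gnu/bin:$PATH
    export PATH

    # Quick sanity check: these should now report GNU versions
    ls --version | head -n 1
    tar --version | head -n 1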
Installing Postfix shouldn't present any problems, and Apache/MySQL are present in a base OpenSolaris install (in truth, the Cool Web Stack stuff makes it about as easy to administer as WAMP/Instant Rails). Beyond which, SMF manifests (SMF is a replacement for rc scripts sort of like OSX's launchd, though you can still use regular init scripts) may make your life easier, since specifying dependencies and run order is somewhat nicer (it'll recursively start/stop all dependent services also).
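If you do wrap your start/stop scripts in SMF manifests, the day-to-day management looks roughly like this (the service name and manifest path here are made-up examples):

    # Import a manifest you've written for your application
    svccfg import /var/svc/manifest/site/mybizapp.xml

    # Start, stop and restart the service
    svcadm enable  svc:/site/mybizapp:default
    svcadm disable svc:/site/mybizapp:default
    svcadm restart svc:/site/mybizapp:default

    # Explain why something isn't running
    svcs -xv svc:/site/mybizapp:default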
Tomcat certainly works, though everybody I know on OpenSolaris uses GlassFish. YMMV, but deploying a .war is pretty much the same everywhere.
It may not be a bad first step to deploy into an LX branded zone (think FreeBSD jails or Linux-VServer for a comparison), as LX branded zones can run Linux binaries and are explicitly CentOS/RHEL based.
Other than that, OpenSolaris has been able to act as a Xen dom0 since build 77 or so, and putting CentOS/RHEL into a domU is dead simple, if that's an option.
You also get all the Solaris goodies along with it (DTrace, ZFS, network virtualization [via CrossBow], etc). Who knows? You may even like it! Java is Java, so that shouldn't pose any issues.
You'll probably have to rewrite a big part of your scripts (user creation, service launch), as these things are done differently on CentOS and OpenSolaris.
As previously written, ask your customer to install the GNU tools so you'll have less rewriting to do in your scripts.
OS configuration files may also not be in the same format; you'll need to check.
Your .tar.gz file should be extractable without trouble, but again you'll have fewer surprises if you use GNU tar; the tar shipped with some Unix OSes has limitations.
Any previous experience with this?
(Maybe a little off-topic.)
We package and distribute our Java/Tomcat/PostgreSQL/Unix application with all the binaries referenced in our scripts. This implies having one build system for each OS we support, and it implies that we support not only our application but also the external binaries, but in the end we get no bad surprises at customer sites.
We also ask the customers to do all the root operations (user creation, directory creation, sendmail configuration, system tuning) before we install the application.
We have written startup/shutdown scripts for all the supported OSes, and installing them is the only thing we do as root on the customer's machine.
Besides the fact that you're a troll, somebody already said above that (Open)Solaris has:
- ZFS
- DTrace
We can understand that you're afraid of losing your RHCE job, but you've just proved to me once again that my decision as an employer to ignore all certifications when interviewing people was a good one. It seems that a large percentage of such people (especially in the Microsoft world) are not so... open-minded, to put it nicely.
Regards,
Alex
Our production machines are running Debian Etch. Now that Lenny has finally been released, the day will come when we need to upgrade these systems. How can I do this with minimal risk? Are there any prerequisites, preparations or fall-back scenarios, and do I need a plan B in case something goes wrong? Besides the binary packages handled by the Debian package system, there are a couple of locally compiled applications running on the machines.
Personally I wouldn't upgrade any OS on an important server. OS upgrades always have the potential for subtle bugs, whether it's Windows, Linux or anything else. Debian has got better than it used to be in this regard; dist-upgrade doesn't hose the machine nearly as often as it used to back in the day. But for production machines there is no point in risking it.
Set up new servers with a fresh OS and application deployment and swap them in as needs arise. There is no need to hurry to replace Etch companywide in one go. It will be supported with security updates for a while yet.
Having just gone through that transition for some dev boxes, I wanted to point out that you'll probably want to recompile any custom libraries that you'll be linking against. Lenny uses GCC 4.3, whereas Etch uses 4.1. The output from either compiler isn't very compatible with the other. You may need to install the gcc-4.1 package to do things like compile custom kernel modules.
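If you hit that, the older compiler is still packaged in Lenny, so a rough sketch (package names as they appear in the Lenny repositories; run as root):

    # See which compiler is the default after the upgrade
    gcc --version

    # Install GCC 4.1 alongside the default 4.3
    apt-get install gcc-4.1 g++-4.1

    # Point builds that need the old compiler at it explicitly,
    # e.g. for an out-of-tree kernel module
    make CC=gcc-4.1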
If you're using 3rd party tools that have a plugin interface, you may have challenges there. I've been having troubles getting Matlab plugins (mex files) to work.
I'd suggest starting with a test system. After hammering it for a while and verifying that everything's working, switch it to be a production box.
Most people don't update production servers for exactly this reason - if it's working correctly, you wouldn't update unless you had a compelling reason.
Assuming you have a dev box built similarly to the production machine, you can simulate the update on the dev box.
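The dry run itself is just the standard Etch-to-Lenny procedure from the release notes, performed on the clone first. Roughly (a sketch, run as root, and read the official release notes before touching the real box):

    # Keep a backup of the current package sources, then switch them to lenny
    cp /etc/apt/sources.list /etc/apt/sources.list.etch
    sed -i 's/etch/lenny/g' /etc/apt/sources.list

    apt-get update
    # Upgrade the packaging tools first, then the rest of the system
    apt-get install apt dpkg aptitude
    apt-get dist-upgrade

    # Afterwards: reboot into the new kernel and re-test the locally
    # compiled applications before repeating this on production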