How to keep a Go web service running on Linux

I am writing some web services in Go on a Linux machine, so the Go executable needs to keep running.
What is the best way to do this?
Should I set up the Go executable as a service on the Linux machine?
Many thanks

The short answer: use the system's service manager if you want to keep things super simple. CentOS currently uses Upstart, which is well documented and can handle most Go applications without too many problems. There are some good examples of Upstart + Go here and here.
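For a Go binary that runs in the foreground, an Upstart job can be very small. A minimal sketch (the job name and binary path are made up for illustration):
    # /etc/init/mywebservice.conf
    description "my Go web service"
    start on runlevel [2345]
    stop on runlevel [016]
    respawn
    exec /usr/local/bin/mywebservice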
The long answer: personal preference. Supervisord, Monit and Circus are good options as well, but bring differing levels of complexity. I personally like supervisord, since it has a fairly clear syntax and a good heap of options.
There's also a good run-down here: http://tech.cueup.com/blog/2013/03/08/running-daemons/
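If you go the supervisord route instead, the equivalent program entry is similarly short; a minimal sketch with made-up names and paths:
    ; /etc/supervisor/conf.d/mywebservice.conf
    [program:mywebservice]
    command=/usr/local/bin/mywebservice
    directory=/srv/mywebservice
    autostart=true
    autorestart=true
    stdout_logfile=/var/log/mywebservice.out.log
    stderr_logfile=/var/log/mywebservice.err.log
After dropping a file like that in place, supervisorctl reread followed by supervisorctl update should pick it up.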

Related

Linux systemd Services - Simple vs Forking - Downsides?

A lot of programs you download can be run in a blocking manner or in the background (usually via start/stop/etc. commands). Some good examples are HAProxy and Spring Boot apps built to run as Linux services; both can be run either way.
In systemd unit files you can use the "forking" type, which lets you map to start/stop/etc. commands for managing a program that runs in the background as a daemon. Alternatively, you can just use the "simple" type and run the app itself in a blocking manner.
Is there any particular reason to prefer "forking" where it is an option? Having done both on numerous things, "simple" seems lighter on config and more obvious in terms of usage.
This is answered in https://www.freedesktop.org/software/systemd/man/daemon.html, in the "SysV Daemons" section. There are mostly only downsides to choosing the "forking" method, because most software out there does not perform the "15 steps" correctly, or at all; in particular, steps 12 and 14 are seldom implemented correctly.
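For comparison, a "simple"-type unit for a program that stays in the foreground needs very little. A minimal sketch (the unit name and binary path are invented):
    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My app (runs in the foreground)
    After=network.target

    [Service]
    Type=simple
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
With Type=forking you would additionally need the program to detach itself correctly and, usually, a PIDFile= line, which is exactly the part of those "15 steps" most software gets wrong.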

Is using 'forever' still the suggested approach to run Node.js as a Linux/Unix service?

In the past couple of years Node.js has become a major player in the server landscape, and I find it hard to believe that there is no decent way to run Node.js as a service on a Linux box. On Windows we have iisnode, but for non-Windows environments the forever package is suggested as the way to go, rather than a real solution.
Is there maybe a servicized version of nodejs out there that I could not locate?
There isn't a "servicized" version of Node.js available in the sense you are thinking. Keeping your Node application running (for example in the event of a fatal error) is up to you entirely.
As suggested in the first comment, this is fairly subjective, but really there are two big packages (and one or two alternative methods) for making a service out of your Node application. As you've mentioned, forever is a popular choice. If you've never taken a look at pm2, I suggest doing so, as it offers some services that forever does not. Alternatively, you could search for information on supervisord, which I've had success with in the past. Finally, daemonizing Node with upstart is something to look at if the others don't fit well for you.
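For what it's worth, basic usage of both tools is only a few commands. A sketch (app.js stands in for your own entry point, and flags may vary between versions):
    npm install -g forever
    forever start app.js          # run app.js as a background process
    forever list                  # show what forever is managing
    forever stop app.js

    npm install -g pm2
    pm2 start app.js --name my-api
    pm2 startup                   # prints a command to hook pm2 into your init system
    pm2 save                      # remember the current process list across reboots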

How do I run a server as a daemon on Linux?

I created a server using C++ and want to run it as a daemon on Linux.
How do I do this?
Thanks in advance...
There are many ways to daemonize a process. It is quite common for server implementations to provide a switch that daemonizes them at startup.
If you do not wish to implement such a feature, command-line tools exist, such as this one: http://software.clapper.org/daemonize/.
I don't mean to sound condescending, but did you try a Google search? There is a heap of info on this out there; the first link I found: http://www.enderunix.org/docs/eng/daemon.php
You can use dup2() on Linux to make the FDs a bit easier to handle.
You may also want to look into using something like inetd to manage your server.
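If you would rather daemonize in code than rely on an external tool, the classic double-fork sequence is short. A minimal sketch assuming POSIX; error handling, signal handling and pid-file writing are left out:
    // daemonize.cpp -- sketch of the classic double-fork
    #include <cstdlib>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void daemonize() {
        if (fork() != 0) std::exit(0);  // parent (or fork error) exits; child goes on
        setsid();                       // new session: detach from the controlling tty
        if (fork() != 0) std::exit(0);  // second fork: can never re-acquire a tty
        umask(0);                       // reset the file-creation mask
        chdir("/");                     // don't keep any directory busy
        int devnull = open("/dev/null", O_RDWR);
        dup2(devnull, STDIN_FILENO);    // point the standard streams at /dev/null
        dup2(devnull, STDOUT_FILENO);
        dup2(devnull, STDERR_FILENO);
        if (devnull > STDERR_FILENO) close(devnull);
    }

    int main() {
        daemonize();
        // ... server loop goes here ...
    }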

Free Linux Cluster Build for Small-Scale Research

I need to build a small cluster for my research. It's pretty humble, and I'd like to build it from the three other laptops I have at home.
I'm writing in C++. My MPI code is ready; I can simulate it with Visual Studio 2010 and it works fine. Now I want to see the real thing.
I want to do this for free (I'm a student). I have Ubuntu installed, and I wonder:
whether I can build a cluster using Ubuntu. I couldn't find a clear answer to that on the net;
if not, whether there is a free Linux distro that I can use to build the cluster.
I also wonder whether I have to install Ubuntu (or whatever distro is on the host machine) on all the other laptops. Will a different Linux distribution (like openSUSE) work alongside the one on the host machine, or do they all have to be the same distro?
Thank you all.
In principle, any Linux distro will work in the cluster, and also in principle, they can all be different distros. In practice, it'll be enormously easier with everything the same, and if you pick a distribution that already has a lot of your tools set up for you, it'll go much more quickly.
To get started, something like the Bootable Cluster CD should be fairly easy -- I've not used it myself yet, but I know some who have. It'll let you boot up a small cluster without overwriting anything on the host computer, which lets you get started very easily. Other distributions of software for clusters include Rocks and Oscar. A technical discussion on building a cluster can be found here.
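Mechanically, running the existing MPI code across the laptops mostly comes down to installing the same MPI everywhere, setting up passwordless ssh between the machines, and putting the binary at the same path on every node, then launching with a hostfile. A sketch assuming Open MPI from the Ubuntu repositories and made-up hostnames:
    # On every laptop (Ubuntu):
    sudo apt-get install openmpi-bin libopenmpi-dev

    # hosts -- one line per machine, with how many processes each may run:
    laptop1 slots=2
    laptop2 slots=2
    laptop3 slots=2

    # On the head node, compile and launch across all machines:
    mpicxx my_simulation.cpp -o my_simulation
    mpirun -np 6 --hostfile hosts ./my_simulation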
I also liked PelicanHPC when I used it a few years back. I was more successful getting it to work than with Rocks, but it is much less popular.
http://pareto.uab.es/mcreel/PelicanHPC/
Just getting a cluster up and running is actually not very difficult for the situation you're describing. Getting everything installed and configured just how you want it, though, can be quite challenging. Good luck!

Use "apt" or compile from scratch for a web service?

For the first time, I am writing a web service that will call upon external programs to process requests in batch. The front-end will accept file uploads and then place them in a queue. The workers on the backend will take that file, run it through ffmpeg and the rest of my pipeline, and send an email when the process is complete.
I have my backend process working on my computer (Ubuntu 10.04). The question is: should I try to re-create that pipeline using binaries that I've compiled from scratch? Or is it okay to use apt when configuring in The Real World?
Not all hosting services use Ubuntu, and not all give me root access. (I haven't chosen a host yet.) However, they will let me upload binaries to execute, and many give me shell access with gcc.
Usually this would be a no-brainer and I'd compile it all from scratch. But doing so, not to mention trying to figure out how to create a platform-independent .tar.gz binary, will be quite a task which ultimately doesn't really help me ship my product.
Do you have any thoughts on the best way to set up my stack so that I'm not tied to a specific hosting provider? Should I try creating my own .deb, which contains Ubuntu's version of ffmpeg (and other tools) with the configurations I need?
Short of a setup where I manage my own servers/VMs (which may very well be what I have to do), how might I accomplish this?
The question is: should I try to re-create that pipeline using binaries that I've compiled from scratch? Or is it okay to use apt when configuring in The Real World?
It is the reverse: it is not okay to deploy unpackaged software in The Real World, IMHO.
and not all give me root access
How would you deploy a .deb without root access? Chroot jails?
But doing so - not to mention trying to figure out how to create a platform-independent .tar.gz binary - will be quite a task which ultimately doesn't really help me ship my product.
+1: you answer your own question. Don't meddle unless you have to.
Do you have any thoughts on the best way to set up my stack so that I'm not tied to a specific hosting provider?
Only depend on well-packaged standard tools (such as ffmpeg); otherwise, include them in your own deployment. This problem hasn't been too hard to solve for tens of thousands of Linux applications over the decades, so it will probably be feasible for you too.
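If a host gives you shell access but no root, one low-tech way to keep the deployment portable is to ship the external binaries inside your own tree and point at them from a small wrapper. A sketch with made-up paths, assuming the bundled ffmpeg and its shared libraries live under ./bin and ./lib:
    #!/bin/sh
    # run-pipeline.sh -- prefer the copies of ffmpeg/tools bundled with the app
    HERE="$(cd "$(dirname "$0")" && pwd)"
    export PATH="$HERE/bin:$PATH"
    export LD_LIBRARY_PATH="$HERE/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    exec "$HERE/bin/ffmpeg" "$@"
Where the host does give you apt and root, just use the packaged version; the wrapper only matters on the hosts that don't.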
Out of the box:
Look at rightscale and other cloud providers/agents that have specialized images/tool chains especially for video encoding.
A 'regular' VPS provider (with Xen or Virtuozzo) will not normally be happy with these kinds of workloads, but EC2, Rackspace and their lot will be absolutely fine with them.
In general, I wouldn't expect a cloud infrastructure provider that doesn't grant root access to allow computationally intensive workloads. $0.02

Resources