Deploy Haskell code that uses the Snap Framework

What's your experience with deploying Haskell code for production in Snap in a stable fashion?
If compilation fails on the server, I would like to abort the deployment; if it succeeds, I would like it to shut down the running snap-server and start the new version in its place.
I know there are plenty of ways, everything from rsync to git hooks (git pull was a nightmare), but I would like to hear about your experiences.

Where I work, we use Happstack and deploy on Ubuntu Linux. We actually debianize the web app and all of its dependencies, and then build them in the autobuilder.
To actually install on the server, we just run apt-get update && apt-get install webapp-production
The advantage of this system is that it makes it easy for all developers to develop against the same versions of the dependencies. And you know that all the source code is checked in properly and can be rebuilt anywhere, not just on one particular machine. Additionally, it provides a mechanism for patching libraries from Hackage when needed.
The downside is that apt-get and cabal-install do not get along well. You either have to build everything via apt-get or do everything via cabal-install.
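For illustration, the server-side half of this flow looks roughly like the following sketch; the repository URL is a hypothetical stand-in for your internal apt server, and webapp-production is the package name from above:

    # One-time setup: point the server at the internal apt repository
    # (hypothetical URL).
    echo "deb http://apt.internal.example.com/ubuntu stable main" \
      | sudo tee /etc/apt/sources.list.d/internal.list

    # Each deploy then reduces to pulling the latest autobuilt package:
    sudo apt-get update && sudo apt-get install webapp-production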

Here's what we do. First off, our servers all run the same version of Ubuntu, as do our development machines. We write code, test, etc. in whatever OS we care to use, and when we're ready to push we build on the devel machine(s). As long as that compiles cleanly, we stop half of the frontend servers, rsync over the resources directory and a new copy of the binary, and then use scripts to start everything back up. Then we repeat for the other half.
I think you should question the logic of maintaining a full toolchain on your frontend server(s) when you can easily transfer just the binary and static assets, provided that the external library versions (database, image, etc.) match the build environment. Heck, you could just use a VirtualBox instance to do the final compile, again, as long as the OS release and libraries match.
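To make that concrete, here is a rough sketch of such a rolling deploy; the host names, paths, and init script are all hypothetical placeholders:

    #!/bin/sh
    # Rolling deploy sketch: stop half the fleet, push the new binary and
    # resources, restart, then repeat for the other half. Host names,
    # paths, and the init script are hypothetical.
    set -e

    HALF_A="web1 web2"
    HALF_B="web3 web4"

    deploy_half() {
      for host in $1; do
        ssh "$host" 'sudo /etc/init.d/myapp stop'
        rsync -az resources/ "$host:/srv/myapp/resources/"
        rsync -az dist/myapp "$host:/srv/myapp/myapp"
        ssh "$host" 'sudo /etc/init.d/myapp start'
      done
    }

    deploy_half "$HALF_A"   # the other half keeps serving traffic
    deploy_half "$HALF_B"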

Related

NodeJS app for end-user distribution

I'm looking for the proper way to distribute/deploy a node.js app that would run as a small webserver on the user's machine.
Is there a stub method, install script, or "install wizard" that would download all node_modules dependencies, download the latest Node.js binary, set up the environment, etc., or do I have to distribute it in bulk with everything packed? Is there any guide for that purpose?
Edited:
You could install node and npm, download your dependencies by running npm install on the command line (first declare them within your package.json), and only then can users run your script. This is how you do development in Node.js, or deploy to a development server. See using npm. You could automate that with a shell script if that is what you are after.
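A minimal sketch of such a script, assuming node and npm are already installed, dependencies are declared in package.json, and a hypothetical entry point of server.js:

    #!/bin/sh
    # Development-style setup sketch; server.js is a hypothetical entry point.
    set -e
    npm install       # fetches the dependencies listed in package.json
    node server.js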
However, when distributing programs to end-users that might not be the best approach. Linux users are used to a package (.deb for instance) and Windows users are used to an .exe or a setup wizard.
That is why I recommended the tools below. I also assumed you were targeting Windows, as this is less of a problem in unix-like environments.
If you want a single file (.exe), pkg and nexe are made for that purpose. These Node.js tools are used by the developer to compile JavaScript code into a single executable binary that is convenient for end-users and Windows deployment. The resulting .exe file is very light and does not require node to be installed on the end-users' computers.
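For example, building a single executable with pkg can be a one-liner; the entry point and target triple here are illustrative assumptions:

    # Illustrative pkg invocation; check pkg's docs for the targets
    # supported by your Node.js version.
    npm install -g pkg
    pkg index.js --targets node14-win-x64 --output myapp.exe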
Electron along with electron-packager can produce setup wizards, but it installs a lot of files even for the smallest program. Your program will include all of Node and Chromium, which is why the resulting installs are so heavy.
NSIS can also create a setup wizard, it is simple and does common stuff well (copying files, running shell scripts).
Original answer:
The short answer is: not really.
You have to keep in mind that JavaScript is, and has always been, an interpreted language, so until recently it was never compiled to a binary the way other languages are. Some exploration has been going on, but essentially you won't get a "good practice" answer.
The long answer is, maybe, for some limited use cases:
There is the fresh new pkg that does exactly this, and it looks promising.
There has been nexe for a while; it works great for some use cases (maybe yours). Native/compiled modules are still an issue, however.
Electron might work for a full blown app with a significant user interface, but it is not light or compact.
You could always use browserify to concatenate and uglify all your code together with the modules you use, and then make an installer with something like NSIS to set up node and your script. Native modules would still be a problem, however.

Testing elixir release build with exrm

I am building a Phoenix application with exrm.
Good practice suggests that I should run my tests against the same binary I'll be pushing to production.
Exrm gives me the ability to deploy Phoenix on machines that don't have Erlang or Elixir installed, which makes pulling Docker images faster.
Is there a way to start mix test against binary built by exrm?
It should be noted that a release isn't a single binary file. Sure, it is packaged into a tarball, but that is just to ease deployment; what it contains is effectively the .beam files generated with MIX_ENV=prod mix compile, plus ERTS (if you are bundling it), the Erlang/Elixir .beam files, and the boot scripts/config files for starting the application, etc.
So, in short, your code will behave identically in a release to how it would behave when running with MIX_ENV=prod (assuming you ran MIX_ENV=prod mix release). The only practical difference is whether or not you've correctly configured your application for being packaged in a release, and testing this boils down to doing a test deployment to /tmp/<app> and booting it to make sure you didn't forget to add dependencies to applications in mix.exs.
The other element you'd need to test is hot upgrades/downgrades, if you are using them. In that case you need to do test deploys locally to make sure the upgrade/downgrade is applied as expected, since exrm generates default .appup files for you, and these may not always do the correct thing, or everything you need them to do; in that case you have to edit them as appropriate. I do this by deploying to /tmp/<app> and starting up the old version, then deploying the upgrade tarball to /tmp/<app>/releases/<new version>/<app>.tar.gz, running /tmp/<app>/bin/<app> upgrade <version>, and testing that the application was upgraded as expected; then I run the downgrade command for the previous version to see if it rolls back properly. The nature of the testing varies depending on the code changes you've made, but that's the gist of it.
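Roughly, that local test loop looks like the following sketch; "myapp", the version numbers, and the paths are placeholders:

    # Release test sketch; app name and versions are placeholders.
    MIX_ENV=prod mix release

    # Fresh deploy of the old version:
    mkdir -p /tmp/myapp
    tar -xzf rel/myapp/releases/1.0.0/myapp.tar.gz -C /tmp/myapp
    /tmp/myapp/bin/myapp start

    # Hot upgrade to a newly built 1.0.1 tarball:
    mkdir -p /tmp/myapp/releases/1.0.1
    cp rel/myapp/releases/1.0.1/myapp.tar.gz /tmp/myapp/releases/1.0.1/
    /tmp/myapp/bin/myapp upgrade 1.0.1

    # Roll back to verify the downgrade path:
    /tmp/myapp/bin/myapp downgrade 1.0.0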
Hopefully that helps answer your question!

Provide Node.JS webapp "key in hand"

I am building a simple Node.JS application for a client. The webapp should be easy to deploy on each server instance (which run Red Hat EL 6.3), "key in hand" (turnkey).
What is the best way to package a Node.JS app? Basically, I need an "installer" or "package" to:
Install Node.JS
Install the dependencies (npm install)
Populate the application files (CSS, JS, HTML, etc.)
You should deliver a self-contained package. Please check out the great site The Twelve-Factor App, specifically the build, release, run section. There is a lot of hard-won wisdom from experienced operations engineers embodied in that site.
In your app's repo, write a script (shell, node, whatever) that can generate a distributable archive; a sketch follows the list of pitfalls below.
RPM and tar archives are the two most sensible choices for you. tar is more portable and simpler; RPM integrates nicely with an RPM-based distribution. I would recommend starting with tar if you have not done a lot of software packaging/management work, as RPM is significantly more complex than tar.
The tar archive should embed the node.js files within it. This will make your app easy to install and avoid sharing a system-wide node install, which would create artificial coupling. If you go the RPM route, you can specify node as a dependency in your RPM spec file (but you probably shouldn't; see below).
The archive should embed all of the npm dependencies as well. Don't run npm install at package install time. Consider using the npm shrinkwrap tool to manage your dependencies during development, but at deployment time they should be pre-bundled and ready to run.
Specifically, these are bad ideas you should avoid:
Do not download anything from the Internet during installation. This is brittle and slow, and can spring bad surprises on you, including security problems.
Do not build artifacts at install time that can be built at build time. So ship pre-built CSS files, requirejs-optimized files, pre-compiled binaries, etc.
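A bare-bones sketch of such a build script; the paths, entry point, and vendored node version are all illustrative assumptions:

    #!/bin/sh
    # Build-time packaging sketch: produces a self-contained tarball with
    # the app, its node_modules, and an embedded node runtime.
    set -e

    VERSION=$(git describe --tags --always)
    STAGE="build/myapp-$VERSION"

    rm -rf "$STAGE" && mkdir -p "$STAGE"

    npm install --production   # bundle dependencies at build time, not install time
    cp -R server.js package.json node_modules public "$STAGE/"

    # Embed a known-good node runtime rather than relying on the host's:
    cp -R vendor/node-v0.10.26-linux-x64 "$STAGE/node"

    tar -czf "build/myapp-$VERSION.tar.gz" -C build "myapp-$VERSION"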
As to whether your application RPM should list node.js as a dependency or embed node into the RPM, here are some points to consider.
Embed node.js into your RPM
Single .rpm file to distribute
Allows your application to tightly control the node version it uses (see below).
Higher reliability. The fact is your app is probably coupled fairly tightly to at least the minor version of node.js you develop on (0.8.x, for example) or even a patch release (>= 0.8.12 < 0.9, for example). It's best to let node.js decouple your app from the OS, but don't be fooled into thinking your app will work reliably on a different version of node.js without testing and adjustment. Most commonly these days there's just one app running on the OS, and the notion of sharing node between apps incorrectly values conservation of disk space over proper decoupling and operational independence of applications.
It's unclear whether there are any official/reliable pre-built RPMs out there in yumland that will "just work".
Specify node.js as a dependency of your RPM
Follows the general philosophy of OS package management (avoid duplication, conserve disk space, etc)
RPM provides capabilities beyond tar around inventory management, uninstallation, upgrades, etc. Since you are asking this question, you are probably not ready to address these properly yet, so you might want to start with tar and, once you have a solid understanding of that, consider RPM upgrade scripts, etc.
The "single file to distribute" advantage can quickly become untenable once your app starts using a database or three, supporting daemons for email, log aggregators, etc.

Deploy repository code to multiple machines at once

My question is: how do you deploy the same code from whatever [D]VCS you use onto multiple machines? Do you have an automated deployment system, and if so, what is it? Is it built in-house? Are there tools out there that can do this automatically? I am asking because I am pretty tired of updating up to 20 machines every time I make some modifications.
P.S.: This probably belongs on Server Fault, but I am asking here because I am thinking of writing my own custom-made deployment system.
Roll your own rpm/deb/whatever for your package, set up your own repo, and have your machines pull on a regular basis. It's really not that hard to do, and it's already built into your system, well tested, and loaded with features. You could use something like Func if you needed to push instead.
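The "pull on a regular basis" part can be as small as one cron entry per machine; this sketch assumes a deb-based system, and the package name and schedule are hypothetical:

    # /etc/cron.d/myapp-deploy (sketch; package name is hypothetical).
    # Every 15 minutes, pull whatever version the internal repo publishes.
    */15 * * * * root apt-get update -qq && apt-get install -y -qq myapp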
Depending on your situation, deploying straight from the version control system might not always be the best idea. You can only do so much by just updating files, and mixing deployment and development will probably make the development use of the version control system less free.
I see two alternatives that might be interesting.
Deploy from your continuous integration server: add a task that runs after every successful build, copies over files, and executes some remote commands (sketched after this list). I'm using this to deploy to a test server, and would find it too tricky to upgrade production that way.
Deploy using an existing package manager. You can set up your own apt (or equivalent) repository and package the updates as apt packages. Have your continuous build system build the apt packages, but let an admin decide if they should be pushed to the update server. I think this is the only safe solution for production machines.
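A sketch of the first alternative; the host, paths, and service name are placeholders:

    #!/bin/sh
    # CI post-build deploy sketch, run after every successful build.
    # Host, paths, and service name are placeholders; this suits a test
    # server rather than production.
    set -e
    rsync -az --delete build/ deploy@testserver:/srv/myapp/
    ssh deploy@testserver 'sudo service myapp restart'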
We use Capistrano for deployment & Puppet for maintaining the servers and avoiding the inevitable 'configuration drift' when many developers/engineers tinker with the package lists and configuration files.
Both of these programs are written in Ruby, but we use them for our PHP codebase stored in a git repository.
I use a combination of deb packages with puppet to deploy code and configure a bunch of machines.
In most projects I have been involved with, the final stage has always been a scripted rsync deployment to live, so multiple targets are built into this process.

Please recommend a way to deploy into a Linux box in a LAN environment

Have you struggled with Linux deployment before?
I need to deploy an application into a dedicated Linux box with no outside network access.
The configuration should be as simple as possible, robust for different configurations (missing libraries and build tools) and preferably automatic. Another difficulty I need to consider is that I need to connect to an Oracle database.
What would you recommend as the best way for deployment? I have some ideas, but not sure which is the best.
I can use Java
I will need to install a JDK, and that mostly solves everything
Another big problem is that the code we presently have in Java is poorly written and slow.
I'm not sure if I need to install Instantclient to connect to Oracle under Linux
I can use C (I do have the source code for a very well-written LGPL program)
And use dpkg to deploy
The Linux box is most likely an Ubuntu server, but I'm not sure which version is installed
I can't use apt-get, but I can copy all the packages I need
I know I can use dpkg -s to check which packages they are, but I'm really not sure whether I might be missing dependencies.
I guess I will need build-essential and pcap or the like
And use static linking
I configured it with ./configure LDFLAGS=-static with no errors and it works on my computer now
I have chrooted into this directory and run it without problems; does this mean it is okay?
I really need to test this on a new Linux box to make sure
And use Statifier
I browsed Stack Overflow and found this app, but haven't tried it out yet.
Seems like people have used it with mixed success.
And create a build environment and make
I have no confidence that this is going to work
Using C leaves some problems
But the program is incomplete; I have to process this data, preferably not in C.
I have to install Instantclient, which is difficult to deploy
I can use Perl
I can't use CPAN
I have already downloaded the libraries, so maybe I could just copy them onto the deployed machine; I am not sure how, or whether, this works
Perl is slow
I have to install Instantclient anyways
Please share your similar experience.
C with static linking solves a lot of the portability problems at the expense of a larger executable. To make sure that everything is truly statically linked and not secretly depending on any outside libraries, run ldd on your executable and make sure it isn't dynamically loading anything. Note that this won't be 100% portable among various Linux machines, because Oracle Instantclient has some dependencies on kernel versions, but it should work on any reasonably new kernel.
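The check itself is a one-liner; for a fully static binary, ldd on a glibc system typically reports:

    $ ldd ./myapp
            not a dynamic executable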
Edit: If the box has LAN access and just no internet access, why not run your own apt repository on the local network? You could even create a .deb for your application and put it on the same server; then on that machine you just need to execute apt-get install myApplication and it will pull down your app and any not-yet-installed dependencies as well. Setting up an apt mirror is actually pretty easy, and this would be pretty slick. If network access is missing altogether, you can still create an install DVD that has all the debs, including your app, and set up apt-get to pull from there.
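For reference, a minimal flat apt repository can be built with dpkg-scanpackages (from dpkg-dev); the paths, host name, and package name here are illustrative:

    # On the repo server: collect the .debs and generate the package index.
    cd /var/www/repo
    dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

    # On the target box: point apt at the LAN repo and install.
    # (Depending on your apt version you may need to sign the repo or
    # mark it as trusted.)
    echo "deb http://repo.lan/repo ./" >> /etc/apt/sources.list
    apt-get update && apt-get install myapplication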
