My question is: how do you guys deploy the same code from whatever [D]VCS you use onto multiple machines? Do you have an automated deployment system, and if so, what is it? Is it built in-house? Are there any tools out there that can do this automatically? I am asking because I am getting pretty tired of updating up to 20 machines every time I make some modifications.
P.S.: This probably belongs on ServerFault, but I am asking here because I am thinking of writing my own custom-made deployment system.
Roll your own rpm/deb/whatever for your package, set up your own repo, and have your machines pull on a regular basis. It's really not that hard to do, and it's already built into your system, well tested, and loaded with features. You could use something like Func if you needed to push instead.
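For the Debian flavour of this, a minimal sketch of a flat, unsigned apt repo plus a cron-driven pull on each machine might look like the following (the package name, repo host, and paths are placeholders; a real setup would sign the repo):

```sh
# On the build machine: publish a flat apt repo (assumes dpkg-dev is installed).
mkdir -p /var/www/repo
cp myapp_1.2.3_amd64.deb /var/www/repo/
cd /var/www/repo
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

# On each target machine: point apt at the repo and pull on a schedule.
echo 'deb [trusted=yes] http://repo.example.com/repo ./' \
  > /etc/apt/sources.list.d/myapp.list
echo '0 * * * * root apt-get update -qq && apt-get install -y --only-upgrade myapp' \
  > /etc/cron.d/myapp-update
```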
Depending on your situation, deploying straight from the version control system might not always be the best idea. You can only do so much by just updating files, and mixing deployment and development will probably make the development use of the version control system less free.
I see two alternatives that might be interesting.
Deploy from your continuous integration server. (Add a task that runs after every successful build, copies over files, and executes some remote commands; a sketch of such a post-build step follows below. I'm using this to deploy to a test server and would find it too tricky to upgrade production in such a way.)
Deploy using an existing package manager. You can set up your own apt (or equivalent) repository and package the updates as proper packages. Have your continuous build system build the packages, but let an admin decide whether they should be pushed to the update server. I think this is the only safe solution for production machines.
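A minimal sketch of the first alternative, a post-build step on the CI server that copies files over and restarts the service on a test machine (host, paths, and service name are placeholders):

```sh
#!/bin/sh
# Hypothetical post-build step: sync the build output to a test server
# and restart the service there.
set -e
rsync -az --delete build/ deploy@test.example.com:/srv/myapp/
ssh deploy@test.example.com 'sudo systemctl restart myapp'
```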
We use Capistrano for deployment & Puppet for maintaining the servers and avoiding the inevitable 'configuration drift' when many developers/engineers tinker with the package lists and configuration files.
Both of these programs are written in Ruby, but we use them for our PHP codebase stored in a git repository.
I use a combination of deb packages with puppet to deploy code and configure a bunch of machines.
In most projects I have been involved with, the final stage has always been a scripted rsync deployment to the live environment, so the multiple targets are built into that process.
Related
I'm currently working on a GitHub project, written in Java, that is mainly focused on Windows users. Install4j allows for easy .deb/.rpm etc. package conversion...
We could just distribute the .deb on the download side, but when looking at GitLab a while ago I saw that GitLab uses packagecloud.io as a hosting service for their packages (using their own domain), so they can be updated using apt-get.
My question is whether there is a free service that works just like packagecloud.io (not Launchpad or similar, with Bazaar and all that advanced stuff) which can be hosted either on our own server or on a public server, or whether there is even a downloadable version of packagecloud.io that we could run on our own server.
You can configure Travis CI to run extra commands when the build succeeds. You can put in some conditions so that the deploy stage will only be run if the commit happens to have a tag name. See the deployment documentation to get going.
A number of providers are officially supported, among them packagecloud.io.
You might find the dpl utility useful, as it assists with writing and testing deployment settings.
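As a rough illustration of a tag-gated deploy, here is a hypothetical script that could be called from .travis.yml; it relies on the TRAVIS_TAG variable that Travis sets for tag builds, and it assumes the packagecloud CLI gem is installed and an API token is configured (the user/repo/distro and file name are placeholders):

```sh
#!/bin/sh
# Hypothetical deploy step, e.g. invoked from after_success in .travis.yml.
# Travis sets TRAVIS_TAG to the tag name when the build was triggered by a tag.
if [ -n "$TRAVIS_TAG" ]; then
  # Push the built package to packagecloud (assumes the packagecloud CLI gem
  # and a configured API token; user/repo/distro are placeholders).
  package_cloud push myuser/myrepo/ubuntu/xenial "build/myapp_${TRAVIS_TAG}_amd64.deb"
fi
```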
Check out OpenRepo: https://github.com/openkilt/openrepo
I think this is what you're asking for. It is a package hosting server that can serve packages for both Debian (APT) and Red Hat (RPM) systems.
I am building a Phoenix application with exrm.
Good practice suggests that I should run tests against the same binary that I'll be pushing to production.
Exrm gives me the ability to deploy Phoenix on machines that don't have Erlang or Elixir installed, which makes pulling Docker images faster.
Is there a way to run mix test against the binary built by exrm?
It should be noted that releases aren't a binary file. Sure, they are packaged into a tarball, but that is just to ease deployment; what the tarball contains is effectively the compiled .beam files generated with MIX_ENV=prod mix compile, plus ERTS (if you are bundling it), the Erlang/Elixir .beam files, and the boot scripts/config files for starting the application, etc.
So in short your code will behave identically in a release as it would when running with MIX_ENV=prod (assuming you ran MIX_ENV=prod mix release). The only practical difference is whether or not you've correctly configured your application for being packaged in a release, and testing this boils down to doing a test deployment to /tmp/<app> and booting it to make sure you didn't forget to add dependencies to applications in mix.exs.
The other element you'd need to test is hot upgrades/downgrades, if you are doing those with your application. In that case you need to do test deploys locally to make sure the upgrade/downgrade is applied as expected, since exrm generates default .appup files for you, which may not always do the correct thing, or everything you need them to do, in which case you need to edit them as appropriate. I do this by deploying to /tmp/<app>, starting up the old version, then deploying the upgrade tarball to /tmp/<app>/releases/<new version>/<app>.tar.gz, running /tmp/<app>/bin/<app> upgrade <version>, and testing that the application was upgraded as expected, then running the downgrade command for the previous version to see if it rolls back properly. The nature of the testing varies depending on the code changes you've made, but that's the gist of it.
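Put as a shell sketch, the commands are the ones described above; the app name, version numbers, and paths are placeholders:

```sh
# Build the release with the production config.
MIX_ENV=prod mix release

# Unpack and boot the old version in a scratch location.
mkdir -p /tmp/myapp
tar -xzf rel/myapp/releases/1.0.0/myapp.tar.gz -C /tmp/myapp
/tmp/myapp/bin/myapp start

# Drop the upgrade tarball into place and apply it.
mkdir -p /tmp/myapp/releases/1.0.1
cp rel/myapp/releases/1.0.1/myapp.tar.gz /tmp/myapp/releases/1.0.1/
/tmp/myapp/bin/myapp upgrade 1.0.1

# ...exercise the app, then roll back to check the downgrade path.
/tmp/myapp/bin/myapp downgrade 1.0.0
```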
Hopefully that helps answer your question!
I've got a VS2013 solution with a mix of NodeJS (using TypeScript) and C# class library projects (they're bound together by EdgeJS). Where the NodeJS projects are concerned, one can be considered a library (a RabbitMQ bus implementation), and two are applications that are meant to be hosted as part of a fourth project, with both of them using the bus.
So, one project (host) which will depend on three projects (bus, app1 and app2) (it starts the bus, and passes it to app1 and app2).
Of course, I could just lump all these projects together and be done with it - but that's a horrible idea.
How do I package these projects up for proper reuse and referencing (like assemblies in traditional .NET)?
Is that best done with NPM? If so, does VS provide anything in this area? If not, then what?
Note that, aside from the Bus project, I'm not looking to release these publicly - I'm not sure if that changes anything.
In general, if something can be bundled together as an independent library, then it's best to consider it a Node package and thus refactor that logic out into its own project. It sounds like you've already done this to some extent, separating out your bus, app1, and app2 projects. I would recommend they each have their own Git repositories if they are separate packages.
Here's some documentation to get you started with Node packages:
https://www.npmjs.org/doc/misc/npm-developers.html
The host project, if it's not something you would package but instead deploy, probably does not need to be bundled as a Node package. I would instead just consider this something that would be pulled down from Git and started on some server machine.
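As a sketch of how the host project could then consume them, assuming each package lives in its own (possibly private) Git repository (the organization and repository URLs are placeholders):

```sh
# In the host project: pull the separately packaged projects straight from
# their Git repositories and record them in package.json.
npm install git+ssh://git@github.com/myorg/bus.git --save
npm install git+ssh://git@github.com/myorg/app1.git --save
npm install git+ssh://git@github.com/myorg/app2.git --save
```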
With all that said, your last line is important:
I'm not looking to release these publicly
GitHub does have private repositories, but as of now npmjs.org does not. There are options to run your own private registry (Sinopia and Kappa offer different ways of accomplishing this), but if you don't want this code available for everyone, do not publish it to npmjs.org. You can still package it up in the way I've outlined above, just don't publish it for now.
What's your experience with deploying Haskell code for production in Snap in a stable fashion?
If the compilation fails on the server then I would like to abort the deployment, and if it succeeds then I would like it to turn off the running snap-server and start the new version instead.
I know there are plenty of ways, everything from rsync to git hooks (git pull was a nightmare), but I would like to hear about your experiences.
Where I work, we use Happstack and deploy on Ubuntu Linux. We actually debianize the web app and all of its dependencies, and then build them in the autobuilder.
To actually install on the server, we just run apt-get update && apt-get install webapp-production
The advantage of this system is that it makes it easy for all developers to develop against the same version of the dependencies, and you know that all the source code is checked in properly and can be rebuilt anywhere, not just on one particular machine. Additionally, it provides a mechanism for patching libraries from Hackage when needed.
The downside is that apt-get and cabal-install do not get along well. You either have to build everything via apt-get or do everything via cabal-install.
Here's what we do. First off, our servers all run the same version of Ubuntu, as do our development machines. We write code, test, etc. in whatever OS we care to use, and when we're ready to push we build on the devel machine(s). As long as that compiles cleanly, we stop half of the frontend servers, rsync over the resources directory and a new copy of the binary, and then use scripts to start them back up. Then we repeat for the other half.
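A stripped-down sketch of that rolling restart (host names, paths, and the start/stop scripts are placeholders):

```sh
#!/bin/sh
# Deploy to half of the frontends at a time so the site stays up throughout.
set -e
FIRST_HALF="fe1 fe2"
SECOND_HALF="fe3 fe4"

deploy_half() {
  for host in $1; do
    ssh "$host" '/srv/myapp/stop.sh'
    rsync -az resources/ "$host:/srv/myapp/resources/"
    rsync -az myapp "$host:/srv/myapp/myapp"
    ssh "$host" '/srv/myapp/start.sh'
  done
}

deploy_half "$FIRST_HALF"
deploy_half "$SECOND_HALF"
```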
In my opinion, you should question the logic of maintaining a full toolchain on your frontend server(s) when you can easily transfer just the binary and static assets, provided that the versions of the external libraries (database, image, etc.) match the build environment. Heck, you could just use a VirtualBox instance to do the final compile, again so long as the OS release and libraries match.
I'm new to both of these tools, and I'm also very new to Linux system administration, so I apologize ahead of time for what may seem like a total n00b question.
Basically, I'm starting a whole new project from scratch. Yaaay! Exciting! However, I'm a little lost on how to set up the project. I've installed both git and maven on my dev machine and run through some tutorials. I've also set up git on my server, and have successfully pushed code to it and pulled code from it.
So, first question: Is it even a good idea to use git and Maven together? Git seems like the best source control system, and Maven seems like the best build system. Are they known to work well together? Or am I needlessly creating trouble for myself at this early (and precarious) stage of the project? I've used Ant enough to know that I don't want to use it, and I'm not really a fan of SVN, although I'll use it if I have to.
Second question: Given that these two tools work well together, what's the best-practices way of setting them up? I know that git is "peer-to-peer", although I suppose nothing is stopping you from setting up a single repository for the git user and having all the devs sync up with that repo when it's time to do a build. Is that the right way to go? How about Maven? Maven seems kinda single-user oriented. Like, everybody sets up Maven on their own machine and has their own local Maven repo, right? Or wrong? Would it make sense to create a "Maven user" on my server, and have that user do all my builds from the "main" git repo?
Apologies if I'm totally mistaken on how to use these tools. As I said, I'm pretty new to these things. Any help you have is appreciated.
(Also, I'm working on Linux, doing Java dev work in Eclipse, using Spring for the framework, MySQL for the data store, and Hibernate as the ORM. Don't know if any of that matters.)
Thanks!
Q1: Yes, git will work well with any build system. Usually the VCS is well abstracted away by any modern build system. Make sure you set up your .gitignore file so that you are not tracking any build artifacts.
Q2: The best practice is to have an integration branch to build from. While developing, use topic or feature branches. When ready, merge into the integration branch and push that up to the central repository that Maven can build from. Google "git-flow" for more ideas. You generally want a central build server if you are working on a team, to ensure everyone's builds happen on the same machine; this matters less if you are working alone or with just one other developer.
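A minimal sketch of that branch flow (branch and remote names are placeholders):

```sh
# Start a topic branch off the integration branch.
git checkout -b feature/login integration
# ...commit work on the feature...

# Fold the finished feature back in and publish it.
git checkout integration
git merge --no-ff feature/login
git push origin integration    # the build server builds from this branch
```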
Hope this helps.