I'm looking to re-organize the way we release our internal software. All of the code (PHP web apps, some Java apps, and Perl scripts) is checked into Subversion repositories, but there are no branches or tags; everything is checked into trunk (only around 1-3 devs per app). On the production Linux servers, the software is run directly from an svn working copy (actually, most of the changes happen there as well).
Since we have a lot of small apps and very often release small changes to the running system, I'm looking for a very lean or transparent way to do some release engineering and to clean up this mess a bit.
Are there any tools out there that may help me to do so in a heterogeneous environment (language-wise) like that?
Or does anyone have an idea of how to do this properly?
Otherwise, I've thought of writing some release (shell) scripts that automatically create Subversion tags from trunk and then check out the corresponding tag onto the production servers. But that sounds kind of hackish to me as well.
Thanks,
Haes.
Continuous Integration is definitely the way to go - any CI (even minimalist batch files) is better than none - but it'll only be as good as the policies you have in place. Since your files don't really end up as a 'binary' or 'distributable', marking a release might require only that you tag the repository, or even just stash the Subversion revision number somewhere. The important policy you need is that any release can be reconstructed whenever you need it - so you can compare current and previous releases, or go back to an older release if something goes wrong. Don't worry about the 'overhead' of creating tags in svn - that's very efficient.
A release script that does the Subversion tag sounds fine. A CI implementation (I'd recommend CruiseControl since it's ideal for heterogeneous work, although heterogeneity requires a bit more configuration overhead) is great, since you can automatically kick the process off on a Subversion check-in, and run automated tests that determine whether it's good enough to tag or not.
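A minimal sketch of such a release script (the repository URL, tag naming, and deploy path are my own placeholders, not anything from the question):

    #!/bin/sh
    # Tag trunk as a release, then check the tag out on the server.
    set -e
    REPO="https://svn.example.com/myapp"   # hypothetical repository
    VERSION="$1"                           # e.g. ./release.sh 1.0.5
    DEPLOY_DIR="/var/www/myapp"            # hypothetical deploy target

    # Server-side copy; tags are cheap ("lazy") copies in Subversion.
    svn copy "$REPO/trunk" "$REPO/tags/release-$VERSION" \
        -m "Tagging release $VERSION"

    # On the production host: check out the new tag.
    svn checkout "$REPO/tags/release-$VERSION" "$DEPLOY_DIR"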
I'd definitely not auto-deploy to a release server. A 'staging area' (call it 'nightly build', 'beta test', whatever) would be better. Let your users bang away on that before you decide it's good enough to roll out onto the production servers. And, as long as you've got the policy in place of being able to rollback to an earlier version, you've mitigated the possibility of a bad roll-out.
The auto-checkout onto production servers is the only 'hackish' part - an automated checkout, test, tag, beta deploy is slick enough. Rolling out to production shouldn't have an easy button, though.
Use tags and branches; make it a part of the development cycle. When you update that "stable-1.0" branch, have tested the change(s) and tagged it "release-1.0.5", you simply do "svn switch" on the server to the new tag. Didn't work, despite having tested it? Switch back, and figure out what's wrong.
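Concretely, that switch-based rollout might look like this (URLs and version numbers are made up):

    # Tag the tested state of the stable branch (hypothetical URLs).
    svn copy https://svn.example.com/myapp/branches/stable-1.0 \
             https://svn.example.com/myapp/tags/release-1.0.5 \
             -m "Release 1.0.5"

    # On the server, point the working copy at the new tag...
    svn switch https://svn.example.com/myapp/tags/release-1.0.5

    # ...and if the release misbehaves, switch straight back.
    svn switch https://svn.example.com/myapp/tags/release-1.0.4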
But beware, branching in Subversion can be a pain, at least pre-1.5. If you or your developers are not experienced with branches, expect a bit of hassle and/or mistakes in the beginning. But as long as you've committed, no code should be lost (at worst it's simply difficult to merge).
Your developers really should learn how to use branching; it can be very useful for a variety of purposes (not just for release engineering).
Do not automatically switch over code on your production servers; somebody might accidentally hit the wrong button. Production updates should always be done with care. Scripts for adding new tags are, imho, unnecessary given how simple it is, but your mileage may vary.
One last thing: don't allow anyone to make changes directly on your production server. It might cause conflicts, and those tend to take time to resolve. Not to mention, it destroys your ability to reproduce a given release on different workstations (works fine here! why not on the server? hmm).
Some Continuous Integration servers do this sort of thing. Hudson, for example, has Subversion integration. It can tag, run tests, and deploy for you.
I'd use Hudson. In addition to fetching from and tagging in svn (see sblundy's answer), it can be useful in release management with the proper plugins. For example, you could try a plugin to "promote" the builds you deploy to production, and keep a list of both the promoted builds themselves and a change/commit log for the various versions.
Using a lot of (official and non-official) Terraform providers, I'm looking for a tool to perform security analysis on Terraform providers before executing terraform plan/apply commands (and thus executing provider code). I want to prevent malicious code in providers from being executed blindly.
I'm basically running the terraform providers mirror command to save local copies of the required providers, and I'm wondering if I can security-scan that result.
I tested kics, checkov and tfsec, but they all look for security issues in my static Terraform code, not in the providers themselves.
Do you have any good advice on this topic?
This is actually quite a good question. There are many other problems that can be reduced to the same generic question - how to make sure that the thing you downloaded from the internet does not do anything malicious to you, e.g.:
How to make sure that a Minecraft plugin does not hack you?
How to make sure that a Spring Boot dependency does not hack you?
How to make sure that a library xxx you attach to your project does not do harm to you?
Should you use Docker image yyy in your project?
Truth is: everything you use has the potential to explode right in your face (or more correctly: right in the face of the system owner). That's why the system owner (usually a company) defines a set of rules about what is and isn't allowed. Not aware of any such rules? Below is the set we came up with ourselves when thinking about onboarding a new library for some projects to use:
Do not take random stuff from GitHub. Take only products with a longer history, a small bug backlog, few to no past issues on the CVE list, and active maintenance.
Do static code analysis yourself. Sometimes it is possible to have tools that work at the binary level do that for you; sometimes you can do it at the source level only. In the case of Java libraries, check what tools like Dependency Track think about the library and version you are about to use. (For the Terraform provider case, see the sketch after this list.)
Run the code and see how it works: what does it write, what does it read, what URLs does it communicate with (do a TCP dump if necessary).
Document everything you have done somewhere.
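For the Terraform provider case specifically, a minimal sketch of that idea (the mirror directory and the choice of grype as the binary scanner are my assumptions; nothing here is prescribed by the Terraform tooling):

    #!/bin/sh
    set -e

    # Save local copies of the providers the configuration requires.
    terraform providers mirror ./providers-mirror

    # Scan the mirrored packages with a scanner that understands Go
    # binaries, e.g. grype (assumes grype is installed and can see
    # into the provider archives).
    grype dir:./providers-mirror

    # Document what was reviewed, and when.
    date >> provider-review.log
    find ./providers-mirror -type f -name '*.zip' \
        -exec sha256sum {} \; >> provider-review.log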
This gives you no 100% confidence that things will not go terribly wrong. But it is a systematic approach that will reduce the risk of doing something stupid.
Deployment
I currently work for a company that deploys through GitHub. However, we have to log in to all 3 servers to update them manually with a shell script. When I talked to the CTO, he made it very clear that auto-deployment is like voodoo to him. Which is understandable. We have developers in 4 different countries working remotely. If someone were to accidentally push to the wrong branch, we could experience downtime, and with our service we cannot be down for more than 10 minutes. And with all of our developers in different timezones, our CTO wouldn't know until the next morning, and we'd have trouble meeting with the developers who caused the issue because of the vast time differences.
My Problem: Why I want auto-deploy
While working on my personal project, I decided that it may be in my best interest to use auto-deployment; still, my project is mission-critical, and I'd like to mitigate downtime and human error as much as possible. The problem with manual deployment is that I simply cannot manually deploy to up to 20 servers via SSH in a reasonable amount of time. The problem gets worse when I consider auto-scaling: I'd need to spin up a new server from an image and deploy to it.
My Stack
My service is developed on the Node.js Express framework. These environments are very rich in deployment and bootstrapping utilities. My project uses npm's package.json to uglify my scripts on deploy, and it also runs my service as a daemon using forever-monitor. I'm also considering Grunt to further bootstrap both my production and testing environments.
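A sketch of how that wiring might look in package.json (script names, file paths, and versions are my own guesses; this uses the forever CLI rather than the programmatic forever-monitor API):

    {
      "scripts": {
        "build": "uglifyjs src/app.js -o dist/app.js",
        "start": "forever start dist/app.js",
        "stop": "forever stop dist/app.js"
      },
      "dependencies": {
        "express": "^4.0.0",
        "forever": "^0.15.0"
      }
    }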
Deployment Methods
I've considered so far:
Auto-deploy with git, using webhooks
Deploying manually with git via shell
Deploying with npm via shell
Docker
I'm not well versed in technologies like Docker, but I'm very interested in its use, and I'd definitely give points to whoever gives me a good description of why I should or shouldn't use it. Other methods are welcome.
My Problem: Why I fear auto-deploy
In a mission-critical environment, downtime can put your business on hold, and to make matters worse, there's a fleet of end users hitting the refresh button. If someone pushes something to the production branch that doesn't pass the build, and that gets auto-deployed, then I'm looking at a very messy situation.
I love the elegance of auto-deployment, but the risks make me skeptical. I'm very much in favor of making myself as productive as possible. So I'm looking for a way to deploy to many servers with ease, and in a very efficient manner.
The Answer I'm Looking For
Explain to me how to mitigate the risks of auto-deployment, or explain to me an alternative which is better suited to my project. Feel free to ask for any missing details in the comments.
No simple answer here. I offer a set of slides published by Mike Brittain from Etsy, a company that practices continuous deployment:
http://www.slideshare.net/mikebrittain/mbrittain-continuous-deploymentalm3public
Selected highlights:
Deploy frequently and in small batches
Use config/feature flags to control system behaviour and "dark release" major features
Code review all changes to the production branch
Invest in monitoring and improve the feedback loop
Manage "services" separately to the "application" and be mindful of run-time version and backwardly compatible changes.
Hope this helps
Say I've got a //Repo/... repo. Currently devs generally tend to do all their work directly in there, which normally isn't a problem for small pieces of work. Every so often, this approach fails for various reasons, mainly because they're unable to submit an incomplete change to Live.
So, I was wondering, is there a way to enforce on the server that:
1) no files can be directly checked out from //Repo/...
2) users then branch to a private area (//Projects/...)
3) dev, test, submit, dev, test, submit, ...
4) on dev complete, they can re-integrate back into //Repo/...
I guess the last part is the problem, as files need to be checked out! Has anyone implemented something similar? Any suggestions are much appreciated.
There is no way (that I know of) to enforce this type of workflow in P4. You could try to enforce it with commit triggers, restricted permissions, or file locking; however, I believe it would only result in more work (micro-management) and frustrate you and your team.
The best way to establish and enforce any SCM workflow is to set it as company/studio policy. Your team should be responsible for following the set procedure and able to determine (by themselves or through discussion) whether an issue can be fixed in the main line.
One note about the proposed workflow: creating a new branch for every piece of work will eventually cause problems, and at some point you will need to perform maintenance on the server to conserve disk space and keep depot browsing fast.
For more information on (over)branching in Perforce, read this Perforce blog entry from 2009: Perforce Anti-Patterns Part 2: Overuse of branching.
In many studios using Perforce, most developers have their own "working" branch, which they continually re-use whenever a change cannot safely be made in the main line.
If I understand your question properly, you should try the shelving and offline-work features of Perforce. Process is the main thing for achieving success in this scenario, so you might need to set up the right process to support it.
For more info about shelving and working offline with Perforce, see the following link:
http://www.perforce.com/perforce/doc.current/manuals/cmdref/shelve.html
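A rough sketch of the shelving flow (the changelist number is made up):

    # Shelve pending work in changelist 1234: it is stored on the server
    # without being submitted to //Repo/...
    p4 shelve -c 1234

    # Later, or on another machine, restore the shelved files to a workspace.
    p4 unshelve -s 1234

    # Once the change is complete and tested, drop the shelf and submit.
    p4 shelve -d -c 1234
    p4 submit -c 1234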
A friend of mine and I are developing a web server for system administration in Perl, similar to Webmin. We have set up a Linux box with the current version of the server working, along with other open-source web products like webmail, a calendar, an inventory management system, and more.
Currently, the code is not under revision control and we're just doing periodic snapshots.
We would like to put the code under revision control.
My question is: what would be a good way to set this up, and what software should we use?
One solution I can think of is to set up the root of the project, which is currently on the Linux box, to be the root of the repository as well. We would then check out the code on our personal machines, work on it, commit, and test the result.
Any other ideas, approaches?
Thanks a lot,
Spasski
Version Control with Subversion covers many fundamental version control concepts in addition to being the authority on Subversion itself. If you read the first chapter, you might get a good idea on how to set things up.
In your case, it sounds like you're doing the actual development on the live system. This doesn't really matter as far as a version control system is concerned. You can still use Subversion for:
Committing as a means of backing up your code and updating your repository with working changes. Make a habit of committing after testing, so there are as few broken commits as possible.
Tagging as a means of keeping track of what you do. When you've added a feature, make a tag. This way you can easily revert to "before we implemented X" if necessary.
Branching to develop larger chunks of changes. If a feature takes several days to develop, you might want to commit during development, but not to trunk, since you would then be committing something that is only half finished. In this case, you should commit to a branch.
Where you create a repository doesn't really matter, but you should only place working copies where they are actually usable. In your case, it sounds like the live server is the only such place.
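A minimal sketch of that initial setup (paths and URLs are placeholders, and svn+ssh access is just one option):

    # On the Linux box: create the repository and import the current code.
    svnadmin create /srv/svn/sysadmin-server
    svn import /opt/sysadmin-server \
        file:///srv/svn/sysadmin-server/trunk \
        -m "Initial import of the current snapshot"

    # On each developer machine: check out a working copy to work on.
    svn checkout svn+ssh://thebox/srv/svn/sysadmin-server/trunk sysadmin-server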
For a more light-weight solution, with less overhead, where any folder anywhere can be a repository, you might want to use Bazaar instead. Bazaar is a more flexible version control system than Subversion, and might suit your needs better. With Bazaar, you could make a repository of your live system instead of setting up a repository somewhere else, but still follow the 3 guidelines above.
How many webapp instances can you run?
You shouldn't commit untested code, or make commits from a machine that can't run your code. Though you can push to backup clones if you like.
Weird question, perhaps. We have a number of simple utilities written in-house that need to be run on an automated basis. These are not build jobs. Just things like running SendOutHourlyEmailAlarms.exe, KeepFoldersInSynch.exe and such. I would normally set these things up as simple scheduled tasks/AT commands (or a Windows Service if more granular control is needed over the scheduling), but a co-worker has set up a number of these tasks as build projects on the CruiseControl.NET server. I asked him why he set these up this way and his response was that the executions (and their logs, return values, thrown exceptions) were all tracked and logged and that this information was accessible through an organized interface on the build server website. I couldn't argue with this.
But this just has a smell that I can't quite identify. Is this a proper use of CruiseControl.NET? If not, what are the dangers? Even if it may fit the bill, aren't there other products better suited for this type of thing?
We have all sorts of non-build-related tasks for the exact same reason your coworker gave: I want one spot to look up any and all jobs I need to run.
Some Examples of our CC.NET projects:
FTP installers to Remote QA
Creating Source Code Documentation
Creating VMs with the installers installed, for QA in the morning
Archiving Installers
Pretty much anything I have to do by hand more than once becomes a project. IMHO it is much better than a scheduled task for one other reason as well: our config files are in source control, so we have one place to make adjustments. We don't have to log into multiple servers to make adjustments, or wonder which server did what.
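As an illustration, a minimal sketch of what one of these non-build projects might look like in ccnet.config (the project name, schedule, and executable path are hypothetical):

    <cruisecontrol>
      <project name="ArchiveInstallers">
        <triggers>
          <!-- Run every morning at 06:00 instead of on a source-control change. -->
          <scheduleTrigger time="06:00" buildCondition="ForceBuild" />
        </triggers>
        <tasks>
          <!-- Hypothetical utility; CC.NET records its output and exit code. -->
          <exec executable="C:\Tools\ArchiveInstallers.exe" />
        </tasks>
      </project>
    </cruisecontrol>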
I think your coworker has made a good argument. If these tasks are related to the development process, then placing them in CruiseControl.NET as a project seems acceptable. I would draw the line at using a development server to run production processes, though. Although it is true that "if the only tool you have is a hammer, you tend to see every problem as a nail," it doesn't mean that the hammer isn't capable of solving a lot of problems!
Just because a tool is designed to solve a particular problem does not mean that it will not have equal facility at solving similar problems outside the scope originally conceived by the tool creator. If CruiseControl.NET solves these problems well, then it is absolutely the appropriate tool to use.