Partial packages in Continuous Delivery - SharePoint

We are currently running a C# project (built on SharePoint) and have implemented a series of automated processes to help delivery. Here are the details:
Continuous Integration. A typical CI system for frequent compilation and deployment in the DEV environment.
Partial Package. Every week, a list of defects and their accompanying fixes is identified, and the corresponding assemblies are fetched from the full package to form a partial package. The partial package is deployed and tested in the subsequent environments.
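Roughly, the partial-package step can be pictured as something like the following sketch (the list file and folder names here are just placeholders, not our real layout):

```powershell
# Rough sketch: copy only the assemblies tied to this week's fixes out of the
# full package. The list file and folder names are invented placeholders.
$fixedAssemblies = Get-Content .\weekly-fix-assemblies.txt        # one DLL name per line
New-Item -ItemType Directory -Path .\PartialPackage -Force | Out-Null
foreach ($dll in $fixedAssemblies) {
    Copy-Item -Path (Join-Path .\FullPackage $dll) -Destination .\PartialPackage
}
Compress-Archive -Path .\PartialPackage\* -DestinationPath .\PartialPackage.zip -Force
```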
In this pipeline, two packages are going through verification. Extra effort went into building a separate system (web site, scripts, process, etc.) for partial packages. However, some factors hinder further improvement.
Build and deploy time is too long. On developers' machines, every single modification to an assembly triggers a 5 to 10 minute redeployment in IIS. In addition, it takes 15 minutes (or even more) to rebuild the whole solution. (This is the most painful part of the project.)
Geographical difference. Every final package is delivered to another office, so manual operation is inevitable and a small package size is preferred.
I would be really grateful for your opinions on how to push these Continuous Delivery practices forward. Thanks!

I imagine the reason this question has no answers is that its scope is too large. There are far too many variables that need to be eliminated, but I'll try to help. I'm not sure of your skill level either, so my apologies in advance for the basics, but I think they'll help improve and better focus your question.
Scope your problem to as narrow as possible
"Too long" is a very subjective term. I know of some larger projects that would love to see 15 minute build times. Given your question there's no way to know if you are experiencing a configuration problem or an infrastructure problem. An example of a configuration issue would be, are your projects taking full advantage of multiple cores by being built parallel /m switch? An example of an infrastructure issue would be if you're trying to move large amounts of data over a slow line using ineffective or defective hardware. It sounds like you are seeing the same times across different machines so you may want to focus on configuration.
Break down your build into "tasks" and each task into the most concise steps possible
This will do the most to help you tune your configuration and understand what you need to better orchestrate. If you are building a solution on a CI server, you are probably running a command like msbuild.exe OurProduct.sln, which is the right way to get something up and running fast so there IS some feedback. But in order to optimize, this solution will need to be broken down into independent projects. If you find one project that's causing the bulk of your time sink, it may indicate other issues or may just be the core project that everything else depends on. How you handle your build job dependencies depends on your CI server and solution. Doing it this way creates more orchestration on your end, but gives faster feedback if that's what's required, since you're only building the project that had the change, not the complete solution.
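As a sketch of that idea, once the solution is split up, a change in one project only needs that project (and its references) rebuilt; the project path below is hypothetical:

```powershell
# Build only the project that changed (plus its project references) instead of
# re-running the whole solution. The .csproj path is a hypothetical example.
msbuild.exe .\src\Intranet.Core\Intranet.Core.csproj /m /p:Configuration=Release
```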
I'm not sure what you mean about the "geographical difference" thing. Is this a "push" to the office or a "pull" from the offices? This is a whole other question. HOW are you getting the files there? And why would that require a manual step?
Narrow your scope and do multiple questions and you will probably get better (not to mention shorter and more concise) answers.
Best!

I'm not a C# developer, but the principles remain the same.
To speed up your builds, it will be necessary to break your application up into smaller chunks if possible. If that's not possible, then you've got bigger problems to attack right now. Remember the principles of APIs, components, and separation of concerns. If you're not familiar with these principles, it's definitely worth the time to learn about them.
In terms of deployment - great that you've automated it, but it sounds like you are still doing a big-bang deployment. Can you think of a way to deploy only deltas to the server(s), or do you deploy a single compressed file? Break it up if possible.

Related

Has anyone used OpenAM/OpenDJ/OpenIDM suite without using ForgeRock's Support plans?

We are looking to implement an open source identity management system and have identified ForgeRock's stack as the best technology to implement.
The high cost of ForgeRock support and its per-User pricing model, however, is a potential roadblock. Our current User base is ~45K, but we expect to ramp up to 1M in the next 2 years.
So we're looking into scenarios where we proceed without FR Support. The lack of FR Maintenance releases would seem to put a damper on that, so we're curious if others have gone that route.
What has been your experience?
What kind of projects have you done this for? Size, etc.
In the absence of FR's Maintenance releases, have you been able to easily create your own patches?
What are some potential pitfalls?
If there are blogs or other communities that deal with this topic, please point me in their general direction.
Thanks.
As a community user I used OpenAM (formerly OpenSSO) and OpenDJ for the past 6 years or so, but it was a very small deployment (10k users, only 1 server instance of each product).
1) In the early stages we did have reliability issues with OpenAM, which we mostly resolved by restarting the server instances - clearly not preferred, but we didn't really spend much development effort on actually trying to resolve it (plus we lacked the necessary knowledge for investigation back then). After spending some actual effort on learning the product, it turned out that most of our issues were either self-inflicted (badly written customizations or misconfigurations) or something that had recently been resolved in the OpenAM project and was relatively simple to backport to our version.
Of course, the experience itself largely depends on how often you want to make configuration changes in the deployment. Since we weren't changing a lot of things over the years, OpenAM just worked nicely for long intervals without requiring any kind of maintenance.
3) Since we didn't really run into new issues (the config barely changed), there weren't too many surprises after a while. The security patches were mostly simple to backport and didn't cause too much trouble. (It did help that after 1.5 years I became a FR employee and actively worked on OpenAM issues though :) )
4) I think running without subscription has its risks, but they mostly relate to:
are you planning to roll out new features based on OpenAM functionality during those 2 years (i.e. are you planning to constantly make changes to the deployment)?
do you have good developers to work on those features? Working with OpenAM, for example, can quite easily require you to look at the source code to figure out how things work, although the quality of the documentation has improved a lot over the years. Regardless, backporting fixes is going to become more and more difficult over time, as the releases will differ a lot more (since the development team is getting bigger for each project) - and even then you can't just assume that every issue you run into has already been resolved in trunk. The need to resolve some issues on your own is a cost/risk you need to take into account.
what kind of SLA do you want to have for your deployment? Is your business going bankrupt after a 1 minute outage? Is it acceptable to just frequently restart your service (in case you run into some weird issues)?
do you really need support for all 3 products? For example my background would allow me to work easily without OpenAM support, but I would be in the deep end if something is going wrong with my provisioning system...
And a generic remark:
User growth of 20x within two years sounds a bit unrealistic, or very hopeful at least. Maybe what you should look for is a 1-year subscription for a more reasonable target number, and then renew once you have a better understanding of customer growth in your business?

Mitigating the risks of auto-deployment

Deployment
I currently work for a company that deploys through GitHub. However, we have to log in to all 3 servers and update them manually with a shell script. When talking to the CTO, he made it very clear that auto-deployment is like voodoo to him, which is understandable. We have developers in 4 different countries working remotely. If someone were to accidentally push to the wrong branch, we could experience downtime, and with our service we cannot be down for more than 10 minutes. With all of our developers in different timezones, our CTO wouldn't know until the next morning, and we'd have trouble meeting with the developers who caused the issue because of the vast time differences.
My Problem: Why I want auto-deploy
While working on my personal project, I decided that it may be in my best interest to use auto-deployment, but my project is still mission critical and I'd like to mitigate downtime and human error as much as possible. The problem with manual deployment is that I simply cannot manually deploy to up to 20 servers via SSH in a reasonable amount of time. The problem gets worse when I consider auto-scaling: I'd need to spin up a new server from an image and deploy to it.
My Stack
My service is developed on the Node.js Express framework. These environments are very rich in deployment and bootstrapping utilities. My project uses npm's package.json to uglify my scripts on deploy, and also runs my service as a daemon using forever-monitor. I'm also considering grunt.js to further bootstrap my environments for both production and testing.
Deployment Methods
I've considered so far:
Auto-deploy with git, using webhooks
Deploying manually with git via shell
Deploying with npm via shell
Docker
I'm not well versed in technologies like Docker, but I'm very interested in it, and I'd definitely give points to whoever gives me a good explanation of why I should or shouldn't use it. Other methods are welcome.
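To make the webhook option above more concrete, here is roughly what I picture such a deployment running on each server (all paths, branch names, and the restart command below are placeholders, not my actual setup):

```powershell
# Rough sketch of a webhook-triggered deployment step on one server. Assumes the
# webhook endpoint simply invokes this script; every path, branch name, and the
# restart command are placeholders to adapt.
Set-Location /srv/myservice                # hypothetical application directory
git fetch origin
git checkout production
git reset --hard origin/production         # deploy exactly what is on the protected branch
npm install --production                   # refresh runtime dependencies
npm run build                              # e.g. the uglify step wired into package.json
forever restart app.js                     # or whatever process supervisor restart applies
```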
My Problem: Why I fear auto-deploy
In a mission-critical environment, downtime can put your business on hold, and to make matters worse there's a fleet of end users hitting the refresh button. If someone pushes something that isn't passing the build to the production branch and it's auto-deployed, then I'm looking at a very messy situation.
I love the elegance of auto-deployment, but the risks make me skeptical. I'm very much in favor of making myself as productive as possible, so I'm looking for a way to deploy to many servers with ease and in a very efficient manner.
The Answer I'm Looking For
Explain to me how to mitigate the risks of auto-deployment, or explain to me an alternative which is better suited to my project. Feel free to ask for any missing details in the comments.
No simple answer here. I offer a set of slides published by Mike Brittain from Etsy, a company that practices continuous deployment:
http://www.slideshare.net/mikebrittain/mbrittain-continuous-deploymentalm3public
Selected highlights:
Deploy frequently and in small batches
Use config/feature flags to control system behaviour and "dark release" major features (see the sketch after this list)
Code review all changes to the production branch
Invest in monitoring and improve the feedback loop
Manage "services" separately to the "application" and be mindful of run-time version and backwardly compatible changes.
Hope this helps

What are the pros and cons of git-flow vs github-flow? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
We have recently started to use GitLab.
Currently using a "centralized" workflow.
We are considering moving to GitHub-flow, but I want to make sure it is the right choice.
What are the pros and cons of git-flow vs github-flow?
As discussed in GitMinutes episode 17, and by Nicholas Zakas in his article on "GitHub workflows inside of a company":
Git-flow is a process for managing changes in Git that was created by Vincent Driessen and accompanied by some Git extensions for managing that flow.
The general idea behind git-flow is to have several separate branches that always exist, each for a different purpose: master, develop, feature, release, and hotfix.
The process of feature or bug development flows from one branch into another before it’s finally released.
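As a quick, illustrative sketch, the Git extensions mentioned above wrap that branch structure in commands like these (feature and release names are examples):

```powershell
# Illustrative git-flow extension commands; feature and release names are examples.
git flow init                      # sets up master, develop, and branch name prefixes
git flow feature start login-fix   # branches feature/login-fix off develop
git flow feature finish login-fix  # merges it back into develop and deletes the branch
git flow release start 1.2.0       # branches release/1.2.0 off develop
git flow release finish 1.2.0      # merges into master and develop, and tags 1.2.0
```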
Some of the respondents indicated that they use git-flow in general.
Some started out with git-flow and moved away from it.
The primary reason for moving away is that the git-flow process is hard to deal with in a continuous (or near-continuous) deployment model.
The general feeling is that git-flow works well for products in a more traditional release model, where releases are done once every few weeks, but that this process breaks down considerably when you’re releasing once a day or more.
In short:
Start with a model as simple as possible (like GitHub flow tends to be), and move towards a more complex model if you need to.
You can see an interesting illustration of a simple workflow, based on GitHub-Flow at:
"A simple git branching model", with the main elements being:
master must always be deployable.
all changes made through feature branches (pull-request + merge)
rebase to avoid/resolve conflicts; merge into master
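A command-level sketch of that cycle, with example branch names:

```powershell
# Sketch of the GitHub-flow style cycle described above; branch names are examples.
git checkout -b feature/shorter-builds    # branch off an always-deployable master
# ...commit work on the feature branch...
git fetch origin
git rebase origin/master                  # resolve conflicts on the feature branch, not master
git push -u origin feature/shorter-builds
# open a pull request, review it, merge into master, then deploy master
```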
For a more complete and robust workflow, see gitworkflow (one word).
There is no silver-bullet workflow that everyone should follow, since all models are sub-optimal. Having said that, you can select a suitable model for your software based on the points below:
Multiple versions in production - use git-flow
If your code has multiple versions in production (i.e. typical software products like operating systems, office packages, custom applications, etc.), you may use git-flow. The main reason is that you need to continuously support previous versions in production while developing the next version.
Single version in production, simple software - use GitHub-flow
If your code has only one version in production at all times (i.e. web sites, web services, etc.), you may use GitHub-flow. The main reason is that you don't need to complicate things for the developer. Once a developer finishes a feature or a bugfix, it is immediately promoted to the production version.
Single version in production but very complex software - use GitLab-flow
For large software like Facebook and Gmail, you may need to introduce deployment branches between your branch and the master branch, where CI/CD tools can run before changes get into production. The idea is to introduce more protection for the production version, since it is used by millions of people.
I've been using the git-flow model for over a year and it's OK.
But it really depends on how your application is developed and deployed.
It works well when you have an application with a slow development/deployment flow.
But when, like GitHub, you have an application with a fast development/deployment flow (we deploy every day, and sometimes several times a day), git-flow tends to slow everything down in my opinion, and I use GitHub flow instead.
The other thing to consider is that git-flow is not standard git, so you might, and when I say you might I really mean you will, find developers who don't know it; then there is the learning curve and more chance to mess things up. Also, as mentioned above, someone developed a set of scripts to make git-flow easier to use, so you don't have to remember all the commands; they will assist you with the commands, but remembering the actual flow is still your job. I've come across more than one case where a developer didn't know whether it was a hotfix or a feature, or, even worse, couldn't remember the flow and messed things up.
There is at least one GUI that supports git-flow on Mac and Windows: SourceTree.
These days, I'm leaning more towards GitHub flow, due to its simplicity and ease of management. Also, because of "deploy early, deploy often"...
Hope this helps

Integrating PowerShell in SharePoint

For some time I have been looking at the possibility of integrating PowerShell as a scripting engine in SharePoint, but I haven't found the right solution yet.
My main objective is to enable event triggers in e.g. a list to call and execute a PowerShell script (by filename) on the local server. This would give me a lot of flexibility compared to using an ordinary event handler written in Visual Studio, but the question is whether it is possible and whether I have overlooked any serious security issues.
Since every unique idea I have come up with over the years has already been invented by somebody else, I might have missed an existing product/project, so any links to such projects would be appreciated. Thanks.
In the spirit of "already being invented by somebody else", check out http://www.codeplex.com/iLoveSharePoint for some very interesting uses of PowerShell inside SharePoint. There are some great code samples and documentation. I haven't tried it myself yet, but it seems interesting.
I see what you're trying to achieve, but there's something that just doesn't "feel right" about a user indirectly running script code on your server.
The key difference is that the script can be run by anyone logging into the server. Event handlers can only be run by SharePoint. Strict validation of any inputs would be essential. You should also ensure the script is signed so tampered scripts won't execute.
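For example, here is a minimal sketch of such a signature check, assuming the event handler shells out to a script at a configured path (the path below is hypothetical):

```powershell
# Minimal sketch: refuse to run the configured script unless its Authenticode
# signature is valid. The script path is a hypothetical placeholder.
$scriptPath = 'D:\EventScripts\OnItemAdded.ps1'
$signature  = Get-AuthenticodeSignature -FilePath $scriptPath
if ($signature.Status -eq 'Valid') {
    & $scriptPath
} else {
    throw "Refusing to execute '$scriptPath': signature status is $($signature.Status)."
}
```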
Also, scripts by their nature aren't really designed for enterprise solutions. There is less opportunity for best practices such as good software architecture, design patterns, source control, code analysis, unit testing, and reuse of code. It's also messy/difficult to share code with a common code base that contains web parts, controls, entities, etc.
Finally, introducing PowerShell means another technology to be maintained in the mix we already have with SharePoint. This might be OK if you are comfortable with it.
Depending on how much customisation has already been done or is planned for the future, some of the points above may not matter. Be sure to think about how this idea would feel if implemented 6, 12 and 24 months down the track.

Should CruiseControl.NET be used to handle tasks that are not related to building source?

Weird question, perhaps. We have a number of simple utilities written in-house that need to be run on an automated basis. These are not build jobs. Just things like running SendOutHourlyEmailAlarms.exe, KeepFoldersInSynch.exe and such. I would normally set these things up as simple scheduled tasks/AT commands (or a Windows Service if more granular control is needed over the scheduling), but a co-worker has set up a number of these tasks as build projects on the CruiseControl.NET server. I asked him why he set these up this way and his response was that the executions (and their logs, return values, thrown exceptions) were all tracked and logged and that this information was accessible through an organized interface on the build server website. I couldn't argue with this.
But this just has a smell that I can't quite identify. Is this a proper use of CruiseControl.NET? If not, what are the dangers? Even if it may fit the bill, aren't there other products better suited for this type of thing?
We have all sorts of non-build-related tasks for the exact same reason your coworker gave: I want one spot to look up any and all jobs I need to run.
Some Examples of our CC.NET projects:
FTP installers to Remote QA
Creating Source Code Documentation
Create VMs with the installers installed, ready for QA in the morning
Archiving Installers
Pretty much anything I have to do by hand more than once becomes a project. IMHO it is much better than a scheduled task for one other reason as well: our config files are in source control, so we have one place to make adjustments. We do not have to log into multiple servers and make adjustments, or wonder which server did what.
I think your coworker has made a good argument. If these tasks are related to the development process, then placing them in CruiseControl.NET as projects seems acceptable. I would draw the line at utilizing a development server to run production processes, though. Although it is true that "if the only tool you have is a hammer, you tend to see every problem as a nail," it doesn't mean that the hammer isn't capable of solving a lot of problems!
Just because a tool is designed to solve a particular problem does not mean that it will not have equal facility at solving similar problems outside the scope originally conceived by the tool creator. If CruiseControl.NET solves these problems well, then it is absolutely the appropriate tool to use.

Resources