Manage multiple test environments - visual-studio-2012

I need some help trying to make my lab tests better!
Where I work every team has one server to test new features and bug fixes.
Then, each server is divided into slots. When a new feature is done, the programmer needs to check whether there is an available slot to deploy the software to. If there is no slot available, the programmer needs to create another slot!
My question is: is there any software that helps programmers manage the available environments? Something that tells me which slot is available and which feature is being tested in each slot?
Thanks!

Related

Mitigating the risks of auto-deployment

Deployment
I currently work for a company that deploys through GitHub. However, we have to log in to all 3 servers and update them manually with a shell script. When I talked to the CTO, he made it very clear that auto-deployment is like voodoo to him. Which is understandable. We have developers in 4 different countries working remotely. If someone were to accidentally push to the wrong branch we could experience downtime, and with our service we cannot be down for more than 10 minutes. And with all of our developers in different timezones, our CTO wouldn't know till the next morning, and we'd have trouble meeting with the developers who caused the issue because of the vast time differences.
My Problem: Why I want auto-deploy
While working on my personal project I decided that it may be in my best interest to use auto-deployment, but my project is still mission-critical and I'd like to mitigate downtime and human error as much as possible. The problem with manual deployment is that I simply cannot deploy manually to up to 20 servers via SSH in a reasonable amount of time. The problem compounds when I consider auto-scaling: I'd need to spin up a new server from an image and deploy to it.
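For scale, here is a minimal sketch of what the manual approach looks like as a script; the hostnames, paths, and restart command (the forever CLI) are hypothetical placeholders, not details from my actual setup:

```bash
#!/usr/bin/env bash
# Sketch of a manual multi-server deploy over SSH (all names hypothetical).
# Hosts are updated sequentially, which is why this stops scaling around 20 servers.
HOSTS="app1.example.com app2.example.com app3.example.com"

for host in $HOSTS; do
  echo "Deploying to $host..."
  ssh "deploy@$host" \
    "cd /srv/myapp && git pull origin master && npm install --production && forever restart app.js" \
    || { echo "Deploy failed on $host" >&2; exit 1; }
done
```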
My Stack
My service is developed on the Node.js Express framework. These environments are very rich in deployment and bootstrapping utilities. My project uses npm's package.json to uglify my scripts on deploy, and it also runs my service as a daemon using forever-monitor. I'm also considering grunt.js to further bootstrap my environments, for both production and testing.
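As an illustration only, the deploy step driven from package.json scripts might look roughly like the following; the file names are placeholders, and I've assumed the uglifyjs and forever CLIs (the forever CLI being the command-line counterpart of the forever-monitor library mentioned above):

```bash
# Hypothetical build-and-daemonize step (file names are placeholders).
npm install --production
mkdir -p dist
./node_modules/.bin/uglifyjs src/app.js -o dist/app.js  # minify on deploy
forever stop dist/app.js 2>/dev/null || true            # ignore if not yet running
forever start dist/app.js                               # keep the service up as a daemon
```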
Deployment Methods
I've considered so far:
Auto-deploy with git, using webhooks (sketched after this list)
Deploying manually with git via shell
Deploying with npm via shell
Docker
I'm not well versed in technologies like Docker, but I'm very interested in its use, and I'd definitely give points to whoever gives me a good description of why I should or shouldn't use it. Other methods are welcome.
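To make the first option concrete: the classic webhook-style setup is a post-receive hook in a bare repository on each server, which checks the pushed branch out into the live directory. A minimal sketch, where the branch name, paths, and restart command are assumptions:

```bash
#!/usr/bin/env bash
# hooks/post-receive in a bare repo on the server (all paths hypothetical).
# Runs after every push; deploys only when the production branch is updated.
DEPLOY_DIR=/srv/myapp
GIT_REPO="$PWD"

while read oldrev newrev ref; do
  if [ "$ref" = "refs/heads/production" ]; then
    git --work-tree="$DEPLOY_DIR" --git-dir="$GIT_REPO" checkout -f production
    (cd "$DEPLOY_DIR" && npm install --production && forever restart app.js)
  fi
done
```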
My Problem: Why I fear auto-deploy
In a mission-critical environment downtime can put your business on hold, and to make matters worse there's a fleet of end users hitting the refresh button. If someone pushes something that doesn't pass the build to the production branch and it's auto-deployed, then I'm looking at a very messy situation.
I love the elegance of auto-deployment, but the risks make me skeptical. I'm very much in favor of making myself as productive as possible, so I'm looking for a way to deploy to many servers with ease, and in a very efficient manner.
The Answer I'm Looking For
Explain to me how to mitigate the risks of auto-deployment, or explain to me an alternative which is better suited to my project. Feel free to ask for any missing details in the comments.
No simple answer here. I offer a set of slides published by Mike Brittain from Etsy, a company that practices continuous deployment:
http://www.slideshare.net/mikebrittain/mbrittain-continuous-deploymentalm3public
Selected highlights:
Deploy frequently and in small batches
Use config/feature flags to control system behaviour and "dark release" major features
Code review all changes to the production branch
Invest in monitoring and improve the feedback loop
Manage "services" separately to the "application" and be mindful of run-time version and backwardly compatible changes.
Hope this helps

How does one use a different LightSwitch database connection string per user and environment?

I develop software in a multi-programmer, source-controlled world (TFS). Moreover, I have multiple environments: individual developer desktops, QA, and production in the Azure cloud. The problem is, I'm unsure of the best way to use a different database connection string for each environment. I realize that I can manually update the web.config for the Server project, but that causes a clash when check-in occurs. This is a pretty critical issue for our team, so any help in this regard would be greatly appreciated. Thanks.

What are the pros and cons of git-flow vs github-flow? [closed]

We have recently started to use GitLab.
We are currently using a "centralized" workflow.
We are considering moving to the github-flow, but I want to make sure it's the right choice first.
What are the pros and cons of git-flow vs github-flow?
As discussed in GitMinutes episode 17 and by Nicholas Zakas in his article on "GitHub workflows inside of a company":
Git-flow is a process for managing changes in Git that was created by Vincent Driessen and accompanied by some Git extensions for managing that flow.
The general idea behind git-flow is to have several separate branches that always exist, each for a different purpose: master, develop, feature, release, and hotfix.
The process of feature or bug development flows from one branch into another before it’s finally released.
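For illustration, Driessen's accompanying git-flow extensions wrap that branching model in a handful of commands (the feature and version names here are placeholders):

```bash
git flow init                       # set up master/develop and the branch prefixes
git flow feature start my-feature   # branch off develop
git flow feature finish my-feature  # merge back into develop and delete the branch
git flow release start 1.2.0        # branch off develop to stabilize a release
git flow release finish 1.2.0       # merge into master and develop, tag 1.2.0
git flow hotfix start 1.2.1         # branch off master for an urgent production fix
```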
Some of the respondents indicated that they use git-flow in general.
Some started out with git-flow and moved away from it.
The primary reason for moving away is that the git-flow process is hard to deal with in a continuous (or near-continuous) deployment model.
The general feeling is that git-flow works well for products in a more traditional release model, where releases are done once every few weeks, but that this process breaks down considerably when you’re releasing once a day or more.
In short:
Start with a model as simple as possible (like GitHub flow tends to be), and move towards a more complex model if you need to.
You can see an interesting illustration of a simple workflow, based on GitHub-Flow at:
"A simple git branching model", with the main elements being:
master must always be deployable.
all changes made through feature branches (pull-request + merge)
rebase to avoid/resolve conflicts; merge into master
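In plain git commands, that simple workflow amounts to roughly the following (the branch name is a placeholder):

```bash
git checkout -b my-feature master   # branch off an always-deployable master
# ...commit work on the branch...
git fetch origin
git rebase origin/master            # replay the branch onto the latest master
git push -u origin my-feature       # then open a pull request from this branch
# after review, merge the pull request into master and deploy
```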
For a more complete and robust workflow, see gitworkflow (one word).
There is no silver-bullet workflow that everyone should follow, since all models are sub-optimal. Having said that, you can select a suitable model for your software based on the points below.
Multiple versions in production - use Git-flow
If your code has multiple versions in production (as is typical for software products like operating systems, office packages, custom applications, etc.), you may use git-flow. The main reason is that you need to continuously support previous versions in production while developing the next version.
Single version in production, simple software - use Github-flow
If your code has only one version in production at all times (i.e. web sites, web services, etc.), you may use github-flow. The main reason is that you don't need to complicate things for the developer: once a developer finishes a feature or a bugfix, it is immediately promoted to the production version.
Single version in production, but very complex software - use Gitlab-flow
For large software like Facebook and Gmail, you may need to introduce deployment branches between your branch and the master branch, where CI/CD tools can run before the code gets into production. The idea is to introduce more protection for the production version, since it is used by millions of people.
I've been using the git-flow model for over a year and it's OK.
But it really depends on how your application will be developed and deployed.
It works well when you have an application with a slow development/deployment flow.
But if, like GitHub, you have an application with a fast development/deployment flow, deploying every day and sometimes several times a day, then git-flow tends to slow everything down, in my opinion, and I use GitHub flow instead.
The other thing to consider is that git-flow is not standard git, so you might, and when I say you might, I really mean you will, find developers that don't know it, and then there is a learning curve and more chances to mess things up. Also, as mentioned above, someone developed a set of scripts to make git-flow easier to use, so you don't have to remember all the commands; it will assist you with the commands, but remembering the actual flow is your job. I've come across more than one case where a developer didn't know whether it was a hotfix or a feature, or even worse, couldn't remember the flow and stuffed things up.
There is at least one GUI for Mac and Windows that supports git-flow: SourceTree.
These days, I'm leaning more towards GitHub flow, due to its simplicity and ease of management. Also, because of "deploy early, deploy often"...
Hope this helps

Azure releasing complications

We are considering building a web application and relying on Azure. The main idea behind this application is that users are able to work together on specific tasks in the cloud. I'd love to go for the concept of instant releasing, where users are not bothered with downtime, but I have no idea how I can achieve this (if it's possible at all). Let's say 10,000 users are currently working in this web application, and I release software with database updates.
What happens when I publish a new release of my software into Azure?
What will happen to the brilliant work in progress of my poor users?
Should I bring the site down first before I publish a new release?
Can I "just release" and let users enjoy the "new" world as soon as they request a new page?
I am surprised that I can't find any information about releasing strategies in Azure, am I looking in the wrong places?
Windows Azure is a great platform with many different features which can simplify lots of software-management tasks. However, bear in mind that no matter how great a platform you use, your application depends on proper system architecture and code quality: a well-written application will work perfectly fine; a poorly written application will fail. So do not expect that Azure will solve all your issues (but it may help with many).
What happens when I publish a new release of my software into Azure?
Windows Azure Cloud Services has a concept of Production and Staging deployments. A new code deployment goes to staging first. Then you can do a quick QA pass there (and sometimes "warm up" the application to make sure all its caches are populated - but that depends on the application design) and perform a "Swap": your staging deployment becomes production, and the production deployment becomes staging. That gives you the ability to perform a "rollback" in case of any issues with the new code. The swap operation is relatively fast, as it is mostly an internal DNS switch.
What will happen to the brilliant work in progress of my poor users?
It is always a good idea to perform code deployments during the lowest site load (night time). Sometimes that is not possible, e.g. if your application is used by a global organization. Then you should pick the time of lowest activity.
In order to protect users you could implement solutions such as an "automatic draft save" which happens every X minutes. But if your application is designed to work with cloud systems, users should not see any functionality failure during a new code release.
Should I bring the site down first before I publish a new release?
That depends on the architecture of your application. If the application is very well designed then you should not need to do that. The Windows Azure application I work with has a new code release once a month, and we have never had to bring the site down since the beginning (for the last two years).
I hope that gives you a better understanding of Azure Cloud Services.
Yes you can.
I suggest you create one of the Visual Studio template applications and take a look at the "staging" and "production" environments shown when you click your Azure site in the portal manager.
Say, for example, the users work in the "production" environment, which is connected to Sqlserver1. You publish your new release to "staging", which is also connected to Sqlserver1. Then you just switch the two using the swap operation, and staging becomes the "production" environment.
I'm not sure what happens to their work if they have something stored in sessions or server caches. I guess it will be lost. But client-side stuff will work seamlessly.
"Should I bring the site down first before I publish a new release?"
I would bring up a warning (if the users' work consists of session stuff and so forth) announcing a brief downtime in 5 minutes, and then after the switch tell everyone it is over.

Creating a simple mobile agent system

I am looking to create a simple mobile agent system which will deal with 4 tasks, i.e. 4 different mobile agent jobs: database update, meeting scheduling, network services discovery, and kernel update.
I have done my research and have seen different frameworks such as Aglets, JADE, AgentBuilder, etc. My question is: which one should I use? Also, I need to set up the base code for it to work; can someone point me to a site or help me set up the basic functions of the mobile agent?
I've read about the Tahiti server for the Aglets model. I'm quite confused about how to set up the mobile agent system. Any help would be much appreciated.
I have also tried to do it using RMI. I created a method of type agent, but I couldn't pass it through the remote method invocation. I have been reading about TCP and UDP socket programming, and I was thinking maybe it would be better to do it using socket programming. In that case, would this still be called an agent? I was thinking about the server sending datagram packets to multiple clients.
You need to ask yourself why you want to use mobile agents at all. The notion of a mobile agent was popular in the agent research community in the early '90s, but fell out of favour because (i) it wasn't clear what problem it was solving, (ii) the capability to allow arbitrary code to migrate to a particular computer and execute with enough privileges to access local data and services is very open to abuse, and (iii) all of the claimed benefits of mobile agents can actually be achieved through web services (REST or otherwise) and open data formats such as RDF. Consequently, few, if any, mobile agent platforms have been properly maintained since the early experiments.
It also sounds as though you need to be clear which end-user problem you want to solve. Scheduling a meeting and updating my kernel are very different tasks - I'd be very uncomfortable with a program that claims to do both. If your interest is in the automation of system-maintenance tasks, such as DB tuning and kernel patching, on large networks, you might want to look at the SmartFrog project, or read up on autonomic computing.
I use JADE, and I agree with the first answer: agent systems usually take a lot of overhead to get going, so if you can avoid them, please do. If, however, you choose to proceed, pick a platform with a lot of support and a big user group.
JADE has some neat features like the Directory Facilitator (DF), which works like a yellow pages, so agents don't have to know which other agents are running and what services they supply; they can simply ask the DF.
JADE's ContractNet behaviours also help simplify communication.
