I would like to share a project\solution between two teams, ideally across two TFS servers.
Having both teams on the same TFS doesn't work, because one of the teams has no access to one of the TFS servers, yet hosting the solution on that TFS is a requirement.
It looks as follows:
Project\solution -> Team1 -> TFS1 (requirement)
Same Project\solution -> Team1 + Team2 -> TFS2 (???)
What are my options? Is there a tool out there that can do this? Should I use two different version control packages?
You can use the TFS Integration Platform to sync the Team Projects between the two TFS installs... but the best option is to access the one TFS through a TFS Proxy.
Another way is to use a Git repository: you can sync the remote repository with your own repository, but the work items would still only be accessible through TFS.
There are really three ways to solve your problem. The reality is that only #1 is effective if you can't use the cloud. Note that using #3 is fraught with pain and suffering. As with all dark-side hacks/workarounds, nothing meets the need like solving the underlying problem rather than sweeping it under the carpet.
All access - the only really viable solution is to give all required users access to the TFS server. I have worked with healthcare, banking, defence, and even insurance. In all cases, in all companies, you can have a single server that all required users can access. In some cases it is hard and fraught with bureaucracy, but ultimately it can be done.
Visual Studio Online - while there is fear of the cloud, this is likely your only option if you have externals that really can't directly access your network. This would be that common server. If you are in Europe, then MS has just signed an agreement that ostensibly puts EU-located servers for an American company outside the reach of the Patriot Act (untested). You can also easily use the TFS Integration Tools to create a one-way sync between VSO and your local server.
Bi-directional synchronization - this can be achieved in many ways, but there is always a penalty in merging if you have changes on both ends. You can use the TFS Integration Tools, which are free, or a commercially available tool like OpsHub. If you are using Git as your repository within TFS, then you can use the command line to push source between the two servers, even if they can't communicate, by using a USB stick (see the sketch below).
Use #1 or #2; only ever use #3 as a temporary, short-term measure.
I use the tools all the time to move things one way only from one system to another, and even this is a complicated experience. To move things bi-directionally you will need a full-time resource to resolve conflicts.
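If you do end up on the Git route from #3, a minimal sketch of moving commits between two disconnected servers with git bundle might look like this (the server URLs and file names here are hypothetical):

```
# On a machine that can reach server A: package the branch into a single file
git clone http://tfs-a.example.com:8080/tfs/Collection/_git/Project
cd Project
git bundle create project.bundle master

# Carry project.bundle across on the USB stick, then on a machine that can reach server B:
git clone -b master project.bundle Project
cd Project
git remote add tfs-b http://tfs-b.example.com:8080/tfs/Collection/_git/Project
git push tfs-b master

# Later syncs only need to bundle the new commits, e.g.:
#   git bundle create update.bundle last-synced-commit..master
```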
If the servers can communicate with each other, you may be able to use a system akin to replication: there is one master TFS instance, and external sites use a proxy that lets the second team work without direct or always-available access to the main server.
Alternatively you may be able to use branches - you could keep a branch for the external team's code and then merge to or from that branch to your mainline. In this scheme you would be able to sync the code by copying between the branch and external site, so you could even transfer updates on memory sticks if there is no direct net connection. Each sync would be pretty time consuming though, as someone on your main server team would have to merge code back and forth through the branch.
Another thing to consider is whether there is any way you can divide up the codebase and the tasks to minimise the overlap between the two teams, for example if one team provides a library that the other uses. This is just to minimise the merging needed.
We are considering building a web application and relying on Azure. The main idea behind this application is that users are able to work together on specific tasks in the cloud. I'd love to go for the concept of instant releasing, where users are not bothered with downtime, but I have no idea how I can achieve this (if it is possible at all). Let's say 10,000 users are currently working on this web application, and I release software with database updates.
What happens when I publish a new release of my software into Azure?
What will happen to the brilliant work in progress of my poor users?
Should I bring the site down first before I publish a new release?
Can I "just release" and let users enjoy the "new" world as soon as they request a new page?
I am surprised that I can't find any information about release strategies in Azure. Am I looking in the wrong places?
Windows Azure is a great platform with many different features that can simplify lots of software management tasks. However, bear in mind that no matter how great a platform you use, your application depends on proper system architecture and code quality - a well-written application will work perfectly fine; a poorly written application will fail. So do not expect Azure to solve all your issues (though it may help with many).
What happens when I publish a new release of my software into Azure?
Windows Azure Cloud Services has the concept of Production and Staging deployments. New code is deployed to staging first. There you can do a quick QA (and sometimes "warm up" the application to make sure all its caches are populated - but that depends on the application design) and then perform a "Swap": your staging deployment becomes production and your production deployment becomes staging. That gives you the ability to perform a rollback in case of any issues with the new code. The swap operation is relatively fast, as it is mostly an internal DNS switch.
What will happen to the brilliant work in progress of my poor users?
It is always a good idea to perform code deployments during the lowest site load (night time). Sometimes that is not possible, e.g. if your application is used by a global organization; then you should use the lowest-activity time.
In order to protect users you could implement solutions such as "automatic draft save" which happens every X minutes. But if your application is designed to work with cloud systems, users should not see any functionality failure during new code release.
Should I bring the site down first before I publish a new release?
That depends on the architecture of your application. If the application is very well designed, then you should not need to do that. The Windows Azure application I work with has a new code release once a month, and we have never had to bring the site down since the beginning (for the last two years).
I hope that gives you a better understanding of Azure Cloud Services.
Yes you can.
I suggest you create one of the Visual Studio template applications and take a look at the "staging" and "production" environments shown directly when you click your Azure site in the management portal.
Say, for example, the users work in the "production" environment, which is connected to Sqlserver1. You publish your new release to "staging", which is also connected to Sqlserver1. Then you just switch the two using the swap, and staging becomes the "production" environment.
I'm not sure what happens to their work if they have something stored in sessions or server caches. Guess they will be lost. But client side stuff will work seamlessly.
"Should I bring the site down first before I publish a new release?"
I would bring up a warning (if the users' work consists of session state and so forth) announcing a brief downtime in 5 minutes, and then after the switch tell everyone it is over.
Scenario: 2 developers working on the same project (VS2010, C#, MVC3, WinXP) on separate stand-alone computers. Due to IA restrictions (DoD) we are NOT allowed to connect these two computers in any way. The only way we are allowed to pass data between computers is via a CD-R/DVD-R disk. We need to be able to share an SVN repository for the code we are writing. I'm trying to figure out what the best way to do this would be.
Will this scenario even work? What is the best workflow to use? I would appreciate any guidance or suggestions on the best way to do this.
It sounds to me that you would be better off using distributed source control, such as Mercurial or Git for this project. SVN makes it exceptionally hard to merge, and distributed source control would make it so that you just have to pass ChangeSets back and forth.
Also, distributed source control houses a repository on each system, which is what you would have to do in this situation anyways.
This book should help you with most things Mercurial-related.
This Link explains how to pull new ChangeSets into your repository.
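If you go with Mercurial, a minimal sketch of passing changesets on a disc could look like this (the file name and mount point are hypothetical):

```
# Developer A: write the repository's changesets out to a single file
hg bundle --all changes.hg          # or use --base REV to bundle only the new changesets
# ...burn changes.hg to the CD-R...

# Developer B: pull the changesets from the disc into the local repository
hg pull /media/cdrom/changes.hg
hg update
```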
In your situation I would propose the following scenario: set up and maintain the SVN repository on one selected PC (say, the most reliable one); the other member passes CD-Rs with patches when they finish a piece of work, those patches are integrated into that SVN repo, and return patches are then created for each member so the code stays the same on each PC. I know this sounds awkward, but it may be the best option in this case, and the patch operations can be automated (see the sketch below).
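A rough sketch of one round trip of that workflow, assuming plain unified diffs and made-up file names:

```
# PC without repository access: record local work against the last synced baseline copy
diff -ruN baseline/ working/ > my-changes.patch     # burn this file to the CD-R

# PC that hosts the SVN repository: apply the patch to a working copy and commit it
cd trunk-working-copy
patch -p1 < /media/cdrom/my-changes.patch
svn commit -m "Integrate changes received on CD-R"

# Return trip: export everything that changed in the repository since the last exchange
svn diff -r LAST_SYNC_REV:HEAD > updates-for-teammate.patch
```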
From a design perspective, I think the code architecture needs to be good, with clear separation of modules, loosely coupled code, strict OOP, and reduced code dependencies; I guess that way two people can easily work without much interaction. Do plan your integration, and do have your code / class signatures defined beforehand if possible.
Our group integrates a bunch of different sub-blocks into our main project and we are trying to determine the best way to manage all of these different pieces of intellectual property. (From here on out I will refer to these sub-projects as pieces of IP, "intellectual property".)
The IP will be a mixture of third party vendor IP, previous projects IP and new to this project IP. Here are some of the ideas we are considering for managing all the different pieces of IP:
Publish releases on a physical drive and have the main project point to the correct releases.
PROS - Little to no dependency on the SCM; seems simpler to manage initially.
CONS - Must remember to keep each physical design center up to date.
Use Perforce client spec views to include the correct version.
PROS - Able to quickly see which IPs are being used in the client spec.
CONS - With a lot of IPs the client spec becomes very messy and hard to manage; each team member manages their own client spec (inconsistencies); the very thing determining which IP version to use is not under SCM (by default).
Integrate the different releases into a single one-line client view.
PROS - Makes client spec maintenance dead simple; any change to an IP version is easily observable with the standard Perforce tools.
CONS - Not as easy to see which versions of IP we are using.
Our manager prefers #2 because it is easiest for him to look at a client spec and know all the IPs we are using and their versions. The worker bees tend to strongly dislike it, as it means we have to try to keep everyone's individual client spec up to date, and the spec itself is not under the project's SCM.
How do others handle IP within a Perforce project and what recommendations do you have?
UPDATE:
I am really leaning towards solution #3; it just seems so much cleaner and easier to maintain. If anyone can think of a reason why #3 is not a good idea, please let me know.
I would go for the third solution too.
I can't think of any downsides, and have not experienced any when faced with similar situations in the past.
You could placate your manager by using a branch spec that clearly spells out which IP versions are branched in. He could then refer to that branch spec instead of a client spec.
Also if you look up 'spec depots' in the help, you can set Perforce up so that it version controls all specs, including branch specs, automatically, which will give you traceability if you alter IP versions.
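A rough sketch of both suggestions, with made-up depot paths and spec names:

```
# Define a branch spec that spells out which IP release maps into the project;
# your manager can read this instead of a client spec.
p4 branch ip-versions
#   In the form that opens, the View might look like:
#       //depot/ip/vendorA/1.2/...   //depot/project/ip/vendorA/...
#       //depot/ip/vendorB/3.1/...   //depot/project/ip/vendorB/...

# Create the spec depot (requires admin access) so Perforce automatically
# versions every spec edit, including branch and client specs, under //spec/...
p4 depot spec
#   In the form that opens, set "Type: spec" and save.
```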
"each team member manages there own client spec (inconsistencies)"
Don't do that. Have the client spec be a file that is checked in to Perforce.
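For example, something along these lines (the file and path names are hypothetical):

```
# Export the canonical client spec and version it alongside the project
p4 client -o > project.clientspec
p4 add project.clientspec
p4 submit -d "Check in the shared client spec"

# Anyone rebuilding their client syncs the file and loads it
# (adjusting the Client: and Root: fields for their own machine)
p4 sync //depot/project/project.clientspec
p4 client -i < project.clientspec
```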
I would suggest #2 as it is the most transparent system. Yes it will mean some more work keeping clients up to date, but you can minimize that issue by using template clients.
At my work we use template clients that the devs copy from to keep their clients properly configured. We name these with the pattern "0-PRODUCT-BRANCH" (and sometimes add the platform if needed). Then it is a one-line command from the command line, or a couple of clicks from the GUI, to update your client. I send notices to the team whenever the template changes.
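That one-liner might look roughly like this (the client names follow the hypothetical pattern above):

```
# Create or refresh a personal client from the team's template client
p4 client -t 0-PRODUCT-BRANCH jsmith-product-branch

# Or fully non-interactively, without opening an editor:
p4 client -o -t 0-PRODUCT-BRANCH jsmith-product-branch | p4 client -i
```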
Now in my case, template changes don't happen very often. There is maybe a max of 5-6 per year, so the hassle level may be different for you.
I am working in a small startup organization with approximately 12-15 developers. We recently had an issue with one of our servers whereby the entire server was "re-provisioned", i.e. completely wiped of all the code, databases, and information on it. Our hosting company informed us that only someone with access to the server account could have done something like this - and we believe that it may have been a disgruntled employee (we have recently had to downsize). We had local backups of some of the data but are still trying to recover from the data loss.
My question is this - we have recently begun using GitHub to manage source control on some of our other projects - and have more than a few private repositories - is there any way to ensure some sort of protection of our source code? What I mean by this is that I am aware that you can delete an entire project on GitHub, or even a series of code updates. I would like to prevent that from happening.
What I would like to do is create (perhaps in a separate repository) a complete replica of the project in Git - and ensure that only a single individual has access to this replicated project. That way, if the original project is corrupted or destroyed for any reason, we can restore where we were (with history intact) from the backup repository.
Is this possible? What is the best way to do this? GitHub has recently introduced "Company" accounts... is that the way to go?
Any help on this situation would be greatly appreciated.
Cheers!
Well, if a disgruntled employee leaves, you can easily remove them from all your repositories, especially if you are using Organizations - you just remove them from a team. In the event that someone who somehow still had access maliciously deletes a repository, we keep daily backups of all of the repositories and will reconstitute them if you ask. So you would never lose more than one day of code work at worst, and likely someone on the team will have an up-to-date copy of that code anyhow. If you need more protection than that, then yes, you can set up a cron'd fetch or something that mirrors your code more often.
First, you should really consult GitHub support -- only they can tell you how they do their backups, what options for permission control they have (especially now that they have introduced "organizations"), etc. Also, you have an agreement with them -- do read it.
Second, it's still very easy to run a git fetch from cron, say, once an hour (on your local machine or on your server) -- and then you're pretty safe.
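A minimal sketch of that, assuming a bare mirror on a backup machine (the repository name and paths are made up):

```
# One-time setup on the backup machine: a bare mirror of the GitHub repository
git clone --mirror git@github.com:yourcompany/yourproject.git /backups/yourproject.git

# /etc/cron.d/github-backup: refresh the mirror every hour
0 * * * *  backupuser  cd /backups/yourproject.git && git remote update --prune
```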
Git is a distributed system, so your local copy is the same as your remote copy on GitHub! You should be OK to push it back up there.
A friend of mine and I are developing a web server for system administration in Perl, similar to webmin. We have set up a Linux box with the current version of the server working, along with other open-source web products like webmail, a calendar, an inventory management system, and more.
Currently, the code is not under revision control and we're just doing periodic snapshots.
We would like to put the code under revision control.
My question is: what would be a good way to set this up, and which software solution should we use?
One solution I can think of is to set up the root of the project, which is currently on the Linux box, to be the root of the repository as well. We would then check out the code on our personal machines, work on it, commit, and test the result.
Any other ideas, approaches?
Thanks a lot,
Spasski
Version Control with Subversion covers many fundamental version control concepts in addition to being the authority on Subversion itself. If you read the first chapter, you might get a good idea on how to set things up.
In your case, it sounds like you're doing the actual development on the live system. This doesn't really matter as far as a version control system is concerned. You can still use Subversion for:
Committing as a means of backing up your code and updating your repository with working changes. Make a habit of committing after testing, so there are as few broken commits as possible.
Tagging as a means of keeping track of what you do. When you've added a feature, make a tag. This way you can easily revert to "before we implemented X" if necessary.
Branching to develop larger chunks of changes. If a feature takes several days to develop, you might want to commit during development, but not to the trunk, since you would then be committing something that is only half finished. In this case, you should commit to a branch (see the sketch after this list).
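A short sketch of the tagging and branching steps, assuming the conventional trunk/branches/tags layout (the name "feature-X" is made up):

```
# Tag the current trunk once a feature is tested, so you can get back to this state
svn copy ^/trunk ^/tags/before-feature-X -m "Tag trunk before implementing X"

# Branch for a larger piece of work, commit to it, then merge it back to trunk
svn copy ^/trunk ^/branches/feature-X -m "Create branch for feature X"
svn switch ^/branches/feature-X      # point the working copy at the branch
# ...work and commit on the branch as often as you like...
svn switch ^/trunk
svn merge ^/branches/feature-X       # bring the finished feature back
svn commit -m "Merge feature X back to trunk"
```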
Where you create a repository doesn't really matter, but you should only place working copies where they are actually usable. In your case, it sounds like the live server is the only such place.
For a more light-weight solution, with less overhead, where any folder anywhere can be a repository, you might want to use Bazaar instead. Bazaar is a more flexible version control system than Subversion, and might suit your needs better. With Bazaar, you could make a repository of your live system instead of setting up a repository somewhere else, but still follow the 3 guidelines above.
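If you try Bazaar, turning the live directory into a repository in place is roughly (the path is hypothetical):

```
cd /var/www/adminapp     # the live project root on the Linux box
bzr init                 # make this folder a Bazaar branch in place
bzr add                  # schedule all existing files for versioning
bzr commit -m "Initial import of the live code"
```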
How many webapp instances can you run?
You shouldn't commit untested code, or make commits from a machine that can't run your code. Though you can push to backup clones if you like.