We are maintaining code for one of our clients.
Initially, we copied all the source code that they have and added it to our TFS 2012.
We modify the code any time they need a bug fix and give the client deployment packages.
Now, the client wants all the latest code in their TFS 2012 as well.
Is there a way to update their source code with our changes? ...
preferably automatically (e.g. a PowerShell script) and preferably with the history of changes.
There are many approaches, each with pros and cons. The following are the main options I would suggest.
Database backup and restore
This is the only path that guarantees full fidelity. It has some technical difficulties (e.g. SQL Server versions and editions) and political ones (how much information you are willing to expose, how much effort you want to put into sanitizing your data).
Project synchronization
There are some tools, most notably the TFS Integration Platform, that use the API to read and replay the changes from one system to the other. It requires that the syncing tool can see both systems via HTTP(S).
It gives you the flexibility to project only some data (say, source code but not work items).
Keep in mind that you will always lose something in the process: the changeset numbers will never match, and some user details will be lost.
Dumb dump
Give up preserving full history and be content with sharing the code.
This is the simplest to implement: get all the code, ship it, and check it into the other system. You can associate release notes with the check-in.
Two simple scripts using TF.exe are all you need.
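A minimal sketch of what those two scripts could look like, assuming TFS 2012 local workspaces on both sides (all paths, shares, and comments below are placeholders):

```powershell
# Script 1 - our side: pull the latest sources and copy them to a drop share.
& tf.exe get "C:\src\ClientProject" /recursive /version:T
Copy-Item "C:\src\ClientProject" "\\fileshare\drop\ClientProject" -Recurse -Force

# Script 2 - client side: overlay the drop onto their mapped workspace and check it in.
# With a TFS 2012 local workspace, edited files are detected automatically;
# with a server workspace you would need a tf.exe checkout first.
Copy-Item "\\fileshare\drop\ClientProject\*" "C:\tfs\ClientProject" -Recurse -Force
& tf.exe add "C:\tfs\ClientProject" /recursive
& tf.exe checkin "C:\tfs\ClientProject" /recursive /comment:"Code drop - see release notes"
```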
You can use the TFS Integration Tools to achieve the code migration (TFS-to-TFS). The TFS Integration Tools move data between two different servers. The migration is done through the TFS APIs, and there are also some limitations (check the link above for more info).
For detailed steps, please see my answer to this question: Move Team Project to another Project Collection TFS 2013
We have a project with a single code base which we build both on Windows and Linux. And we want to run Klocwork code analysis on both Windows and Linux. Currently our approach is:
We have set up one KW project in the web UI
Inject and build on Linux, push the results to the server, save the report
Inject and build on Windows, push the results to the server, save the report
It somehow works, but the problem is that the latter scan effectively overwrites the results of the first one. If we save the report directly after the push, we still have a saved copy, but if developers want to triage/analyze a hit which is present only in the first build (i.e. some Linux-specific code), it's almost impossible because KW has already marked the hit as "obsolete" (because it was not present in the Windows scan).
Having two projects is not really an option, because 90% of the code is shared and it would create a huge overhead for developers to triage the same hits twice.
There are multiple options to achieve your goal.
Option 1: There is a tab under Projects called Builds that can give you the build chain report. Here you can see the reports of previous builds.
Option 2: Are you using the Klocwork desktop tools (plugins/kwcheck)? If yes, the developer will be notified automatically about the new defects/issues he has produced on his machine, so there may be no need for a developer to review the Klocwork portal just to see which issues he has created.
Option 3: I see you have mentioned that 90% of the code is shared. Does that mean your project needs Windows DLLs and Linux libraries together to build?
If the answer is YES, please do let me know and I will think about some possible workarounds.
If the answer is NO, then creating the kwtables is a one-time job, and from the second run onwards Klocwork can perform incremental analysis (kwbuildproject ... --incremental); a rough sketch follows these options.
Option 4: Creating multiple projects is not a bad option. The existing project settings can be replicated and issue statuses can be synced. When you push the results to the Klocwork server, the results are pushed from the build machine to the Klocwork web/database server and a /projects_root/My_Project/builds/My_Build_Name/ directory is created. So maintaining two Klocwork projects won't make much of a difference.
Option 5: Schedule a call with the Klocwork support team. They will be happy to assist you with the best possible way forward.
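As a rough sketch for Option 3 on the Windows side (the server URL, project name, build command, and paths are all placeholders; check the Klocwork documentation for your version for exact options):

```powershell
# 1. Capture the build into a build specification (use whatever command normally builds the project).
& kwinject.exe --output kwinject.out msbuild MySolution.sln

# 2. Analyze. The tables directory is created on the first run; subsequent runs can
#    reuse it with --incremental so only changed translation units are re-analyzed.
& kwbuildproject.exe --url https://klocwork-server:8080/My_Project `
                     --tables-directory kwtables `
                     --incremental `
                     kwinject.out

# 3. Load the results into the Klocwork server.
& kwadmin.exe --url https://klocwork-server:8080 load My_Project kwtables
```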
I hope this helps.
This wiki (https://www-10.lotus.com/ldd/ddwiki.nsf/dx/Headless_Designer_Wiki) seemed to indicate that you can only create an NSF under your Notes data directory. I have done a couple of quick tests, and the only workaround I can find is to install Domino Designer on the same server as the target Domino server and set the target as the Domino data folder (i.e. C:\Domino\Data\sample.nsf instead of just sample.nsf).
The reason for this is that I am trying to find an automated way to do the following operations:
Import ODP into workspace
Associate with a new NSF, but choose a Domino Server as a target
Does anyone have another workaround for this?
I wish I had a more complete answer for you, but as this is still unanswered after a few days, I'll try to add some insight. It sounds like you have some experience getting headless DDE builds to work, so I won't focus on that. If you're looking for my take on headless DDE builds, I blogged on the subject a while ago, but have since adapted the Jenkins CI-based process I outlined there into a GitLab CI runner-based solution, which I described in another SO answer.
Firstly, I would strongly recommend against setting your Designer target to be the same as a server instance. This might work, but it seems an unnecessary complication, and potentially issue prone, IMO.
My interpretation of your steps:
automatically receive updates (e.g. on the master branch, all commits, etc.)
perform build via headless DDE
deploy built NSF
Splitting out the logic for deploying the built NSF is ideal here, since you have an asset that needs to be parked in a server path. The two main approaches I see are either:
having a dev/staging server that you can programmatically restart on demand
a more complex mechanism, in an NSF or server plugin, that will ingest the NSF's design and replace the design elements in a (newly created) destination NSF
As you can imagine, that last one is a bit tricky, but it is something I've put off working on until I have more "free time". As for the former, you'll likely want someone with a bit of an admin/operations skill set to assist you, but in my mind there would be a total of three scripts involved (a rough sketch follows the list):
one to down the destination server (this is why it should be a dev/staging server)
one to copy the built NSF to the destination file system path
one to start up the destination server
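A minimal sketch of those three steps, assuming the staging Domino server runs as a Windows service (the service name and paths are placeholders, not a tested setup):

```powershell
$service  = "Lotus Domino Server (DData)"   # hypothetical service name - check yours
$builtNsf = "C:\build\output\sample.nsf"    # NSF produced by the headless DDE build
$target   = "D:\Domino\Data\sample.nsf"     # data folder path on the staging server

Stop-Service -Name $service                 # script 1: bring the dev/staging server down
Copy-Item $builtNsf $target -Force          # script 2: drop the built NSF into the data folder
Start-Service -Name $service                # script 3: bring the server back up
```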
If you have a design task set to run at a certain interval and pointed at the staging server for any changes, you could conceivably pull from that at whatever interval works for you; nightly, etc. I hope the perspective helps.
I am having a problem where MyEclipse 9.1 is not able to connect to multiple projects in TFS 2010 using the Team Explorer Everywhere plugin. If I try to connect to a second project, it disconnects me from the first one. I cannot find any way to pull down multiple projects like I could in TFS 2008.
Any ideas?
This is as-designed. Team Explorer Everywhere can only connect to a single Team Project Collection at a time. There are myriad reasons why this is the case, but all are to preserve the notion of atomic operations against the server. Some operations (for example, check-in) simply must be scoped to a single server instance in order to make sense.
Since a single changeset is atomic in TFS, an attempt to check-in multiple pending changes either all succeed or all fail. Consider if you had pending changes from two different servers: you cannot commit all these changes as a single changeset - one server could reject your check-in due to conflicts, while the other could proceed successfully. This is, at best, confusing, but most likely actually leaves your projects in an inconsistent state since there may be dependencies between these projects. Since there are distinct changesets for each server, the UI must reflect that.
After much deliberation and experimentation, we concluded that the best user experience is simply one where you can import projects from multiple TFS servers, but you must select which server you want to work with in the UI by selecting which one is currently "online". All TFS functionality is available for the online server, while a limited subset of the TFS functionality is available to the other projects.
We would recommend that you consolidate your Java projects to a single Team Project Collection if you need to import all of them.
This behavior is unchanged from any previous version of the software, including before the acquisition of the technology by Microsoft (when the product was still part of the Teamprise Client Suite).
Also note that the scope of commands available to "offline" projects has increased dramatically in TFS 2012 thanks to the new Local Workspace functionality.
So, I'm about to embark on a fairly lengthy, time consuming project that could net me some good results/rewards - and I'd like to give everything the attention and focus it deserves. I will be the sole developer, and I'm experienced in that capacity (about 13 years in the industry). I've just never had to be responsible for EVERY choice so I'd like to throw this out there for some feedback. This is going to be a website.
Dev Tools on Win x64 workstation:
VS2010
SourceGear Client
FileZilla
UltraEdit
SQL 2008 Mgmt Studio
I will have my own DB server machine also, which will run SQL 2008 for both the web DB and the SourceGear repository DB.
I'd like to have an automated build process that includes:
pulling the latest code from the repository
checking it against rules (à la FxCop)
compiling the code
running a series of tests against the new compilation (unit tests?)
Any suggestions on tools to do these tasks? Should I just write and execute scripts for certain steps?
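To make it concrete, here is roughly what I picture such a script doing (tool paths, project names, and the SourceGear step are placeholders, not a working setup):

```powershell
# 1. Pull the latest code from the SourceGear repository
#    (using its command-line client - check its docs for the exact arguments).
# & vault.exe GET ...

# 2. Static analysis with FxCop's command-line runner (path is a placeholder).
& "C:\Tools\FxCop\FxCopCmd.exe" /project:C:\src\MySite\Rules.fxcop /out:C:\build\fxcop-results.xml

# 3. Compile the solution.
& "C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" C:\src\MySite\MySite.sln /p:Configuration=Release

# 4. Run the unit tests.
& "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe" /testcontainer:C:\src\MySite\Tests\bin\Release\Tests.dll
```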
Backups! - I'd like the source code repository and web files, graphics, media, etc. for the site backed up regularly. I use Mozy for my own personal backups - is there something more suitable for this kind of backup? Windows Home Server or something like that perhaps?
Lastly - what am I not thinking of that needs to be on my radar? For example, I plan on using jQuery but only have limited experience with it - any good JavaScript tools besides VS2010? How do most web devs test their sites across the plethora of browsers available? Should I use minifying tools for the web content - and which are best? I've built plenty of web sites & applications before - this is just my first real "commercial" venture and I'd like it to be founded in solid practices.
I, too, am a single developer and use much the same tools as you...
Couple of things...OK...it grew to more than a couple...
I like to use UltraEdit sometimes for editing my JavaScript files...as it has some features that VS2008 lacks...not sure about VS2010...mainly a function list to aid in navigation of the file.
I also use JavaScript Lint to check the syntax of my JavaScript files...you can integrate it into UltraEdit...my choice...or VS...or both.
I use Subversion for source code control...VisualSVN for the server and TortoiseSVN for the client...both free
For backup I recently started using DropBox...you can point DropBox to the folder that holds your files and it will sync whenever DropBox starts (and it will keep the files synched on multiple machines...so if you have an offsite machine...you're covered if something happens at your main development site).
If you'll be using LINQ at all...I'd recommend getting a copy of LINQPad. It's free, but you can pay to get "intellisense"...the examples included are fantastic learning tools.
When using jQuery...look for plug-ins that do what you're looking for...
I feel like I need a better defined framework for updating my SharePoint (MOSS 2007) application with custom code changes. I am creating wsp solution files with features and new types and such, but once those get tested and deployed, I feel like it's a bit of a leap of faith, and that makes me nervous and occasionally reluctant to deploy changes. After deployment, it's difficult to correlate the current state of the SharePoint application with the specific code that is deployed on that SharePoint server. What features are actually installed and on which sites? Which features are activated or deactivated? Which version of this custom field or content type is really there? Things like this. If an error crops up, I have to rely on my assumptions about what code is there and actually running, or I have to spend time digging through deployed assemblies and the 12 hive -- not impossible, but pretty unpleasant.
What steps should I take to improve my ability to unambiguously determine the state of the application and find the code that truly represents that state? Are there third-party tools that can help with this?
I feel your pain... the Application Development Lifecycle with SharePoint 2007 leaves me with a bitter taste in my mouth.
To answer your question: we built our own deployment utility that does a few things for us.
Checks the state of key timer jobs (too many times we would do a deployment only to find one WFE that did not get the deployment).
Checks the state of key services on all our web front ends (again, we want to know the health of the farm before we start kicking off timer jobs).
Shows the file version and date of selected assemblies in the GAC (across all web front ends). We have seen problems before where assemblies did not get installed correctly across the farm.
Updates web.config settings based on a custom XML schema we provide. We ran into some problems with web.config updates, so we have thought about creating a utility to validate the web.config (specifically, to make sure there are no duplicate entries for specific keys).
Pushes content type updates (the first time content types are deployed via a feature it works great, but as soon as you need to update that content type it gets tough).
Checks the status of the WSP package after deployment or upgrade.
This utility uses the SharePoint API to do most of this work. Some of it is done by checking WMI Events.
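To give a flavour, a couple of those checks could be scripted roughly like this with plain PowerShell and stsadm (server names, the assembly name, the solution name, and the SPTimerV3 service name for WSS 3.0/MOSS 2007 are all assumptions to adapt):

```powershell
$frontEnds = "WFE01", "WFE02"                      # placeholder server names
$assembly  = "MyCompany.SharePoint.Features.dll"   # hypothetical assembly name

foreach ($server in $frontEnds) {
    # Timer service state on each web front end.
    $timer = Get-Service -ComputerName $server -Name "SPTimerV3"
    Write-Host "$server timer service: $($timer.Status)"

    # File version and date of the deployed assembly in the GAC.
    Get-Item "\\$server\c$\Windows\assembly\GAC_MSIL\*\*\$assembly" | ForEach-Object {
        $v = [System.Diagnostics.FileVersionInfo]::GetVersionInfo($_.FullName)
        Write-Host "$server  $($_.FullName)  $($v.FileVersion)  $($_.LastWriteTime)"
    }
}

# WSP deployment status after a deploy or upgrade.
& stsadm.exe -o displaysolution -name mycompany.features.wsp
```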
Unfortunately the SharePoint development experience is lacking in this regard. As long as you are "namespacing" all features deployed using solution packages, you can use solution management from central admin to keep track of versions, and what gets deployed to which site collection.
Features are scoped at all levels, from the farm down to an individual web, so maintenance at that level is a little tough. I just try to organize all deployed code from the (top-down) solution level.
It gets even more complicated when deploying custom timer jobs, event handlers, etc.; I really hope that the next version will address a lot of these common developer concerns.
Isn't the only way to have a planned/controlled deployment process and a version management system like TFS?
In the current project I am involved in we have:
Continuous builds
Daily Builds on a development server
When we release something to test, we merge the code to the main branch in the version management system (TFS).
When it is tested and ready for production, we merge the main branch to the release branch.
Using this structured approach we always know what is deployed in which environment, and we can also track all changes based on environment or on changes in requirements (which are also tracked in TFS).
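A minimal sketch of those merge steps with tf.exe (branch paths and comments are placeholders; run from a local folder mapped to the target branch):

```powershell
# Release to test: merge the development branch into Main and check in.
& tf.exe merge '$/MyProject/Dev' '$/MyProject/Main' /recursive
& tf.exe checkin /recursive /comment:"Merge Dev -> Main for test release"

# Tested and ready for production: merge Main into Release and check in.
& tf.exe merge '$/MyProject/Main' '$/MyProject/Release' /recursive
& tf.exe checkin /recursive /comment:"Merge Main -> Release"
```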