I have a solution in Visual Studio Team Services that has 2 Web Applications (specifically one project for WebAPI services and another for the actual site using MVC).
I'm trying to set up continuous delivery to Azure but all the information that I can find seems to assume that you only have a single Web Application within your solution (which seems a little unrealistic for all but the simplest of projects!).
The out-of-the-box continuous delivery process seems to just pick and deploy the first Web Application it finds (which isn't necessarily the same project each time!).
I've tried specifying the Deployment Settings file, but that seems to affect the destination rather than the project being deployed; again, it just "picks" a project to deploy. Worse, each deployment includes every compiled assembly plus all dependencies, rather than just the binaries and dependencies of the project actually being deployed. That can cause MVC to find duplicate controller matches for a given name (this can of course be fixed by specifying the namespace of the controllers within the route configuration, but that seems less than ideal, and still doesn't fix the entire problem).
Ideally I'd like to find a way to deploy both projects with a single build, but as a temporary solution I'd be happy with two builds, both triggered by a check-in of the single solution, that each reliably deploy one of the two Web Applications.
Does anyone know if this is possible? I guess I could write my own custom build template, but I'm hoping there is an easier answer (not least because I can't imagine that this isn't a problem being faced by other people!)
I did find the question "TFSPreview.com and Azure continuous deployment for multiple solutions in TFS", but since that's quite old and is specifically talking about AzureWebRoleProjects rather than Web Applications being deployed to the newer Azure Websites feature, I'm hoping there is a more positive answer now.
This is possible with multiple build configurations. In addition to Debug and Release you could specify two more, one for each app.
You can find these in Visual Studio under Build -> Configuration Manager. In each new configuration, set only one of the web projects to build; running MSBuild with that configuration will then output only one WebDeploy package.
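For illustration, a minimal sketch of such a packaging build, assuming a configuration named "DeployWebApi" that builds only the WebAPI project (the solution and configuration names are placeholders):

    rem Build one configuration and emit a WebDeploy package for the project it contains
    msbuild MySolution.sln /p:Configuration=DeployWebApi /p:DeployOnBuild=true /p:DeployTarget=Package

Repeating this with a second configuration (e.g. "DeployMvcSite") gives you the two reliable builds you asked for, one per Web Application.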
I have an MVC Web App that I am trying to get set up with continuous integration on Azure and Visual Studio Online. Basically, the solution has 4 projects within, 3 of which go to supporting the 1 Web App. The problem is, when I set up continuous deployment on Azure, it builds the entire solution and doesn't know which project I want for the root URL.
When I download the drop folder that is produced by the compilation, it looks like this:
    drop/                                      (lots of DLLs, including my web app's DLL)
    drop/_PublishedWebsites/
    drop/_PublishedWebsites/MyWebApp/          (including its bin, Content, fonts, etc.)
    drop/_PublishedWebsites/MyWebApp_Package/
and some other folders as well.
How can I configure the continuous deployment to put my Web App at the root of the website?
Thanks
Looks like the dumb solution is to rename your "main" project to be the alphabetically first project in the solution. Microsoft suggests this, or having only one project per solution. Either way, this is the most reliable and simple way to get the desired effect.
Before we used TFS, we were building our packages locally using Visual Studio. There were a lot of projects organized into solutions. When we wanted to build the package, we simply located the ccproj project, right-clicked it and hit "Package".
There are a couple of specifics in our solution:
We use web roles with multiple web sites and virtual applications, and we have them as project dependencies in the VS2012 solution.
We use config transformations for both web and worker roles. Worker role transforms were achieved by adding the transform target manually to the project file.
We have some additional class library projects; their outputs need to go in a subfolder of the worker role along with the correct configuration files (it is a kind of plug-in architecture). We used some xcopy commands to include these non-referenced libraries in our worker roles.
Everything worked smoothly when building in VS 2012 locally.
When we migrated to TFS, we quickly learned that we wouldn't be able to replicate the same build process on the build server:
It turned out that TFS does not preserve the solution structure; more details here: http://social.msdn.microsoft.com/Forums/en-US/tfsbuild/thread/9ac815c8-5961-4670-a6d0-660a9b66da9c
The project dependency that was solving multiple web sites and virtual applications in a single role did not work on the build server, probably because of the different output directory. We had to add some hacks to our ccproj and csproj files to get these published and correctly included in the resulting package.
The xcopy commands failed because of the different directory structures on the TFS build server.
We had to force cspack to run on the TFS build server by explicitly adding the /t:Publish parameter to the msbuild command line (see the sketch after this list).
Config transforms for worker roles did not work; we had to force them to occur using more hacks in the ccproj and csproj files.
There were more issues, but those are too detailed; I'll keep this at a high level just to illustrate the overall problem. The build somehow works now, but we have a lot of hacks in place.
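For reference, the forced package step looked roughly like this (the project name is a placeholder):

    rem Force cspack to run and produce the .cspkg on the build server
    msbuild MyCloudProject.ccproj /t:Publish /p:Configuration=Release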
I have two questions:
Is it possible to configure the TFS build server to have the exact same behavior as the local build in VS2012?
Is there any official solution for building azure packages with multiple web sites and virtual applications in a single web role?
I haven't yet tried this on a TFS build server, but the approach outlined in my blog at http://michaelcollier.wordpress.com/2013/01/14/multiple-sites-in-a-web-role/ has been working well. The "trick" is basically to modify the .ccproj file to tap into the CoreBuildDependsOn target, adding logic that will execute MSBuild against the secondary sites. This should also allow config transforms to work.
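As a rough sketch of that hook (the secondary site path and the BuildSecondarySites target name are illustrative; only CoreBuildDependsOn comes from the actual build pipeline):

    <PropertyGroup>
      <CoreBuildDependsOn>
        $(CoreBuildDependsOn);
        BuildSecondarySites
      </CoreBuildDependsOn>
    </PropertyGroup>
    <Target Name="BuildSecondarySites">
      <!-- Build the secondary web application so its output is available
           to be packaged into the web role alongside the primary site -->
      <MSBuild Projects="..\SecondarySite\SecondarySite.csproj"
               Targets="Build"
               Properties="Configuration=$(Configuration)" />
    </Target>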
I have a web application project in VS2012 which I'm publishing using a "Web Deploy Package". I want this package to include app-pool settings, specifically creating an IIS app-pool and assigning the newly created application to it.
I'm familiar with the option "Include application pool settings used by this Web project" available when the project is configured to use an IIS instance (not IIS Express), but IIS configuration is not part of the project file, and thus not source controlled. What happens when somebody builds a deployment package on a machine that hasn't had IIS meticulously configured? Not ideal.
How else then, can I go about getting AppPool settings into my web deploy package? I understand that the appPoolConfig provider is IIS7+ only, I'm fine with that limitation. I've banged my head against this issue in the past and never found a solution. 18 months later, we've got a new VisualStudio version, and a new web-publishing-pipeline, are there new options to address this? Or maybe something I missed when I first tackled this problem?
Edit
OK, I'm seeing the following as options:
Configure my project to sync settings from an IIS instance. As mentioned, I'm not a fan of this given that it puts settings outside of the project, meaning the environment has to be meticulously configured to build + publish. Plus it drags along other IIS settings I don't want included.
Inject something into the web-publishing-pipeline (WPP) to modify the archive.xml. I've toyed with this in the past and had limited success. One problem is that the pipeline isn't exactly co-operative about working directly on the archive.xml file; another is some of the more cryptic attributes involved, like MSDeploy.MSDeployProviderOptions, which appears to contain some Base64-encoded binary? No idea what to put in there.
Find an existing "provider" that can do what I want. I might be out of luck here, the appPoolConfig provider only seems to want to read / write IIS, not, say, an XML file of settings. Does anybody know otherwise?
Write my own "provider" to produce manifest output entries. I'm not sure, is it possible to write a custom provider that writes to a manifest using the name of an existing provider? As in, MyCustomPoolProvider writes appPoolConfig sections into a manifest? This sounds like a potentially painful exercise that may or may not work. Would I still need to figure out the encoding of whatever is going into MSDeploy.MSDeployProviderOptions?
I get the feeling that the fundamental obstacle with Web Deploy for what I'm trying to accomplish, is how strictly it leans on "providers". The pre-existing providers are largely designed for IIS synchronisation, not primary development and publication. It so happens that some of these providers can be relatively easily hooked into via MSBuild, but the majority insist on pulling data from IIS, and that's that.
You are correct in your understanding of the appPoolConfig provider, in that it can only sync between App Pools and can't be provided with the configuration directly. What you could potentially do is keep a copy of the appPool in question in package form (ie. msdeploy -verb:sync -source:appPoolConfig=PoolName -dest:package=apppool.zip) and attempt to hijack the pipeline so that the MSDeploy call adds the application content into the package, leaving the existing content there.
Alternatively, you could always keep the packages separate and deploy them with different calls to MSDeploy.
FYI, MSDeploy.MSDeployProviderOptions is simply an encoded version of the parameters supplied to the provider when it was packaged. For example, -source:dirPath=c:\,ignoreErrors=0x10293847 -dest:package=package.zip would package the ignoreErrors value.
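A sketch of the separate-packages route (the pool, package, and site names are placeholders):

    rem Capture the configured app pool from a reference IIS machine
    msdeploy -verb:sync -source:appPoolConfig="MyAppPool" -dest:package=apppool.zip

    rem On the target server: restore the pool first, then deploy the application package
    msdeploy -verb:sync -source:package=apppool.zip -dest:appPoolConfig="MyAppPool"
    msdeploy -verb:sync -source:package=MyWebApp.zip -dest:auto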
I feel like I need a better defined framework for updating my SharePoint (MOSS 2007) application with custom code changes. I am creating wsp solution files with features and new types and such, but once those get tested and deployed, I feel like it's a bit of a leap of faith, and that makes me nervous and occasionally reluctant to deploy changes. After deployment, it's difficult to correlate the current state of the SharePoint application with the specific code that is deployed on that SharePoint server. What features are actually installed and on which sites? Which features are activated or deactivated? Which version of this custom field or content type is really there? Things like this. If an error crops up, I have to rely on my assumptions about what code is there and actually running, or I have to spend time digging through deployed assemblies and the 12 hive -- not impossible, but pretty unpleasant.
What steps should I take to improve my ability to unambiguously determine the state of the application and find the code that truly represents that state? Are there third-party tools that can help with this?
I feel your pain... the Application Development Lifecycle with SharePoint 2007 leaves me with a bitter taste in my mouth.
To answer your question: we built our own deployment utility that does a few things for us.
Checks the state of key Timer Jobs (too many times we would do a deployment only to find one WFE that did not get the deployment).
Checks the state of key services on all our web front ends (again, we want to know the health of the farm before we start kicking off timer jobs).
Shows the file version and date of selected assemblies from the GAC (it does this across all web front ends). We have seen problems before where assemblies did not get installed correctly across the farm.
Updates web.config settings based on a custom XML schema we provide. We ran into some problems with web.config updates, so we have thought about creating a utility to validate the web.config (specifically, to make sure there are no duplicate entries for specific keys).
Pushes content type updates (the first time content types are deployed via a feature it works great, but as soon as you need to update that content type it gets tough).
Checks the status of WSP packages after deployment or upgrade.
This utility uses the SharePoint API to do most of this work. Some of it is done by checking WMI Events.
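As an aside, for the WSP status check specifically, the built-in stsadm command already gives a quick farm-level view without custom code:

    rem List every solution in the store with its deployment state
    stsadm -o enumsolutions

    rem Show the details of a single solution (the name is a placeholder)
    stsadm -o displaysolution -name mysolution.wsp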
Unfortunately the SharePoint development experience is lacking in this regard. As long as you are "namespacing" all features deployed using solution packages, you can use solution management from central admin to keep track of versions, and what gets deployed to which site collection.
Features are scoped at all levels, from the farm down to an individual web, so maintenance at that level is a little tough. I just try to organize all deployed code from the (top-down) solution level.
It gets even more complicated when deploying custom timer jobs, event handlers, etc; I really hope that version next will address a lot of these common developer concerns.
Isn't the only way to achieve this to have a planned, controlled deployment process and a version management system like TFS?
In the current project I am involved in we have:
Continuous builds
Daily Builds on a development server
When we release something to test, we merge the code to the Main branch in the version management system (TFS)
When it is tested and ready for production, we merge the Main branch to the Release branch
Working this structured way, we always know what is deployed in which environment, and we can also track all changes by environment or by changes in requirements (these are also tracked in TFS).
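The merges themselves are ordinary TFS merges; the server paths below are placeholders:

    rem Promote development changes to Main for testing
    tf merge $/MyProject/Dev $/MyProject/Main /recursive

    rem After sign-off, promote Main to the Release branch
    tf merge $/MyProject/Main $/MyProject/Release /recursive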
We are investigating using CruiseControl.NET both as a Continuous Integration build provider and as a way to automate the first part of our deployment process.
Has anyone modified CruiseControl.NET's dashboard to add custom login and user roles (i.e., restricting the ability to force a build to only certain individuals, on a per-project basis)?
The dashboard is a .NET App, but I believe it uses the nVelocity view engine instead of web forms, which I don't have experience with.
Can you mix nVelocity and WebForms, or do I need to spend a day learning something new? =)
#Keith:
We are leveraging CC.NET both to run a CI build and to use the Force Build feature to do a Build + Deploy. That is why we want to keep unauthorized hands off the dashboard.
I found this morning that I was able to place CCNet in a virtual directory within another web app. This allowed me to set up Forms Authentication and let the root app manage it. Problem solved.
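For anyone trying the same thing, the relevant piece of the root app's web.config looks roughly like this (the login page path is a placeholder); because CCNet sits in a virtual directory beneath the root app, these settings cascade down to the dashboard:

    <system.web>
      <authentication mode="Forms">
        <forms loginUrl="~/Login.aspx" timeout="30" />
      </authentication>
      <authorization>
        <!-- Deny anonymous users everywhere, including the CCNet dashboard -->
        <deny users="?" />
      </authorization>
    </system.web>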
Why do you need to? Do you really need to limit users in that way on an integration server? I think that's why CC.Net doesn't have that sort of support built in.
You can always see who forced a build, and control it that way.
I find that continuous integration works best with regular builds and regular unit test runs (our rather large C# app's build + test run takes 25 minutes and runs hourly), so for me forcing a build is rarely an issue.
If you want some users to have some kind of report-only access you could limit them so that they can't access the CC.Net web application at all.
All the results (MSBuild, NCover, NUnit, FxCop, etc.) are in XML, so you can build relatively simple report pages out of XSLT.
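As a sketch, here is a minimal XSLT that turns an NUnit results file into a summary table (the element and attribute names follow the classic NUnit 2.x schema; adjust for your versions):

    <?xml version="1.0" encoding="utf-8"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Match the root of an NUnit 2.x result file and emit a one-row HTML summary -->
      <xsl:template match="/test-results">
        <table border="1">
          <tr><th>Total</th><th>Failures</th><th>Not run</th></tr>
          <tr>
            <td><xsl:value-of select="@total"/></td>
            <td><xsl:value-of select="@failures"/></td>
            <td><xsl:value-of select="@not-run"/></td>
          </tr>
        </table>
      </xsl:template>
    </xsl:stylesheet>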