Is SharePoint local bin deployment possible?

I’ve inherited a SharePoint solution where all the projects have strong names and are deployed to the GAC.
I find it difficult to work with projects that are signed; it slows down development and testing and makes debugging harder.
So, is it possible for SharePoint projects (Web Parts, code-behinds, etc.) to be deployed to the local bin instead of the GAC? Is it considered bad practice to deploy to the local bin?

It is generally recommended to use the bin directory over the GAC for all Web Parts and code-behinds, as that restricts the trust given to the code; the GAC grants full trust.
After deploying to the bin you can grant the required permissions using CAS (Code Access Security).
I recommend reading the "Application Security" chapter of the book Inside Windows SharePoint Services 3.0.
Note: you will still have to deploy code such as feature handlers, timer jobs, etc. to the GAC.
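Note too that regardless of where the assembly goes, a Web Part also needs a SafeControl entry in the Web application's web.config. A minimal sketch, with a hypothetical unsigned (bin-deployed) assembly name:

    <!-- web.config: mark the Web Part assembly/namespace as safe to run.
         Names are hypothetical; PublicKeyToken=null because a bin-deployed
         assembly does not require a strong name. -->
    <SafeControls>
      <SafeControl Assembly="MyWebParts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null"
                   Namespace="MyWebParts"
                   TypeName="*"
                   Safe="True" />
    </SafeControls>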

By default, SharePoint Web applications are only allowed to run with a very restrictive trust level of WSS_Minimal. If we want to have our Web Part deployed to the bin folder, then in order for it to run we must do one of two things: either set the trust level to WSS_Medium or WSS_Full in the web.config, or create a custom CAS policy that will allow this assembly's managed code to run. In a production environment, you will need to make an informed decision on this yourself.
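For reference, the trust level is set in the Web application's web.config; a minimal sketch of the WSS_Medium option:

    <!-- web.config: raise the trust level so bin-deployed code can run;
         a custom CAS policy can be registered as its own trust level instead. -->
    <system.web>
      <trust level="WSS_Medium" originUrl="" />
    </system.web>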
I would use the GAC for local development and testing and use the bin in production.
To debug locally, check the following in your config file:
Set customErrors mode to "Off"
Enable stack traces by adding CallStack="true" to the SafeMode tag
Set the compilation debug attribute to "true"
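Roughly, those settings look like this in web.config (a sketch; other attributes on these elements keep their defaults and are omitted here):

    <!-- web.config changes for local debugging only -->
    <SharePoint>
      <SafeMode CallStack="true" />
    </SharePoint>
    <system.web>
      <customErrors mode="Off" />
      <compilation debug="true" />
    </system.web>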

Related

Can I modify an app manifest and re-sign the SharePoint .app file?

I am building a SharePoint 2013 provider-hosted app using the high-trust model. This allows a customer to deploy the .app to their App Catalog and make it available to all SharePoint Sites. The provider-hosted portion of the app runs in an IIS box (cluster) which the customer also deploys (on-premise) with setup instructions and automated tools.
The .app file structure includes the application manifest - which specifies the precise endpoint where the provider-hosted portion resides, and also specifies whitelisted endpoints which the add-in can call. These are all specified by entering in URLs, hostnames, and port numbers into edit fields in Visual Studio in the 'Deploy App' form just before the .app file is built and digitally signed.
This seems to work just fine for a single app built by IT folks internally, if the org is small enough... but I really want to be able to distribute this solution to more than one customer. In order to do so, I would have to ask the customer for their respective endpoints, enter them into my build tools, and rebuild the .app for them. This just doesn't seem right... no customer wants to talk to the developer first and have a custom-built app. And why should they? No code is changing...
Upon investigation into the .app file format, it turns out it is really just a simple .zip file - and inside (voila!) there is the app manifest! Unfortunately, if you edit the app manifest and re-zip the file, the digital signature is broken, and the .app no longer works. (grrrr...)
What I want to do is simply reconfigure the app manifest to match the environment where it is deployed. This can happen programmatically during setup/installation time, or perhaps even just prior to download, but cannot be a process that involves developers typing into visual studio and pressing Rebuild. That simply won't scale.
Is there a tool that exists that can help with this problem? If not, does anyone have experience with the signing of .app files programmatically? I'm open to skinning this cat in any way possible.
This is a wild idea and maybe not even possible:
Create a web UI where clients enter their endpoints.
Have an internal process that invokes MSBuild/TFS to package the app with that endpoint.
Change the app manifest with a pre-build PowerShell step (an MSBuild-based sketch of the same idea follows below).
Then provide the app via email or download?
http://www.sharepointconfig.com/2013/10/building-sharepoint-2013-apps-with-tfs-2013/
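If PowerShell isn't a requirement, the manifest change in step 3 could also be done as an MSBuild target that pokes the customer's endpoint into AppManifest.xml before packaging. A rough, untested sketch (the property name and XPath query are illustrative):

    <!-- Sketch: rewrite the StartPage in AppManifest.xml from a build property
         supplied per customer, e.g. msbuild MyApp.csproj /p:CustomerHost=apps.customer.com -->
    <Target Name="SetCustomerEndpoint" BeforeTargets="Build" Condition="'$(CustomerHost)' != ''">
      <XmlPoke XmlInputPath="AppManifest.xml"
               Query="//*[local-name()='StartPage']"
               Value="https://$(CustomerHost)/MyApp/Default.aspx?{StandardTokens}" />
    </Target>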
This is more of a workaround than a true answer - but would work:
For on-premise deployments of high-trust SharePoint 2013 apps - build the application with "known endpoints" - essentially hard-coded endpoints that can be deployed locally. Then instruct the customer to redirect those endpoints using DNS records or hosts file entries. In addition, the client would need to generate a local wildcard certificate signed by their own trusted root in order to satisfy the SharePoint 2013 app model requirements for appdomain and server-to-server communication.
This is by no means ideal, but for certain environments it might be the most practical approach. This also allows scaling for the IIS WebApp to occur at the customer-site, where it realistically belongs for a high-trust app.
This approach avoids the need to automate build tools and also avoids building a separate instance for every customer - both of which are somewhat undesirable. It might, for those reasons, be slightly less costly - but it also pushes some responsibility to the customer. Namely - hard-coding a DNS entry locally for machines in the topology.

TFS server build vs. local Visual Studio build differences

Before we used TFS, we were building our packages locally using Visual Studio. There were a lot of projects organized into solutions. When we wanted to build the package, we simply located the .ccproj project, right-clicked it and hit "Package".
There are a couple of specifics to our solution:
We use web roles with multiple web sites and virtual applications, and we have them as project dependencies in the VS2012 solution.
We use config transformations for both web and worker roles. Worker role transforms were achieved by adding a transform target manually to the project file (roughly as sketched below).
We have some additional class library projects; their outputs need to go into a subfolder of the worker role along with the correct configuration files (a kind of plug-in architecture). We used some xcopy commands to include these non-referenced libraries in our worker roles.
Everything worked smoothly when building in VS 2012 locally.
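The manually added worker role transform target looks roughly like this in the csproj (a sketch; the Visual Studio version path and file names may differ per setup):

    <!-- Run the web-publishing TransformXml task against app.config after build. -->
    <UsingTask TaskName="TransformXml"
               AssemblyFile="$(MSBuildExtensionsPath32)\Microsoft\VisualStudio\v11.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
    <Target Name="TransformWorkerRoleConfig" AfterTargets="AfterBuild"
            Condition="Exists('app.$(Configuration).config')">
      <TransformXml Source="app.config"
                    Transform="app.$(Configuration).config"
                    Destination="$(OutDir)$(TargetFileName).config" />
    </Target>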
When we migrated to TFS, we quickly learned that we wouldn't be able to replicate the same build process on the build server:
It turned out that TFS does not preserve the solution structure; more details here: http://social.msdn.microsoft.com/Forums/en-US/tfsbuild/thread/9ac815c8-5961-4670-a6d0-660a9b66da9c
The project dependency that handled multiple web sites and virtual applications in a single role did not work on the build server, probably because of the different output directory. We had to add some hacks to our ccproj and csproj files to get these published and correctly included in the resulting package.
The xcopy commands failed because of the different directory structures on the TFS build server.
We had to force cspack to run on the TFS build server by explicitly adding the /t:Publish parameter to the msbuild command line.
Config transforms for worker roles did not work; we had to force them to occur using more hacks in the ccproj and csproj files.
There were more issues, but those are too detailed; I'll keep this at a high level just to illustrate the overall problem. The build somehow works now, but we have a lot of hacks in place.
I have two questions:
Is it possible to configure the TFS build server to have exactly the same behavior as the local build in VS2012?
Is there any official solution for building Azure packages with multiple web sites and virtual applications in a single web role?
I haven't yet tried this on a TFS build server, but the approach outlined in my blog at http://michaelcollier.wordpress.com/2013/01/14/multiple-sites-in-a-web-role/ has been working well. The "trick" is basically to modify the .ccproj file to tap into the CoreBuildDependsOn target, adding logic that will execute MSBuild against the secondary sites. This should also allow config transforms to work.
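In outline, the .ccproj change looks something like this (project names and paths below are placeholders; the blog post has the complete version):

    <!-- Sketch: make the cloud project's core build depend on building the
         additional web sites so their output is available for packaging. -->
    <PropertyGroup>
      <CoreBuildDependsOn>
        BuildSecondaryWebSites;
        $(CoreBuildDependsOn);
      </CoreBuildDependsOn>
    </PropertyGroup>
    <Target Name="BuildSecondaryWebSites">
      <MSBuild Projects="..\SecondWebSite\SecondWebSite.csproj"
               Targets="Build"
               Properties="Configuration=$(Configuration)" />
    </Target>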

VS2012 Web Deploy Package to create application pool

I have a web application project in VS2012 which I'm publishing using a "Web Deploy Package". I want this package to include app-pool settings, specifically creating an IIS app-pool and assigning the newly created application to it.
I'm familiar with the option "Include application pool settings used by this Web project" available when the project is configured to use an IIS instance (not IIS Express), but IIS configuration is not part of the project file, and thus not source controlled. What happens when somebody builds a deployment package on a machine that hasn't had IIS meticulously configured? Not ideal.
How else, then, can I go about getting app pool settings into my web deploy package? I understand that the appPoolConfig provider is IIS7+ only; I'm fine with that limitation. I've banged my head against this issue in the past and never found a solution. 18 months later, we've got a new Visual Studio version and a new web-publishing pipeline; are there new options to address this? Or maybe something I missed when I first tackled this problem?
Edit
OK, I'm seeing the following as options:
Configure my project to sync settings from an IIS instance. As mentioned, I'm not a fan of this given that it puts settings outside of the project, meaning the environment has to be meticulously configured to build + publish. Plus it drags along other IIS settings I don't want included.
Inject something into the web-publishing-pipeline (WPP) to modify the archive.xml. I've toyed with this in the past and had limited success. One problem is the pipeline isn't exactly co-operative with working directly on the archive.xml file, another problem is some of the more cryptic attributes involved, like MSDeploy.MSDeployProviderOptions which appears to have some Base64 encoded binary? No idea what to put in there.
Find an existing "provider" that can do what I want. I might be out of luck here, the appPoolConfig provider only seems to want to read / write IIS, not, say, an XML file of settings. Does anybody know otherwise?
Write my own "provider" to produce manifest output entries. I'm not sure, is it possible to write a custom provider that writes to a manifest using the name of an existing provider? As in, MyCustomPoolProvider writes appPoolConfig sections into a manifest? This sounds like a potentially painful exercise that may or may not work. Would I still need to figure out the encoding of whatever is going into MSDeploy.MSDeployProviderOptions?
I get the feeling that the fundamental obstacle with Web Deploy for what I'm trying to accomplish, is how strictly it leans on "providers". The pre-existing providers are largely designed for IIS synchronisation, not primary development and publication. It so happens that some of these providers can be relatively easily hooked into via MSBuild, but the majority insist on pulling data from IIS, and that's that.
You are correct in your understanding of the appPoolConfig provider, in that it can only sync between app pools and can't be provided with the configuration directly. What you could potentially do is keep a copy of the app pool in question in package form (i.e. msdeploy -verb:sync -source:appPoolConfig=PoolName -dest:package=apppool.zip) and attempt to hijack the pipeline so that the MSDeploy call adds the application content into the package, leaving the existing content there.
Alternatively, you could always keep the packages separate and deploy them with different calls to MSDeploy.
FYI, MSDeploy.MSDeployProviderOptions is simply an encoded version of the parameters supplied to the provider when it was packaged. For example, -source:dirPath=c:\,ignoreErrors=0x10293847 -dest:package=package.zip would package the ignoreErrors value.

What are the methods for production deployment in SharePoint?

Why does Microsoft suggest using WSPs for production deployment in SharePoint? What are the other methods for production deployment?
WSPs are suggested because they are deployable 'bundles' of functionality, whether that is an event handler, application page, or Web Part. By using WSPs you can create and test them in dev and then roll them out to production once they have been tested. A WSP can be easily managed from the solution store in Central Administration.
It is possible to deploy features by putting the necessary files into the 12 Hive (SharePoint's ambiguously named folder), but this requires manual changes to the system. If you have several Web Front Ends (WFEs) in a web farm, then you would need to manually maintain each of them. When using WSPs for deployment, the updates can be deployed to all servers from one location.
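For illustration, a WSP is just a CAB file containing the files to deploy plus a manifest.xml that tells SharePoint what goes where; a minimal sketch with hypothetical names:

    <!-- manifest.xml: a minimal solution manifest (all names/IDs are placeholders). -->
    <Solution xmlns="http://schemas.microsoft.com/sharepoint/"
              SolutionId="00000000-0000-0000-0000-000000000000">
      <FeatureManifests>
        <FeatureManifest Location="MyFeature\feature.xml" />
      </FeatureManifests>
      <Assemblies>
        <Assembly DeploymentTarget="GlobalAssemblyCache" Location="MyWebParts.dll" />
      </Assemblies>
    </Solution>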
WSP files are designed for deploying functionality to SharePoint in a consistent manner. Although technically they don't do anything you can't do by just copying files to the server, relying on manual deployment is a great way to put the system into an inconsistent state. It may work at first, and even be quicker/easier in some cases, but sooner or later you will permanently break your production environment.
The WSP was specifically designed for the purpose of packaging and deploying SharePoint 2007 solutions. That's why Microsoft suggests using it!
While there are a few limitations to it, it's by far the best way to deploy solutions into a prod environment.
You should use WSPs to deploy in SharePoint.
I have used WSPBuilder and it makes your life a little easier.
http://www.codeplex.com/wspbuilder

SharePoint Solution Deployment: How do I prevent SP from resetting IIS when upgrading or retracting a globally deployed solution?

So I figured out that adding the ResetWebServer="FALSE" attribute to the solution manifest prevents SharePoint from recycling any app pools.
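For reference, the attribute goes on the root Solution element of the package's manifest.xml, roughly:

    <!-- manifest.xml: suppress the IIS reset on deployment
         (SolutionId is a placeholder). -->
    <Solution xmlns="http://schemas.microsoft.com/sharepoint/"
              SolutionId="00000000-0000-0000-0000-000000000000"
              ResetWebServer="FALSE">
      <!-- features, assemblies, and other solution contents -->
    </Solution>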
However, when upgrading a solution that originally did not specify ResetWebServer="FALSE" or when retracting a solution that does specify ResetWebServer="FALSE", the application pools are still being recycled. Is there a way to prevent any auto-recycling of app pools?
This does not seem possible given the documentation on MSDN (see below); note that I quoted "Deploying a solution" rather than "Upgrading a solution", as underneath an upgrade is effectively doing a file replacement. I believe the restart/recycling is necessary because of how IIS functions. If you want to manage when this occurs, one option to explore is to ensure that all deployments are done via timer jobs that execute when their impact will be minimized.
Deploying a solution
Initially, manifest and feature manifests are parsed to find assembly and _layouts files, which are copied to the appropriate locations. All other files contained within a feature directory are copied to the feature directory. After solution files are copied to the target computers, a configuration reset is scheduled for all front-end Web servers; the reset then deploys the files and restarts Microsoft Internet Information Services (IIS).
Retracting a solution
On each front-end Web server, the following occurs:
Microsoft Internet Information Services (IIS) is disabled.
Files are removed from the system.
IIS is re-enabled and Windows SharePoint Services is reloaded when a user browses to a page.
You might also take a look at the "-local" switch. I haven't tried it yet, but it seems to allow deployment server by server when you are in a load-balanced situation.
Might be a good lead.
