I have a .ps1 script that deploys all of my web parts. I started noticing an error (503 Service Unavailable) after running Update-SPSolution. What is happening is that when I upgrade all my web parts, the application pools for all SharePoint web applications stop. It also takes about 12 minutes per web part to deploy (which seems like forever; it looks like it may be running them all in parallel). Could someone shed some light on the best way to upgrade web parts using Update-SPSolution? Ideally, I would like my script to wait until an upgrade of a particular web part fully completes, and then move on to the next one. Thoughts?
You might get better performance on the upgrade if you set ResetWebServer to false in each solution manifest. Naturally, you would still have to reset the web server(s) after all the upgrades, but at least you would only have to do it once.
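As a sketch of how that could look (hedged: the solution names and paths below are placeholders, and this assumes each solution's manifest.xml sets ResetWebServer="FALSE"), you can upgrade each WSP strictly one at a time by waiting for its deployment timer job to finish, then reset IIS only once at the end:

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # Placeholder list of solutions; substitute your own WSP names and paths.
    $solutions = @("WebPartsA.wsp", "WebPartsB.wsp")

    foreach ($name in $solutions) {
        Update-SPSolution -Identity $name -LiteralPath "C:\deploy\$name" -GACDeployment

        # Block until the upgrade timer job for this solution has finished,
        # so the upgrades run sequentially instead of piling up in parallel.
        while ((Get-SPSolution $name).JobExists) {
            Start-Sleep -Seconds 5
        }
    }

    # With ResetWebServer="FALSE" in each manifest, one reset at the end suffices.
    iisreset

This also covers the original question's wish to let each upgrade fully complete before moving on to the next one.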
You might also consider combining web parts into fewer projects/solutions. This can be challenging, as your web parts' assembly-qualified names are part of the .webpart file, and therefore part of any web part that is still in use.
If your solutions are Farm solutions, SharePoint will restart the application pool in order to reload your assemblies.
The only way to completely avoid this restart is to use a sandboxed solution. That's not always possible, but depending on your type of customization this may be an answer.
Another solution is to only have one solution containing all your webparts. You'll still need an application pool restart, but it should take less than a minute.
12min is really a lot!
Edit:
To merge your WSPs, you'll have to merge your Visual Studio projects into one. It's also possible to do it by hand, but that's not a good choice in the long term.
Related
We have WSPs in our project, but whenever a WSP is deployed, the "Service Unavailable" page comes up at the site level.
Is there any way that a few DLLs can be added to the GAC without taking downtime on the production server?
In reality, the answer is no. When you do a deployment with a WSP, the reset is triggered so the latest DLLs are reloaded and memory within the application pools is cleared down.
So anything related to server-side code will require a reset.
If you only update things in the hive, you can get away with a zero-downtime deployment.
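If you want to check programmatically whether a given solution will force that reset, the SPSolution object exposes a couple of flags (a quick sketch; the solution name is a placeholder):

    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    $s = Get-SPSolution "mysolution.wsp"   # placeholder name
    $s.ContainsGlobalAssembly              # True: ships GAC assemblies, so deploying recycles app pools
    $s.ContainsWebApplicationResource      # True: touches each web application's web.config/bin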
There is a more in-depth answer here on when you need an IIS reset.
Cheers
Truez
It is technically possible to do a local deployment and stage this yourself. Install-SPSolution offers the -Local switch. You could in theory use this to control the rotation of the deployment (assuming more than one server). But as noted above, getting IIS to reload the assemblies requires the application pool to be recycled. Assemblies are loaded from the GAC, but are then memory resident.
https://blog.ithinksharepoint.com/2012/07/16/deploying-sharepoint-wsp-solutions-without-downtime/
I've tried it a few times and not been hugely successful so your mileage may vary.
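A rough sketch of what that rotation might look like (hedged heavily: the server names are placeholders, draining a server from the load balancer is left as a comment because it is entirely environment-specific, and remoting into each box with the SharePoint snap-in may require CredSSP):

    # Deploy to one server at a time while the load balancer sends traffic elsewhere.
    foreach ($server in @("WFE1", "WFE2")) {    # placeholder server names
        Invoke-Command -ComputerName $server -ScriptBlock {
            Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
            # (drain this server from the load balancer here - environment specific)
            Install-SPSolution -Identity "mysolution.wsp" -Local -GACDeployment
            iisreset    # recycle so the new assemblies are loaded from the GAC
            # (re-add the server to the load balancer here)
        }
    }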
If you are dependent on GACed assemblies, you can technically push them yourself outside of the WSP. However, you may end up in a weird place, as the following may occur:
1) You retract the web application from all servers (this will also retract the solution). You may end up with nothing, or with orphaned assemblies in the GAC.
2) You add a server later on, it takes the deployment package, and you end up with a server running a different set of bits in your farm. This is painful for anyone to troubleshoot. Imagine if you left?
3) You deploy an assembly with a different version than what your web.config expects, and it can't be used or found.
My question is similar to this one, which remained unanswered, unfortunately.
We are rolling out a web application as a web deployment package (Web Deploy/MSDeploy) to different environments. The package is created from within Visual Studio 2012/Team Build. Several parameters are to be set at install time (connection strings, WCF endpoints, logging settings, etc.). We have these in a parameters.xml at the root of the project.
Most of our customers import the package through IIS UI. Each time we roll out an update, customer IIS administrators have to provide the parameter values again through the UI. Most of the time, parameters do not change across updates.
What is the best way to handle this? Advise customer IIS administrators to use the command line instead, injecting a SetParameters.xml that they keep separately? (The skill level of some of our customers' administrators isn't particularly high, so having something UI-based that we can document with a couple of screenshots is an advantage.) Keep the settings file (web.config or app.config) out of the package altogether? What is the neatest way to do this?
I had the same problem, but decided to go with the batch-script installer file that comes with the web deploy package. In my mind it is more secure doing this installation by script instead of through the GUI. It can be documented, and maybe they need to learn a little bit of command line?
As you say, they can use the same SetParameters file for all following releases if nothing in it has changed - which in my mind is a huge benefit: not having to manage web.configs manually.
Automated deploys minimizes manual errors.
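For illustration (the file names are hypothetical; Visual Studio generates a *.deploy.cmd and a *.SetParameters.xml next to the package), the administrator keeps an environment-specific parameters file and reuses it for every release:

    # Names and paths are placeholders for whatever your package build produces.
    # Keep the environment-specific parameters file separately (e.g. under
    # version control) and copy it over the package default before each release.
    Copy-Item "C:\deploy\prod.SetParameters.xml" "C:\packages\MyApp.SetParameters.xml" -Force

    # /T performs a trial run; /Y performs the actual deployment.
    & "C:\packages\MyApp.deploy.cmd" /Y

Since the .deploy.cmd picks up the SetParameters.xml sitting next to it, the administrators never have to retype values in the IIS UI.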
Deploying to SharePoint using the object model or STSADM commands sometimes results in one or more packages being in the "error" state in the solution management UI. A redeploy usually fixes this instantly. Even stranger, if I create two apps, one which adds and one which deploys, I get no problems; but putting a delay between the two steps in a single program does not have a similar effect.
If I run the deploy twice for packages which did not deploy successfully, it works fine, as long as I do not try to do it programmatically, in which case it makes no difference.
It affects different files each time, and sometimes none at all.
I do use stsadm -o execadmsvcjobs between add and deploy, and even between two of the deploy batches.
(I'm deploying around 10 WSP files programmatically.)
Does anyone have any ideas on why this happens, or how to solve it? It causes problems when I get to real implementations.
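(For reference, the add/deploy sequence described above, with execadmsvcjobs flushes in between, boils down to something like this; the solution name is a placeholder:)

    stsadm -o addsolution -filename MyParts.wsp
    stsadm -o execadmsvcjobs    # flush pending administrative timer jobs
    stsadm -o deploysolution -name MyParts.wsp -immediate -allowgacdeployment
    stsadm -o execadmsvcjobs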
The problem lies in the fact that SharePoint performs app pool recycles and/or full IIS resets, as well as restarts of the SharePoint Timer Service (although I'm not completely sure about that last one). When you try to actually deploy the just-installed package, SharePoint is still busy getting up and running again; the timer job created to install/deploy is basically waiting for the Central Admin app pool to be fully running again.
The same thing happens (somewhat reproducibly) while retracting a solution. Hit F5 a lot of times on the solution management page while the retract process is underway, and if you refresh fast enough it will hang and display "error" in red.
My solution was to create a WebRequest to at least the Central Admin site (or just do a new SPSite("centraladminurl")) in your deployment app or in PowerShell. Do this after every deploy action as well.
This SHOULD fix the timing issue (basically a kind of "race condition").
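A minimal PowerShell version of that warm-up (the Central Admin URL is a placeholder):

    # Touch Central Admin after each deploy action so the next timer job
    # doesn't race against an app pool that is still spinning back up.
    $request = [System.Net.WebRequest]::Create("http://centraladmin:2010")   # placeholder URL
    $request.UseDefaultCredentials = $true
    $response = $request.GetResponse()    # blocks until the app pool answers
    $response.Close()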
I feel like I need a better defined framework for updating my SharePoint (MOSS 2007) application with custom code changes. I am creating wsp solution files with features and new types and such, but once those get tested and deployed, I feel like it's a bit of a leap of faith, and that makes me nervous and occasionally reluctant to deploy changes. After deployment, it's difficult to correlate the current state of the SharePoint application with the specific code that is deployed on that SharePoint server. What features are actually installed and on which sites? Which features are activated or deactivated? Which version of this custom field or content type is really there? Things like this. If an error crops up, I have to rely on my assumptions about what code is there and actually running, or I have to spend time digging through deployed assemblies and the 12 hive -- not impossible, but pretty unpleasant.
What steps should I take to improve my ability to unambiguously determine the state of the application and find the code that truly represents that state? Are there third-party tools that can help with this?
I feel your pain... Application Development Lifecycle with SharePoint 2007 leaves me with a bitter taste in my mouth.
To answer your question: we built our own deployment utility that does a few things for us.
Checks the state of key timer jobs (too many times we would do a deployment only to find one WFE that did not get the deployment).
Checks the state of key services on all our web front ends (again, we want to know the health of the farm before we start kicking off timer jobs).
Shows the file version and date of selected assemblies from the GAC (across all web front ends). We have seen problems before where assemblies did not get installed correctly across the farm.
Updates web.config settings based on a custom XML schema we provide. We ran into some problems with web.config updates, so we have thought about creating a utility to validate the web.config (specifically, to make sure there are no duplicate entries for specific keys).
Pushes content type updates (the first time content types are deployed via a feature it works great, but as soon as you need to update that content type it gets tough).
Checks the status of the WSP package after deployment or upgrade.
This utility uses the SharePoint API to do most of this work. Some of it is done by checking WMI Events.
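A fragment in that spirit (a sketch only, using the MOSS 2007 object model from PowerShell; this is not the utility's actual code):

    # Load the SharePoint object model (MOSS 2007 era).
    [void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")

    # List every farm solution with its deployment state, so you can see at a
    # glance what is actually installed and whether a job is still pending.
    $farm = [Microsoft.SharePoint.Administration.SPFarm]::Local
    foreach ($solution in $farm.Solutions) {
        "{0}: Deployed={1} State={2} JobExists={3}" -f $solution.Name, $solution.Deployed, $solution.DeploymentState, $solution.JobExists
    }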
Unfortunately the SharePoint development experience is lacking in this regard. As long as you are "namespacing" all features deployed using solution packages, you can use solution management from central admin to keep track of versions, and what gets deployed to which site collection.
Features can be scoped at all levels, from the farm down to an individual web, so maintenance at that level is a little tough. I just try to organize all deployed code from the (top-down) solution level.
It gets even more complicated when deploying custom timer jobs, event handlers, etc.; I really hope that the next version will address a lot of these common developer concerns.
Isn't the only way to achieve this a planned/controlled deployment process and a version management system like TFS?
In the current project I am involved in we have:
Continuous builds
Daily Builds on a development server
When we release something to test, we merge the code to the main branch in the version management system (TFS)
When tested and ready for production, we merge the main branch to the release branch
Using this structured approach we always know what is deployed in which environment, and can also track all changes based on environment or on changes in requirements (which are also tracked in TFS).
We are starting with SharePoint development with a team of three and are currently setting up our development environments. We would like to avoid installing a Server 2008 for each developer, so a single terminal server has been set up, using remote sessions to start a VS2008 instance on each developer's machine. Now we would like to separate the developers' testing environments (i.e. a different site collection per developer), but we have realized that the assemblies would need to be installed into the GAC to show up properly on the site. And since there is AFAIK only one GAC, developers wouldn't be able to test their stuff independently.
Is there any way we could create separate testing environments without installing a bunch of 2008 Servers?
So you're all going to remote in and fire up Visual Studio, compile stuff, restart IIS, and so on?
You're going to be stamping on each other's toes.
A wiser choice nowadays is to use Hyper-V (or some other virtualisation).
We use Windows Server 2008 on our laptops, and use Hyper-V to run our dev environments. We then have a dev environment (sandbox) each, and these have VS2008, SVN, Nunit, etc.
Our code is tested against each other's thanks to CruiseControl on the one shared Hyper-V instance.
This has been great for us... we distribute the load, we can work on the move, we don't step on each other's toes, and if we need to do a demo we can switch Hyper-Vs and demo from the demo Hyper-V (branched from the dev one early on so that the environments are known).
Go virtual and don't look back.
PS: I've just seen your comment about one server... just put Hyper-V on that and run 3 instances. That's also what we do ;)
I don't know about installing the server on everything, but this sounds like an ideal task for virtual machines rather than physical ones. Where I work we use VMware a whole lot for this kind of work, and it does very well.
It's also useful to be able to roll back to a snapshot when it comes to testing installation processes and so on.
No. In addition to the GAC there are all the SharePoint files in the 12 hive, such as features and site templates. It's not worth what you save on server costs.
(Of course if you don't use the GAC, but deploy to the bin folder, and you don't touch anything in the 12 hive, you can give each developer their own web application on the same server. But this approach puts a lot of restrictions on what they can do. It's still not worth it.)
Virtual machines will work, but they can be slow to develop on. For instance, you'll need to restart the application pool for every GAC deploy, which means a pause of maybe 15-60 seconds to reload the application (depending on the hardware). This will become annoying.
Virtual machines work better for test and production, where you don't restart the application so often.
I recommend a physical server for each developer. This will minimize the code-deploy-test cycle time, and make sure they don't have to worry about stepping on each other's toes.
You are on the wrong track with Terminal Services - it's just not going to give you any separation.
A lot of people do recommend developing directly on Windows Server 2003/2008, and it does simplify some things like remote debugging.
I prefer the more traditional method of using VMware to run virtual machines. These can be running on a local or remote host. Remote debugging is a little more complex to set up, but still possible.
Finally, if possible, deploy to the bin directory rather than the GAC. This will make it much easier to deploy automatically after compilation.
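For example (the paths and assembly name are hypothetical), a post-build step can simply copy the assembly into the web application's bin folder:

    # Hypothetical paths: copy the freshly built assembly into the web
    # application's bin folder; the next request loads the new version.
    Copy-Item "bin\Debug\MyWebParts.dll" "C:\inetpub\wwwroot\wss\VirtualDirectories\80\bin" -Force

Bear in mind that bin deployments run under the web application's trust level, so CAS policy may need adjusting.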
The contributors are right that there are lots of stumbling blocks to multi-developer single-server environments.
Number one: developers will be trying to attach to the same web application process, w3wp.exe, so creating separate web applications on different ports is a must unless you are prepared to take turns debugging. (See: How to setup a development environment for sharepoint 2013)
The second problem comes when you try to collaborate and use shared components/features. Whether wanting to work separately is even right is debatable; I believe team developers should be collaborating and sharing, so combining work is desirable to ensure seamless integration into a single final solution and that no work is duplicated. The multi-developer single-server environment works perfectly until you try to collaborate: 'One common mistake is to have one "development server" used by all team developers. Unless team members are working on totally unrelated components and never need to do common things such as restart IIS or attach a debugger to an IIS process, this type of environment generally doesn't work well.' http://technet.microsoft.com/en-us/magazine/dn145990.aspx We made this mistake through lack of experience and knowledge, but once you make it, it's possible to work around it.
My first attempt to share features was to copy developer 1's project into developer 2's solution, add a reference to it in developer 2's project, and add all the features to developer 2's package. Deploying this works fine for developer 2 until, as I discovered, developer 1 detaches their solution from the debugger: that retracts the solution, based on the duplicated solution ID, from the farm, and therefore from each developer's web application. Developer 2 then has the rug pulled out from underneath them. Although this partial solution seemed to work for a while, it took me a while to work out what was happening and which combinations of dev 1 and dev 2 deployments would break or fix each other's environments.
So I found a better solution. In the project properties in Visual Studio, under the SharePoint tab, there is a checkbox called 'Auto-retract after debugging'. By default this retracts the solution when the developer stops the attached debugger, pulling the features out from underneath the other developers. Unticking this box prevents the retract and leaves each developer's individual solution deployed at farm level; on reattaching the debugger it just replaces the solution with minimal fuss.
In my experience recycling the IIS application pool is so fast that the other developers don't even notice, but with a team larger than two this might become more noticeable, so perhaps someone else can add their experiences. I also guess that unless the other developers try to attach at exactly the moment the recycle is happening it'll be fine, so there is only a really small chance of a cross-over, and simply detaching and reattaching will fix it if it is ever experienced.