Test deployment for SharePoint by multiple developers on a single server

We are starting SharePoint development with a team of three and are currently setting up our development environments. We would like to avoid installing Windows Server 2008 for each developer, so a single terminal server has been set up, and each developer starts a VS2008 instance in a remote session from their own machine. Now we would like to separate the developers' testing environments (i.e. a different site collection per developer), but we have realized that the assemblies would need to be installed into the GAC to show up properly on the site. And since there is, AFAIK, only one GAC, the developers wouldn't be able to test their work independently.
Is there any way we could create separate testing environments without installing a bunch of 2008 Servers?

So you're all going to remote in and fire up Visual Studio and be compiling stuff and restarting IIS, etc.?
You're going to be stamping on each other's toes.
A wiser choice nowadays is to use Hyper-V (or some other virtualisation).
We use Windows Server 2008 on our laptops, and use Hyper-V to run our dev environments. We each have our own dev environment (sandbox), and these have VS2008, SVN, NUnit, etc.
Our code is tested against each other's thanks to CruiseControl on the one shared Hyper-V instance.
This has been great for us... we distribute the load, we can work on the move, we don't step on each other's toes, and if we need to do a demo we can switch Hyper-Vs and demo from the demo VM (branched from the dev one early on so that the environments are known).
Go virtual and don't look back.
PS: I've just seen your comment about one server... just put Hyper-V on that and run 3 instances. That's also what we do ;)

I don't know about installing the server on everything, but this sounds like an ideal task for virtual machines rather than physical ones. Where I work we use VMware a whole lot for this kind of work and it does very well.
It's also useful to be able to roll back to a snapshot when it comes to testing installation processes and so on.

No. In addition to the GAC there are all the SharePoint files in the 12 hive, such as features and site templates. It's not worth what you save on server costs.
(Of course if you don't use the GAC, but deploy to the bin folder, and you don't touch anything in the 12 hive, you can give each developer their own web application on the same server. But this approach puts a lot of restrictions on what they can do. It's still not worth it.)
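For reference, bin deployment is driven by the solution manifest; a small illustrative fragment (the assembly name, namespace and version are made up for the example, and the assembly is not strong-named, hence PublicKeyToken=null):

<!-- manifest.xml fragment: deploy the assembly to the web application's bin folder -->
<Assemblies>
  <Assembly Location="MyWebParts.dll" DeploymentTarget="WebApplication">
    <SafeControls>
      <SafeControl Assembly="MyWebParts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" Namespace="MyWebParts" TypeName="*" Safe="True" />
    </SafeControls>
  </Assembly>
</Assemblies>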
Virtual machines will work, but they can be slow to develop on. For instance, you'll need to restart the application pool for every GAC deploy, which means a pause of maybe 15-60 seconds to reload the application (depending on the hardware). This will become annoying.
Virtual machines work better for test and production, where you don't restart the application so often.
I recommend a physical server for each developer. This will minimize the code-deploy-test cycle time and make sure they don't have to worry about stepping on each other's toes.

You are on the wrong track with Terminal Services - it's just not going to give you any separation.
A lot of people do recommend developing on Windows Server 2003/2008 directly, and it does simplify some things like remote debugging.
I prefer the more traditional method of using VMware to run virtual machines. These can be running on a local or remote host. Remote debugging is a little more complex to set up, but still possible.
Finally - if possible, deploy to the bin dir rather than the GAC. This will make it much easier to deploy automatically after compilation.
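For example, a Visual Studio post-build event (interpreted by cmd.exe) can copy the freshly built assembly straight into the web application's bin folder; a minimal sketch, where the virtual directory path is illustrative and needs adjusting to your environment:

rem Post-build event command line: copy the new build into the web app's bin
xcopy /y "$(TargetPath)" "C:\inetpub\wwwroot\wss\VirtualDirectories\80\bin\"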

The contributors are right that there are lots of stumbling blocks to multi-developer single server environments.
Number one, developers will be trying to attach to the same Web Application process (w3wp.exe), so creating separate Web Applications on different ports is a must unless you are prepared to share debugging time.
The second problem comes when you try to collaborate and use shared components/features. The desire to work separately is debatable; I believe team developers should be collaborating and sharing, so combining work is desirable to ensure seamless integration into a single final solution and that no work is duplicated. The multi-developer single-server environment works perfectly until you try to collaborate: 'One common mistake is to have one "development server" used by all team developers. Unless team members are working on totally unrelated components and never need to do common things such as restart IIS or attach a debugger to an IIS process, this type of environment generally doesn't work well.' http://technet.microsoft.com/en-us/magazine/dn145990.aspx We made this mistake through lack of experience and knowledge, but once you make it, it is possible to work around it.
My first attempt to share features was to copy developer 1's project into developer 2's solution, add a reference to it in developer 2's project, and add all the features to developer 2's package. Deploying this works fine for developer 2 - until, as I discovered, developer 1 detaches their solution from the debugger, which retracts the solution (matched by the duplicated solution id) from the farm and therefore from each developer's web application. Developer 2 then has the rug pulled out from underneath them. Although this partial solution seemed to work for a while, it took me a while to work out what was happening and which combinations of dev 1 and dev 2 deployments would break each other's work.
So I found a better solution. In the project properties in Visual Studio, under the SharePoint tab, there is a checkbox called 'Auto-retract after debugging'. By default this retracts the solution when the developer stops the attached debugger, pulling the features out from underneath the other developers. Unticking the box prevents the retract and leaves each developer's individual solution deployed at farm level; reattaching the debugger just replaces the solution with minimal fuss.
In my experience, recycling the IIS application pool is so fast the other developers don't even notice, but with a team larger than two this might become more of a problem, so perhaps someone else can add their experiences. I also guess that unless the other developers try to attach at exactly the moment a recycle is happening it will be fine, so there is only a really small chance of a clash, and simply detaching and reattaching will fix it if it is ever experienced.


Liferay With Multiple Server Instances

I'm working with multiple Liferay projects (different portal, plugins, users and user groups, etc.) at the same time, and often have to switch between them. This switch requires lots of steps, like:
Editing portal-ext.properties (to change the Liferay database and some custom project-specific properties), and editing portal-setup-wizard.properties
Adding/removing portlets, themes and hooks from the Eclipse server instance, and sometimes cleaning Tomcat's 'data', 'webapps' and 'work' folders
Going to Liferay's Control Panel/Server/Plugins Installation and re-indexing portlets like 'Users and Organizations' or 'Documents and Media'
So I thought that creating a new server instance for each project, with a new Tomcat and JRE, would be a nice idea. When I had to switch projects, I could just stop the old server and start another. At first I thought (was advised, actually) that using the same Liferay Plugins SDK (6.1.0) should be OK, as long as the server instances are the same version.
In practice this doesn't work 100% perfectly. While most of the work gets done, there are some problems here and there, like a theme not getting deployed properly, hooks not being applied, etc. As I understand it, there is some [Liferay SDK] - [Liferay Server] binding, which means that only one server (the first one I created) will fully work.
For example, by investigating [Liferay SDK folder]/build.[user name].properties, I can see some properties that refer to a specific server/JRE location:
app.server.portal.dir
app.server.lib.global.dir
app.server.deploy.dir
app.server.type
app.server.dir
So, my question is: what should I do to work with multiple Liferay projects?
Is the multi-server practice a good approach to working with multiple projects?
If yes, should I create a different SDK for each server? Maybe a different Eclipse workspace too? Or is there some way to use the same SDK?
What about working with servers of different Liferay versions?
Personally, I set up every project with its own source, Tomcat, database, etc., even if it means duplication. These days storage is cheap and makes this possible. Of course your mileage may vary, but I thought I'd share my setup with you.
I have a project directory with all my projects which looks like so:
/projects
/foo-project
/bar-project
/my-project
Inside a project I have
/my-project
/tomcat
/bin
/conf
...
/src
/portal
... my portal source ...
/plugins
... my plugin source ...
/portal-ext.properties
I then set up Tomcat to use different ports (8080, 8081, 8082, etc.) so that I can just leave them all running if I have to or want to.
I set up Liferay to use a different database for each Liferay instance.
I place portal-ext.properties as a sibling of the tomcat directory, and Liferay will read this file (assuming the default behavior). This makes for quick and easy edits, and also makes it easy to see how each project is set up.
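As a sketch, a per-project portal-ext.properties might look like the following; the jdbc.default.* keys are standard Liferay properties, while the database name and credentials are illustrative:

# portal-ext.properties for my-project
jdbc.default.driverClassName=com.mysql.jdbc.Driver
jdbc.default.url=jdbc:mysql://localhost/my_project?useUnicode=true&characterEncoding=UTF-8
jdbc.default.username=liferay
jdbc.default.password=liferay
# skip the setup wizard on first start
setup.wizard.enabled=false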
The advantages should be clear. You can just "walk away" from one project and into another without tearing down and setting up, and when you return everything will still be as you left it. Context switching is also quicker, which helps if you need to answer a question about a project you're not currently working on.
Depending on the complexity of each of your projects, multi-instance might not work for you. Hooks and EXTs may conflict with each other and it appears as if this is already the case with your projects.
If you can afford the space (which is not much) this has been the fastest way I have found as a Liferay developer.
If we start working on a new Liferay project in our company, we setup:
a new database schema,
a new, clean Liferay server connected with that schema and
a fresh Eclipse workspace, with
a clean SDK project
Only this way can you be sure to have cleanly separated projects. To switch to another project, just shut down the current Liferay server, start up the new one and switch to the right workspace in Eclipse. This all takes no more than 2 minutes, a lot less than all the cleanup actions you would have to do if you shared a workspace and server.
In my opinion, this is the approach of most development teams.
Why mess with all these complications on a single computer? I use Oracle VirtualBox and set up a separate VM for each project. Even though I work on a laptop, it has 8 cores, and I've bumped my memory up to 16GB and set each machine up with 4GB of RAM.
I can have multiple VMs running at once and have set all active projects as home pages in Chrome. Using bridged networking each VM has its own IP address, and they all listen on 8080.
Another benefit is that, although my primary project is being developed using Eclipse Indigo and LR 6.1 CE GA1, I have another using Eclipse Juno, its specific IDE plugin and LR 6.1.1 CE GA2. So it also works as a new version tester.
VirtualBox is free. Memory is cheap. And remember that you can put a VM to sleep without shutting it down. That takes about 10-20 seconds and waking it up again takes 30-60 seconds.
The simplest solution would be:
Create 3 different users; the Liferay SDK reads a separate build.[user name].properties file per user. So, let's say you want to run 3 servers with the same SDK. Create 3 files like:
build.user1.properties
build.user2.properties
build.user3.properties
Now, when you want to deploy something to server 1, log in as user1 and deploy the portlet; this will read build.user1.properties and deploy the portlet/hook to the location specified there.
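For illustration, a build.user1.properties could point the SDK at server 1 like this, using the same app.server.* keys listed in the question (the paths are made up):

# build.user1.properties - overrides for server 1
app.server.type=tomcat
app.server.dir=/servers/server1/tomcat-7.0.27
app.server.deploy.dir=${app.server.dir}/../deploy
app.server.lib.global.dir=${app.server.dir}/lib/ext
app.server.portal.dir=${app.server.dir}/webapps/ROOT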
Hope this will resolve your deployment issue.
Also, when you have 3 users, you can run the 3 servers together under different user accounts; that way they stay separate and, apart from an admin, nobody else can shut them down.
Hope this helps!

Update-SPSolution stops application pools on IIS

I have a ps1 script that deploys all of my web parts. I started noticing an error (Error 503 Service Unavailable) after running Update-SPSolution. What is happening is that when I upgrade all my web parts, the application pools for all SharePoint web applications stop. It also takes about 12 minutes per web part to deploy (which seems like forever - it looks like it may be running them all in parallel). Could someone shed some light on the best way to upgrade web parts using Update-SPSolution? Optimally, I would like my script to wait while it fully completes the upgrade of a particular web part, and then move on to the next one when it is finished. Thoughts?
You might get better performance on the upgrade if you set ResetWebServer to false in each solution manifest. Naturally, you would be compelled to reset the web server(s) after all the upgrades, but at least you would only be required to do it once.
You might also consider combining web parts into fewer projects/solutions. This can be challenging, as your web parts' assembly-qualified names are part of the .webpart file, and therefore part of any web part that is still in use.
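To address the asker's wish for the script to wait for each upgrade before moving on, a minimal PowerShell sketch (the wsp folder path is illustrative) can poll each solution's JobExists property so the next update only starts once the previous deployment timer job has finished:

Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Upgrade each package in turn, waiting for its timer job to complete
Get-ChildItem "C:\Deploy\wsp" -Filter *.wsp | ForEach-Object {
    # -GACDeployment is only needed if the solution deploys assemblies to the GAC
    Update-SPSolution -Identity $_.Name -LiteralPath $_.FullName -GACDeployment
    while ((Get-SPSolution $_.Name).JobExists) {
        Start-Sleep -Seconds 5
    }
    Write-Host "$($_.Name) upgraded."
}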
If your solutions are Farm solutions, SharePoint will restart the application pool in order to reload your assemblies.
The only way to completely remove this restart is to use a sandboxed solution. That's not always possible, but depending on your type of customization it may be an answer.
Another option is to have a single solution containing all your web parts. You'll still need an application pool restart, but it should take less than a minute.
12min is really a lot!
Edit:
To merge your WSPs, you'll have to merge your Visual Studio projects into one. It's also possible to do it by hand, but it's not a good choice in the long term.

Single developer process/practice/infrastructure recommendations

So, I'm about to embark on a fairly lengthy, time consuming project that could net me some good results/rewards - and I'd like to give everything the attention and focus it deserves. I will be the sole developer, and I'm experienced in that capacity (about 13 years in the industry). I've just never had to be responsible for EVERY choice so I'd like to throw this out there for some feedback. This is going to be a website.
Dev Tools on Win x64 workstation:
VS2010
SourceGear Client
FileZilla
UltraEdit
SQL 2008 Mgmt Studio
I will have my own DB server machine as well, which will run SQL 2008 and host both the web DB and the SourceGear repository DB.
I'd like to have an automated build process that includes:
pulling the latest code from the repository
checking it against rules (à la FxCop)
compiling the code
running a series of tests against the new compilation (unit tests?)
Any suggestions on tools for these tasks? Should I just write and execute scripts to do certain steps?
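As a sketch of the scripted route: the tool paths below and the SourceGear Vault command-line syntax are assumptions to check against your installed versions, and note that FxCop analyzes compiled assemblies, so the compile step has to come before the rules check:

# build.ps1 - illustrative paths throughout; verify each tool's location and syntax
& vault.exe GETLATESTVERSION '$/MyProject' 'C:\build\src'    # pull latest source
& "$env:WINDIR\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" 'C:\build\src\MyProject.sln' /p:Configuration=Release
& 'C:\Program Files\Microsoft FxCop 10.0\FxCopCmd.exe' "/file:C:\build\src\MyProject\bin\Release\MyProject.dll" "/out:C:\build\fxcop.xml"
& 'C:\Program Files\NUnit 2.5\bin\nunit-console.exe' 'C:\build\src\MyProject.Tests\bin\Release\MyProject.Tests.dll'

Each tool returns a non-zero exit code on failure, so you can check $LASTEXITCODE between steps and stop the run early.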
Backups! - I'd like the source code repository and web files, graphics, media, etc. for the site backed up regularly. I use Mozy for my own personal backups - is there something more suitable for this kind of backup? Windows Home Server or something like that perhaps?
Lastly - what am I not thinking of that needs to be on my radar? For example, I plan on using jQuery but only have limited experience with it - are there any good JavaScript tools besides VS2010? How do most web devs test their sites across the plethora of browsers available? Should I use minifying tools for the web content - and which are best? I've built plenty of web sites & applications before - this is just my first real "commercial" venture and I'd like it to be founded on solid practices.
I, too, am a single developer and use much the same tools as you...
Couple of things...OK...it grew to more than a couple...
I like to use UltraEdit sometimes for editing my JavaScript files...as it has some features that VS2008 lacks...not sure about VS2010...mainly a function list to aid in navigating the file.
I also use JavaScript Lint to check the syntax of my JavaScript files...you can integrate it into UltraEdit...my choice...or VS...or both.
I use Subversion for source code control...VisualSVN for the server and TortoiseSVN for the client...both free.
For backup I recently started using Dropbox...you can point Dropbox at the folder that holds your files and it will sync whenever it starts (and it will keep the files synced on multiple machines...so if you have an offsite machine, you're covered if something happens at your main development site).
If you'll be using LINQ at all...I'd recommend getting a copy of LINQPad. It's free, but you can pay to get IntelliSense...the examples included are fantastic learning tools.
When using jQuery...look for plug-ins that do what you're looking for...

Advice Needed: Deploying application to IIS - Can this be fully automated?

I am seeking advice. Ideally, I would like to give an administrator (of the web server) one file (.exe, .msi, .bat, whatever you suggest), so that when they execute the package, it sets up my application (containing .aspx pages, .xap Silverlight packages, .svc web services, etc.) on IIS. This will include, but certainly not be limited to, IIS Manager tasks like creating a virtual directory, setting the path, the default document, security, and all the other IIS settings one finds via inetmgr and properties. I would also maybe like to run a .bat file (not sure if this is the right approach) to check certain settings and ping other servers for status.
Many years ago, I used to automate everything with things like .bat files - they got the job done and it was amazing what I could do. Fast forward a couple of years, and I am approaching the automation process again. I wanted to know if there is anything new out there.
Any and all advice will be greatly appreciated!
It's quite a learning curve, but yes, WiX / InstallShield / MSI can do this. I've done installers for n-tier / SOA systems, including single-tenant SaaS, where you could run the application-layer installer dozens of times, creating new instances on different host headers or ports, pointed at different data layers and different configuration settings. You could then do the same for the WebUI, pointing it at whichever application layer you want.
Basically, whether it's installing .NET, setting up vDirs / AppPools / WebSites / extensions, reading and writing XML config files, executing SQL scripts, creating services and so on, it can all be done... if you take the time to learn it all. Deployment engineering is a bigger domain than it first appears to be.
As for .BAT, that's bad form. First you work to leverage native capabilities before writing custom actions. Then, when you do have to write one, you design it to be declarative and transactional (install, uninstall, rollback, commit). WiX has a really nice framework called DTF that allows you to encapsulate C# classes as if they were C++ from MSI's perspective, and it provides a nice interop library for talking to MSI during the install.
Visual Studio has a Web Setup Project you can use for this.
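If a full MSI is more than you need, much of the inetmgr work listed in the question can also be scripted with the IIS 7+ WebAdministration PowerShell module. A minimal sketch, where the site name, paths and port are illustrative:

Import-Module WebAdministration

# Application pool and site
New-WebAppPool -Name "MyAppPool"
New-Website -Name "MyApp" -Port 8080 -PhysicalPath "C:\inetpub\MyApp" -ApplicationPool "MyAppPool"

# A virtual directory under the new site
New-WebVirtualDirectory -Site "MyApp" -Name "media" -PhysicalPath "C:\inetpub\MyAppMedia"

# Put default.aspx first in the site's default document list
Add-WebConfiguration -Filter /system.webServer/defaultDocument/files -PSPath "IIS:\Sites\MyApp" -Value @{value="default.aspx"} -AtIndex 0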

Best practices for applying changes to a SharePoint application

I feel like I need a better-defined framework for updating my SharePoint (MOSS 2007) application with custom code changes. I am creating wsp solution files with features and new types and such, but once those get tested and deployed, it feels like a bit of a leap of faith, and that makes me nervous and occasionally reluctant to deploy changes. After deployment, it's difficult to correlate the current state of the SharePoint application with the specific code that is deployed on that SharePoint server. Which features are actually installed, and on which sites? Which features are activated or deactivated? Which version of this custom field or content type is really there? Things like that. If an error crops up, I have to rely on my assumptions about what code is there and actually running, or I have to spend time digging through deployed assemblies and the 12 hive - not impossible, but pretty unpleasant.
What steps should I take to improve my ability to unambiguously determine the state of the application and find the code that truly represents that state? Are there third-party tools that can help with this?
I feel your pain... the application development lifecycle with SharePoint 2007 leaves me with a bitter taste in my mouth.
To answer your question: we built our own deployment utility that does a few things for us.
Checks the state of key timer jobs (too many times we would do a deployment only to find one WFE that did not get the deployment)
Checks the state of key services on all our web front ends (again, we want to know the health of the farm before we start kicking off timer jobs)
Shows the file version and date of selected assemblies in the GAC (across all web front ends). We have seen problems before where assemblies did not get installed correctly across the farm.
Updates web.config settings based on a custom XML schema we provide. We ran into some problems with web.config updates, so we have thought about creating a utility to validate the web.config (specifically, to make sure there are no duplicate entries for specific keys).
Pushes content type updates (the first time content types are deployed via a feature it works great, but as soon as you need to update that content type it gets tough).
Checks the status of the WSP package after deployment or upgrade.
This utility uses the SharePoint API to do most of this work. Some of it is done by checking WMI Events.
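In the same spirit, a short PowerShell sketch against the MOSS 2007 object model (run on a farm server with farm-level permissions) can answer the "what is actually deployed" questions without a full utility:

# Load the SharePoint object model and grab the local farm
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SharePoint")
$farm = [Microsoft.SharePoint.Administration.SPFarm]::Local

# Solution packages and whether each is actually deployed
$farm.Solutions | ForEach-Object { "{0}  deployed={1}" -f $_.Name, $_.Deployed }

# Feature definitions installed on the farm, with their scope
$farm.FeatureDefinitions | ForEach-Object { "{0}  scope={1}" -f $_.DisplayName, $_.Scope }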
Unfortunately the SharePoint development experience is lacking in this regard. As long as you are "namespacing" all features deployed via solution packages, you can use solution management from Central Admin to keep track of versions and of what gets deployed to which site collection.
Features are scoped at all levels, from the farm down to an individual web, so maintenance at that level is a little tough. I just try to organize all deployed code from the (top-down) solution level.
It gets even more complicated when deploying custom timer jobs, event handlers, etc.; I really hope the next version will address a lot of these common developer concerns.
Isn't the only way to have a planned/controlled deployment process and a version management system like TFS?
In the current project I am involved in we have:
Continuous builds
Daily Builds on a development server
When we release something to test, we merge the code to the main branch in the version management system (TFS)
When it is tested and ready for production, we merge the main branch to the release branch
Using this structured approach we always know what is deployed in which environment, and we can also track all changes per environment or per change in requirements (requirements are also tracked in TFS)
