Proper setting of the Umbraco distributedCall option (multi-server cluster setting) - IIS

We have a server setup with three servers for our Umbraco site:
1 is the "editor". This is where people add content. It can serve pages back to the editors, but the public never sees it.
2 are the "web servers". These serve content to the public. They NEVER allow editing.
Until now, we have run this as 2 servers - one serving the public AND allowing editing, the other just serving the public.
What's the correct setup for the distributedCall config option? I'd normally list all nodes in the cluster, but do I need to list the editor one in this case? Or just the two "public" ones? I assume that it does something like:
commit the publish locally
call server A in the list
call server B in the list
which would mean I don't need to list the editor in the list (which I'd rather not do)
[Umbraco 4.7 BTW]

Answering my own questions again. Sigh.
Ended up "just doing it" on our live site. In the end:
Editing server: has distributedCall turned ON. Points to both web nodes AND itself.
2 web nodes: have distributedCall turned OFF.
I originally had distributedCall turned on on the 2 web nodes as well, but they tended to call back on themselves, which was problematic.
Fixed.
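For reference, this is roughly what the editing server's umbracoSettings.config ended up looking like (hostnames here are made up; in 4.x the <user> element is the id of the user the distributed calls run as, typically 0 for the admin):

    <distributedCall enable="true">
      <user>0</user>
      <servers>
        <server>editor01.mysite.local</server>
        <server>web01.mysite.local</server>
        <server>web02.mysite.local</server>
      </servers>
    </distributedCall>

On the two web nodes, the same section simply has enable="false".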

Related

IIS 10 Static Website: Deleting the default site and creating a completely new site (how to access the new site)

This post needs help from experienced IIS administrators, but the answers must be explained in detail for EXTREME newbies.
What I am doing:
I have two computers, both running Windows 10. One is a desktop and one is a laptop.
IIS is enabled on both computers. Each computer can access the IIS web server on the other and pull up a page from it, using the IP address.
There is no DNS or hosts file in use (this is by IP address only), nor do I want to use any sort of naming.
Both computers are running an identical website, and the website files are in a different directory than the default. The structure is like this:
C:\inetpub\ROOT\myWebsite\
    myIndex.html
    web.config
Changes I've made - now a few problems.
On both computers I have deleted the DefaultAppPool and the default website that comes installed with IIS. This has not stopped my website from working, so adding those back seems unlikely to fix my problem.
I have deleted my application pool and website from IIS (never deleting the actual files from the file system) several times, and added them back several times. Each time I do this, my site comes back, but with the same problem I am having.
I have deleted all of the default documents, and the only default document listed in IIS is myIndex.html.
myIndex.html initially displays a graphic image (using the standard <img> tag), and this image comes up. Sort of. See the explanation below.
The problem I am having
Before I started this project, I had IIS working on the desktop with the default site and app pool, and simply added some of my own files with really simple text content and some pics. I had replaced the default IIS splash image with my own image, and all of that worked with no problem.
The image that comes up is a link to another page that has a list of links to other stuff in my website. It all works, no problem there.
Now, with my current setup, if I pull up my website locally on the desktop I was originally using (in the paragraph above), myIndex.html loads in the browser, my image comes up, and everything works fine.
The same is true on the laptop, when I access the site locally.
However, if I attempt to access the desktop's site (using its IP address) from the laptop, it pulls up the old splash image from the default site I deleted. (I left those files there even though I deleted the site from within IIS.) All those files are in the default location, C:\inetpub\wwwroot.
If I move those files to another directory, thus leaving C:\inetpub\wwwroot completely empty, then when I access the site on the desktop (via the IP address) from the laptop, my new site comes up without a problem.
While it seems I may have solved my problem by moving the files from the previous project, doing that does not teach me how IIS actually works, or why files from a website that no longer exists in IIS are still being served to remote computers.
So, please teach me something about the internal workings of IIS, and how it chooses which application pools and websites to serve.
Again, please word your answers for complete newbies, because I know a little, but not enough to get really technical.
I have been reading posts on stackexchange.com and other sites, plus links to Microsoft docs etc. That's not helping, as those docs expect too much prerequisite knowledge and speak in terms that don't really explain things in a way I can understand.
You have described several different problems. I will try to address each of them (contrary to S/O recommendations).
First, when you make changes and they don't seem to show up, it is usually because of caching. IIS always wants to cache files/configs. So does your web browser. So, to force an accurate test, you need to dump your browser cache and cycle IIS (to make sure it drops its cache and loads the new files and configs). Start there.
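For example, from an elevated command prompt (the app pool name here is a guess based on your site; check yours in IIS Manager):

    iisreset /restart
    %windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"myWebsite"

The first command restarts all of IIS; the second recycles just one application pool. Then clear your browser cache (or test in a private window) before retesting.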
Second, IIS is designed around settings inheritance: each app and each folder inherits settings and permissions from its parent, unless you override them. Overriding can be done by files and/or IIS configs (application vs folder). The IIS configs are the stronger of the two.
Also, the IIS config for "default files" might have come into play in your test. If you didn't set up myIndex.html as the top-most default file, then IIS would look for other files first. In fact, if you don't have myIndex.html in the list of default files at all, IIS would have to depend on your app to choose it as the default page (MVC routing, etc.).
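If you want to check or fix that from the command line, something like this should do it (assuming your site is named "myWebsite" in IIS):

    %windir%\system32\inetsrv\appcmd.exe list config "myWebsite" /section:defaultDocument
    %windir%\system32\inetsrv\appcmd.exe set config "myWebsite" /section:defaultDocument /+"files.[value='myIndex.html']"

The first command shows the current default-document list; the second adds myIndex.html to it. IIS walks that list top-down and serves the first matching file it finds in the folder.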

Sitecore 8.1 Lucene not updating - how to identify if an index has been fully built?

We are using Sitecore 8.1 with the Lucene search provider, 1 CM and 2 CDs. The solution is hosted in Azure Web Apps.
We noticed that when a content author publishes or updates an article, the changes are seen by some users/browsers and not by others.
I suspect this is due to the index not being built on one of the CDs (as the History Engine is not enabled). In the past I could troubleshoot this by RDPing to the Azure Web Role VM or similar and inspecting the date/time of the Lucene index files.
The above is not possible with a Web App, as you can't RDP or FTP to specific instances.
So..
Is there a way in Sitecore to find out whether the index has been 100% built on each of the N CDs?
Is it true that the History Engine MUST be turned on if we have more than 1 CD?
If there are N (where N > 1) CDs, does one of the CDs get rebuilt instantly after publish:end? This is what we have noticed and it confuses me.
Any reason why the History Engine section might be missing out of the box?
Thanks.
Don't know.
My understanding is that you need the History Engine "on" if you have ANY CDs.
The combined instance (CM and CD on the same instance) does not need a History Engine, as it gets updated instantly.
I would expect it to be missing, as the default installation is not intended for scaling. Also, I would mention that you need all the CD instances that you publish to explicitly listed in web.config (or added through an Include). Please see this post from Alen Pelin: http://blog.alen.pw/2012/06/lucene-index-isnt-updated-on-cd.html
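For what it's worth, the History Engine is usually switched on for the web database with a patch file along these lines (a sketch of the commonly documented snippet; the file name, e.g. App_Config/Include/HistoryEngine.config, and the EntryLifeTime are up to you):

    <configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
      <sitecore>
        <databases>
          <database id="web">
            <Engines.HistoryEngine.Storage>
              <obj type="Sitecore.Data.$(database).$(database)HistoryStorage, Sitecore.Kernel">
                <param connectionStringName="$(id)" />
                <EntryLifeTime>30.00:00:00</EntryLifeTime>
              </obj>
            </Engines.HistoryEngine.Storage>
          </database>
        </databases>
      </sitecore>
    </configuration>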

Liferay With Multiple Server Instances

I'm working with multiple Liferay projects (different portal, plugins, users and user groups, etc.) at the same time, and often have to switch between them. This switch requires lots of steps, like:
Editing portal-ext.properties (to change the Liferay database and some custom project-specific properties), and editing portal-setup-wizard.properties
Adding/removing portlets, themes and hooks from the Eclipse server instance, and sometimes cleaning Tomcat's 'data', 'webapps' and 'work' folders
Going to Liferay's Control Panel/Server/Plugins Installation and re-indexing portlets like 'Users and Organizations' or 'Documents and Media'
So, I thought that creating a new server instance for each project, with its own Tomcat and JRE, would be a nice idea. When I had to switch projects, I could just stop the old server and start another. At first, I thought (was advised, actually) that using the same Liferay Plugins SDK (6.1.0) should be OK, as long as the server instances are the same version.
In practice this doesn't work 100%. While most of the work gets done, there are problems here and there, like a theme not getting deployed properly, hooks not being applied, etc. As I understand it, there is some [Liferay SDK] - [Liferay Server] binding, which means that only 1 server (the first one I created) will fully work.
For example, by investigating [Liferay SDK folder]/build.[user name].properties, I can see some properties that refer to a specific server/JRE location:
app.server.portal.dir
app.server.lib.global.dir
app.server.deploy.dir
app.server.type
app.server.dir
So, my question is: what should I do to work with multiple Liferay projects?
Is the multi-server practice a good approach for working with multiple projects?
If yes, should I create a different SDK for each server? Maybe a different Eclipse workspace too? Or is there some way to use the same SDK?
What about working with servers of different Liferay versions?
Personally, I set up every project with its own source, Tomcat, database, etc., even if it means duplication. These days storage is cheap and makes this possible. Of course your mileage may vary, but I thought I'd share my setup with you.
I have a project directory with all my projects which looks like so:
/projects
    /foo-project
    /bar-project
    /my-project
Inside a project I have
/my-project
    /tomcat
        /bin
        /conf
        ...
    /src
        /portal
            ... my portal source ...
        /plugins
            ... my plugin source ...
    /portal-ext.properties
I then set up each Tomcat to use different ports (8080, 8081, 8082, etc.) so that I can just leave them all running if I have to or want to.
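Note that it's not just the HTTP connector: the shutdown and AJP ports in each instance's tomcat/conf/server.xml must be unique too. An excerpt of the relevant lines (port numbers are just an example; shift all three per project):

    <Server port="8005" shutdown="SHUTDOWN">
      ...
      <Connector port="8080" protocol="HTTP/1.1" redirectPort="8443" />
      <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />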
I set up Liferay to use a different database for each Liferay instance.
I place portal-ext.properties as a sibling to the tomcat directory, and Liferay will read this file (assuming the default behavior). This makes for quick and easy edits, and makes it easy to see how each project is set up.
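A minimal portal-ext.properties for one project might look like this (MySQL assumed; the database name and credentials are made up):

    jdbc.default.driverClassName=com.mysql.jdbc.Driver
    jdbc.default.url=jdbc:mysql://localhost/lportal_myproject?useUnicode=true&characterEncoding=UTF-8
    jdbc.default.username=liferay
    jdbc.default.password=secret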
The advantages should be clear. You can just "walk away" from one project and into another without tearing down and setting up, and when you return everything will still be as you left it. Context switching is also quicker, which is helpful if you want to answer a question about a project you're not currently working on.
Depending on the complexity of each of your projects, multi-instance might not work for you. Hooks and EXTs may conflict with each other, and it appears as if this is already the case with your projects.
If you can afford the space (which is not much), this has been the fastest way I have found as a Liferay developer.
If we start working on a new Liferay project in our company, we set up:
a new database schema,
a new, clean Liferay server connected to that schema, and
a fresh Eclipse workspace, with
a clean SDK project
Only this way can you be sure of having cleanly separated projects. To switch to another project, just shut down the current Liferay server, start up the new one and switch to the right workspace in Eclipse. This all takes no more than 2 minutes, a lot less than all the cleanup actions you'd have to do if you shared the workspace and server.
In my opinion, this is the approach of most development teams.
Why mess with all these complications on a single computer? I use Oracle VirtualBox and set up a separate VM for each project. Even though I work on a laptop, it has 8 cores, and I've bumped my memory up to 16GB and set each machine up with 4GB of RAM.
I can have multiple VMs running at once and have set all active projects as home pages in Chrome. Using bridged networking, each VM has its own IP address, and they all listen on 8080.
Another benefit is that, although my primary project is being developed using Eclipse Indigo and LR 6.1 CE GA1, I have another using Eclipse Juno, its specific IDE plugin and LR 6.1.1 CE GA2. So it also works as a new-version tester.
VirtualBox is free. Memory is cheap. And remember that you can put a VM to sleep without shutting it down: that takes about 10-20 seconds, and waking it up again takes 30-60 seconds.
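Most of that can be scripted with VBoxManage if you prefer; a rough sketch (the VM name is a placeholder, and the exact bridge adapter name depends on your host):

    VBoxManage modifyvm "liferay-foo" --memory 4096 --nic1 bridged --bridgeadapter1 "eth0"
    VBoxManage startvm "liferay-foo" --type headless
    VBoxManage controlvm "liferay-foo" savestate

The last command is the scripted equivalent of putting the VM to sleep.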
The simplest solution would be:
Create 3 different users; the Liferay SDK's build.[user].properties file is separate for each user. So, let's say you want to run 3 servers with the same SDK. Create 3 files like:
build.user1.properties
build.user2.properties
build.user3.properties
Now, when you want to deploy something to server 1, log in as user1 and deploy the portlet; this will read build.user1.properties and deploy the portlet/hook to the location specified there.
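Each of those files would then point at its own server, along these lines (the paths are hypothetical):

    app.server.type=tomcat
    app.server.dir=C:/servers/server1/liferay-portal-6.1.0/tomcat-7.0.23
    app.server.deploy.dir=${app.server.dir}/webapps
    app.server.lib.global.dir=${app.server.dir}/lib/ext
    app.server.portal.dir=${app.server.dir}/webapps/ROOT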
Hope this will resolve your deployment issue.
Also, with 3 users you can run the 3 servers together under different user accounts; that way they stay isolated, and nobody apart from an admin can shut down another user's server.
Hope this helps!

How to back up and restore IIS configuration from a script

I'm writing a script that sets up a lot of different applications in Windows (mainly SVN and open source servers for HTTP, DNS, mail, FTP and DB). This script is intended to be executed on new/clean Windows workstations for new developers; it automatically sets everything up to create an environment very similar to the one in production. After it's executed, everything runs locally and the developer can start working right away.
This not only helps new developers, but also all existing developers: whenever there are changes to the whole system, everything is replicated locally.
The one thing I'm still not able to do is make some kind of backup of an IIS server that is running a web app (it's on the Prod server) and restore it automatically on the new developer's machine, so he doesn't have to install/configure IIS locally.
I've read about using appcmd.exe to create and restore backups, but that works only on the same machine (it uses encryption keys, and those keys change between computers).
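For reference, the commands I'm talking about are just these (the backup name is arbitrary; the backups land in %windir%\system32\inetsrv\backup on that machine):

    %windir%\system32\inetsrv\appcmd.exe add backup "MyBackup"
    %windir%\system32\inetsrv\appcmd.exe restore backup "MyBackup"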
Is there a way - a scriptable way - to take everything IIS-related from one server and restore it on another, without user intervention, and have the restored IIS run exactly like the original?
Thanks in advance!
Francisco
Just putting this here so anyone who comes across this will understand why it wasn't answered. A website has a massive number of variables associated with it, which prevents any easy method of copying all of its configuration with one or even just a few cmdlets.
To get started, though, you would want to become very familiar with the applicationHost.config file and how you access the properties within it using Get-WebConfigurationProperty. One way to get familiar with scripting against web configuration properties is to use the Configuration Editor in IIS. Whenever you make a change in the Configuration Editor, before committing the changes there is a nifty little link titled Generate Script, which has a PowerShell tab you can use to gather the proper Get/Set commands for the configuration elements within applicationHost.config.
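To give a flavour of what those generated commands look like (the site name here is just an example):

    Import-Module WebAdministration
    # Read the default-document list for one site
    Get-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' -Filter 'system.webServer/defaultDocument/files' -Name 'Collection'
    # Flip a single property on the same site
    Set-WebConfigurationProperty -PSPath 'IIS:\Sites\Default Web Site' -Filter 'system.webServer/directoryBrowse' -Name 'enabled' -Value $true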
I've created something almost exactly like what the OP is looking for; it spans 4 modules (over 20,000 lines of code) and has a SQL backend that holds all of the configuration elements.
A website involves everything from underlying DLLs that may need to be registered, ISAPI/CGI restrictions and ISAPI filters, and accounts tied to the app pool that may need to be added to certain local groups on the server, to secure bindings that require a certificate to be loaded on the server. You can see that this isn't a simple undertaking (and these are just a small portion of the variables that a website may contain).
There is, however, a large set of cmdlets in the WebAdministration module that Microsoft provides out of the box, which you can leverage to aid you in developing something like this. I know this is four years old, but I hope anyone who stumbles on this will find the above useful.

Clustered Web Servers Failing

I have three web servers running Windows Server 2008. Two are clustered, the third is a standalone server (two live, one test). They use shared configuration, with the configuration file located on a central file server. Every so often one live web server will stop responding. The event log shows the following error:
The worker process for application pool 'My Website' encountered an error 'Configuration file is not well-formed XML' trying to read configuration data from file '\\?\C:\inetpub\temp\apppools\My Website\My Website.config', line number '3'. The data field contains the error code.
The config file contains the following data:
<!-- ERROR: There's been an error reading or processing the applicationhost.config file. Line number: 0 Error message: Cannot read configuration file -->
There is nothing in the event viewer on the file server.
When I restart the web server everything works fine.
Any ideas?
Edit
I have around 30 websites. 10 are true standalone websites running in their own application pools. The other 20 are old websites that just redirect all requests to a different URL (some on my server, some external); these share the same application pool.
One of the 10 "standalone" websites runs PHP. One is .NET 2.0. One is classic ASP with two virtual directories set up to run as .NET 2.0 applications. The other 7 run classic ASP only.
It's quite old, but I came here because I had this issue today and it seems to still be around. The cause is usually the DFSR feature, which is especially likely if one runs a cluster. There are three possible solutions.
Change a registry key to reduce the speed of synchronization and avoid the file lock that leads to the error, as described here: https://blogs.msdn.microsoft.com/asiatech/2013/12/01/you-may-experience-configuration-file-is-not-well-formed-xml-error-while-using-dfsr-to-synchronize-the-iis-configuration-files/
Install a hotfix provided by Microsoft. The direct link is https://support.microsoft.com/de-de/hotfix/kbhotfix?kbnum=960412&kbln=en-US
Configure your DFSR environment properly and exclude the folder *\inetpub\temp\apppools* explicitly (after DFSR has replicated applicationHost.config, WAS will rebuild those files anyway); see the sketch below.
Despite the hotfix, I personally like the third option most.
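As a sketch, with the DFSR PowerShell module the exclusion from option 3 looks something like this (the replication group and folder names are whatever you used in your own setup, and the parameter filters by directory name, so "apppools" covers inetpub\temp\apppools):

    Set-DfsReplicatedFolder -GroupName "IIS Shared Config" -FolderName "inetpub" -DirectoryNameToExclude "apppools"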
The purpose of this late answer is documentation, as others may come here and the page seems well ranked on Google.
This is a long shot but...
If the file is accessed through NetBIOS and is accessed regularly, you may be hitting the file server too hard, and Windows may think you are attempting a DoS attack, so it starts rejecting requests. The same thing may also be caused by a firewall interpreting the traffic to the file server that way.
Also, following on from this, check whether the temp folder for the IIS user is full; the problem may not be the file server, but a failure to temporarily store the config file.
Can you share some volume information about your server layout? (# application pools, # websites, etc.)
Edit:
I assume you're not interested in workarounds to this, especially if it means changing the location of the file :)
