I'm currently running Umbraco in an Azure Web App. Whenever I enable scaling out and the web app starts to scale out, I get the error:
"The process cannot access the file Examine Indexes\write.lock because it is being used by another process."
The website then needs to be restarted before it becomes fully functional again. Is there a setting in Umbraco that I'm missing?
Or is it something that happens with Azure Web Apps Auto Scaling features?
This sounds like an issue with the indexes: your index appears to be getting locked when you scale out. Ideally, if you're running in a load-balanced environment, you should have a single index shared by all instances instead of one per instance. I've used Azure Search for this in the past and it worked perfectly; swapping out the index isn't too difficult with Umbraco, and there's plenty of information available online.
In the future you shouldn't need to restart the entire site; rebuilding the indexes should be enough.
Also, what version of Umbraco are you running? This may be of some help; I encountered similar issues a few months ago, although they were unrelated to scaling:
https://issues.umbraco.org/issue/U4-10735
Sounds like you need to isolate your index files so they aren't shared across the different instances and don't lock each other out. There are a few ways to do this depending on the version you are running, but in 7.3 I think you update the index file location to include the instance name, like ~/App_Data/TEMP/ExamineIndexes/{machinename}/Internal/
For more details, see https://our.umbraco.com/documentation/getting-started/setup/server-setup/load-balancing/flexible#if-you-plan-on-using-auto-scaling
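For reference, that change goes in config/ExamineIndex.config. A rough sketch of what the entries end up looking like under that approach (the index set names here are just examples and vary from site to site):

```xml
<!-- config/ExamineIndex.config (set names are illustrative) -->
<ExamineLuceneIndexSets>
  <!-- {machinename} is expanded per instance, so each scaled-out instance
       writes to its own index folder and no write.lock is ever shared -->
  <IndexSet SetName="InternalIndexSet"
            IndexPath="~/App_Data/TEMP/ExamineIndexes/{machinename}/Internal/" />
  <IndexSet SetName="ExternalIndexSet"
            IndexPath="~/App_Data/TEMP/ExamineIndexes/{machinename}/External/" />
</ExamineLuceneIndexSets>
```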
In the Azure Management Portal, you can configure your website. As an example, you can change the PHP version your website is using. When you have edited a configuration option, you have to click “Save”.
So far, so good. But you also have the option to restart your site (by clicking “Restart“ next to “Save”).
My question is, when should you restart your website? Are there some configuration changes that require a restart, and others that don't? I haven't found any hints in the user interface.
Are there other situations that require a restart? Say, the website has been running for a given time without a restart?
Also, what are the consequences of restarting a website? Does it affect cookies/sessions in any way (i.e. delete a user's shopping cart or log them out)? Are there any other consequences I should be aware of?
Generally speaking, you may want to restart your website because of application performance issues. For example, you may have a memory leak, connections not getting closed, or other things that degrade the performance of the application over time. As you monitor your website and observe conditions like these, you may decide to restart it. Better still, you could automate the task of restarting when these conditions occur. These kinds of things are not unique to Azure Websites, though; you would take similar actions for a website running on-premises.
As for configuration changes, if you make a change to your web.config file, the change is detected and your website is restarted automatically for you. Similarly, if you make configuration changes in the CONFIG page of your website in the Azure Management Portal, such as application settings and connection strings, Azure Websites will detect the change to your environment and automatically restart it.
Indeed, restarting a website will result in any session data kept in memory being lost for that instance. Additionally, if you have startup/initialization code that takes time to complete, it will have to run again. Again, none of this is unique to Azure Websites.
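If losing in-memory sessions on restart is a concern (the shopping cart scenario in the question), the usual mitigation is to move session state out of process, so a restart doesn't empty carts or log users out. A minimal web.config sketch, assuming an ASP.NET session-state database has already been provisioned with aspnet_regsql (the server name is a placeholder):

```xml
<system.web>
  <!-- Out-of-process session state survives app restarts and recycles -->
  <sessionState mode="SQLServer"
                sqlConnectionString="Data Source=mySessionDbServer;Integrated Security=True"
                timeout="20" />
</system.web>
```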
I am currently changing our database deployment strategy to use FluentMigrator and have been reading up on how to run it. Some people have suggested that it can be run from Application_Start. I like this idea, but other people are saying no without specifying reasons, so my questions are:
Is it a bad idea to run the database migration on application start, and if so, why?
We are planning to move our sites to Azure Cloud Services, and if we don't run the migration from Application_Start, how and when should we run it, considering that we want to keep the deployment as simple as possible?
Wherever it is run, how do we ensure it runs only once? We will have a website and multiple worker roles as well (although we could just ensure the migration code is only called from the website; but if we scale to two or more instances in the future, could it run more than once?)
I would appreciate any insight on how others handle migrating the database during deployment, particularly deployments to Azure Cloud Services.
EDIT:
Looking at the comments below, I can see the potential problems of running during Application_Start. Perhaps the issue is that I am trying to solve the problem with the wrong tool. FluentMigrator may not be the way to go in our case: we have a large number of stored procedures, views, etc., so as part of the migration I would have to use SQL scripts to keep them at the right version, and I don't think migrating down would be possible.
What I liked about the idea of running during Application_Start was that I could build a single deployment package for Azure, upload it to staging, have the database migration run as part of that rather than running manual scripts, and then just swap into production.
Running migrations during Application_Start can be a viable approach, especially during development.
However there are some potential problems:
Application_Start will take longer, and FluentMigrator will run every time the app pool is recycled. Depending on your IIS configuration, this could be several times a day.
If you do this in production, users might be affected; for example, trying to access a table while it is being changed will result in an error.
DBAs don't usually approve.
What happens if the migrations fail on startup? Is your site down then?
My opinion: for a site with a decent amount of traffic, I would prefer to have a build script and more control over when the database schema changes. For a hobby (or small, non-critical) project, this approach would be fine.
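If you do go the startup route anyway, the in-process runner is straightforward to wire up, and a database-side application lock also addresses your third question (running only once across instances). A rough sketch against the legacy FluentMigrator 1.x runner API of the time (the type names changed in later versions, and the helper name MigrationBootstrapper is made up), using sp_getapplock as the guard:

```csharp
using System;
using System.Data.SqlClient;
using System.Reflection;
using FluentMigrator.Runner;
using FluentMigrator.Runner.Announcers;
using FluentMigrator.Runner.Initialization;
using FluentMigrator.Runner.Processors;
using FluentMigrator.Runner.Processors.SqlServer;

public static class MigrationBootstrapper
{
    // Call from Application_Start. sp_getapplock ensures that when several
    // instances start simultaneously, only one runs the migrations; the
    // others wait on the lock, then find nothing left to apply.
    public static void MigrateUp(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                // Session-scoped lock, released when the connection closes.
                cmd.CommandText = @"DECLARE @r int;
                    EXEC @r = sp_getapplock @Resource = 'db-migrations',
                        @LockMode = 'Exclusive', @LockOwner = 'Session',
                        @LockTimeout = 60000;
                    SELECT @r;";
                if ((int)cmd.ExecuteScalar() < 0)
                    throw new InvalidOperationException("Could not acquire migration lock.");
            }

            var announcer = new TextWriterAnnouncer(s => System.Diagnostics.Debug.WriteLine(s));
            var context = new RunnerContext(announcer);
            var factory = new SqlServer2008ProcessorFactory();
            var processor = factory.Create(connectionString, announcer, new ProcessorOptions());

            // Runs every [Migration] class in this assembly that hasn't run yet.
            var runner = new MigrationRunner(Assembly.GetExecutingAssembly(), context, processor);
            runner.MigrateUp(true);
        }
    }
}
```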
An alternative approach that I've used in the past is to make your migrations non-breaking; that is, you write your migrations in such a way that they can be deployed before any code changes and still work with the existing code. This way, code and migrations can be deployed independently 95% of the time. For example, instead of changing an existing stored procedure you create a new one, and instead of renaming a table column you add a new one. There's a sketch of this style after the list below.
The benefits of this are:
Your database changes can be applied before any code changes. You're then free to roll back any breaking code changes or breaking migrations.
Breaking migrations won't take the existing site down.
DBAs can run the migrations independently.
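As a concrete illustration of the additive style (the table and column names here are made up):

```csharp
using FluentMigrator;

[Migration(201403011200)]
public class AddCustomerDisplayName : Migration
{
    public override void Up()
    {
        // Additive and nullable: code that predates this column keeps working,
        // so the migration can be deployed ahead of the code change.
        Alter.Table("Customer").AddColumn("DisplayName").AsString(255).Nullable();
    }

    public override void Down()
    {
        Delete.Column("DisplayName").FromTable("Customer");
    }
}
```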
I have been playing around with Kudu on an IIS development server and succeeded in making it deploy a hello world site etc.
But I was wondering whether there are resources on how to deploy Kudu in larger environments, so that you can quickly add new (virtual) server nodes to balance load.
Is there an approach to managing multiple Kudu deployments from a centralized location? I know it has a REST API, and we could probably use that, but it requires some development, so I was looking to see whether an existing solution is out there.
So far I have either been searching for the wrong thing on Google, or there isn't much covering what I need.
Does anyone have any experience in running it in larger environments with many servers?
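For what it's worth, short of an existing tool, a small fan-out script over the REST API may be all that's needed. A sketch, assuming each node exposes Kudu's zip-deployment endpoint (/api/zipdeploy; older builds instead take PUT /api/zip/site/wwwroot) with basic-auth credentials; the node URLs and credentials are placeholders:

```csharp
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class KuduFanOut
{
    // Pushes the same zipped site package to every Kudu node in turn.
    static async Task Main()
    {
        var nodes = new[] { "https://node1.example.com", "https://node2.example.com" };
        var auth = Convert.ToBase64String(Encoding.ASCII.GetBytes("deployUser:deployPassword"));
        var package = File.ReadAllBytes("site.zip");

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", auth);
            foreach (var node in nodes)
            {
                var content = new ByteArrayContent(package);
                content.Headers.ContentType = new MediaTypeHeaderValue("application/zip");
                var response = await client.PostAsync(node + "/api/zipdeploy", content);
                Console.WriteLine("{0}: {1}", node, response.StatusCode);
            }
        }
    }
}
```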
The company I'm working for likes to manually rebuild the Lucene indexes using /admin/toolbox/rebuild-index.aspx in Sitecore 6.6. Once the indexes have been rebuilt, they copy the files to each Content Delivery server manually and then restart the app pool on each CD server.
At the moment, due to the way the site was built, it has a long start-up time (this is being fixed sometime in the future), so restarting the app pools is a pain. My question is:
Does one need to restart the app pools for the new index files to be picked up?
Yes. The files will be locked by Lucene while it's running.
I guess, theoretically, it would be possible to get Lucene running on the CD servers in a read-only mode, but I've never attempted this myself and don't know of a way to achieve it offhand.
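For what it's worth, at the raw Lucene.Net level read-only readers do exist and don't hold the write.lock; a sketch of the idea below, using the Lucene.Net 3.x API (whether Sitecore 6.6's configuration exposes this is another matter, and the index path is a placeholder):

```csharp
using System.IO;
using Lucene.Net.Index;
using Lucene.Net.Store;

// A read-only reader never creates or holds write.lock, so it won't
// conflict with an index that is rebuilt and copied in from elsewhere.
var directory = FSDirectory.Open(new DirectoryInfo(@"C:\inetpub\cd1\indexes\web"));
var reader = IndexReader.Open(directory, true); // true = read-only

// Later: pick up newly copied segments without recycling the app pool.
var refreshed = reader.Reopen();
if (refreshed != reader)
{
    reader.Dispose();
    reader = refreshed;
}
```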
If you are going to be doing fixes on the site in the future, might I suggest you move the indexing off the servers? Implement a centralised SOLR index. That way, once rebuilt, it is immediately available to any and all CD servers, with no need for copying files or restarting app pools.
Sitecore's scalability settings may be something you need for a multi-server implementation. Whether they cover Lucene is something you may need to find out.
We are currently using MVC3, .NET 4.5, EF 6.1, MSSQL 2008 (dev), and SQL Azure (test and live). Our application is quite complicated, and we are encountering significant warm-up lags of around 30 seconds after an application pool refresh. We use external auto-ping services to keep the sites warm, which is OK-ish... However, a much better solution would be to deploy native images, so that whenever an app pool refreshes for whatever reason, we know the application will load as quickly as possible.
Hence the reason for investigating NGEN.
However I am unsure whether this is possible for Azure Websites. Some questions I have:
1) NGen requires admin privileges. As I understand it, I would need admin privileges to install native images on Azure Websites; or could I generate them on a local machine with the same CPU and copy them across?
2) NGen now requires Full Trust; I believe this is not an issue with WAWS.
3) Does NGen only install into the native image cache, without producing some sort of file that could be copied to a different location?
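For context, this is the local workflow I'm referring to in 1) and 3): the tool must run elevated and, as far as I can tell, writes only into the machine-wide native image cache rather than producing a portable file (paths as on my dev machine):

```
REM Must run from an elevated prompt; output lands in the Native Image Cache
REM (%WINDIR%\assembly\NativeImages_*), not next to the assembly itself.
%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\ngen.exe install MyApp.dll

REM Inspect the cached native images for the assembly
%WINDIR%\Microsoft.NET\Framework64\v4.0.30319\ngen.exe display MyApp
```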
Thanks in advance.