I am trying to deploy a large web site to Azure as a Web Role. However, on the Instances tab of the Azure dashboard, it tells me the role suffers an error during startup, causing it to restart over and over again.
Where can I find log files that will tell me what specifically is going wrong? The manage.windowsazure.com site doesn't seem to have any.
First, debug on your dev machine. Make sure you deployed the right .cscfg file, that you don't have any broken connection strings, and that you're referencing the right versions of the DLLs (the same as on Azure's VMs) or copying newer versions to Azure. If those checks turn up nothing, read this topic on WindowsAzure.com and the topics in this node on MSDN. The Hello World code sample also has a basic demonstration of diagnostics that should be helpful.
The basics of diagnostics in Windows Azure:
Must be manually enabled for each role by importing the Diagnostics module in your ServiceDefinition.csdef file
A storage location needs to be configured for the resulting logs in your ServiceConfiguration.cscfg file, such as the storage emulator or a Windows Azure Storage account. Depending on the types of logs, they are stored in either blobs or tables (a minimal sketch of both files follows this list)
You can either configure diagnostics collection programmatically or with a file that is read when your role starts and can be updated on-the-fly
You can control how often diagnostics data is transferred to your storage account (important because transactions, transfer, and storage cost money) and which performance counters or other metrics you need collected
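A minimal sketch of the two files mentioned above (role name, account name, and key are placeholders):

    <!-- ServiceDefinition.csdef: import the Diagnostics module for the role -->
    <WebRole name="MyWebRole">
      <Imports>
        <Import moduleName="Diagnostics" />
      </Imports>
    </WebRole>

    <!-- ServiceConfiguration.cscfg: point the resulting logs at a storage
         account (or at the emulator with "UseDevelopmentStorage=true") -->
    <Role name="MyWebRole">
      <ConfigurationSettings>
        <Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"
                 value="DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=..." />
      </ConfigurationSettings>
    </Role>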
There is a series of four blog posts at http://blogs.msdn.com/b/kwill/archive/2013/08/09/windows-azure-paas-compute-diagnostics-data.aspx that walks you through, step by step, how to troubleshoot a role startup failure, including log file locations and more.
Related
When creating an Azure Function with a Dedicated (Standard) App Service plan, the file service I'm expecting to get "linked" up isn't getting "linked" to the storage account. However, the storage account does get created correctly. When I go to the Azure Storage Accounts blade and find the file storage, Azure doesn't have the file service linked to the storage account. I don't see any linked File Shares when using the Windows desktop software, Microsoft Azure Storage Explorer (0.9.6).
When I go to the Azure Function's Advanced Tools (Kudu) I can see the storage account folders "Data", "LogFiles", and "Site" with the wwwroot I'm expecting to find. However, due to certain network restrictions I can't upload the code through the website, so that option is out.
When creating a Consumption-based plan everything links up nicely and I can manage them in the Azure Storage Explorer app. How can I get my already created File Service File Share linked to my already created Azure Function so I can manage it in the desktop app and get things appearing correctly linked up in Storage Explorer?
Here's a solution you can refer to; it works on my side.
In the portal, on the Application Settings tab, add the two application settings below (a scripted alternative follows):
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING: the storage account connection string
WEBSITE_CONTENTSHARE: the file share name
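If you'd rather script this than click through the portal, an Azure CLI sketch (app, resource group, and share names are placeholders):

    az functionapp config appsettings set \
        --name MyFunctionApp \
        --resource-group MyResourceGroup \
        --settings "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING=<storage connection string>" \
                   "WEBSITE_CONTENTSHARE=myfileshare"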
And some explanation.
When creating a function app, the storage account we specify is mainly used to store logs and files such as host locks.
Only for a function app created in a Consumption plan does Azure automatically add the two application settings above and use a File Share to store the whole function app by default.
As the Azure documentation says, the File Share-related settings are for the Consumption plan only. This doesn't seem to be an expected configuration for a function created in an App Service plan, but it works in practice anyway.
Update
For a function app created in an App Service plan, assume its files are stored in place A (somewhere on the server). It works well and Kudu displays the files stored in A. So far it has nothing to do with a file share.
Then we add the two settings, and assume the file share is B.
The system retains any old files in B (if they exist) and creates an empty function app there. From then on, the system targets and uses the files in B, as long as the "link" (the two settings) exists. In the portal, Kudu, or App Service Editor, we see the files in B, and changes are saved there as well.
And if we delete the "link", everything returns to A. You need to wait a little while for the system to "redirect".
All of this explanation is based on my own testing (dozens of runs), since it's an unexpected configuration and has no documentation.
I am developing with Azure storage locally using VS2015. I created and accessed my development storage blob container fine. I uploaded three images and have code that calculates their sizes.
For some unknown reason, I cannot expand the Blob Containers node in Cloud Explorer any more. I.e. Cloud explorer > Storage Accounts > (Development) > Blob Containers. Doing so results in the following error message:
Cloud Explorer has encountered an unexpected error: Unable to retrieve child resources.
It has been working fine in the past, so I'm not sure what's changed. I know there are containers inside, and I can seemingly create one, but then it doesn't show up in the list.
It works for live Azure storage accounts but not development.
I can still write code against this development container, so it's there and functional, but Cloud Explorer is just not listing the containers, i.e. there's no access to view or upload files through the VS UI.
Here are my steps to resolve it:
Uninstall Cloud Explorer via Extensions and Updates
Restart VS
Update the Cloud Explorer in Extensions and Updates (that apparently wasn't uninstalled)
Experience the catastrophic behaviour (very slow)
Restart VS
Fixed (seemingly)
Look into Extensions and Updates to see Cloud Explorer is disabled
Everything is fixed.
Update (29/07/2018)
If you are having trouble launching Microsoft Azure Storage Explorer against the (Development) account, e.g. blob storage, and get the error message "Unable to retrieve child resources" followed by details of "A network error occurred...ECONNREFUSED 127.0.0.1:10002", then simply (install and) run the Azure Storage Emulator.
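On a default SDK install, the emulator can be started from a command prompt (the path below assumes the default install location):

    "C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator\AzureStorageEmulator.exe" start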
The aforementioned solution didn't work for me.
The error went away after upgrading to Azure SDK 3.0 (using web platform installer). After that I am able to expand the child resources in App Services, and attach the debugger.
Another option that worked was using the Server Explorer to expand files/attach the debugger, but that option seems to have been turned off in 3.0.
The storage account name is case sensitive to the Azure service; the client, however, is not.
Because of this you can connect, but when Storage Explorer tries to enumerate the child objects, it will fail if the storage account name was not entered with the proper case.
I got this error when my system clock was accidentally set a couple of hours back in time. Just saying.
I have an Azure Cloud Service out in production. I recently received the following message from Microsoft about upcoming service for their Cloud Services platform:
All Cloud Services running web and/or worker roles referenced below will experience downtime during this maintenance. Cloud Services with two or more role instances in different upgrade domains will have external connectivity at least 99.95 percent of the time. Please note that the SLA guaranteeing service availability only applies to services that are deployed with more than one instance per role. Azure updates one upgrade domain at a time. For more information about distribution of roles across upgrade domains and the update process, please visit the Update an Azure Service webpage.
The way I take the email, each instance is essentially a VM on a different host, and they'll be rebooting hosts throughout the maintenance period, so if I don't want to be out of service during this time, I need to ensure I have more than one instance. Is this accurate? If so, how do I "add" an instance?
That is correct.
You can increase your instance count by updating the configuration for the role. In Visual Studio, you can do this in the Properties window for the role and increase the Instance count setting. Then, redeploy the service.
A faster way is to download the configuration file (.cscfg) for the role from the Azure portal, update the instance count setting, and then upload the changed configuration file. The setting is in the Instances element shown here.
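A sketch of the relevant element (service and role names are placeholders):

    <!-- ServiceConfiguration.cscfg -->
    <ServiceConfiguration serviceName="MyService">
      <Role name="MyWebRole">
        <Instances count="2" />
      </Role>
    </ServiceConfiguration>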
You can download the configuration file for the role from the Azure portal (portal.azure.com) by going to the Cloud Service blade and clicking on Settings in the toolbar. In the Settings blade click Configuration. In the Configuration blade are where you will find Download and Upload buttons in the toolbar.
The way I take the email, each instance is essentially a VM on a different host, and they'll be rebooting hosts throughout the maintenance period, so if I don't want to be out of service during this time, I need to ensure I have more than one instance. Is this accurate?
Your understanding is correct.
If so, how do I "add" an instance?
There are many ways to do it.
One would be to edit your role's configuration file (*.cscfg), changing the Instances element's count attribute from 1 to 2, and then upload this file through the Azure Portal (Cloud Service --> Configure tab --> Upload button).
Another would be to change the instance count through the "Scale" tab for the Cloud Service in question. On this tab you will see an "Instance Count" setting; just update it from 1 to 2.
Other options include PowerShell or writing code, but I think the ones I mentioned above are the easiest ways to accomplish the task (a PowerShell sketch follows).
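A minimal PowerShell sketch using the classic Azure Service Management module (assumed installed and authenticated; service and role names are placeholders):

    # Raise the Production deployment of MyWebRole to two instances
    Set-AzureRole -ServiceName "MyCloudService" `
                  -Slot Production `
                  -RoleName "MyWebRole" `
                  -Count 2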
We've recently started using Azure to host some virtual machines, but I'm having trouble getting to grips with the available resource monitoring metrics.
When I go to the dashboard for the virtual machine, I have the option to add metrics for several things, but Memory Available is missing.
When reading about how to monitor cloud services, it seems clear that you should have the option to add a metric for Memory Available. Reading other posts here on Stack Overflow, I see other tools such as MetricsHub mentioned, but I don't think this is what we want, as we don't need any monitoring endpoint; we only want to see memory usage in the Azure dashboard (and apps from the Azure store aren't available to us, since we're on an Enterprise Agreement).
Am I missing something obvious here? What must be done to add memory monitoring to the dashboard?
Cloud Services is not the same as Virtual Machines. When you use cloud services, Azure will provision VMs for you and Azure is able to install monitoring tools that see the amount of available memory. When you create your own VMs Azure can't and shouldn't do that. In other words, with VMs you are on your own. The metrics you do see in the portal are the ones that can be measured from outside the VM.
If you do deploy as a Cloud Service then initially you will only have the same metrics as for the VM. There are several ways you can change this.
The easiest is to go to the configuration for your cloud service in the Management Portal and change the logging level from Minimal to Verbose; that will enable a lot more metrics. Alternatively, you can specify which metrics you want collected in the cloud configuration in your project in Visual Studio. It is also possible to do this in code, though that is not the currently recommended practice; instead, use the configuration tool in the cloud project in Visual Studio.
The key thing to understand about the metrics in Cloud Services is that, whichever way you elect to configure them, they are stored in a standard way in Table Storage and Blob Storage. That means that whether you use the Azure Management Portal, the tool in Visual Studio, or code, the outcome is the same. This also means that a variety of tools, including Cerebrata, Visual Studio and, indeed, the Management Portal, can all read this data.
It is also worth noting that because of the way this works, the configuration can be changed at runtime, usually through the portal but there are other tools and approaches in code.
In my experience, you normally only want to sample your performance metrics every two minutes, but do the log shipping every minute. Also note that you can configure trace logs and IIS logs etc to be available to tools like Visual Studio and Cerebrata. For Cloud Services, it is quite rich functionality but it takes some working with it before you start to "get" it all. Enjoy!
You can monitor memory and other "Guest" level metrics in Azure, here's how:
In Azure, go to your virtual machine and scroll down the settings to Monitoring > Diagnostics Settings
Click to enable guest-level monitoring; it can take a few minutes
Then you can go into Metrics for the VM, or Monitor at the top level:
choose the resource (the VM)
choose Guest in the metric namespace, it will load all the new metrics
choose Memory\Committed bytes or whatever ones you want.
You can then pin to dashboard etc as you would normally
It should be possible to install Azure Diagnostics on a VM using the PowerShell command Set-AzureVMDiagnosticsExtension:
http://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-diagnostics/#virtual-machine
or using new management portal
http://feedback.azure.com/forums/231545-diagnostics-and-monitoring/suggestions/5535368-provide-azure-diagnostics-runtime-for-vm-iaas
I've tried to configure it using the new portal; I can see the extension IaaSDiagnostics is installed on the VM, but no luck yet with getting the data.
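For the PowerShell route, a hedged sketch using the classic Azure Service Management module (VM, service, account, and file names are placeholders; a diagnostics configuration XML file is assumed to exist):

    # Storage account that will receive the diagnostics data
    $storageContext = New-AzureStorageContext -StorageAccountName "mystorageacct" `
                                              -StorageAccountKey "<key>"

    # Fetch the VM, attach the diagnostics extension, and push the change
    $vm = Get-AzureVM -ServiceName "MyCloudService" -Name "MyVM"
    Set-AzureVMDiagnosticsExtension -DiagnosticsConfigurationPath "C:\diagnostics.xml" `
                                    -StorageContext $storageContext `
                                    -VM $vm |
        Update-AzureVM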
I have created my first app for azure. It's has an MVC3 web role which writes some data to table storage.
It also has a worker role that does some work behind the scenes to the same data.
It all works fine in the emulator.
I've uploaded it all to Azure as a staging deployment; the hosted service reports all roles as "Ready". The health for all roles is "Healthy", though the worker role appears to crash, goes to "Degraded", and then resets itself (I assume this is what is happening).
So what now? I have found a "DNS Name" on my Web Role in the form "http://{guid}.cloudapp.net/"
Clicking on that link just gives me a network access error, and http://www.downforeveryoneorjustme.com/ can't find it either.
What am I missing? Where can I see diagnostics similar to the emulator? I've set "Enable Diagnostics" to use my Azure storage account in each role. How do I get into the storage to see if it has traced anything? Can this be done through the Management Portal?
I've tried searching through MSDN, but I can't find a page that says "and then you click the DNS name link and your website will launch". I'm sure there is a lovely page like that, but I can't find it.
Thanks in advance!
In August 2011, the Windows Azure role templates were updated to work with the ASP.NET Universal Providers. As such, when you create a new project, the session state provider is backed by SQL Express by default. If you don't change this to SQL Azure or Cache (or disable session state), you'll run into issues.
I'm not sure this is exactly the issue you're running into, but it's a common one. See Nate Totten's blog post for more information about this (Nate calls out this issue a few pages down, under IMPORTANT NOTE).
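If your app doesn't actually need session state, a minimal workaround (a sketch; alternatively switch the provider to SQL Azure or Cache as described above) is to disable it in web.config:

    <!-- web.config: turn session state off so the default SQL Express-backed
         Universal Providers session provider is never used -->
    <system.web>
      <sessionState mode="Off" />
    </system.web>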
You can access diagnostics data directly from Visual Studio Server Explorer.
Here you have all the necessary information: Browsing Storage Resources with Server Explorer http://msdn.microsoft.com/en-us/library/windowsazure/ff683677.aspx
Personally, I use Azure Diagnostics Manager from Cerebrata (http://www.cerebrata.com/products/AzureDiagnosticsManager/), which is easy to use and has a good dashboard.
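If you'd rather check from code, here is a minimal C# sketch (assuming the classic Microsoft.WindowsAzure.Storage NuGet package) that lists recent entries from WADLogsTable, where the Diagnostics trace listener writes its output:

    using System;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    class Program
    {
        static void Main()
        {
            // Connection string for the storage account you set in "Enable Diagnostics"
            var account = CloudStorageAccount.Parse("<your storage connection string>");
            var table = account.CreateCloudTableClient().GetTableReference("WADLogsTable");

            // Grab a handful of rows; each trace entry carries a Message property
            var query = new TableQuery<DynamicTableEntity>().Take(20);
            foreach (var entity in table.ExecuteQuery(query))
                Console.WriteLine(entity.Properties["Message"].StringValue);
        }
    }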