Why aren't my IIS logs being copied for my Azure web role?

This is a follow-up to this question. I used Cerebrata Diagnostics Manager's remote diagnostics to try to turn on IIS logs; I hadn't deployed with them enabled. It seemed to work and a few files were copied, but then it never seemed to work again. I tweaked the settings again, and I also tried deleting the IIS-related blob and table storage entries to see if that would get it to start over. Here is what the configuration in wad-control-container looks like; it shows that it did get updated by the Cerebrata tool.
<?xml version="1.0"?>
<ConfigRequest xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<OnDemandTransfers />
<DataSources>
<OverallQuotaInMB>4096</OverallQuotaInMB>
<Logs>
<BufferQuotaInMB>1024</BufferQuotaInMB>
<ScheduledTransferPeriodInMinutes>1</ScheduledTransferPeriodInMinutes>
<ScheduledTransferLogLevelFilter>Undefined</ScheduledTransferLogLevelFilter>
</Logs>
<DiagnosticInfrastructureLogs>
<BufferQuotaInMB>0</BufferQuotaInMB>
<ScheduledTransferPeriodInMinutes>0</ScheduledTransferPeriodInMinutes>
<ScheduledTransferLogLevelFilter>Undefined</ScheduledTransferLogLevelFilter>
</DiagnosticInfrastructureLogs>
<PerformanceCounters>
<BufferQuotaInMB>0</BufferQuotaInMB>
<ScheduledTransferPeriodInMinutes>0</ScheduledTransferPeriodInMinutes>
<Subscriptions />
</PerformanceCounters>
<WindowsEventLog>
<BufferQuotaInMB>0</BufferQuotaInMB>
<ScheduledTransferPeriodInMinutes>0</ScheduledTransferPeriodInMinutes>
<Subscriptions />
<ScheduledTransferLogLevelFilter>Undefined</ScheduledTransferLogLevelFilter>
</WindowsEventLog>
<Directories>
<BufferQuotaInMB>0</BufferQuotaInMB>
<ScheduledTransferPeriodInMinutes>1</ScheduledTransferPeriodInMinutes>
<Subscriptions>
<DirectoryConfiguration>
<Path>C:\Resources\directory\8973cd09642f4dfeafe830612cc8c1fe.AllRole.DiagnosticStore\FailedReqLogFiles</Path>
<Container>wad-iis-failedreqlogfiles</Container>
<DirectoryQuotaInMB>1024</DirectoryQuotaInMB>
</DirectoryConfiguration>
<DirectoryConfiguration>
<Path>C:\Resources\directory\8973cd09642f4dfeafe830612cc8c1fe.AllRole.DiagnosticStore\LogFiles</Path>
<Container>wad-iis-logfiles</Container>
<DirectoryQuotaInMB>1024</DirectoryQuotaInMB>
</DirectoryConfiguration>
<DirectoryConfiguration>
<Path>C:\Resources\directory\8973cd09642f4dfeafe830612cc8c1fe.AllRole.DiagnosticStore\CrashDumps</Path>
<Container>wad-crash-dumps</Container>
<DirectoryQuotaInMB>1024</DirectoryQuotaInMB>
</DirectoryConfiguration>
</Subscriptions>
</Directories>
</DataSources>
<IsDefault>false</IsDefault>
</ConfigRequest>
Any ideas on why it won't seem to work?
UPDATE
We redeployed today with the following diagnostics.wadcfg and still have no IIS logs. Trace logs are working. We don't have any code that calls the diagnostics API, because it was my understanding that the file could handle it all. Am I missing something?
<DiagnosticMonitorConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
configurationChangePollInterval="PT1M"
overallQuotaInMB="4096">
<Logs bufferQuotaInMB="1024"
scheduledTransferLogLevelFilter="Verbose"
scheduledTransferPeriod="PT1M" />
<Directories bufferQuotaInMB="1024"
scheduledTransferPeriod="PT1M">
<!-- These three elements specify the special directories
that are set up for the log types -->
<CrashDumps container="wad-crash-dumps" directoryQuotaInMB="256" />
<FailedRequestLogs container="wad-frq" directoryQuotaInMB="256" />
<IISLogs container="wad-iis" directoryQuotaInMB="256" />
</Directories>
</DiagnosticMonitorConfiguration>
Could it be that the web roles are XS instances (since we're just testing right now)? Again, it did work once, but seems to be dead now.
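For reference, this is roughly what the programmatic setup would look like if code turns out to be needed after all; it is only a sketch of my understanding of the API, not something we have deployed:
using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default configuration, which already includes data
        // sources for the IIS log and failed-request directories on a web role.
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // A non-zero transfer period is what actually schedules the copy of the
        // directory contents to blob storage.
        config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}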

A few suggestions:
1. Lower the OverallQuotaInMB to something like 4000.
2. Increase BufferQuotaInMB under the Directories node to a real value (say 1 GB).
3. Lower the other individual directory quotas so that they add up to something slightly less than the BufferQuotaInMB in #2, and so that ALL of the quotas (the overall Directories quota and the individual folders) stay under OverallQuotaInMB. For example: Logs 1 GB (this is trace data), Directories 1 GB, FailedRequests 256 MB, IISLogs 256 MB, CrashDumps 256 MB; see the sketch after this list.
4. Reboot your servers (just in case).
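Roughly, the wadcfg from the question with that quota layout would look like this (just a sketch of the numbers above, not a tested configuration):
<DiagnosticMonitorConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration"
configurationChangePollInterval="PT1M"
overallQuotaInMB="4000">
<!-- Trace logs: 1 GB -->
<Logs bufferQuotaInMB="1024" scheduledTransferLogLevelFilter="Verbose" scheduledTransferPeriod="PT1M" />
<!-- Directories: 1 GB overall, individual folders 256 MB each -->
<Directories bufferQuotaInMB="1024" scheduledTransferPeriod="PT1M">
<CrashDumps container="wad-crash-dumps" directoryQuotaInMB="256" />
<FailedRequestLogs container="wad-frq" directoryQuotaInMB="256" />
<IISLogs container="wad-iis" directoryQuotaInMB="256" />
</Directories>
</DiagnosticMonitorConfiguration>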
Good luck
Basically, I've seen Diagnostics act fussy when the overall quota is set to the maximum space that Azure allocates for diagnostics storage (4 GB). Lowering the individual quotas so that they add up to something less than the total quota also helps, because if Azure Diagnostics ever fills up there is breathing room before Azure removes old data.
Overall, setting up Azure Diagnostics is something of a black art. I've been helping AzureWatch customers do this for two years now and I still feel like I'm fumbling with the quotas. I wish they would just let users turn the thing on or off and have the entire configuration be driven by convention rather than configuration. Almost no one cares to capture the data only on their VMs without transferring it to Azure storage, so small quotas are totally fine for the majority of cases, since most folks transfer their data to storage every few minutes.
HTH

Related

No performanceCounters/Requests Data in azure Application Insights

I have a web app running on-prem (IIS) and I am trying to collect performance counters from it. I have everything set up locally and I am seeing all other types of telemetry in my Azure resource (requests, exceptions, traces), but performance counters are just not there.
It is worth mentioning that I am running 2 more web applications (they all work together) under the same site and the same application pool in IIS, and they are collecting performance counters just fine; they all use the same package that contains our Application Insights implementation. Something strange I have noticed: when I serve the application from Visual Studio, all the telemetry goes through, even performance counters, but when I try with an installed instance of my web applications (using our in-house installer), any request or perfCounter telemetry for this particular web application just won't show up.
I don't know what else to check; all 3 web applications should be collecting performance counters the same way, since they are pretty much alike and run under the same AppPool. The same goes for requests.
This is what my ApplicationInsights.config looks like:
<Add Type="Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.PerformanceCollectorModule, Microsoft.AI.PerfCounterCollector">
<Counters>
<Add PerformanceCounter="\Processor(_Total)\% Processor Time" ReportAs="Processor Total - % Processor Time"/>
<Add PerformanceCounter="\Process(??APP_W3SVC_PROC??)\% Processor Time" ReportAs="W3SVC - % Processor Time"/>
<Add PerformanceCounter="\Process(??APP_WIN32_PROC??)\% Processor Time" ReportAs="Win32 - % Processor Time"/>
<Add PerformanceCounter="\Process(??APP_CLR_PROC??)\% Processor Time" ReportAs="CLR - % Processor Time"/>
<Add PerformanceCounter="\Process(_Total)\Private Bytes" ReportAs="Process Total - Private Bytes"/>
<Add PerformanceCounter="\Process(??APP_W3SVC_PROC??)\Private Bytes" ReportAs="W3SVC - Private Bytes"/>
<Add PerformanceCounter="\Process(??APP_WIN32_PROC??)\Private Bytes" ReportAs="Win32 - Private Bytes"/>
<Add PerformanceCounter="\Process(??APP_CLR_PROC??)\Private Bytes" ReportAs="CLR - Private Bytes"/>
</Counters>
<!--
Use the following syntax here to collect additional performance counters:
<Counters>
<Add PerformanceCounter="\Process(??APP_WIN32_PROC??)\Handle Count" ReportAs="Process handle count" />
...
</Counters>
PerformanceCounter must be either \CategoryName(InstanceName)\CounterName or \CategoryName\CounterName
NOTE: performance counters configuration will be lost upon NuGet upgrade.
The following placeholders are supported as InstanceName:
??APP_WIN32_PROC?? - instance name of the application process for Win32 counters.
??APP_W3SVC_PROC?? - instance name of the application IIS worker process for IIS/ASP.NET counters.
??APP_CLR_PROC?? - instance name of the application CLR process for .NET counters.
-->
</Add>
A few more details: Application Insights SDK 2.4, .NET Framework 4.6.1.
Thanks, all help is welcome.
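One check worth running from inside the installed instance is whether the worker-process identity can read performance counters at all, since Visual Studio serves the app under your own account while IIS uses the application pool identity, and a missing "Performance Monitor Users" membership is a common culprit. The class below is only an illustrative probe, not part of the original configuration:
using System;
using System.Diagnostics;
using System.Threading;

public static class CounterProbe
{
    // Hypothetical helper: call this from a test page in the installed
    // instance and compare the output with the Visual Studio-hosted run.
    public static string Probe()
    {
        try
        {
            using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total", readOnly: true))
            {
                cpu.NextValue();              // first sample primes the counter
                Thread.Sleep(500);
                return string.Format("OK: {0:F1}%", cpu.NextValue());
            }
        }
        catch (Exception ex)
        {
            // Often a permissions problem for the application pool account.
            return "Cannot read counters: " + ex.Message;
        }
    }
}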

Why is CloudConfigurationManager using my Cloud.cscfg instead of Local.cscfg too?

I realise that this question was asked and answered, but unfortunately the solution of a complete clean, rebuild and restart doesn't work in my case, and my lowly reputation doesn't allow me to comment. So I feel compelled to ask it again with my info.
Sample code:
CloudStorageAccount storageAccount;
string settings = CloudConfigurationManager.GetSetting("StorageConnectionString");
storageAccount = CloudStorageAccount.Parse(settings);
I have my web.config appSettings section like this:
<appSettings>
<add key="owin:AppStartup" value="zzzz" />
<add key="webpages:Version" value="3.0.0.0" />
<add key="webpages:Enabled" value="false" />
<add key="ClientValidationEnabled" value="true" />
<add key="UnobtrusiveJavaScriptEnabled" value="true" />
<add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=xxxx;AccountKey=yyyy"/>
</appSettings>
In the ServiceConfiguration.Cloud.cscfg I have:
<ConfigurationSettings>
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="DefaultEndpointsProtocol=https;AccountName=nnnn" />
<Setting name="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=xxxx;AccountKey=yyyy"/>
</ConfigurationSettings>
and in the ServiceConfiguration.Local.cscfg I have:
<ConfigurationSettings>
<Setting name="Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString" value="UseDevelopmentStorage=true" />
<Setting name="StorageConnectionString" value="UseDevelopmentStorage=true" />
</ConfigurationSettings>
I converted this to an Azure project from a standard MVC web project in order to use Azure storage blobs etc. I am finding that no matter what I do, it always uses the Azure storage.
As I step through the code snippet above, I can clearly see that the returned connection string is the one coming from the web.config app setting. I feel I must be doing something fundamentally wrong or missing something?
A small point (maybe?): as I converted the project over, there was an error message (in a pop-up, not saveable) about a connection string error and it not working. I hadn't even created this particular connection string at that time, and the only other one (for LocalDB) does work. That one, however, is in a different web.config section, and as it ain't broke I didn't fix it by moving it over.
Any help would be appreciated.
Further addition: following the comments by Igorek below, I did check the Role settings and they appear to be correct.
Then, after a lot of messing around and some experiments which still didn't work, I've taken a step back. I actually don't want a cloud service; I ended up with one because I thought I needed one to access blobs and queues, and I had already decided that WebJobs seems like the way to go first, to keep things as abstracted as possible.
So I have rolled back to the web site that I had before, but I still CAN'T seem to get it to use development storage, although I imagine that CloudConfigurationManager probably doesn't handle web sites? Any tips?
Check the settings of your Role within the cloud project. It has a default for which service configuration it starts with. Simply swap it from Cloud to Local.
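For completeness, here is a minimal sketch (my own illustration, not the asker's code) that makes the source of the setting explicit, which can help confirm whether the role configuration or web.config appSettings is winning:
using System.Configuration;                  // ConfigurationManager
using Microsoft.WindowsAzure.ServiceRuntime; // RoleEnvironment
using Microsoft.WindowsAzure.Storage;        // CloudStorageAccount

public static class StorageSettings
{
    public static CloudStorageAccount GetAccount()
    {
        // When running inside a role (cloud or emulator), read the .cscfg
        // setting; otherwise fall back to web.config appSettings.
        string value = RoleEnvironment.IsAvailable
            ? RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString")
            : ConfigurationManager.AppSettings["StorageConnectionString"];

        return CloudStorageAccount.Parse(value);
    }
}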

Blob wad-iis-logfiles is never created

I just started checking out Windows Azure and I have trouble getting any access logs from IIS for my test web role. The web role itself works fine, but I would like to see a log for accesses (both successful and failed).
As far as I can see the default configuration files for a web role contain instructions to send those logs to a blob named "wad-iis-logfiles", but that blob is never even created (it doesn't exist in my blob storage).
My diagnostics.wadcfg for the web role currently is:
<DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
<DiagnosticInfrastructureLogs bufferQuotaInMB="512" scheduledTransferPeriod="PT5M" />
<Directories bufferQuotaInMB="512" scheduledTransferPeriod="PT5M">
<IISLogs container="wad-iis-logfiles" />
<CrashDumps container="wad-crash-dumps" />
</Directories>
<Logs bufferQuotaInMB="512" scheduledTransferPeriod="PT5M" scheduledTransferLogLevelFilter="Information" />
<PerformanceCounters bufferQuotaInMB="512">
(... snip...)
</PerformanceCounters>
<WindowsEventLog bufferQuotaInMB="512" scheduledTransferPeriod="PT1M" scheduledTransferLogLevelFilter="Error">
<DataSource name="Application!*" />
</WindowsEventLog>
</DiagnosticMonitorConfiguration>
Question 1: is this configuration file correct?
Question 2: are there other things that need to be set before I can get the IIS log files?
With the help of the commenters I was able to solve the issue.
There are several interacting things causing the issue.
As commenter @kwill mentioned, an existing configuration blob in wad-control-container overrides any other configuration, and that configuration is not replaced during an in-place update. I was using in-place updates to put my modified diagnostics.wadcfg in place, which explains why my attempts to change settings that way didn't work. Note that editing the properties of the web test role (in the "Roles" branch of the Azure cloud service project) operates by editing that same file, so that didn't work either. More information on how the wad-control-container overrides settings can be found at http://msdn.microsoft.com/en-us/library/windowsazure/dn205146.aspx.
The reason that blob already existed may be that I had changed some other performance measurement settings in the Azure management portal earlier.
I managed to "fix" the situation by editing the blob found in wad-control-container for my instance, using the tool mentioned by commenter @Gaurav Mantri, "Azure Explorer". As mentioned, without that tool you can download the blob and edit it, but you can never put it back properly, since the '/' characters in the blob's name get translated to '%2F', and those are not translated back on upload.
Note that the XML schema is not the same as the schema for diagnostics.wadcfg, but some similarities exist. I changed the "Directories" element toward the bottom of the blob to read:
<Directories>
<BufferQuotaInMB>512</BufferQuotaInMB>
<ScheduledTransferPeriodInMinutes>2</ScheduledTransferPeriodInMinutes>
<Subscriptions>
<DirectoryConfiguration>
<Path>C:\Resources\directory\8091b0be14e54213ac12fcbd5f9c8e1b.WebTestRole.DiagnosticStore\CrashDumps</Path>
<Container>wad-crash-dumps</Container>
<DirectoryQuotaInMB>0</DirectoryQuotaInMB>
</DirectoryConfiguration>
<DirectoryConfiguration>
<Path>C:\Resources\directory\8091b0be14e54213ac12fcbd5f9c8e1b.WebTestRole.DiagnosticStore\LogFiles</Path>
<Container>wad-iis-logfiles</Container>
<DirectoryQuotaInMB>16</DirectoryQuotaInMB>
</DirectoryConfiguration>
</Subscriptions>
</Directories>
In the original version the "BufferQuotaInMB" and "DirectoryQuotaInMB" fields were 0.
Note that after uploading the blob again the effect is not immediate. It takes a while for the changed configuration to get picked up, and then it takes another while before the IIS log files are copied for the first time.
Last note: it may be obvious, but I don't think editing that blob is a recommendable solution. It is good to know the option exists though.
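For anyone who prefers not to hand-edit the blob, roughly the same change can be pushed through the diagnostics management API. This is only a sketch: it assumes the role name "WebTestRole" from the paths above and the standard Diagnostics connection string setting.
using System;
using Microsoft.WindowsAzure;                       // CloudStorageAccount (SDK 1.x)
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.Diagnostics.Management;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class DiagnosticsUpdater
{
    public static void EnableIisLogTransfer()
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue(
                "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString"));

        var deploymentManager = new DeploymentDiagnosticManager(
            account, RoleEnvironment.DeploymentId);

        foreach (var instanceManager in
                 deploymentManager.GetRoleInstanceDiagnosticManagersForRole("WebTestRole"))
        {
            DiagnosticMonitorConfiguration config =
                instanceManager.GetCurrentConfiguration();
            if (config == null)
            {
                continue; // no configuration blob yet for this instance
            }

            // The broken state had these quotas at 0, so nothing was buffered
            // locally and nothing was ever transferred.
            config.Directories.BufferQuotaInMB = 512;
            config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(2);
            foreach (DirectoryConfiguration dir in config.Directories.DataSources)
            {
                if (dir.DirectoryQuotaInMB == 0)
                {
                    dir.DirectoryQuotaInMB = 16;
                }
            }

            instanceManager.SetCurrentConfiguration(config);
        }
    }
}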

MaxBufferPoolSize and MaxBufferSize?

This is my first post to Stack Overflow, so please accept my apologies for any non-conformity in it.
Question
I have developed a Windows Azure based site (similar to eBay) and hosted it on the Azure platform. I have deployed multiple instances of a web role with Azure Caching enabled. Until last week everything was going fine, but suddenly the product search page started freezing while loading data from the DB. It hangs only for specific categories, the ones that return a huge amount of data.
I read somewhere that we should enable localCache and transportProperties if we are expecting large messages, so I modified the dataCacheClient items in my web.config as below, but no luck. The page still hangs for those categories!
Could somebody please tell me what is wrong in the following and give me some pointers?
<dataCacheClient name="default" channelOpenTimeout="20000" maxConnectionsToServer="4" requestTimeout="30000">
<localCache isEnabled="true" sync="TimeoutBased" ttlValue="300" objectCount="10000"/>
<clientNotification pollInterval="300" maxQueueLength="10000"/>
<transportProperties connectionBufferSize="64000" maxBufferPoolSize="5242880"
maxBufferSize="1242880" maxOutputDelay="2" channelInitializationTimeout="60000"
receiveTimeout="600000"/>
<hosts>
<host name="<<AZURE CACHE URL>>" cachePort="22233" />
</hosts>
<securityProperties mode="Message">
<messageSecurity
authorizationInfo="<<KEY>>">
</messageSecurity>
</securityProperties>
</dataCacheClient>
<dataCacheClient name="SslEndpoint" channelOpenTimeout="20000" maxConnectionsToServer="4" requestTimeout="30000">
<localCache isEnabled="true" sync="TimeoutBased" ttlValue="300" objectCount="10000"/>
<clientNotification pollInterval="300" maxQueueLength="10000"/>
<transportProperties connectionBufferSize="64000" maxBufferPoolSize="15242880"
maxBufferSize="5242880" maxOutputDelay="2" channelInitializationTimeout="60000"
receiveTimeout="600000"/>
<hosts>
<host name="<<AZURE CACHE URL>>" cachePort="22243" />
</hosts>
<securityProperties mode="Message" sslEnabled="true">
<messageSecurity
authorizationInfo="<<KEY>>">
</messageSecurity>
</securityProperties>
</dataCacheClient>
My dev environment:
Azure SDK 1.8 (Oct 12), SQL Server 2008 R2, ASP.Net MVC 3
UPDATE
Today I deployed a build with customErrors off to see if it throws any exception, and this is what I got.
Thanks in advance
ND
I would advise first finding out which component is truly causing your intermittent slowdowns. Is it the cache or is it SQL Azure?
If it is indeed the cache, and since you're using Azure Shared Caching (previously known as Azure AppFabric Cache), I would suggest looking at a dedicated cache as a solution instead of the shared cache. Performance of the shared cache can sometimes be... unpredictable, since it is a multi-tenant service and the data travels over a network.
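A rough way to do that measurement (only a sketch; the cache key is a placeholder and the SQL part is left to your real category query):
using System.Diagnostics;
using Microsoft.ApplicationServer.Caching;

public static class SlowdownProbe
{
    public static void TimeCacheAndSql()
    {
        var watch = Stopwatch.StartNew();

        // Time a read through the "default" dataCacheClient configured above.
        DataCache cache = new DataCacheFactory().GetDefaultCache();
        object cached = cache.Get("products:category:electronics");   // placeholder key
        Trace.TraceInformation("Cache get took {0} ms", watch.ElapsedMilliseconds);

        watch.Restart();
        // ... run the same category query directly against SQL Azure here ...
        Trace.TraceInformation("SQL query took {0} ms", watch.ElapsedMilliseconds);
    }
}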

azure cache preview

I tried the new Azure Caching preview that came with the new SDK on my computer.
I set up a worker role with the caching preview and configured a co-located cache with 30% cache size.
In my controller I put this code:
[OutputCache(Duration=int.MaxValue, VaryByParam="none")]
public ActionResult Index()
{
ViewBag.Message = "Welcome to ASP.NET MVC!";
ViewBag.Id = Microsoft.WindowsAzure.ServiceRuntime.RoleEnvironment.CurrentRoleInstance.Id;
return View();
}
Now I ran the worker role via the emulator with 4 instances. The result was that every time I saw a different id, which means the output cache never worked across all 4 instances (to be clear, I configured the output cache to use the caching preview).
Only when I added an extra cache worker role as a dedicated role did everything start to work like it should.
My questions are:
Do I need the extra worker role to actually make the caching preview work correctly? That would mean the trade-off for not using Azure AppFabric Cache is an extra machine.
Or did I do something wrong, and it should work with the web roles as co-located roles?
Thanks
Edit:
This is another section of my web.config:
<dataCacheClients>
<tracing sinkType="DiagnosticSink" traceLevel="Error" />
<dataCacheClient name="default">
<autoDiscover isEnabled="true" identifier="NugetTest" />
<!--<localCache isEnabled="true" sync="TimeoutBased" objectCount="100000" ttlValue="300" />-->
</dataCacheClient>
</dataCacheClients>
If my identifier is NugetTest (which is my web role, of which I have 4 instances), every time I switch machine I get a different cache. If I change the identifier to my worker role, I get the expected result.
Can you add the applicationName tag in the provider configuration in the web.config of your app? If this is not added, instances will not share the cache. Please note the applicationName tag.
This should be added to the web.config of the web role in both the dedicated and co-located cache scenarios.
Please reply if this solves your issue.
<caching>
<outputCache defaultProvider="DistributedCache">
<providers>
<add name="DistributedCache" type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache" cacheName="<cacheName>" applicationName ="<anyName>" dataCacheClientName="<dataCacheClientName>" />
</providers>
</outputCache>
</caching>
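As a quick sanity check that the named dataCacheClient actually reaches the co-located cache (a sketch only; if this round trip fails, per-instance output caching is to be expected too):
using System;
using Microsoft.ApplicationServer.Caching;

public static class CacheSmokeTest
{
    public static string RoundTrip()
    {
        // The parameterless factory reads the dataCacheClient named "default".
        DataCache cache = new DataCacheFactory().GetDefaultCache();
        cache.Put("smoke-test", DateTime.UtcNow.ToString("o"));
        return (string)cache.Get("smoke-test");
    }
}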
I'm unable to reproduce this issue. I always see the same instance, and I'm using Ctrl+F5 in the browser (thus ruling out browser cache). Please make sure you've configured the output cache provider as described at http://www.windowsazure.com/en-us/develop/net/how-to-guides/cache/.
<!-- If output cache content needs to be saved in a Windows Azure
cache, add the following to web.config inside system.web. -->
<caching>
<outputCache defaultProvider="DistributedCache">
<providers>
<add name="DistributedCache"
type="Microsoft.Web.DistributedCache.DistributedCacheOutputCacheProvider, Microsoft.Web.DistributedCache"
cacheName="default"
dataCacheClientName="default" />
</providers>
</outputCache>
</caching>
Best Regards,
Ming Xu.
