Getting web page hit count with IIS logs in Azure

I have a website hosted in Azure as a cloud service (not as a website), and I need to get the hit count for every web page of the site.
I enabled Azure Diagnostics, and I can see the IIS logs being copied to my blob storage; however, these logs contain very little data (only one hit, to a JavaScript file).
Furthermore, setting "Verbose" or "All" in the diagnostics configuration of the web role doesn't seem to affect the results: I still get only one line (an access to a CSS file, an image file, etc.).
I'm using Azure SDK 2.0.
Is it possible to use the IIS logs generated by Azure to get a hit count? What do I need to change in the diagnostics configuration?
Or do I need a different approach to achieve this?

The IIS logs Azure Diagnostics produces are the same ones you'd find on any Windows Server. Note that, depending on the settings you gave the diagnostics configuration, it may take a little while before the data is moved to the storage account; the verbosity level determines what gets transferred from the instances to storage. Did you give it plenty of time to move the data over before looking at the file in storage again? Sometimes it only brings over what it has at that moment, and buffering can mean that not everything was in the file when it was transferred.
You should be able to get this information from the logs, and yes, you should be able to do it from the IIS logs. That said, if what you are after is hits per page, I would suggest a different approach: look at an analytics provider such as Google Analytics or one of its competitors. You'll get a massive amount of information beyond just page hits, with no need to worry about parsing log files.
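If you do want to stay with the built-in IIS logs, a minimal sketch of the code-based diagnostics configuration from WebRole.OnStart (assuming the SDK 2.0-era Windows Azure Diagnostics API and the standard Diagnostics connection-string plugin setting) would look roughly like this; the one-minute transfer period is just for illustration:

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // The default configuration already includes the IIS log directory for web roles.
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Copy the log directories (IIS logs included) to blob storage every minute
        // rather than waiting for the default transfer interval.
        config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}

After the next scheduled transfer the IIS log files should show up under the wad-iis-logfiles container in the configured storage account, and you can count hits per URL by parsing the W3C log lines.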

Related

Azure Blob Storage - Static Site analytics

I've got a static web site hosted in Azure Blob Storage with Cloudflare as my CDN. It's a very small site (not even 1 MB, and only 1 blog post), but I've been getting 1.1-1.2 GB of requests each month for the past 6 months or so with no explanation. Is there a way to find out what is being requested? In Azure, I can only find information about performance, uptime, etc., but nothing about URLs, and I believe I'd need to pay to get this info from Cloudflare. Has anyone else experienced such strange requests?
I suggest you open the storage account's Diagnostic settings (to turn on request logging) and download Azure Storage Explorer to view the logs.
Once the settings are in place, you can inspect the logs with the tool and see the request URLs and HTTP status information.
Data from before logging was enabled won't be visible, but you can monitor and analyze future requests.
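If you'd rather script it than click through Storage Explorer, a small sketch along these lines (assuming the Microsoft.WindowsAzure.Storage .NET client and that Storage Analytics logging has been enabled, which writes to the special $logs container) can dump the requested URLs; the field handling is an assumption about the semicolon-delimited log format:

using System;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class StorageLogReader
{
    static void Main()
    {
        // Connection string is an assumption; use the account that hosts the static site.
        var account = CloudStorageAccount.Parse("<storage-connection-string>");

        // Storage Analytics writes its request logs to the special "$logs" container.
        CloudBlobContainer logs = account.CreateCloudBlobClient()
                                         .GetContainerReference("$logs");

        foreach (CloudBlockBlob blob in logs.ListBlobs(useFlatBlobListing: true)
                                            .OfType<CloudBlockBlob>())
        {
            foreach (string line in blob.DownloadText()
                                        .Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries))
            {
                // Log lines are semicolon-delimited; print the field that looks
                // like the requested URL so you can see what is being hit.
                string url = line.Split(';')
                                 .FirstOrDefault(f => f.StartsWith("\"http") || f.StartsWith("http"));
                if (url != null)
                    Console.WriteLine(url.Trim('"'));
            }
        }
    }
}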
When I did a lookup on those two IPs, they were both registered to Cloudflare. One makes sense given I'm using their service, but having what appears to be their bot hit my site with this frequency doesn't... I wonder if there's a setting.

Is Azure diagnostics only available through code?

Is Azure diagnostics only implemented through code? Windows has the Event Viewer where various types of information can be accessed. ASP.NET websites have a Trace.axd file at the root that can be viewed for trace information.
I was thinking that something similar might exist in Azure. However, based on the following url, Azure Diagnostics appears to require a custom code implementation:
https://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-diagnostics/#overview
Is there an easier, more built-in way to access Azure diagnostics like I described for other systems above? Or does a custom Worker role need to be created to capture and process this information?
Azure Worker Roles have extensive diagnostics that you can configure.
You get to them via the Role configuration:
Then, through the various tabs, you can configure specific types of diagnostics and have them periodically transferred to a Table Storage account for later analysis.
You can also enable a transfer of application-specific logs, which is handy and something I use to avoid having to remote into the service to view logs:
(here, I transfer all files under the AppRoot\logs folder to a blob container named wad-processor-logs, and do so every minute.)
If you go through the tabs, you will find that you have the ability to extensively monitor quite a bit of detail, including custom Performance Counters.
Finally, you can also connect to your cloud service via the Server Explorer, and dig into the same information:
Right-click on the instance, and select View Diagnostics Data.
(a recent deployment, so not much to see)
So, yes, you can get access to Event Logs, IIS Logs and custom application logs without writing custom code. Additionally, you can implement custom code to capture additional Performance Counters and other trace logging if you wish.
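All of the above is driven from the designer, but if you'd rather wire up the same custom-directory transfer in code (SDK 2.x-era WAD), a rough sketch might look like this; the AppRoot\logs path, the quota and the wad-processor-logs container simply mirror the configuration described above and should be treated as assumptions:

using System;
using System.IO;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        DiagnosticMonitorConfiguration config =
            DiagnosticMonitor.GetDefaultInitialConfiguration();

        // Ship everything under %RoleRoot%\approot\logs to the wad-processor-logs container.
        config.Directories.DataSources.Add(new DirectoryConfiguration
        {
            Path = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot") ?? ".", "approot", "logs"),
            Container = "wad-processor-logs",
            DirectoryQuotaInMB = 128
        });
        config.Directories.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString", config);

        return base.OnStart();
    }
}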
"Azure diagnostics" is a bit vague since there are a variety of services in Azure, each with potentially different diagnostic experiences. The article you linked to talks about Cloud Services, but are you restricted to using Cloud Services?
Another popular option is Azure App Service, which allows you many more options for capturing logs, including streaming them, etc. Here is an article which goes into more details: https://azure.microsoft.com/en-us/documentation/articles/web-sites-enable-diagnostic-log/

Cloud Services - Two web roles sharing file system

I have a rather specific requirement:
Two web roles accessing a local shared file location.
I am aware of the "Local Storage" role settings, but those are only accessible within each role's scope.
Does anyone know another option to accomplish this?
------- EDIT --------
As suggested I will explain more clearly what I'm trying to achieve here.
I'm implementing Only Office, which is a web editor for office files. Their product requires a file to be saved on the file system before it can be opened in the editor.
I don't want to mix their ASP.NET MVC open source project with my code, which is why I want to deploy their website as a separate web role.
-------- END EDIT ------------
Thanks
In your question, you state that (my emphasis):
I'm implementing Only Office, which is a web editor for office files. Their product requires a file to be saved on the file system before it can be opened in the editor.
If Only Office's requirement is temporary file storage that is used while the document is being edited, you may be able to get away with this in a Cloud Service Web Role. This assumes your users wouldn't be too angry if the temporary working document was 'lost' during a role restart.
Web (and Worker) Roles are non-durable, and the Azure fabric controller might bounce them to patch the underlying host, or they might simply crash due to a fault (which is usually why you deploy them in pairs - fault tolerance, etc.). If you save something to the file system on a Web Role, you are not guaranteed that it will still be there if the role is bounced.
If, however, you need durability, you will need to implement something based around Azure Blob Storage, possibly using Blob Leases. I imagine, though, that Only Office doesn't have an implementation for Azure...
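As a minimal sketch of that Blob Lease idea (assuming the Microsoft.WindowsAzure.Storage client and container/blob names of your own choosing), whichever role instance holds the lease is the only one allowed to overwrite the working file:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

class LeasedWriter
{
    static void Main()
    {
        // Connection string, container and blob names are assumptions.
        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        CloudBlobContainer container = account.CreateCloudBlobClient()
                                              .GetContainerReference("documents");
        container.CreateIfNotExists();

        CloudBlockBlob blob = container.GetBlockBlobReference("working-copy.docx");
        if (!blob.Exists())
            blob.UploadText("initial contents");

        // Acquire a 30-second lease; only the holder may overwrite the blob.
        string leaseId = blob.AcquireLease(TimeSpan.FromSeconds(30), null);
        try
        {
            blob.UploadText("updated document contents",
                            accessCondition: AccessCondition.GenerateLeaseCondition(leaseId));
        }
        finally
        {
            blob.ReleaseLease(AccessCondition.GenerateLeaseCondition(leaseId));
        }
    }
}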
Failing that, you could try running on Azure App Service, but I imagine you would have the same issue regarding backing storage and would need to implement something on Blob Storage.
So, finally, if you want complete control and something akin to running on-premises, take a look at using an IaaS Virtual Machine, where you have all of the file system to play with as you please.
==UPDATE==
Taking a look at the Only Office website, there is a SaaS offering, Only Office SaaS Hosting, which is probably cheaper to run for a year than the time it took me to write this answer!
Failing that, if you look at the requirements for Only Office Document Server, there is no way you're going to run that on a Web Role. Go with Azure IaaS VMs.
You basically have two options here, both mentioned in the comments: you can use blob storage, or you can use an SMB share via Azure Files, which I believe is still in preview. We have used Azure Files to mount an SMB share on several Linux boxes. One thing we have noticed is that it is not particularly fast; it is also built on top of blob storage. Here is a link to Azure Files: https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-files/.
If you choose to use blob storage, you will need to consider concurrency:
https://azure.microsoft.com/en-us/blog/managing-concurrency-in-microsoft-azure-storage-2/
I would suggest using Azure File Services; it gives you a share-like URI that both roles can use.
Take a look at this:
https://azure.microsoft.com/en-us/documentation/articles/storage-dotnet-how-to-use-files/
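A minimal sketch of that approach with the .NET storage client (Microsoft.WindowsAzure.Storage.File) might look like the following; the share name and file name are assumptions, and both web roles pointed at the same share would see the same files:

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.File;

class SharedFiles
{
    static void Main()
    {
        // Connection string, share name and file name are assumptions.
        var account = CloudStorageAccount.Parse("<storage-connection-string>");

        CloudFileShare share = account.CreateCloudFileClient()
                                      .GetShareReference("onlyoffice-docs");
        share.CreateIfNotExists();

        CloudFileDirectory root = share.GetRootDirectoryReference();

        // One web role writes the working document...
        root.GetFileReference("document.docx")
            .UploadText("contents visible to both web roles");

        // ...and the other web role reads the very same file back.
        Console.WriteLine(root.GetFileReference("document.docx").DownloadText());
    }
}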

How to keep Azure Application Logging enabled?

I have a website hosted on Azure. I am using Trace.TraceError to write all my error logs to the file system. However, when I enable Application Logging on the Azure website, it only remains enabled for 12 hours.
This is also confirmed in this article: http://www.hanselman.com/blog/StreamingDiagnosticsTraceLoggingFromTheAzureCommandLinePlusGlimpse.aspx
I would like to keep storing error logs indefinitely (i.e., for as long as my website is live). I am not sure if I am missing the point here. How can I keep logging enabled forever?
You can store your logs in Azure Storage (tables or blobs). This doesn't have the 12-hour constraint that the file system does.
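One way to do that, sketched below under the assumption that you're using the Microsoft.WindowsAzure.Storage client and are happy with a table named "WebsiteLogs", is a small custom TraceListener that writes every trace message to Table Storage; register it once at startup (or in web.config) and your existing Trace calls keep working:

using System;
using System.Diagnostics;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class LogEntity : TableEntity
{
    public string Message { get; set; }
    public LogEntity() { }
    public LogEntity(string message)
    {
        PartitionKey = DateTime.UtcNow.ToString("yyyyMMdd"); // one partition per day
        RowKey = Guid.NewGuid().ToString();
        Message = message;
    }
}

public class TableStorageTraceListener : TraceListener
{
    private readonly CloudTable _table;

    // Register once at startup:
    // Trace.Listeners.Add(new TableStorageTraceListener("<storage-connection-string>"));
    public TableStorageTraceListener(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        _table = account.CreateCloudTableClient().GetTableReference("WebsiteLogs");
        _table.CreateIfNotExists();
    }

    public override void Write(string message)
    {
        WriteLine(message);
    }

    public override void WriteLine(string message)
    {
        _table.Execute(TableOperation.Insert(new LogEntity(message)));
    }
}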

Changing Azure .cscfg settings on Role Start

I'm trying to create a startup script or task, executed on role start, that changes the configuration settings in the .cscfg file.
I'm able to access these settings, but I haven't been able to change them successfully. I'm hoping for pointers on how to change settings on role start, or whether it's even possible.
Thanks.
EDIT: What I'm trying to accomplish
I'm trying to make a service that makes it easier to manage configuration values for Azure applications. Right now, if I want to change a setting that is the same across 7 different environments, I have to change it in 7 different .cscfg files.
My thought is I can create a webservice, that the application will query for its configuration values. The webservice will look in a storage place, like Azure Tables, and return the correct configuration values. This way, I can edit just one value in Tables, and it will be changed in the correct environments much more quickly.
I've been able to integrate this into a deployment script pretty easily (package the app, get the settings, change the cscfg file, deploy). The problem with that is every time you want to change a setting, you have to redeploy.
Black-o, given that your desire appears to be to manage the connection settings among multiple deployments (30+), I would suggest that your need would perhaps be better met by using a separate configuration store. This could be Azure Storage (tables, or perhaps just a config file in a blob container), a relational database, or even an external configuration service.
These options require only a minimum amount of information to be placed into the cscfg file (just enough to point at and authorize against the configuration store), and allow you to maintain all the detail settings side by side.
A simple example might use a single storage account, put the configuration settings into Azure Tables, and use a "deployment" ID as the partition key. The config file for each deployment then just needs the connection info for the storage location (unless you want to get by with a shared access signature) and its deployment ID. The role can then retrieve the configuration settings on startup and cache them locally for better performance (either in a distributed memory cache or perhaps on the temporary "local storage" drive of each instance).
The code to pull all this together shouldn't take more than a couple of hours. Just make sure you also account for resiliency in case your chosen configuration provider isn't available.
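As a rough sketch of that pattern, assuming the Microsoft.WindowsAzure.Storage table client, a table named "Config" and two .cscfg settings ("ConfigStorage" and "DeploymentId") that you would define yourself, role startup could look something like this:

using System.Collections.Generic;
using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class ConfigEntity : TableEntity
{
    // PartitionKey = deployment ID, RowKey = setting name.
    public string Value { get; set; }
}

public static class ConfigStore
{
    private static Dictionary<string, string> _cache;

    // Call once from RoleEntryPoint.OnStart().
    public static void Load()
    {
        string connection   = RoleEnvironment.GetConfigurationSettingValue("ConfigStorage");
        string deploymentId = RoleEnvironment.GetConfigurationSettingValue("DeploymentId");

        CloudTable table = CloudStorageAccount.Parse(connection)
                                              .CreateCloudTableClient()
                                              .GetTableReference("Config");

        var query = new TableQuery<ConfigEntity>().Where(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, deploymentId));

        // Cache everything locally so reads don't hit Table Storage again.
        _cache = table.ExecuteQuery(query).ToDictionary(e => e.RowKey, e => e.Value);
    }

    public static string Get(string name)
    {
        return _cache[name];
    }
}

Calling ConfigStore.Load() from OnStart() and reading values through ConfigStore.Get() keeps the per-environment detail out of the .cscfg files entirely.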
The only way to change the settings at runtime is via the Management API: craft the new settings and execute the "Update Deployment" operation. This will be rather slow because it honors update domains, so depending on your actual problem there may be a much better way to solve it.
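For reference, a hedged sketch of that "Update Deployment" (Change Deployment Configuration) call against the classic Service Management REST API is below; the subscription ID, service name, slot, API version and certificate path are all assumptions you'd substitute with your own values:

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;
using System.Text;

class CscfgUpdater
{
    static void Main()
    {
        string subscriptionId = "<subscription-id>";    // assumption
        string serviceName    = "<cloud-service-name>"; // assumption
        string slot           = "production";           // or "staging"

        string uri = "https://management.core.windows.net/" + subscriptionId +
                     "/services/hostedservices/" + serviceName +
                     "/deploymentslots/" + slot + "/?comp=config";

        // The new .cscfg contents must be sent base64-encoded.
        string cscfg = File.ReadAllText("ServiceConfiguration.Cloud.cscfg");
        string body  = "<ChangeConfiguration xmlns=\"http://schemas.microsoft.com/windowsazure\">" +
                       "<Configuration>" +
                       Convert.ToBase64String(Encoding.UTF8.GetBytes(cscfg)) +
                       "</Configuration></ChangeConfiguration>";

        var request = (HttpWebRequest)WebRequest.Create(uri);
        request.Method = "POST";
        request.ContentType = "application/xml";
        request.Headers.Add("x-ms-version", "2013-03-01");
        // Management certificate previously uploaded to the subscription.
        request.ClientCertificates.Add(new X509Certificate2("management.pfx", "<pfx-password>"));

        byte[] payload = Encoding.UTF8.GetBytes(body);
        using (Stream stream = request.GetRequestStream())
        {
            stream.Write(payload, 0, payload.Length);
        }

        using (var response = (HttpWebResponse)request.GetResponse())
        {
            // 202 Accepted: the update runs asynchronously and walks the update domains.
            Console.WriteLine(response.StatusCode);
        }
    }
}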
