Using Linux App Service on Azure. How do I view the logs (app/console logs and HTTP request logs) for a particular time in the past?
In other logging tools I can enter a search term, or a time, and jump straight to that point to view the logs from that moment (and before and after). That's what I'd like to do for Azure.
You need to set the WEBSITE_TIME_ZONE variable in the Application settings so that timestamps in the logs match your local time. Supported time zone values are listed here:
https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
https://learn.microsoft.com/en-us/azure/app-service/faq-configuration-and-management
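If you prefer to set the variable programmatically, here is a minimal sketch using the azure-mgmt-web SDK; the subscription, resource group, and app names are placeholders, and note that the update call replaces the whole settings collection, so the current settings are read first:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.web import WebSiteManagementClient
    from azure.mgmt.web.models import StringDictionary

    client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Read the current settings first: update_application_settings replaces the whole set.
    current = client.web_apps.list_application_settings("<resource-group>", "<app-name>")
    current.properties["WEBSITE_TIME_ZONE"] = "America/New_York"  # any tz database name

    client.web_apps.update_application_settings(
        "<resource-group>", "<app-name>",
        StringDictionary(properties=current.properties),
    )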
You can verify the time by navigating to the Console and executing the time command.
If you want to search the logs, you can download them as text files to your local machine, or send them to Azure Log Analytics and query them with Kusto Query Language (KQL) filters, for example:
AppServiceHTTPLogs
| where TimeGenerated between (datetime(2024-01-10T14:00:00Z) .. datetime(2024-01-10T15:00:00Z))
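If the logs are going to a Log Analytics workspace, you can also jump to a past time window from code. Here is a minimal sketch using the azure-monitor-query package; the workspace ID is a placeholder, and the table and column names (AppServiceHTTPLogs, CsUriStem) assume the standard App Service diagnostic-settings tables:

    from datetime import datetime, timezone
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())

    # Optional search-term filter, like in other logging tools.
    query = """
    AppServiceHTTPLogs
    | where CsUriStem contains "/api/orders"
    | order by TimeGenerated asc
    """

    # Jump straight to a point in time by bounding the timespan.
    start = datetime(2024, 1, 10, 14, 0, tzinfo=timezone.utc)
    end = datetime(2024, 1, 10, 15, 0, tzinfo=timezone.utc)

    response = client.query_workspace(
        workspace_id="<your-workspace-id>",  # placeholder
        query=query,
        timespan=(start, end),
    )

    for table in response.tables:
        for row in table.rows:
            print(row)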
We have provisioned an instance of Azure Application Gateway (Standard v2, Australia East region) and enabled its diagnostics settings to send all metrics and logs to a Log Analytics workspace, and this seems to be working fine. However, we wanted additional insight into the requests, so we scaled up the tier and enabled WAF v2.
Based on this documentation, https://learn.microsoft.com/en-us/azure/application-gateway/application-gateway-diagnostics#diagnostic-logging, and after waiting for some time, we expected the firewall logs to automatically populate in the same Log Analytics workspace; however, this does not seem to work and they are simply not populated there.
Note that we can see the "ApplicationGatewayAccessLog" entries. The following query is evidence of this, returning only that single category:
AzureDiagnostics
| distinct Category
Does anyone know if we are missing something or have any input?
Sometimes the output is not the same when you explore the data from the Application Gateway's logs and from your specific Log Analytics workspace's logs. You can compare these results on your side; see this issue.
In this case, you need to have generated some traffic against your Application Gateway that triggers the firewall before any data can be collected by Azure Monitor. Although the documentation states that firewall logs are collected every 60 seconds, the data is sometimes delayed (even by more than 2 days) before it shows up in the logs, and the region you are located in also affects when the data is displayed. In this blog you can see an hourly log of firewall actions on the WAF.
For more information, you can use Log Analytics to examine Application Gateway Web Application Firewall Logs.
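Given that delay, one practical check is to re-run the category query over a longer window until "ApplicationGatewayFirewallLog" appears. A small sketch with the azure-monitor-query package; the workspace ID is a placeholder, and the ResourceType filter is an assumption about how AzureDiagnostics tags Application Gateway records:

    from datetime import timedelta
    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())
    query = """
    AzureDiagnostics
    | where ResourceType == "APPLICATIONGATEWAYS"
    | distinct Category
    """

    response = client.query_workspace(
        workspace_id="<your-workspace-id>",  # placeholder
        query=query,
        timespan=timedelta(days=3),  # look back far enough to cover the possible delay
    )

    for table in response.tables:
        for row in table.rows:
            print(row)  # expect ApplicationGatewayFirewallLog once WAF data lands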
I'm setting up Azure Web App logging. My concern is that the error logs are stored at the web app server level, and their size is increasing day by day because of Elmah. Is there a best approach to maintaining the logs, both storing them and automating their archiving or deletion?
My front end is based on Angular. Any suggestions for aggregating logs, and what kinds of logs would be generated?
Yes, by default, logs are not automatically deleted (with the exception of Application Logging (Filesystem)). To delete logs automatically, set the Retention Period (Days) field. You could also automate the deletion by leveraging the Kudu Virtual File System (VFS) REST API. For a sample script, check out this discussion thread for a similar approach:
How can you delete all log files from an Azure WebApp using powershell?
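For reference, here is a rough Python sketch of the same idea, deleting anything under LogFiles older than 30 days through the Kudu VFS API; the app name and deployment credentials are placeholders:

    from datetime import datetime, timedelta, timezone
    import requests

    SITE = "<your-app>"  # placeholder
    KUDU = f"https://{SITE}.scm.azurewebsites.net/api/vfs/LogFiles/"
    AUTH = ("$" + SITE, "<deployment-password>")  # publishing credentials
    CUTOFF = datetime.now(timezone.utc) - timedelta(days=30)

    for entry in requests.get(KUDU, auth=AUTH).json():
        if entry["mime"] == "inode/directory":
            continue  # recurse on entry["href"] if subfolders matter too
        # mtime is ISO 8601; truncate fractional seconds for simple parsing
        modified = datetime.fromisoformat(entry["mtime"][:19]).replace(tzinfo=timezone.utc)
        if modified < CUTOFF:
            # Kudu requires an If-Match header when deleting through VFS
            requests.delete(entry["href"], auth=AUTH, headers={"If-Match": "*"}).raise_for_status()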
Just to highlight, these are the logs that you can capture on Web Apps:
• Detailed Error Logging
• Failed Request Tracing
• Web Server Logging
• Application Logging - you can turn on the file system option temporarily for debugging purposes; it turns itself off automatically after 12 hours. You can also turn on the blob storage option to select a blob container to write logs to.
For log directory information kindly refer to this document: https://learn.microsoft.com/azure/app-service/troubleshoot-diagnostic-logs
What would be the best way to monitor when our Azure web app is being unloaded when no requests have been made to the web app for a certain amount of time?
Enabling Logstream for the web server doesn't seem to reveal anything of use.
Any hints much appreciated!
You can use Azure Application Insights to create a web test that will alert you when the site is no longer available. It will ping your site from the data centers you select and perform an action you choose (mail, webhook, etc.).
However, if you want your web app to stay loaded, you could upgrade its plan to at least Basic and enable Always On under Settings.
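If it helps, here is a minimal sketch of enabling Always On with the azure-mgmt-web SDK; the subscription, resource group, and app names are placeholders:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.web import WebSiteManagementClient

    client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Read the current site configuration, flip Always On, and write it back.
    config = client.web_apps.get_configuration("<resource-group>", "<app-name>")
    config.always_on = True  # requires Basic tier or higher
    client.web_apps.update_configuration("<resource-group>", "<app-name>", config)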
In addition to kim's response:
If you are running your web app in the Standard pricing tier, Web Apps lets you monitor two endpoints from three geographic locations.
Endpoint monitoring configures web tests from geo-distributed locations that test response time and uptime of web URLs. The test performs an HTTP GET operation on the web URL to determine the response time and uptime from each location. Each configured location runs a test every five minutes.
Uptime is monitored using HTTP response codes, and response time is measured in milliseconds. A monitoring test fails if the HTTP response code is greater than or equal to 400 or if the response takes more than 30 seconds. An endpoint is considered available if its monitoring tests succeed from all the specified locations.
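That pass/fail rule is easy to reproduce locally if you want to sanity-check an endpoint yourself; a check fails on an HTTP status of 400 or above, or a response slower than 30 seconds (the URL below is a placeholder):

    import requests

    def endpoint_is_available(url: str) -> bool:
        """Mirror the platform's rule: fail on status >= 400 or > 30 s."""
        try:
            response = requests.get(url, timeout=30)  # >30 s counts as a failure
            return response.status_code < 400
        except requests.RequestException:
            return False

    print(endpoint_is_available("https://example.azurewebsites.net/"))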
Web Apps also provides you with the ability to troubleshoot issues related to your web app by looking at HTTP logs, event logs, process dumps, and more. You can access all this information using our Support portal at http://<your-app-name>.scm.azurewebsites.net/Support
The Azure App Service support portal provides you with three separate tabs to support the three steps of a common troubleshooting scenario:
- Observe current behavior
- Analyze by collecting diagnostic information and running the built-in analyzers
- Mitigate
If the issue is happening right now, click Analyze > Diagnostics > Diagnose Now to create a diagnostic session, which collects HTTP logs, event viewer logs, memory dumps, PHP error logs, and a PHP process report.
Once the data is collected, the support portal runs an analysis on the data and provides you with an HTML report.
If you want to download the data: by default, it is stored in the D:\home\data\DaaS folder.
Hope this helps.
I have a website hosted on Azure. I am using Trace.Error to write all my error logs to the file system. However, when I enable Application Logging on the Azure website, it only remains enabled for 12 hours.
This is also confirmed in this article: http://www.hanselman.com/blog/StreamingDiagnosticsTraceLoggingFromTheAzureCommandLinePlusGlimpse.aspx
Now I would like to keep storing error logs indefinitely (i.e., for as long as my website is live). I am not sure if I am missing the point here. How can I keep logging enabled forever?
You can store your logs in Azure Storage (tables or blobs). Storage doesn't have the 12-hour constraint that the file system does.
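If you want to switch the application logs to blob storage from code, here is a sketch using the azure-mgmt-web SDK; the names and the container SAS URL are placeholders, and the model names are my assumption about the current SDK surface:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.web import WebSiteManagementClient
    from azure.mgmt.web.models import (
        ApplicationLogsConfig,
        AzureBlobStorageApplicationLogsConfig,
        SiteLogsConfig,
    )

    client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

    logs = SiteLogsConfig(
        application_logs=ApplicationLogsConfig(
            azure_blob_storage=AzureBlobStorageApplicationLogsConfig(
                level="Error",  # matches Trace.Error output
                sas_url="<container-sas-url>",  # placeholder
                retention_in_days=365,
            )
        )
    )

    client.web_apps.update_diagnostic_logs_config("<resource-group>", "<app-name>", logs)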
I have a website hosted in Azure as a cloud service (not as a website), and I need to get the hit count for every web page of the site.
I enabled Azure Diagnostics, and I see the IIS logs copied to my blob storage; however, the logs contain very little data (only one hit on a JavaScript file).
Furthermore, setting "Verbose" or "All" in the diagnostics configuration of the web role doesn't seem to affect the results; I still get only one line (an access to a CSS file, an image file, etc.).
I'm using Azure SDK 2.0.
Is it possible to use the IIS logs generated by Azure to get a hit count? What would I need to change in the diagnostics configuration?
Or do I need a different approach to achieve this?
The IIS logs it produces are the same ones you'd find on any Windows Server. Note that, depending on the settings you provided to diagnostics, it might take a little while before the data is moved to the storage account; the verbosity level set in the configuration determines what is moved from the instances over to the storage account. Did you give it plenty of time to move the data over before looking at the file in storage again? Sometimes it just brings over what it has, and there can also be buffering, which means that when the file was brought over, not everything was in it yet.
You should be able to get this information from the logs, and yes, you should be able to do it from the IIS logs. That being said, if what you are after is hits per page, I would actually suggest a different approach: look at an analytics provider like Google Analytics or one of its competitors. You'll get a massive amount of information beyond just page hits, and there is no need to worry about parsing log files.
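If you do want hit counts straight from the raw IIS logs, here is a rough sketch of parsing the W3C format; the file name is a placeholder for a log file downloaded from your diagnostics storage container:

    from collections import Counter

    hits = Counter()
    fields = []

    with open("u_ex240110.log") as log_file:  # placeholder file name
        for line in log_file:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]  # column names for the data rows
            elif not line.startswith("#") and fields:
                row = dict(zip(fields, line.split()))
                hits[row.get("cs-uri-stem", "?")] += 1

    for page, count in hits.most_common(10):
        print(count, page)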