Log Streaming tab doesn't work in Azure Functions - azure

I posted this on GitHub earlier, but I'm hoping to get an answer here from the wider community.
In the Azure portal, one can look at the live logs of a Function app in the Log Streaming tab. I have noticed that this often doesn't work for weeks at a time, and I am wondering if I am doing something obviously wrong. More details below:
I have a function that receives messages from a Service Bus. I am able to see the logs in Application Insights, and I can see that it's processing the requests as expected. The problem is that I don't see any logs in the "Log Streaming" tab in the portal. See image below:
The above image indicates that no lines were logged between 6:17:46 and 6:18:46. However, the image below, from the Application Insights logs, shows that several requests were clearly processed during this time (and several log lines were written).
I tried the Edge and Chrome browsers, as well as a private tab in Edge, but I see the same behavior.
Note that I end up seeing this for extended periods of time, and then sometimes it resolves itself. For example, I noticed it not working for weeks in June. Then, surprisingly, I saw it work for at least a week at the beginning of July. But now it is not working again.
Also, I see this behavior across all the function apps that I currently have. (So it's not limited to just one app).
I am using Azure Functions v2 on .NET Core, with C#.
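For reference, the function is essentially a plain Service Bus trigger writing through ILogger; a trimmed-down sketch (the queue name, connection setting, and function name here are placeholders, not my real values) looks like this:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

public static class ProcessMessages
{
    [FunctionName("ProcessMessages")]
    public static void Run(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // These entries show up in Application Insights as expected,
        // but nothing appears in the portal's Log Streaming tab.
        log.LogInformation("Processing Service Bus message: {message}", message);
    }
}
```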

Related

Blob trigger affecting Application Insights logging in Azure Functions

I have two Azure Functions that live in the same function app, and they are both connected to the same Application Insights instance:
TimerFunction uses a TimerTrigger, executes every 60 seconds, and logs a message at each log level for testing purposes.
BlobFunction uses a BlobTrigger, and its functionality is irrelevant for this question. (Both functions are sketched below.)
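For reference, a trimmed-down sketch of the two functions (the schedule expression, container name, and connection names are placeholders, not my real values):

```csharp
using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Extensions.Logging;

public static class TestFunctions
{
    // Runs every 60 seconds and writes one entry at each log level.
    [FunctionName("TimerFunction")]
    public static void TimerFunction(
        [TimerTrigger("0 * * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogTrace("Trace at {time}", DateTime.UtcNow);
        log.LogDebug("Debug at {time}", DateTime.UtcNow);
        log.LogInformation("Information at {time}", DateTime.UtcNow);
        log.LogWarning("Warning at {time}", DateTime.UtcNow);
        log.LogError("Error at {time}", DateTime.UtcNow);
        log.LogCritical("Critical at {time}", DateTime.UtcNow);
    }

    // The container name is a placeholder; the body is irrelevant for this question.
    [FunctionName("BlobFunction")]
    public static void BlobFunction(
        [BlobTrigger("incoming/{name}", Connection = "AzureWebJobsStorage")] Stream blob,
        string name, ILogger log)
    {
        log.LogInformation("Blob {name} received ({length} bytes)", name, blob.Length);
    }
}
```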
It appears that when BlobFunction is enabled (it isn't being triggered, by the way), it clogs up Application Insights with polling, and I don't receive some of the log messages written by TimerFunction. If I disable BlobFunction, then the logs I see in the development tools monitor for TimerFunction are all there.
This is shown in the screenshot below. TimerFunction and BlobFunction were both running until I disabled BlobFunction at 20:24, after which you can clearly see the logs working "normally". At 20:26 I re-enabled BlobFunction, and the logs written by TimerFunction are again intermittent and missing my own logged info.
Here is the sample telemetry from the live metrics tab:
Am I missing something glaringly obvious here? What is going on?
FYI: My host.json file does not set any log levels; I took them all out in the process of testing this, and it is currently a near-skeleton. I also changed BlobFunction to use an HttpTrigger instead, and the issue disappeared, so I'm 99% certain it's caused by the BlobTrigger.
EDIT:
I tried to add an Event Grid trigger instead, as Peter Bons suggested, but my resource group shows no storage account for some reason. Neither the approach shown in the linked article nor the one in this video (https://www.youtube.com/watch?v=0sEzimJYhME&list=WL) works for me. The options I see are simply different, as shown below:
It is normal behavior that the polling is cluttering your logs. You can of course set a log level in host.json to filter out those messages, though you might lose some valuable other logging as well.
As for possibly missing telemetry: it could very well be that some logs are dropped due to sampling, which is enabled by default. I would also not be surprised if some logging is simply not shown in the portal. I've personally experienced logging being delayed by up to 10 minutes, or not available at all, on the Azure Functions log page in the portal. Try a direct query in Application Insights as well.
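For reference, both of those knobs live in host.json. Something along these lines is a sketch only: the categories and levels are just an example, and depending on your Functions runtime version the sampling section may be named sampling or samplingSettings:

```json
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Information",
      "Host.Results": "Error",
      "Host.Aggregator": "Error"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": false
      }
    }
  }
}
```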
Or you can go directly to the App Insights resource and create some queries yourself that filter out those messages using Search or Logs.
The other option is to not rely on polling with the blob trigger, but instead use an Event Grid trigger that invokes the function once a blob is added. Here is an example of calling a function when an image is uploaded to an Azure Storage blob container. Because there is no polling involved, this is a much more efficient way of reacting to storage events.
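A minimal sketch of such an Event Grid triggered function could look like the following (it assumes you have an Event Grid subscription on the storage account's Blob Created events; the function name is arbitrary):

```csharp
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class BlobCreatedFunction
{
    [FunctionName("BlobCreatedFunction")]
    public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        // Invoked push-style by the Event Grid subscription, so no storage polling.
        log.LogInformation("Blob created event: {subject}", eventGridEvent.Subject);
    }
}
```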

Diagnosing ASP.NET Azure WebApp issue

For about a month now, one of our web applications hosted as a WebApp on Azure has been having some kind of problem, and I cannot find the root cause.
This WebApp is hosted on Azure on a 2 x B2 App Service Plan. On the same App Service Plan there is another WebApp that is currently working without any issue.
This WebApp is an ASP.NET Web API application and exposes a set of REST APIs.
Effect: without any apparent cause (at least as far as I can tell so far), the ThreadCount metric starts to climb, sometimes very slowly, sometimes within a few minutes. When this happens, no requests seem to be served and the service is dead.
Solution: a simple restart of the application (and this means a restart of the AppPool) causes an immediate, obvious drop in ThreadCount, and everything starts working as usual.
Other observations: there is no "periodicity" to this event. It has happened in the evening, in the morning, and in the afternoon. It seems that evening is a preferred timeframe, but I wouldn't say there is any correlation.
What I measured through Azure Monitoring Metric:
- Request Count seems to oscillate normally. There is no peak that would explain the increase in ThreadCount.
- CPU and Memory seem to be normal, nothing strange.
- Response time is normal, like the other metrics.
- Connections (which should be related to sockets) oscillate normally, so I'd exclude something related to DB connections.
What may I do in order to understand what's going on?
After a lot of research, this turned out to be related to incorrect usage of Dependency Injection (using Ninject) in an application that wasn't designed to use it.
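As a purely hypothetical illustration of what such misuse can look like (this is not the actual code from this case): resolving services from a brand-new, never-disposed kernel on every request, instead of composing the container once at startup, leaks whatever the resolved graph holds onto and can show up as slowly climbing resource or thread counts.

```csharp
using System.Collections.Generic;
using System.Web.Http;
using Ninject;

public interface IOrderService { IEnumerable<string> GetOrders(); }
public class OrderService : IOrderService
{
    public IEnumerable<string> GetOrders() { return new[] { "a", "b" }; }
}

public class OrdersController : ApiController
{
    public IHttpActionResult Get()
    {
        // Anti-pattern (hypothetical example): a new StandardKernel per request
        // that is never disposed, instead of a single composition root.
        var kernel = new StandardKernel();
        kernel.Bind<IOrderService>().To<OrderService>();
        return Ok(kernel.Get<IOrderService>().GetOrders());
    }
}
```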
To diagnose it, I discovered a very helpful feature in Azure. You can reach it by opening the app that is having the problem, clicking "Diagnose and solve problems", then "Diagnostic tools", and then selecting "Collect .NET profiler report". In that panel, after configuring the storage for the diagnostic files, you can select "Add thread report".
In those reports you can easily see what's going wrong.
Hope this helps.

Azure functions portal log / monitor isn't very accurate

I've been using Functions for a while, and it seems the longer a Function app is around, the less accurate the portal logs are. For the first three months or so of using my functions, everything monitoring/logging-wise was fine. Over time, things started getting less accurate.
Now I see the real logs by going to Microsoft Azure Storage Explorer and checking AzureWebJobsStorage.
First, when I bring up the code/logs view, the last log it shows isn't accurate; it will usually be from a few days ago, or the last error. When the function triggers, though, it does get the live feed. This isn't that big a deal; it's the monitor being inactive, and not being able to see the logs from it, that is bad. I suppose I'll just use Azure Storage Explorer.
The Monitor invocation logs always seem to be a few days behind. This used to be accurate, but for the last month or so it's always a few days behind.
Dan,
The local, file-based logs exist primarily to support the portal experience, so the behavior you're observing in the log window is expected: the logs are not written by the runtime as part of the normal invocation process, but only when you're actively developing/testing in the portal.
The issue you're experiencing with the monitor is due to a regression that has been patched and should be fully rolled out today (you can see more details here)
We've been listening to feedback on our logging capabilities, and there has been a lot of investment in that area, resulting in the recently announced built in integration with Application Insights. That integration addresses some of the pain points you've brought up as well as other issues, so I'd strongly recommend trying it out. You can find more information about it here.

How does one know why an Azure WebSite instance (WebApp) was shut down?

Looking at my Pingdom reports, I have noted that my WebSite instance is getting recycled. (Pingdom is basically used to keep my site warm.) When I look deeper into the Azure logs, i.e. /LogFiles/kudu/trace, I notice a number of small XML files with "shutdown" or "startup" suffixes, e.g.:
2015-07-29T20-05-05_abc123_002_Shutdown_0s.xml
While I suspect this might have to do with MS patching VMs, I am not sure. My application is not showing any raised exceptions, hence my suspicion that it is happening at the OS level. Is there a way to find out why my instance is being shut down?
I should also mention that I am using one S2 instance, scalable to three depending on CPU usage. We may have to review this and use a 2-3 instance setup, but obviously that doubles the cost.
EDIT
I have looked at my Operation Logs, and all I see is "UpdateWebsite" with a status of "succeeded", but nothing for the times the above files were created. So it seems that the "instance" is being shut down, but the event is not appearing in the "Operation Log". Why would this be? I had about 5 of these yesterday, yet the last "Operation Log" entry was 29/7.
An example of one of yesterday's shutdown xml file:
2015-08-05T13-26-18_abc123_002_Shutdown_1s.xml
You should see entries regarding backend maintenance in operation logs like this:
As for keeping your site alive, standard plans allow you to use the "Always On" feature, which does pretty much what Pingdom is doing to keep your website warm. Just enable it in the Configure tab of the portal.
Configure web apps in Azure App Service
https://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/
Every site on Azure runs 2 applications: one is yours and the other is the SCM endpoint (a.k.a. Kudu). These "shutdown" traces are for the Kudu app, not for your site.
If you want similar traces for your site, you'll have to implement them yourself, just like Kudu does. If you don't have Always On enabled, Kudu gets shut down after an hour of inactivity (as far as I remember).
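If you do want to record your own shutdown traces, a minimal sketch for a classic ASP.NET app is to log the shutdown reason from Global.asax (the log file name below is arbitrary):

```csharp
using System;
using System.IO;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    protected void Application_End()
    {
        // ASP.NET records why it is tearing the app domain down
        // (e.g. HostingEnvironment, ConfigurationChange, IdleTimeout).
        var line = string.Format("{0:o} Shutdown: {1}",
            DateTime.UtcNow, HostingEnvironment.ShutdownReason);
        // D:\home\LogFiles is the persisted log folder on an Azure Web App.
        File.AppendAllText(@"D:\home\LogFiles\app-shutdown.log", line + Environment.NewLine);
    }
}
```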
Aside from that, like you mentioned, Azure will shut down your app during machine upgrades, though I don't think these shutdowns result in Operation Log events.
Are you seeing any side effects? Is this causing downtime?
When upgrades to the service are going on, your site might get moved to a different machine. We bring the site up on a new machine before shutting it down on the old one and letting connections drain, however this should not result in any perceivable downtime.

Intermittent Microsoft Azure Web Site access failure

I have a number of small MVC apps deployed as Microsoft Windows Azure websites. This has been working for several months.
Yesterday I rolled out a new one, and the deployment was unremarkable; everything worked fine. But a couple of hours later, access to the site became unavailable. The symptom was that when the browser tried to navigate to the site's URL, it would try to load for several minutes and then just give up with a completely blank page.
I stopped and restarted the site, and it worked once, but the symptoms came back several minutes later. Then I tried stopping and restarting again, and it didn't work.
I deployed the identical app to three additional URLs. Again, immediately after deployment they all work fine; however, they each fail at some point later, and they don't all fail at once. Sometimes restarting the site fixes the problem, and sometimes it doesn't.
IMPORTANT: If I wait for some period of time, the site may start to work again on its own.
However, deploying four versions of the app so that our users can go to a backup one if the primary one is not working is not optimal.
Any words of wisdom as to how I might go about debugging this?
ADDITIONAL INFO NOV 25, 2013:
When sites are failing, the IIS logs show either 500 or 502 errors. Our own MVC code is never hit, not even app_start.
You can start by checking the logs and remote debugging
http://www.drdobbs.com/windows/azure-sdk-22-supports-visual-studio-2013/240163499
Are the apps working locally?
Might not be the same problem, but from time to time our Azure instances will get the blue question mark of death as a status.
The reason, we found out, is that Microsoft does upgrades on instances from time to time. If you have just one instance in a cloud service/role, then during that maintenance window it will be down.
I have confirmed this with their support.
The only way to get around this that I know of is to create two instances. Then Microsoft guarantees ~99% availability.
Of course I also confirmed with them that this means twice the cost. =/
If that's not the issue I would enable RDP and get onto the machine to see what the problem is. Microsoft has these tools to help debug problems: http://blogs.msdn.com/b/kwill/archive/2013/08/26/azuretools-the-diagnostic-utility-used-by-the-windows-azure-developer-support-team.aspx
First, you should always run multiple instances of your web role with more than 1 upgrade domain. This is configurable in the service definition (CSDEF). Without this, you don't get an SLA from Microsoft, so you can't really complain that the VMs go down.
Second, to figure out what might be going on with these boxes, you should have both logs (my preference is to roll my own with page blobs or table storage) AND RDP access to a pre-production environment (production as well, if you're not too fussed about security). Once on the box, look through the Event Viewer for errors.
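As a rough sketch of what "rolling your own" table storage logging can look like (the table name and connection string source are placeholders, using the classic WindowsAzure.Storage SDK):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class LogEntry : TableEntity
{
    public LogEntry() { }

    public LogEntry(string message)
    {
        // Partition by day, one row per entry.
        PartitionKey = DateTime.UtcNow.ToString("yyyyMMdd");
        RowKey = Guid.NewGuid().ToString();
        Message = message;
    }

    public string Message { get; set; }
}

public static class RoleLogger
{
    private static readonly CloudTable Table = InitTable();

    private static CloudTable InitTable()
    {
        // Connection string source is a placeholder; read it from your role configuration.
        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("StorageConnectionString"));
        var table = account.CreateCloudTableClient().GetTableReference("WebRoleLogs");
        table.CreateIfNotExists();
        return table;
    }

    public static void Log(string message)
    {
        Table.Execute(TableOperation.Insert(new LogEntry(message)));
    }
}
```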
Third, when an outage occurs check out the azure service dashboard (http://www.windowsazure.com/en-us/support/service-dashboard/) for outages.
Lastly, contact Microsoft support. It may take a few hours, but they are pretty good.
Given that it is happening repeatedly and for extended periods of time (more than 5 minutes), I would bet there's something wrong with your hosted service. Again, RDP in and poke around. Good luck.
To debug your sites, try enabling diagnostic logs:
http://www.windowsazure.com/en-us/develop/net/common-tasks/diagnostics-logging-and-instrumentation/
Another nice way to look around your site is using the debug console:
https://github.com/projectkudu/kudu/wiki/Kudu-console
