I deployed my Azure web application with a custom diagnostics.wadcfg and also included an OnStart() method to transfer logs to Azure storage on a scheduled basis.
However, the wad-control-container is always empty. I would have expected it to contain the XML configuration for the given deployment ID.
Could someone please suggest in what scenarios this occurs?
I had a similar issue today, and the problem was that I used HTTP to connect to storage instead of HTTPS. That might be it.
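For reference, the diagnostics storage connection string in the service configuration (typically the Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString setting) should specify HTTPS, roughly like this (account name and key are placeholders):

    DefaultEndpointsProtocol=https;AccountName=<your-account>;AccountKey=<your-key>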
I've created an Azure Synapse Analytics pipeline that must be triggered by the creation of a file within an Azure Data Lake Storage Gen2 account.
Somehow the blob creation event (i.e. when I upload the file to the corresponding container and folder) doesn't fire anything and the pipeline does not start. I've registered the Microsoft.EventGrid and Microsoft.Synapse resource providers in the subscription, as suggested by the official Microsoft documentation.
Am I missing anything? As far as I know, and according to the Microsoft documentation and the many tutorials I've read, I don't need any custom event topic/event subscription...
Can you please check the content type of the file?
Usually, when that is blank, the event trigger is not initiated.
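For example, here is a rough sketch (using the Python storage SDK; the connection string, container, and file names are just placeholders) of uploading the file with an explicit content type so it is not left blank on the blob:

    from azure.storage.blob import BlobServiceClient, ContentSettings

    # Placeholder connection string, container, and blob path -- adjust to your landing zone.
    service = BlobServiceClient.from_connection_string("<storage-connection-string>")
    blob = service.get_blob_client(container="landing", blob="input/data.csv")

    with open("data.csv", "rb") as f:
        blob.upload_blob(
            f,
            overwrite=True,
            # Set the content type explicitly so it is not blank on the uploaded blob.
            content_settings=ContentSettings(content_type="text/csv"),
        )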
I tried to reproduce your scenario in my environment, and it works for me (i.e., the trigger fires when I upload the file to the corresponding container and folder). Let me share my implementation so you can compare it with yours.
This is the setup for the trigger (screenshot of the storage event trigger configuration).
The trigger is firing as expected; the screenshots compare the date and time the files were uploaded with the date and time the trigger fired.
I still didn't figure out what is not working, so I implemented a workaround: a simple ADF pipeline that loops over the files in the landing zone. That pipeline is associated with a normal schedule trigger (it runs 3 times a day) and in turn calls the pipeline I originally wanted to be started by the file creation trigger.
I use data stored in a blob as configuration for some Azure web apps, and I'd like to react to changes to it in near real time. Currently I just set a timed event and periodically check whether the ETag of the blob has changed; if it has, I download the new blob.
This is OK, but I don't want to poll the blob too often, and I also want to be reactive: the devs changing the values in the blob want to be able to test the new values quickly.
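For reference, my current polling looks roughly like this (a minimal sketch; the container, blob name, and interval are illustrative):

    import time
    from azure.storage.blob import BlobClient

    # Illustrative names -- the real config blob lives elsewhere.
    blob = BlobClient.from_connection_string(
        "<storage-connection-string>", container_name="config", blob_name="appsettings.json"
    )

    last_etag = None
    while True:
        props = blob.get_blob_properties()
        if props.etag != last_etag:                # changed (or first run)
            data = blob.download_blob().readall()  # fetch the new config
            last_etag = props.etag
            # ... apply the new configuration here ...
        time.sleep(60)                             # polling interval trade-off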
The web app scales up and down, and each instance of the web app needs to download the config file. So, as far as I can tell, I can't just use the event system that Azure Storage has, as that would only send a notification to one instance.
Is there a recommended way to do this?
As I understand it, you want to centrally manage configuration for your Azure web apps: once a config value has been changed, your App Services should reload it automatically and in a timely fashion. Azure App Configuration provides exactly this kind of functionality.
You can also configure the conditions for reloading all settings in code. This is a .NET Core sample here, and you can find other samples under the Enable dynamic configuration blade.
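The linked sample is .NET Core; purely as an illustration of the service, reading a setting with the Python App Configuration SDK looks roughly like this (the connection string and key name are just examples):

    from azure.appconfiguration import AzureAppConfigurationClient

    # Placeholder connection string and key name.
    client = AzureAppConfigurationClient.from_connection_string("<app-config-connection-string>")
    setting = client.get_configuration_setting(key="WebApp:BannerText")
    print(setting.value)

The dynamic configuration samples build on the same idea: the client watches a designated sentinel key and reloads all settings when its value changes.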
How can code and configuration changes to Azure Functions be logged and audited? I'm trying to create ways to track and guard against malicious insiders making unauthorized changes to the functionality of Azure Functions. In AWS I can create a CloudTrail trail that logs all write events to Lambda functions and write them to an S3 bucket; the Events are also visible in the Event history section of the CloudTrail console. However, I can't seem to find a way to do something similar for Azure Functions, especially in Azure Stack. I've scoured the Activity Log and the Monitor to no avail. Any help or ideas would be greatly appreciated. Thanks!
Azure has a new feature called Change Analysis:
https://aka.ms/changeanalysis
If you are logged in, you can probably go directly here:
https://portal.azure.com/?feature.customportal=false#blade/Microsoft_Azure_ChangeAnalysis/ChangeAnalysisBladeV2
This feature is also incorporated into the Activity Log, where you can view the changes that were made. The only issue is that it covers only the last 14 days. They are working on allowing export to Log Analytics so the history could go back further.
You can create alerts on the Activity Log. The updates you are referring to should generate an Activity Log event:
https://learn.microsoft.com/en-us/azure/azure-monitor/platform/activity-log-alerts
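If you also want to pull those events out programmatically rather than only alerting on them, a rough sketch with the Python azure-mgmt-monitor SDK would be something like this (subscription ID and resource group are placeholders):

    from datetime import datetime, timedelta, timezone
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.monitor import MonitorManagementClient

    client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Look at the last 7 days of Activity Log entries for the resource group.
    start = (datetime.now(timezone.utc) - timedelta(days=7)).strftime("%Y-%m-%dT%H:%M:%SZ")
    flt = f"eventTimestamp ge '{start}' and resourceGroupName eq '<function-app-rg>'"

    for event in client.activity_logs.list(filter=flt):
        op = event.operation_name.value if event.operation_name else ""
        # Function Apps are Microsoft.Web/sites resources, so writes to code/config show up here.
        if op.startswith("Microsoft.Web/sites"):
            print(event.event_timestamp, event.caller, op)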
I have an Azure web app that continually reports "Your app experienced failure(s) due to a transient storage access issue." The suggested solution is "Explore Local Cache feature for your web app," but my web app exceeds the maximum storage (3 GB) for this option.
The problem mostly occurs between midnight and 6 am, when the site is least active, but there seems to be an increasing number of occurrences during the day.
What are the underlying causes of this problem? Is it something to do with my web app, or is it the Azure infrastructure? In either case, how do I determine the underlying issue(s) and resolve them?
"Your app experienced failure(s) due to a transient storage access issue."
The Web Apps environment provides diagnostic functionality for logging information from both the web server and the web application. You could try enabling logging and checking the logs generated within that period of time.
According to the error, it seems a temporary issue caused the app failure, and the suggestion is to enable Local Cache. You could follow the suggested solution and see whether it helps resolve the issue.
Besides that, you could try scaling your web app (which would incur additional charges) and check whether it mitigates the issue.
Update:
As we know, App Service offers shared, persistent storage for the application. Something may go wrong with that shared storage when the instances in the farm access it, which may be the cause of the issue.
To determine the underlying issue, you could try enabling diagnostics logging for your web app. This should provide more information about what is happening at the storage level and what kind of activity is going on.
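For example, here is a rough sketch of turning on web server logging with the azure-mgmt-web management SDK (model and parameter names may differ slightly between SDK versions; subscription, resource group, and app name are placeholders):

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.web import WebSiteManagementClient
    from azure.mgmt.web.models import SiteLogsConfig, HttpLogsConfig, FileSystemHttpLogsConfig

    client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Enable web server (HTTP) logging to the app's file system.
    logs = SiteLogsConfig(
        http_logs=HttpLogsConfig(
            file_system=FileSystemHttpLogsConfig(
                enabled=True, retention_in_days=7, retention_in_mb=35
            )
        )
    )
    client.web_apps.update_diagnostic_logs_config("<resource-group>", "<app-name>", logs)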
I have a website hosted in Azure as a cloud service (not as a website), and I need to get the hit count for every web page of the site.
I enabled Azure Diagnostics, and I see the IIS logs copied to my blob storage; however, these logs contain very little data (only one hit to a JavaScript file).
Furthermore, putting "Verbose" or "All" in the diagnostics configuration of the web role doesn't seem to affect the results; I still get only one line (an access to a CSS file, an image file, etc.).
I'm using Azure SDK 2.0.
Is it possible to use the IIS logs generated by Azure to get a hit count? What would I need to change in the diagnostics configuration?
Or do I need a different approach to achieve this?
The IIS logs it produces are the same ones you'd find on any Windows Server. Note that, depending on the settings you provided to the diagnostics, it might take a little while before the data is moved to the storage account. The level of verbosity set in the configuration determines what is moved from the instances over to the storage account. Did you give it plenty of time to move the data over before looking at the file in storage again? Sometimes it just brings over what it has, and of course there could be buffering, which means that when the file was brought over, not everything was in it yet.
You should be able to get this information from the logs, and yes, you should be able to do it from the IIS logs. That being said, if what you are after is hits per page, I would actually suggest a different approach: look at an analytics provider like Google Analytics or one of its competitors. You'll get a massive amount of information beyond just page hits, and there is no need to worry about parsing log files, etc.
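If you do end up parsing the IIS logs yourself, a quick sketch of counting hits per URL from a W3C-format log (the file name is just an example) could look like this:

    from collections import Counter

    # Count requests per URL path from a W3C-format IIS log (the format Azure
    # Diagnostics copies to blob storage). Field positions are read from the
    # '#Fields:' header line rather than hard-coded.
    hits = Counter()
    fields = []

    with open("u_ex130601.log", encoding="utf-8") as log:  # example file name
        for line in log:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]
                continue
            if line.startswith("#") or not line.strip():
                continue
            row = dict(zip(fields, line.split()))
            hits[row.get("cs-uri-stem", "?")] += 1

    for path, count in hits.most_common(20):
        print(f"{count:8d}  {path}")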