Logging in Azure Web App with Enterprise Library

Is it possible to use Enterprise Library for logging errors in my Azure Web App?

Probably, but where would this data live? In log files? If you are running in multiple Azure regions, you'll have log files in multiple places. And how would you query the data when you need to troubleshoot, detect patterns, aggregate, or perform calculations?
I think you'll run into a lot of operational issues with traditional log files, especially with high-traffic sites.
Azure also provides Application Insights. I would look into that before committing to log files.
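If you do go the Application Insights route, reporting exceptions takes only a couple of lines. A minimal sketch, assuming the Microsoft.ApplicationInsights NuGet package and a configured instrumentation key/connection string (the OrderProcessor class is hypothetical, for illustration only):

```csharp
using System;
using Microsoft.ApplicationInsights;

public class OrderProcessor // hypothetical class, for illustration only
{
    private static readonly TelemetryClient Telemetry = new TelemetryClient();

    public void Process()
    {
        try
        {
            // ... business logic ...
        }
        catch (Exception ex)
        {
            // The exception becomes queryable telemetry (aggregation, pattern
            // detection, cross-region search) instead of a line in a regional log file.
            Telemetry.TrackException(ex);
            throw;
        }
    }
}
```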


Should Azure Log Analytics and Application Insights be used per app or per environment?

We have an Azure-based system which is growing in complexity, and we need to monitor chains of events and ensure they arrive where we expect them to.
We have an on-prem Java application which sends events to an IoT Hub. The IoT Hub routes to Service Bus queues. We have functions that update a Cosmos DB database, trigger other functions, or route to additional queues. Some functions are also callable through an API Management instance.
Our functions are already connected to Application Insights, and here the Application Insights instance is named the same as the Function App (IIRC this naming was suggested through the form that created the AI resource).
The application map in Application Insights makes me lean toward one AI per environment, to have a complete map of the system. One Log Analytics workspace per environment also seems logical, to be able to correlate data if needed.
What is the correct path for Log Analytics and Application Insights, respectively?
If it is not as clear-cut as stated in my title, what factors do I need to consider when I start to use these services?
The correct number of instances is the one that works best for you, whether that exactly follows recommended practices or not.
The recommendation is to use one workspace per environment, and to set cloud_RoleName in App Insights to distinguish the parts of the system. Log Analytics has similar considerations.
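Setting cloud_RoleName is typically done with a telemetry initializer. A minimal sketch, assuming the classic Microsoft.ApplicationInsights SDK (the role name value is illustrative):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

// Register at startup, e.g.:
// TelemetryConfiguration.Active.TelemetryInitializers.Add(new RoleNameInitializer());
public class RoleNameInitializer : ITelemetryInitializer
{
    public void Initialize(ITelemetry telemetry)
    {
        // Distinguishes this component on the application map and in queries,
        // even when several components share one App Insights resource.
        telemetry.Context.Cloud.RoleName = "ingest-functions"; // illustrative name
    }
}
```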
Functions defaults to spinning up an App Insights instance along with the app, because if you don't use App Insights you lose most of the logging ability. It's important to connect it to App Insights, but overriding the default behavior and connecting to a centralized workspace is common in larger systems.
There are certainly reasons you might want to split the workspaces, and you can union data across workspaces as needed to pull data together from both Log Analytics and App Insights instances (see the sketch after this list):
Data access control or geographic locations. If you need to keep a portion of the data within certain geographic boundaries or limit access to certain people, then split that portion off.
Similar to the security concern is a billing one. If, for whatever reason, billing for different portions of the application needs to be split, then you would also want to split the logging.
Different portions of the system rarely interact, or are maintained by different teams, and organizing the data into separate workspaces provides more benefit than the hassle of cross-workspace queries costs.
You are going to surpass the limits of a single resource. Very few applications actually hit these limits, but they are there.
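Cross-workspace unions look like this. A minimal sketch using the Azure.Monitor.Query package; the workspace IDs are placeholders, and the tables assume workspace-based Application Insights:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;

class CrossWorkspaceQuery
{
    static async Task Main()
    {
        var client = new LogsQueryClient(new DefaultAzureCredential());

        // KQL's workspace() function pulls rows from a second workspace into one result.
        string query = @"
            AppRequests
            | union workspace('<other-workspace-id>').AppRequests
            | summarize count() by AppRoleName";

        var result = await client.QueryWorkspaceAsync(
            "<primary-workspace-id>",            // placeholder
            query,
            new QueryTimeRange(TimeSpan.FromDays(1)));

        foreach (var row in result.Value.Table.Rows)
            Console.WriteLine($"{row["AppRoleName"]}: {row["count_"]}");
    }
}
```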

Azure: Orchestration file transfers - Which Azure components are best suited?

I have a project in which we need to transfer files (mostly SFTP-based, but also HTTP) between 20+ systems. We have currently identified 200+ different files that need to be transferred. We would like a setup in Azure where the different transfers can be set up, monitored, and logged; however, we are unsure which way to go.
The question is: Which Azure components would be best suited for the above task? Which components would you use?
One possible solution would be to implement a large set of Azure Functions, each responsible for one file transfer. This would require us to set up the monitoring ourselves, and it would result in a very large number of functions.
We have also been looking at Azure Data Factory and Azure Logic Apps, but we are unsure whether they would provide any benefits with regard to monitoring, re-running failed jobs, etc.
As you already mentioned in your description, Azure Functions is not suitable for your scenario, because you would have to build a large number of functions to do the transfer work. Moreover, it's painful to monitor function executions at that scale: you would need to distinguish the log data and persist it to Table Storage or something similar, which adds cost. So that option is out.
In my opinion, ADF is the best solution for you. It can be monitored in many ways and supports re-running failed runs. Another distinct feature of ADF is the Self-Hosted Integration Runtime, which supports transfers between on-premises systems and the Azure cloud environment.
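Pipeline runs can also be triggered and monitored programmatically. A minimal sketch using the Microsoft.Azure.Management.DataFactory package; the resource names, the credential, and the pipeline name are all placeholders:

```csharp
using System;
using System.Threading;
using Microsoft.Azure.Management.DataFactory;
using Microsoft.Rest;

class RunPipeline
{
    static void Main()
    {
        var client = new DataFactoryManagementClient(
            new TokenCredentials("<access-token>"))  // placeholder credential
        {
            SubscriptionId = "<subscription-id>"
        };

        // Kick off one run of a (hypothetical) transfer pipeline.
        var run = client.Pipelines.CreateRun(
            "<resource-group>", "<factory-name>", "CopySftpFiles");

        // Poll until the run reaches a terminal state; failed runs can be
        // re-run from the ADF monitoring UI.
        while (true)
        {
            var status = client.PipelineRuns.Get(
                "<resource-group>", "<factory-name>", run.RunId);
            if (status.Status != "InProgress" && status.Status != "Queued")
            {
                Console.WriteLine($"Run {run.RunId} finished: {status.Status}");
                break;
            }
            Thread.Sleep(TimeSpan.FromSeconds(15));
        }
    }
}
```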
As for Logic Apps, I can't find any re-run feature for it, so I don't think it fits your requirements.

is azure diagnostics only available through code?

Is Azure diagnostics only implemented through code? Windows has the Event Viewer, where various types of information can be accessed. ASP.NET websites have a Trace.axd file at the root that can be viewed for trace information.
I was thinking that something similar might exist in Azure. However, based on the following url, Azure Diagnostics appears to require a custom code implementation:
https://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-diagnostics/#overview
Is there an easier, more built-in way to access Azure diagnostics like I described for other systems above? Or does a custom Worker role need to be created to capture and process this information?
Azure Worker Roles have extensive diagnostics that you can configure.
You get to them via the Role configuration.
Then, through the various tabs, you can configure specific types of diagnostics and have them periodically transferred to a Table Storage account for later analysis.
You can also enable a transfer of application-specific logs, which is handy and something that I use to avoid having to remote into the service to view logs.
(here, I transfer all files under the AppRoot\logs folder to a blob container named wad-processor-logs, and do so every minute.)
If you go through the tabs, you will find that you have the ability to extensively monitor quite a bit of detail, including custom Performance Counters.
Finally, you can also connect to your cloud service via Server Explorer and dig into the same information: right-click on the instance and select View Diagnostics Data.
So, yes, you can get access to Event Logs, IIS Logs and custom application logs without writing custom code. Additionally, you can implement custom code to capture additional Performance Counters and other trace logging if you wish.
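On the custom-code side, ordinary System.Diagnostics tracing is enough; the Azure Diagnostics agent can be configured to collect it and ship it to storage along with the built-in logs. A minimal sketch (the Worker class is illustrative):

```csharp
using System;
using System.Diagnostics;

class Worker // illustrative class
{
    public void DoWork()
    {
        // Standard .NET tracing; with Azure Diagnostics configured, these
        // entries are collected and transferred to Table Storage.
        Trace.TraceInformation("Starting work cycle");
        try
        {
            // ... processing ...
        }
        catch (Exception ex)
        {
            Trace.TraceError("Work cycle failed: {0}", ex);
        }
    }
}
```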
"Azure diagnostics" is a bit vague since there are a variety of services in Azure, each with potentially different diagnostic experiences. The article you linked to talks about Cloud Services, but are you restricted to using Cloud Services?
Another popular option is Azure App Service, which gives you many more options for capturing logs, including streaming them, etc. Here is an article which goes into more detail: https://azure.microsoft.com/en-us/documentation/articles/web-sites-enable-diagnostic-log/

Alternative to running a Windows service in the Azure cloud

We currently have a Windows service which sends notification emails to users after doing some processing on a SQL database. It runs once a day.
We want to move this to the Azure cloud. One option is to put it on an Azure VM as-is, but I am looking for the best possible solution.
I have read about recurring and on-demand WebJobs, but I am not sure whether this is the best solution.
Also, is there any way to update the configuration in the service's App.config without redeploying the service code to the cloud? I mean, can we manage the configuration from the Azure portal?
Thanks in advance.
Update 11/4/2016
Since this was written, there are 2 additional features available in Azure that are both excellent choices depending on what functionality you need:
Azure Functions (which was based on the WebJobs described below): serverless code that can be triggered/invoked in various ways, and has scaling support.
Azure Service Fabric: Microservice platform, with support for actor model, stateful and stateless services.
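For a once-a-day job like this, a timer-triggered Azure Function is a natural fit. A minimal sketch using the in-process C# Functions model; the schedule and names are illustrative:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class DailyNotifier
{
    // CRON schedule: 06:00 UTC every day (illustrative).
    [FunctionName("DailyNotifier")]
    public static void Run([TimerTrigger("0 0 6 * * *")] TimerInfo timer, ILogger log)
    {
        log.LogInformation("Daily notification run at {time}", DateTime.UtcNow);
        // ... query the SQL database and send the notification emails ...
    }
}
```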
You've got 3 basic options:
Windows service running on VM
WebJob
Cloud service
There's a lot of information out there on the tradeoffs between these choices, but here's a brief summary.
VM - Advantages: you can move your service basically as-is without having to change much, if any, of your code. VMs also have the easiest connectivity with other resources in Azure (blob storage, virtual networks, etc.). The disadvantage is you're giving up all of the PaaS advantages and are still stuck managing your own VM infrastructure.
WebJob - Advantages: multiple invocation options (queues, blobs, manual, queue receive loops, continuous while-loop style, etc.), including scheduled, which would cover your case. Easy to deploy (can ship with the website, as a console app, automatically through Kudu), and has some built-in logging in the Azure portal. And yes, to answer your question, you can alter the configuration in the portal itself for connection strings and app settings.
Disadvantages: you'll need to update code, you don't have access to the underlying resources (if you need that), and, more something to keep in mind than a disadvantage, it uses the same resources as the web app it's deployed with.
WebJobs are the newest of these options, but they appear to have active development going on to increase their functionality and usefulness.
Cloud Service - like a managed VM: has some deployment options, and access to the underlying VM if needed. Would require some code changes from your existing service.
There's nothing you've mentioned in your use case that makes me think a WebJob shouldn't be the first thing you try (see the sketch below).
(Edit: Troy Hunt has a great and relatively recent blog post illustrating most of the points I've mentioned about Web Jobs above: http://www.troyhunt.com/2015/01/azure-webjobs-are-awesome-and-you.html)
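To illustrate the configuration point, a minimal sketch of a WebJob as a plain console app that reads a setting the portal can override. The setting key is a placeholder; the APPSETTING_ prefix is how App Service exposes app settings to Windows processes as environment variables:

```csharp
using System;
using System.Configuration;

class NotificationJob
{
    static void Main()
    {
        // In App Service, portal-configured app settings reach the WebJob as
        // environment variables (prefixed APPSETTING_), so the value can be
        // changed in the portal without redeploying. Fall back to App.config
        // for local runs.
        string smtpHost = Environment.GetEnvironmentVariable("APPSETTING_SmtpHost")
                          ?? ConfigurationManager.AppSettings["SmtpHost"]; // placeholder key

        Console.WriteLine($"Sending notifications via {smtpHost}...");
        // ... query the database and send the emails ...
    }
}
```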

Logging and tracing on Azure

We are looking for a solution for logging and tracing for our multi-tenant application with a distributed architecture that will be hosted on Azure.
We have already gone through these two articles: Troubleshooting Best Practices for Developing Windows Azure Applications and Enabling Diagnostics in Windows Azure. Is there any other, better solution?
We would like to know
• What are the best practices and approach for it?
  ◦ Storage strategy?
• Any third-party / open source tool that helps us with the same?
EDIT:
We are looking for two things:
Best practice for storage strategy: where should we store log data? Since it's a multi-tenant, multi-tier application, should we keep data separate for each tier per tenant, combine them, or something better? How do we store the data so that we can individually trace a single request that spanned multiple tiers?
A tool that helps us view trace data, analyse it, filter, sort, etc. Since the trace data will be comparatively huge, it must be able to trace the flow of a single task that spanned multiple tiers.
I have used System.Diagnostics with an XML listener in an on-premises application with multiple tiers (web app, service layer 1, service layer 2, etc.). I then used Microsoft Service Trace Viewer to view the log data. SvcTraceViewer supports many features, including combining log files from multiple tiers, graphical representation, tracing an individual request, etc.
So, we are looking for a similar third-party / open source tool for Azure, one that also helps a support engineer drill down into an issue and resolve it.
I would recommend looking into an open source library like log4net. It provides a pluggable, fully configurable, and super flexible way to log messages with a lot of custom data to a lot of sources. Its configuration can be retrieved from external sources/XML, code, config files, etc.
You can create your own appender for Table Storage or find someone else's.
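A minimal sketch of such an appender, assuming the log4net and Azure.Data.Tables packages (the table name, property names, and partitioning scheme are illustrative):

```csharp
using System;
using Azure.Data.Tables;
using log4net.Appender;
using log4net.Core;

// Illustrative appender: each log event becomes one Table Storage entity,
// partitioned by day to keep per-day queries cheap.
public class TableStorageAppender : AppenderSkeleton
{
    public string ConnectionString { get; set; } // set from log4net config

    private TableClient _table;

    public override void ActivateOptions()
    {
        base.ActivateOptions();
        _table = new TableClient(ConnectionString, "ApplicationLogs");
        _table.CreateIfNotExists();
    }

    protected override void Append(LoggingEvent loggingEvent)
    {
        var entity = new TableEntity(
            partitionKey: loggingEvent.TimeStamp.ToString("yyyyMMdd"),
            rowKey: Guid.NewGuid().ToString())
        {
            ["Level"]   = loggingEvent.Level.Name,
            ["Logger"]  = loggingEvent.LoggerName,
            ["Message"] = loggingEvent.RenderedMessage
        };
        _table.AddEntity(entity);
    }
}
```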
HTH
