Best practice for storing Azure WebJob logs (including message data) in Azure

I have several Azure WebJobs (.NET Framework, not .NET Core) that interact with an Azure Service Bus. I now want a convenient way to store and analyze their log messages, including the related message from the Service Bus. We are talking about a lot of log messages per day.
My idea is to send the logs to an Azure Event Hub and store them in an Azure SQL Database. Later I could, for example, build a web app that lets users conveniently browse and analyze the logs and view the messages.
Is this a bad idea? Should I use Application Insights instead?
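A minimal sketch of the producer side of that idea, assuming the Microsoft.Azure.EventHubs NuGet package (the class name and placeholder connection string are illustrative, not from the question):

```csharp
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs; // NuGet: Microsoft.Azure.EventHubs
using Newtonsoft.Json;           // NuGet: Newtonsoft.Json

class LogForwarder
{
    // Placeholder connection string; in practice this comes from configuration.
    private static readonly EventHubClient Client =
        EventHubClient.CreateFromConnectionString("<event-hub-connection-string>");

    public static Task SendAsync(string level, string message, string serviceBusMessageBody)
    {
        // Serialize the log entry together with the related Service Bus message body.
        var json = JsonConvert.SerializeObject(new
        {
            Timestamp = DateTime.UtcNow,
            Level = level,
            Message = message,
            ServiceBusMessage = serviceBusMessageBody
        });
        return Client.SendAsync(new EventData(Encoding.UTF8.GetBytes(json)));
    }
}
```

A consumer on the Event Hub side would then batch-insert these entries into the SQL database.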

Application Insights charges would likely be higher than your own implementation, so I would say this is a good idea. Just one change: I would send each log to a Logic App and do some processing there, such as routing error logs and info logs differently. Also, why are you considering SQL when this could be stored in non-SQL Azure Table storage and fetched from there?
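If you go the Table storage route this answer suggests, here is a minimal sketch, assuming the WindowsAzure.Storage NuGet package (entity shape, table name, and connection string are illustrative):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;       // NuGet: WindowsAzure.Storage
using Microsoft.WindowsAzure.Storage.Table;

public class LogEntity : TableEntity
{
    public LogEntity() { } // required by the table client for deserialization

    public LogEntity(string level, string message)
    {
        // Partition by level so error and info logs can be queried separately;
        // reverse ticks in the RowKey keep the newest entries first.
        PartitionKey = level;
        RowKey = string.Format("{0:D19}", DateTime.MaxValue.Ticks - DateTime.UtcNow.Ticks);
        Message = message;
    }

    public string Message { get; set; }
}

class TableLogger
{
    public static void Write(string level, string message)
    {
        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        var table = account.CreateCloudTableClient().GetTableReference("WebJobLogs");
        table.CreateIfNotExists();
        table.Execute(TableOperation.Insert(new LogEntity(level, message)));
    }
}
```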

Related

Database/Cache in Azure Service Bus for jobs in queue completed elsewhere

I have an API (a Python Flask app) running on an App Service in Azure, and I want to implement a queueing system using Azure Service Bus, such that requests from the API are sent to a simple FIFO queue managed by the Service Bus. Another resource in Azure will pull from this queue and run jobs based on the JSON payload contained in each queue message.
When a message has been processed by that other resource, I want to record the job status/metadata (e.g., "finished", along with metadata such as the location where the resulting data was stored). I read about such a system that uses the lightweight database offered by Redis; however, I'm wondering whether a similar lightweight database/cache of job statuses/IDs/metadata is available through Azure Service Bus. I'm aware that Redis can run standalone on a VM in Azure, but if this could all be managed via the Service Bus, that would be ideal. I couldn't find anything specific about this being offered within Azure Service Bus, and because of how the job metadata is later accessed, I can't just push metadata messages to a new queue.
Does anyone have any insight on this, or potential alternatives? If Redis could run alongside Flask within the same App Service, that would be ideal, but I wasn't able to find anything explicit on this either, and it doesn't seem possible to run a Flask server and a Redis server at the same time on one App Service.
Thanks.
> I'm wondering if something like this lightweight database/cache system of job status/ids/metadata is available through Azure Service Bus?
Azure Service Bus is a fully managed enterprise message broker; Azure Cache for Redis is a NoSQL data store on steroids that also offers a queue mechanism and some other data structures.
> it doesn't seem possible to simultaneously run a flask server/app and Redis server at the same time on an App Service.
You can, but only inside containers.
Please check whether this answer helps: https://stackoverflow.com/a/39008342/1384539
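To make the Redis suggestion concrete, a minimal sketch of storing job status/metadata in Azure Cache for Redis using the StackExchange.Redis package (key names and the connection string are placeholders; the same hash pattern is available from Python via redis-py):

```csharp
using StackExchange.Redis; // NuGet: StackExchange.Redis

class JobStatusStore
{
    // Placeholder connection string for an Azure Cache for Redis instance.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("<redis-connection-string>");

    public static void MarkFinished(string jobId, string resultLocation)
    {
        IDatabase db = Redis.GetDatabase();
        // One hash per job holds its status and metadata.
        db.HashSet("job:" + jobId, new[]
        {
            new HashEntry("status", "finished"),
            new HashEntry("resultLocation", resultLocation)
        });
    }

    public static string GetStatus(string jobId)
    {
        return Redis.GetDatabase().HashGet("job:" + jobId, "status");
    }
}
```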

Approach for creating consolidated traces/logs for an on-premises solution consuming Azure services

The following is the proposed transition in our application:
The web application is deployed in on-premises IIS (Web Server 1).
The web application has one functionality (for example, generate an invoice for a selected customer).
For each new Generate Invoice request, the web application writes a message to the Azure Service Bus queue.
An Azure Function is triggered for each new message in the Azure Service Bus queue.
The Azure Function calls a Web API (deployed on-premises); a sketch of this step follows the list.
The Web API generates the invoice for the customer and stores it in local file storage.
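As referenced above, a minimal sketch of the Service Bus-triggered Function forwarding to the on-premises Web API (queue name, connection setting, and URL are illustrative; reaching the on-premises API would typically go through something like a Hybrid Connection):

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;       // NuGet: Microsoft.Azure.WebJobs
using Microsoft.Extensions.Logging;

public static class GenerateInvoiceFunction
{
    private static readonly HttpClient Http = new HttpClient();

    [FunctionName("GenerateInvoice")]
    public static async Task Run(
        [ServiceBusTrigger("invoice-requests", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        log.LogInformation("Processing invoice request: {message}", message);

        // Forward the payload to the on-premises Web API.
        var response = await Http.PostAsync(
            "https://onprem-webapi.example.com/api/invoices",
            new StringContent(message, Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();
    }
}
```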
As of now, we have everything set up on-premises, and instead of Service Bus and an Azure Function we consume the Web API directly. With this infrastructure in place, we currently log all events in a MongoDB collection and provide a single consolidated view to the user, so they can identify what happened to a Generate Invoice request and, in case of failure, at which level and with which error it failed.
With the new proposed architecture, we are in the process of identifying ways to log and trace here, and to display a consolidated view to the users.
The only option I can think of is to log all events to Azure Cosmos DB from everywhere (i.e., the website, Service Bus, Function, and Web API), and then provide a consolidated view.
Does this approach look OK, or does anyone have a better solution?
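For reference, a minimal sketch of the Cosmos DB option described above, assuming the Microsoft.Azure.DocumentDB package (database, collection, and field names are illustrative):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.Documents.Client; // NuGet: Microsoft.Azure.DocumentDB

class EventLogger
{
    // Placeholder endpoint and key; in practice these come from configuration.
    private static readonly DocumentClient Client =
        new DocumentClient(new Uri("https://<account>.documents.azure.com"), "<key>");

    public static Task LogAsync(string correlationId, string source, string step, string detail)
    {
        // One document per event; querying by correlationId yields the consolidated view.
        return Client.CreateDocumentAsync(
            UriFactory.CreateDocumentCollectionUri("logging", "events"),
            new { correlationId, source, step, detail, timestamp = DateTime.UtcNow });
    }
}
```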
Application Insights monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations and diagnose errors without waiting for a user to report them.
Workbooks combine data visualizations, Analytics queries, and text into interactive documents. You can use workbooks to group together common usage information, consolidate information from a particular incident, or report back to your team on your application's usage.
For more details, you could refer to this article.
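As an alternative to the Cosmos DB option, a minimal sketch of emitting correlated custom events to Application Insights from each component, assuming the Microsoft.ApplicationInsights package (the event name and correlation property are illustrative):

```csharp
using System.Collections.Generic;
using Microsoft.ApplicationInsights; // NuGet: Microsoft.ApplicationInsights

class InvoiceTelemetry
{
    private static readonly TelemetryClient Telemetry = new TelemetryClient();

    public static void TrackStep(string correlationId, string step)
    {
        // The same correlationId is attached at every hop (website, Function, Web API),
        // so one Analytics query can reconstruct the end-to-end flow of a request.
        Telemetry.TrackEvent("GenerateInvoice." + step,
            new Dictionary<string, string> { { "correlationId", correlationId } });
    }
}
```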

How to do logging in Azure Functions and API Management

I want to create logging for an API and Azure Functions.
I am thinking of using Service Bus to implement the logging.
Logging is needed for each request, response, and error.
Would it be a correct approach to do logging for API Management and Azure Functions through Service Bus? I would appreciate an example of creating a Service Bus queue and calling it from Azure Functions or API Management to log the requests/responses.
Note: regarding Application Insights, I found that it hits performance and that it is more for performance monitoring than for logging. https://blogs.msdn.microsoft.com/apimanagement/2018/01/12/application-insights-integration/
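A minimal sketch of the Service Bus route the question asks about, assuming the WindowsAzure.ServiceBus package (queue name and connection string are placeholders); a consumer on the other end would persist the messages:

```csharp
using Microsoft.ServiceBus.Messaging; // NuGet: WindowsAzure.ServiceBus
using Newtonsoft.Json;                // NuGet: Newtonsoft.Json

class ServiceBusLogger
{
    // Placeholder connection string and queue name.
    private static readonly QueueClient Client =
        QueueClient.CreateFromConnectionString("<service-bus-connection-string>", "logging");

    public static void Log(string kind, string payload)
    {
        // kind is e.g. "request", "response", or "error".
        var body = JsonConvert.SerializeObject(new { Kind = kind, Payload = payload });
        // The receiver reads this back with message.GetBody<string>().
        Client.Send(new BrokeredMessage(body));
    }
}
```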
I would still use Application Insights for that. You want to track requests and errors, which Application Insights offers out of the box; it also provides a query language to query your logs and build dashboards. Regarding your performance concern, you should just test the impact on your system; it most likely isn't that relevant.
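For comparison, the out-of-the-box tracking this answer refers to looks roughly like this with the Application Insights SDK (a sketch; names are illustrative, and in Functions and API Management the built-in integrations can capture this without custom code):

```csharp
using System;
using Microsoft.ApplicationInsights; // NuGet: Microsoft.ApplicationInsights

class ApiLogger
{
    private static readonly TelemetryClient Telemetry = new TelemetryClient();

    public static void LogRequest(string name, DateTimeOffset start, TimeSpan duration,
                                  string responseCode, bool success)
    {
        // Shows up as a request in the portal and is queryable in Analytics.
        Telemetry.TrackRequest(name, start, duration, responseCode, success);
    }

    public static void LogError(Exception ex)
    {
        Telemetry.TrackException(ex);
    }
}
```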

Can Application Insights track events across many Web Apps/Logic Apps/etc.?

I have the following resources
One Mobile/API app
One MVC app
Three Logic apps
One Azure function deployment with 5 functions
I want a single tracking number (correlation ID) that can be tracked across all instances at the same time. I'm looking at the Contoso Insurance sample, but I'm rebuilding it by hand (not using the Azure Deploy scripts).
I've read the deployment code, but I'm not sure whether I can merge App Insights logs together, or whether that would be a hack of some sort.
Observations
When I right-click in Visual Studio, I can only associate with Application Insights instances that aren't already connected to a web, mobile, or API app.
However, in the configuration I can give Application Insights a direct GUID, which might allow me to achieve the goal of one App Insights activity log for the entire process.
Question
Is it possible to have one App Insights log among all Mobile/API/Logic/MVC sites?
Is there a way to have (or should I have) one standard App Insights instance per web app, plus a special dedicated shared App Insights instance that my code calls into and logs to?
What is Contoso Insurance doing with Azure App Insights?
Jeff from the Logic Apps team here. The answer is yes, but there are some caveats. We are working to make the experience seamless and automatic, but for now it requires the following. As a heads-up:
First, for Logic Apps we have what's called the client tracking ID. This is a header you can set on an incoming HTTP request or Service Bus message to track and correlate events across actions. It will be sent to all steps (functions, connectors, etc.) in the x-ms-client-tracking-id header.
Second, Logic Apps emits all logs to Azure Monitor, which unfortunately today only has sinks into Event Hubs, Storage, and Log Analytics, not App Insights.
With all of that in mind, here's the architecture we see many people following:
Have your web apps just emit to App Insights directly, using some correlation ID as needed. When firing any Logic Apps, pass in the x-ms-client-tracking-id header so you can correlate events (see the sketch after this list).
Log your events to App Insights in the Function app. This blog details some of how to do that, and it is also being worked on for a better experience soon.
In your Logic App, either write a Function that consumes events off of Azure Monitor and pushes them to App Insights, or write a Function that acts as an App Insights "logger" that you can call in your workflow to also get the data into App Insights.
This is how Contoso Insurance is leveraging App Insights, as far as I understand. We are working across all the teams (App Insights, Azure Monitor, Azure Functions, Logic Apps) to make this super simple and integrated in the coming weeks/months, but for now it is achievable with the above. Feel free to reach out with any questions.
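As referenced in the list above, a minimal sketch of firing a Logic App's HTTP trigger with the client tracking ID set (the trigger URL is a placeholder; the header name comes from the answer):

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class LogicAppCaller
{
    private static readonly HttpClient Http = new HttpClient();

    public static Task FireAsync(string payload, string correlationId)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, "<logic-app-trigger-url>")
        {
            Content = new StringContent(payload, Encoding.UTF8, "application/json")
        };
        // Logic Apps propagates this ID to all actions in the run,
        // so downstream functions and connectors can correlate on it.
        request.Headers.Add("x-ms-client-tracking-id", correlationId);
        return Http.SendAsync(request);
    }
}
```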

Azure Application Insights vs. Log Analytics

Currently I am logging my custom log messages to an Azure Table.
Now I need to automatically trigger the sending of emails based on log types and also need to generate an analysis report from the log messages.
Which service is more suitable to get this done? Azure Application Insights or Azure Log Analytics?
I think Application Insights will fit both: creating reports as well as sending out emails. You can do the same with Log Analytics, but the difference is that Log Analytics is basically a logical store of all your log data in which you can create custom reports, alerts, etc. across many different services; everything can also be nicely visualized in OMS.
As said in the comments, you would need to describe the scenario in a bit more detail.
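For the email-on-error part, a minimal sketch of writing traces with a severity level to Application Insights, assuming the Microsoft.ApplicationInsights package (class and method names are illustrative); an alert rule on, e.g., Error-severity traces can then send the emails, and the analysis report can filter on the same field:

```csharp
using Microsoft.ApplicationInsights;               // NuGet: Microsoft.ApplicationInsights
using Microsoft.ApplicationInsights.DataContracts;

class AppLogger
{
    private static readonly TelemetryClient Telemetry = new TelemetryClient();

    public static void Info(string message)
    {
        Telemetry.TrackTrace(message, SeverityLevel.Information);
    }

    public static void Error(string message)
    {
        // Error-severity traces can drive an alert rule that sends email.
        Telemetry.TrackTrace(message, SeverityLevel.Error);
    }
}
```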
