I have the following resources
One Mobile/API app
One MVC app
Three Logic apps
One Azure function deployment with 5 functions
I want to have a single tracking number (correlation ID) to track across all instances at the same time. I'm looking at the Contoso Insurance sample, but I'm rebuilding it by hand (not using Azure Deploy scripts).
I've read the deployment code, but I'm not sure whether I can merge App Insights logs together, or whether doing so is a hack of some sort.
Observations
When I right-click in Visual Studio, I can only associate the project with Application Insights instances that aren't already connected to a *app (web | mobile | api).
However, in the configuration I can give Application Insights an instrumentation key (GUID) directly, which might allow me to achieve the goal of one App Insights activity log for the entire process.
Question
Is it possible to have one App Insights log shared among all the Mobile/API/Logic/MVC sites?
Is there a way to have (or should I have) one standard App Insights instance per web app, plus a dedicated shared App Insights instance that my code calls into to log?
What is Contoso Insurance doing with Azure App Insights?
Jeff from the Logic Apps team here. The answer is yes, but there are some caveats. We are working to make the experience seamless and automatic, but for now it requires the following.
First, for Logic Apps we have what's called the client tracking ID: a header you can set on an incoming HTTP request or Service Bus message to track and correlate events across actions. It is passed to all steps (functions, connectors, etc.) via the x-ms-client-tracking-id header.
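For illustration, a minimal sketch of firing a Logic App HTTP trigger with that header set (Python; the callback URL and payload are placeholders, only the x-ms-client-tracking-id header name comes from the answer above):

```python
import uuid

import requests

# Use one correlation ID for the whole business transaction; reuse it in
# every App Insights event you emit so the runs can be joined later.
correlation_id = str(uuid.uuid4())

# Placeholder callback URL for a Logic App with an HTTP Request trigger.
logic_app_url = (
    "https://prod-00.eastus.logic.azure.com/workflows/<id>"
    "/triggers/manual/paths/invoke?api-version=2016-10-01&sig=<sig>"
)

response = requests.post(
    logic_app_url,
    json={"customerId": 42, "action": "GenerateInvoice"},
    headers={"x-ms-client-tracking-id": correlation_id},
)
response.raise_for_status()
```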
Logic Apps emits all logs to Azure Monitor, which unfortunately today only has sinks into Event Hubs, Storage, and Log Analytics, not App Insights.
With all of that in-mind, here's the architecture we see many following:
Have your web apps emit to App Insights directly, using a correlation ID as needed. When firing any Logic Apps, pass the x-ms-client-tracking-id header so you can correlate events.
Log your events to App Insights in the Function app. This blog details some of how to do that, and a better experience is being worked on as well.
In your Logic App, either write a function that consumes events off of Azure Monitor and pushes them to App Insights, or write a function that acts as an App Insights "logger" you can call from your workflow to get the data into App Insights as well (see the sketch below).
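As a rough sketch of that second option, here is what a minimal App Insights "logger" function could look like in Python, assuming the applicationinsights package and an HTTP trigger; the instrumentation key and payload shape are placeholders:

```python
import azure.functions as func
from applicationinsights import TelemetryClient

# Placeholder instrumentation key of the shared App Insights resource.
tc = TelemetryClient("<instrumentation-key>")

def main(req: func.HttpRequest) -> func.HttpResponse:
    # The Logic App forwards its client tracking ID so events correlate.
    tracking_id = req.headers.get("x-ms-client-tracking-id", "unknown")
    body = req.get_json()

    tc.track_event(
        body.get("eventName", "LogicAppStep"),
        properties={"clientTrackingId": tracking_id,
                    **body.get("properties", {})},
    )
    tc.flush()  # synchronous flush; fine for low-volume logging
    return func.HttpResponse("logged", status_code=200)
```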
This is how Contoso Insurance is leveraging App Insights, as far as I understand. We are working across all the teams involved (App Insights, Azure Monitor, Azure Functions, Logic Apps) to make this super simple and integrated in the coming weeks/months, but for now it's achievable with the above. Feel free to reach out with any questions.
Related
We have an Azure-based system which is growing in complexity, and we need to monitor chains of events and ensure they arrive where we expect them to arrive.
We have an on-prem Java application which sends events to an IoT Hub. The IoT Hub routes to Service Bus queues. We have functions that update a Cosmos database, trigger other functions, or route to additional queues. Some functions are also callable through an API Management instance.
Our functions are already connected to Application Insights, and here the Application Insights instance is named the same as the Function App (IIRC this naming was suggested through the form that created the AI resource)
The application map in Application Insights makes me lean toward one AI instance per environment, to have a complete map of the system. One Log Analytics workspace per environment also seems logical, to be able to correlate data if needed.
What is the correct path for Log Analytics and Application Insights, respectively?
If it is not as clear-cut as stated in my title, what factors do I need to consider when I start to use these services?
The correct number of instances is the one that works best for you, whether that exactly follows recommended practices or not.
The recommendation is to use one workspace per environment and set cloud_RoleName in App Insights to distinguish the parts of the system. Log Analytics has similar considerations.
Functions defaults to spinning up an App Insights instance along with the app, because if you don't use App Insights you lose most of the logging ability. It's important to connect the app to App Insights, but overriding the default behavior and connecting to a centralized instance is common in larger systems.
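For a Python component pointed at a centralized instance, one way to set cloud_RoleName is a telemetry processor; a sketch using the opencensus-ext-azure exporter, with the connection string and role name as placeholders:

```python
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

handler = AzureLogHandler(
    connection_string="InstrumentationKey=<shared-instance-key>"
)

def set_cloud_role(envelope):
    # cloud_RoleName is what the application map uses to label this node.
    envelope.tags["ai.cloud.role"] = "invoice-api"  # placeholder name
    return True

handler.add_telemetry_processor(set_cloud_role)

logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.warning("telemetry tagged with a custom cloud role")
```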
There are certainly reasons you might want to split the workspaces, and you can union data across workspaces as needed to pull data together from both Log Analytics and App Insights instances (see the query sketch after this list):
Data access control or geographic locations. If you need to keep a portion of the data within certain geographic boundaries or limit access to certain people, then split that portion off.
Similar to the security concern is a billing one. If, for whatever reason, billing for different portions of the application needs to be split, then you would also want to split the logging portion.
Different portions of the system rarely interact, or are maintained by different teams, and organizing the data into separate workspaces provides more benefit than the hassle of cross-workspace queries costs.
You are going to surpass the limitations on a single resource. Very few applications actually hit these limits, but they are there.
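If you do split, a cross-resource query can still pull the data back together. A sketch using the azure-monitor-query SDK, with the resource names and workspace ID as placeholders; the app() expressions are the cross-resource query syntax:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Union request telemetry from two App Insights resources (placeholder names).
query = """
union app('ai-frontend').requests, app('ai-backend').requests
| summarize count() by appName, bin(timestamp, 1h)
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-guid>",
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```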
I have several Azure WebJobs (.NET Framework, not .NET Core) running which interact with an Azure Service Bus. Now I want a convenient way to store and analyze their log messages (including the related message from the Service Bus). We are talking about a lot of log messages per day.
My idea is to send the logs to an Azure Event Hub and store them in an Azure SQL database. Later I can have, for example, a web app that enables users to conveniently browse and analyze the logs and view the messages.
Is this a bad idea? Should I instead use Application Insights?
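A minimal sketch of the publishing side of that idea, shown in Python for brevity (the WebJobs themselves are .NET Framework), using the azure-eventhub SDK; the connection string, hub name, and log record shape are placeholders:

```python
import json
from datetime import datetime, timezone

from azure.eventhub import EventData, EventHubProducerClient

producer = EventHubProducerClient.from_connection_string(
    conn_str="<event-hub-namespace-connection-string>",
    eventhub_name="webjob-logs",  # placeholder hub name
)

log_record = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "level": "Error",
    "webjob": "InvoiceProcessor",
    "message": "Failed to process Service Bus message",
    "serviceBusMessageId": "<message-id>",
}

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(log_record)))
    producer.send_batch(batch)
```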
Application Insights would cost more than your implementation, so I would say this is a good idea. Just one change: I would send each log to a Logic App and do some processing there, such as handling error logs, info logs, etc. differently. Also, why are you thinking about SQL when this can be stored in non-SQL Azure Table storage and fetched from there?
Following is the proposed transition in our application:
The web application is deployed in on-premises IIS (Web Server 1).
The web application has one function of interest (for example, Generate Invoice for a selected customer).
For each new Generate Invoice request, the web application writes a message to an Azure Service Bus queue.
An Azure function is triggered for each new message in the Service Bus queue.
The Azure function calls a Web API (deployed on-premises).
The Web API generates the invoice for the customer and stores it in local file storage.
As of now, we have everything set up on-premises, and instead of Service Bus and an Azure function, we consume the Web API directly. With this infrastructure in place, we currently log all events in a MongoDB collection and provide a single consolidated view to the user, so they can identify what happened to a Generate Invoice request, and at which level and with which error it failed (in case of failures).
With the new proposed architecture, we are in the process of identifying an approach for logging and tracing, and for displaying a consolidated view to the users.
The only option I can think of is to log all events in Azure Cosmos DB from everywhere (i.e., website, Service Bus, function, Web API), and then provide the consolidated view.
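A sketch of what the middle hop could look like with that approach: a Service Bus triggered function (Python v2 programming model) that calls the on-premises Web API and writes each outcome to Cosmos DB under a correlation ID. All names, URLs, and keys are placeholders:

```python
import json
import uuid
from datetime import datetime, timezone

import azure.functions as func
import requests
from azure.cosmos import CosmosClient

app = func.FunctionApp()

# Placeholder Cosmos DB details for the consolidated event log.
cosmos = CosmosClient("https://<account>.documents.azure.com:443/",
                      credential="<key>")
events = cosmos.get_database_client("logging").get_container_client("events")

def log_event(correlation_id: str, source: str, status: str,
              detail: str = "") -> None:
    events.create_item({
        "id": str(uuid.uuid4()),
        "correlationId": correlation_id,  # partition key candidate
        "source": source,
        "status": status,
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

@app.service_bus_queue_trigger(
    arg_name="msg", queue_name="invoice-requests",
    connection="ServiceBusConnection",
)
def generate_invoice(msg: func.ServiceBusMessage):
    body = json.loads(msg.get_body().decode("utf-8"))
    correlation_id = body.get("correlationId", str(uuid.uuid4()))
    log_event(correlation_id, "function", "Received")
    try:
        # Placeholder on-premises Web API endpoint (e.g. via hybrid connection).
        resp = requests.post("https://onprem-api.example.com/invoices", json=body)
        resp.raise_for_status()
        log_event(correlation_id, "function", "InvoiceRequested")
    except requests.RequestException as exc:
        log_event(correlation_id, "function", "Failed", detail=str(exc))
        raise
```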
Can anyone confirm whether this approach looks OK, or suggest a better solution?
Application Insights monitors the availability, performance, and usage of your web applications whether they're hosted in the cloud or on-premises. It leverages the powerful data analysis platform in Azure Monitor to provide you with deep insights into your application's operations and diagnose errors without waiting for a user to report them.
Workbooks combine data visualizations, Analytics queries, and text into interactive documents. You can use workbooks to group together common usage information, consolidate information from a particular incident, or report back to your team on your application's usage.
For more details, you could refer to this article.
I'm trying to find any documentation or advisory info on whether to use the same App Insights instance from multiple regions. I'm assuming that if I have an API App Service in East US, it's not recommended to use an App Insights instance from the West US region, as it would add latency.
I just got feedback from the Microsoft Application Insights team; the answer is that there is no performance issue:
Application Insights sends data to its backend asynchronously, so the actual network round-trip time should not matter. Also, even though the App Insights instance is in the West region, the 'ingest' endpoints are globally distributed, and telemetry is always sent to the nearest available 'ingest' point.
Details are here.
As for an official document, I'm asking them for one (but I'm not sure they have one).
Hope it helps.
I have an Azure Logic App that monitors an SFTP site for new files; if it finds one, it sends a message to an Azure queue for subsequent processing, then deletes the file. My application has grown in scale, and a single Logic App seems to only be grabbing 5-10 files a minute.
Is it possible to set up a second (third, fourth, etc.) Logic App that monitors the same SFTP site without the apps conflicting/colliding with each other? I also see that there is a "High Throughput" setting that seems interesting, but I'm not sure it is what I need. My ultimate goal is to process more files faster, and I am considering swapping the Logic App for a scheduled WebJob that monitors the SFTP site (a rough sketch of that alternative follows below). Since I am live and files are pouring in, I am a little reluctant to change anything until I know it's safe.
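For reference, a rough sketch of that scheduled-job alternative, using paramiko for SFTP and azure-storage-queue for the queue; the host, path, and connection string are placeholders:

```python
import paramiko
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string(
    "<storage-connection-string>", queue_name="incoming-files"
)

# Placeholder SFTP credentials.
transport = paramiko.Transport(("sftp.example.com", 22))
transport.connect(username="<user>", password="<password>")
sftp = paramiko.SFTPClient.from_transport(transport)

try:
    for name in sftp.listdir("/inbox"):
        path = f"/inbox/{name}"
        queue.send_message(path)  # enqueue first...
        sftp.remove(path)         # ...then delete, mirroring the Logic App
finally:
    sftp.close()
    transport.close()
```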
Any insight would be appreciated.
Thanks!!
Logic Apps fall under serverless architecture. If you select the pricing model based on 'number of executions', it can impact performance, since Microsoft allocates shared resources for that pricing model and work is processed whenever a server frees up. I would recommend attaching an App Service plan to it and selecting the 'per minute' pricing model.
One more point: if you want to run longer operations, a Logic App is not the appropriate choice; but since you are connecting to enterprise integration, a Logic App is a good fit. I would recommend dividing this functionality between a Logic App and an Azure Function, or Microsoft Flow.