I am trying to clean up App Insights logs. I noticed that in the Traces table I get thousands of the same 2 messages. I was wondering if anyone knows where they come from, and whether they can be prevented from being logged in the future. They don't seem to provide any value to my team, and since they just take up a ton of unnecessary space, I'd like them removed, at least going forward. The messages in particular are:
Message="Goal state with incarnation 1 retrieved."
and
Message="Retrieving goal state from fabric..."
These get added multiple times every minute. I have not found any evidence of these strings in our code, nor have I found any information about them on the web. Thank you in advance.
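For scale, a query like this against the classic `traces` table (a sketch; adjust the table and column names if your schema differs) shows how many of these rows accumulate:

```kusto
traces
| where message startswith "Goal state" or message startswith "Retrieving goal state"
| summarize count() by message
```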
So you can view past runs of Logic Apps (LA)... but if a loop (with many steps within it) is present in your Logic App and you stop the LA run (because it seems to run forever / isn't doing what you expect), you can't see what happened inside the loop.
I want to be able to track the Logic App's (LA) progress. I thought about adding an additional table storage step between every step to log where it's at; this would work, but that's a daft amount of work just to see what your LA is doing.
I tried adding diagnostics/Log Analytics to the LA, but it just seems to give a broader view of the LA runs... not the detail I need. Can someone tell me whether diagnostics can give me the detail I'm looking for, or if there is another way of doing this? There must be a way.
Thanks.
The past runs should allow you to iterate through the iterations of the loop, showing the detail of the actions within.
If this doesn't satisfy, you can also add Tracked Properties to log specific values from within the loop execution to Log Analytics, in the AzureDiagnostics table.
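For illustration, a tracked property on an action inside the loop might look like this in the workflow definition (a sketch; the action, property name, and expression are hypothetical placeholders for your own):

```json
{
  "Compose_Item": {
    "type": "Compose",
    "inputs": "@items('For_each')",
    "trackedProperties": {
      "currentItemId": "@action().inputs.id"
    }
  }
}
```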
As seen in the graphs above, graph no. 1 is the response time graph, and it shows a sudden spike in the middle of the test; after that it seems to run consistently.
On the other hand, the throughput graph, graph no. 2, shows a downward spike, though not a sudden one: it gradually decreased. I also got two different throughput values, before and after the drop.
I first thought it was a memory issue, but that should have affected response time as well.
Could anyone help me in knowing the reason behind the sudden spike in the Response Time graph?
And what could the possible bottleneck be, if it is not a memory leak?
Unfortunately these 2 charts don't tell the full story, and without knowing your application details and technology stack it's quite hard to suggest anything meaningful.
A couple of possible reasons could be:
Your application is capable of autoscaling, so when the load reaches a certain threshold it either adds more resources or kicks off another node of the cluster.
Your application is undergoing e.g. garbage collection: its heap is busy with stale objects, and once the collection is done it starts working at full speed again. You might want to run a soak test to see whether the pattern repeats or not.
Going forward, consider collecting information on what's going on at your application-under-test side using e.g. the JMeter PerfMon Plugin or the SSHMon Listener.
I am trying to troubleshoot a problem where I run an Azure Function locally on my machine while it is disabled in the Portal. After sending some data through, I can see that it successfully hits my local Azure Function, but it never hits it again afterwards. Strangely enough, the data still appears to go through my channels of Queue - Function - Queue - Function, yet it never hits the breakpoints on my local machine after the first successful run. Triple-checking the Portal, I can see that it is definitely disabled, which leads me to believe there might be another instance of the Azure Function running about. I've confirmed that no other devs are working on it, so I've ruled that out...
Looking at https://[MY_FUNCTION_NAME].scm.azurewebsites.net/azurejobs/#/functions, I see that there seem to be duplicates of some of my functions, with varying statistics across the repeats. My guess is that Azure might be tracking my local instances when I start them, but I see the "Successful" green numbers go up on both versions of the function when I pass data through. I blocked out the function names but marked the matching ones with matching colors (the blacked-out bars are just single functions I was too lazy to color-code). The red circles indicate the functions of interest that have different success statistics.
Has anyone else run into this issue?
Turns out there were duplicate functions in a slot setting... someone put them there to set up deployment options, but they left the project and never documented it.
Hope this saves someone some frustrations at some point!
I'm relatively new to Azure and am trying to see if there's a way to create notifications in real time (or close to it) whenever only certain exceptions occur, using Application Insights.
Right now I'm able to track exceptions and to trigger metric alerts when a threshold of exceptions occurs over a certain amount of time, but I can't seem to figure out how to make these alerts sensitive to only certain kinds of exceptions. My first thought was to add properties to an exception as I track it with a telemetry client's TrackException method, then create an alert specific to that property, but I'm still unable to figure out how to do it.
Any help is appreciated.
A couple of years later, there's now a way to mostly do this with built-in functionality.
There isn't an easy way to do this on every exception as it occurs, though. Some apps have literally billions of exceptions per day, so evaluating your function every time an exception occurs would be pretty expensive.
Things like this are generally done with custom alerts that run a query and check whether anything meeting the criteria exists in the new time period.
You'd do this with "log alerts", documented here: https://learn.microsoft.com/en-us/azure/azure-monitor/platform/alerts-unified-log
Instead of getting an email every time a specific exception occurs, your query runs every N minutes; if any rows meet the criteria, you get a single mail (or whatever you have the alert configured to do), and you keep getting mails every N minutes in which rows that meet the criteria are found.
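For example, a log alert query scoped to a single exception type might look like this (a sketch; the type name is just a placeholder):

```kusto
exceptions
| where timestamp > ago(5m)
| where type == "System.InvalidOperationException"
| summarize count()
```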
There are two options:
Call TrackMetric (providing some metric name) when an exception of a particular type happens, in addition to TrackException. Then configure an alert based on this metric.
Write a tool/service/Azure Function which runs a query in Application Insights Analytics every few minutes and posts the result as a metric (using TrackMetric). Then configure the alert from the portal.
Right now the AI team is working on providing #2 out of the box.
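Option #2 can be sketched as a small polling step (Python; `run_query` and `post_metric` are hypothetical stand-ins for the Application Insights Analytics query call and a TrackMetric call):

```python
import time

def poll_exception_metric(run_query, post_metric,
                          exception_type="System.InvalidOperationException"):
    """One polling cycle: count recent exceptions of one type and
    publish the count as a custom metric that a portal alert can watch."""
    query = f'exceptions | where type == "{exception_type}" | count'
    count = run_query(query)                       # e.g. an Analytics REST API call
    post_metric("FilteredExceptionCount", count)   # e.g. TelemetryClient.TrackMetric
    return count

def run_forever(run_query, post_metric, interval_seconds=300):
    """The body of the tool/service/Azure Function: poll every few minutes."""
    while True:
        poll_exception_metric(run_query, post_metric)
        time.sleep(interval_seconds)
```

The alert itself is then a plain metric alert on `FilteredExceptionCount`, which the portal already supports.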
I have a console/desktop application that crawls a lot (think millions of calls) of data from various web services. At any given time I have about 10 threads performing these calls and aggregating the data into a MySQL database. All seeds are also stored in a database.
What would be the best way to report its progress? By progress I mean:
How many calls already executed
How many failed
What's the average call duration
How much is left
I thought about logging all of them somehow and tailing the log to get the data. Another idea was to expose some kind of output on an always-open TCP endpoint, where some form of UI could read the data and display an aggregation. Both ways look too rough and too complicated.
Any other ideas?
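To make the four numbers above concrete, here is a minimal thread-safe progress tracker (a Python sketch; the class and method names are my own, and each worker thread would call `record` after every web service call):

```python
import threading

class Progress:
    """Thread-safe progress counters for a multi-threaded crawler."""

    def __init__(self, total_calls):
        self.total = total_calls
        self.done = 0
        self.failed = 0
        self.elapsed = 0.0            # summed call durations, in seconds
        self._lock = threading.Lock()

    def record(self, duration, ok=True):
        """Called by a worker thread after each call completes."""
        with self._lock:
            self.done += 1
            self.elapsed += duration
            if not ok:
                self.failed += 1

    def snapshot(self):
        """Consistent view of all four progress numbers."""
        with self._lock:
            avg = self.elapsed / self.done if self.done else 0.0
            return {"executed": self.done,
                    "failed": self.failed,
                    "avg_duration": avg,
                    "remaining": self.total - self.done}
```

A reporting thread, a console printer, or a TCP endpoint can then all read `snapshot()` without caring how the counters are updated.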
The "best way" depends on your requirements. If you use a logging framework like NLog, you can plug in a variety of logging targets like files, databases, the console or TCP endpoints.
You can also use a viewer like Harvester as a logging target.
When logging multi-threaded applications, I sometimes have an additional thread that writes a summary of progress to the logger every so often (e.g. every 15 seconds).
Since it is a console application, just use WriteLine and have the application spit the important stuff out to the console.
I did something similar in an application I created to export PDFs from a SQL Server database back into PDF format.
You can do it many different ways. If you are counting records and their size, you can keep a running tally of sorts and have it show the total every so many records.
I also wrote out to a text file so that I could keep track of all the PDFs and what case numbers they went to, and things like that. That information is in the answer I gave to the question linked above.
You could also write statistics out to a text file every so often.
The logger that Eric J. mentions is probably going to be a little bit easier to implement, and would be a nice tool for your toolbox.
These options are just as valid, depending on your specific needs.