Find a specific console log among all the application logs in the browser

When you apply a filter in the console and find your log among many, you made it: you found your log. But when the filter is removed, the scroll jumps to the top or bottom, so you cannot see where the log sits relative to all the rest of the logs in the app.
I'd like that, when a filtered log is selected in the console and the filter is removed, the scroll stays at that point among all the rest of the application's logs. Either because there is a console.group I'd like to expand and check, or because I'd like to see the position of my log of interest relative to the other logs.
Is this possible?

Is there a way to get notified by Azure ApplicationInsights when a new exception appears?

We are using Application Insights by Azure. At the moment I have to manually check the exceptions after each deployment to see if a new one appeared. Has anyone figured out a way to get notified (via Azure alert) once a new exception appears? For example, other error trackers like Sentry support this.
Example:
We did a deployment at 15:15
A previously unknown exception appears at 15:17
An email is sent to me with content "New exception X appeared in project Y"
Here is a screenshot demonstrating this a bit more clearly:
Smart detections are being replaced by alerts. The only way to get notified is to write a log query that surfaces your new exceptions, and configure the evaluation period so the alert can fire.
Navigate to the Application Insights resource in the Azure Portal.
Select Logs under the Monitoring blade.
Construct your log query and check the results.
Click on + New alert rule.
Configure your alert as follows:
The above alert fires whenever the count of results of the Custom log search query for the last 1 day is greater than 0, and it is evaluated every 6 hours. You can customize the period and frequency as needed.
You can also run through this detailed guide for troubleshooting problems with Azure Monitor alerts. Please check if this helps.
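As a sketch, a log query for the alert rule could look like the following. The `exceptions` table is standard in Application Insights Logs; the time windows and the anti-join approach here are an illustration you would tune to your deployment cadence. It returns exception types seen in the last day that did not occur in the previous week, i.e. "new" exceptions:

```kusto
exceptions
| where timestamp > ago(1d)
| summarize count() by type
| join kind=leftanti (
    // baseline: exception types already seen in the prior week
    exceptions
    | where timestamp between (ago(8d) .. ago(1d))
    | summarize count() by type
) on type
```

Wiring this into the alert rule from the steps above, a result count greater than 0 means a previously unseen exception type appeared.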
You can try Smart Detection, specifically the alert for abnormal rise in exception volume.
When would I get this type of smart detection notification?
You get this type of notification if your app is showing an abnormal rise in the number of exceptions of a specific type, during a day. This number is compared to a baseline calculated over the previous seven days. Machine learning algorithms are used for detecting the rise in exception count, while taking into account a natural growth in your application usage.
If you never got a specific exception before a release, I would consider that a rise in exceptions for that type, and you should get an alert. However, the alert won't fire if there are very few exceptions occurring, and it won't be as detailed as you described in your question.

How to debug an XPages application in Bluemix?

I have deployed an XPages app to Bluemix and until now I have not come further than an Error 500 message.
The error page does not seem to come up. I cannot access log.nsf. Can I bring in openlog.nsf?
I am not sure what my options are so far.
When you click on your application in the Bluemix dashboard, the left menu will contain an entry 'Logs'. There you can see your log files; via dropdowns you can narrow the log down by log type or channel.
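If you prefer the command line, the Cloud Foundry CLI that Bluemix is built on can fetch the same logs, assuming you have the cf CLI installed and are logged in to your org and space (myxpagesapp below is a placeholder for your application name):

```shell
# Tail the application log stream in real time
cf logs myxpagesapp

# Or dump the recent log buffer and exit
cf logs myxpagesapp --recent
```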

powerbi not refreshing in time

I am having the same issues with Power BI: the automatic refresh seems to be failing, and I currently have to click refresh to see new data coming in. I have configured the tumbling window, e.g. tumblingwindow(second,3), and pinned the live tile to the dashboard. Are there any other settings/factors I have to set for the automatic refresh to work? (It's a console app that selects data from a database and sends each row to Event Hubs; from Event Hubs it goes to Stream Analytics, and the output is Power BI.) I am assuming there are time restrictions depending on throughput, but how do I really calculate the time I should set for the tumbling window? I have tried the equation entitycount*60*60/throughput = seconds, still no success.
Below is my code, but the events still take time to reach Power BI even with tumblingwindow(second,3). I could stop my application and delete the dataset from Power BI, but then the dataset will reappear.
EventData data = new EventData(Encoding.UTF8.GetBytes(serializedobjects));
// Await the send; a fire-and-forget call silently drops any errors.
await eventHubClient.SendAsync(data);
The overall workflow you describe seems fine, and the tumbling window size should not cause the data to never show up. You will have to debug this issue with the following steps:
Go to the Azure portal, open the inputs page for the Stream Analytics job, and get a "sample". Does it return any samples?
Go to the Monitoring page in the Azure portal and check whether input events and output events are greater than zero. If you see input events but no output events, you are likely hitting an error writing to the output.
Also check whether the operation logs have any errors for this job.
The above steps should tell you if something is wrong with the event format or the output.
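For reference, a minimal Stream Analytics query with a 3-second tumbling window looks like the sketch below. The input and output aliases and the grouping column are placeholders for whatever your job actually uses:

```sql
SELECT
    DeviceId,                -- placeholder grouping column
    COUNT(*) AS EventCount
INTO
    [powerbi-output]         -- your Power BI output alias
FROM
    [eventhub-input]         -- your Event Hub input alias
GROUP BY
    DeviceId,
    TumblingWindow(second, 3)
```

If a query of this shape produces output events in the Monitoring page but nothing shows in Power BI, the problem is on the dashboard side rather than the window size.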
Are you pinning the individual live tile to the dashboard or the entire report? Pinning the entire report does not appear to work.
If you pin the single tile containing the data you want, does that refresh in real time?

Set order of viewmodel creation in catel

Is it possible to set the order in which the view models are created?
I want to display log messages in a view of their own. Some other views are created before it, and their logs on startup are not displayed because they are created before this log view.
This is not possible, since they are loaded on-demand when the views are loaded (and that's up to WPF).
There are a few options:
[Recommended] Create a service that holds the log messages from the start. The log view model then loads the service and retrieves the messages it missed. The service (which you have full control over) keeps track of the log, so you can even hide the log view in between and still show all messages.
Create a custom implementation of IViewModelFactory. Then you have full control over when and how view models are created, and you can create a LogViewModel at startup and return it once it is required by Catel.
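A minimal sketch of the recommended service approach might look like this. The interface and class names below are made up for illustration, and the wiring into Catel's logging (e.g. a log listener that calls Append) is left out, since the exact listener API depends on your Catel version:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical service that buffers log messages from application start,
// so a view model created later can still show the messages it missed.
public interface ILogBufferService
{
    IReadOnlyList<string> BufferedMessages { get; }
    event EventHandler<string> MessageLogged;
    void Append(string message);
}

public class LogBufferService : ILogBufferService
{
    private readonly List<string> _messages = new List<string>();

    public IReadOnlyList<string> BufferedMessages => _messages;

    public event EventHandler<string> MessageLogged;

    public void Append(string message)
    {
        _messages.Add(message);                // keep history for late subscribers
        MessageLogged?.Invoke(this, message);  // push to anyone already listening
    }
}
```

In the log view model's constructor you would first replay BufferedMessages, then subscribe to MessageLogged for live updates, so no startup messages are lost regardless of creation order.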

unable to save kibana dashboards correctly

I am trying to save kibana dashboards - I have tried the regular save as well as save to file option. In either case, I am unable to get the same dashboard to open up - the error I see at the top is as follows:
Error Alert
No time filter Timestamped indices are configured without a failover. Waiting for time filter
Before saving the dashboard, I can see the logs correctly in kibana. Any thoughts on troubleshooting or fixing this will be greatly appreciated.
That error happens when you try to refresh the dashboard and you do not have a time filter. Try selecting, for example, Last 15m, and the error should no longer appear.