Spring Integration helps us a lot with application integration; it lets us focus more on flow design.
However, we want to log all file-processing steps and use log-analytics tools to check how one specific file (message) has been processed.
The question is: how do we log a grouping ID for each message, so that another log-analytics tool can group the entries that belong to it?
Thanks
Consider turning on Message History for your application. That way, at the end of the flow, you can extract the MessageHistory.HEADER_NAME header with all the traveling information for the message.
Otherwise you really have no choice but to add some business header at the beginning of the flow and parse the logs for that common key.
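For example, here is a minimal sketch using the Spring Integration Java DSL (5.x); the channel name, the fileTraceId header, and the flow contents are illustrative assumptions, not something from the original question:

import java.util.UUID;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.config.EnableMessageHistory;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.history.MessageHistory;

@Configuration
@EnableMessageHistory // records every component each message passes through
public class FlowHistoryConfig {

    @Bean
    public IntegrationFlow fileFlow() {
        return IntegrationFlows.from("filesIn") // hypothetical input channel
                // alternative approach: stamp a business correlation header up front
                .enrichHeaders(h -> h.headerFunction("fileTraceId",
                        m -> UUID.randomUUID().toString()))
                // ... the actual file-processing steps would go here ...
                .handle(m -> {
                    // MessageHistory is a List<Properties>, one entry per component traversed
                    MessageHistory history = MessageHistory.read(m);
                    System.out.println("fileTraceId=" + m.getHeaders().get("fileTraceId")
                            + " path=" + history);
                })
                .get();
    }
}

Either the history header or the business header then gives your log-analytics tool a common key to group all entries for one file.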
Just landed a new position where I will be in charge of doing some system integrations and automations with regard to security. I have never done any integrations or automations, so this is my first rodeo. I have the following tools at my disposal:
ZScaler
Azure Sentinel
Microsoft Cloud App Security
Microsoft Power Suite
McAfee ePO
I have been given a list of action items to complete. A lot of them require responding to an incident as soon as it occurs, which is where I am lost. So for example, say Zscaler detects an infection and we want X and Y actions to happen once it is detected. How do I ensure our systems are alerted immediately after the incident occurs? I am guessing this is a matter of querying the API, but what is the proper way of setting this up with the tools I have?
Normally you would send the logs of those security tools to Log Analytics and construct KQL queries on top of them.
For example, after setting up a custom log source for McAfee ePO, you could create a recurring query such as:
McafeeEPO_CL
| where EventType_s == "ThreatEventLog"
| extend HostCustomEntity = hostname_s, AccountCustomEntity = username_s, IPCustomEntity = ipv4_s
I used https://github.com/Azure/Azure-Sentinel/blob/master/Detections/EsetSMC/eset-threats.yaml as an example; you can check the other detection queries in that repository as well.
There is a year-old question, How can I retrieve through an API *Live Metrics* of Microsoft Application Insights, about whether it is possible to pull the LiveMetrics data that Application Insights generates for an application through some API.
Right now I don't see anything live-related in the official documentation (https://dev.applicationinsights.io/reference), and the answer to the old question was also that there is no way to get this data.
But maybe someone knows whether the Application Insights team's plans have changed this year and they are working on such an API?
It might be really useful to pull that data in real time through an API into our own alerting/metrics system, to process data from different microservices/applications and display it in an aggregated way in real time.
As an example, we could build something like OpServer has, but based on different applications and their Application Insights data.
As of right now, there is no way to get it.
Note: I work on the Application Insights team at Microsoft.
LiveMetrics data is not persistently stored anywhere, and there is no API to retrieve it. The data is collected only while someone is actively on the Live Metrics portal page; the moment the browser window is closed, the data is gone as well.
If your goal is to get metrics and other telemetry in real time, you can do that by implementing your own ITelemetryProcessor. Most people use an ITelemetryProcessor to "filter" out unwanted telemetry, but that is not a rule. All telemetry passes through the telemetry processors, and you can choose to filter the data or do something else with it; in your case, you want to send it instantly to some real-time service. In fact, LiveMetrics (internally known as QuickPulse) is implemented as a telemetry processor (https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/WEB/Src/PerformanceCollector/Perf.Shared/QuickPulseTelemetryProcessor.cs#L158).
General doc about TelemetryProcessor:
https://learn.microsoft.com/en-us/azure/azure-monitor/app/api-filtering-sampling#create-a-telemetry-processor-c
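The linked QuickPulse implementation is from the .NET SDK; as a rough illustration in Java, a minimal sketch against the classic Application Insights Java SDK (2.x, which exposes a similar TelemetryProcessor hook) might look like the following. The forwarding target is a hypothetical placeholder, not an SDK API:

import com.microsoft.applicationinsights.extensibility.TelemetryProcessor;
import com.microsoft.applicationinsights.telemetry.Telemetry;

// Instead of filtering, forward every telemetry item to your own real-time pipeline.
// Processors like this are registered under <TelemetryProcessors> in ApplicationInsights.xml.
public class RealTimeForwardingProcessor implements TelemetryProcessor {

    @Override
    public boolean process(Telemetry telemetry) {
        publishToOwnRealtimeService(telemetry); // hypothetical transport (queue, websocket, ...)
        return true; // keep the item; returning false would filter it out
    }

    private void publishToOwnRealtimeService(Telemetry telemetry) {
        // push to your aggregated real-time metrics/alerting system here
    }
}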
I am in need of a way to implement error logging and to provide a way for admins to retry any failure that occurs within a SuiteScript.
Here are my thoughts on the implementation:
Let's say that for a RESTlet I log the incoming data, or the data coming into any User Event script, in a text file along with its status (success or failure). Later, a scheduled script processes that text file and may send those errors to my .NET API, where I can provide a way for admins to retry.
Could anyone suggest how this is normally done in NetSuite projects?
For similar systems, I typically advise creating Custom Records. Your custom records can have a field to store the raw data (JSON, XML, etc.) as well as a Status (Succeeded, Failed, Retry, etc.). You could consider retry mechanisms like having a User Event script on the Custom Record that retries immediately upon creation of the record; then, if that fails, have a Map/Reduce script that runs on a regular schedule to clean things up.
If the native Execution Logs aren't providing enough functionality for you in that respect, you can add a Custom Record for "logging" as well, but I'd suggest trying to use the native logs first. The Script Execution Logs UI provides reasonable searching/filtering capabilities.
I'm having trouble modelling a continuous event, as well as an activity that happens as part of that continuous event.
I am modelling the tracking of a website by a Marketing Automation platform. The tracking is a continuous event that is constantly happening. One part of this tracking is the 'form submit', which triggers a new flow of activities that creates a contact in the CRM.
Questions:
1. How do I make an activity in 'Tracking' invoke a new flow of activities when a form is submitted?
2. If I model this using a fork, both flows (Tracking and form submit) need to occur before the overall flow can continue.
I basically need a fork where one flow is optional.
Kind regards,
--- If I made any mistakes regarding posting, please direct me to the correct location / necessary corrections ---
Basically you are right: you use a fork to create a "forked off" flow. Like @JimL, I have trouble understanding the details of your question, but basically a fork is the way to go.
A few remarks on your diagrams:
The first has a ConditionalNode with no alternatives. That's pretty pointless; I guess you are just missing a path down to bypass Log Tracking.
The second has an unguarded, unconditional flow at the top from a fork to a join. That's pointless, and you should just remove that flow.
The Form not filled and No form on page flows are also pointless without connected actions (which should likely be some error actions).
The Yes/No labels should be written in brackets (like [Yes]) to make them guards, which they actually are. I guess the other named flows should also rather be guards.
The fork following Form submit actually does what you are asking for.
Activity diagrams are derived from Petri nets. You need to imagine a virtual token traveling along the control flows. A fork will fire as many tokens as it has outgoing flows on receipt of an incoming token. A join will only send its outgoing token(s; depending on the number of outgoing flows) once it has received a token on each incoming flow.
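To make that firing rule concrete, here is a tiny toy model in plain Java (nothing UML-specific; the flow names are made up):

import java.util.HashSet;
import java.util.Set;

// A join emits its outgoing token only after every incoming flow has delivered one.
public class JoinDemo {

    static class Join {
        private final Set<String> required;
        private final Set<String> received = new HashSet<>();

        Join(Set<String> incomingFlows) {
            this.required = incomingFlows;
        }

        // returns true (the join "fires") once all incoming flows have delivered a token
        boolean offerToken(String flow) {
            received.add(flow);
            return received.containsAll(required);
        }
    }

    public static void main(String[] args) {
        Join join = new Join(Set.of("tracking", "formSubmit"));
        System.out.println(join.offerToken("tracking"));   // false: still waiting
        System.out.println(join.offerToken("formSubmit")); // true: the join fires
    }
}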
I have a web application that monitors farms in certain areas. Right now I am having trouble automating some of the tasks.
Users of the web application can send reports or check-ins using keywords. If a report or check-in matches certain keywords, for example "alert", I need the web application to email an alert to the user who sent it. But that alert must be sent two weeks after the date the report was received, and to that particular user only.
Would it be possible to use cron for this? If not, can anyone suggest a workaround?
A possible approach is to store an entry in a database for each of these reminder emails, at the time your user performs whatever action in your application determines that the email needs to be sent. Include the recipient, the date it is due to be sent, and the email content in each entry. Schedule a single cron job to run periodically, process the records that are due, and populate an email template to send out. You can then either delete the database records or, a better option, include a column that indicates they were sent and mark them as sent.
It would help to say which technology stack you're operating on and what the application is developed in. Others might then be able to point you to technology-specific approaches or pre-built plugins/extensions that already do this for your situation, saving you from writing your own code.
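Since the stack isn't specified, here is a minimal sketch of the cron-driven sender in plain Java with JDBC, purely for illustration; the table layout, connection URL, and the sendEmail() helper are all assumptions:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Run periodically from cron, e.g.: */5 * * * * java -jar reminder-job.jar
public class ReminderJob {

    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection("jdbc:postgresql://localhost/app");
             PreparedStatement due = db.prepareStatement(
                 "SELECT id, recipient, subject, body FROM reminder_email " +
                 "WHERE sent = FALSE AND send_at <= NOW()");
             PreparedStatement markSent = db.prepareStatement(
                 "UPDATE reminder_email SET sent = TRUE WHERE id = ?")) {
            try (ResultSet rs = due.executeQuery()) {
                while (rs.next()) {
                    sendEmail(rs.getString("recipient"), rs.getString("subject"),
                            rs.getString("body"));
                    markSent.setLong(1, rs.getLong("id"));
                    markSent.executeUpdate(); // mark as sent rather than deleting
                }
            }
        }
    }

    private static void sendEmail(String to, String subject, String body) {
        // plug in your actual mail transport here (SMTP, provider API, ...)
    }
}

The row itself would be inserted when the matching report arrives, with send_at set two weeks out, so the job only ever sends what is due.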