Existing tool to parse and analyze logs - Node.js

I'm building a Node.js application that pulls data from several APIs, then collects and organizes it. However, the need has arisen for systematic logging and for displaying differential logs: the application needs to show users what changed between consecutive states, or within a specified time span. Is there an existing tool that would help me achieve that?
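For what it's worth, the core of the requirement can be sketched in a few lines of Node.js: keep timestamped snapshots of the state and log the diff between consecutive snapshots, which can then be filtered by time span. The sketch below uses hypothetical helper names and a hand-rolled shallow diff (an existing library such as deep-diff could take its place); it only illustrates the behaviour being asked about, not a specific tool.

    // diff-log.js - minimal sketch of differential logging between state snapshots
    // (hypothetical helper names; a library such as deep-diff could replace shallowDiff)

    // Compare two flat objects and return fields that were added, removed, or changed.
    function shallowDiff(prev = {}, next = {}) {
      const changes = [];
      const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
      for (const key of keys) {
        if (!(key in prev)) changes.push({ key, kind: 'added', to: next[key] });
        else if (!(key in next)) changes.push({ key, kind: 'removed', from: prev[key] });
        else if (prev[key] !== next[key]) changes.push({ key, kind: 'changed', from: prev[key], to: next[key] });
      }
      return changes;
    }

    // Append a timestamped diff entry every time a new state is recorded.
    const log = [];
    function recordState(prevState, nextState) {
      log.push({ at: new Date().toISOString(), changes: shallowDiff(prevState, nextState) });
    }

    // Return all diff entries recorded within a time span.
    function changesBetween(fromIso, toIso) {
      return log.filter((entry) => entry.at >= fromIso && entry.at <= toIso);
    }

    module.exports = { shallowDiff, recordState, changesBetween };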

Related

Cognos REST API and scheduling schema loading

I am trying to find out more information about using the REST API in order to create a schedule for schema loading. Right now, I have to reload the particular schemas via my data server connections manually (click on every schema and Load Metadata) and would like to automate this process.
Any pointers will be much appreciated.
Thank you
If the metadata of your data warehouse is so in flux that you need to reload it frequently enough to want to automate the process, then you need to understand that your data warehouse is in no way ready for use.
So the question becomes: why would you want to frequently reload the metadata of a data source schema? I'm guessing that you are refreshing the data in your database and, because your query cache has not expired, you are not seeing the new data.
So the answer is: you probably don't want to do what you think you need to do, unless you can convince me otherwise.
Also, if you enter some obvious search terms, you will find the Cognos Analytics REST API documentation without too much difficulty.

Would Prometheus and Grafana be an incorrect tool to use for request logging, tracking and analysis?

I am currently creating a faster test harness for our team and will be recording a baseline from our prod SDK run and our staging SDK run. I am running the tests via Jest and eventually want to send the parsed requests and their query params to a datastore of some kind, with a nice UI around it for tracking.
I thought that Prometheus and Grafana would be able to provide that, but after getting a little POC working yesterday, it seems that this combo is used more for tracking application performance than for request log handling/manipulation/tracking.
Is this the right tool for what I am trying to achieve, and if so, could someone shed some light on where I might find more reading aligned with what I am trying to do?
Prometheus does only one thing, and it does it well: it collects metrics and stores them. It is used to monitor your infrastructure or applications for performance, availability, error rates, and so on. You can write rules using PromQL expressions to create alerts based on conditions and send them to Alertmanager, which can forward them to PagerDuty, Slack, email, or any ticketing system. Even though Prometheus comes with a UI for visualising the data, it's better to use Grafana, since Grafana is very good at this and makes the data easier to analyse.
If you are looking for distributed tracing tools, you can check out Jaeger.
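To make the distinction concrete, here is roughly what the Prometheus side of a Node.js service looks like: the service exposes aggregated metrics (counters, histograms) on an endpoint that Prometheus scrapes, rather than shipping raw request logs anywhere. A minimal sketch, assuming the prom-client and express npm packages; the route and metric names are made up for illustration:

    // metrics.js - expose request metrics for Prometheus to scrape (not request logging)
    const express = require('express');
    const client = require('prom-client');

    // Count requests by route and status; Prometheus stores these as time series.
    const requestCounter = new client.Counter({
      name: 'app_requests_total',
      help: 'Total HTTP requests handled',
      labelNames: ['route', 'status'],
    });

    const app = express();

    app.get('/hello', (req, res) => {
      res.send('hello');
      requestCounter.inc({ route: '/hello', status: res.statusCode });
    });

    // Prometheus scrapes this endpoint on an interval; Grafana then queries Prometheus.
    app.get('/metrics', async (req, res) => {
      res.set('Content-Type', client.register.contentType);
      res.end(await client.register.metrics());
    });

    app.listen(3000);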

Best practice for logging mechanism in ETL processing

What is the best practice for a logging mechanism in ETL processing?
We are developing an ETL application, and we want to use log analytics to log data.
Could anybody provide best practices for a logging mechanism that meets industry standards?
I have googled and found this link: https://www.timmitchell.net/post/2016/03/14/etl-logging/
Any help is appreciated.
Thanks in advance.
I recently implemented one at an organisation. It is custom built because of the technology choices involved. The following is what is included in the logging.
It acts as a wrapper around any ETL job, i.e. there is a template, and the template has built-in logging (see the sketch after the list below)
The template has a master/child job feature and logs according to whether the job is a master or a child
The logging captures the following:
Status of the job - success, failure, warning
Source details (e.g. name of the file or source table)
Data classification tagging
Business owner of the incoming data source
Row count of the raw file vs. the row count loaded
An alert sent to a distribution list if the job fails
A ticket raised via the service desk if the job fails
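A minimal Node.js sketch of that wrapper idea, for illustration only (the implementation described above was tied to a specific ETL technology; sendAlert and raiseTicket are hypothetical hooks for the distribution-list alert and the service-desk ticket):

    // etl-wrapper.js - sketch of a logging wrapper around an arbitrary ETL job
    // sendAlert and raiseTicket are hypothetical hooks for email/service-desk integration.

    async function runWithLogging(job, { jobName, sourceName, owner, classification, sendAlert, raiseTicket }) {
      const entry = {
        jobName,
        sourceName,
        owner,
        classification,
        startedAt: new Date().toISOString(),
        status: 'running',
      };
      try {
        const result = await job();          // job returns e.g. { rawRowCount, loadedRowCount }
        entry.rawRowCount = result.rawRowCount;
        entry.loadedRowCount = result.loadedRowCount;
        entry.status = result.rawRowCount === result.loadedRowCount ? 'success' : 'warning';
      } catch (err) {
        entry.status = 'failure';
        entry.error = err.message;
        if (sendAlert) await sendAlert(entry);    // notify a distribution list
        if (raiseTicket) await raiseTicket(entry); // open a service-desk ticket
      } finally {
        entry.finishedAt = new Date().toISOString();
        console.log(JSON.stringify(entry));  // or write to your log store
      }
      return entry;
    }

    module.exports = { runWithLogging };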
It depends on your requirements; you may want to capture more or less.
Good luck

Capture the name of the previous processor in NiFi

I'm looking to create an Error Handling Flow, and need to capture the name of the failing processor at particular points only. An UpdateAttribute processor would be a last resort, as it would clutter up the templates. Ideally I'm looking for a script or similar, but I'm open to suggestions from NiFi experts.
You can use the Data Provenance feature for this, via manual inspection or the REST API, but by design ("Flow Based Programming") components in Apache NiFi are black boxes, independent of and unaware of their predecessors and successors.
If you need programmatic access to the error messages, look at SiteToSiteBulletinReportingTask. With this component, you can send the bulletins back to the same (or a different) NiFi instance via Site-to-Site and ingest and process them like any other data.
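For the provenance route, a sketch of querying the NiFi REST API from Node.js (Node 18+ for the global fetch) is below. The base URL, payload shape and response fields used here (provenance, finished, provenanceEvents, componentName) are assumptions about the provenance query endpoints and should be verified against your NiFi version's REST API documentation:

    // provenance-query.js - sketch of pulling component names from NiFi's provenance REST API
    // Endpoint paths and payload fields are assumptions; verify against your NiFi version's docs.
    const NIFI = 'http://localhost:8080/nifi-api';

    async function recentComponentNames() {
      // Submit an asynchronous provenance query.
      const submit = await fetch(`${NIFI}/provenance`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ provenance: { request: { maxResults: 100 } } }),
      });
      let { provenance } = await submit.json();

      // Poll until NiFi has finished collecting results.
      while (!provenance.finished) {
        await new Promise((r) => setTimeout(r, 500));
        const poll = await fetch(`${NIFI}/provenance/${provenance.id}`);
        ({ provenance } = await poll.json());
      }

      // Each provenance event carries the name and id of the component that emitted it.
      const names = provenance.results.provenanceEvents.map((e) => e.componentName);

      // Clean up the server-side query.
      await fetch(`${NIFI}/provenance/${provenance.id}`, { method: 'DELETE' });
      return names;
    }

    recentComponentNames().then(console.log).catch(console.error);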

Dynamically change REST connector Source

I am developing an application to track performance metrics pulled from a REST service, but I need to dynamically change the data being visualized based on user input. Is there a way I can automate the data being loaded into Qlik Sense?
I am new to Qlik Sense, so it may just be that I cannot find the right documentation on this; even a link would be helpful.
