Configure logstash input from DB values - logstash

As I am new to the ELK stack, I need some help with this. I have a requirement where I need to get log locations (more than one, for sure) from a DB table and pass those values to the Logstash input so that the data can be viewed in Kibana. Can you please share some examples that I can refer to and build on?
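Logstash cannot open file inputs whose paths arrive at runtime from a query, so one common pattern is to query the table first (with a small script or a scheduled job) and render the returned locations into a pipeline configuration. A minimal sketch of such a generated pipeline, where the two paths, the index name, and the Elasticsearch address are assumed placeholders standing in for whatever your table and environment actually contain:

# pipeline.conf generated from the paths returned by the DB query (all values are assumed examples)
input {
  file {
    path => ["/var/log/app1/app1.log", "/var/log/app2/app2.log"]   # locations read from the DB table
    start_position => "beginning"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}

If what you actually want to index is the DB rows themselves rather than the files they point to, the logstash-input-jdbc plugin can run a SELECT on a schedule, but it will not make Logstash tail the files at those locations.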

Related

Opensearch Grafana: how to visualize text fields

This is my first ever post on Stack Overflow.
I'm sending JSON logs from Filebeat to Logstash to OpenSearch to Grafana,
and everything works perfectly when it comes to integer data.
I can even see that OpenSearch receives my string fields and boolean fields, and even reads them.
But when I want to build a dashboard to visualize some strings and booleans, it only finds my integer fields.
Can someone help me visualize strings on Grafana, and not only numbers?
This is an image of what I can see when I try to select data; I only see the number field names.
Thanks Andrew, now I see this, but I want to see only one field
and not all of them.
(image: logs added to Grafana)
You can try using the Logs panel.
An example of how I use it; the query is something like this:
{namespace=~"$namespace", stream=~"$stream", container=~"$container"} |= "$query"
But I'm using fluent-bit + Loki.
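If you do go the Loki route, LogQL can also parse JSON log lines in the query and filter on string fields directly; a hedged example, assuming the log lines contain level, user, and message keys (none of which are given in the question):

{namespace=~"$namespace", container=~"$container"}
  | json
  | level = "error"
  | line_format "{{.user}} {{.message}}"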

Extract alert logs from Azure without Azure Security Center

I want to extract the alert logs in CSV format to show that I have received these types of alerts.
But I am unable to extract them from an Azure log query. Or do I have to install some agent?
You may list all existing alerts, and the results can be filtered on the basis of multiple parameters (e.g. time range). The results can then be sorted on the basis of specific fields, with the default being lastModifiedDateTime:
GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.AlertsManagement/alerts?api-version=2018-05-05
Similarly, with optional parameters:
GET https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.AlertsManagement/alerts?targetResource={targetResource}&targetResourceType={targetResourceType}&targetResourceGroup={targetResourceGroup}&monitorService={monitorService}&monitorCondition={monitorCondition}&severity={severity}&alertState={alertState}&alertRule={alertRule}&smartGroupId={smartGroupId}&includeContext={includeContext}&includeEgressConfig={includeEgressConfig}&pageCount={pageCount}&sortBy={sortBy}&sortOrder={sortOrder}&select={select}&timeRange={timeRange}&customTimeRange={customTimeRange}&api-version=2018-05-05
To check the other URI parameters for logging, you may refer to this URL.
And finally, once you have obtained the response(s) in JSON format, you can have them converted into CSV automatically using any of the freely available online conversion utilities (like the service linked here).
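If you would rather script the JSON-to-CSV step instead of using an online converter, here is a minimal C# sketch. It assumes you already have a valid bearer token in an AZURE_TOKEN environment variable and that the response has the documented shape with a value array and properties.essentials per alert; the subscription ID and the chosen columns are placeholders.

// Hedged sketch: list alerts via the Alerts Management REST API and flatten a few fields to CSV.
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

class ExportAlertsToCsv
{
    static async Task Main()
    {
        var subscriptionId = "<subscriptionId>";                         // placeholder
        var token = Environment.GetEnvironmentVariable("AZURE_TOKEN");   // assumed: token acquired elsewhere

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", token);

        var url = $"https://management.azure.com/subscriptions/{subscriptionId}" +
                  "/providers/Microsoft.AlertsManagement/alerts?api-version=2018-05-05";
        var json = await http.GetStringAsync(url);

        using var doc = JsonDocument.Parse(json);
        using var csv = new StreamWriter("alerts.csv");
        csv.WriteLine("name,severity,alertState,monitorCondition");

        // Assumed response shape: { "value": [ { "name": ..., "properties": { "essentials": {...} } } ] }
        foreach (var alert in doc.RootElement.GetProperty("value").EnumerateArray())
        {
            var essentials = alert.GetProperty("properties").GetProperty("essentials");
            csv.WriteLine(string.Join(",",
                alert.GetProperty("name").GetString(),
                essentials.GetProperty("severity").GetString(),
                essentials.GetProperty("alertState").GetString(),
                essentials.GetProperty("monitorCondition").GetString()));
        }
    }
}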

How do I group logs in Kibana/Logstash?

We have an ELK setup, and Logstash is receiving all the logs from the Filebeat installed on the server. So when I open Kibana and it asks for an index, I put just a * for the index value, go to the Discover tab to check the logs, and it shows each line of the log in a separate expandable section.
I want to be able to group the logs based on the timestamp first and then on a common ID that is generated in our logs per request to distinguish it from the rest. An example of the logs we get:
DEBUG [2018-11-23 11:28:22,847][298b364850d8] Some information
INFO [2018-11-23 11:27:33,152][298b364850d8] Some information
INFO [2018-11-24 11:31:20,407][b66a88287eeb] Some information
DEBUG [2018-11-23 11:31:20,407][b66a88287eeb] Some information
I would like to see all logs for request ID 298b364850d8 in the same drop-down, given they are continuous logs. Then it can break into a second drop-down, again grouped by the request ID b66a88287eeb, in timestamp order.
Is this even possible, or am I expecting too much from the tool?
OR, if there is a better strategy for grouping logs, I'm more than happy to listen to suggestions.
I have been told by a friend that I could configure Logstash to group logs based on some regex and such, but I just don't know where and how to configure it to do the grouping.
I am completely new to the whole ELK stack, so bear with my questions, which might be quite elementary in nature.
Your question is truly a little vague and broad as you say. However, I will try to help :)
Check the index that you define in the Logstash output. This is the index that needs to be defined in Kibana, not *.
Create an index pattern to connect to Elasticsearch. This will parse the fields of the logs and allow you to filter as you want.
I recommend using a GUI tool (like Cerebro) to better understand what is going on in your ES. It would also help you get a better idea of the indices you have there.
Good Luck
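As a concrete starting point for the "parse the fields" part, a grok filter in Logstash can pull the level, timestamp, and request ID out of lines like the samples in the question; a hedged sketch (the field names are my own choice):

filter {
  grok {
    # Example line: DEBUG [2018-11-23 11:28:22,847][298b364850d8] Some information
    match => { "message" => "%{LOGLEVEL:level} \[%{TIMESTAMP_ISO8601:log_timestamp}\]\[(?<request_id>[0-9a-f]+)\] %{GREEDYDATA:log_message}" }
  }
  date {
    # Use the time from the log line as the event's @timestamp
    match => [ "log_timestamp", "yyyy-MM-dd HH:mm:ss,SSS" ]
  }
}

Discover has no true "group by" drop-down, but once request_id is its own field you can filter on request_id:298b364850d8 and sort by @timestamp, which comes close to the grouping you describe.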
You can use the @timestamp filter and a search query, as in the sample image below, to filter what you want.

How to make the aliases uppercase in Stream Analytics?

I have a simple JSON message that I receive from a device; this is the message:
{"A":3,"B":4}
I also set a query in the Stream Analytics job to send the data to Power BI; this is the query:
SELECT * INTO [OutputBI] FROM [Input] WHERE deviceId='device1'
When I check the dataset in Power BI, the column names are in uppercase, |A|B|, but when I used aliases in the query my columns were changed to lowercase, |a|b|. This is the new query:
SELECT v1 as A, v2 as B INTO [OutputBI] FROM [Input] WHERE deviceId='device1'
The reason I changed the query is that the variable names in the message were changed from A to v1 and from B to v2.
My question is: is there any way to keep the aliases uppercase in the output of the job (Power BI in this case)?
The problem is in the Power BI dataset: the first dataset recognized the column names in uppercase, and when the query was changed, the column names became lowercase. This is a problem because, with the dataset changed, the reports in Power BI will not work and I would have to build them again.
In the Configure section of the Stream Analytics job pane, selecting the Compatibility level and changing it to 1.1 should solve the problem.
In this new version, case sensitivity is persisted for field names when they are processed by the Azure Stream Analytics engine. However, persisting case sensitivity isn't yet available for ASA jobs hosted in an Edge environment.
You could create a calculated column in Power BI using the UPPER function. For example, Col2 = UPPER(Column1)
You can also do this in the query editor / Power Query M using Text.Upper. Alternatively, I'm pretty sure there is a way to do it in the GUI.
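A hedged sketch of that query editor step, assuming the previous step in your query is named Source; Table.TransformColumnNames applies Text.Upper to every column name:

// Power Query M step: uppercase all column names in the table
= Table.TransformColumnNames(Source, Text.Upper)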

How can I have log4net help me log structured data inside %message?

We had our own custom logger in a C# program and are now trying to port to log4net.
In our app, there is further structure to what would normally go into %message. It may contain a requestid, associated users, and other structure, where requestid and user have internal significance to the program.
The hope is to ultimately be able to search on the fields inside %message, say requestid, so that we can, for example, collect all log entries with the same requestid.
Does log4net assist in any way with creating our own custom fields? The reason we ask is that currently the entire %message is logged as one string by default.
Any other suggestions on how to provide further formatting for %message? Otherwise we would have to pre-format %message inside our own code as, say, CSV.
You can use event context to add additional structured data to a log entry:
http://www.beefycode.com/post/Log4Net-Tutorial-pt-6-Log-Event-Context.aspx
Depending on what kind of information you want to log, you may need to create a wrapper that accepts additional parameters, or else you have to write verbose code like this:
log4net.ThreadContext.Properties["myInformation"] = yourAdditionalInformation;
log.Info("info message");
Other information can be calculated and thus can be set once (for instance, on application startup). Have a look at the calculated context properties in the tutorial above.
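Once a context property is set, it can also be pulled into the output via the pattern layout, so something like a request ID ends up as its own column rather than being buried in %message. A hedged sketch, assuming a ThreadContext property named requestid (the property name is your choice):

<!-- log4net configuration: include the context property in every line -->
<layout type="log4net.Layout.PatternLayout">
  <conversionPattern value="%date [%thread] %-5level %logger %property{requestid} - %message%newline" />
</layout>

That gives you a stable field to search or parse on instead of pre-formatting %message yourself.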
