How can I see the matched results of a configured Extractor on an Input in Graylog

I tried to add an Extractor for Key/Value Pairs in a Graylog Input according to http://docs.graylog.org/en/2.3/pages/extractors.html#automatically-extract-all-key-value-pairs.
I set up the extractor as in the example, and on the Manage Extractors tab, if I click Details on my extractor, I can see that there were hits for it.
But none of the messages the extractor matched show up in any of my streams, so I have not managed to see the extracted output of any of my matches so far. Does anything else need to be done for extractors to work?

It turns out nothing else needs to be done to see the results; the problem was that my extractor threw a parsing exception that I did not notice.
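For reference, a minimal sketch of what the key-value extractor should produce, assuming it runs on the message field (the keys below are made up for illustration): a message body such as

action=login user=jane result=success

should end up with three extra fields, action, user and result, on the stored message, which you can see by expanding the message on the search page. If the fields are missing even though the extractor reports hits, check the Graylog server log and the extractor details for parsing exceptions like the one mentioned above.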

Related

Opensearch Grafana: how to visualize text fields

This is my first-ever post on Stack Overflow.
I'm sending JSON logs from Filebeat to Logstash to OpenSearch to Grafana, and everything is working perfectly when it comes to integer data.
I can even see that OpenSearch receives my string and boolean fields and reads them.
But when I want to make a dashboard to visualize some strings and booleans, it only finds my integer fields.
Can someone help me visualize strings in Grafana and not only numbers?
This is an image of what I can see when I try to select data; I only see the number field names.
Thanks Andrew, now I see this, but I want to see only one field and not all of them.
(image: logs added to Grafana)
You can try using the Logs panel
Here is an example of how I use it; the query is something like this:
{namespace=~"$namespace", stream=~"$stream", container =~"$container"} |= "$query"
But I'm using Fluent Bit + Loki.
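To address the "only one field" follow-up on the Loki path: LogQL can parse a JSON log line and reformat the output so only a single field is shown. A sketch, assuming the logs are JSON and the field is called message (both assumptions):

{namespace=~"$namespace", stream=~"$stream", container=~"$container"} |= "$query" | json | line_format "{{.message}}"

This only applies to the Loki setup described in the answer; it will not help with an OpenSearch data source.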

Azure Form Recognizer does not identify any keys

I'm using the Microsoft custom model API for Form Recognizer. I tested it first with the example they have at this link:
https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/label-tool
The problem I have now is that for any other form that is not the one in the example, the recognizer does not properly recognize any key-value pairs.
E.g., for the form below:
I get the response as:
None of the values is mapped to its key. For example, for "Receiving Officer" the value should be "Ramon", but instead I'm getting them as token_2 and token_5, which is information I cannot use.
It is suspicious to me that this happens for all the forms I have tried, aside from the example.
Can you please try to train with labels following this quickstart and see if it extracts the values you need: https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/quickstarts/label-tool
Try-out site: https://fott.azurewebsites.net/
How did you get that response? It looks like the train-without-labels response, where text that is not associated with a key is output as tokens.

How do I group logs in Kibana/Logstash?

We have an ELK setup, and Logstash receives all the logs from the Filebeat installed on the server. When I open Kibana and it asks for an index, I put just a * for the index value, go to the Discover tab to check the logs, and it shows each line of the log in a separate expandable section.
I want to be able to group the logs based on the timestamp first and then on a common ID that is generated in our logs per request to identify it from the rest. An example of the logs we get:
DEBUG [2018-11-23 11:28:22,847][298b364850d8] Some information
INFO [2018-11-23 11:27:33,152][298b364850d8] Some information
INFO [2018-11-24 11:31:20,407][b66a88287eeb] Some information
DEBUG [2018-11-23 11:31:20,407][b66a88287eeb] Some information
I would like to see all logs for request ID 298b364850d8 in the same dropdown, given they are continuous logs. Then it can break into a second dropdown, again grouped by request ID b66a88287eeb, in timestamp order.
Is this even possible, or am I expecting too much from the tool?
Or, if there is a better strategy for grouping logs, I'm more than happy to listen to suggestions.
A friend told me that I could configure Logstash to group logs based on some regex and such, but I just don't know where and how to configure it to do the grouping.
I am completely new to the whole ELK stack, so bear with my questions, which might be quite elementary in nature.
Your question is, as you say, a little vague and broad. However, I will try to help :)
Check the index that you define in the Logstash output. That is the index that needs to be defined in Kibana - not *.
Create an index pattern to connect to Elasticsearch. This will parse the fields of the logs and allow you to filter as you want.
I recommend using a GUI tool (like Cerebro) to better understand what is going on in your Elasticsearch. It would also give you a better idea of the indices you have there.
Good Luck
You can use a @timestamp filter and a search query, as in the sample image, to filter what you want.
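As a sketch of the Logstash-side parsing mentioned in the question (the index name and field names here are illustrative, not something your setup already has): a grok filter can split each line into a log level, timestamp and request ID, and the elasticsearch output can write to a named index that you then register in Kibana as the index pattern (e.g. app-logs-*) instead of *:

filter {
  grok {
    # Matches lines like: DEBUG [2018-11-23 11:28:22,847][298b364850d8] Some information
    match => { "message" => "%{LOGLEVEL:loglevel} \[%{TIMESTAMP_ISO8601:log_timestamp}\]\[%{DATA:request_id}\] %{GREEDYDATA:log_message}" }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Illustrative index name; use this prefix as the Kibana index pattern (app-logs-*)
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}

Kibana's Discover view cannot literally nest logs into dropdowns, but once request_id is its own field you can filter on request_id:298b364850d8 and sort by timestamp to get the per-request view; a saved search or a data table split by request_id is the closest thing to grouping.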

How can I reliably match link, link_title and link_description

When parsing the message request object in my connector, how can I reliably match a link to its title and description attributes? Are they always sorted in a particular order in the parts array, or is only one link per message allowed?
I didn't find anything about this in the documentation.
Currently in UnificationEngine, it seems that you can send only one link at a time using the v2/message/send API command.

What is the Lucene query to search for a string of wildcard characters

In Kibana I am trying to pull my application log messages that have masked fields.
Example log message:
***statusMessage=, displayMessage=, securityInfoOutput=securityPin=pin=****, pinHint=*************
I want to search and pull the messages that have masked data - more than two consecutive *'s in the message.
I tried the search term message:"pin=\*\*\*\*" but it didn't work.
You seem to be thinking of search in the same way you'd type CTRL+F and search in a file. Search engines don't work that way. Search works based on exact matches of tokens. Tokens typically correspond to words extracted from text.
You can control how text is transformed into tokens using a process known as analysis. Analysis runs text through tokenization and various filters that decide how text is broken up into tokens and other pieces of metadata associated with each token.
This blog post I wrote might help put some of this into context.
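To see the effect of analysis on this particular text, you can run it through Elasticsearch's _analyze API (a sketch using the default standard analyzer; the analyzer actually applied to your message field may differ):

POST _analyze
{
  "analyzer": "standard",
  "text": "securityPin=pin=****, pinHint=*************"
}

The standard analyzer drops the = signs and the runs of *, leaving only tokens such as securitypin, pin and pinhint, so the asterisks never make it into the index and the quoted query effectively degenerates to a search for pin. To match the literal masked characters you generally need an unanalyzed view of the field, for example a keyword sub-field queried with a wildcard or regexp query, or a custom analyzer that preserves those characters.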
