Grok logstash pipeline not filtering texts

I am new to the Elastic Stack. I am trying to implement a Logstash pipeline in which a file would be processed, and a line would only be filtered through to the output if it contains one of the following keywords:
java.lang.Exception - any line of the file containing an Exception should be filtered through and be available in Kibana
XYZ process completed.
I tried the following, but it seems to output all the content that does not match Exception too:
input {
  beats {
    port => 5044
    tags => ["exception"]
  }
}
filter {
  if "exception" in [tags] {
    grok {
      match => { "message" => "Exception" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}
Please help and advise.

If this is the only filter you are applying in the pipeline, everything that does not match your grok filter will have a _grokparsefailure tag added.
You can use that tag to drop the events, by using the below after all the other options in your filter are done:
if "_grokparsefailure" in [tags] {
drop{}
}
So a sample filter can be:
filter {
  grok {
    match => { "message" => "Exception" }
  }
  if "_grokparsefailure" in [tags] {
    drop {}
  }
}
Another way to do this is to add a tag when the match happens, and then only ingest events that have that tag.
Example:
input {
  beats {
    port => 5044
    tags => ["exception"]
  }
}
filter {
  if "exception" in [tags] {
    grok {
      match => { "message" => "Exception" }
      add_tag => [ "exception_match" ]
    }
  }
}
output {
  if "exception_match" in [tags] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}

Accept(Stream) returns HandshakeError::Interrupted(...), but if there is println!() then everything works

I want to write a simple non-blocking, single-threaded websocket server. However, the accept function returns an error which I don't understand. Moreover, if I add a println!() (where it is commented out), the server starts working properly. What could be the problem? Any ideas?
Code:
use std::net::TcpListener;
use tungstenite::accept;

fn main() {
    let server = TcpListener::bind("127.0.0.1:8080").unwrap();
    server.set_nonblocking(true).unwrap();
    for stream_res in server.incoming() {
        match stream_res {
            Ok(stream) => {
                //println!(""); //if you uncomment it works, why???
                match accept(stream) {
                    Ok(websocket) => {
                        println!("Accepted");
                    }
                    Err(err) => {
                        println!("Accept error: {:?}", err);
                    }
                }
            }
            Err(ref e) if e.kind() == std::io::ErrorKind::WouldBlock => {
                // some code
            }
            Err(ref e) => {
                panic!("{:?}", e);
            }
        }
    }
}
I use the crate tungstenite = "0.17.3".
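For reference, a minimal sketch of how the interrupted handshake could be resumed, assuming tungstenite's handshake::HandshakeError::Interrupted variant, whose MidHandshake value exposes a handshake() method to retry. On a non-blocking stream the websocket handshake usually cannot complete in a single call, and the println!() likely just adds enough delay for the client's data to arrive, which is why it appears to fix things:
use std::net::TcpListener;
use tungstenite::accept;
use tungstenite::handshake::HandshakeError;

fn main() {
    let server = TcpListener::bind("127.0.0.1:8080").unwrap();
    server.set_nonblocking(true).unwrap();
    for stream_res in server.incoming() {
        match stream_res {
            Ok(stream) => {
                // On a non-blocking stream, accept() often returns
                // Interrupted; keep retrying the MidHandshake until done.
                let mut result = accept(stream);
                loop {
                    match result {
                        Ok(_websocket) => {
                            println!("Accepted");
                            break;
                        }
                        Err(HandshakeError::Interrupted(mid)) => {
                            // Busy-retry for the sketch; real code would
                            // poll for readiness instead of spinning.
                            result = mid.handshake();
                        }
                        Err(HandshakeError::Failure(err)) => {
                            println!("Accept error: {:?}", err);
                            break;
                        }
                    }
                }
            }
            Err(ref e) if e.kind() == std::io::ErrorKind::WouldBlock => {
                // some code
            }
            Err(ref e) => {
                panic!("{:?}", e);
            }
        }
    }
}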

Singular query won’t return nested arrays

I have a query that tries to fetch a single document; here is the resolver for that query:
const singleDoc = async (_parent, args, context, info) => {
  try {
    return await context.prisma.doc({ id: args.docId }, info)
  } catch (error) {
    console.log(error)
  }
}
If I call the query in GraphQL, it returns this:
"data": {
"singleDoc": {
"name": "Sample doc",
"teams": null,
"description": "This holds doc description"
}
}
}
I queried for the teams field, but it wasn't returned.
I feel like there is something wrong with the query resolver. What am I missing?
I was able to achieve this by using GraphQL fragments.
Fragment:
const docFragment = `
  fragment DocWithDetails on Doc {
    name
    teams {
      id
      role
    }
  }`
Then I passed the GraphQL fragment into the resolver. This way I was able to retrieve the nested relations:
const singleDoc = async (_parent, args, context, info) => {
  try {
    return await context.prisma.doc({ id: args.docId }).$fragment(docFragment)
  } catch (error) {
    console.log(error)
  }
}

How to wait till all data is loaded in Angular 2 async dependent http calls?

I'm working on a tool that deals with extracting data from Jira. I can find plenty of examples of chaining multiple HTTP calls, making one after another using data from the previous call. But what I stumble upon is how to wait for all the inner calls to resolve, do stuff with the data, and only after that resolve the outer one.
What is happening here is that the method this.developmentService.getData(this.filter) doesn't wait for the counting of the stories in each epic in the inner this.developmentService.getStoriesForEpic(epic.key) to complete, and this is problematic because after that I need to apply additional filters based on those counts.
updateChart() {
  this.loading = true;
  if (this.dataRequest) { this.dataRequest.unsubscribe(); }
  this.developmentService.getData(this.filter).toPromise().then(initiatives => {
    initiatives.map((initiative) => {
      initiative.devEpics.map((epic) => {
        return this.developmentService.getStoriesForEpic(epic.key).toPromise().then(stories => {
          Promise.all(stories.map((story) => {
            if (story.status == "To Do") {
              epic.storiesToDo++;
            }
            else if (story.status == "In Progress") {
              epic.storiesInProgress++;
            }
            else if (story.status == "Done") {
              epic.storiesDone++;
            }
          }))
        })
      })
    })
    this.data = initiatives;
  })
}
I have tried multiple approaches but can't quite seem to get there. Any help is appreciated! Thanks in advance.
Don't convert them into promises; use RxJS instead.
forkJoin or combineLatest, depending on your expected behavior:
https://rxjs-dev.firebaseapp.com/api/index/function/combineLatest
https://rxjs-dev.firebaseapp.com/api/index/function/forkJoin
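For example, a minimal sketch of the forkJoin approach, assuming getData and getStoriesForEpic return Observables as in the question (operator names are from RxJS 6+):
import { forkJoin } from 'rxjs';
import { map, mergeMap } from 'rxjs/operators';

updateChart() {
  this.loading = true;
  this.developmentService.getData(this.filter).pipe(
    mergeMap(initiatives => {
      // one inner request per epic, across all initiatives
      const perEpic = initiatives
        .flatMap(initiative => initiative.devEpics)
        .map(epic =>
          this.developmentService.getStoriesForEpic(epic.key).pipe(
            map(stories => {
              for (const story of stories) {
                if (story.status === 'To Do') { epic.storiesToDo++; }
                else if (story.status === 'In Progress') { epic.storiesInProgress++; }
                else if (story.status === 'Done') { epic.storiesDone++; }
              }
            })
          )
        );
      // forkJoin emits once, after every inner observable completes
      // (note: with an empty array it completes without emitting)
      return forkJoin(perEpic).pipe(map(() => initiatives));
    })
  ).subscribe(initiatives => {
    this.data = initiatives; // runs only after all counts are in
    this.loading = false;
  });
}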
You can try it like this:
async updateChart() {
  this.loading = true;
  if (this.dataRequest) { this.dataRequest.unsubscribe(); }
  const initiatives = await this.developmentService.getData(this.filter).toPromise();
  // wait until the stories of every epic of every initiative are counted
  await Promise.all(initiatives.map((initiative) =>
    Promise.all(initiative.devEpics.map((epic) => this.getStoriesForEpic(epic)))
  ));
  this.data = initiatives;
}
Here we create one helper function, getStoriesForEpic:
getStoriesForEpic(epic) {
  return new Promise((resolve, reject) => {
    this.developmentService.getStoriesForEpic(epic.key).toPromise().then(stories => {
      stories.forEach((story) => {
        if (story.status == "To Do") {
          epic.storiesToDo++;
        }
        else if (story.status == "In Progress") {
          epic.storiesInProgress++;
        }
        else if (story.status == "Done") {
          epic.storiesDone++;
        }
      });
      // resolve once, after all stories have been counted
      resolve(epic);
    });
  });
}

Logstash - sleep until there is a specific message

In Logstash, let's say I have the following lines in my logs:
Message: msisdn: 111111111
Message: msisdn: 222222222
Answer: msisdn: 111111111
Answer: msisdn: 222222222
Now, whenever I get the Message, I'd like to wait for X seconds.
If within this time period I get the matching Answer (i.e. with the same msisdn), mark it as OK, else mark it as ERROR.
How can I do that?
Thanks
EDIT:
Fairy, I tried to work with the aggregate filter but with no success. Can you please help me with that?
input {
  stdin {}
}
filter {
  grok {
    match => { "message" => "%{GREEDYDATA:type}: msisdn: %{INT:msisdn}" }
  }
  if [type] == "Message" {
    aggregate {
      task_id => "%{msisdn}"
      code => "event.set('result', 'OK')"
      timeout => 5
      timeout_code => "event.set('result', 'error')"
    }
  }
  if [type] == "Answer" {
    aggregate {
      task_id => "%{msisdn}"
      code => "event.set('result', 'OK')"
      end_of_task => true
    }
  }
  if "_grokparsefailure" in [tags] {
    drop {}
  }
}
output {
  stdout { codec => rubydebug }
}
After writing this line to the stdin:
Message: msisdn: 111111111
There was a response immediately (it didn't wait 5 seconds) with status OK:
{
        "result" => "OK",
    "@timestamp" => 2017-03-30T17:58:39.940Z,
      "@version" => "1",
          "host" => "31634cf481d5",
       "message" => "Message: msisdn: 111111111",
          "type" => "Message",
        "msisdn" => "111111111"
}
Should I write it differently?
Many thanks ;)
The aggregate filter does pretty much what you are describing. First you need to identify the task_id by which you correlate your messages. Then you set up your grok filter so it extracts the type of the message and your task_id. Then we have two separate aggregate filters, one for the first and the other for the second event.
The first has the timeout, and the timeout code to set the result to error should it time out. The second filter checks for the Answer message and ends the aggregation.
grok {
  match => { "message" => "%{DATA:type}: %{DATA:name}: %{GREEDYDATA:id}" }
}
if [type] == "Message" {
  aggregate {
    task_id => "%{id}"
    code => "event.set('result', 'OK')"
    timeout => 5
    timeout_code => "event.set('result', 'error')"
  }
}
if [type] == "Answer" {
  aggregate {
    task_id => "%{id}"
    code => "event.set('result', 'OK')"
    end_of_task => true
  }
}
Check out the documentation for the aggregate filter: https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html

AutoMapper custom mapping or ignore

I have tried something similar to AutoMapper Custom Mappings; however, what I really want is not to map to another property but to ignore it.
I have tried:
.ForMember(m=>m.BillingAddress,m=>m.ResolveUsing((result, card) => {
if (!string.IsNullOrEmpty(card.BillingDetails?.Address1)) {
return card.BillingDetails.Address1;
}
else {
return result.Ignore();
}
}))
but this just sets some type of resolution result on the property I'm trying to map to.
What I'd really like to do is what I attempted to ask in this issue:
https://github.com/AutoMapper/AutoMapper/issues/1690
i.e.
.ForMember(m => m.BillingAddress, m => {
    m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.Address1), m.MapFrom(...), m.Ignore())
})
Right now it's nulling out anything I have in those fields if I use the .Condition and a .MapFrom after it.
This isn't really how I'd like this to work, but it worked for this particular situation. It would still be nice to have what I wanted before, but it looks like if you don't do a MapFrom at all, it simply ignores the member.
.ForMember(m => m.BillingAddress, m => {
m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.Address1));
m.MapFrom(i => i.BillingDetails.Address1);
})
.ForMember(m => m.BillingAddress2, m => {
m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.Address2));
m.MapFrom(i => i.BillingDetails.Address2);
})
.ForMember(m => m.City, m => {
m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.City));
m.MapFrom(i => i.BillingDetails.City);
})
.ForMember(m => m.State, m => {
m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.State));
m.MapFrom(i => i.BillingDetails.State);
})
.ForMember(m => m.Zip, m => {
m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.Zip));
m.MapFrom(i => i.BillingDetails.Zip);
})
