Logstash - sleep until there is a specific message

In Logstash, let's say I have the following lines in my logs:
Message: msisdn: 111111111
Message: msisdn: 222222222
Answer: msisdn: 111111111
Answer: msisdn: 222222222
Now, whenever I get the Message, I'd like to wait for X seconds.
If within this time period I get the matching Answer (i.e. with the same msisdn), mark it as OK, else mark it as ERROR.
How can I do that?
Thanks
**EDIT**
Fairy, I tried to work with the aggregate filter but with no success, can you please help me with that?
input {
  stdin {}
}
filter {
  grok {
    match => { "message" => "%{GREEDYDATA:type}: msisdn: %{INT:msisdn}" }
  }
  if [type] == "Message" {
    aggregate {
      task_id => "%{msisdn}"
      code => "event.set('result', 'OK')"
      timeout => 5
      timeout_code => "event.set('result', 'error')"
    }
  }
  if [logger] == "Answer" {
    aggregate {
      task_id => "%{msisdn}"
      code => "event.set('result', 'OK')"
      end_of_task => true
    }
  }
  if "_grokparsefailure" in [tags] {
    drop {}
  }
}
output {
  stdout { codec => rubydebug }
}
After writing this line to the stdin:
Message: msisdn: 111111111
There was a response immediately (it didn't wait 5 seconds) with status OK
{
  "result" => "OK",
  "@timestamp" => 2017-03-30T17:58:39.940Z,
  "@version" => "1",
  "host" => "31634cf481d5",
  "message" => "Message: msisdn: 111111111",
  "type" => "Message",
  "msisdn" => "111111111"
}
Should I write it differently?
Many thanks ;)

The aggregate filter pretty much does what you are describing. First you need to identify the task_id by which you correlate your messages. Then you set up your grok filter so it extracts the type of the message and your task_id. Then we have two separate aggregate filters, one for the first event and the other for the second.
The first has the timeout and the code to set the event to error should it time out. The second filter checks for the Answer message and ends the aggregation.
grok {
  match => { "message" => "%{DATA:type}: %{DATA:name}: %{DATA:id}" }
}
if [type] == "Message" {
  aggregate {
    task_id => "%{id}"
    code => "event.set('result', 'OK')"
    timeout => 5
    timeout_code => "event.set('result', 'error')"
  }
}
if [type] == "Answer" {
  aggregate {
    task_id => "%{id}"
    code => "event.set('result', 'OK')"
    end_of_task => true
  }
}
Check out the documentation for the aggregate filter here.
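Outside Logstash, the correlate-or-timeout bookkeeping that the aggregate filter performs can be sketched in a few lines of JavaScript. This is only an illustration of the idea; the Correlator class and its method names are invented for this sketch and are not part of any Logstash API:

```javascript
// Sketch of the aggregate filter's idea: a "Message" opens a task keyed
// by msisdn, a matching "Answer" within the timeout closes it as OK, and
// tasks older than `timeoutMs` are flushed with result "error".
class Correlator {
  constructor(timeoutMs) {
    this.timeoutMs = timeoutMs;
    this.pending = new Map(); // msisdn -> time the Message arrived
    this.results = [];        // emitted { msisdn, result } records
  }

  message(msisdn, now) {
    this.pending.set(msisdn, now); // start waiting for the Answer
  }

  answer(msisdn, now) {
    this.flush(now); // expire anything that already timed out
    if (this.pending.delete(msisdn)) {
      this.results.push({ msisdn, result: 'OK' }); // matched in time
    }
  }

  // Emit an error for every task that waited longer than timeoutMs.
  flush(now) {
    for (const [msisdn, start] of this.pending) {
      if (now - start > this.timeoutMs) {
        this.pending.delete(msisdn);
        this.results.push({ msisdn, result: 'error' });
      }
    }
  }
}
```

The aggregate filter keeps this map for you and runs timeout_code when flushing, which is why the Answer branch above only needs end_of_task => true.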

Related

How can I do a command on a query basis discord.js?

I want a command like -clear to be done on a query basis.
e.g.
How many messages do you want to clear?
User: 8
8 messages successfully deleted
Thank you very much in advance for your replies!
(V.12)
Ok!
I changed the code a bit and it works, but the problem is that the user can type anything, in this example:
User: -clearr
Bot: How many messages do you want to delete?
User: asd
Bot: asd message successfully deleted!
module.exports = {
  name: 'clearr',
  description: "Clear messages!",
  async execute(client, message, args) {
    if (!args[0]) {
      let filter = m => m.author.id === '365113443898097666'
      message.channel.send(`How many messages do you want to delete?`).then(() => {
        message.channel.awaitMessages(filter, {
          max: 1,
          time: 10000,
          errors: ['time']
        })
        .then(message => {
          message = message.first()
          message.channel.bulkDelete(message);
          message.channel.send(`\`${message} message\` successfully deleted!`)
            .then(message => {
              message.delete({ timeout: 5000 })
            })
            .catch(console.error);
        })
      })
    }
  }
}
So I would like to eliminate this, so that it can only type a number, because if it types any other character, it will say "Not a valid value"
Thank you very much in advance for your replies!
This should use message collectors. There are 2 ways to make them, but since you are only listening for 1 message, you can use TextChannel.awaitMessages.
client.on("messageCreate", async msg => {
  if (msg.content === "-clear") { // only respond to -clear
    const filter = (m) => m.author.id === msg.author.id // only accept from original author
    const collected = await msg.channel.awaitMessages(filter, {
      max: 1, // only collect 1 message
      time: 10000 // time in ms they have to respond
    })
    const count = parseInt(collected.first().content) // .first() because a collection is returned
    if (!count) return;
    msg.channel.bulkDelete(count).then(ms => {
      msg.channel.send(`Deleted ${ms.size} messages.`)
    })
  }
})
This is untested "base" code; you will need to tweak it to your liking.
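To address the asker's follow-up (rejecting replies like asd), validate the collected reply before calling bulkDelete. A minimal sketch of that validation step; parseCount is an invented helper, not a discord.js API, and the 2-100 range reflects Discord's bulk-delete limits as I understand them:

```javascript
// Accept only a whole number in the range the bulk-delete endpoint
// allows; return null for anything else ("asd", "8x", "-3", "150", ...).
function parseCount(input) {
  if (!/^\d+$/.test(input.trim())) return null; // digits only
  const n = Number(input);
  return n >= 2 && n <= 100 ? n : null;
}
```

In the collector's callback you would then do something like `const count = parseCount(collected.first().content)` and reply "Not a valid value" when it is null.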

Grok logstash pipeline not filtering texts

I am new to the Elastic Stack. I am trying to implement a Logstash pipeline in which a file is processed, and a line is filtered through and output only if it contains the following keyword -
java.lang.Exception - Any line of file containing Exception should be filtered and be available on Kibana
XYZ process completed.
I tried the following, but it seems to output all the lines that do not match the Exception too -
input {
  beats {
    port => 5044
    tags => "exception"
  }
}
filter {
  if "exception" in [tags] {
    grok {
      match => { "message" => "Exception" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}
Please help and advise
If this is the only filter you are applying in the pipeline, everything that does not match your grok filter will have a _grokparsefailure tag added.
You can use that tag to drop the events, by placing the below after all the other options in your filter:
if "_grokparsefailure" in [tags] {
  drop {}
}
So a sample filter can be
filter {
  grok {
    match => { "message" => "Exception" }
  }
  if "_grokparsefailure" in [tags] {
    drop {}
  }
}
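The match-or-drop logic of the filter above can be mirrored in plain JavaScript to see what ends up in the output. This is only an illustration; applyFilter and the event shape are invented for the sketch:

```javascript
// Mimic the pipeline: events whose message does not match the pattern
// get a _grokparsefailure tag, and tagged events are then dropped.
function applyFilter(events, pattern) {
  return events
    .map((e) =>
      pattern.test(e.message)
        ? e
        : { ...e, tags: [...(e.tags || []), '_grokparsefailure'] })
    .filter((e) => !(e.tags || []).includes('_grokparsefailure'));
}
```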
Another way to do this is to add a tag when a match happens, and then only ingest events that have that tag.
Example
input {
  beats {
    port => 5044
    tags => "exception"
  }
}
filter {
  if "exception" in [tags] {
    grok {
      match => { "message" => "Exception" }
      add_tag => [ "exception_match" ]
    }
  }
}
output {
  if "exception_match" in [tags] {
    elasticsearch {
      hosts => ["http://localhost:9200"]
      index => "%{[@metadata][beat]}-%{[@metadata][version]}"
    }
  }
}

How to wait till all data is loaded in Angular 2 async dependent http calls?

I'm working on a tool that extracts data from Jira. I can find plenty of examples of chaining multiple HTTP calls, making one after another using data from the previous call. But what I stumble on is how to wait for all the inner calls to resolve, do stuff with the data, and only then resolve the outer one.
What is happening here is that the method this.developmentService.getData(this.filter) doesn't wait for the counting of the stories in each epic in the inner this.developmentService.getStoriesForEpic(epic.key) to complete. This is problematic because afterwards I need to apply additional filters based on those counts.
updateChart() {
  this.loading = true;
  if (this.dataRequest) { this.dataRequest.unsubscribe(); }
  this.developmentService.getData(this.filter).toPromise().then(initiatives => {
    initiatives.map((initiative) => {
      initiative.devEpics.map((epic) => {
        return this.developmentService.getStoriesForEpic(epic.key).toPromise().then(stories => {
          Promise.all(stories.map((story) => {
            if (story.status == "To Do") {
              epic.storiesToDo++;
            }
            else if (story.status == "In Progress") {
              epic.storiesInProgress++;
            }
            else if (story.status == "Done") {
              epic.storiesDone++;
            }
          }))
        })
      })
    })
    this.data = initiatives;
  })
}
I have tried multiple approaches but can't quite seem to get there. Any help is appreciated! Thanks in advance
Don't convert them into promises; use RxJS.
forkJoin or combineLatest, depending on your expected behavior:
https://rxjs-dev.firebaseapp.com/api/index/function/combineLatest
https://rxjs-dev.firebaseapp.com/api/index/function/forkJoin
You can try something like this:
async updateChart() {
  this.loading = true;
  if (this.dataRequest) { this.dataRequest.unsubscribe(); }
  this.developmentService.getData(this.filter).toPromise().then(async initiatives => {
    await Promise.all(initiatives.map((initiative) =>
      Promise.all(initiative.devEpics.map(async (epic) => {
        let dataFromGetStoriesForEpic = await this.getStoriesForEpic(epic);
      }))
    ));
    this.data = initiatives;
  })
}
Here we create a helper function for getStoriesForEpic:
getStoriesForEpic(epic) {
  return new Promise((resolve, reject) => {
    this.developmentService.getStoriesForEpic(epic.key).toPromise().then(stories => {
      stories.map((story) => {
        if (story.status == "To Do") {
          epic.storiesToDo++;
        }
        else if (story.status == "In Progress") {
          epic.storiesInProgress++;
        }
        else if (story.status == "Done") {
          epic.storiesDone++;
        }
      })
      resolve(epic);
    })
  })
}
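For completeness, the usual plain-Promise way to make the outer call wait for every inner getStoriesForEpic is to collect the inner promises and await Promise.all on them. A framework-free sketch with stand-in service objects (the service shape here is assumed from the question, not the asker's real code):

```javascript
// Map each epic to a promise, flatten across initiatives, and wait for
// all of them before touching the data. Only one status is counted here
// for brevity.
async function updateChart(developmentService, filter) {
  const initiatives = await developmentService.getData(filter);
  await Promise.all(
    initiatives.flatMap((initiative) =>
      initiative.devEpics.map(async (epic) => {
        const stories = await developmentService.getStoriesForEpic(epic.key);
        epic.storiesDone = stories.filter((s) => s.status === 'Done').length;
      })
    )
  );
  return initiatives; // safe to filter on the counts now
}
```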

MongoDB insertMany BulkWriteError, avoid catch if duplicate key and perform then function

I'm importing multiple items into the DB by calling the following function from a cron job:
async function importItems(items) {
  return item_schema.insertMany(items, { ordered: false })
    .then(docs => {
      const n = docs ? docs.length : 0
      docs.map((item) => {
        /* Do something */
      })
      return `New items imported ${n} / ${items.length}`
    })
    .catch(error(500, `Error importing items.`))
}
Since a few of the items could have been imported previously, I'm getting a BulkWriteError due to the duplicate key ('item_id'), which always triggers the catch.
My problem is that I need to "do something" with the n new items that were successfully imported, which I get in the docs array in the then function, ignoring the catch.
Is there any way to do that?
Thanks
function importItems(items) {
  return item_schema.find({
    item_id: {
      $in: items.map((item) => item.item_id) // Check if item_id is one of the array values
    }
  })
  .then((documents) => {
    // documents is all the items that already exist
    const newItems = items.filter((item) => !documents.find((doc) => doc.item_id === item.item_id));
    return item_schema.insertMany(newItems, { ordered: false })
      .then((docs) => {
        const n = docs ? docs.length : 0
        docs.map((item) => {
          /* Do something */
        })
        return `New items imported ${n} / ${items.length}`
      })
  })
  .catch(error(500, `Error importing items.`));
}
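The filtering step in this answer (splitting the batch into already-imported and new items) can be pulled out into a plain helper, which also makes it easy to test on its own; selectNewItems is an invented name:

```javascript
// Given the incoming batch and the documents already in the DB, return
// only the items whose item_id is not present yet.
function selectNewItems(items, existingDocs) {
  const existingIds = new Set(existingDocs.map((doc) => doc.item_id));
  return items.filter((item) => !existingIds.has(item.item_id));
}
```

Note that find-then-insert can still race if two imports overlap; as far as I know, Mongoose's insertMany error for ordered: false also exposes the successfully inserted documents (error.insertedDocs), which would let you handle duplicates from the catch instead of making the extra find round-trip.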

Automapper Custom mapping or ignore

I have tried something similar to AutoMapper Custom Mappings.
However, what I really want is not to map to another property, but to ignore it.
I have tried:
.ForMember(m => m.BillingAddress, m => m.ResolveUsing((result, card) => {
  if (!string.IsNullOrEmpty(card.BillingDetails?.Address1)) {
    return card.BillingDetails.Address1;
  }
  else {
    return result.Ignore();
  }
}))
but this just sets some type of resolution result on the property I'm trying to map to.
What I'd really like to do is what I attempted to ask for in this issue:
https://github.com/AutoMapper/AutoMapper/issues/1690
i.e.
.ForMember(m => m.BillingAddress, m => {
  m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.Address1), m.MapFrom(...), m.Ignore())
})
Right now it's nulling out anything I have in those fields if I use the .Condition and a .MapFrom after it.
This isn't really how I'd like this to work, but it worked for this particular situation. It would still be nice to have what I wanted before, but it looks like if you don't do a MapFrom at all, it simply ignores the member.
.ForMember(m => m.BillingAddress, m => {
  m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.Address1));
  m.MapFrom(i => i.BillingDetails.Address1);
})
.ForMember(m => m.BillingAddress2, m => {
  m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.Address2));
  m.MapFrom(i => i.BillingDetails.Address2);
})
.ForMember(m => m.City, m => {
  m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.City));
  m.MapFrom(i => i.BillingDetails.City);
})
.ForMember(m => m.State, m => {
  m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.State));
  m.MapFrom(i => i.BillingDetails.State);
})
.ForMember(m => m.Zip, m => {
  m.Condition(s => !String.IsNullOrEmpty(s.BillingDetails?.Zip));
  m.MapFrom(i => i.BillingDetails.Zip);
})
