Logstash - read and parse a string with multiple records on same line

Below is my config file:
input {
  file {
    path => "/Users/.../Work/Projects/ELK/Logstash/Input/*.txt"
    start_position => "beginning"
    codec => "json"
    type => "data"
    sincedb_path => "NUL"
  }
}
filter {
  grok { match => { "message" => "<%{NUMBER}>%{SYSLOGTIMESTAMP} %{IPV4} %{HOSTNAME:nodeName}: %{NUMBER} %{WORD} \[%{DATA}\]: %{GREEDYDATA:[anotherField]} \n" } }
  # Create array of strings
  mutate { split => { "[anotherField]" => "| " } }
  # Create a separate event for each array entry
  split { field => "[anotherField]" }
}
output {
  stdout {}
  file {
    path => "/Users/...../Work/Projects/ELK/Logstash/test.csv"
    codec => line { format => "%{nodeName},%{anotherField}" }
  }
}
Input to the above is:
{"#timestamp":"2022-12-01T13:30:00.004Z","message":"<190>Dec 1 14:29:59 10.62.161.199 AA-AMG3U: 0950198238 NN [MDA 8/4]: LN44 SA 2022 Dec 1 14:29:59:87 CET 17 4001 10.XX.133.XX 56560 401 91.235.10.25 15179 2400160261XXXXX_467000XXXX1_35292011220XXXXX_string1 | LN44 SD 2022 Dec 1 14:29:59:89 CET 17 4001 10.XX.133.XX 56560 401 91.235.10.25 15179 2400160261XXXXX_467000XXXX1_35292011220XXXXX_string2 | LN44 SA 2022 Dec 1 14:29:59:87 CET 17 4001 10.XX.133.XX 56560 401 91.235.10.25 15179 2400160261XXXXX_467679XXXX2_35292011220XXXXX_string1 \n","#version":"1","host":"100.62.161.XXX"}
Current output is:
AA-AMG3U,LN44 SA 2022 Dec 1 14:29:59:87 CET 17 4001 10.XX.133.XX 56560 401 91.235.10.25 15179 2400160261XXXXX_467000XXXX1_35292011220XXXXX_string1
AA-AMG3U,LN44 SD 2022 Dec 1 14:29:59:89 CET 17 4001 10.XX.133.XX 56560 401 91.235.10.25 15179 2400160261XXXXX_467000XXXX1_35292011220XXXXX_string2
AA-AMG3U,LN44 SA 2022 Dec 1 14:29:59:87 CET 17 4001 10.XX.133.XX 56560 401 91.235.10.25 15179 2400160261XXXXX_467679XXXX2_35292011220XXXXX_string1
How do I change the configuration to get the output as:
AA-AMG3U,LN44,SA,2022 Dec 1 14:29:59:87 CET,17,4001,10.XX.133.XX 56560,401 91.235.10.25 15179,2400160261XXXXX,467000XXXX1,35292011220XXXXX,string1
AA-AMG3U,LN44,SD,2022 Dec 1 14:29:59:87 CET,17,4001,10.XX.133.XX 56560,401 91.235.10.25 15179,2400160261XXXXX,467000XXXX1,35292011220XXXXX,string1
AA-AMG3U,LN44,SA,2022 Dec 1 14:29:59:87 CET,17,4001,10.XX.133.XX 56560,401 91.235.10.25 15179,2400160261XXXXX,467000XXXX2,35292011220XXXXX,string1
I am not sure how to iterate through the fields inside each of the split strings.
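One way to get there (a sketch, untested against this exact data; the column names col1, ts, srcIp, id1 and so on are invented for illustration) is to add a dissect filter after the split, using the append modifier (%{+field}) to rebuild the multi-token timestamp and address columns. Splitting on " | " instead of "| " also avoids leaving a trailing space on each entry:
filter {
  # ... grok, mutate and split as above, but split on " | " ...
  dissect {
    mapping => {
      "[anotherField]" => "%{col1} %{col2} %{ts} %{+ts} %{+ts} %{+ts} %{+ts} %{col3} %{col4} %{srcIp} %{+srcIp} %{col5} %{+col5} %{+col5} %{id1}_%{id2}_%{id3}_%{id4}"
    }
  }
}
The file output codec then references the new fields:
output {
  file {
    path => "/Users/...../Work/Projects/ELK/Logstash/test.csv"
    codec => line { format => "%{nodeName},%{col1},%{col2},%{ts},%{col3},%{col4},%{srcIp},%{col5},%{id1},%{id2},%{id3},%{id4}" }
  }
}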

Related

What should the correct template mapping be in the Integration Request of AWS API Gateway?

I am working on an event stream system. My solution is: API Gateway -> Kinesis data stream -> Firehose delivery stream -> S3.
1. Send a JSON request (action is PUT) to API Gateway
2. Integrate the request with the Kinesis service
3. Validate the data in the template mapping of the Integration Request
4. Deliver the data to the data stream via Firehose data delivery
5. Consume the stream and save it into S3
It works with the data below.
1. Request data
{
    "Data" : "this is a test"
}
2. Template mapping code of the Integration Request with Firehose
{
    "StreamName" : "Stream",
    "Data" : "$util.base64Encode($input.json('$.Data'))",
    "PartitionKey" : "1"
}
Execution log for request 4ad5f3a1-da7d-436f-bd3f-879ba045c622
Fri Jan 15 15:14:19 UTC 2021 : Starting execution for request: 4ad5f3a1-da7d-436f-bd3f-879ba045c622
Fri Jan 15 15:14:19 UTC 2021 : HTTP Method: PUT, Resource Path: /record
Fri Jan 15 15:14:19 UTC 2021 : Method request path: {}
Fri Jan 15 15:14:19 UTC 2021 : Method request query string: {}
Fri Jan 15 15:14:19 UTC 2021 : Method request headers: {}
Fri Jan 15 15:14:19 UTC 2021 : Method request body before transformations: {
"property":"1",
"name":"name"
}
Fri Jan 15 15:14:19 UTC 2021 : Endpoint request URI: https://kinesis.us-east-1.amazonaws.com/?Action=PutRecord
Fri Jan 15 15:14:19 UTC 2021 : Endpoint request headers: {Authorization=***************************************************************************************************************************************************************************************************************************************************************************************9bb292, X-Amz-Date=20210115T151419Z, x-amzn-apigateway-api-id=450acvde3l, Accept=application/json, User-Agent=AmazonAPIGateway_450acvde3l, X-Amz-Security-Token=IQoJb3JpZ2luX2VjEEAaCXVzLWVhc3QtMSJHMEUCIQC7W693nrtuUGrgUvOpn+PUKTRYTGAzRYVSWUPKoAc4eAIgGp/uH4yV/vynaMQUBNEBTwP79m0P9nVfCmw9CMWjY0Mq5QIIGBACGgw1MzUxMTE0MzMzNDAiDKy2z/4lnGJJJAWaOyrCAnANknvMUIbvf9diTZ4qYsdJ3LjETy9hE0f9pUZTBh7aKRjn2y4hbZc/oXGpsmHcgsgJAlIGD51gD0O5LR+N2EKjQeeo5GijkYvZvdbmxVRAZR27EhFzujCk3okePxAzVb3SeCQDWrBnEWn2VwqX4Y75QWwOgL8ZfcuxNKF/wbks0ukPFBo0p68KQ24PIXOYGMs9IAxHYa195+5mUMSi5Av62QKPpBbUAgZ2IJt9EKvRKyLYCs5XzS4lPCgsIbwouVB38f6UQKdQ9XsmD0cX04ODkdj2/0GHne6ufar7nANd3o08PmALd9mXK4z4gt/OxkZUe2AVLXvNQ4IodzW0WOUe1nFvq6YxpveDVfXKDRSK [TRUNCATED]
Fri Jan 15 15:14:19 UTC 2021 : Endpoint request body after transformations: {
"StreamName" : "EventStream",
"Data" : "IiI=",
"PartitionKey" : "1"
}
Fri Jan 15 15:14:19 UTC 2021 : Sending request to https://kinesis.us-east-1.amazonaws.com/?Action=PutRecord
Fri Jan 15 15:14:19 UTC 2021 : Received response. Status: 200, Integration latency: 21 ms
Fri Jan 15 15:14:19 UTC 2021 : Endpoint response headers: {x-amzn-RequestId=c028bf47-0059-8b93-9853-0cccfda9a977, x-amz-id-2=L15V+8AS8gN3HxJwiz0qVqi/UJn2xSWueRMnXdhqgjFw6TeuaRlYX62DZK9pa+O1PcKopTP55aHRLdhX0cQwovKefVpiuRtv, Date=Fri, 15 Jan 2021 15:14:19 GMT, Content-Type=application/x-amz-json-1.1, Content-Length=110}
Fri Jan 15 15:14:19 UTC 2021 : Endpoint response body before transformations: {"SequenceNumber":"49614572211779959518343530489315214421429290892684951554","ShardId":"shardId-000000000000"}
Fri Jan 15 15:14:19 UTC 2021 : Method response body after transformations: {"SequenceNumber":"49614572211779959518343530489315214421429290892684951554","ShardId":"shardId-000000000000"}
Fri Jan 15 15:14:19 UTC 2021 : Method response headers: {X-Amzn-Trace-Id=Root=1-6001b14b-bbccae126175224317a10a4e, Content-Type=application/json}
Fri Jan 15 15:14:19 UTC 2021 : Successfully completed execution
Fri Jan 15 15:14:19 UTC 2021 : Method completed with status: 200
But it does not work with the data below.
1. Request data
{
    "Data" : {
        "property":"1",
        "name":"name"
    }
}
2. Template mapping code of the Integration Request with Firehose
{
    "StreamName" : "Stream",
    "Data" : {
        "property" : $input.json('$.property'),
        "name" : $input.json('$.name')
    },
    "PartitionKey" : "1"
}
Execution log for request 05b707cc-d95c-40bf-8b75-375048697414
Fri Jan 15 15:39:34 UTC 2021 : Starting execution for request: 05b707cc-d95c-40bf-8b75-375048697414
Fri Jan 15 15:39:34 UTC 2021 : HTTP Method: PUT, Resource Path: /record
Fri Jan 15 15:39:34 UTC 2021 : Method request path: {}
Fri Jan 15 15:39:34 UTC 2021 : Method request query string: {}
Fri Jan 15 15:39:34 UTC 2021 : Method request headers: {}
Fri Jan 15 15:39:34 UTC 2021 : Method request body before transformations: {
"property":"1",
"name":"name"
}
Fri Jan 15 15:39:34 UTC 2021 : Endpoint request URI: https://kinesis.us-east-1.amazonaws.com/?Action=PutRecord
Fri Jan 15 15:39:34 UTC 2021 : Endpoint request headers: {Authorization=***************************************************************************************************************************************************************************************************************************************************************************************a43c70, X-Amz-Date=20210115T153934Z, x-amzn-apigateway-api-id=450acvde3l, Accept=application/json, User-Agent=AmazonAPIGateway_450acvde3l, X-Amz-Security-Token=IQoJb3JpZ2luX2VjEEAaCXVzLWVhc3QtMSJFMEMCHw0tTLi/gwtxUjwAQSl/CIY8aie2nmayl+Qsm6/i520CID9iXCFafaQTh4YqE1/tzvKgMO5IlYgFJrcNbAQB2Nc+KuUCCBkQAhoMNTM1MTExNDMzMzQwIgxeE22ch5hdFy3nB+UqwgJyKnQpnLuEY3zpcbRdEO5jks7yfx2+o1xfIz9Kga0S1PojPfzxh5aD/PthhP8D0jutv96ZVe8p52TwfSnv/z3YeDCFzsnw/U9kGFzGVt1pY2JMB4sg1vU7li8pFP/qiUQ3QA8cXbp4nWeE3kQGlPG4pjH0MsOvowTxM8G6yKosvCdD8fVyCJxWIjFnn1+dK9GGV/MnZlnaVqc57z0n0nrHgLjxBzDcDKJ5/xrgcqcYmUETFj8NyDJ9ESzCp0PhKJV9tGF4LgxbAgffe2Yw/3qpQyB6JNqJrZEczADp3gL0rjIBXhbnx5Yizs9MBMtoB9L22mAwEeqJRx4lK12wOqQZ5+0homLCugauYVoy3juNv/zW [TRUNCATED]
Fri Jan 15 15:39:34 UTC 2021 : Endpoint request body after transformations: {
"StreamName" : "EventStream",
"Data" : {
"property" : "1",
"name" : "name"
},
"PartitionKey" : "1"
}
Fri Jan 15 15:39:34 UTC 2021 : Sending request to https://kinesis.us-east-1.amazonaws.com/?Action=PutRecord
Fri Jan 15 15:39:34 UTC 2021 : Received response. Status: 400, Integration latency: 2 ms
Fri Jan 15 15:39:34 UTC 2021 : Endpoint response headers: {x-amzn-RequestId=cc937c57-744b-fca2-94e8-c5214b4386dd, x-amz-id-2=fqg8MLIbHOfeeFXp6wpii3l4yl32mI/5RyTwYQyzw9/OqpdLNqCBvBbTp8x7Q4YWAroefbfHb5IUEYeD68SQqh2bq87nL4vp, connection=close, Date=Fri, 15 Jan 2021 15:39:34 GMT, Content-Type=application/x-amz-json-1.1, Content-Length=99}
Fri Jan 15 15:39:34 UTC 2021 : Endpoint response body before transformations: {"__type":"SerializationException","Message":"Start of structure or map found where not expected."}
Fri Jan 15 15:39:34 UTC 2021 : Method response body after transformations: {"__type":"SerializationException","Message":"Start of structure or map found where not expected."}
Fri Jan 15 15:39:34 UTC 2021 : Method response headers: {X-Amzn-Trace-Id=Root=1-6001b736-328415f2db20383946bffd9e, Content-Type=application/json}
Fri Jan 15 15:39:34 UTC 2021 : Successfully completed execution
Fri Jan 15 15:39:34 UTC 2021 : Method completed with status: 200
reference: https://www.youtube.com/watch?v=0UxiV5sUlcA
My bad, I forgot to base64-encode the payload in the template mapping. The Kinesis PutRecord API expects Data to be a base64-encoded string, not a JSON object, which is why it rejected the request with "Start of structure or map found where not expected."
The code should be:
{
    "StreamName" : "Stream",
    "Data" : "$util.base64Encode(
        {
            "property" : $input.json('$.property'),
            "name" : $input.json('$.name')
        }
    )",
    "PartitionKey" : "1"
}
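If the goal is simply to forward the whole request body, a shorter mapping (a sketch using API Gateway's documented $input.body variable) avoids rebuilding the JSON by hand:
{
    "StreamName" : "Stream",
    "Data" : "$util.base64Encode($input.body)",
    "PartitionKey" : "1"
}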

AWS adding instance via API Gateway

So I have a function in Lambda. The function is connected to API Gateway and it should add an EC2 instance. When I hit the endpoint via the API Gateway method test, it returns status 200 but no instance is added. Maybe the instance params are wrong? Basically, the function is a modified version of the documentation example.
var AWS = require('aws-sdk');
AWS.config.update({region: 'us-east-2'});

exports.handler = function index(event, context, callback) {
    // Load the AWS SDK for Node.js
    // Load credentials and set region from JSON file
    // Create EC2 service object
    var ec2 = new AWS.EC2({apiVersion: '2016-11-15'});
    // AMI is amzn-ami-2011.09.1.x86_64-ebs
    var instanceParams = {
        InstanceType: 't2.micro',
        KeyName: 'firstkeypair',
        ImageId: 'ami-0bbe28eb2173f6167'
    };
    // Create a promise on an EC2 service object
    var instancePromise = new AWS.EC2({apiVersion: '2016-11-15'}).runInstances(instanceParams).promise();
    // Handle promise's fulfilled/rejected states
    instancePromise.then(
        function(data) {
            console.log(data);
            var instanceId = data.Instances[0].InstanceId;
            console.log("Created instance", instanceId);
            // Add tags to the instance
            tagParams = {
                Resources: [instanceId],
                Tags: [
                    {
                        Key: 'Name',
                        Value: 'SDK Sample'
                    }
                ]
            };
            // Create a promise on an EC2 service object
            var tagPromise = new AWS.EC2({apiVersion: '2016-11-15'}).createTags(tagParams).promise();
            // Handle promise's fulfilled/rejected states
            tagPromise.then(
                function(data) {
                    console.log("Instance tagged");
                }).catch(
                function(err) {
                    console.error(err, err.stack);
                });
        }).catch(
        function(err) {
            console.error(err, err.stack);
        });
};
AWS test logs:
Execution log for request a83bae6e-2fbf-4d88-ad70-a683a83bdc41
Sun Aug 16 16:56:00 UTC 2020 : Starting execution for request: a83bae6e-2fbf-4d88-ad70-a683a83bdc41
Sun Aug 16 16:56:00 UTC 2020 : HTTP Method: GET, Resource Path: /
Sun Aug 16 16:56:00 UTC 2020 : Method request path: {}
Sun Aug 16 16:56:00 UTC 2020 : Method request query string: {}
Sun Aug 16 16:56:00 UTC 2020 : Method request headers: {}
Sun Aug 16 16:56:00 UTC 2020 : Method request body before transformations:
Sun Aug 16 16:56:00 UTC 2020 : Endpoint request URI: https://lambda.us-east-2.amazonaws.com/2015-03-31/functions/arn:aws:lambda:us-east-2:081348884123:function:hello/invocations
Sun Aug 16 16:56:00 UTC 2020 : Endpoint request headers: {x-amzn-lambda-integration-tag=a83bae6e-2fbf-4d88-ad70-a683a83bdc41, Authorization=**************************************************************************************************************************************************************************************************************************************************************************************59de14, X-Amz-Date=20200816T165600Z, x-amzn-apigateway-api-id=o2hkrbm1o4, X-Amz-Source-Arn=arn:aws:execute-api:us-east-2:081348884123:o2hkrbm1o4/test-invoke-stage/GET/, Accept=application/json, User-Agent=AmazonAPIGateway_o2hkrbm1o4, X-Amz-Security-Token=IQoJb3JpZ2luX2VjEAAaCXVzLWVhc3QtMiJIMEYCIQCPi2S8PtDGsVK3w101D8B05/BCFGyUCzHeX8CT6tC7pAIhAJZCgpbZN94qCVdAgrQGlIIE+ABsO9MDkzh6Lf3WGq3IKr0DCNn//////////wEQARoMNzE4NzcwNDUzMTk1IgxILUqxpu50pB1cJmcqkQP/g+OuOqP7/zXYq8IAzTMolDThuprxjuzwDbmtAmS3adcmmHO25YxBQrId1XiR7ZEU7mq52k4A0nIFhBPkz2dZZIfr8MiLVCDx5tLok8j3lPZJOW+I3n7BVglTMtfQDpPYRSUcIQhOfsSnEEc+FKPzHyrzGsLeazIUHItf5L3xY4QO9tyDWnTXfcM2pp [TRUNCATED]
Sun Aug 16 16:56:00 UTC 2020 : Endpoint request body after transformations:
Sun Aug 16 16:56:00 UTC 2020 : Sending request to https://lambda.us-east-2.amazonaws.com/2015-03-31/functions/arn:aws:lambda:us-east-2:081348884123:function:hello/invocations
Sun Aug 16 16:56:02 UTC 2020 : Received response. Status: 200, Integration latency: 1952 ms
Sun Aug 16 16:56:02 UTC 2020 : Endpoint response headers: {Date=Sun, 16 Aug 2020 16:56:02 GMT, Content-Type=application/json, Content-Length=4, Connection=keep-alive, x-amzn-RequestId=f84212ea-38f8-40cc-b5c6-c12885e78392, x-amzn-Remapped-Content-Length=0, X-Amz-Executed-Version=$LATEST, X-Amzn-Trace-Id=root=1-5f396520-4d9dfcb6b965192c5fea0df6;sampled=0}
Sun Aug 16 16:56:02 UTC 2020 : Endpoint response body before transformations: null
Sun Aug 16 16:56:02 UTC 2020 : Method response body after transformations: null
Sun Aug 16 16:56:02 UTC 2020 : Method response headers: {X-Amzn-Trace-Id=Root=1-5f396520-4d9dfcb6b965192c5fea0df6;Sampled=0, Content-Type=application/json}
Sun Aug 16 16:56:02 UTC 2020 : Successfully completed execution
Sun Aug 16 16:56:02 UTC 2020 : Method completed with status: 200
Policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "iam:*",
                "organizations:DescribeAccount",
                "organizations:DescribeOrganization",
                "organizations:DescribeOrganizationalUnit",
                "organizations:DescribePolicy",
                "organizations:ListChildren",
                "organizations:ListParents",
                "organizations:ListPoliciesForTarget",
                "organizations:ListRoots",
                "organizations:ListPolicies",
                "organizations:ListTargetsForPolicy"
            ],
            "Resource": "*"
        }
    ]
}
Edit:
Solved by adding the EC2 Full Access permission to the Lambda function.
There were 2 issues, as discovered through the comments.
The first was that the RunInstances call did not include the MinCount and MaxCount properties, which led to no instances being launched.
Once this was fixed, the next issue was a lack of permissions to run ec2:RunInstances or ec2:CreateTags.
It is worth stating that the best practice with permissions is to scope down to the minimal set required to run successfully.
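For reference, a minimal sketch of the two fixes. RunInstances requires MinCount and MaxCount (a documented EC2 API requirement), so the parameters become:
var instanceParams = {
    InstanceType: 't2.micro',
    KeyName: 'firstkeypair',
    ImageId: 'ami-0bbe28eb2173f6167',
    MinCount: 1, // launch exactly one instance
    MaxCount: 1
};
And rather than EC2 Full Access, an execution-role policy scoped to the two actions the function actually calls would look like this (resource-level conditions omitted for brevity):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "ec2:RunInstances", "ec2:CreateTags" ],
            "Resource": "*"
        }
    ]
}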

Token is getting invalidated using the Node.js speakeasy library

My purpose is to expire a token after 1 hr (3600 secs). While trying this with the Node.js speakeasy library, the token is getting invalidated much before that. The logs below are for 1, 10 and 60 minutes, and even the 1-minute token is invalidated well before the minute is up. Most of the time I am getting inconsistent results.
Partial code snippet
let secret = speakeasy.generateSecret({
    length: 10
});
let seconds = 3600; // 1 hr
let token = speakeasy.totp({
    secret: secret.base32,
    step: seconds
});
let otp = {
    "secret": secret.base32.toString(),
    "token": token
};

function checkOTP(otp) {
    let verified = speakeasy.totp.verify({
        secret: otp.secret,
        token: otp.token,
        step: seconds
    });
    return verified;
}
Am I doing something wrong? A few console logs from a sample script:
For 1 minute - invalidated 18 secs early
[ Fri Dec 08 2017 09:16:18 GMT-0800 (Pacific Standard Time) ](true) 9:16:59 AM
[ Fri Dec 08 2017 09:16:18 GMT-0800 (Pacific Standard Time) ](false) 9:17:00 AM
For 10 mins - invalidated ~7 minutes early
[ Fri Dec 08 2017 09:18:28 GMT-0800 (Pacific Standard Time) ](true) 9:19:59 AM
[ Fri Dec 08 2017 09:18:28 GMT-0800 (Pacific Standard Time) ](true) 9:19:59 AM
[ Fri Dec 08 2017 09:18:28 GMT-0800 (Pacific Standard Time) ](true) 9:19:59 AM
[ Fri Dec 08 2017 09:18:28 GMT-0800 (Pacific Standard Time) ](true) 9:19:59 AM
[ Fri Dec 08 2017 09:18:28 GMT-0800 (Pacific Standard Time) ](false) 9:20:00 AM
For 1 hr - invalidated ~7 minutes early
[ Fri Dec 08 2017 11:07:01 GMT-0800 (Pacific Standard Time) ](true) 11:56:41 AM
[ Fri Dec 08 2017 11:07:01 GMT-0800 (Pacific Standard Time) ](true) 11:56:43 AM
[ Fri Dec 08 2017 11:07:01 GMT-0800 (Pacific Standard Time) ](false) 12:00:37 PM
What is the appropriate way to validate within the above window?
From the readme of speakeasy it looks like your token parameters are wrong:
var token = speakeasy.totp({
    secret: secret.base32,
    encoding: 'base32',
    time: 1453667708 // You have this as 'step', not 'time'
});
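A note on the timing pattern in the logs (an observation beyond the original answer): TOTP binds a token to the counter floor(unixTime / step), so tokens flip at wall-clock step boundaries, which is exactly what the logs show (false at 9:17:00 and 9:20:00), rather than a fixed interval after generation. If tokens generated late in a window must survive into the next one, speakeasy's documented window option on verify accepts neighbouring steps:
let verified = speakeasy.totp.verify({
    secret: otp.secret,
    token: otp.token,
    step: 3600,
    window: 1 // also accept the previous/next 3600-second step
});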

How to format date in Filter in Logstash

I am using Logstash to send JSON messages to an API. I am reading logs from a log file. My configuration is working fine and it is sending all the messages to the API. Following is a sample log file:
Log File:
2014 Jun 01 18:57:34:158 GMT +5 BW.Customer_01_001_009-Process_Archive Info [BW-Core] BWENGINE-300009 BW Plugins: version 5.10.0, build V48, 2012-6-3
2014 Jun 01 18:57:34:162 GMT +5 BW.Customer_01_001_009-Process_Archive Info [BW-Core] BWENGINE-300010 XML Support: TIBCOXML Version 5.51.500.003
2014 Jun 01 18:57:34:162 GMT +5 BW.Customer_01_001_009-Process_Archive Info [BW-Core] BWENGINE-300011 Java version: Java HotSpot(TM) Server VM 20.5-b03
2014 Jun 01 18:57:34:162 GMT +5 BW.Customer_01_001_009-Process_Archive Info [BW-Core] BWENGINE-300012 OS version: i386 Linux 3.11.0-12-generic
2014 Jun 01 18:57:41:018 GMT +5 BW.Customer_01_001_009-Process_Archive Warn [BW_Core] Duplicate message map entry for BW-HTTP-100118
2014 Jun 01 18:57:41:027 GMT +5 BW.Customer_01_001_009-Process_Archive Warn [BW_Core] Duplicate message map entry for BW-HTTP-100206
2014 Jun 01 18:57:41:408 GMT +5 BW.Customer_01_001_009-Process_Archive Info [BW-Core] BWENGINE-300013 Tibrv string encoding: ISO8859-1
2014 Jun 01 18:57:42:408 GMT +5 BW.Customer_01_001_009-Process_Archive Warn [BW_Core] Duplicate message map entry for BW-HTTP-100118
2014 Jun 01 18:57:42:408 GMT +5 BW.Customer_01_001_009-Process_Archive Warn [BW_Core] Duplicate message map entry for BW-HTTP-100206
2014 Jun 01 18:57:42:555 GMT +5 BW.Customer_01_001_009-Process_Archive Warn [BW_Core] Duplicate message map entry for BW-HTTP-100118
2014 Jun 01 18:57:42:555 GMT +5 BW.Customer_01_001_009-Process_Archive Warn [BW_Core] Duplicate message map entry for BW-HTTP-100206
2014 Jun 01 18:57:42:557 GMT +5 BW.Customer_01_001_009-Process_Archive Warn [BW_Core] Duplicate message map entry for BW-HTTP-100118
2014 Jun 01 18:57:42:557 GMT +5 BW.Customer_01_001_009-Process_Archive Warn [BW_Core] Duplicate message map entry for BW-HTTP-100206
2014 Jun 01 18:57:42:595 GMT +5 BW.Customer_01_001_009-Process_Archive Warn [BW_Core] Duplicate message map entry for BW-HTTP-100118
I am using a grok pattern to parse this log file. Following is my sample configuration file:
Config File:
filter {
  if [type] == "bw5applog" {
    grok {
      match => [ "message", "(?<log_timestamp>%{YEAR}\s%{MONTH}\s%{MONTHDAY}\s%{TIME}:\d{3})\s(?<log_Timezone>%{DATA}\s%{DATA})\s(?<log_MessageTitle>%{DATA})(?<MessageType>%{LOGLEVEL})%{SPACE}\[%{DATA:ProcessName}\]%{SPACE}%{GREEDYDATA:Message}" ]
      add_tag => [ "grokked" ]
    }
    mutate {
      gsub => [
        "TimeStamp", "\s", "T",
        "TimeStamp", ",", "."
      ]
    }
    if !( "_grokparsefailure" in [tags] ) {
      grok {
        match => [ "message", "%{GREEDYDATA:StackTrace}" ]
        add_tag => [ "grokked" ]
      }
      date {
        match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
        target => "TimeStamp"
        timezone => "UTC"
      }
    }
  }
}
I am able to parse the complete log entry according to my requirement, but I want to format the date.
Problem Statement:
Currently I am getting date in the following format from the parsed log entries:
log_timestamp: 2014·May·28·12:07:35:927
But the format in which my API is expecting the date is as below:
Expected Output:
log_timestamp: 2014-05-28T12:07:35:927
How can I achieve that using the above filter configuration? I tried a few things but wasn't able to succeed.
You are applying the date filter on the wrong field. Instead of timestamp, you have to apply it on the log_timestamp field, which contains the date you want to parse:
date {
  match => [ "log_timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
  target => "log_timestamp"
  timezone => "UTC"
}
In addition, the mutate filter is useless since it is applied to a field (TimeStamp) which does not exist at that point.
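One caveat worth adding (not part of the original answer): the date filter stores a Logstash timestamp, which serializes with a dot before the milliseconds (2014-05-28T12:07:35.927Z). If the API really requires the colon-separated milliseconds shown above, a ruby filter can render the string explicitly; this is a sketch, and log_timestamp_str is a made-up field name:
ruby {
  # Depending on the Logstash version the field holds a Timestamp or a Time; %L is milliseconds.
  code => "t = event['log_timestamp']; t = t.time if t.respond_to?(:time); event['log_timestamp_str'] = t.strftime('%Y-%m-%dT%H:%M:%S:%L')"
}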

qmail logstash multiline filtering

I've been using logstash for a while now with great success for apache access logs and occasional mysql logs. I've just started to use it for qmail logs but wanted a better way to group qmail logs based on the qmail ID and be able to track bounces or other delivery failures and statuses. I've seen some stuff regarding postfix but not qmail.
Has anyone used logstash like this with qmail? How does your logstash config look? How do your Kibana dashboards look?
Any help would be appreciated.
Here's an example of some qmail logs:
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.510987 new msg 33592
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.511087 info msg 33592: bytes 10820 from <SmallBusinessLoan.martin.cota-martin.cota=example1.com@example.com> qp 3740 uid 89
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.513616 starting delivery 1314142: msg 33592 to local example1.com-martin.cota@example1.com
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.513686 status: local 1/4 remote 1/120
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.576361 delivery 1314142: success: did_0+0+1/
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.576491 status: local 0/4 remote 1/120
Oct 15 09:26:08 imappop1-mail qmail: 1413379568.576548 end msg 33592
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.579644 new msg 33603
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.579790 info msg 33603: bytes 4370 from <loansfidelity@example2.com> qp 5037 uid 89
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.582804 starting delivery 1314143: msg 33603 to local example3.com-daniel@example3.com
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.582967 status: local 1/4 remote 1/120
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.619422 delivery 1314143: success: did_0+0+1/
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.619512 status: local 0/4 remote 1/120
Oct 15 09:26:09 imappop1-mail qmail: 1413379569.619561 end msg 33603
Ideally I'd like to be able to track the entire anatomy of these logs. Here are my current logstash-forwarder config and my logstash input/filter:
{
    "network": {
        "servers": [ "192.168.115.61:5000" ],
        "timeout": 15,
        "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
    },
    "files": [
        {
            "paths": [
                "/var/log/messages",
                "/var/log/secure",
                "/var/log/haraka.log",
                "/var/log/maillog"
            ],
            "fields": { "type": "syslog" }
        }
    ]
}
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    multiline {
      pattern => "(([^\s]+)Exception.+)|(at:.+)"
      stream_identity => "%{logsource}.%{@type}"
      what => "previous"
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
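There is no qmail-specific parsing above, so here is a rough sketch of a second grok stage (the qmail_* field names are invented, the patterns are untested against real traffic, and the array-of-patterns match syntax assumes a reasonably recent grok). It carves syslog_message into the epoch prefix, the event verb, and the msg/delivery ids, which is enough to group a message's full lifecycle in Kibana by qmail_msg_id:
filter {
  if [syslog_program] == "qmail" {
    # Split off the high-resolution epoch that prefixes every qmail line.
    grok {
      match => { "syslog_message" => "%{NUMBER:qmail_epoch} %{GREEDYDATA:qmail_event}" }
    }
    # One anchored pattern per qmail event type; unmatched lines get tagged.
    grok {
      match => { "qmail_event" => [
        "^new msg %{NUMBER:qmail_msg_id}",
        "^info msg %{NUMBER:qmail_msg_id}: bytes %{NUMBER:qmail_bytes} from <%{DATA:qmail_from}> qp %{NUMBER:qmail_qp} uid %{NUMBER:qmail_uid}",
        "^starting delivery %{NUMBER:qmail_delivery_id}: msg %{NUMBER:qmail_msg_id} to %{WORD:qmail_dest_type} %{GREEDYDATA:qmail_recipient}",
        "^delivery %{NUMBER:qmail_delivery_id}: %{WORD:qmail_status}: %{GREEDYDATA:qmail_detail}",
        "^end msg %{NUMBER:qmail_msg_id}",
        "^status: %{GREEDYDATA:qmail_concurrency}"
      ] }
      tag_on_failure => [ "_qmail_grok_miss" ]
    }
  }
}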
