Logstash grok config error - while parsing file name - logstash

I am new to logstash and grok and I am trying to parse AWS ECS logs in an S3 bucket in the following format -
File Name - my-logs-s3-bucket/3d265ee3-d2ee-4029-a3d9-fd2255d69b92/ecs-fargate-container-8ff0e472-c76f-4f61-a363-64c2b80aa842/000000.gz
Sample Lines -
2019-05-09T16:16:16.983Z JBoss Bootstrap Environment
2019-05-09T16:16:16.983Z JBOSS_HOME: /app/jboss
2019-05-09T16:16:16.983Z JAVA_OPTS: -server -XX:+UseCompressedOops -Djboss.server.log.dir=/var/log/jboss -Xms128m -Xmx4096m
And logstash.conf
input {
  s3 {
    region => "us-east-1"
    bucket => "my-logs-s3-bucket"
    interval => "7200"
  }
}
filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:tstamp}"]
  }
  date {
    match => ["tstamp", "ISO8601"]
  }
  mutate {
    remove_field => ["tstamp"]
    add_field => {
      "file" => "%{[@metadata][s3][key]}"
    }
    ######### NEED HELP HERE - START #########
    #grok {
    #  match => [ "file", "ecs-fargate-container-%{DATA:containerlogname}"]
    #}
    ######### NEED HELP HERE - END #########
  }
}
output {
  stdout {
    codec => rubydebug {
      #metadata => true
    }
  }
}
I am able to see all the logs parsed and the file name extracted when I run logstash using the above configuration and the file name from the output looks like below -
"file" => "myapp-logs/3d265ee3-d2ee-4029-a3d9-fd2255d69b92/ecs-fargate-container-8ff0e472-c76f-4f61-a363-64c2b80aa842/000000.gz",
I am trying to use grok to extract the file name as either ecs-fargate-container-8ff0e472-c76f-4f61-a363-64c2b80aa842 or just 8ff0e472-c76f-4f61-a363-64c2b80aa842 by uncommenting the grok config lines between the NEED HELP HERE - START and END markers, but that fails with the error below:
Expected one of #, => at line 21, column 10 (byte 536) after filter {\n grok {\n match => [\"message\", \"%{TIMESTAMP_ISO8601:tstamp}\"]\n }\n date {\n match => [\"tstamp\", \"ISO8601\"]\n }\n mutate {\n #remove_field => [\"tstamp\"]\n add_field => {\n \"file\" => \"%{[@metadata][s3][key]}\"\n }\n grok ", :
I am not sure where I am going wrong with this. Please advise.

Your grok filter was nested inside the mutate filter; it has to be its own filter block. Try the following.
filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:tstamp}"]
  }
  date {
    match => ["tstamp", "ISO8601"]
  }
  mutate {
    remove_field => ["tstamp"]
    add_field => { "file" => "%{[@metadata][s3][key]}" }
  }
  grok {
    match => [ "file", "ecs-fargate-container-%{DATA:containerlogname}"]
  }
}
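If you only want the UUID part, note that %{DATA} is non-greedy, so with nothing anchoring it on the right it can end up capturing an empty string. A safer sketch (assuming the stock UUID grok pattern and that your keys always follow the layout shown; the field names are just examples):
grok {
  # capture just the container id, e.g. 8ff0e472-c76f-4f61-a363-64c2b80aa842
  match => [ "file", "ecs-fargate-container-%{UUID:containerid}" ]
}
If you want the full ecs-fargate-container-<uuid> directory name instead, a custom capture such as "(?<containerlogname>ecs-fargate-container-%{UUID})" should also work.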

Related

logstash configuration grok parse timestamp

I am trying to parse
[7/1/05 13:41:00:516 PDT]
This is the grok configuration I have written for it:
\[%{DD/MM/YY HH:MM:SS:S Z}\]
With the date filter:
input {
  file {
    path => "logstash-5.0.0/bin/sta.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match =>" \[%{DATA:timestamp}\] "
  }
  date {
    match => ["timestamp","DD/MM/YY HH:MM:SS:S ZZZ"]
  }
}
output {
  stdout { codec => "json" }
}
Above is the configuration I have used.
And consider this as my sta.log file content:
[7/1/05 13:41:00:516 PDT]
Getting this error:
[2017-01-31T12:37:47,444][ERROR][logstash.agent ] fetched an invalid config {:config=>"input {\nfile {\npath => \"logstash-5.0.0/bin/sta.log\"\nstart_position => \"beginning\"\n}\n}\nfilter {\ngrok {\nmatch =>\"\\[%{DATA:timestamp}\\]\"\n}\ndate {\nmatch => [\"timestamp\"=>\"DD/MM/YY HH:MM:SS:S ZZZ\"]\n}\n}\noutput {\nstdout{codec => \"json\"}\n}\n\n", :reason=>"Expected one of #, {, ,, ] at line 12, column 22 (byte 184) after filter {\ngrok {\nmatch =>\"\\[%{DATA:timestamp}\\]\"\n}\ndate {\nmatch => [\"timestamp\""}
Can anyone help here?
You forgot to specify which field your grok filter should match against. A correct configuration would look like this:
input {
  file {
    path => "logstash-5.0.0/bin/sta.log"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => {"message" => "\[%{DATA:timestamp} PDT\]"}
  }
  date {
    match => ["timestamp","dd/MM/yy HH:mm:ss:SSS"]
  }
}
output {
  stdout { codec => "json" }
}
For further reference check out the grok documentation here.
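One caveat: the grok pattern above drops the PDT zone, so the date filter will interpret the timestamp in the Logstash host's default timezone. If the logs are always Pacific time, one way to make that explicit is the date filter's timezone option (a sketch; adjust the zone to wherever the logs actually come from):
date {
  match    => ["timestamp", "dd/MM/yy HH:mm:ss:SSS"]
  # assumption: the source logs are US Pacific time (PDT/PST)
  timezone => "America/Los_Angeles"
}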

Logstash 1.4.2 grok filter: _grokparsefailure

I am trying to parse this log line:
- 2014-04-29 13:04:23,733 [main] INFO (api.batch.ThreadPoolWorker) Command-line options for this run:
Here's the logstash config file I use:
input {
  stdin {}
}
filter {
  grok {
    match => [ "message", " - %{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel} %{JAVACLASS:class} %{DATA:mydata} "]
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    host => "localhost"
  }
  stdout { codec => rubydebug }
}
Here's the output I get:
{
  "message" => " - 2014-04-29 13:04:23,733 [main] INFO (api.batch.ThreadPoolWorker) Commans run:",
  "@version" => "1",
  "@timestamp" => "2015-02-02T10:53:58.282Z",
  "host" => "NAME_001.corp.com",
  "tags" => [
    [0] "_grokparsefailure"
  ]
}
Please, can anyone help me find where the problem is in the grok pattern?
I tried to parse that line in http://grokdebug.herokuapp.com/ but it parses only the timestamp, %{WORD} and %{LOGLEVEL}; the rest is ignored!
There are two errors in your config.
First
The error in the grok is the JAVACLASS: you have to include the ( ) in the pattern, for example \(%{JAVACLASS:class}\).
Second
The date filter's match takes two values: the first is the field you want to parse, which in your example is time, not timestamp; the second is the date pattern. You can refer to the documentation here.
Here is the config:
input {
  stdin {
  }
}
filter {
  grok {
    match => [ "message", " - %{TIMESTAMP_ISO8601:time} \[%{WORD:main}\] %{LOGLEVEL:loglevel} \(%{JAVACLASS:class}\) %{GREEDYDATA:mydata}" ]
  }
  date {
    match => [ "time" , "YYYY-MM-dd HH:mm:ss,SSS" ]
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
FYI. Hope this can help you.
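With the sample line above, the corrected pattern should yield roughly the following fields (an illustration based on the pattern, not actual captured output):
time     => "2014-04-29 13:04:23,733"
main     => "main"
loglevel => "INFO"
class    => "api.batch.ThreadPoolWorker"
mydata   => "Command-line options for this run:"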

logstash output not showing the desired timestamp

I am trying to get the desired timestamp format in the logstash output. I can't get it with the format I am using in syslog.
Please share your thoughts about converting it to the format that is in the _source field, i.e. the Yyyy-mm-ddThh:mm:ss.sssZ format.
filter {
  grok {
    match => [ "logdate", "Yyyy-mm-ddThh:mm:ss.sssZ" ]
    overwrite => ["host", "message"]
  }
_source: {
message: "activity_log: {"created_at":1421114642210,"actor_ip":"192.168.1.1","note":"From system","user":"4561c9d7aaa9705a25f66d","user_id":null,"actor":"4561c9d7aaa9705a25f66d","actor_id":null,"org_id":null,"action":"user.failed_login","data":{"transaction_id":"d6768c473e366594","name":"user.failed_login","timing":{"start":1422127860691,"end":14288720480691,"duration":0.00257},"actor_locatio
I am using this code in the syslog config file:
filter {
  if [message] =~ /^activity_log: / {
    grok {
      match => ["message", "^activity_log: %{GREEDYDATA:json_message}"]
    }
    json {
      source => "json_message"
      remove_field => "json_message"
    }
    date {
      match => ["created_at", "UNIX_MS"]
    }
    mutate {
      rename => ["[json][repo]", "repo"]
      remove_field => "json"
    }
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
thanks
"message" => "<134>feb 1 20:06:12 {\"created_at\":1422765535789, pid=5450 tid=28643 version=b0b45ac proto=http ip=192.168.1.1 duration_ms=0.165809 fs_sent=0 fs_recv=0 client_recv=386 client_sent=0 log_level=INFO msg=\"http op done: (401)\" code=401" }
"#version" => "1",
"#timestamp" => "2015-02-01T20:06:12.726Z",
"type" => "activity_log",
"host" => "192.168.1.1"
The pattern in your grok filter doesn't make sense. You're using a Joda-Time pattern (normally used for the date filter) and not a grok pattern.
It seems your message field contains a JSON object. That's good, because it makes it easy to parse. Extract the part that comes after "activity_log: " to a temporary json_message field,
grok {
  match => ["message", "^activity_log: %{GREEDYDATA:json_message}"]
}
and parse that field as JSON with the json filter (removing the temporary field if the operation was successful):
json {
  source => "json_message"
  remove_field => ["json_message"]
}
Now you should have the fields from the original message field at the top level of your message, including the created_at field with the timestamp you want to extract. That number is the number of milliseconds since the epoch so you can use the UNIX_MS pattern in a date filter to extract it into @timestamp:
date {
  match => ["created_at", "UNIX_MS"]
}
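For the event above, created_at is 1421114642210 milliseconds, which (if I have the arithmetic right) corresponds to 2015-01-13T02:04:02.210Z, so @timestamp should end up with that value instead of the time Logstash received the event.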

Facing errors in logstash

When I defined the pattern for parsing Apache Tomcat and application log files in Logstash, I got the following error.
Sample log file is:
2014-08-20 12:35:26,037 INFO [routerMessageListener-74] PoolableRuleEngineFactory Executing the rule -->ECE Tagging Rule
Config file is:
filter {
  grok {
    type => "log4j"
    #pattern => "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:severity} \[\w+\[%{GREEDYDATA:thread},.*\]\] %{JAVACLASS:class} - %{GREEDYDATA:message}"
    pattern => "%{TIMESTAMP_ISO8601:logdate}"
    #add_tag => [ "level_%{level}" ]
  }
  date {
    match => [ "logdate", "YYYY-MM-dd HH:mm:ss,SSS"]
  }
}
Unknown setting 'timestamp' for date {:level=>:error}
Your post does not show a 'timestamp' setting for your date filter. I suspect you had started with the example here, which used the timestamp setting from older versions of the date filter. You correctly changed it to the match setting for newer versions of Logstash, but perhaps had not saved your change. I have no problems using the above filter with logstash-1.5.3.
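For illustration, a leftover setting like the old-style line below is exactly what produces that error on current versions, while the current syntax goes through match (the old-style line is a reconstruction of the removed setting, not taken from your config):
# old style, rejected with "Unknown setting 'timestamp' for date":
# date { timestamp => "YYYY-MM-dd HH:mm:ss,SSS" }
# current style:
date {
  match => [ "logdate", "YYYY-MM-dd HH:mm:ss,SSS" ]
}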
Here is my complete config file. Note that I am still testing it, but it seems to work for importing a JBoss log with Log4j messages from an existing log file.
input {
  tcp {
    type => "log4j"
    port => 4560
  }
  stdin {
    type => "log4j"
  }
}
filter {
  grok {
    type => "log4j"
    #pattern => "%{TIMESTAMP_ISO8601:logdate} %{LOGLEVEL:severity} \[\w+\[%{GREEDYDATA:thread},.*\]\] %{JAVACLASS:class} - %{GREEDYDATA:message}"
    pattern => "%{TIMESTAMP_ISO8601:logdate}"
    #add_tag => [ "level_%{level}" ]
  }
  date {
    type => "log4j"
    match => [ "logdate", "YYYY-MM-dd HH:mm:ss,SSS"]
    exclude_tags => "_grokparsefailure"
  }
  # Catches normal space indented type things, probably could be removed b/c the other multiline should do everything we need
  multiline {
    type => "log4j"
    tags => ["_grokparsefailure"] # exclude anything we already handled
    pattern => ".*"
    what => "previous"
    add_tag => "notgrok"
  }
}
output {
  gelf {
    host => "localhost"
    custom_fields => ["environment", "PROD", "service", "BestServiceInTheWorld"]
  }
  # Print each event to stdout.
  stdout {
    codec => json
  }
}
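As a side note on the date filter above: with the sample line from the question, logdate would be 2014-08-20 12:35:26,037, and the YYYY-MM-dd HH:mm:ss,SSS pattern parses it into @timestamp, converted to UTC using the Logstash host's timezone since no timezone option is set.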

How to remove trailing newline from message field

I am shipping Glassfish 4 logfiles with Logstash to an ElasticSearch sink. How can I remove with Logstash the trailing newline from a message field?
My event looks like this:
{
  "@timestamp" => "2013-11-21T13:29:33.081Z",
  "message" => "[2013-11-21T13:29:32.577+0000] [glassfish 4.0] [INFO] [] [javax.resourceadapter.mqjmsra.lifecycle] [tid: _ThreadID=142 _ThreadName=Thread-43] [timeMillis: 1385040572577] [levelValue: 800] [[\n MQJMSRA_RA1101: GlassFish MQ JMS Resource Adapter stopped.]]\n",
  "@version" => "1",
  "tags" => ["multiline", "date_filtered"],
  "host" => "myhost",
  "path" => "../server.log"
}
A second solution is to use Logstash's mutate filter. It allows you to strip leading and trailing whitespace from the value of a field.
filter {
  # Remove leading and trailing whitespace (including newlines etc.)
  mutate {
    strip => "message"
  }
}
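Applied to the event above, that should leave the message ending in "...Resource Adapter stopped.]]" with the trailing \n removed.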
You have to use the multiline filter with the correct pattern to tell Logstash that every line with preceding whitespace belongs to the line before. Add these lines to your conf file.
filter {
  ...
  multiline {
    type => "gflogs"
    pattern => "\[\#\|\d{4}"
    negate => true
    what => "previous"
  }
  ...
}
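The way this works: with negate => true and what => "previous", any line that does not match \[\#\|\d{4} (i.e. does not start a new [#|<year>... record) is appended to the preceding event, so a log record that spans several physical lines becomes a single Logstash event.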
You can also include the grok plugin to handle the timestamp and keep irregular lines from being indexed.
See the complete setup, with a single Logstash instance on the same machine:
input {
  stdin {
    type => "stdin-type"
  }
  file {
    path => "/path/to/glassfish/logs/*.log"
    type => "gflogs"
  }
}
filter {
  multiline {
    type => "gflogs"
    pattern => "\[\#\|\d{4}"
    negate => true
    what => "previous"
  }
  grok {
    type => "gflogs"
    pattern => "(?m)\[\#\|%{TIMESTAMP_ISO8601:timestamp}\|%{LOGLEVEL:loglevel}\|%{DATA:server_version}\|%{JAVACLASS:category}\|%{DATA:kv}\|%{DATA:message}\|\#\]"
    named_captures_only => true
    singles => true
  }
  date {
    type => "gflogs"
    match => [ "timestamp", "ISO8601" ]
  }
  kv {
    type => "gflogs"
    exclude_tags => "_grokparsefailure"
    source => "kv"
    field_split => ";"
    value_split => "="
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch { embedded => true }
}
This worked for me. Please also look at this post on the logstash-usergroup. I can also recommend the great and up-to-date Logstash book. It's also a good way to support the work of the Logstash author.
Hope to see you at a JUG-Berlin event!
