Logstash - Filter JSON

I need to save only the contents of the ship node to a Kafka topic. Unfortunately, I have already run several tests and the filter is not working.
My JSON is similar to this:
{
"_index": "abd",
"type" : "doc",
"_source":{
"response_body": {
"ship":[
{
"type" : "iPhone",
"number": "0123-4567-8888"
},
{
"type" : "iPhone",
"number": "0123-4567-4444"
}
]
}}}
My Logstash is configured like this:
input {
file {
path => "${PWD}/logstash_input"
start_position => "beginning"
sincedb_path => "/dev/null"
type => "json"
}
}
filter{
json{
source => "message"
target => "_source.response_body"
}
}
output {
kafka {
bootstrap_servers => "localhost:9092"
codec => json{}
topic_id => "testtopic"
}
}
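One possible direction (an untested sketch, assuming each JSON document arrives as a single-line event, since the file input reads line by line): parse the whole message into a temporary field and then promote the ship array to the top level. Note that a dotted target such as _source.response_body is generally treated as a literal field name rather than a nested path; nested fields are referenced as [_source][response_body].
filter {
  json {
    source => "message"
    target => "doc"
  }
  mutate {
    # promote the ship array to the top level ("doc" is just a scratch field name)
    rename => { "[doc][_source][response_body][ship]" => "ship" }
    # drop the scratch field and the raw message; other default fields
    # (host, path, @timestamp, @version) may also need removing before Kafka
    remove_field => [ "doc", "message" ]
  }
}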

Related

How to create separate indices for separate input types

I have the below logstash-syslog.conf file, which has two different input types: one with type => "syslog" and the other with type => "APIC". I need two separate output indices created, such as syslog-2018.08.25 and APIC-2018.08.05.
I want these indices to be created dynamically. I tried index => "%{[type]}-%{+YYYY.MM.dd}", but it did not work and killed Logstash.
Could you please suggest what I'm doing wrong in the config below and what needs to be fixed, for both the config and the index naming?
Below is the Logstash configuration file (Logstash version is 6.2):
$ vi logstash-syslog.conf
input {
file {
path => [ "/scratch/rsyslog/*/messages.log" ]
type => "syslog"
}
file {
path => [ "/scratch/rsyslog/Aug/messages.log" ]
type => "APIC"
}
}
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp } %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
if [type] == "APIC" {
grok {
match => { "message" => "%{CISCOTIMESTAMP:syslog_timestamp} %{CISCOTIMESTAMP} %{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
}
}
output {
elasticsearch {
hosts => "noida-elk:9200"
index => "syslog-%{+YYYY.MM.dd}"
#index => "%{[type]}-%{+YYYY.MM.dd}"
document_type => "messages"
}
}
This fixed it for me; the following configuration is working:
$ cat logstash-syslog.conf
input {
file {
path => [ "/scratch/rsyslog/*/messages.log" ]
type => "syslog"
}
file {
path => [ "/scratch/rsyslog/Aug/messages.log" ]
type => "apic_logs"
}
}
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp } %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
remove_field => ["#version", "host", "message", "_type", "_index", "_score", "path"]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
if [type] == "apic_logs" {
grok {
match => { "message" => "%{CISCOTIMESTAMP:syslog_timestamp} %{CISCOTIMESTAMP} %{SYSLOGHOST:syslog_hostname} (?<prog>[\w._/%-]+) %{SYSLOG5424SD:f1}%{SYSLOG5424SD:f2}%{SYSLOG5424SD:f3}%{SYSLOG5424SD:f4}%{SYSLOG5424SD:f5} %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
remove_field => ["@version", "host", "message", "_type", "_index", "_score", "path"]
}
}
}
output {
if [type] == "syslog" {
elasticsearch {
hosts => "noida-elk:9200"
manage_template => false
index => "syslog-%{+YYYY.MM.dd}"
document_type => "messages"
}
}
}
output {
if [type] == "apic_logs" {
elasticsearch {
hosts => "noida-elk:9200"
manage_template => false
index => "apic_logs-%{+YYYY.MM.dd}"
document_type => "messages"
}
}
}
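For what it's worth, the dynamic form index => "%{type}-%{+YYYY.MM.dd}" normally works on 6.x too. The likely reason it failed originally is that Elasticsearch rejects index names containing uppercase letters, so the type value "APIC" produced an invalid index name. A single output along these lines (a sketch, assuming all type values are lowercase) should behave the same as the two conditional outputs above:
output {
  elasticsearch {
    hosts => "noida-elk:9200"
    manage_template => false
    # type values must be lowercase ("syslog", "apic_logs"); ES rejects uppercase index names
    index => "%{type}-%{+YYYY.MM.dd}"
    document_type => "messages"
  }
}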

How to add a field via logstash

I want to insert the following field:
"date": {
"type": "date",
"format": "YYYY-MM-DD HH:mm:ss,SSS"
}
In my Logstash configuration I tried the following:
grok {
patterns_dir => "/etc/logstash/conf.d/patterns"
match => { "message" => "%{USERACTIVITY}" }
}
mutate {
add_field => {
"type" => "date"
"format" => "%{date}"
}
}
mutate {
add_field => {
"timestamp" => "{ %{type} , %{fomat} }"
}
}
But it is not working. Is it possible to add a key-value pair from an existing field?
Try,
mutate {
add_field => {
"type" => "date"
"format" => "%{[date][format]}"
}
}
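The difference is the field reference: in Logstash's sprintf syntax, %{date} refers to a top-level field named date, while %{[date][format]} reaches into the nested format key inside the date object. A minimal sketch combining a literal value with a nested reference (it assumes the event really carries a nested [date] object; "date_summary" is just an illustrative name):
mutate {
  add_field => {
    # literal text plus a copy of the nested [date][format] field
    "date_summary" => "type=date format=%{[date][format]}"
  }
}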

Logstash is sending a log twice. Repeating logs issue

I am parsing the logs of a file on my server and sending only info-, warning-, and error-level logs to my API, but the problem is that I am receiving each log twice. In the output I map the parsed log values onto my JSON fields and send that JSON to my API, but I receive that JSON mapping twice.
I have analyzed my Logstash log file, but each log entry appears only once in the log file.
{
"log_EventMessage" => "Unable to sendViaPost to url[http://ubuntu:8280/services/TestProxy.TestProxyHttpSoap12Endpoint] Read timed ",
"message" => "TID: [-1234] [] [2017-08-11 12:03:11,545] INFO {org.apache.axis2.transport.http.HTTPSender} - Unable to sendViaPost to url[http://ubuntu:8280/services/TestProxy.TestProxyHttpSoap12Endpoint] Read time",
"type" => "carbon",
"TimeStamp" => "2017-08-11T12:03:11.545",
"tags" => [
[0] "grokked",
[1] "loglevelinfo",
[2] "_grokparsefailure"
],
"log_EventTitle" => "org.apache.axis2.transport.http.HTTPSender",
"path" => "/home/waqas/Documents/repository/logs/carbon.log",
"#timestamp" => 2017-08-11T07:03:13.668Z,
"#version" => "1",
"host" => "ubuntu",
"log_SourceSystemId" => "-1234",
"EventId" => "b81a054e-babb-426c-b0a0-268494d14a0e",
"log_EventType" => "INFO"
}
Following is my configuration.
I need help; I am unable to figure out why this is happening.
input {
file {
path => "LOG_FILE_PATH"
type => "carbon"
start_position => "end"
codec => multiline {
pattern => "(^\s*at .+)|^(?!TID).*$"
negate => false
what => "previous"
auto_flush_interval => 1
}
}
}
filter {
#***********************************************************
# Grok Pattern to parse Single Line Log Entries
#**********************************************************
if [type] == "carbon" {
grok {
match => [ "message", "TID:%{SPACE}\[%{INT:log_SourceSystemId}\]%{SPACE}\[%{DATA:log_ProcessName}\]%{SPACE}\[%{TIMESTAMP_ISO8601:TimeStamp}\]%{SPACE}%{LOGLEVEL:log_EventType}%{SPACE}{%{JAVACLASS:log_EventTitle}}%{SPACE}-%{SPACE}%{GREEDYDATA:log_EventMessage}" ]
add_tag => [ "grokked" ]
}
mutate {
gsub => [
"TimeStamp", "\s", "T",
"TimeStamp", ",", "."
]
}
if "grokked" in [tags] {
grok {
match => ["log_EventType", "INFO"]
add_tag => [ "loglevelinfo" ]
}
grok {
match => ["log_EventType", "ERROR"]
add_tag => [ "loglevelerror" ]
}
grok {
match => ["log_EventType", "WARN"]
add_tag => [ "loglevelwarn" ]
}
}
#*****************************************************
# Grok Pattern in Case of Failure
#*****************************************************
if !( "_grokparsefailure" in [tags] ) {
grok{
match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
add_tag => [ "grokked" ]
}
date {
match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
target => "TimeStamp"
timezone => "UTC"
}
}
}
#*******************************************************************
# Grok Pattern to handle MultiLines Exceptions and StackTraces
#*******************************************************************
if ( "multiline" in [tags] ) {
grok {
match => [ "message", "%{GREEDYDATA:log_StackTrace}" ]
add_tag => [ "multiline" ]
tag_on_failure => [ "multiline" ]
}
date {
match => [ "timestamp", "yyyy MMM dd HH:mm:ss:SSS" ]
target => "TimeStamp"
}
}
}
filter {
uuid {
target => "EventId"
}
}
output {
if [type] == "carbon" {
if "loglevelerror" in [tags] {
stdout{codec => rubydebug}
#*******************************************************************
# Sending Error Messages to API
#*******************************************************************
http {
url => "https://localhost:8000/logs"
headers => {
"Accept" => "application/json"
}
connect_timeout => 60
socket_timeout => 60
http_method => "post"
format => "json"
mapping => ["EventId","%{EventId}","EventSeverity","High","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
}
}
}
if [type] == "carbon" {
if "loglevelinfo" in [tags] {
stdout{codec => rubydebug}
#*******************************************************************
# Sending Info Messages to API
#*******************************************************************
http {
url => "https://localhost:8000/logs"
headers => {
"Accept" => "application/json"
}
connect_timeout => 60
socket_timeout => 60
http_method => "post"
format => "json"
mapping => ["EventId","%{EventId}","EventSeverity","Low","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
}
}
}
if [type] == "carbon" {
if "loglevelwarn" in [tags] {
stdout{codec => rubydebug}
#*******************************************************************
# Sending Warn Messages to API
http {
url => "https://localhost:8000/logs"
headers => {
"Accept" => "application/json"
}
connect_timeout => 60
socket_timeout => 60
http_method => "post"
format => "json"
mapping => ["EventId","%{EventId}","EventSeverity","Medium","TimeStamp","%{TimeStamp}","EventType","%{log_EventType}","EventTitle","%{log_EventTitle}","EventMessage","%{log_EventMessage}","SourceSystemId","%{log_SourceSystemId}","StackTrace","%{log_StackTrace}"]
}
}
}
}
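A side note on the filter above (not necessarily the cause of the duplicates): the three grok blocks that only inspect [log_EventType] are what put _grokparsefailure into the tags of otherwise well-parsed events, because for any given line two of the three always fail. Plain conditionals avoid that (a sketch using the same field and tag names):
filter {
  if [log_EventType] == "INFO" {
    mutate { add_tag => [ "loglevelinfo" ] }
  }
  if [log_EventType] == "ERROR" {
    mutate { add_tag => [ "loglevelerror" ] }
  }
  if [log_EventType] == "WARN" {
    mutate { add_tag => [ "loglevelwarn" ] }
  }
}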

Logstash: TestResult comes out as an array

The generated results of running the config below show the TestResult section as an array. I am trying to get rid of that array before sending the data to Elasticsearch.
I have the following XML file:
<tem:SubmitTestResult xmlns:tem="http://www.example.com" xmlns:acs="http://www.example.com" xmlns:acs1="http://www.example.com">
<tem:LabId>123</tem:LabId>
<tem:userId>123</tem:userId>
<tem:TestResult>
<acs:CreatedBy>123</acs:CreatedBy>
<acs:CreatedDate>123</acs:CreatedDate>
<acs:LastUpdatedBy>123</acs:LastUpdatedBy>
<acs:LastUpdatedDate>123</acs:LastUpdatedDate>
<acs1:Capacity95FHigh>123</acs1:Capacity95FHigh>
<acs1:Capacity95FHigh_AHRI>123</acs1:Capacity95FHigh_AHRI>
<acs1:CondensateDisposal_AHRI>123</acs1:CondensateDisposal_AHRI>
<acs1:DegradationCoeffCool>123</acs1:DegradationCoeffCool>
</tem:TestResult>
</tem:SubmitTestResult>
And I am using this config:
input {
file {
path => "/var/log/logstash/test3.xml"
}
}
filter {
multiline {
pattern => "<tem:SubmitTestResult>"
negate => "true"
what => "previous"
}
if "multiline" in [tags] {
mutate {
gsub => ["message", "\n", ""]
}
mutate {
replace => ["message", '<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>%{message}']
}
xml {
source => "message"
target => "SubmitTestResult"
}
mutate {
remove_field => ["message", "#version", "host", "#timestamp", "path", "tags", "type"]
remove_field => ["[SubmitTestResult][xmlns:tem]","[SubmitTestResult][xmlns:acs]","[SubmitTestResult][xmlns:acs1]"]
}
mutate {
replace => [ "[SubmitTestResult][LabId]", "%{[SubmitTestResult][LabId]}" ]
replace => [ "[SubmitTestResult][userId]", "%{[SubmitTestResult][userId]}" ]
}
mutate {
replace => [ "[SubmitTestResult][TestResult][0][CreatedBy]", "%{[SubmitTestResult][TestResult][0][CreatedBy]}" ]
replace => [ "[SubmitTestResult][TestResult][0][CreatedDate]", "%{[SubmitTestResult][TestResult][0][CreatedDate]}" ]
replace => [ "[SubmitTestResult][TestResult][0][LastUpdatedBy]", "%{[SubmitTestResult][TestResult][0][LastUpdatedBy]}" ]
replace => [ "[SubmitTestResult][TestResult][0][LastUpdatedDate]", "%{[SubmitTestResult][TestResult][0][LastUpdatedDate]}" ]
replace => [ "[SubmitTestResult][TestResult][0][Capacity95FHigh]", "%{[SubmitTestResult][TestResult][0][Capacity95FHigh]}" ]
replace => [ "[SubmitTestResult][TestResult][0][Capacity95FHigh_AHRI]", "%{[SubmitTestResult][TestResult][0][Capacity95FHigh_AHRI]}" ]
replace => [ "[SubmitTestResult][TestResult][0][CondensateDisposal_AHRI]", "%{[SubmitTestResult][TestResult][0][CondensateDisposal_AHRI]}" ]
replace => [ "[SubmitTestResult][TestResult][0][DegradationCoeffCool]", "%{[SubmitTestResult][TestResult][0][DegradationCoeffCool]}" ]
}
}
}
output {
stdout {
codec => "rubydebug"
}
}
The result is:
"SubmitTestResult" => {
"LabId" => "123",
"userId" => "123",
"TestResult" => [
[0] {
"CreatedBy" => "123",
"CreatedDate" => "123",
"LastUpdatedBy" => "123",
"LastUpdatedDate" => "123",
"Capacity95FHigh" => "123",
"Capacity95FHigh_AHRI" => "123",
"CondensateDisposal_AHRI" => "123",
"DegradationCoeffCool" => "123"
}
]
}
As you can see, TestResult has the "[0]" array in there. Is there some config change I can make so that it doesn't come out as an array? I want to send this to Elasticsearch and want the data to be correct.
I figured this out. After the last mutate block, I added one more mutate block. All I had to do was rename the field and that did the trick.
mutate {
rename => {"[SubmitTestResult][TestResult][0]" => "[SubmitTestResult][TestResult]"}
}
The result now looks proper:
"SubmitTestResult" => {
"LabId" => "123",
"userId" => "123",
"TestResult" => {
"CreatedBy" => "123",
"CreatedDate" => "123",
"LastUpdatedBy" => "123",
"LastUpdatedDate" => "123",
"Capacity95FHigh" => "123",
"Capacity95FHigh_AHRI" => "123",
"CondensateDisposal_AHRI" => "123",
"DegradationCoeffCool" => "123"
}
}
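As an alternative to the rename, newer versions of the xml filter expose a force_array option (it defaults to true, which is what wraps single elements in an array). Setting it to false should avoid the [0] wrapper in the first place (a sketch, not tested against this exact input):
xml {
  source => "message"
  target => "SubmitTestResult"
  # do not wrap single child elements in arrays
  force_array => false
}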

How to make a two-condition check in Logstash and write a better configuration file

I am using Logstash 1.4.2.
I have logstash-forwarder.conf on the client log server like this:
{
"network": {
"servers": [ "xxx.xxx.xxx.xxx:5000" ],
"timeout": 15,
"ssl ca": "certs/logstash-forwarder.crt"
},
"files": [
{
"paths": [ "/var/log/messages" ],
"fields": { "type": "syslog" }
},
{
"paths": [ "/var/log/secure" ],
"fields": { "type": "linux-syslog" }
}
]
}
=========================================================
On the Logstash server:
1. filter.conf
filter {
if [type] == "syslog" {
date {
locale => "en"
match => ["syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
timezone => "Asia/Kathmandu"
target => "#timestamp"
add_field => { "debug" => "timestampMatched"}
}
grok {
match => { "message" => "\[%{WORD:messagetype}\]%{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
}
if [type] == "linux-syslog" {
date {
locale => "en"
match => ["syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
timezone => "Asia/Kathmandu"
target => "@timestamp"
add_field => { "debug" => "timestampMatched"}
}
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
mutate { replace => [ "syslog_timestamp", "%{syslog_timestamp} +0545" ] }
}
}
=======================================================
2. output.conf
output {
if [messagetype] == "WARNING" {
elasticsearch { host => "xxx.xxx.xxx.xxx" }
stdout { codec => rubydebug }
}
if [messagetype] == "ERROR" {
elasticsearch { host => "xxx.xxx.xxx.xxx" }
stdout { codec => rubydebug }
}
if [type] == "linux-syslog" {
elasticsearch { host => "xxx.xxx.xxx.xxx" }
stdout { codec => rubydebug }
}
}
=======================================================
I want all logs forwarded from /var/log/secure, and only ERROR and WARNING logs from /var/log/messages. I know this is not a good configuration; I want someone to show me a better way to do this.
I prefer to make decisions about events in the filter block. My input and output blocks are usually quite simple. From there, I see two options.
Use the drop filter
The drop filter causes an event to be dropped. It won't ever make it to your outputs:
filter {
#other processing goes here
if [type] == "syslog" and [messagetype] not in ["ERROR", "WARNING"] {
drop {}
}
}
The upside of this is that it's very simple.
The downside is that the event is just dropped; it won't be output at all, which is fine if that's what you want.
Use a tag
Many filters allow you to add tags, which are useful for communicating decisions between plugins. You could attach a tag telling your output block to send the event to ES:
filter {
#other processing goes here
if [type] == "linux-syslog" or [messagetype] in ["ERROR", "WARNING"] {
mutate {
add_tag => "send_to_es"
}
}
}
output {
if "send_to_es" in [tags] {
elasticsearch {
#config goes here
}
}
}
The upside of this is that it allows fine control.
The downside of this is that it's a bit more work, and your ES data ends up a little bit polluted (the tag will be visible and searchable in ES).
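One refinement to the tag approach, on Logstash 1.5 and later (so not the 1.4.2 used in the question): fields under [@metadata] can be used in conditionals but are never sent to outputs, so the ES documents are not polluted. A sketch:
filter {
  # other processing goes here
  if [type] == "linux-syslog" or [messagetype] in ["ERROR", "WARNING"] {
    mutate { add_field => { "[@metadata][send_to_es]" => "true" } }
  }
}
output {
  if [@metadata][send_to_es] == "true" {
    elasticsearch {
      # config goes here
    }
  }
}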
