Logstash date format, not getting any data

Hi, the following is my configuration on a CentOS 6 Logstash server. I am using Logstash 1.4.2 and Elasticsearch 1.2.1. I am forwarding logs from /var/log/messages and /var/log/secure, and their time format is "Sep 1 22:15:34".
1. input.conf
input {
lumberjack {
port => 5000
type => "logs"
ssl_certificate => "certs/logstash-forwarder.crt"
ssl_key => "private/logstash-forwarder.key"
}
}
2. filter.conf
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
date {
locale => "en" # possibly this didn't work in Logstash 1.4.2
match => ["syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss", "ISO8601"]
add_field => { "debug" => "timestampMatched"}
timezone => "UTC"
}
ruby { code => "event['@timestamp'] = event['@timestamp'].getlocal" } # I saw somewhere that instead of "locale => en" we have to use this in Logstash 1.4.2
mutate { replace => [ "syslog_timestamp", "%{syslog_timestamp} +0545" ] } # this probably won't work and will give a date parsing error
}
}
3. output.conf
output {
elasticsearch { host => "logstash_server_ip" }
stdout { codec => rubydebug }
}
Below is logstash-forwarder conf in all client server
{
"network": {
"servers": [ "logstash_server_ip:5000" ],
"timeout": 15,
"ssl ca": "certs/logstash-forwarder.crt"
},
"files": [
{
"paths": [
"/var/log/messages",
"/var/log/secure"
],
"fields": { "type": "syslog" }
}
]
}
Here is the problem: I am forwarding logs from 5 servers with different timezones, e.g. EDT, NDT, NST, NPT. The logstash_server timezone is NPT (Nepal Time) [UTC+5:45].
All servers give the following output:
2014/09/02 08:09:02.204882 Setting trusted CA from file: certs/logstash-forwarder.crt
2014/09/02 08:09:02.205372 Connecting to logstash_server_ip:5000 (logstash_server_ip)
2014/09/02 08:09:02.205600 Launching harvester on new file: /var/log/secure
2014/09/02 08:09:02.205615 Starting harvester at position 5426763: /var/log/messages
2014/09/02 08:09:02.205742 Current file offset: 5426763
2014/09/02 08:09:02.279715 Starting harvester: /var/log/secure
2014/09/02 08:09:02.279756 Current file offset: 12841221
2014/09/02 08:09:02.638448 Connected to logstash_server_ip
2014/09/02 08:09:09.998098 Registrar received 1024 events
2014/09/02 08:09:15.189079 Registrar received 1024 events
which I hope is good, but only the server in the NPT timezone is forwarding logs I can see in Kibana; all the others give me the output above, yet I am not able to see anything in Kibana. I think the problem is in the date filter, since it is not able to parse the dates coming from the other servers. Also, there is no error showing in the Logstash logs.
How do I solve the problem in this case?

In the logstash-forwarder config, change
"fields": { "type": "syslog" }
to
"fields": { "type": "syslog", "syslog_timezone": "Asia/Kathmandu" }
And change filter.conf to
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
# Set timezone appropriately
if [syslog_timezone] in [ "Asia/Kathmandu" ] {
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
remove_field => [ "syslog_timezone" ]
timezone => "Asia/Kathmandu"
}
} else if [syslog_timezone] in [ "America/Chicago", "US/Central" ] {
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
remove_field => [ "syslog_timezone" ]
timezone => "America/Chicago"
}
} else if [syslog_timezone] =~ /.+/ {
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
add_tag => [ "unknown_timezone" ]
timezone => "Etc/UTC"
}
} else {
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
timezone => "Etc/UTC"
}
}
}
}
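Not part of the original answer, but a minimal Python sketch (standard library only; the helper name is made up) of why the per-client timezone matters: a syslog timestamp carries no year and no zone, so the same wall-clock string is a different absolute instant in each zone, which is exactly what the date filter's timezone option resolves.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

def parse_syslog_ts(ts: str, tz_name: str, year: int = 2014) -> datetime:
    """Parse a 'MMM d HH:mm:ss' syslog timestamp in the given timezone.

    Syslog timestamps carry no year or zone, so both must be supplied --
    the same job the logstash date filter's `timezone` option does.
    """
    naive = datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S")
    return naive.replace(tzinfo=ZoneInfo(tz_name))

npt = parse_syslog_ts("Sep 1 22:15:34", "Asia/Kathmandu")
utc = parse_syslog_ts("Sep 1 22:15:34", "Etc/UTC")

# Kathmandu is UTC+05:45, so the same wall-clock string is
# 5h45m earlier as an absolute instant.
delta = utc - npt
print(delta)  # 5:45:00
```

Without the conditional above, events from the non-NPT clients would all be stamped 5:45 off, landing outside the time window Kibana is showing.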

Related

Convert a string to date in logstash in json DATA

From this source data
2022-01-21 12:25:01,339 {"category":"runtime","some_id":"order","correlation_id":"OEID_1","servid":"143","provision_id":"898769049","operation_name":"CREATE", "processing_state":"ACTIVE","lifecycle_state":"ACTIVE","created":"2022-01-21 12:25:00,369","changed":"2022-01-21 12:25:00,806","runtime":"0.437"}
and my basic logstash config
filter {
grok {
match => { message => "^%{TIMESTAMP_ISO8601:logdate}%{SPACE}*%{DATA:json}$" }
add_tag => [ "matched", "provisioning_runtime" ]
}
json {
source => "json"
add_tag => [ "json" ]
}
# matcher for the @timestamp
date {
match => [ "logdate", "ISO8601", "yyyy-MM-dd HH:mm:ss,SSS"]
}
I tried to convert the created field from a string to a date field, without replacing the @timestamp field, but nothing I tried works. How do I set this up in the config?
From what I understand, you want to convert created and changed to date values as well. This can be done like this:
filter {
grok {
match => { message => "^%{TIMESTAMP_ISO8601:logdate}%{SPACE}*%{DATA:json}$" }
add_tag => [ "matched", "provisioning_runtime" ]
}
json {
source => "json"
add_tag => [ "json" ]
}
# matcher for the @timestamp
date {
match => [ "logdate", "ISO8601", "yyyy-MM-dd HH:mm:ss,SSS"]
}
# matcher for the created
date {
match => [ "created", "ISO8601", "yyyy-MM-dd HH:mm:ss,SSS"]
target => "created"
}
# matcher for the changed
date {
match => [ "changed", "ISO8601", "yyyy-MM-dd HH:mm:ss,SSS"]
target => "changed"
}
}
You can use something like
date {
match => [ "logdate", "ISO8601", "yyyy-MM-dd HH:mm:ss,SSS"]
target => "logdate"
}
Here's the documentation.
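As an aside (not part of the original answer), the Joda pattern yyyy-MM-dd HH:mm:ss,SSS maps onto the sample data like this rough Python equivalent, where %f stands in for the comma-separated millisecond field:

```python
from datetime import datetime

logdate = "2022-01-21 12:25:01,339"   # from the sample log line

# Joda "yyyy-MM-dd HH:mm:ss,SSS" -> Python strptime equivalent.
# %f normally means microseconds, but it accepts 1-6 digits, so the
# 3-digit millisecond field "339" parses as 339000 microseconds.
parsed = datetime.strptime(logdate, "%Y-%m-%d %H:%M:%S,%f")

print(parsed.isoformat())          # 2022-01-21T12:25:01.339000
print(parsed.microsecond // 1000)  # 339
```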

How to create a separate index for each input type

I have the logstash-syslog.conf file below, which has two different input types: one with type => "syslog" and another with type => "APIC". I need two separate output indices created, such as syslog-2018.08.25 and APIC-2018.08.05.
I want these indices to be created dynamically. I tried index => "%{[type]}-%{+YYYY.MM.dd}", but it did not work and killed Logstash.
Could you please suggest what I'm doing wrong in the config below, and what needs fixing for both the config and the index type?
Below is the Logstash configuration file:
Logstash version: 6.2
$ vi logstash-syslog.conf
input {
file {
path => [ "/scratch/rsyslog/*/messages.log" ]
type => "syslog"
}
file {
path => [ "/scratch/rsyslog/Aug/messages.log" ]
type => "APIC"
}
}
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
if [type] == "APIC" {
grok {
match => { "message" => "%{CISCOTIMESTAMP:syslog_timestamp} %{CISCOTIMESTAMP} %{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
}
}
output {
elasticsearch {
hosts => "noida-elk:9200"
index => "syslog-%{+YYYY.MM.dd}"
#index => "%{[type]}-%{+YYYY.MM.dd}"
document_type => "messages"
}
}
This fix works for me:
$ cat logstash-syslog.conf
input {
file {
path => [ "/scratch/rsyslog/*/messages.log" ]
type => "syslog"
}
file {
path => [ "/scratch/rsyslog/Aug/messages.log" ]
type => "apic_logs"
}
}
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
remove_field => ["@version", "host", "message", "_type", "_index", "_score", "path"]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
if [type] == "apic_logs" {
grok {
match => { "message" => "%{CISCOTIMESTAMP:syslog_timestamp} %{CISCOTIMESTAMP} %{SYSLOGHOST:syslog_hostname} (?<prog>[\w._/%-]+) %{SYSLOG5424SD:f1}%{SYSLOG5424SD:f2}%{SYSLOG5424SD:f3}%{SYSLOG5424SD:f4}%{SYSLOG5424SD:f5} %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
remove_field => ["@version", "host", "message", "_type", "_index", "_score", "path"]
}
}
}
output {
if [type] == "syslog" {
elasticsearch {
hosts => "noida-elk:9200"
manage_template => false
index => "syslog-%{+YYYY.MM.dd}"
document_type => "messages"
}
}
}
output {
if [type] == "apic_logs" {
elasticsearch {
hosts => "noida-elk:9200"
manage_template => false
index => "apic_logs-%{+YYYY.MM.dd}"
document_type => "messages"
}
}
}
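For illustration only (the function name here is made up), the sprintf-style index name that Logstash builds from index => "%{[type]}-%{+YYYY.MM.dd}" can be sketched in Python; note that the %{+...} date comes from the event's @timestamp, which Logstash keeps in UTC:

```python
from datetime import datetime, timezone

def index_name(event_type: str, ts: datetime) -> str:
    """Build a daily index name the way logstash expands
    index => "%{[type]}-%{+YYYY.MM.dd}" from the event's @timestamp."""
    return f"{event_type}-{ts.strftime('%Y.%m.%d')}"

ts = datetime(2018, 8, 25, tzinfo=timezone.utc)
print(index_name("apic_logs", ts))  # apic_logs-2018.08.25
print(index_name("syslog", ts))     # syslog-2018.08.25
```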

Logstash conditional not matching

I'm trying to match a substring in my conditional filter, but it doesn't seem to work.
I have a log like this:
<30>ddns[21535]: Dynamic DNS update for xxx (Duck DNS) successful
And I am trying to match the ddns part of the log, since logs can also be sent by different services.
Currently my filter looks like this:
filter {
if [program] =~ "ddns" {
grok {
match => { "message" => "<%{PROG:syslog_pri}>%{DATA:program}[%{INT:pid}]: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
}
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "yyyy:MM:dd-HH:mm:ss" ]
}
mutate {
replace => [ "@source_host", "sflne01.sarandasnet.local" ]
replace => [ "@message", "%{syslog_message}" ]
remove_field => [ "syslog_message", "syslog_timestamp" ]
}
}
I have also tried using if [program] =~ /^ddns$/, but without success.
UPDATED CONFIG:
filter {
################
# START IPFIRE #
################
if [host] =~ /172\.16\.0\.1/ {
if [program] =~ /(?:k|kernel)/ {
grok {
match => { "message" => "<%{PROG:syslog_pri}>%{DATA:program}: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
}
}
if [prog] =~ /^ddns$/ {
grok {
match => { "message" => "<%{PROG:syslog_pri}>%{DATA:program}\[%{INT:pid}\]: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
}
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "yyyy:MM:dd-HH:mm:ss" ]
}
mutate {
replace => [ "@source_host", "sflne01.sarandasnet.local" ]
replace => [ "@message", "%{syslog_message}" ]
remove_field => [ "syslog_message", "syslog_timestamp" ]
}
kv {
source => "@message"
}
geoip {
source => "SRC"
target => "geoip"
database => "/etc/logstash/GeoLiteCity.dat"
add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
}
mutate {
convert => [ "[geoip][coordinates]", "float"]
}
}
################
# END IPFIRE #
################
}
I made the conditional work using this:
if [message] =~ /ddns/
I think you have to use / instead of " so that ddns is treated as a regex.
There is also an error in /^ddns$/: ^ anchors at the start of the string and $ at the end, so the only thing this regex will match is exactly ddns. You'll have to remove both anchors if you want the regex to match ddns anywhere in the string.
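The anchoring behaviour is easy to check outside Logstash; here's a quick Python sketch (not from the original answer) against the sample log line:

```python
import re

message = "<30>ddns[21535]: Dynamic DNS update for xxx (Duck DNS) successful"

# /^ddns$/ only matches the exact string "ddns" -- anchored at both ends.
anchored = re.search(r"^ddns$", message)
# /ddns/ matches "ddns" anywhere in the string.
unanchored = re.search(r"ddns", message)

print(anchored)          # None
print(bool(unanchored))  # True
# The anchored pattern does match when the whole string is just "ddns":
print(re.search(r"^ddns$", "ddns") is not None)  # True
```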

Remove multiple date fields in syslog filter

I have set up logstash and am using the "default" syslog filter as follows:
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
This results in two fields being created, @timestamp and syslog_timestamp, which essentially contain the same value, albeit in different formats.
Is there a way to create a temporary syslog_timestamp field in grok so it can be passed into the date plugin, or do I have to explicitly remove the field via mutate after I've "used" it? For example:
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
mutate {
remove_field => [ "syslog_timestamp" ]
}
}
}
Thanks for any pointers.
There are no truly temporary fields; if you don't want syslog_timestamp, you'll have to remove it.
One thing I advise though is that you perform the remove_field in your date filter. Doing this will only remove the field when the date filter is successfully applied, meaning that if the date filter fails, it will leave the syslog_timestamp field behind, potentially revealing the cause of the failure.
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
remove_field => [ "syslog_timestamp" ]
}
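The success-only removal described above can be sketched in plain Python (a toy model of the date filter's behaviour, not Logstash's actual implementation):

```python
from datetime import datetime

def apply_date_filter(event: dict) -> dict:
    """Mimic the date filter: parse syslog_timestamp into @timestamp and
    remove the source field only on success, keeping it around for
    debugging when parsing fails (as the answer recommends)."""
    raw = event.get("syslog_timestamp", "")
    for fmt in ("%b %d %H:%M:%S",):  # rough equivalent of "MMM d HH:mm:ss"
        try:
            event["@timestamp"] = datetime.strptime(raw, fmt)
            del event["syslog_timestamp"]  # success: temp field removed
            return event
        except ValueError:
            pass
    event.setdefault("tags", []).append("_dateparsefailure")
    return event  # failure: syslog_timestamp left behind for inspection

ok = apply_date_filter({"syslog_timestamp": "Sep 1 22:15:34"})
bad = apply_date_filter({"syslog_timestamp": "not-a-date"})
print("syslog_timestamp" in ok)   # False
print("syslog_timestamp" in bad)  # True
```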

How to check two conditions in Logstash and write a better configuration file

I am using Logstash 1.4.2.
I have this logstash-forwarder.conf on the client log servers:
{
"network": {
"servers": [ "xxx.xxx.xxx.xxx:5000" ],
"timeout": 15,
"ssl ca": "certs/logstash-forwarder.crt"
},
"files": [
{
"paths": [ "/var/log/messages" ],
"fields": { "type": "syslog" }
},
{
"paths": [ "/var/log/secure" ],
"fields": { "type": "linux-syslog" }
}
]
}
=========================================================
In logstash server
1. filter.conf
filter {
if [type] == "syslog" {
date {
locale => "en"
match => ["syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
timezone => "Asia/Kathmandu"
target => "@timestamp"
add_field => { "debug" => "timestampMatched"}
}
grok {
match => { "message" => "\[%{WORD:messagetype}\]%{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
}
if [type] == "linux-syslog" {
date {
locale => "en"
match => ["syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
timezone => "Asia/Kathmandu"
target => "@timestamp"
add_field => { "debug" => "timestampMatched"}
}
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{@timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
mutate { replace => [ "syslog_timestamp", "%{syslog_timestamp} +0545" ] }
}
}
=======================================================
2. output.conf
output {
if [messagetype] == "WARNING" {
elasticsearch { host => "xxx.xxx.xxx.xxx" }
stdout { codec => rubydebug }
}
if [messagetype] == "ERROR" {
elasticsearch { host => "xxx.xxx.xxx.xxx" }
stdout { codec => rubydebug }
}
if [type] == "linux-syslog" {
elasticsearch { host => "xxx.xxx.xxx.xxx" }
stdout { codec => rubydebug }
}
}
=======================================================
I want all logs forwarded from /var/log/secure, but only ERROR and WARNING logs from /var/log/messages. I know this is not a good configuration; I would like someone to show me a better way to do it.
I prefer to make decisions about events in the filter block. My input and output blocks are usually quite simple. From there, I see two options.
Use the drop filter
The drop filter causes an event to be dropped. It won't ever make it to your outputs:
filter {
#other processing goes here
if [type] == "syslog" and [messagetype] not in ["ERROR", "WARNING"] {
drop {}
}
}
The upside of this is that it's very simple.
The downside is that the event is just dropped. It won't be output at all. Which is fine, if that's what you want.
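A toy Python model of what drop does (just an illustration, not Logstash internals): dropped events simply never reach the output stage.

```python
def should_drop(event: dict) -> bool:
    """Mirror the drop condition: syslog events whose messagetype is
    not ERROR or WARNING never reach the outputs."""
    return (event.get("type") == "syslog"
            and event.get("messagetype") not in ("ERROR", "WARNING"))

events = [
    {"type": "syslog", "messagetype": "ERROR"},   # kept
    {"type": "syslog", "messagetype": "INFO"},    # dropped
    {"type": "linux-syslog"},                     # kept
]
kept = [e for e in events if not should_drop(e)]
print(len(kept))  # 2
```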
Use a tag
Many filters allow you to add tags, which are useful for communicating decisions between plugins. You could attach a tag telling your output block to send the event to ES:
filter {
#other processing goes here
if [type] == "linux-syslog" or [messagetype] in ["ERROR", "WARNING"] {
mutate {
add_tag => "send_to_es"
}
}
}
output {
if "send_to_es" in [tags] {
elasticsearch {
#config goes here
}
}
}
The upside of this is that it allows fine control.
The downside of this is that it's a bit more work, and your ES data ends up a little bit polluted (the tag will be visible and searchable in ES).
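A toy Python model of the tag approach (again just a sketch, not Logstash code): the filter stage marks qualifying events and the output stage routes on the tag.

```python
def filter_stage(event: dict) -> dict:
    """Tag events that should go to Elasticsearch, mirroring the
    mutate/add_tag filter above."""
    if (event.get("type") == "linux-syslog"
            or event.get("messagetype") in ("ERROR", "WARNING")):
        event.setdefault("tags", []).append("send_to_es")
    return event

def output_stage(event: dict) -> str:
    """Route on the tag, mirroring the output conditional."""
    return "elasticsearch" if "send_to_es" in event.get("tags", []) else "discard"

print(output_stage(filter_stage({"type": "linux-syslog"})))                   # elasticsearch
print(output_stage(filter_stage({"type": "syslog", "messagetype": "INFO"})))  # discard
```

Unlike drop, the untagged events still exist and could be sent to a different output (a file, stdout) if you later want them.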
