Currently, I'm trying to create a grok pattern for this log:
2020-03-11 05:54:26,174 JMXINSTRUMENTS-Threading [{"timestamp":"1583906066","label":"Threading","ObjectName":"java.lang:type\u003dThreading","attributes":[{"name":"CurrentThreadUserTime","value":18600000000},{"name":"ThreadCount","value":152},{"name":"TotalStartedThreadCount","value":1138},{"name":"CurrentThreadCpuTime","value":20804323112},{"name":"PeakThreadCount","value":164},{"name":"DaemonThreadCount","value":136}]}]
At the moment I can match everything correctly up to JMXINSTRUMENTS-Threading by using this pattern:
%{TIMESTAMP_ISO8601:timestamp} (?<instrument>[^\ ]*) ?%{GREEDYDATA:log_message}
But I cannot seem to match all the values after this. Does anybody have an idea what pattern I should use?
It worked for me after defining a different source and target in the JSON filter. Thanks for the help!
filter {
  if "atlassian-jira-perf" in [tags] {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} (?<instrument>[^\ ]*) ?%{GREEDYDATA:log_message_raw}" }
      tag_on_failure => ["no_match"]
      add_tag => ["bananas"]
    }
    if "no_match" not in [tags] {
      json {
        source => "log_message_raw"
        target => "parsed"
      }
    }
    mutate {
      remove_field => ["message"]
    }
  }
}
I'm trying your pattern in https://grokdebug.herokuapp.com/ (the official debugger for Logstash) and it does match everything after "JMXINSTRUMENTS-Threading" into one big field called log_message, like this:
{
  "timestamp": [
    [
      "2020-03-11 05:54:26,174"
    ]
  ],
  "YEAR": [
    [
      "2020"
    ]
  ],
  "MONTHNUM": [
    [
      "03"
    ]
  ],
  "MONTHDAY": [
    [
      "11"
    ]
  ],
  "HOUR": [
    [
      "05",
      null
    ]
  ],
  "MINUTE": [
    [
      "54",
      null
    ]
  ],
  "SECOND": [
    [
      "26,174"
    ]
  ],
  "ISO8601_TIMEZONE": [
    [
      null
    ]
  ],
  "instrument": [
    [
      "JMXINSTRUMENTS-Threading"
    ]
  ],
  "log_message": [
    [
      "[{"timestamp":"1583906066","label":"Threading","ObjectName":"java.lang:type\\u003dThreading","attributes":[{"name":"CurrentThreadUserTime","value":18600000000},{"name":"ThreadCount","value":152},{"name":"TotalStartedThreadCount","value":1138},{"name":"CurrentThreadCpuTime","value":20804323112},{"name":"PeakThreadCount","value":164},{"name":"DaemonThreadCount","value":136}]}]"
    ]
  ]
}
If you wish to parse all the fields contained in log_message, you should use a json filter in your Logstash pipeline's filter section, right below your grok filter. For example:
grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} (?<instrument>[^\ ]*) ?%{GREEDYDATA:log_message}" }
  tag_on_failure => ["no_match"]
}
if "no_match" not in [tags] {
  json {
    source => "log_message"
  }
}
That way your JSON will be split into key/value pairs and parsed.
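One caveat, which is probably why the accepted configuration above defines a target: the payload here is a JSON array ([{...}]), and the inner object carries its own timestamp key, so parsing it straight into the root of the event can fail or overwrite the timestamp field already captured by grok. A minimal sketch of the safer variant (field names taken from the sample log; the target name is arbitrary):

json {
  source => "log_message"
  target => "parsed"
}

With a target, the array is kept intact under parsed and the values are then addressable as [parsed][0][label], [parsed][0][ObjectName], and so on.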
EDIT:
You could try to use a kv filter instead of json; here are the docs: https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html
grok {
  match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} (?<instrument>[^\ ]*) ?%{GREEDYDATA:log_message}" }
  tag_on_failure => ["no_match"]
}
if "no_match" not in [tags] {
  kv {
    source => "log_message"
    value_split => ":"
    include_brackets => true # remove brackets
    remove_char_key => "\""
    remove_char_value => "\""
    field_split => ","
  }
}
}
I have the logstash-syslog.conf file below, which has two different input types: one with type => "syslog" and another with type => "APIC". So I need two separate output indices created, like syslog-2018.08.25 and APIC-2018.08.05.
I want these indices to be created dynamically. I tried something like index => "%{[type]}-%{+YYYY.MM.dd}", but it did not work and killed Logstash.
Could you please suggest what I'm doing wrong in the config below and what needs to be fixed for both the config and the index naming?
Below is the Logstash configuration file (Logstash version 6.2):
$ vi logstash-syslog.conf
input {
  file {
    path => [ "/scratch/rsyslog/*/messages.log" ]
    type => "syslog"
  }
  file {
    path => [ "/scratch/rsyslog/Aug/messages.log" ]
    type => "APIC"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [type] == "APIC" {
    grok {
      match => { "message" => "%{CISCOTIMESTAMP:syslog_timestamp} %{CISCOTIMESTAMP} %{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
  }
}
output {
  elasticsearch {
    hosts => "noida-elk:9200"
    index => "syslog-%{+YYYY.MM.dd}"
    #index => "%{[type]}-%{+YYYY.MM.dd}"
    document_type => "messages"
  }
}
Fixed: the following configuration works for me.
$ cat logstash-syslog.conf
input {
  file {
    path => [ "/scratch/rsyslog/*/messages.log" ]
    type => "syslog"
  }
  file {
    path => [ "/scratch/rsyslog/Aug/messages.log" ]
    type => "apic_logs"
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      remove_field => ["@version", "host", "message", "_type", "_index", "_score", "path"]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
  if [type] == "apic_logs" {
    grok {
      match => { "message" => "%{CISCOTIMESTAMP:syslog_timestamp} %{CISCOTIMESTAMP} %{SYSLOGHOST:syslog_hostname} (?<prog>[\w._/%-]+) %{SYSLOG5424SD:f1}%{SYSLOG5424SD:f2}%{SYSLOG5424SD:f3}%{SYSLOG5424SD:f4}%{SYSLOG5424SD:f5} %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      remove_field => ["@version", "host", "message", "_type", "_index", "_score", "path"]
    }
  }
}
output {
  if [type] == "syslog" {
    elasticsearch {
      hosts => "noida-elk:9200"
      manage_template => false
      index => "syslog-%{+YYYY.MM.dd}"
      document_type => "messages"
    }
  }
}
output {
  if [type] == "apic_logs" {
    elasticsearch {
      hosts => "noida-elk:9200"
      manage_template => false
      index => "apic_logs-%{+YYYY.MM.dd}"
      document_type => "messages"
    }
  }
}
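As a side note on the original question: the dynamic index => "%{[type]}-%{+YYYY.MM.dd}" approach can also work with a single output block, but only if the type values make valid Elasticsearch index names (index names must be lowercase, so a type of "APIC" would be rejected, while "apic_logs" is fine). A sketch under that assumption, reusing the same hosts as above:

output {
  elasticsearch {
    hosts => "noida-elk:9200"
    manage_template => false
    index => "%{[type]}-%{+YYYY.MM.dd}"
    document_type => "messages"
  }
}

This yields syslog-2018.08.25 and apic_logs-2018.08.25 style indices without duplicating the output block.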
OK - I've been racking my brain over this config file for days with little success (I'm very new to Logstash/the ELK stack). The problem I'm having is that when I place two Logstash configs in the same directory, I get a grok error on the second config. Meaning, 001 will work and 002 will produce the error. If I run Logstash with only one config (it doesn't matter which one), everything runs great. When combined, one works and the other fails. I have combined the two conf files into a single conf file, but the same issue persists. Below is the combined version of the config and a sample of the syslogs. Any assistance would be greatly appreciated!
input {
file {
path => ["/var/log/pantraffic.log"]
#start_position => "beginning"
type => "pantraffic"
}
file {
path => ["/var/log/panthreat.log"]
#start_position => "beginning"
type => "panthreat"
}
}
filter {
if [type] == "pantraffic" {
grok {
#patterns_dir => "/opt/logstash/patterns"
match => [ "message_traffic", "%{TIMESTAMP_ISO8601:#timestamp} % {HOSTNAME:syslog_host} %{GREEDYDATA:traffic_message}"]
}
syslog_pri { }
}
csv {
source => "traffic_message"
columns => [ "PaloAltoDomain","ReceiveTime","SerialNum","Type","Threat- ContentType","ConfigVersion","GenerateTime","SourceAddress","DestinationAddress","NATSourceIP","NATDestinationIP","Rule","SourceUser","DestinationUser","Application","VirtualSystem","SourceZone","DestinationZone","InboundInterface","OutboundInterface","LogAction","TimeLogged","SessionID","RepeatCount","SourcePort","DestinationPort","NATSourcePort","NATDestinationPort","Flags","IPProtocol","Action","Bytes","BytesSent","BytesReceived","Packets","StartTime","ElapsedTimeInSec","Category","Padding","seqno","actionflags","SourceCountry","DestinationCountry","cpadding","pkts_sent","pkts_received","sessionEndReason" ]
}
date {
#timezone => "America/Chicago"
match => [ "GenerateTime", "YYYY/MM/dd HH:mm:ss" ]
}
mutate {
convert => [ "Bytes", "integer" ]
convert => [ "BytesReceived", "integer" ]
convert => [ "BytesSent", "integer" ]
convert => [ "ElapsedTimeInSec", "integer" ]
convert => [ "geoip.area_code", "integer" ]
convert => [ "geoip.dma_code", "integer" ]
convert => [ "geoip.latitude", "float" ]
convert => [ "geoip.longitude", "float" ]
convert => [ "NATDestinationPort", "integer" ]
convert => [ "NATSourcePort", "integer" ]
convert => [ "Packets", "integer" ]
convert => [ "pkts_received", "integer" ]
convert => [ "pkts_sent", "integer" ]
convert => [ "seqno", "integer" ]
gsub => [ "Rule", " ", "_",
"Application", "( |-)", "_" ]
remove_field => [ "message_traffic", "traffic_message" ]
}
if [SourceAddress] and [SourceAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6- 9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "SourceAddress"
target => "SourceGeo"
}
if ([SourceGeo.location] and [SourceGeo.location] =~ "0,0") {
mutate {
replace => [ "SourceGeo.location", "" ]
}
}
}
if [DestinationAddress] and [DestinationAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "DestinationAddress"
target => "DestinationGeo"
}
if ([DestinationGeo.location] and [DestinationGeo.location] =~ "0,0") {
mutate {
replace => [ "DestinationAddress.location", "" ]
}
}
}
if [SourceAddress] and [DestinationAddress] {
fingerprint {
concatenate_sources => true
method => "SHA1"
key => "logstash"
source => [ "SourceAddress", "SourcePort", "DestinationAddress", "DestinationPort", "IPProtocol" ]
}
}
###########################################################################
if [type] == "panthreat" {
grok {
match => [ "message", "%{TIMESTAMP_ISO8601:#timestamp} % {HOSTNAME:syslog_host} %{GREEDYDATA:threat_message}"]
}
syslog_pri { }
}
csv {
source => "threat_message"
columns => [ "Domain","ReceiveTime","Serial","Type","ThreatContentType","ConfigVersion","GenerateTime","SourceAddress","DestinationAddress","NATSourceIP","NATDestinationIP","Rule","SourceUser","DestinationUser","Application","VirtualSystem","SourceZone","DestinationZone","InboundInterface","OutboundInterface","LogAction","TimeLogged","SessionID","RepeatCount","SourcePort","DestinationPort","NATSourcePort","NATDestinationPort","Flags","IPProtocol","Action","URL","ThreatContentName","Category","Severity","Direction","seqno","actionflags","SourceCountry","DestinationCountry","cpadding","contenttype","pcap_id","filedigest","cloud","url_idx","user_agent","filetype","xff","referer","sender","subject","recipient","reportid" ]
}
date {
#timezone => "America/Chicago"
match => [ "GenerateTime", "YYYY/MM/dd HH:mm:ss" ]
}
mutate {
#convert => [ "Bytes", "integer" ]
#convert => [ "BytesReceived", "integer" ]
#convert => [ "BytesSent", "integer" ]
#convert => [ "ElapsedTimeInSec", "integer" ]
convert => [ "geoip.area_code", "integer" ]
convert => [ "geoip.dma_code", "integer" ]
convert => [ "geoip.latitude", "float" ]
convert => [ "geoip.longitude", "float" ]
convert => [ "NATDestinationPort", "integer" ]
convert => [ "NATSourcePort", "integer" ]
#convert => [ "Packets", "integer" ]
#convert => [ "pkts_received", "integer" ]
#convert => [ "pkts_sent", "integer" ]
#convert => [ "seqno", "integer" ]
gsub => [ "Rule", " ", "_",
"Application", "( |-)", "_" ]
remove_field => [ "message", "threat_message" ]
}
if [SourceAddress] and [SourceAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6- 9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "SourceAddress"
target => "SourceGeo"
}
if ([SourceGeo.location] and [SourceGeo.location] =~ "0,0") {
mutate {
replace => [ "SourceGeo.location", "" ]
}
}
}
if [DestinationAddress] and [DestinationAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "DestinationAddress"
target => "DestinationGeo"
}
if ([DestinationGeo.location] and [DestinationGeo.location] =~ "0,0") {
mutate {
replace => [ "DestinationAddress.location", "" ]
}
}
}
if [SourceAddress] and [DestinationAddress] {
fingerprint {
concatenate_sources => true
method => "SHA1"
key => "logstash"
source => [ "SourceAddress", "SourcePort", "DestinationAddress", "DestinationPort", "IPProtocol" ]
}
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
Log Samples:
panthreat log:
2015-11-13T04:53:28-06:00 PA-200 1,2015/11/13 04:53:28,0011122223333,THREAT,vulnerability,1,2015/11/13 04:53:28,73.222.111.1,4.4.4.4,0.0.0.0,0.0.0.0,rule1,,,dns,vsys1,trust,untrust,ethernet1/2,ethernet1/1,Default_Forwarder,2015/11/13 04:53:28,3602,1,34830,53,0,0,0x0,udp,drop-all-packets,"",Test(41000),0,any,high,client-to-server,37,0x0,US,US,0,,0,,,0,,,,,,,
pantraffic log:
2015-11-13T07:34:22-06:00 PA-200 1,2015/11/13 07:34:21,001112223334,TRAFFIC,end,1,2015/11/13 07:34:21,73.22.111.1,4.3.2.1,0.0.0.0,0.0.0.0,rule1,,,facebook-base,vsys1,trust,untrust,ethernet1/2,ethernet1/1,Default_Forwarder,2015/11/13 07:34:21,6385,1,63121,443,0,0,0x53,tcp,allow,6063,2285,3778,29,2015/11/13 07:34:05,2,social-networking,0,15951,0x0,US,IE,0,17,12,tcp-fin
I think you messed up your closing brackets. Check this block (your first if), for instance:
if [type] == "pantraffic" {
grok {
#patterns_dir => "/opt/logstash/patterns"
match => [ "message_traffic", "%{TIMESTAMP_ISO8601:#timestamp} % {HOSTNAME:syslog_host} %{GREEDYDATA:traffic_message}"]
}
syslog_pri { }
}
The last closing bracket is likely wrong here. You don't want to close your if block here but just before you start your "panthreat" block further down. The "panthreat" if block has the same problem.
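In other words, everything that should apply only to "pantraffic" events (the csv, date, mutate, geoip and fingerprint blocks) belongs inside that first if, and its closing brace moves down to just before the "panthreat" if. A rough sketch of the intended structure (the individual filter bodies stay exactly as in the question):

filter {
  if [type] == "pantraffic" {
    # grok, syslog_pri, csv, date, mutate, geoip and fingerprint
    # blocks for pantraffic events go here
  }
  if [type] == "panthreat" {
    # grok, syslog_pri, csv, date, mutate, geoip and fingerprint
    # blocks for panthreat events go here
  }
}

As posted, the csv filter and everything after it run for every event regardless of type, which is likely what causes the trouble once both types flow through the same pipeline.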
I am using Logstash 1.4.2. I have logstash-forwarder.conf on the client log server like this:
{
  "network": {
    "servers": [ "xxx.xxx.xxx.xxx:5000" ],
    "timeout": 15,
    "ssl ca": "certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/messages" ],
      "fields": { "type": "syslog" }
    },
    {
      "paths": [ "/var/log/secure" ],
      "fields": { "type": "linux-syslog" }
    }
  ]
}
=========================================================
On the Logstash server:
1. filter.conf
filter {
  if [type] == "syslog" {
    date {
      locale => "en"
      match => ["syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
      timezone => "Asia/Kathmandu"
      target => "@timestamp"
      add_field => { "debug" => "timestampMatched" }
    }
    grok {
      match => { "message" => "\[%{WORD:messagetype}\]%{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
  }
  if [type] == "linux-syslog" {
    date {
      locale => "en"
      match => ["syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
      timezone => "Asia/Kathmandu"
      target => "@timestamp"
      add_field => { "debug" => "timestampMatched" }
    }
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    mutate { replace => [ "syslog_timestamp", "%{syslog_timestamp} +0545" ] }
  }
}
=======================================================
2. output.conf
output {
  if [messagetype] == "WARNING" {
    elasticsearch { host => "xxx.xxx.xxx.xxx" }
    stdout { codec => rubydebug }
  }
  if [messagetype] == "ERROR" {
    elasticsearch { host => "xxx.xxx.xxx.xxx" }
    stdout { codec => rubydebug }
  }
  if [type] == "linux-syslog" {
    elasticsearch { host => "xxx.xxx.xxx.xxx" }
    stdout { codec => rubydebug }
  }
}
=======================================================
I want to forward all the logs from /var/log/secure, but only ERROR and WARNING logs from /var/log/messages. I know this is not a good configuration; I want someone to show me a better way to do this.
I prefer to make decisions about events in the filter block. My input and output blocks are usually quite simple. From there, I see two options.
Use the drop filter
The drop filter causes an event to be dropped. It won't ever make it to your outputs:
filter {
  # other processing goes here
  if [type] == "syslog" and [messagetype] not in ["ERROR", "WARNING"] {
    drop {}
  }
}
The upside of this is that it's very simple.
The downside is that the event is just dropped; it won't be output at all, which is fine if that's what you want.
Use a tag
Many filters allow you to add tags, which are useful for communicating decisions between plugins. You could attach a tag telling your output block to send the event to ES:
filter {
  # other processing goes here
  if [type] == "linux-syslog" or [messagetype] in ["ERROR", "WARNING"] {
    mutate {
      add_tag => "send_to_es"
    }
  }
}
output {
  if "send_to_es" in [tags] {
    elasticsearch {
      # config goes here
    }
  }
}
The upside of this is that it allows fine control.
The downside of this is that it's a bit more work, and your ES data ends up a little bit polluted (the tag will be visible and searchable in ES).
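If the tag pollution bothers you and your Logstash version is 1.5 or newer (so not the 1.4.2 in the question), one variation worth considering is to record the decision under [@metadata] instead of [tags]; metadata fields can be tested in output conditionals but are never sent to Elasticsearch. A sketch of that approach:

filter {
  # other processing goes here
  if [type] == "linux-syslog" or [messagetype] in ["ERROR", "WARNING"] {
    mutate {
      add_field => { "[@metadata][send_to_es]" => "true" }
    }
  }
}
output {
  if [@metadata][send_to_es] == "true" {
    elasticsearch {
      # config goes here
    }
  }
}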