I have the logstash-syslog.conf file below, which has two different input types: one with type => "syslog" and another with type => "APIC". I need two separate output indices created, e.g. syslog-2018.08.25 and APIC-2018.08.05.
I want these indices to be created dynamically. I tried index => "%{[type]}-%{+YYYY.MM.dd}", but it did not work and killed Logstash.
Could you please suggest what I'm doing wrong in the config below? Both the config and the index naming need to be fixed.
Below is the Logstash configuration file (Logstash version is 6.2):
$ vi logstash-syslog.conf
input {
file {
path => [ "/scratch/rsyslog/*/messages.log" ]
type => "syslog"
}
file {
path => [ "/scratch/rsyslog/Aug/messages.log" ]
type => "APIC"
}
}
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp } %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
if [type] == "APIC" {
grok {
match => { "message" => "%{CISCOTIMESTAMP:syslog_timestamp} %{CISCOTIMESTAMP} %{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
}
}
output {
elasticsearch {
hosts => "noida-elk:9200"
index => "syslog-%{+YYYY.MM.dd}"
#index => "%{[type]}-%{+YYYY.MM.dd}"
document_type => "messages"
}
}
I fixed this myself; the configuration below is working for me.
$ cat logstash-syslog.conf
input {
file {
path => [ "/scratch/rsyslog/*/messages.log" ]
type => "syslog"
}
file {
path => [ "/scratch/rsyslog/Aug/messages.log" ]
type => "apic_logs"
}
}
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp } %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
remove_field => ["#version", "host", "message", "_type", "_index", "_score", "path"]
}
syslog_pri { }
date {
match => [ "syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
if [type] == "apic_logs" {
grok {
match => { "message" => "%{CISCOTIMESTAMP:syslog_timestamp} %{CISCOTIMESTAMP} %{SYSLOGHOST:syslog_hostname} (?<prog>[\w._/%-]+) %{SYSLOG5424SD:f1}%{SYSLOG5424SD:f2}%{SYSLOG5424SD:f3}%{SYSLOG5424SD:f4}%{SYSLOG5424SD:f5} %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
remove_field => ["#version", "host", "message", "_type", "_index", "_score", "path"]
}
}
}
output {
if [type] == "syslog" {
elasticsearch {
hosts => "noida-elk:9200"
manage_template => false
index => "syslog-%{+YYYY.MM.dd}"
document_type => "messages"
}
}
}
output {
if [type] == "apic_logs" {
elasticsearch {
hosts => "noida-elk:9200"
manage_template => false
index => "apic_logs-%{+YYYY.MM.dd}"
document_type => "messages"
}
}
}
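As an aside: Elasticsearch index names must be lowercase, so the original type value "APIC" would have produced an invalid index name (APIC-2018.08.05). That is likely why the earlier dynamic-index attempt blew up, and why the lowercase "apic_logs" works. With lowercase type values, the two outputs can usually be collapsed into one with a sprintf reference. A minimal sketch, assuming every event carries one of the two type values:
output {
  elasticsearch {
    hosts => "noida-elk:9200"
    manage_template => false
    # %{type} expands per event: "syslog" events land in
    # syslog-YYYY.MM.dd and "apic_logs" events in apic_logs-YYYY.MM.dd.
    index => "%{type}-%{+YYYY.MM.dd}"
    document_type => "messages"
  }
}
If an event ever arrives without a type field, the literal text %{type} ends up in the index name, so keep the per-type conditionals if that is a possibility.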
The generated results of running the config below show the TestResult section as an array. I am trying to get rid of that array before sending the data to Elasticsearch.
I have the following XML file:
<tem:SubmitTestResult xmlns:tem="http://www.example.com" xmlns:acs="http://www.example.com" xmlns:acs1="http://www.example.com">
<tem:LabId>123</tem:LabId>
<tem:userId>123</tem:userId>
<tem:TestResult>
<acs:CreatedBy>123</acs:CreatedBy>
<acs:CreatedDate>123</acs:CreatedDate>
<acs:LastUpdatedBy>123</acs:LastUpdatedBy>
<acs:LastUpdatedDate>123</acs:LastUpdatedDate>
<acs1:Capacity95FHigh>123</acs1:Capacity95FHigh>
<acs1:Capacity95FHigh_AHRI>123</acs1:Capacity95FHigh_AHRI>
<acs1:CondensateDisposal_AHRI>123</acs1:CondensateDisposal_AHRI>
<acs1:DegradationCoeffCool>123</acs1:DegradationCoeffCool>
</tem:TestResult>
</tem:SubmitTestResult>
And I am using this config:
input {
file {
path => "/var/log/logstash/test3.xml"
}
}
filter {
multiline {
pattern => "<tem:SubmitTestResult>"
negate => "true"
what => "previous"
}
if "multiline" in [tags] {
mutate {
gsub => ["message", "\n", ""]
}
mutate {
replace => ["message", '<?xml version="1.0" encoding="UTF-8" standalone="yes" ?>%{message}']
}
xml {
source => "message"
target => "SubmitTestResult"
}
mutate {
remove_field => ["message", "#version", "host", "#timestamp", "path", "tags", "type"]
remove_field => ["[SubmitTestResult][xmlns:tem]","[SubmitTestResult][xmlns:acs]","[SubmitTestResult][xmlns:acs1]"]
}
mutate {
replace => [ "[SubmitTestResult][LabId]", "%{[SubmitTestResult][LabId]}" ]
replace => [ "[SubmitTestResult][userId]", "%{[SubmitTestResult][userId]}" ]
}
mutate {
replace => [ "[SubmitTestResult][TestResult][0][CreatedBy]", "%{[SubmitTestResult][TestResult][0][CreatedBy]}" ]
replace => [ "[SubmitTestResult][TestResult][0][CreatedDate]", "%{[SubmitTestResult][TestResult][0][CreatedDate]}" ]
replace => [ "[SubmitTestResult][TestResult][0][LastUpdatedBy]", "%{[SubmitTestResult][TestResult][0][LastUpdatedBy]}" ]
replace => [ "[SubmitTestResult][TestResult][0][LastUpdatedDate]", "%{[SubmitTestResult][TestResult][0][LastUpdatedDate]}" ]
replace => [ "[SubmitTestResult][TestResult][0][Capacity95FHigh]", "%{[SubmitTestResult][TestResult][0][Capacity95FHigh]}" ]
replace => [ "[SubmitTestResult][TestResult][0][Capacity95FHigh_AHRI]", "%{[SubmitTestResult][TestResult][0][Capacity95FHigh_AHRI]}" ]
replace => [ "[SubmitTestResult][TestResult][0][CondensateDisposal_AHRI]", "%{[SubmitTestResult][TestResult][0][CondensateDisposal_AHRI]}" ]
replace => [ "[SubmitTestResult][TestResult][0][DegradationCoeffCool]", "%{[SubmitTestResult][TestResult][0][DegradationCoeffCool]}" ]
}
}
}
output {
stdout {
codec => "rubydebug"
}
}
The result is:
"SubmitTestResult" => {
"LabId" => "123",
"userId" => "123",
"TestResult" => [
[0] {
"CreatedBy" => "123",
"CreatedDate" => "123",
"LastUpdatedBy" => "123",
"LastUpdatedDate" => "123",
"Capacity95FHigh" => "123",
"Capacity95FHigh_AHRI" => "123",
"CondensateDisposal_AHRI" => "123",
"DegradationCoeffCool" => "123"
}
]
}
As you can see, TestResult has the "[0]" array wrapper in there. Is there some config change I can make so that it doesn't come out as an array? I want to send this to Elasticsearch with the data structured correctly.
I figured this out. After the last mutate block, I added one more mutate block. All I had to do was rename the field and that did the trick.
mutate {
rename => {"[SubmitTestResult][TestResult][0]" => "[SubmitTestResult][TestResult]"}
}
The result now looks proper:
"SubmitTestResult" => {
"LabId" => "123",
"userId" => "123",
"TestResult" => {
"CreatedBy" => "123",
"CreatedDate" => "123",
"LastUpdatedBy" => "123",
"LastUpdatedDate" => "123",
"Capacity95FHigh" => "123",
"Capacity95FHigh_AHRI" => "123",
"CondensateDisposal_AHRI" => "123",
"DegradationCoeffCool" => "123"
}
}
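For reference: depending on your version of the xml filter plugin, the same result can be had at parse time, so the rename is never needed. A sketch, assuming a plugin version that supports the force_array option:
xml {
  source => "message"
  target => "SubmitTestResult"
  # force_array defaults to true; disabling it makes single child
  # elements parse as objects instead of one-element arrays.
  force_array => false
}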
Ok - I've been racking my brain over this config file for days with little success (I'm very new to Logstash and the ELK stack). The problem I'm having is that when I place two Logstash configs in the same directory, I get a grok error on the second config: 001 will work and 002 will produce the error. If I run Logstash with only one config (it doesn't matter which one), everything runs great; when they are combined, one works and the other fails. I have also combined the two conf files into a single file, but the same issue persists. Below is the combined version of the config and a sample of the syslogs. Any assistance would be greatly appreciated!
input {
file {
path => ["/var/log/pantraffic.log"]
#start_position => "beginning"
type => "pantraffic"
}
file {
path => ["/var/log/panthreat.log"]
#start_position => "beginning"
type => "panthreat"
}
}
filter {
if [type] == "pantraffic" {
grok {
#patterns_dir => "/opt/logstash/patterns"
match => [ "message_traffic", "%{TIMESTAMP_ISO8601:#timestamp} % {HOSTNAME:syslog_host} %{GREEDYDATA:traffic_message}"]
}
syslog_pri { }
}
csv {
source => "traffic_message"
columns => [ "PaloAltoDomain","ReceiveTime","SerialNum","Type","Threat- ContentType","ConfigVersion","GenerateTime","SourceAddress","DestinationAddress","NATSourceIP","NATDestinationIP","Rule","SourceUser","DestinationUser","Application","VirtualSystem","SourceZone","DestinationZone","InboundInterface","OutboundInterface","LogAction","TimeLogged","SessionID","RepeatCount","SourcePort","DestinationPort","NATSourcePort","NATDestinationPort","Flags","IPProtocol","Action","Bytes","BytesSent","BytesReceived","Packets","StartTime","ElapsedTimeInSec","Category","Padding","seqno","actionflags","SourceCountry","DestinationCountry","cpadding","pkts_sent","pkts_received","sessionEndReason" ]
}
date {
#timezone => "America/Chicago"
match => [ "GenerateTime", "YYYY/MM/dd HH:mm:ss" ]
}
mutate {
convert => [ "Bytes", "integer" ]
convert => [ "BytesReceived", "integer" ]
convert => [ "BytesSent", "integer" ]
convert => [ "ElapsedTimeInSec", "integer" ]
convert => [ "geoip.area_code", "integer" ]
convert => [ "geoip.dma_code", "integer" ]
convert => [ "geoip.latitude", "float" ]
convert => [ "geoip.longitude", "float" ]
convert => [ "NATDestinationPort", "integer" ]
convert => [ "NATSourcePort", "integer" ]
convert => [ "Packets", "integer" ]
convert => [ "pkts_received", "integer" ]
convert => [ "pkts_sent", "integer" ]
convert => [ "seqno", "integer" ]
gsub => [ "Rule", " ", "_",
"Application", "( |-)", "_" ]
remove_field => [ "message_traffic", "traffic_message" ]
}
if [SourceAddress] and [SourceAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "SourceAddress"
target => "SourceGeo"
}
if ([SourceGeo.location] and [SourceGeo.location] =~ "0,0") {
mutate {
replace => [ "SourceGeo.location", "" ]
}
}
}
if [DestinationAddress] and [DestinationAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "DestinationAddress"
target => "DestinationGeo"
}
if ([DestinationGeo.location] and [DestinationGeo.location] =~ "0,0") {
mutate {
replace => [ "DestinationAddress.location", "" ]
}
}
}
if [SourceAddress] and [DestinationAddress] {
fingerprint {
concatenate_sources => true
method => "SHA1"
key => "logstash"
source => [ "SourceAddress", "SourcePort", "DestinationAddress", "DestinationPort", "IPProtocol" ]
}
}
###########################################################################
if [type] == "panthreat" {
grok {
match => [ "message", "%{TIMESTAMP_ISO8601:#timestamp} % {HOSTNAME:syslog_host} %{GREEDYDATA:threat_message}"]
}
syslog_pri { }
}
csv {
source => "threat_message"
columns => [ "Domain","ReceiveTime","Serial","Type","ThreatContentType","ConfigVersion","GenerateTime","SourceAddress","DestinationAddress","NATSourceIP","NATDestinationIP","Rule","SourceUser","DestinationUser","Application","VirtualSystem","SourceZone","DestinationZone","InboundInterface","OutboundInterface","LogAction","TimeLogged","SessionID","RepeatCount","SourcePort","DestinationPort","NATSourcePort","NATDestinationPort","Flags","IPProtocol","Action","URL","ThreatContentName","Category","Severity","Direction","seqno","actionflags","SourceCountry","DestinationCountry","cpadding","contenttype","pcap_id","filedigest","cloud","url_idx","user_agent","filetype","xff","referer","sender","subject","recipient","reportid" ]
}
date {
#timezone => "America/Chicago"
match => [ "GenerateTime", "YYYY/MM/dd HH:mm:ss" ]
}
mutate {
#convert => [ "Bytes", "integer" ]
#convert => [ "BytesReceived", "integer" ]
#convert => [ "BytesSent", "integer" ]
#convert => [ "ElapsedTimeInSec", "integer" ]
convert => [ "geoip.area_code", "integer" ]
convert => [ "geoip.dma_code", "integer" ]
convert => [ "geoip.latitude", "float" ]
convert => [ "geoip.longitude", "float" ]
convert => [ "NATDestinationPort", "integer" ]
convert => [ "NATSourcePort", "integer" ]
#convert => [ "Packets", "integer" ]
#convert => [ "pkts_received", "integer" ]
#convert => [ "pkts_sent", "integer" ]
#convert => [ "seqno", "integer" ]
gsub => [ "Rule", " ", "_",
"Application", "( |-)", "_" ]
remove_field => [ "message", "threat_message" ]
}
if [SourceAddress] and [SourceAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "SourceAddress"
target => "SourceGeo"
}
if ([SourceGeo.location] and [SourceGeo.location] =~ "0,0") {
mutate {
replace => [ "SourceGeo.location", "" ]
}
}
}
if [DestinationAddress] and [DestinationAddress] !~ "(^127\.0\.0\.1)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^192\.168\.)|(^169\.254\.)" {
geoip {
database => "/opt/logstash/GeoLiteCity.dat"
source => "DestinationAddress"
target => "DestinationGeo"
}
if ([DestinationGeo.location] and [DestinationGeo.location] =~ "0,0") {
mutate {
replace => [ "DestinationAddress.location", "" ]
}
}
}
if [SourceAddress] and [DestinationAddress] {
fingerprint {
concatenate_sources => true
method => "SHA1"
key => "logstash"
source => [ "SourceAddress", "SourcePort", "DestinationAddress", "DestinationPort", "IPProtocol" ]
}
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
Log Samples:
panthreat log:
2015-11-13T04:53:28-06:00 PA-200 1,2015/11/13 04:53:28,0011122223333,THREAT,vulnerability,1,2015/11/13 04:53:28,73.222.111.1,4.4.4.4,0.0.0.0,0.0.0.0,rule1,,,dns,vsys1,trust,untrust,ethernet1/2,ethernet1/1,Default_Forwarder,2015/11/13 04:53:28,3602,1,34830,53,0,0,0x0,udp,drop-all-packets,"",Test(41000),0,any,high,client-to-server,37,0x0,US,US,0,,0,,,0,,,,,,,
pantraffic log:
2015-11-13T07:34:22-06:00 PA-200 1,2015/11/13 07:34:21,001112223334,TRAFFIC,end,1,2015/11/13 07:34:21,73.22.111.1,4.3.2.1,0.0.0.0,0.0.0.0,rule1,,,facebook-base,vsys1,trust,untrust,ethernet1/2,ethernet1/1,Default_Forwarder,2015/11/13 07:34:21,6385,1,63121,443,0,0,0x53,tcp,allow,6063,2285,3778,29,2015/11/13 07:34:05,2,social-networking,0,15951,0x0,US,IE,0,17,12,tcp-fin
I think you messed up your closing brackets. Check this block (your first if), for instance:
if [type] == "pantraffic" {
grok {
#patterns_dir => "/opt/logstash/patterns"
match => [ "message_traffic", "%{TIMESTAMP_ISO8601:#timestamp} % {HOSTNAME:syslog_host} %{GREEDYDATA:traffic_message}"]
}
syslog_pri { }
}
The last closing bracket is likely wrong here. You don't want to close your if block here, but just before you start your "panthreat" block further down. The "panthreat" if block has the same problem. A condensed sketch of the intended structure follows.
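This is a sketch of the shape only, not a drop-in config: the column lists, date/mutate bodies, and geoip/fingerprint conditionals are elided to comments, and I capture the leading timestamp into an ordinary field (log_timestamp, a name of my choosing) rather than directly into @timestamp. I've also matched on the message field, since the file input only populates message; as far as I can tell your "message_traffic" source field is never created upstream, so that grok likely never matches (a separate issue from the brackets):
filter {
  if [type] == "pantraffic" {
    grok {
      match => [ "message", "%{TIMESTAMP_ISO8601:log_timestamp} %{HOSTNAME:syslog_host} %{GREEDYDATA:traffic_message}" ]
    }
    syslog_pri { }
    csv { source => "traffic_message" }
    # date, mutate, geoip and fingerprint blocks for pantraffic
    # all stay inside this conditional ...
  } # <- close the "pantraffic" block here, after its last filter
  if [type] == "panthreat" {
    grok {
      match => [ "message", "%{TIMESTAMP_ISO8601:log_timestamp} %{HOSTNAME:syslog_host} %{GREEDYDATA:threat_message}" ]
    }
    syslog_pri { }
    csv { source => "threat_message" }
    # ... and likewise for the panthreat filters.
  }
}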
I am using Logstash 1.4.2, and I have logstash-forwarder.conf on the client log server like this:
{
"network": {
"servers": [ "xxx.xxx.xxx.xxx:5000" ],
"timeout": 15,
"ssl ca": "certs/logstash-forwarder.crt"
},
"files": [
{
"paths": [ "/var/log/messages" ],
"fields": { "type": "syslog" }
},
{
"paths": [ "/var/log/secure" ],
"fields": { "type": "linux-syslog" }
}
]
}
=========================================================
In logstash server
1. filter.conf
filter {
if [type] == "syslog" {
date {
locale => "en"
match => ["syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
timezone => "Asia/Kathmandu"
target => "#timestamp"
add_field => { "debug" => "timestampMatched"}
}
grok {
match => { "message" => "\[%{WORD:messagetype}\]%{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
}
if [type] == "linux-syslog" {
date {
locale => "en"
match => ["syslog_timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss"]
timezone => "Asia/Kathmandu"
target => "#timestamp"
add_field => { "debug" => "timestampMatched"}
}
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
add_field => [ "received_at", "%{#timestamp}" ]
add_field => [ "received_from", "%{host}" ]
}
syslog_pri { }
mutate { replace => [ "syslog_timestamp", "%{syslog_timestamp} +0545" ] }
}
}
=======================================================
2. output.conf
output {
if [messagetype] == "WARNING" {
elasticsearch { host => "xxx.xxx.xxx.xxx" }
stdout { codec => rubydebug }
}
if [messagetype] == "ERROR" {
elasticsearch { host => "xxx.xxx.xxx.xxx" }
stdout { codec => rubydebug }
}
if [type] == "linux-syslog" {
elasticsearch { host => "xxx.xxx.xxx.xxx" }
stdout { codec => rubydebug }
}
}
=======================================================
I want to forward all logs from /var/log/secure, and only ERROR and WARNING logs from /var/log/messages. I know this is not a good configuration; I want someone to show me a better way to do this.
I prefer to make decisions about events in the filter block. My input and output blocks are usually quite simple. From there, I see two options.
Use the drop filter
The drop filter causes an event to be dropped. It won't ever make it to your outputs:
filter {
#other processing goes here
if [type] == "syslog" and [messagetype] not in ["ERROR", "WARNING"] {
drop {}
}
}
The upside of this is that it's very simple.
The downside is that the event is just dropped; it won't be output at all. That's fine, if it's what you want.
Use a tag
Many filters allow you to add tags, which are useful for communicating decisions between plugins. You could attach a tag telling your output block to send the event to ES:
filter {
#other processing goes here
if [type] == "linux-syslog" or [messagetype] in ["ERROR", "WARNING"] {
mutate {
add_tag => "send_to_es"
}
}
}
output {
if "send_to_es" in [tags] {
elasticsearch {
#config goes here
}
}
}
The upside of this is that it allows fine control.
The downside of this is that it's a bit more work, and your ES data ends up a little bit polluted (the tag will be visible and searchable in ES).
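A third variant, if the tag pollution bothers you, is to put the combined condition directly in the output block, with no tag at all. A minimal sketch:
output {
  # One conditional, no tag needed; events matching neither branch
  # are simply not sent to Elasticsearch.
  if [type] == "linux-syslog" or [messagetype] in ["ERROR", "WARNING"] {
    elasticsearch {
      #config goes here
    }
  }
}
This does move the decision into the output block, which cuts against my preference above, but it keeps both the filter chain and the indexed data clean.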