Kibana keeps complaining of dateparse and grokparse errors - Logstash

I have the following in my logstash pipeline (which receives multiline logs from Filebeat):
filter {
  if [type] == "oracle" {
    grok {
      match => { "message" => "(?<day>[A-Za-z]{3})(\s*)(?<month>[A-Za-z]{3})(\s*)(?<monthday>[0-9]{1,2})(\s*)(?<hour>[0-9]{1,2}):(?<min>[0-9]{1,2}):(?<sec>[0-9]{2})(\s*)(?<year>[0-9]{4})(\s*)%{GREEDYDATA:audit_message}" }
      add_tag => [ "oracle_audit" ]
    }
    grok {
      match => { "audit_message" => "ACTION :\[[0-9]*\] '(?<ora_audit_action>.*)'.*DATABASE USER:\[[0-9]*\] '(?<ora_audit_dbuser>.*)'.*PRIVILEGE :\[[0-9]*\] '(?<ora_audit_priv>.*)'.*CLIENT USER:\[[0-9]*\] '(?<ora_audit_osuser>.*)'.*CLIENT TERMINAL:\[[0-9]*\] '(?<ora_audit_term>.*)'.*STATUS:\[[0-9]*\] '(?<ora_audit_status>.*)'.*DBID:\[[0-9]*\] '(?<ora_audit_dbid>.*)'.*SESSIONID:\[[0-9]*\] '(?<ora_audit_sessionid>.*)'.*USERHOST:\[[0-9]*\] '(?<ora_audit_dbhost>.*)'.*CLIENT ADDRESS:\[[0-9]*\] '(?<ora_audit_clientaddr>.*)'.*ACTION NUMBER:\[[0-9]*\] '(?<ora_audit_actionnum>.*)'" }
    }
    grok {
      match => { "source" => [ ".*/[a-zA-Z0-9_#$]*_[a-z0-9]*_(?<ora_audit_derived_pid>[0-9]*)_[0-9]*\.aud" ] }
    }
    mutate {
      add_field => { "ts" => "%{year}-%{month}-%{monthday} %{hour}:%{min}:%{sec}" }
    }
    date {
      locale => "en"
      match => [ "ts", "YYYY-MMM-dd HH:mm:ss", "YYYY-MMM-d HH:mm:ss" ]
    }
    mutate {
      remove_field => [ "ts", "year", "month", "day" , "monthday", "hour", "min", "sec", "audit_message" ]
    }
  }
}
Sample log (coming from Filebeat, and I can confirm it has been chunked correctly):
Audit file /u01/app/oracle/admin/DEVINST/adump/DEVINST_ora_43619_20200913121607479069143795.aud
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
Build label: RDBMS_12.2.0.1.0_LINUX.X64_170125
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1
System name: Linux
Node name: testserver
Release: 3.10.0-862.14.4.el7.x86_64
Version: #1 SMP Fri Sep 21 09:07:21 UTC 2018
Machine: x86_64
Instance name: DEVINST
Redo thread mounted by this instance: 1
Oracle process number: 55
Unix process pid: 43619, image: oracle@testserver (TNS V1-V3)
Sun Sep 13 12:16:07 2020 +00:00
LENGTH : '275'
ACTION :[7] 'CONNECT'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[9] 'testuser'
CLIENT TERMINAL:[5] 'pts/0'
STATUS:[1] '0'
DBID:[10] '1762369616'
SESSIONID:[10] '4294967295'
USERHOST:[21] 'testserver'
CLIENT ADDRESS:[0] ''
ACTION NUMBER:[3] '100'
However, although the logs went through to Kibana, it is not showing me the "right" data, and it complains of grokparse and dateparse errors (even though the grok rule tested fine in the Kibana Grok Debugger!):
Message shown in Kibana:
Audit file /u01/app/oracle/admin/DEVINST/adump/DEVINST_ora_43619_20200913121607479069143795.aud
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
Build label: RDBMS_12.2.0.1.0_LINUX.X64_170125
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1
System name: Linux
Node name: testserver
Release: 3.10.0-862.14.4.el7.x86_64
Version: #1 SMP Fri Sep 21 09:07:21 UTC 2018
Machine: x86_64
Instance name: DEVINST
Redo thread mounted by this instance: 1
Oracle process number: 55
Unix process pid: 43619, image: oracle@testserver (TNS V1-V3)
Message expected:
+00:00
LENGTH : '275'
ACTION :[7] 'CONNECT'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[9] 'testuser'
CLIENT TERMINAL:[5] 'pts/0'
STATUS:[1] '0'
DBID:[10] '1762369616'
SESSIONID:[10] '4294967295'
USERHOST:[21] 'testserver'
CLIENT ADDRESS:[0] ''
ACTION NUMBER:[3] '100'
Because of these errors, the fields weren't parsed properly either.
What am I doing wrong? Why is it not parsing the message and the date correctly even though the debugger shows the right output?
EDIT:
As per baudsp's suggestion, I have overwritten my message field like so:
filter {
  if [type] == "oracle" {
    grok {
      match => { "message" => "(?<day>[A-Za-z]{3})(\s*)(?<month>[A-Za-z]{3})(\s*)(?<monthday>[0-9]{1,2})(\s*)(?<hour>[0-9]{1,2}):(?<min>[0-9]{1,2}):(?<sec>[0-9]{2})(\s*)(?<year>[0-9]{4})(\s*)(?<message>[\S\s]*)" }
      overwrite => [ "message" ]
    }
    .....
However Kibana is still showing me grokparse and dateparse errors :(
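For reference, a minimal sketch of another variant, assuming the failures come from grok's . (and therefore GREEDYDATA and the .* between fields in the second grok) not crossing line breaks in the multiline event; it keeps the audit_message field name from the original config, uses [\S\s]* so the whole remaining body is captured, and truncates the second pattern to two captures for brevity (the real one would carry all the fields listed above):
filter {
  if [type] == "oracle" {
    grok {
      # capture everything after the year, including line breaks, into audit_message
      match => { "message" => "(?<day>[A-Za-z]{3})(\s*)(?<month>[A-Za-z]{3})(\s*)(?<monthday>[0-9]{1,2})(\s*)(?<hour>[0-9]{1,2}):(?<min>[0-9]{1,2}):(?<sec>[0-9]{2})(\s*)(?<year>[0-9]{4})(\s*)(?<audit_message>[\S\s]*)" }
      add_tag => [ "oracle_audit" ]
    }
    grok {
      # (?m) lets . match newlines, since ACTION, DATABASE USER, etc. sit on separate lines
      match => { "audit_message" => "(?m)ACTION :\[[0-9]*\] '(?<ora_audit_action>[^']*)'.*DATABASE USER:\[[0-9]*\] '(?<ora_audit_dbuser>[^']*)'" }
    }
  }
}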
Thanks
J

Related

Why can I not receive CPU data when using SNMP and Logstash?

Hi there,
I monitor a remote Linux host with Logstash and SNMP. When I try to get interfaces or ifSpeed, everything is OK. But when I try to get sysDescr, CPU storage, and memory storage, I cannot get any data back!
I don't know why. The Logstash log seems normal, too.
The logstash.conf:
input {
  snmp {
    tables => [
      {
        "name" => "sysDescr"
        "columns" => ["1.3.6.1.2.1.1.1.0"]
      }
    ]
    hosts => [{
      host => "udp:192.168.131.125/161"
      community => "laundry"
      version => "2c"
      }
    ]
    interval => 5
    type => "snmp"
  }
  beats {
    port => 5044
    add_field => {"type" => "beat"}
  }
  tcp {
    port => 50000
  }
}
## Add your filters / logstash plugins configuration here
output {
  if [type] == "beat" {
    elasticsearch {
      hosts => ["${ELASTICSEARCH_HOST}:9200"]
      index => "beat-logs"
    }
  }
  if [type] == "snmp" {
    elasticsearch {
      hosts => ["${ELASTICSEARCH_HOST}:9200"]
      index => "snmp-logs"
    }
  }
}
The Logstash log is:
root@laundry:/opt/ground/management# docker logs -f -t -n=5 5ae67e146ab0
2023-02-03T02:35:04.639861138Z [2023-02-03T10:35:04,639][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
2023-02-03T02:35:04.873655686Z [2023-02-03T10:35:04,873][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
2023-02-03T02:35:04.885933029Z [2023-02-03T10:35:04,884][INFO ][logstash.inputs.tcp ][main][06f1d7ee5445cc0e11cda56012ef6767600f21acd6133e02e957f761d26bac84] Starting tcp input listener {:address=>"0.0.0.0:50000", :ssl_enable=>false}
2023-02-03T02:35:04.934224084Z [2023-02-03T10:35:04,933][INFO ][org.logstash.beats.Server][main][4b91981ecb09a5d2
The output of snmpwalk and snmpget:
root@laundry:/opt/ground/management# snmpwalk -v 2c -c laundry 192.168.131.125 1.3.6.1.2.1.1.1.0
iso.3.6.1.2.1.1.1.0 = STRING: "Linux laundry 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 12:06:43 UTC 2023 aarch64"
root@laundry:/opt/ground/management# snmpget -v 2c -c laundry 192.168.131.125 1.3.6.1.2.1.1.1.0
iso.3.6.1.2.1.1.1.0 = STRING: "Linux laundry 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 12:06:43 UTC 2023 aarch64"
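One hedged guess worth checking: 1.3.6.1.2.1.1.1.0 (sysDescr) is a scalar OID, while the tables option of the snmp input is meant for table columns; the plugin also has a get option for scalar OIDs. A minimal sketch of that variant, reusing the host settings from the config above:
input {
  snmp {
    # scalar OIDs such as sysDescr go in get rather than tables
    get => ["1.3.6.1.2.1.1.1.0"]
    hosts => [{
      host => "udp:192.168.131.125/161"
      community => "laundry"
      version => "2c"
    }]
    interval => 5
    type => "snmp"
  }
}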

Logstash receives a strange "<133>" code at the start of received TrendMicro logs

My Logstash server is CentOS Linux release 8.1.1911.
"logstash.version"=>"7.7.0"
I have a capture of what I received on UDP port 5514 with:
nc -lvu 5514 -o log.txt
The content of log.txt
<133>Jun 05 09:23:35 TMCM:EVT_URL_CONTENT_FILTERING Security product="OfficeScan" Security product node="N/A" Security product IP="xx.xx.xx.xx;xxxx::xxxx:xxxx:xxxx:4490" Event time="4/25/2020 11:46:01 PM (UTC)" URL="http://xxxxxxx.xxxxxxx.intranet/SMS_MP/.sms_pol?DEP-Z0120115-ScopeId_B14503FF-F7AA-49EC-A38C-F50D813EEC6E/Application_57a673e1-3e65-4f1c-8ce2-0f4cc1b38acc.SHA256:5EF20484EEC38EA203D7A885EAA48BE2DFDC4F130ED8BF5BEA333378875B2516" Source IP="" Destination IP="yyy.yyy.yyy.yyy" Policy rule="" Blocking type="Web reputation" Domain="xxxx-xxxxx" Event time (local)="4/25/2020 7:46:01 PM" Client host name="N/A" Reputation Score="81"
myfilter.conf
input
{
udp
{
port => 5514
type => syslog
}
}
filter
{
grok
{
match =>
{ "message" => "(?<user_agent>[^>]*)(?<user_agent>[^:]*)%{POSINT}\s%{WORD:logfrom}\s%{WORD:logtag}\:\s%{NOTSPACE:eventname}\s([^=]*)\=%{QUOTEDSTRING:security_product} ([^=]*)\=%{QUOTEDSTRING:security_prod_node}\s([^=]*)\=\"%{IPV4:security_prod_ip}([^=]*)\=\"(?<agent_detected_time>%{MONTHNUM}\/%{MONTHDAY}\/%{YEAR} %{TIME}\s(?:AM|am|PM|pm)\s*\s\(%{TZ:tz}\)).*URL\=\"%{URI:url}\" ([^=]*)\=%{QUOTEDSTRING:src_ip}\s([^=]*)\=\"%{IPV4:dest_ipv4}\"\s([^=]*)\=%{QUOTEDSTRING:policy_rule} ([^=]*)\=%{QUOTEDSTRING:bloking_type} ([^=]*)\=%{QUOTEDSTRING:domain} ([^=]*)\=\"(?<server_alert_time>%{MONTHNUM}\/%{MONTHDAY}\/%{YEAR} %{TIME}\s(?:AM|am|PM|pm))\"\s([^=]*)\=%{QUOTEDSTRING:client_hostname} ([^=]*)\=\"%{BASE10NUM:reputation_score}/?"
}
}
}
output
{
stdout { codec => rubydebug }
}
An example of the Logstash output:
[2020-06-08T13:11:02,253][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
"type" => "syslog",
"#timestamp" => 2020-06-08T18:06:39.090Z,
"message" => "<133>Jun 08 14:06:38 TMCM:EVT_URL_CONTENT_FILTERING Security product=\"OfficeScan\" Security product node=\"N/A\" Security product IP=\"xx.xx.xx.xx;xxxx::xxxx:xxx:xxxx:4490\" Event time=\"4/26/2020 7:33:36 AM (UTC)\" URL=\"http://blabnlabla.bla-blabla.intranet/SMS_MP/.sms_pol?DEP-Z0120105-ScopeId_B14503FF-F7AA-49EC-A38C-F50D813EEC6E/Application_2be50193-9121-4239-a70f-ba06ad7bbfbd.SHA256:6FF12991BBA769F9C15F7E1FA3E3058E22B4D918F6C5659CF7B976059082510D\" Source IP=\"\" Destination IP=\"xxx.xx.xxx.xx\" Policy rule=\"\" Blocking type=\"Web reputation\" Domain=\"bla-blabla\" Event time (local)=\"4/26/2020 3:33:36 AM\" Client host name=\"N/A\" Reputation Score=\"81\"",
"#version" => "1",
"host" => "xx.xxx.xx.xx",
"tags" => [
[0] "_grokparsefailure"
]
}
I have also tried "\<133\>" but it still appears. I have no idea what this <133> is.
P.S. I have been learning by myself for the last 2 weeks.
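For context, a hedged note: the <133> prefix is the standard syslog priority (PRI) value, facility * 8 + severity (133 works out to local0.notice), and the plain udp input does not strip it. A minimal sketch of peeling it off before matching the rest of the event (syslog_message is just an illustrative field name, and the syslog_pri filter step is optional):
filter
{
  grok
  {
    # capture the leading <PRI> value, then the remainder of the line
    match => { "message" => "<%{POSINT:syslog_pri}>%{GREEDYDATA:syslog_message}" }
  }
  # optional: decode syslog_pri into facility/severity labels
  syslog_pri { }
}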

ELK | Log file grok-filtered data not pushing into Elasticsearch

I have a log file in the below format to extract into Elasticsearch, but the Logstash-filtered data is not being pushed into Elasticsearch.
With the same grok filter configuration I am able to get it from Kibana Dev Tools.
Sample logfile:
OCDE - 2019-05-22 13:24:34.000 ERROR org.ramyam.ocde.task.NBALookupTask.checkResponsesToBeProcessed - checkResponsesToBeProcessed started : Wed May 22 13:24:34 IST 2019
Filebeat configuration:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\data\logs\OCDE.log
  document_type: ocde
logstash configuration:
input {
  file {
    type => "ocde"
    path => "C:\data\logs\OCDE.log"
  }
  beats {
    port => 5044
    ssl => false
  }
}
filter {
  grok {
    match => [ "message" ,'%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}}']
  }
}
output {
  if [type]=="ocde"
  {
    elasticsearch
    {
      hosts => ["localhost:9200"]
      #manage_template => false
      index => "enliven_be_log_yyyymmdd"
      document_type=> ocde
    }
  }
}
I am expecting the below result in Elasticsearch from the above configuration:
{
  "level": "ERROR",
  "loggerTime": "2019-05-22 13:24:34.000",
  "moduleName": "OCDE",
  "methodName": "checkResponsesToBeProcessed",
  "className": "org.ramyam.ocde.task.NBALookupTask",
  "loggermsg": "checkResponsesToBeProcessed started : Wed May 22 13:24:34 IST 2019"
}
Can anyone please explain what I am missing, or share a sample configuration?
You can try the below grok pattern:
%{DATA:moduleName}%{SPACE}*-%{SPACE}*%{TIMESTAMP_ISO8601:loggerTime}%{SPACE}*%{LOGLEVEL:level}%{SPACE}*%{JAVACLASS:className}\.%{DATA:methodName}%{SPACE}*-%{SPACE}*%{GREEDYDATA:loggermsg}
Change your grok from:
%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}}
to:
%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}
To validate this, use http://grokdebug.herokuapp.com/ and paste the log message you provided into the input field.
Your pattern works fine, you just had one extra bracket at the end.
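For completeness, the question's filter block with that corrected pattern (only the extra closing brace after loggermsg removed), as a sketch:
filter {
  grok {
    match => [ "message" ,'%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}']
  }
}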

GROK pattern cannot parse logs

I want to launch the ELK stack for gathering syslog from all my network equipment - Cisco, F5, Huawei, CheckPoint, etc. While experimenting with Logstash, I am writing grok patterns.
Below is an example of messages from Cisco ASR:
<191>Oct 30 16:30:10 evlogd: [local-60sec10.950] [cli 30000 debug] [8/0/30501 cliparse.c:367] [context: local, contextID: 1] [software internal system syslog] CLI command [user root, mode [local]ASR5K]: show ims-authorization policy-control statistics\u0000
<190>Oct 30 16:30:10 evlogd: [local-60sec10.959] [cli 30005 info] [8/0/30501 _commands_cli.c:1792] [software internal system syslog] CLI session ended for Security Administrator root on device /dev/pts/7\u0000
<190>Oct 30 16:30:10 evlogd: [local-60sec10.981] [snmp 22002 info] [8/0/4550 trap_api.c:930] [software internal system syslog] Internal trap notification 53 (CLISessEnd) user root privilege level Security Administrator ttyname /dev/pts/7\u0000
<190>Oct 30 16:30:12 evlogd: [local-60sec12.639] [cli 30004 info] [8/0/30575 cli_sess.c:127] [software internal system syslog] CLI session started for Security Administrator root on device /dev/pts/7 from 192.168.1.1\u0000
<190>Oct 30 16:30:12 evlogd: [local-60sec12.640] [snmp 22002 info] [8/0/30575 trap_api.c:930] [software internal system syslog] Internal trap notification 52 (CLISessStart) user root privilege level Security Administrator ttyname /dev/pts/7\u0000
All of them match my pattern here and here.
<%{POSINT:syslog_pri}>%{DATA:month} %{DATA:monthday} %{TIME:time} %{WORD:device}: \[%{WORD:facility}\-%{HOSTNAME}\] \[%{WORD:service} %{POSINT} %{WORD:priority}\] \[%{DATA}\] ?(\[context: %{DATA:context}, %{DATA}\])?%{SPACE}?(\[%{DATA}\] )%{GREEDYDATA:message}\\u0000
But my simple Logstash configuration returns the tag _grokparsefailure (or _grokparsefailure_sysloginput if I use grok in the syslog input plugin) and doesn't parse my log.
Config using GROK-filter
input {
  udp {
    port => 5140
    type => syslog
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => ["message", "<%{POSINT:syslog_pri}>%{DATA:month} %{DATA:monthday} %{TIME:time} %{WORD:device}: \[%{WORD:facility}\-%{HOSTNAME}\] \[%{WORD:service} %{POSINT} %{WORD:priority}\] \[%{DATA}\] ?(\[context: %{DATA:context}, %{DATA}\])?%{SPACE}?(\[%{DATA}\] )%{GREEDYDATA:response}\\u0000"]
    }
  }
}
output { stdout { codec => rubydebug } }
Output:
{
"#version" => "1",
"host" => "172.17.0.1",
"#timestamp" => 2018-10-31T09:46:51.121Z,
"message" => "<190>Oct 31 15:46:51 evlogd: [local-60sec51.119] [snmp 22002 info] [8/0/4550 <sitmain:80> trap_api.c:930] [software internal system syslog] Internal trap notification 53 (CLISessEnd) user kiwi privilege level Security Administrator ttyname /dev/pts/7\u0000",
"type" => "syslog",
"tags" => [
[0] "_grokparsefailure"
]
}
Config syslog-input-plugin:
input {
  syslog {
    port => 5140
    grok_pattern => "<%{POSINT:syslog_pri}>%{DATA:month} %{DATA:monthday} %{TIME:time} %{WORD:device}: \[%{WORD:facility}\-%{HOSTNAME}\] \[%{WORD:service} %{POSINT} %{WORD:priority}\] \[%{DATA}\] ?(\[context: %{DATA:context}, %{DATA}\])?%{SPACE}?(\[%{DATA}\] )%{GREEDYDATA:response}\\u0000"
  }
}
output {
  stdout { codec => rubydebug }
}
Output:
{
"severity" => 0,
"#timestamp" => 2018-10-31T09:54:56.871Z,
"#version" => "1",
"host" => "172.17.0.1",
"message" => "<191>Oct 31 15:54:56 evlogd: [local-60sec56.870] [cli 30000 debug] [8/0/22400 <cli:8022400> cliparse.c:367] [context: local, contextID: 1] [software internal system syslog] CLI command [user kiwi, mode [local]ALA3_ASR5K]: show subscribers ggsn-only sum apn osmp\u0000",
"tags" => [
[0] "_grokparsefailure_sysloginput"
],
}
What am I doing wrong? And can someone help fix it?
PS Tested on logstash 2.4.1 and 5
Unlike the online debuggers, Logstash's grok didn't like my \\u0000 at the end of the pattern.
With a single backslash everything works.
The right grok filter is:
<%{POSINT:syslog_pri}>%{DATA:month} %{DATA:monthday} %{TIME:time} %{WORD:device}: \[%{WORD:facility}\-%{HOSTNAME}\] \[%{WORD:service} %{POSINT} %{WORD:priority}\] \[%{DATA}\] ?(\[context: %{DATA:context}, %{DATA}\])?%{SPACE}?(\[%{DATA}\] )%{GREEDYDATA:message}\u0000
I have faced a similar problem. The workaround is just to receive those logs from routers/firewalls/switches on a syslog-ng server and then forward them to Logstash.
Following is a sample configuration for syslog-ng:
source s_router1 {
  udp(ip(0.0.0.0) port(1514));
  tcp(ip(0.0.0.0) port(1514));
};
destination d_router1_logstash { tcp("localhost",port(5045)); };
log { source(s_router1); destination(d_router1_logstash); };
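On the Logstash side, a matching tcp input would be needed to receive what syslog-ng forwards; a minimal sketch using the port from the destination above (the type value is simply carried over from the question's config so the existing conditional still applies):
input {
  tcp {
    port => 5045
    type => "syslog"
  }
}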

Custom grok patterns do not work

In the Telegraf logparser, my config segment looks like this:
[[inputs.logparser]]
  files = ["/home/work/local/monitor/logs/xxx.log"]
  from_beginning = false
  watch_method = "inotify"
  [inputs.logparser.grok]
    patterns = ["%{LOG_LINE}"]
    measurement = "xxx_log"
    custom_pattern_files = ["/etc/telegraf/patterns_xxx.conf"]
    timezone = "UTC"
The log looks like this:
"a:b"
"c=d"
my custom patterns:
PATTERN1 %{WORD:key}:%{WORD:value}
PATTERN2 %{WORD:key}=%{WORD:value}
LOG_LINE %{PATTERN1}|%{PATTERN2}
For the log:
name=jack
LOG_LINE gives
{"key": [["a",null]],"value": [["b",null]]}
but I want to get
{"key": ["a"],"value": ["b"]}
What is the correct pattern? Thanks!
What is your filter configuration?
I tested your grok pattern with that sample and it worked; I used the following filter.
filter {
  grok {
    patterns_dir => ["/etc/logstash/patterns/"]
    break_on_match => false
    match => ["message","%{LOG_LINE}"]
    tag_on_failure => [ "_grokparsefailure"]
  }
}
And in the directory /etc/logstash/patterns/ I put a file with your patterns.
PATTERN1 %{WORD:key}:%{WORD:value}
PATTERN2 %{WORD:key}=%{WORD:value}
LOG_LINE %{PATTERN1}|%{PATTERN2}
This was the logstash output.
{
  "@timestamp":"2018-07-13T14:29:25.180Z",
  "value":"d",
  "host":"logstash-lab",
  "message":"\"c=d\"",
  "key":"c",
  "@version":"1"
}
{
  "@timestamp":"2018-07-13T14:29:25.179Z",
  "value":"b",
  "host":"logstash-lab",
  "message":"\"a:b\"",
  "key":"a",
  "@version":"1"
}
/etc/telegraf/telegraf.conf
[[inputs.logparser]]
  files = ["/var/log/auth.log"]
  from_beginning = false
  watch_method = "inotify"
  [inputs.logparser.grok]
    patterns = ["%{LOG_LINE}"]
    measurement = "auth_log"
    custom_pattern_files = ["/home/local/conf.d/09-syslog-filter.conf"]
    timezone = "UTC"
cat /home/local/conf.d/09-syslog-filter.conf
filter {
grok {
match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{NUMBER:[system][auth][ssh][port]} ssh2(: %{GREEDYDATA:[system][auth][ssh][signature]})?",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} user %{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]}",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: Did not receive identification string from %{IPORHOST:[system][auth][ssh][dropped_ip]}",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} sudo(?:\[%{POSINT:[system][auth][pid]}\])?: \s*%{DATA:[system][auth][user]} :( %{DATA:[system][auth][sudo][error]} ;)? TTY=%{DATA:[system][auth][sudo][tty]} ; PWD=%{DATA:[system][auth][sudo][pwd]} ; USER=%{DATA:[system][auth][sudo][user]} ; COMMAND=%{GREEDYDATA:[system][auth][sudo][command]}",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} groupadd(?:\[%{POSINT:[system][auth][pid]}\])?: new group: name=%{DATA:system.auth.groupadd.name}, GID=%{NUMBER:system.auth.groupadd.gid}",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} useradd(?:\[%{POSINT:[system][auth][pid]}\])?: new user: name=%{DATA:[system][auth][user][add][name]}, UID=%{NUMBER:[system][auth][user][add][uid]}, GID=%{NUMBER:[system][auth][user][add][gid]}, home=%{DATA:[system][auth][user][add][home]}, shell=%{DATA:[system][auth][user][add][shell]}$",
"%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{SYSLOGHOST:[system][auth][hostname]} %{DATA:[system][auth][program]}(?:\[%{POSINT:[system][auth][pid]}\])?: %{GREEDYMULTILINE:[system][auth][message]}"] }
pattern_definitions => {
"GREEDYMULTILINE"=> "(.|\n)*"
}
}
date {
match => [ "[system][auth][timestamp]", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
geoip {
source => "[system][auth][ssh][ip]"
target => "[system][auth][ssh][geoip]"
}
LOG_LINE %{SYSLOGTIMESTAMP}|%{SYSLOGHOST}|%{POSINT}|%{DATA}
}
systemctl status telegraf.service
● telegraf.service - The plugin-driven server agent for reporting metrics into InfluxDB
Loaded: loaded (/lib/systemd/system/telegraf.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Sun 2018-10-21 10:15:00 +06; 6min ago
Docs: https://github.com/influxdata/telegraf
Process: 30366 ExecStart=/usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d $TELEGRAF
Main PID: 30366 (code=exited, status=2)
Failed to start The plugin-driven server agent for reporting metrics into InfluxDB.
I need help.
[That grok is OK as a Logstash filter.]
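A hedged note on the original key/value question: the alternation %{PATTERN1}|%{PATTERN2} defines the key and value captures twice, which is why each capture comes back as a list containing a null entry. A single pattern with a character class for the separator avoids that; a minimal sketch of the custom pattern file, assuming the separator is always : or =:
# one capture for the key, one for the value, whichever separator appears
LOG_LINE %{WORD:key}[:=]%{WORD:value}
Also worth noting, Telegraf's custom_pattern_files is meant to point at files of plain pattern definitions like the line above, not at a Logstash filter block such as 09-syslog-filter.conf.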
