Why can't I receive CPU data when using SNMP and Logstash?

Hi there,
I monitor a remote Linux host with Logstash and SNMP. When I try to get interfaces or ifSpeed, everything is OK. But when I try to get sysDescr, CPU storage, and memory storage, I cannot get any data back!
I don't know why. The Logstash log seems normal, too.
The logstash.conf:
input {
  snmp {
    tables => [
      {
        "name" => "sysDescr"
        "columns" => ["1.3.6.1.2.1.1.1.0"]
      }
    ]
    hosts => [{
      host => "udp:192.168.131.125/161"
      community => "laundry"
      version => "2c"
    }]
    interval => 5
    type => "snmp"
  }
  beats {
    port => 5044
    add_field => {"type" => "beat"}
  }
  tcp {
    port => 50000
  }
}
## Add your filters / logstash plugins configuration here
output {
  if [type] == "beat" {
    elasticsearch {
      hosts => ["${ELASTICSEARCH_HOST}:9200"]
      index => "beat-logs"
    }
  }
  if [type] == "snmp" {
    elasticsearch {
      hosts => ["${ELASTICSEARCH_HOST}:9200"]
      index => "snmp-logs"
    }
  }
}
The Logstash log is:
root@laundry:/opt/ground/management# docker logs -f -t -n=5 5ae67e146ab0
2023-02-03T02:35:04.639861138Z [2023-02-03T10:35:04,639][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
2023-02-03T02:35:04.873655686Z [2023-02-03T10:35:04,873][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
2023-02-03T02:35:04.885933029Z [2023-02-03T10:35:04,884][INFO ][logstash.inputs.tcp ][main][06f1d7ee5445cc0e11cda56012ef6767600f21acd6133e02e957f761d26bac84] Starting tcp input listener {:address=>"0.0.0.0:50000", :ssl_enable=>false}
2023-02-03T02:35:04.934224084Z [2023-02-03T10:35:04,933][INFO ][org.logstash.beats.Server][main][4b91981ecb09a5d2
The output of snmpwalk and snmpget:
root@laundry:/opt/ground/management# snmpwalk -v 2c -c laundry 192.168.131.125 1.3.6.1.2.1.1.1.0
iso.3.6.1.2.1.1.1.0 = STRING: "Linux laundry 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 12:06:43 UTC 2023 aarch64"
root@laundry:/opt/ground/management# snmpget -v 2c -c laundry 192.168.131.125 1.3.6.1.2.1.1.1.0
iso.3.6.1.2.1.1.1.0 = STRING: "Linux laundry 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 12:06:43 UTC 2023 aarch64"
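One detail worth noting: 1.3.6.1.2.1.1.1.0 (sysDescr) is a scalar OID, not an SNMP table, so the tables option may simply return nothing for it. A minimal sketch (untested) that polls scalars with the snmp input's get option instead:
input {
  snmp {
    # scalar OIDs (ending in .0) are polled with get, not tables
    get => ["1.3.6.1.2.1.1.1.0"]
    hosts => [{
      host => "udp:192.168.131.125/161"
      community => "laundry"
      version => "2c"
    }]
    interval => 5
    type => "snmp"
  }
}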

Related

Kibana keeps complaining of dateparse and grokparse errors

I have the following in my logstash pipeline (which receives multiline logs from Filebeat):
filter {
  if [type] == "oracle" {
    grok {
      match => { "message" => "(?<day>[A-Za-z]{3})(\s*)(?<month>[A-Za-z]{3})(\s*)(?<monthday>[0-9]{1,2})(\s*)(?<hour>[0-9]{1,2}):(?<min>[0-9]{1,2}):(?<sec>[0-9]{2})(\s*)(?<year>[0-9]{4})(\s*)%{GREEDYDATA:audit_message}" }
      add_tag => [ "oracle_audit" ]
    }
    grok {
      match => { "audit_message" => "ACTION :\[[0-9]*\] '(?<ora_audit_action>.*)'.*DATABASE USER:\[[0-9]*\] '(?<ora_audit_dbuser>.*)'.*PRIVILEGE :\[[0-9]*\] '(?<ora_audit_priv>.*)'.*CLIENT USER:\[[0-9]*\] '(?<ora_audit_osuser>.*)'.*CLIENT TERMINAL:\[[0-9]*\] '(?<ora_audit_term>.*)'.*STATUS:\[[0-9]*\] '(?<ora_audit_status>.*)'.*DBID:\[[0-9]*\] '(?<ora_audit_dbid>.*)'.*SESSIONID:\[[0-9]*\] '(?<ora_audit_sessionid>.*)'.*USERHOST:\[[0-9]*\] '(?<ora_audit_dbhost>.*)'.*CLIENT ADDRESS:\[[0-9]*\] '(?<ora_audit_clientaddr>.*)'.*ACTION NUMBER:\[[0-9]*\] '(?<ora_audit_actionnum>.*)'" }
    }
    grok {
      match => { "source" => [ ".*/[a-zA-Z0-9_#$]*_[a-z0-9]*_(?<ora_audit_derived_pid>[0-9]*)_[0-9]*\.aud" ] }
    }
    mutate {
      add_field => { "ts" => "%{year}-%{month}-%{monthday} %{hour}:%{min}:%{sec}" }
    }
    date {
      locale => "en"
      match => [ "ts", "YYYY-MMM-dd HH:mm:ss", "YYYY-MMM-d HH:mm:ss" ]
    }
    mutate {
      remove_field => [ "ts", "year", "month", "day", "monthday", "hour", "min", "sec", "audit_message" ]
    }
  }
}
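A side note on the dateparse errors: if the first grok does not match, field references like %{year} are left as literal text in ts, so the date filter can never parse it. A possible variant (a sketch, untested) guards the later stages on the oracle_audit tag that the first grok only adds on success:
filter {
  if [type] == "oracle" and "oracle_audit" in [tags] {
    # only build and parse ts when the timestamp grok actually matched
    mutate {
      add_field => { "ts" => "%{year}-%{month}-%{monthday} %{hour}:%{min}:%{sec}" }
    }
    date {
      locale => "en"
      match => [ "ts", "YYYY-MMM-dd HH:mm:ss", "YYYY-MMM-d HH:mm:ss" ]
    }
  }
}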
Sample log (coming from Filebeat; I can confirm it has been chunked correctly):
Audit file /u01/app/oracle/admin/DEVINST/adump/DEVINST_ora_43619_20200913121607479069143795.aud
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
Build label: RDBMS_12.2.0.1.0_LINUX.X64_170125
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1
System name: Linux
Node name: testserver
Release: 3.10.0-862.14.4.el7.x86_64
Version: #1 SMP Fri Sep 21 09:07:21 UTC 2018
Machine: x86_64
Instance name: DEVINST
Redo thread mounted by this instance: 1
Oracle process number: 55
Unix process pid: 43619, image: oracle@testserver (TNS V1-V3)
Sun Sep 13 12:16:07 2020 +00:00
LENGTH : '275'
ACTION :[7] 'CONNECT'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[9] 'testuser'
CLIENT TERMINAL:[5] 'pts/0'
STATUS:[1] '0'
DBID:[10] '1762369616'
SESSIONID:[10] '4294967295'
USERHOST:[21] 'testserver'
CLIENT ADDRESS:[0] ''
ACTION NUMBER:[3] '100'
However, although the logs went through to Kibana, it is not showing me the "right" data, complaining of grokparse and dateparse errors (even though the grok rule tested fine in the Kibana debugger!):
Message shown in Kibana:
Audit file /u01/app/oracle/admin/DEVINST/adump/DEVINST_ora_43619_20200913121607479069143795.aud
Oracle Database 12c Standard Edition Release 12.2.0.1.0 - 64bit Production
Build label: RDBMS_12.2.0.1.0_LINUX.X64_170125
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1
System name: Linux
Node name: testserver
Release: 3.10.0-862.14.4.el7.x86_64
Version: #1 SMP Fri Sep 21 09:07:21 UTC 2018
Machine: x86_64
Instance name: DEVINST
Redo thread mounted by this instance: 1
Oracle process number: 55
Unix process pid: 43619, image: oracle@testserver (TNS V1-V3)
Message expected:
+00:00
LENGTH : '275'
ACTION :[7] 'CONNECT'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[9] 'testuser'
CLIENT TERMINAL:[5] 'pts/0'
STATUS:[1] '0'
DBID:[10] '1762369616'
SESSIONID:[10] '4294967295'
USERHOST:[21] 'testserver'
CLIENT ADDRESS:[0] ''
ACTION NUMBER:[3] '100'
Due to this, the fields weren't parsed properly either.
What am I doing wrong? Why is it not parsing the message and the date correctly even though the debugger shows the right output?
EDIT:
As per baudsp's suggestion I have overwritten my message like so:
filter {
  if [type] == "oracle" {
    grok {
      match => { "message" => "(?<day>[A-Za-z]{3})(\s*)(?<month>[A-Za-z]{3})(\s*)(?<monthday>[0-9]{1,2})(\s*)(?<hour>[0-9]{1,2}):(?<min>[0-9]{1,2}):(?<sec>[0-9]{2})(\s*)(?<year>[0-9]{4})(\s*)(?<message>[\S\s]*)" }
      overwrite => [ "message" ]
    }
    .....
However, Kibana is still showing me grokparse and dateparse errors :(
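A possible wrinkle here: GREEDYDATA is .*, and . does not match newlines, so in the original filter audit_message would only capture up to the first line break of the multiline event, making the second grok fail. And after the edit above, the timestamp grok captures into message, so audit_message (which the second grok still matches against) no longer exists. A sketch (untested) that keeps the capture name audit_message while letting it cross newlines:
filter {
  grok {
    # same timestamp prefix as before; [\S\s]* crosses newlines, unlike GREEDYDATA (.*)
    match => { "message" => "(?<day>[A-Za-z]{3})(\s*)(?<month>[A-Za-z]{3})(\s*)(?<monthday>[0-9]{1,2})(\s*)(?<hour>[0-9]{1,2}):(?<min>[0-9]{1,2}):(?<sec>[0-9]{2})(\s*)(?<year>[0-9]{4})(\s*)(?<audit_message>[\S\s]*)" }
    add_tag => [ "oracle_audit" ]
  }
}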
Thanks
J

Logstash receives a strange "<133>" code at the start of a TrendMicro log

My Logstash server is CentOS Linux release 8.1.1911, running Logstash 7.7.0.
I have a capture of what I received on UDP port 5514 with:
nc -lvu 5514 -o log.txt
The content of log.txt:
<133>Jun 05 09:23:35 TMCM:EVT_URL_CONTENT_FILTERING Security product="OfficeScan" Security product node="N/A" Security product IP="xx.xx.xx.xx;xxxx::xxxx:xxxx:xxxx:4490" Event time="4/25/2020 11:46:01 PM (UTC)" URL="http://xxxxxxx.xxxxxxx.intranet/SMS_MP/.sms_pol?DEP-Z0120115-ScopeId_B14503FF-F7AA-49EC-A38C-F50D813EEC6E/Application_57a673e1-3e65-4f1c-8ce2-0f4cc1b38acc.SHA256:5EF20484EEC38EA203D7A885EAA48BE2DFDC4F130ED8BF5BEA333378875B2516" Source IP="" Destination IP="yyy.yyy.yyy.yyy" Policy rule="" Blocking type="Web reputation" Domain="xxxx-xxxxx" Event time (local)="4/25/2020 7:46:01 PM" Client host name="N/A" Reputation Score="81"
myfilter.conf
input {
  udp {
    port => 5514
    type => syslog
  }
}
filter {
  grok {
    match => { "message" => "(?<user_agent>[^>]*)(?<user_agent>[^:]*)%{POSINT}\s%{WORD:logfrom}\s%{WORD:logtag}\:\s%{NOTSPACE:eventname}\s([^=]*)\=%{QUOTEDSTRING:security_product} ([^=]*)\=%{QUOTEDSTRING:security_prod_node}\s([^=]*)\=\"%{IPV4:security_prod_ip}([^=]*)\=\"(?<agent_detected_time>%{MONTHNUM}\/%{MONTHDAY}\/%{YEAR} %{TIME}\s(?:AM|am|PM|pm)\s*\s\(%{TZ:tz}\)).*URL\=\"%{URI:url}\" ([^=]*)\=%{QUOTEDSTRING:src_ip}\s([^=]*)\=\"%{IPV4:dest_ipv4}\"\s([^=]*)\=%{QUOTEDSTRING:policy_rule} ([^=]*)\=%{QUOTEDSTRING:bloking_type} ([^=]*)\=%{QUOTEDSTRING:domain} ([^=]*)\=\"(?<server_alert_time>%{MONTHNUM}\/%{MONTHDAY}\/%{YEAR} %{TIME}\s(?:AM|am|PM|pm))\"\s([^=]*)\=%{QUOTEDSTRING:client_hostname} ([^=]*)\=\"%{BASE10NUM:reputation_score}/?" }
  }
}
output {
  stdout { codec => rubydebug }
}
An example of the Logstash output:
[2020-06-08T13:11:02,253][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
    "type" => "syslog",
    "@timestamp" => 2020-06-08T18:06:39.090Z,
    "message" => "<133>Jun 08 14:06:38 TMCM:EVT_URL_CONTENT_FILTERING Security product=\"OfficeScan\" Security product node=\"N/A\" Security product IP=\"xx.xx.xx.xx;xxxx::xxxx:xxx:xxxx:4490\" Event time=\"4/26/2020 7:33:36 AM (UTC)\" URL=\"http://blabnlabla.bla-blabla.intranet/SMS_MP/.sms_pol?DEP-Z0120105-ScopeId_B14503FF-F7AA-49EC-A38C-F50D813EEC6E/Application_2be50193-9121-4239-a70f-ba06ad7bbfbd.SHA256:6FF12991BBA769F9C15F7E1FA3E3058E22B4D918F6C5659CF7B976059082510D\" Source IP=\"\" Destination IP=\"xxx.xx.xxx.xx\" Policy rule=\"\" Blocking type=\"Web reputation\" Domain=\"bla-blabla\" Event time (local)=\"4/26/2020 3:33:36 AM\" Client host name=\"N/A\" Reputation Score=\"81\"",
    "@version" => "1",
    "host" => "xx.xxx.xx.xx",
    "tags" => [
        [0] "_grokparsefailure"
    ]
}
I have also tried "\<133\>" but it still appears. I have no idea what this <133> is.
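For what it's worth, <133> looks like a standard syslog PRI header (facility*8 + severity; 133 = local0.notice), which a raw udp input does not strip the way a syslog input would. A sketch (untested) that captures it into its own field before the main grok, optionally decoding it with the syslog_pri filter:
filter {
  grok {
    # peel off the leading <PRI> so the rest of the pipeline sees a clean line
    match => { "message" => "^<%{POSINT:syslog_pri}>%{GREEDYDATA:syslog_message}" }
  }
  # decodes the syslog_pri field into facility/severity labels
  syslog_pri { }
}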
P.S. I've been learning this by myself for the last 2 weeks.

GROK pattern cannot parse logs

I want to launch the ELK stack to gather syslog from all my network equipment - Cisco, F5, Huawei, CheckPoint, etc. I'm experimenting with Logstash and writing grok patterns.
Below is an example of messages from Cisco ASR:
<191>Oct 30 16:30:10 evlogd: [local-60sec10.950] [cli 30000 debug] [8/0/30501 cliparse.c:367] [context: local, contextID: 1] [software internal system syslog] CLI command [user root, mode [local]ASR5K]: show ims-authorization policy-control statistics\u0000
<190>Oct 30 16:30:10 evlogd: [local-60sec10.959] [cli 30005 info] [8/0/30501 _commands_cli.c:1792] [software internal system syslog] CLI session ended for Security Administrator root on device /dev/pts/7\u0000
<190>Oct 30 16:30:10 evlogd: [local-60sec10.981] [snmp 22002 info] [8/0/4550 trap_api.c:930] [software internal system syslog] Internal trap notification 53 (CLISessEnd) user root privilege level Security Administrator ttyname /dev/pts/7\u0000
<190>Oct 30 16:30:12 evlogd: [local-60sec12.639] [cli 30004 info] [8/0/30575 cli_sess.c:127] [software internal system syslog] CLI session started for Security Administrator root on device /dev/pts/7 from 192.168.1.1\u0000
<190>Oct 30 16:30:12 evlogd: [local-60sec12.640] [snmp 22002 info] [8/0/30575 trap_api.c:930] [software internal system syslog] Internal trap notification 52 (CLISessStart) user root privilege level Security Administrator ttyname /dev/pts/7\u0000
All of them match my pattern (tested here and here):
<%{POSINT:syslog_pri}>%{DATA:month} %{DATA:monthday} %{TIME:time} %{WORD:device}: \[%{WORD:facility}\-%{HOSTNAME}\] \[%{WORD:service} %{POSINT} %{WORD:priority}\] \[%{DATA}\] ?(\[context: %{DATA:context}, %{DATA}\])?%{SPACE}?(\[%{DATA}\] )%{GREEDYDATA:message}\\u0000
But my simple Logstash configuration returns the tag _grokparsefailure (or _grokparsefailure_sysloginput if I use the grok pattern in the syslog input plugin), and doesn't parse my log.
Config using the grok filter:
input {
  udp {
    port => 5140
    type => syslog
  }
}
filter {
  if [type] == "syslog" {
    grok {
      match => ["message", "<%{POSINT:syslog_pri}>%{DATA:month} %{DATA:monthday} %{TIME:time} %{WORD:device}: \[%{WORD:facility}\-%{HOSTNAME}\] \[%{WORD:service} %{POSINT} %{WORD:priority}\] \[%{DATA}\] ?(\[context: %{DATA:context}, %{DATA}\])?%{SPACE}?(\[%{DATA}\] )%{GREEDYDATA:response}\\u0000"]
    }
  }
}
output { stdout { codec => rubydebug } }
Output:
{
    "@version" => "1",
    "host" => "172.17.0.1",
    "@timestamp" => 2018-10-31T09:46:51.121Z,
    "message" => "<190>Oct 31 15:46:51 evlogd: [local-60sec51.119] [snmp 22002 info] [8/0/4550 <sitmain:80> trap_api.c:930] [software internal system syslog] Internal trap notification 53 (CLISessEnd) user kiwi privilege level Security Administrator ttyname /dev/pts/7\u0000",
    "type" => "syslog",
    "tags" => [
        [0] "_grokparsefailure"
    ]
}
Config using the syslog input plugin:
input {
  syslog {
    port => 5140
    grok_pattern => "<%{POSINT:syslog_pri}>%{DATA:month} %{DATA:monthday} %{TIME:time} %{WORD:device}: \[%{WORD:facility}\-%{HOSTNAME}\] \[%{WORD:service} %{POSINT} %{WORD:priority}\] \[%{DATA}\] ?(\[context: %{DATA:context}, %{DATA}\])?%{SPACE}?(\[%{DATA}\] )%{GREEDYDATA:response}\\u0000"
  }
}
output {
  stdout { codec => rubydebug }
}
Output:
{
    "severity" => 0,
    "@timestamp" => 2018-10-31T09:54:56.871Z,
    "@version" => "1",
    "host" => "172.17.0.1",
    "message" => "<191>Oct 31 15:54:56 evlogd: [local-60sec56.870] [cli 30000 debug] [8/0/22400 <cli:8022400> cliparse.c:367] [context: local, contextID: 1] [software internal system syslog] CLI command [user kiwi, mode [local]ALA3_ASR5K]: show subscribers ggsn-only sum apn osmp\u0000",
    "tags" => [
        [0] "_grokparsefailure_sysloginput"
    ],
}
What am I doing wrong? And can someone help fix it?
P.S. Tested on Logstash 2.4.1 and 5.
Unlike the online debuggers, Logstash's grok didn't like the \\u0000 at the end of my pattern.
With a single backslash everything works.
The right grok filter is:
<%{POSINT:syslog_pri}>%{DATA:month} %{DATA:monthday} %{TIME:time} %{WORD:device}: \[%{WORD:facility}\-%{HOSTNAME}\] \[%{WORD:service} %{POSINT} %{WORD:priority}\] \[%{DATA}\] ?(\[context: %{DATA:context}, %{DATA}\])?%{SPACE}?(\[%{DATA}\] )%{GREEDYDATA:message}\u0000
I have faced a similar problem. The workaround is to receive those logs from the routers/firewalls/switches on a syslog-ng server and then forward them to Logstash.
Following is a sample configuration for syslog-ng:
source s_router1 {
    udp(ip(0.0.0.0) port(1514));
    tcp(ip(0.0.0.0) port(1514));
};
destination d_router1_logstash { tcp("localhost",port(5045)); };
log { source(s_router1); destination(d_router1_logstash); };
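On the Logstash side, the matching listener for the syslog-ng destination above (a minimal sketch, assuming port 5045 as configured there) would be a plain tcp input:
input {
  tcp {
    # receives the lines forwarded by syslog-ng's d_router1_logstash destination
    port => 5045
    type => "syslog"
  }
}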

logstash mutate to replace field value in output

I'm trying to replace 10.100.251.98 with another IP, 10.100.240.199, in my Logstash config. I have tried using a filter with mutate, yet I'm unable to get the syntax right.
Sep 25 15:50:57 10.100.251.98 mail_logs: Info: New SMTP DCID 13417989 interface 172.30.75.10 address 172.30.75.12 port 25
Sep 25 15:50:57 10.100.251.98 local_mail_logs: Info: New SMTP DCID 13417989 interface 172.30.75.10 address 172.30.75.12 port 25
Sep 25 15:51:04 10.100.251.98 cli_logs: Info: PID 35559: User smaduser login from 10.217.3.22 on 172.30.75.10
Sep 25 15:51:22 10.100.251.98 cli_logs: Info: PID 35596: User smaduser login from 10.217.3.22 on 172.30.75.10
Here is my code:
input { file { path => "/data/collected" } }
filter {
  if [type] == "syslog" {
    mutate {
      replace => [ "@source_host", "10.100.251.99" ]
    }
  }
}
output {
  syslog {
    facility => "kernel"
    host => "10.100.250.199"
    port => 514
  }
}
I'm noticing a few things about your config. First, you don't have any log parsing. You won't be able to replace a field if it doesn't exist yet. To do this, you can use a codec in your input block or a grok filter; I added a simple grok filter.
You also check if [type] == "syslog". You never set the type, so that check will always fail. If you want to set a type, you can do that in your input block: input { file { path => "/data/collected" type => "syslog" } }
Here is the sample config I used for testing the grok pattern and replacement of the IP.
input { tcp { port => 5544 } }
filter {
  grok {
    match => { "message" => "%{CISCOTIMESTAMP:log_time} %{IP:@source_host} %{DATA:log_type}: %{DATA:log_level}: %{GREEDYDATA:log_message}" }
  }
  mutate {
    replace => [ "@source_host", "10.100.251.199" ]
  }
}
output {
  stdout { codec => rubydebug }
}
which outputs this:
{
    "message" => "Sep 25 15:50:57 10.100.251.98 mail_logs: Info: New SMTP DCID 13417989 interface 172.30.75.10 address 172.30.75.12 port 25",
    "@version" => "1",
    "@timestamp" => "2016-09-25T14:03:20.332Z",
    "host" => "0:0:0:0:0:0:0:1",
    "port" => 52175,
    "log_time" => "Sep 25 15:50:57",
    "@source_host" => "10.100.251.199",
    "log_type" => "mail_logs",
    "log_level" => "Info",
    "log_message" => "New SMTP DCID 13417989 interface 172.30.75.10 address 172.30.75.12 port 25"
}
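If the goal is also to rewrite the IP inside the raw message text itself (the question mentions replacing 10.100.251.98 with 10.100.240.199), mutate's gsub can do a regex substitution; a sketch, untested:
filter {
  mutate {
    # gsub takes triples of [field, regex, replacement]
    gsub => [ "message", "10\.100\.251\.98", "10.100.240.199" ]
  }
}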

rsyslog forwarder seems not to work

I would like to send rsyslog messages to my ELK stack, but it does not work.
rsyslog conf
*.* @@127.0.0.1:10514
local6.* /tmp/grenard.log
&~
logstash conf
input {
  syslog {
    port => 10514
    type => "syslog"
  }
  stdin {}
}
output {
  stdout { codec => rubydebug }
}
Logstash really is listening on 10514 (I tested with telnet localhost 10514 and I can see the input in my stdout):
root@VM-GUILLAUME /etc/logstash/conf.d # /opt/logstash/bin/logstash -f /etc/logstash/conf.d
Settings: Default filter workers: 4
Logstash startup completed
{
    "message" => "bonjour\r\n",
    "@version" => "1",
    "@timestamp" => "2016-03-01T10:55:41.488Z",
    "type" => "syslog",
    "host" => "0:0:0:0:0:0:0:1",
    "tags" => [
        [0] "_grokparsefailure_sysloginput"
    ]
}
Moreover, the log file is being written to, so I know my rsyslog conf is OK:
logger -t apache -i -p local6.info $(date)
The log file:
Mar 1 12:06:04 localhost apache[13700]: mar. mars 1 12:06:04 CET 2016
The problem was due to TCP (@@). Using UDP (@) solved it. Here is my rsyslog.d/grenard.conf:
*.* @127.0.0.1:10514
local6.* /tmp/grenard.log
&~
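For completeness, the Logstash side matching the UDP forwarding above could also be a plain udp input (a minimal sketch; the syslog input from the question works too, but it expects RFC3164-formatted lines, which is why a bare telnet "bonjour" got tagged _grokparsefailure_sysloginput):
input {
  udp {
    # matches the rsyslog "@127.0.0.1:10514" UDP forwarding
    port => 10514
    type => "syslog"
  }
}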
