Logstash + nginx: Fields won't be added - logstash

I'm setting up my own ELK server to centralize logging. So far so good.
I set up a docker-compose.yml to run the ELK stack and another docker-compose.yml to run Filebeat, which watches the log files, adds env + tags and sends everything to Logstash. The parsing should be done in Logstash.
filebeat.yml
name: xyz
fields:
  env: xyz
output.logstash:
  hosts: ["xyz:5044"]
filebeat.prospectors:
- input_type: log
  fields:
    app_id: default
    type: nginx-access
  tags:
    - nginx
    - access
  paths:
    - /logs/nginx/access.log*
- input_type: log
  fields:
    app_id: default
    type: nginx-error
  tags:
    - nginx
    - error
  paths:
    - /logs/nginx/error.log*
and here's the Logstash config:
input {
  beats {
    port => 5044
  }
}
filter {
  if ["nginx"] in [tags] and ["access"] in [tags] {
    grok {
      patterns_dir => "/etc/logstash/patterns"
      match => { "message" => "%{NGINXACCESS}" }
    }
    date {
      match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
    }
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
  stdout { codec => rubydebug }
}
The nginx-pattern is here
NGUSERNAME [a-zA-Z\.\#\-\+_%]+
NGUSER %{NGUSERNAME}
NGINXACCESS %{IPORHOST:clientip} %{NGUSER:indent} %{NGUSER:agent} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:verb} %{URIPATHPARAM:request}(?: HTTP/%{NUMBER:httpversion})?|)\" %{NUMBER:answer} (?:%{NUMBER:byte}|-) (?:\"(?:%{URI:referrer}|-))\" (?:%{QS:referree}) %{QS:agent}
I tested the expression on grokconstructor.appspot.com and it matches.
Here's a demo line:
127.0.0.1 - - [11/May/2017:13:49:31 +0200] "GET / HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2486.0 Safari/537.36 Edge/13.10586" "-"
But no fields were added.
I think maybe my "if" is wrong, but I tried several alternatives... nothing helped.
Any ideas?

I would start by removing the if, then adding the conditions one by one. That is, start with
["nginx"] in [tags]
and if that works, then go in for
["nginx"] in [tags] and ["access"] in [tags]
Alternatively you could try using
"nginx" in [tags] and "access" in [tags]
UPDATE:
To use fields.type = nginx-access, try
"nginx-access" in [fields][type]

Related

Filebeat: How to remove a log if some key or value exists?

Filebeat version 7.17.3.
I have 3 different logs, for example:
{"level":"debug","message":"Start proxy checking","module":"proxy","timestamp":"2022-05-18 23:22:15 +0200"}
{"level":"info","message":"Attempt to get proxy","module":"proxy","timestamp":"2022-05-18 23:22:17 +0200"}
{"campaign":"18","level":"warn","message":"Missed or empty list","module":"loader","session":"pYpifim","timestamp":"2022-05-18 23:27:46 +0200"}
How is it possible to not forward / to filter out the log to Logstash or Elasticsearch if level equals "info"?
How is it possible to not forward / to filter out the log if the key campaign does not exist?
In Filebeat I have the following:
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
  - decode_json_fields:
      fields: ["message"]
      process_array: false
      max_depth: 1
      target: ""
      overwrite_keys: true
      add_error_key: true
  - drop_fields:
      fields: ["agent", "host", "log", "ecs", "input", "location"]
But with drop_fields I can only remove some fields; I need to not save the log at all if a certain key or value exists!
In Logstash, deleting those events is no problem - see below - but how do I do this in Filebeat?
/etc/logstash/conf.d/40-filebeat-to-logstash.conf
input {
  beats {
    port => 5044
    include_codec_tag => false
  }
}
filter {
  if "Start proxy checking" in [message] {
    drop { }
  }
  if "Attempt to get proxy" in [message] {
    drop { }
  }
}
output {
  elasticsearch {
    hosts => ["http://xxx.xxx.xxx.xxx:9200"]
    # index => "myindex"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+yyyy.MM.dd}"
  }
}
Thank you in advance.
In Filebeat there is a drop_event processor:
processors:
  - drop_event:
      when:
        condition
https://www.elastic.co/guide/en/beats/filebeat/7.17/drop-event.html
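For the two cases above, a hedged sketch (field names taken from the sample log lines; condition syntax per the linked Beats documentation) might be:

processors:
  # drop any event whose decoded "level" field equals "info"
  - drop_event:
      when:
        equals:
          level: "info"
  # drop any event that has no "campaign" field at all
  - drop_event:
      when:
        not:
          has_fields: ["campaign"]

Note that these processors would have to run after decode_json_fields so that level and campaign exist as top-level fields.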

Tags in logstash

How to direct postfix logs to the postfix index?
In the Logstash config:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
  }
}
output {
  if "postfix" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "postfix-%{+YYYY.MM.dd}"
    }
  }
}
In filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/maillog*
  exclude_files: [".gz$"]
  tags: ["postfix"]
output.logstash:
  hosts: ["10.50.11.8:5044"]
The Logstash log contains a lot of warnings like:
[WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"newrelicdata", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x4ceb504a>], :response=>{"index"=>{"_index"=>"newrelicdata", "_type"=>"_doc", "_id"=>"V7x2z20Bp3jq-MOqpNbt", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text] in document with id 'V7x2z20Bp3jq-MOqpNbt'. Preview of field's value: '{name=mail.domain.com}'", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:521"}}}}}
Why does data from mail.domain.com end up somewhere other than the postfix index? And why is the data trying to get into all the indexes? Any help?
I think the logs contain a field named "host", so it's throwing the error "failed to parse field [host] of type [text]". Try renaming it:
mutate {
  rename => ["host", "server"]
  convert => {"server" => "string"}
}
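For context, a sketch of where that mutate could sit in the pipeline above (the placement is my assumption, not part of the original answer):

filter {
  grok {
  }
  # rename the Beats "host" object so it no longer collides with a
  # text-typed "host" field already mapped in the target index
  mutate {
    rename => ["host", "server"]
    convert => {"server" => "string"}
  }
}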
I tried sending the logs with the below filebeat.yml configuration,
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/varsha/ELK7.4/logs/*.log
  tags: ["postfix"]
output.logstash:
  hosts: ["localhost:5044"]
With the above configuration and the files you have shared with me, everything works fine and I got the expected result; please check it at https://pastebin.com/f6e0E52S

ELK | Log file grok filtered format not pushing into Elasticsearch

I have a log file in the below format to extract into Elasticsearch, but the Logstash-filtered data is not being pushed into Elasticsearch.
With the same grok filter configuration I am able to get it from Kibana Dev Tools.
Sample logfile:
OCDE - 2019-05-22 13:24:34.000 ERROR org.ramyam.ocde.task.NBALookupTask.checkResponsesToBeProcessed - checkResponsesToBeProcessed started : Wed May 22 13:24:34 IST 2019
Filebeat configuration:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\data\logs\OCDE.log
  document_type: ocde
Logstash configuration:
input {
  file {
    type => "ocde"
    path => "C:\data\logs\OCDE.log"
  }
  beats {
    port => 5044
    ssl => false
  }
}
filter {
  grok {
    match => [ "message" , '%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}}' ]
  }
}
output {
  if [type] == "ocde" {
    elasticsearch {
      hosts => ["localhost:9200"]
      #manage_template => false
      index => "enliven_be_log_yyyymmdd"
      document_type => ocde
    }
  }
}
I am expecting the below result from the above configuration in Elasticsearch:
{
"level": "ERROR",
"loggerTime": "2019-05-22 13:24:34.000",
"moduleName": "OCDE",
"methodName": "checkResponsesToBeProcessed",
"className": "org.ramyam.ocde.task.NBALookupTask",
"loggermsg": "checkResponsesToBeProcessed started : Wed May 22 13:24:34 IST 2019"
}
Can anyone please explain what I am missing, or share a sample configuration?
You can try the below grok pattern:
%{DATA:moduleName}%{SPACE}*-%{SPACE}*%{TIMESTAMP_ISO8601:loggerTime}%{SPACE}*%{LOGLEVEL:level}%{SPACE}*%{JAVACLASS:className}\.%{DATA:methodName}%{SPACE}*-%{SPACE}*%{GREEDYDATA:loggermsg}
Change your grok from:
%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}}
to:
%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}
To validate this, use http://grokdebug.herokuapp.com/ and paste in the log message you provided.
Your pattern works fine; you just had one extra closing brace at the end.
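Putting the corrected pattern back into the original filter (the only change being the removal of the trailing }), the grok block would look roughly like this:

filter {
  grok {
    match => [ "message", '%{DATA:moduleName} - %{TIMESTAMP_ISO8601:loggerTime}\s+%{LOGLEVEL:level}\s+%{JAVACLASS:className}\.%{DATA:methodName} - %{GREEDYDATA:loggermsg}' ]
  }
}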

How can Filebeat specify match rules to Logstash

I want to let Logstash's grok filter use the match rules that Filebeat provides.
Here is my Filebeat config:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /root/Log-test/test.log
  fields:
    "@metadata":
      formatter: "%{TIMESTAMP_ISO8601:timestamp} - %{NOTSPACE:module} - %{LOGLEVEL:level} - %{NOTSPACE:filename} - %{GREEDYDATA:log_message}"
  fields_under_root: true
output.logstash:
  hosts: ["localhost:5045"]
Here is my Logstash config:
input {
  beats {
    port => "5045"
  }
}
filter {
  grok {
    match => { "message" => "%{[@metadata][formatter]}" }
  }
}
output {
  file {
    path => "/tmp/log-test.log"
    codec => rubydebug { metadata => true }
  }
}
So, I want grok to know that my match rule for the message field is "%{TIMESTAMP_ISO8601:timestamp} - %{NOTSPACE:module} - %{LOGLEVEL:level} - %{NOTSPACE:filename} - %{GREEDYDATA:log_message}".
But the setting above does not work. I want to know how I can implement something like this, or whether it is possible at all.
Thanks!
From this posting, "Grok expressions with dynamic %{field} references aren't supported". The original author of that post opened a github issue, which is now here (and unresolved after a year).
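As a workaround (my suggestion, not from the linked post), the pattern can simply live in the Logstash config instead of being shipped per-event from Filebeat:

filter {
  grok {
    # pattern hard-coded in Logstash rather than referenced dynamically
    # from a Filebeat-supplied field
    match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} - %{NOTSPACE:module} - %{LOGLEVEL:level} - %{NOTSPACE:filename} - %{GREEDYDATA:log_message}" }
  }
}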

Logstash Simple File Input Configuration

I am new to Logstash, and I have been trying to make a simple .conf file to read logs from a sample log file. I have tried everything from setting sincedb_path to $HOME/.sincedb to setting start_position to "beginning", but I can't seem to get the data to be read, even to stdout. The following are sample lines from my log:
10.209.12.40 - - [06/Aug/2014:22:59:18 +0000] "GET /robots.txt HTTP/1.1" 200 220 "-" "Example-Prg/1.0"
www.example.com 10.209.11.40 - - [06/Aug/2014:23:05:15 +0000] "GET /robots.txt HTTP/1.1" 200 220 "-" "Example-Prog/1.0"
www.example.com 10.209.11.40 - - [06/Aug/2014:23:10:21 +0000] "GET /File/location-path HTTP/1.1" 404 25493 "http://blog.example.com/link-1" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36(KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36"
The following is the .conf file that I am using:
input {
  stdin {
  }
  file {
    # type => "access"
    path => ["/home/user/Desktop/path1/www.example.com-access_log_20140806.log"]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  # if [type] == "access"
  # {
  grok {
    break_on_match => false
    match => {
      "message" => '%{IP:sourceIP} %{DATA:User_Id} %{DATA:User_Auth} \[%{HTTPDATE:timestamp}\] \"%{WORD:HTTP_Command} %{DATA:HTTP_Path} %{DATA:HTTP_Version}\" %{NUMBER:HTTP_Code} %{NUMBER:Bytes} \"%{DATA:Host}\" \"%{DATA:Agent}\"'
    }
    match => {
      "message" => '%{HOST:WebPage} %{IP:sourceIP} %{DATA:User_Id} %{DATA:User_Auth} \[%{HTTPDATE:timestamp}\] \"%{WORD:HTTP_Command} %{DATA:HTTP_Path} %{DATA:HTTP_Version}\" %{NUMBER:HTTP_Code} %{NUMBER:Bytes} \"%{DATA:Host}\" \"%{DATA:Agent}\"'
    }
  }
  # }
}
output {
  stdout {
    codec => rubydebug
  }
}
I am running it through stdout to check whether I am getting any output. I am getting the following:
{
"message" => "",
"#version" => "1",
"#timestamp" => "2015-07-02T20:48:55.453Z",
"host" => "monil-Inspiron-3543",
"tags" => [
[0] "_grokparsefailure"
]
}
I have spent a good number of hours trying to figure out what is wrong. Please tell me where I am going wrong.
Thanks in advance.
EDIT: It was an error in the file name.
