Logging .NET Core with Elastic Stack - Logstash

Trying to set up simple logging with Filebeat and Logstash and be able to view the logs in Kibana. I am running a simple MVC .NET Core app with log4net as the logger. The log4net FileAppender is appending logs to C:\Logs\Debug.log just fine; however, I am not able to push those to Kibana.
Based on the article here, I would set up Filebeat, transform the logs via Logstash, and then be able to view my logs in Kibana.
logstash.yml
- module: logstash
  # logs
  log:
    enabled: true
    #var.paths: ["C:/Logs/Debug.log"] - THIS CAUSES ERRORS - should this be UNCOMMENTED?
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
  # Slow logs
  slowlog:
    enabled: true
    #var.paths: ["C:/Logs/Debug.log"]
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
logstash-sample.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  file {
    path => "C:\Logs\Debug.log"
    type => "log4net"
    codec => multiline {
      pattern => "^(DEBUG|WARN|ERROR|INFO|FATAL)"
      negate => true
      what => previous
    }
  }
}
filter {
  if [type] == "log4net" {
    grok {
      match => [ "message", "(?m)%{LOGLEVEL:level} %{TIMESTAMP_ISO8601:sourceTimestamp} %{DATA:logger} \[%{NUMBER:threadId}\] \[%{IPORHOST:tempHost}\] %{GREEDYDATA:tempMessage}" ]
    }
    mutate {
      replace => [ "message" , "%{tempMessage}" ]
      replace => [ "host" , "%{tempHost}" ]
      remove_field => [ "tempMessage" ]
      remove_field => [ "tempHost" ]
    }
  }
}
output {
  elasticsearch {
    host => localhost
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
Running logstash with config-sample output:
Filebeat.yml
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
#=========================== Filebeat inputs =============================
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - c:\Logs\*.log

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
#============================== Dashboards =====================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here, or by using the `-setup` CLI flag or the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  host: "localhost:5601"

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
#============================= Elastic Cloud ==================================
# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
#================================ Outputs =====================================
# Configure what output to use when sending the data collected by the beat.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
# Enabled ilm (beta) to use index lifecycle management instead daily indices.
#ilm.enabled: false
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
#============================== Xpack Monitoring ===============================
# filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#xpack.monitoring.enabled: false
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well. Any setting that is not set is
# automatically inherited from the Elasticsearch output configuration, so if you
# have the Elasticsearch output configured, you can simply uncomment the
# following line.
#xpack.monitoring.elasticsearch:
Output from my browser windows:
I can see my MVC app logging just fine (log4net writes to C:\Logs\Debug.log); however, I am not able to set things up so that these entries show up in Kibana.
How would I set it up so that I can see my logs in Kibana?
EDIT 1:
logstash.config
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:timestamp}~~\[%{DATA:thread}\]~~\[%{DATA:user}\]~~\[%{DATA:requestId}\]~~\[%{DATA:userHost}\]~~\[%{DATA:requestUrl}\]~~%{DATA:level}~~%{DATA:logger}~~%{DATA:logmessage}~~%{DATA:exception}\|\|" }
    add_field => {
      "received_at" => "%{@timestamp}"
      "received_from" => "%{host}"
    }
    remove_field => ["message"]
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss:SSS" ]
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    sniffing => true
    index => "%{app_name}_%{app_env}_%{type}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    #user => "elastic"
    #password => "changeme"
  }
  stdout { codec => rubydebug }
}
filebeat.yml
filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - c:\Logs\*.log
.....
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
# Array of hosts to connect to.
#hosts: ["localhost:9200"]
# Enabled ilm (beta) to use index lifecycle management instead daily indices.
#ilm.enabled: false
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]
I have Filebeat enabled and running as a service, and Logstash running as well (see the PowerShell window below). When I change anything in the Debug.log file and save, I see those changes output to the console right away.
However, when I go to the dashboard I still do not see any logs. What am I doing wrong?

I was able to solve this. Logging in .NET Core 2.0 using log4net:
1. Set up log4net as usual (make sure your logging works and your logs get written to some log file; for me it's C:\Logs\Debug.log).
2. Install Kibana, Elasticsearch, Logstash and Filebeat: https://www.elastic.co/start
3. Configure filebeat.yml:
#=========================== Filebeat inputs =============================
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - c:\Logs\*.log
  multiline.pattern: '^(\d{4}-\d{2}-\d{2}\s)'
  multiline.negate: true
  multiline.match: after
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml
  # Set to true to enable config reloading
  reload.enabled: false
  # Period on which files under path should be checked for changes
  #reload.period: 10s
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  host: "localhost:5601"
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch: => MAKE SURE THIS IS COMMENTED OUT
# Array of hosts to connect to.
# hosts: ["localhost:9200"]
# Enabled ilm (beta) to use index lifecycle management instead daily indices.
#ilm.enabled: false
# Optional protocol and basic auth credentials.
#protocol: "https"
#username: "elastic"
#password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
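The multiline settings in the input above assume that every log4net entry starts with a date in yyyy-MM-dd form; any line that does not match (a stack trace, for instance) is appended to the preceding event, so a multi-line error reaches Logstash as a single event. A hypothetical entry that would be grouped this way (content made up; the exact field layout is whatever your log4net pattern produces, only the leading date matters for the grouping):
2019-03-01 10:15:31:004 ERROR MyApp.HomeController - Request failed
System.NullReferenceException: Object reference not set to an instance of an object.
   at MyApp.Controllers.HomeController.Index()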
logstash.yml
- module: logstash
  # logs
  log:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths: -C:\Logs\*.log
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
  # Slow logs
  slowlog:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    #var.paths: C:\Logs\*.log
    # Convert the timestamp to UTC. Requires Elasticsearch >= 6.1.
    #var.convert_timezone: false
logstash.conf
# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "(?m)^%{TIMESTAMP_ISO8601:timestamp}~~\[%{DATA:thread}\]~~\[%{DATA:user}\]~~\[%{DATA:requestId}\]~~\[%{DATA:userHost}\]~~\[%{DATA:requestUrl}\]~~%{DATA:level}~~%{DATA:logger}~~%{DATA:logmessage}~~%{DATA:exception}\|\|" }
    add_field => {
      "received_at" => "%{@timestamp}"
      "received_from" => "%{host}"
    }
    remove_field => ["message"]
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss:SSS" ]
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    sniffing => true
    index => "filebeat-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
    #user => "elastic"
    #password => "changeme"
  }
  stdout { codec => rubydebug }
}
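Note that the grok pattern above only matches if log4net writes each entry in a ~~-delimited layout ending with ||, which comes from a custom log4net conversionPattern rather than the defaults. A hypothetical line it would parse (all field values made up, adjust to your own layout):
2019-03-01 10:15:30:123~~[worker-1]~~[jdoe]~~[42]~~[127.0.0.1]~~[/home/index]~~INFO~~MyApp.Controllers.HomeController~~Request completed~~||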
Make sure Logstash is running with this configuration (CMD):
\bin\logstash -f c:\Elastic\Logstash\config\logstash.conf
Open your log file (C:\Logs\Debug.log) and add something. You should see output in the PowerShell window where Logstash is running and pulling in the data:
Open Kibana and go to the index that you've written to (from logstash.conf):
index => "filebeat-%{+YYYY.MM.dd}"

Related

filebeat send messages to certain index

I have an installed elasticsearch - logstash - kibana stack and 2 clients, ELKclient1 and ELKclient2, with Filebeat installed on the clients. I need each client to write logs to a separate index: ELKclient1 to index test-%{+YYYY.MM.dd} and ELKclient2 to index test2-%{+YYYY.MM.dd} (sending nginx access logs). For some reason logs from the clients are written to both indexes, e.g. logs from client ELKclient2 are written to both test-%{+YYYY.MM.dd} and test2-%{+YYYY.MM.dd} (attachment 1 and attachment 2). Do you have any clue why this is happening?
#config filebeat on client2
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  fields:
    type: nginx_access
  fields_under_root: true
  scan_frequency: 5s
registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["ip-address_logstash:5044"]
    index: "test2-%{+YYYY.MM.dd}"
    bulk_max_size: 1024
shipper:
logging:
  to_syslog: false
  to_files: true
  level: info
  files:
    path: /var/log/filebeat
    name: filebeat.log
#config logstash output
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "test-%{+YYYY.MM.dd}"
  }
  #stdout { codec => rubydebug }
  elasticsearch {
    hosts => "localhost:9200"
    index => "test2-%{+YYYY.MM.dd}"
  }
  #stdout { codec => rubydebug }
}
In order to make each client write logs to a separate index, you need to add something to the events that differentiates the logs coming from the different servers.
Considering the requirement in your question, one way is to put the following code in the output section of your Logstash config file.
output {
  if [beat][hostname] == "ELKclient1" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "test-%{+YYYY.MM.dd}"
    }
  }
  else if [beat][hostname] == "ELKclient2" {
    elasticsearch {
      hosts => "localhost:9200"
      index => "test2-%{+YYYY.MM.dd}"
    }
  }
  else {
    stdout {
      codec => rubydebug
    }
  }
}
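Note that the field holding the hostname depends on the Beats version; in newer releases it may be [host][name] or [agent][hostname] rather than [beat][hostname]. As a variation on the same idea, a tag set in each client's filebeat.yml (for example tags: ["client2"], a name chosen here purely for illustration) can drive the routing instead of the hostname; a minimal sketch of the Logstash side:
output {
  if "client2" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "test2-%{+YYYY.MM.dd}"
    }
  }
  else {
    elasticsearch {
      hosts => "localhost:9200"
      index => "test-%{+YYYY.MM.dd}"
    }
  }
}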

Filebeat fails to connect to logstash

I'm using two servers in the cloud: on one server (A) I installed Filebeat, and on the second server (B) I installed Logstash, Elasticsearch, and Kibana. I'm facing a problem sending logs from server A to Logstash on server B.
My filebeat configuration is
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /home/vinit/demo/*.log
  fields:
    log_type: apache
  fields_under_root: true

#output.elasticsearch:
  #hosts: ["localhost:9200"]
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

output.logstash:
  hosts: ["XXX.XX.X.XXX:5044"]
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
  #ssl.certificate: "/etc/pki/client/cert.pem"
  #ssl.key: "/etc/pki/client/cert.key"
In logstash, I have enabled modules system, filebeat, and logstash.
Logstash configuration is
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "^%{IP:CLIENT_IP} (?:-|%{USER:IDEN}) (?:-|%{USER:AUTH}) \[%{HTTPDATE:CREATED_ON}\] \"(?:%{WORD:REQUEST_METHOD} (?:/|%{NOTSPACE:REQUEST})(?: HTT$
    add_field => {
      "LOG_TYPES" => "apache-log"
    }
    overwrite => [ "message" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "apache-info-log"
  }
  stdout { codec => rubydebug }
}
In Elasticsearch I did
network.host: localhost
The errors I'm getting are below:
|2019-01-18T15:05:47.738Z|INFO|crawler/crawler.go:72|Loading Inputs: 1|
|2019-01-18T15:05:47.739Z|INFO|log/input.go:138|Configured paths: [/home/vinit/demo/*.log]|
|2019-01-18T15:05:47.739Z|INFO|input/input.go:114|Starting input of type: log; ID: 10340820847180584185 |
|2019-01-18T15:05:47.740Z|INFO|log/input.go:138|Configured paths: [/var/log/logstash/logstash-plain*.log]|
|2019-01-18T15:05:47.740Z|INFO|log/input.go:138|Configured paths: [/var/log/logstash/logstash-slowlog-plain*.log]|
|2019-01-18T15:05:47.742Z|INFO|log/harvester.go:254|Harvester started for file: /home/vinit/demo/info-log.log|
|2019-01-18T15:05:47.749Z|INFO|log/input.go:138|Configured paths: [/var/log/auth.log* /var/log/secure*]|
|2019-01-18T15:05:47.763Z|INFO|log/input.go:138|Configured paths: [/var/log/messages* /var/log/syslog*]|
|2019-01-18T15:05:47.763Z|INFO|crawler/crawler.go:106|Loading and starting Inputs completed. Enabled inputs: 1|
|2019-01-18T15:05:47.763Z|INFO|cfgfile/reload.go:150|Config reloader started|
|2019-01-18T15:05:47.777Z|INFO|log/input.go:138|Configured paths: [/var/log/auth.log* /var/log/secure*]|
|2019-01-18T15:05:47.790Z|INFO|log/input.go:138|Configured paths: [/var/log/messages* /var/log/syslog*]|
|2019-01-18T15:05:47.790Z|INFO|input/input.go:114|Starting input of type: log; ID: 15514736912311113705 |
|2019-01-18T15:05:47.790Z|INFO|input/input.go:114|Starting input of type: log; ID: 4004097261679848995 |
|2019-01-18T15:05:47.791Z|INFO|log/input.go:138|Configured paths: [/var/log/logstash/logstash-plain*.log]|
|2019-01-18T15:05:47.791Z|INFO|log/input.go:138|Configured paths: [/var/log/logstash/logstash-slowlog-plain*.log]|
|2019-01-18T15:05:47.791Z|INFO|input/input.go:114|Starting input of type: log; ID: 2251543969305657601 |
|2019-01-18T15:05:47.791Z|INFO|input/input.go:114|Starting input of type: log; ID: 9013300092125558684 |
|2019-01-18T15:05:47.791Z|INFO|cfgfile/reload.go:205|Loading of config files completed.|
|2019-01-18T15:05:47.792Z|INFO|log/harvester.go:254|Harvester started for file: /var/log/secure-20181223|
|2019-01-18T15:05:47.794Z|INFO|log/harvester.go:254|Harvester started for file: /var/log/messages-20181223|
|2019-01-18T15:05:47.797Z|INFO|log/harvester.go:254|Harvester started for file: /var/log/secure-20181230|
|2019-01-18T15:05:47.800Z|INFO|log/harvester.go:254|Harvester started for file: /var/log/messages-20181230|
|2019-01-18T15:05:47.804Z|INFO|log/harvester.go:254|Harvester started for file: /var/log/secure-20190106|
|2019-01-18T15:05:47.804Z|INFO|log/harvester.go:254|Harvester started for file: /var/log/secure|
|2019-01-18T15:05:47.804Z|INFO|log/harvester.go:254|Harvester started for file: /var/log/secure-20190113|
|2019-01-18T15:05:47.816Z|INFO|log/harvester.go:254|Harvester started for file: /var/log/messages-20190106|
|2019-01-18T15:05:47.817Z|INFO|log/harvester.go:254|Harvester started for file: /var/log/messages|
|2019-01-18T15:05:47.818Z|INFO|log/harvester.go:254|Harvester started for file: /var/log/messages-20190113|
|2019-01-18T15:05:47.855Z|INFO|pipeline/output.go:95|Connecting to backoff(async(tcp://XXX.XX.X.XXX:5044))|
|2019-01-18T15:06:18.855Z|ERROR|pipeline/output.go:100|Failed to connect to backoff(async(tcp://XXX.XX.X.XXX:5044)): dial tcp XXX.XX.X.XXX:5044: i/o timeout|
|2019-01-18T15:06:18.855Z|INFO|pipeline/output.go:93|Attempting to reconnect to backoff(async(tcp://XXX.XX.X.XXX:5044)) with 1 reconnect attempt(s)|
Does anyone have any idea how to resolve this and make it work properly?
A related question is Failed to connect to backoff(async(tcp://ip:5044)): dial tcp ip:5044: i/o timeout.
The answer there proposed allowing outgoing TCP connections on port 5044 directly in your cloud provider settings page, since they may be blocked by default.
In addition to the existing comments by @Vinit Jordan, who whitelisted port 5044 on EC2 with these steps, I propose a possible solution for the general case.
Please check the default firewall on your Logstash server. You probably have the ufw simple firewall that was preconfigured during the initial Nginx setup. I ran into this problem right after installing ELK on machine B and Filebeat on machine A.
I just added a new rule to the firewall for the Filebeat server and the error disappeared:
sudo ufw allow from <IP_address_of_machine_A> to any port 5044
Then the Filebeat log on machine A showed me:
"message":"Connection to backoff(async(tcp://<IP_address_of_machine_B>:5044)) established"
It is probably also reasonable to add a more general rule for your trusted servers:
sudo ufw allow from <IP_ADDRESS>

How to use the index specified in Filebeat in logstash.yml?

I am using Filebeat to send log files over to my Logstash with the following configurations:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - ${PWD}/filebeat-volume/data/*.txt

output.logstash:
  enabled: true
  hosts: ["elk:5044"]
  index: "custom-index"

setup.kibana:
  host: "localhost:5601"
and
input {
  beats {
    port => "5044"
  }
}
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "<WHAT SHOULD GO HERE???>"
  }
}
In filebeat.yml, I am specifying an index ("custom-index"). How can I set the same index in my logstash.yml to be sent to Elasticsearch?
I see what you want now. You should set up Logstash with the output configuration below; this way it will pass the index set in Filebeat on to Elasticsearch.
output {
  elasticsearch {
    hosts => [ "localhost:9200" ]
    index => "%{[@metadata][beat]}"
  }
}
Point 2 in this example
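As a side note, a commonly used variant of that index setting (not required by the question) also appends the Beat version and the date, which keeps daily indices separated:
index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
This is the same naming scheme that the stock logstash-sample.conf uses.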

How to remove filebeat tags like id, hostname, version, grok_failure message

I am new to ELK. My sample log looks like this:
2017-01-05T14:28:00 INFO zeppelin IDExtractionService transactionId abcdef1234 operation extractOCRData received request duration 12344 exception error occured
my filebeat configuration is below
filebeat.prospectors:
- input_type: log
  paths:
    - /opt/apache-tomcat-7.0.82/logs/*.log
  document_type: apache-access
  fields_under_root: true

output.logstash:
  hosts: ["10.2.3.4:5044"]
And my logstash filter.conf file:
filter {
  grok {
    match => [ "message", "transactionId %{WORD:transaction_id} operation %{WORD:otype} received request duration %{NUMBER:duration} exception %{WORD:error}" ]
  }
}
filter {
  if "beats_input_codec_plain_applied" in [tags] {
    mutate {
      remove_tag => ["beats_input_codec_plain_applied"]
    }
  }
}
In the Kibana dashboard I can see the log output as below:
beat.name: ebb8a5ec413b
beat.hostname: ebb8a5ec413b
host: ebb8a5ec413b
tags:
beat.version: 6.2.2
source: /opt/apache-tomcat-7.0.82/logs/IDExtraction.log
otype: extractOCRData
duration: 12344
transaction_id: abcdef1234
@timestamp: April 9th 2018, 16:20:31.853
offset: 805,655
@version: 1
error: error
message: 2017-01-05T14:28:00 INFO zeppelin IDExtractionService transactionId abcdef1234 operation extractOCRData received request duration 12344 exception error occured
_id: 7X0HqmIBj3MEd9pqhTu9
_type: doc
_index: filebeat-2018.04.09
_score: 6.315
1. First question: how do I remove Filebeat fields like id, hostname, version, and the grok_failure message?
2. How do I sort logs on a timestamp basis? Newly generated logs are not appearing at the top of the Kibana dashboard.
3. Are any changes required in my grok filter?
You can remove the Filebeat tags by setting fields_under_root: false in the Filebeat configuration file. You can read about this option here.
If this option is set to true, the custom fields are stored as
top-level fields in the output document instead of being grouped under
a fields sub-dictionary. If the custom field names conflict with other
field names added by Filebeat, the custom fields overwrite the other
fields.
You can check whether _grokparsefailure is in the tags using if "_grokparsefailure" in [tags] and remove it with remove_tag => ["_grokparsefailure"], as in the sketch below.
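A minimal sketch of that filter (assuming the stock _grokparsefailure tag name):
filter {
  if "_grokparsefailure" in [tags] {
    mutate {
      remove_tag => ["_grokparsefailure"]
    }
  }
}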
Your grok filter seems to be alright.
Hope it helps.

Logstash doesn't read from configured input file

I am trying to configure Logstash to read from a specified log file. When I configure it to read from stdin it works as expected: my input results in a message from Logstash and is displayed in my Kibana UI.
$ cat /tmp/logstash-stdin.conf
input {
  stdin {}
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
$./logstash -f /tmp/logstash-stdin.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
The stdin plugin is now waiting for input:
hellloooo
{
    "@version" => "1",
    "host" => "myhost.com",
    "@timestamp" => 2017-11-17T16:05:41.595Z,
    "message" => "hellloooo"
}
However, when I run Logstash with a file input I get no indication that the file is loaded into Logstash, and it does not show in Kibana.
$ cat /tmp/logstash-simple.conf
input {
  file {
    path => "/tmp/test_log.txt"
    type => "syslog"
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}
$ ./logstash -f /tmp/logstash-simple.conf
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Any suggestions of how I can troubleshoot why my Logstash is not ingesting the configured file?
By default the file input plugin starts reading at the end of the file, so only lines added after Logstash starts will be processed. To read all existing lines on startup, add the option start_position => "beginning" to the configuration, as explained in the documentation.
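A minimal sketch of the adjusted input, using the same test file as above; sincedb_path => "/dev/null" is an optional extra that makes Logstash forget its recorded read position between runs, which is handy while testing:
input {
  file {
    path => "/tmp/test_log.txt"
    type => "syslog"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}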
