StackOverflow community!
I am trying to collect some system logs using Filebeat and then process them further with Logstash before viewing them in Kibana.
Now, as I have different log locations, I am trying to add a specific identification field for each of them within filebeat.yml.
- type: log
  enabled: true
  paths:
    - C:\Users\theod\Desktop\Logs\Test2\*
  processors:
    - add_fields:
        target: ''
        fields:
          name: "drs"
- type: log
  enabled: true
  paths:
    - C:\Users\theod\Desktop\Logs\Test\*
  processors:
    - add_fields:
        target: ''
        fields:
          name: "pos"
Depending on that, I am trying to apply some Grok filters in the Logstash conf file:
input {
  beats {
    port => 5044
  }
}
filter {
  if "pos" in [fields][name] {
    grok {
      match => { "message" => "\[%{LOGLEVEL:LogLevel}(?: ?)] %{TIMESTAMP_ISO8601:TimeStamp} \[%{GREEDYDATA:IP_Address}] \[%{GREEDYDATA:Username}] %{GREEDYDATA:Operation}] \[%{GREEDYDATA:API_RequestLink}] \[%{GREEDYDATA:Account_name_and_Code}] \[%{GREEDYDATA:Additional_Info1}] \[%{GREEDYDATA:Additional_Info2}] \[%{GREEDYDATA:Store}] \[%{GREEDYDATA:Additional_Info3}](?: ?)%{GREEDYDATA:Error}" }
    }
  }
  if "drs" in [fields][name] {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:TimeStamp} \[%{DATA:Thread}] %{LOGLEVEL:LogLevel} (?: ?)%{INT:Sequence} %{DATA:Request_Header}] %{GREEDYDATA:Request}" }
    }
  }
}
output {
  if "pos" in [fields][name] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "[fields][name]logs-%{+YYYY.MM.dd}"
    }
  } else if "drs" in [fields][name] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "[fields][name]logs-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "logs-%{+YYYY.MM.dd}"
    }
  }
}
Now, every time I run this, the conditionals in the Logstash conf are ignored. Checking the Filebeat logs, I noticed that no fields are sent to Logstash.
Can someone offer some guidance and perhaps point out what I am doing wrong?
Thank you!
Your Filebeat config is not adding the field [fields][name]; it is adding the field [name] at the top level of your document, because of your target configuration:
processors:
  - add_fields:
      target: ''
      fields:
        name: "pos"
All your conditionals test the field [fields][name], which does not exist.
Change your conditionals to test the [name] field instead:
if "pos" in [name] {
... your filters ...
}
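Alternatively, if you would rather keep the [fields][name] conditionals in Logstash, you can change the Filebeat processor so the field really is nested under fields. As a minimal sketch, only the target value differs from your original config (add_fields nests its fields under whatever target you give it):
processors:
  - add_fields:
      target: 'fields'
      fields:
        name: "pos"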
Related
I'm trying to collect logs using Logstash where I have different kinds of logs in the same file.
I want to extract certain fields if they exist in the log, and otherwise do something else.
input {
  file {
    path => ["/home/ubuntu/XXX/XXX/results/**/log_file.txt"]
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => ["%{WORD:logger} %{SPACE}\-%{SPACE} %{LOGLEVEL:level} %{SPACE}\-%{SPACE} %{DATA:message} %{NUMBER:score:float}",
                             "%{WORD:logger} %{SPACE}\-%{SPACE} %{LOGLEVEL:level} %{SPACE}\-%{SPACE} %{DATA:message}"] }
  }
}
output {
  elasticsearch {
    hosts => ["X.X.X.X:9200"]
  }
  stdout { codec => rubydebug }
}
For example, log type 1 is:
root - INFO - Best score yet: 35.732
and type 2 is:
root - INFO - Starting an experiment
One of the problems I face is that when a message doesn't contain a number, the field still exists as null in the resulting JSON, which prevents me from using the desired functionality in Kibana.
One option is to add a tag in Logstash when the field is not defined, so that you have an easy way to filter on the Kibana side. The null value is only set during insertion into Elasticsearch (on the Logstash side, the field is simply not defined).
In your case, this solution looks like this:
filter {
  grok {
    match => { "message" => ["%{WORD:logger} %{SPACE}\-%{SPACE} %{LOGLEVEL:level} %{SPACE}\-%{SPACE} %{DATA:message} %{NUMBER:score:float}",
                             "%{WORD:logger} %{SPACE}\-%{SPACE} %{LOGLEVEL:level} %{SPACE}\-%{SPACE} %{DATA:message}"] }
  }
  if ![score] {
    mutate { add_tag => [ "score_not_set" ] }
  }
}
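On the Kibana side you can then narrow things down with a query such as tags: "score_not_set" (or exclude those events with NOT tags: "score_not_set") to separate events that have a score from those that do not.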
Currently my Logstash input is listening to Filebeat on port XXXX. My requirement is to collect log data only from particular hosts (let's say only from web servers). I don't want to modify the Filebeat configuration directly on the servers, but I want to allow only the web servers' logs in.
Could anyone suggest how to configure Logstash in this scenario? The following is my Logstash input configuration.
input {
  beats {
    port => 50XX
  }
}
In a word, "no", you cannot configure the input to restrict which hosts it will accept input from. What you can do is drop events from hosts you are not interested in. If the set of hosts you want to accept input from is small, then you could do this using a conditional:
if [beat][hostname] not in [ "hosta", "hostb", "hostc" ] { drop {} }
Similarly, if your hostnames follow a fixed pattern, you might be able to do it using a regexp:
if [beat][hostname] !~ /web\d+$/ { drop {} }
would drop events from any host whose name did not end in web followed by a number.
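For example, with that regexp a host named prodweb01 or web42 would be kept, while dbserver1 (or webserver1, which does not end in web plus digits) would be dropped.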
If you have a large set of hosts, you could use a translate filter to determine if they are in the set. For example, if you create a CSV file with a list of hosts
hosta,1
hostb,1
hostc,1
then do a lookup using
translate {
  field => "[beat][hostname]"
  dictionary_path => "/some/path/foo.csv"
  destination => "[@metadata][field]"
  fallback => "dropMe"
}
if [@metadata][field] == "dropMe" { drop {} }
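A nice side effect of writing the lookup result into [@metadata] is that metadata fields are never sent to the outputs, so the marker field will not show up in the documents that reach Elasticsearch.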
@Badger - Thank you for your response!
As you rightly mentioned, I have a large number of hosts, and all my web servers follow a naming convention (for example xxxwebxxx). Could you please explain the following:
translate {
  field => "[beat][hostname]"
  dictionary_path => "/some/path/foo.csv"
  destination => "[@metadata][field]"
  fallback => "dropMe"
}
if [@metadata][field] == "dropMe" { drop {} }
Also, please suggest how to add the above to my logstash.conf. Please find below how my logstash.conf currently looks:
input {
  beats {
    port => 5xxxx
  }
}
filter {
  if [type] == "XXX" {
    grok {
      match => [ "message", '"%{TIMESTAMP_ISO8601:logdate}"\t%{GREEDYDATA}' ]
    }
    grok {
      match => [ "message", 'AUTHENTICATION-(?<xxx_status_code>[0-9]{3})' ]
    }
    grok {
      match => [ "message", 'id=(?<user_id>%{DATA}),' ]
    }
    if ([user_id] =~ "_agent") {
      drop {}
    }
    grok {
      match => [ "message", '%{IP:clientip}' ]
    }
    date {
      match => [ "logdate", "ISO8601", "YYYY-MM-dd HH:mm:ss" ]
      locale => "en"
    }
    geoip {
      source => "clientip"
    }
  }
}
output {
  elasticsearch {
    hosts => ["hostname:port"]
  }
  stdout { }
}
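Since your hosts follow a naming convention like xxxwebxxx, the regexp approach from the answer is probably the simplest fit. As a sketch, assuming Filebeat 6.x (where the hostname arrives as [beat][hostname]) and that only your web servers contain "web" in their names, you could put the drop at the very top of your existing filter block, before the type check:
filter {
  if [beat][hostname] !~ /web/ { drop {} }
  if [type] == "XXX" {
    # ... the rest of your existing filters ...
  }
}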
I have Filebeat configured to watch several different logs on a single host, e.g. Nginx and my app server. However, as I understand it, you cannot have multiple outputs in any one Beat -- so my filebeat.yml has a single output.logstash directive which points to my Logstash server.
Does Logstash have the concept of pipeline routing? I have several pipelines configured on my Logstash server but it's unclear how to leverage that from Filebeat, e.g. I would like to send Nginx logs to my Logstash pipeline for Nginx, etc.
Alternatively, is there a way to route the beats for Nginx to logstash:5044, and the beats for my app server to logstash:5045?
For each of the Filebeat prospectors you can use the fields option to add a field that Logstash can check to identify what type of data the prospector is collecting. Then, in Logstash, you can use pipeline-to-pipeline communication with the distributor pattern to send different types of data to different pipelines.
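As a minimal sketch of that distributor pattern in pipelines.yml, assuming the Filebeat inputs add a field called log_type with values nginx and app via the fields option (the pipeline ids and the log_type field name are placeholders, not fixed names):
- pipeline.id: beats-server
  config.string: |
    input { beats { port => 5044 } }
    output {
      if [fields][log_type] == "nginx" {
        pipeline { send_to => ["nginx"] }
      } else {
        pipeline { send_to => ["app"] }
      }
    }
- pipeline.id: nginx
  config.string: |
    input { pipeline { address => "nginx" } }
    # nginx-specific filters and outputs go here
- pipeline.id: app
  config.string: |
    input { pipeline { address => "app" } }
    # app-server-specific filters and outputs go here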
You can use tags on your Filebeat inputs and filter in your Logstash pipeline using those tags.
For example, add the tag nginx to your nginx input in Filebeat and the tag app-server to your app server input in Filebeat, then use those tags in the Logstash pipeline to apply different filters and outputs; it will be the same pipeline, but it will route the events based on the tag.
If you want to send the different logs to different ports, you will need to run another instance of Filebeat.
You can use the tag concept for multiple log files:
filebeat.yml
filebeat.inputs:
- type: log
  tags: [apache]
  paths:
    - "/home/ubuntu/data/apache.log"
- type: log
  tags: [gunicorn]
  paths:
    - "/home/ubuntu/data/gunicorn.log"
queue.mem:
  events: 4096
  flush.min_events: 512
  flush.timeout: 5s
output.logstash:
  hosts: ["****************:5047"]
conf.d/logstash.conf
input {
  beats {
    port => "5047"
    host => "0.0.0.0"
  }
}
filter {
  if "gunicorn" in [tags] {
    grok {
      match => {
        "message" => "%{USERNAME:u1} %{USERNAME:u2} \[%{HTTPDATE:http_date}\] \"%{DATA:http_verb} %{URIPATHPARAM:api} %{DATA:http_version}\" %{NUMBER:status_code} %{NUMBER:byte} \"%{DATA:external_api}\" \"%{GREEDYDATA:android_client}\""
      }
      remove_field => ["message"]
    }
    date {
      match => ["http_date", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
    mutate {
      remove_field => ["agent"]
    }
  }
  else if "apache" in [tags] {
    grok {
      match => {
        "message" => "%{IPORHOST:client_ip} %{DATA:u1} %{DATA:u2} \[%{HTTPDATE:http_date}\] \"%{WORD:http_method} %{URIPATHPARAM:api} %{DATA:http_version}\" %{NUMBER:status_code} %{NUMBER:byte} \"%{DATA:external_api}\" \"%{GREEDYDATA:gd}\" \"%{DATA:u3}\""
      }
      remove_field => ["message"]
    }
    date {
      match => ["http_date", "dd/MMM/yyyy:HH:mm:ss Z"]
    }
    mutate {
      remove_field => ["agent"]
    }
  }
}
output {
  if "gunicorn" in [tags] {
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["0.0.0.0:9200"]
      index => "gunicorn-sample-%{+YYYY.MM.dd}"
    }
  }
  else if "apache" in [tags] {
    stdout { codec => rubydebug }
    elasticsearch {
      hosts => ["0.0.0.0:9200"]
      index => "apache-sample-%{+YYYY.MM.dd}"
    }
  }
}
I have a problem sending data from a *.log file to Logstash. This is the Filebeat configuration:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /home/centos/logs/*.log
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.logstash:
  hosts: "10.206.81.234:5044"
This is the Logstash configuration:
path.data: /var/lib/logstash
path.config: /etc/logstash/conf.d/*.conf
path.logs: /var/log/logstash
xpack.monitoring.elasticsearch.url: ["10.206.81.236:9200", "10.206.81.242:9200", "10.206.81.243:9200"]
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: logstash
queue.type: persisted
queue.checkpoint.writes: 10
And this is my pipeline in /etc/logstash/conf.d/test.conf
input {
  beats {
    port => "5044"
  }
  file {
    path => "/home/centos/logs/mylogs.log"
    tags => "mylog"
  }
  file {
    path => "/home/centos/logs/syslog.log"
    tags => "syslog"
  }
}
filter {
}
output {
  if [tag] == "mylog" {
    elasticsearch {
      hosts => [ "10.206.81.246:9200", "10.206.81.236:9200", "10.206.81.243:9200" ]
      user => "Test"
      password => "123456"
      index => "mylog-%{+YYYY.MM.dd}"
    }
  }
  if [tag] == "syslog" {
    elasticsearch {
      hosts => [ "10.206.81.246:9200", "10.206.81.236:9200", "10.206.81.243:9200" ]
      user => "Test"
      password => "123456"
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
}
I tried to have two separate outputs for mylog and syslog. At first it worked like this: everything was passed to the mylog-%{+YYYY.MM.dd} index, even events from syslog. So I tried changing the second if statement to else if. It did not work, so I changed it back. Now my Filebeat is not able to send data to Logstash and I am receiving these errors:
2018/01/20 15:02:10.959887 async.go:235: ERR Failed to publish events caused by: EOF
2018/01/20 15:02:10.964361 async.go:235: ERR Failed to publish events caused by: client is not connected
2018/01/20 15:02:11.964028 output.go:92: ERR Failed to publish events: client is not connected
My second test was to change my pipeline like this:
input {
  beats {
    port => "5044"
  }
  file {
    path => "/home/centos/logs/mylogs.log"
  }
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
output {
  elasticsearch {
    hosts => [ "10.206.81.246:9200", "10.206.81.236:9200", "10.206.81.243:9200" ]
    user => "Test"
    password => "123456"
    index => "mylog-%{+YYYY.MM.dd}"
  }
}
If I add some lines to the mylogs.log file, Filebeat prints the same ERR lines, but the data is passed to Logstash and I can see it in Kibana. Could anybody explain to me why it does not work? What do those errors mean?
I am using Filebeat and Logstash version 6.1.
Sorry if I made any mistakes in English.
In the output section you are using "tag" (note: it is singular), which doesn't exist. But changing this to "tags" alone will not work either, because the tags field is an array and you would be comparing it to a string; you should get the first item instead of the whole array and then compare. Try this:
if [tags][0] == "mylog" { ......
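A more idiomatic alternative, used elsewhere on this page, is to test membership instead of position, which also keeps working if an event ever carries more than one tag:
if "mylog" in [tags] { ......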
I have a log file with Apache logs that I want to show in Kibana.
The logs start with an IP address. I have debugged my pattern and it passes.
I'm trying to add fields in the Beats input configuration file, but they are not shown in Kibana, even after a refresh of the fields.
Here is the configuration file
filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{HOST:log_host}%{GREEDYDATA:remaining}" }
      add_field => { "testip" => "%{log_host}" }
      add_field => { "data_left" => "%{remaining}" }
    }
  }
...
Just to add that I have restarted all the services (Logstash, Elasticsearch, Kibana) after the new configuration.
The issue could be that your grok pattern is too rigid.
Chances are that HOST should be IPORHOST, based on your testip field's name.
Assuming that the data is actually coming in with the type defined as apache, then it should be:
filter {
  if [type] == "apache" {
    grok {
      match => {
        "message" => "%{IPORHOST:log_host}%{GREEDYDATA:remaining}"
      }
      add_field => {
        "testip" => "%{log_host}"
        "data_left" => "%{remaining}"
      }
    }
  }
}
Having said that, your usage of add_field is completely unnecessary. The grok pattern itself is creating two fields: log_host and remaining, so there's no need to define extra fields called testip and data_left.
Perhaps even more usefully, you don't need to craft your own Apache web log grok pattern. The COMBINEDAPACHELOG pattern already exists, which gives all of the standard fields automatically.
filter {
  if [type] == "apache" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    # Set @timestamp to the log's time and drop the unneeded timestamp
    date {
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
      remove_field => "timestamp"
    }
  }
}
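For example, a combined-format line like this made-up sample:
127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "http://www.example.com/start.html" "Mozilla/4.08"
comes out with fields such as clientip, verb, request, response, bytes, referrer, and agent, and the date filter above then moves the bracketed timestamp into @timestamp.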
You can see this in a more complete example in the Logstash documentation here.